ICLR
Title: Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization

Abstract: As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal format typically requires numerous training runs, which is a very time-consuming process. In this paper, we propose a method to efficiently find an optimal format for activations and errors without actual training. We employ this method to determine an 8-bit format suitable for training various models. In addition, we propose hysteresis quantization to suppress undesired fluctuation in quantized weights during training. This scheme enables deeply quantized training using 4-bit weights, exhibiting only 0.2% degradation for ResNet-18 trained on ImageNet.

1 INTRODUCTION

Deep neural networks have been used in various fields such as vision, audio, natural language processing, and reinforcement learning. As larger and more complex neural networks are adopted, the energy and time consumed for training have become a critical issue in hardware implementation. Using low-bit representations in training significantly reduces hardware overhead and memory footprint; hence, neural network training with limited precision has been studied extensively in recent years. For instance, 16-bit formats are already adopted in commercial devices, such as FP16 (IEEE, 2019) in Nvidia GPUs and bfloat16 (Kalamkar et al., 2019) in Google TPU (Wang et al., 2019). Also, Köster et al. (2017) suggested a new data format using a shared exponent suitable for low-precision training. Recently, it has been demonstrated that even 8-bit formats can be adopted in deep neural network training with reasonable accuracy (Sun et al., 2019; Fox et al., 2020). However, there are various issues in realizing low-precision training in practical applications, as detailed below.

Optimal data format for low-precision training: Training performance is highly sensitive to the data format used to represent variables in the network. When a value is represented using a floating-point format with a fixed number of bits, there is a trade-off between dynamic range and precision. For instance, allocating more bits to the exponent part of a floating-point format enlarges the dynamic range but lowers precision due to fewer bits in the mantissa part. Recent studies on 8-bit training suggest various ways to reduce the dynamic range required for number representation in order to enhance representation precision. Early work on 8-bit training (Wang et al., 2018) adopts a 5-bit exponent to represent different variables using a single format, but Sun et al. (2019) examine the statistics of each variable and optimize the numeric formats separately. Specifically, the values used in the forward path (weight and activation) have a relatively narrow dynamic range, and only 4 bits are allocated to the exponent. Fox et al. (2020) propose to divide data into smaller blocks and assign a shared exponent bias to each block. Since the values in a block tend to exhibit similar statistics, the forward (weight and activation) and backward (error) paths can be represented using only 2-bit and 4-bit exponents, respectively. Note that the shared exponent bias is effectively identical to a scaling factor.
If a variable has a value of m · 2^e and a shared exponent bias of b, then its actual value is m · 2^(e+b), which is identical to applying a scaling factor of 2^b. However, these approaches are difficult to generalize, since the numeric formats must be decided empirically for each task, neural network structure, and quantization scheme (Fig. 1). Furthermore, analyzing the statistics of each variable is not enough to determine an optimal format. The distributions often have a long tail, and hence the dynamic range of the numeric format must be selected experimentally through many trial-and-errors in actual training.

Performance degradation in from-scratch training: Previous studies on quantized models show that a model can achieve accuracy comparable to full-precision models even with 1- or 2-bit weights (Choi et al., 2019; Martinez et al., 2020) by fine-tuning a pre-trained model. However, in low-precision training, where a neural network is trained from scratch using low-precision values and computations, the trained model typically shows a noticeable accuracy drop (Elhoushi et al., 2021). Fig. 1(b) shows the Top-1 validation accuracy of ResNet-18 (He et al., 2016) trained on ImageNet (Deng et al., 2009) for different training schemes. The weights are quantized into a 4-bit base-2 logarithmic format. From-scratch training of the model with quantized weights results in a 2.1% accuracy drop, whereas only 1.0% degradation is observed if we fine-tune a pre-trained model. This suggests that even though a better solution (i.e., a set of parameters) exists for a given format, it cannot be reached through from-scratch training.

To formalize the issues above, we divide quantization in low-precision training into two types: network quantization and data flow quantization. Network quantization refers to quantization of the neural network model itself; weight quantization is an example. In network quantization, we need to reduce the performance gap between from-scratch training and fine-tuning (Yang et al., 2019b). On the other hand, data flow quantization refers to the on-the-fly quantization that occurs when data propagate through the network in low-precision training. Examples include activation, error, and weight gradient quantization. This type of quantization introduces additional errors into the weight update computation, which leads to performance degradation. Hence, we need to find an optimal format that minimizes the accuracy drop due to computation errors in data flow quantization.

In this paper, we present a systematic approach to implementing low-precision training on various models and tasks. First, we present a method to efficiently find an optimal format for data flow quantization. In addition, we introduce hysteresis quantization, a new quantization technique for network quantization that mitigates the issues of from-scratch training. Our main contributions are:

• We present a method that can predict the training performance of various numeric formats for data flow quantization. This method allows us to determine an appropriate data format for different neural network structures, datasets, and tasks efficiently.

• Using the method above, we propose an optimal 8-bit format suitable for low-precision training of various models, which enables quantization of the BatchNorm layer input and improves hardware efficiency with minimal performance degradation.
• We propose a new quantization scheme that utilizes the hysteresis effect to improve the performance of from-scratch training in network quantization. This scheme enables ultra-low-precision training using 4-bit logarithmic weights.

2 DATA FLOW QUANTIZATION

2.1 NUMERIC FORMATS

Many numeric formats can be constructed with n bits, depending on how much dynamic range is required and how many valid bits are used to represent a value. For example, using 8 bits we could implement an 8-bit fixed-point integer format; 8-bit floating-point formats such as FP152, FP143, and FP125 (FP1xy denotes 1 sign bit, x exponent bits, and y mantissa bits); the 8-bit posit format (Gustafson & Yonemoto, 2017); and the 8-bit float-fix format (Han et al., 2019). Since the diversity of formats that could be formulated using n bits is nearly unlimited, we assume the following constraints to limit the candidates while still including widely used formats such as fixed-point and floating-point:

• The MSB (most significant bit) is used as a sign bit, and the other bits represent magnitude. Accordingly, only symmetric formats that have identical representable ranges for positive and negative numbers are considered. Two's complement representation is slightly asymmetric since it can represent one more negative value, but this does not incur a significant difference.

• The number of valid bits of a larger value is greater than or equal to the number of valid bits of a smaller value. Valid bits stand for significant digits in binary representation.

• The ratio between consecutive representable values does not exceed 2. For example, the base-4 logarithmic format is excluded.

We obtain 166 8-bit formats that meet these constraints. Then, we remove 1 or 2 valid bits from each format to obtain 7- and 6-bit formats, resulting in 498 formats in total. More information on the numeric formats considered in our experiments is provided in Appendix A.1.

2.2 ACTIVATION AND ERROR QUANTIZATION

In a neural network consisting of n layers, the training process is described by

$$A_{l+1} = f_l(W_l^t, A_l) \quad (1)$$
$$E_l = g_l(W_l^t, E_{l+1}) \quad (2)$$
$$G_{w_l} = h_l(A_l, E_{l+1}) \quad (3)$$
$$W_l^{t+1} = o(G_{w_l}, W_l^t) \quad (4)$$

where A, E, W, and G_w are the activation, error, weight, and weight gradient, respectively; f, g, h, and o are the forward, backward, gradient, and update functions; and l and t denote the layer index and time step. We follow the quantized training scheme suggested by Fox et al. (2020), but with the following modifications to reduce hardware implementation costs. A and E are quantized not only at the GEMM input but also at the BatchNorm layer input. A BatchNorm layer normalizes its input using the mean and variance of each channel, but these values are obtained only after observing all the inputs from the previous layer, necessitating that all input values be temporarily stored in memory. Therefore, quantizing the BatchNorm layer's input significantly reduces the memory footprint and memory access overhead. Additionally, the scope of exponent bias sharing is extended to a whole layer (A_l and E_l) to avoid the overhead of aligning partial sums from different blocks in block-wise exponent sharing. Finally, instead of determining the shared exponent bias by analyzing all values in the layer, we conservatively update it by detecting overflow and underutilization in the previous mini-batch, as sketched below.
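A minimal PyTorch sketch of this conservative update rule follows; the function name, the `max_mag` parameter, and the single-step increment/decrement policy are our own assumptions, since the text only specifies that overflow and underutilization observed in the previous mini-batch trigger the adjustment:

```python
import torch

def update_shared_exponent_bias(x: torch.Tensor, bias: int,
                                max_mag: float = 1.0) -> int:
    """Conservatively adjust a layer-wise shared exponent bias.

    max_mag is the largest magnitude representable by the format when
    bias = 0, so the effective range of the layer is max_mag * 2**bias.
    """
    peak = x.abs().max().item()
    limit = max_mag * 2.0 ** bias
    if peak >= limit:        # overflow observed: enlarge the range
        bias += 1
    elif peak < limit / 2:   # top binade unused: shrink the range
        bias -= 1
    return bias
```

Because the bias moves by at most one step per mini-batch, a single outlier batch cannot swing the representable range abruptly.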
2.3 INDICATORS OF TRAINING PERFORMANCE

Effect of quantized error: Quantizing the error E in the backward path is independent of how the forward path behaves, since the loss surface of the model does not change. Therefore, the optimal W that the network needs to reach through training remains the same regardless of the error quantization scheme. However, when the error is quantized, a quantization error ΔE is introduced in E, which incurs a noise N_ΔE in G_w through the gradient function in Eq. 3 and potentially updates each weight in the wrong direction. While some amount of noise may improve training performance through regularization, using low-precision formats already introduces large noise into the network, incurring performance degradation (see Appendix A.8). Therefore, we suggest that the weight gradient error N_ΔE could be a good indicator of degradation in training performance. One way to implement this is to predict performance using the magnitude of N_ΔE; however, if the noise is in the same direction as G_w, it would only change the amount of each update and have a less severe effect. Instead, we measure the misalignment between G_w + N_ΔE and G_w for performance prediction. The misalignment between two vectors is estimated by

$$\angle(A, B) = \cos^{-1}\left(\frac{A \cdot B}{\|A\|_2 \,\|B\|_2}\right) \quad (5)$$

The change in the update direction due to N_ΔE is then ∠(G_w, G_w + N_ΔE). We can expect that the smaller ∠(G_w, G_w + N_ΔE) is, the better the training performance.

Effect of quantized activation: Contrary to error quantization, activation quantization affects the way the forward path operates, and the loss surface of the model changes. Hence, the global optima of the weight parameters shift, where the amount of shift is proportional to the quantization noise. The displacement of the global optima can be indirectly estimated using the direction of the weight gradients G_w. If the angle ∠(G_w, G_w + N_ΔA) is small, the deviation of the global optima is expected to be small as well, suggesting better training performance.

In the discussions above, we assumed that the angles ∠(G_w, G_w + N_ΔE) and ∠(G_w, G_w + N_ΔA) can be used to predict training performance. We verify this experimentally by comparing the training performance of different numeric formats. For the 498 numeric formats in 6 to 8 bits, we compare the loss obtained from training with the proposed performance indicators (Fig. 2). Training loss is obtained by training ResNet-18 on the CIFAR-10 dataset using SGD with a momentum of 0.9 for 60 epochs. The batch size is 128 images and the initial learning rate is 0.1, decayed by a cosine scheduler. We average the angles over 100 mini-batches after quantizing a pre-trained model. Note that we use G_w of the first layer, since it reflects the quantization errors that occur in the activations and errors of all layers in the network. The weight gradients from the full-precision network, the network with quantized activations, and the network with quantized errors are G_w, G_w + N_ΔA, and G_w + N_ΔE, respectively. Fig. 2 shows that using the misalignment angle results in not only a higher Spearman's correlation but also a more distinct shape for low training losses, making it a better metric than the error magnitude. For instance, using the error magnitude would predict the best format for the transformer incorrectly (see Fig. 8(e) in Appendix A.3).
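A minimal PyTorch sketch of the misalignment indicator of Eq. 5 is given below; the function name and flattening convention are our choices, and in the paper's protocol the angle would be computed on the first layer's weight gradient and averaged over 100 mini-batches:

```python
import torch

def misalignment_angle(g_ref: torch.Tensor, g_noisy: torch.Tensor) -> float:
    """Angle in radians between the full-precision weight gradient G_w
    and its counterpart computed with quantized activations or errors,
    i.e. angle(G_w, G_w + N) from Eq. 5."""
    a, b = g_ref.flatten(), g_noisy.flatten()
    cos = torch.dot(a, b) / (a.norm() * b.norm())
    return torch.acos(cos.clamp(-1.0, 1.0)).item()
```

Clamping the cosine guards against values marginally outside [-1, 1] caused by floating-point round-off.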
While obtaining the misalignment angle requires additional computation, its overhead is negligible: the part that requires the most time and computation is obtaining G_w, G_w + N_ΔE, and G_w + N_ΔA, which is still far cheaper than actual training. Using this method, we can determine the optimal format for a specific neural network model, dataset, and task very efficiently, as we only need to measure the misalignment angle without time-consuming network training. For the experiments in Fig. 2, the amount of computation is reduced by 99.6%, and the reduction is even larger for larger datasets and complex networks that need more epochs of training.

2.4 OPTIMAL FORMAT FOR DATA FLOW QUANTIZATION

Here we show that we can find an optimal format for training with quantized errors and activations using the proposed performance estimation method. To find a format suitable for a wide range of models, we select six models with different architectures, layer types, and target tasks that are widely used in quantized training research: ResNet-18, ResNet-101, MobileNetV2 (Sandler et al., 2018), a 2-layer LSTM, a small transformer for translation on the IWSLT German-to-English dataset (Cettolo et al., 2014), and SSD-Lite (Liu et al., 2016) with MobileNetV2. We first measure misalignment angles for the 166 8-bit formats. To verify the correlation between training performance and the misalignment angles, we select four formats that exhibit low hardware implementation costs (INT8, FP152, FP143, and FP134) and train the networks using each format. While we could use different formats for activation and error, this requires a complicated datapath (Sun et al., 2019), and hence we only consider a single format for both variables. The experimental results in Fig. 3 demonstrate that the training performance is higher when both misalignment angles are small, across all tasks and models, confirming that the proposed indicators can be used to determine the optimal numeric format. Fig. 3 suggests that FP134 and FP143 are the best candidates across all models. For hardware implementation, FP134 is the best choice due to its low implementation cost, which is discussed in detail in Appendix A.7. Note that using the error magnitude leads to the same conclusion that FP134 is the best format for the target models; see Appendix A.3 for details.

3 NETWORK QUANTIZATION

In quantized neural networks, the weight parameters are generally quantized in a way that minimizes the quantization error (Choi et al., 2019; Martinez et al., 2020). For instance, if x is quantized into a fixed-point format through s × round(x/s), a proper value is selected for the scaling factor s to minimize the quantization error. However, as the weights keep changing during training, s would have to be recomputed at every update, which could cause significant overhead. Therefore, prior studies on low-precision training suggest constraining the scaling factor to a power of 2 in the shared exponent (Köster et al., 2017) or the shared exponent bias (Fox et al., 2020). In this section, we analyze the issues behind weight quantization and propose a new quantization scheme to mitigate them.

3.1 FLUCTUATION OF WEIGHT PARAMETERS

In typical low-precision training, a master copy of the weight parameters is maintained separately in high precision, and those weights are updated based on the computed weight gradient.
This high-precision weight is quantized into a low-precision format and used for the forward path computation during training. If the scaling factor s is constrained to 2^n, the quantization threshold remains the same unless s is updated due to overflow or underutilization. If the optimal weight is located between two representable values of a data format, the quantized weight fluctuates alternately between the two values at each update (Fig. 4(a)), even for a very small weight update, causing large fluctuations and undermining training performance.

3.2 HYSTERESIS QUANTIZATION

To mitigate the fluctuation issue above, we propose to introduce the concept of hysteresis to quantization. More specifically, we quantize each weight in a way that makes the quantized value tend to stay at its current value, effectively minimizing undesired oscillation between two values due to small weight updates. The equation below shows an example of the proposed quantization scheme (see also the code sketch below):

$$Q_w^t = \begin{cases} \lfloor w^t \rfloor, & \text{if } w^t > Q_w^{t-1} \\ \lceil w^t \rceil, & \text{if } w^t < Q_w^{t-1} \end{cases} \quad (6)$$

where w is the original value, Q_w is its quantized value, and t is the time step. The proposed hysteresis quantization reduces fluctuation significantly, stabilizing the training process and allowing the network to reach global optima more efficiently. In Fig. 4(b), if the weight change ΔW is small, then enough of those changes must accumulate to flip Q_w. Hence, the update frequency is now proportional to the weight gradient. This helps the network learn better while suppressing fluctuations for small G_w values. Alternatively, we could mitigate weight quantization errors by adopting AdaRound (Nagel et al., 2020), which learns whether each weight should be rounded up or down to produce the same output as the high-precision weights. However, whenever the full-precision weights are updated, the learnable parameters (i.e., the rounding decision of each weight) must be re-trained, incurring a large overhead and undermining the benefit of low-precision training.

3.3 ULTRA-LOW-PRECISION FORMAT FOR NETWORK QUANTIZATION

To verify the effectiveness of the proposed hysteresis quantization, we select a 4-bit logarithmic representation as the ultra-low-precision format for weight parameters. This format has the same dynamic range as INT8, which is widely used for weight quantization, and is more hardware-efficient since multiplication reduces to simple shift operations. There have been attempts to use logarithmic weights in quantized neural networks (Lee et al., 2017; Elhoushi et al., 2021), but from-scratch training shows significant performance degradation. In logarithmic data formats, the interval between quantization points is not uniform, making the effect of fluctuation more severe. Fig. 5 shows experimental results of ResNet-18 training on ImageNet using 4-bit logarithmic weights. Note that we apply channel-wise quantization to the convolutional layers to compensate for the insufficient expression range, and layer-wise quantization to the other types of layers. Further details on the experimental setup are provided in Appendix A.5.1. First, we measure how many quantized weights Q_w change when the network performs one weight update on a mini-batch, averaged over the first 100 updates of the 60th epoch. The experimental result in Fig. 5(a) clearly shows that using hysteresis significantly reduces the weight change frequency and stabilizes the training process. Fig. 5(b) compares the training performance of quantization schemes with and without hysteresis.
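For concreteness, here is a minimal PyTorch sketch of the rule in Eq. 6 on a uniform integer grid; the scale handling is our assumption, and the logarithmic-grid variant used in Sec. 3.3 would apply the same rule to quantized exponents instead:

```python
import torch

def hysteresis_quantize(w: torch.Tensor, q_prev: torch.Tensor,
                        scale: float = 1.0) -> torch.Tensor:
    """Quantize weights w to integer grid points following Eq. 6:
    round toward the previous quantized value q_prev, so that a small
    update cannot flip Q_w back and forth between two grid points."""
    x = w / scale
    # above the previous point: floor (round down, toward q_prev);
    # below it: ceil (round up, toward q_prev); equal: unchanged,
    # since ceil of a value already on the grid returns it as-is.
    return torch.where(x > q_prev, torch.floor(x), torch.ceil(x))
```

In a training loop, `q_prev` would be the quantized copy stored from the previous step, and `scale * Q_w` would be used in the forward pass.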
Hysteresis quantization not only speeds up training but also achieves better results at the end of training. Note that hysteresis quantization is applicable to other data formats as well; additional experimental results can be found in Appendix A.4.

4 EXPERIMENTAL RESULTS

4.1 LOW-PRECISION TRAINING SCHEME

For low-precision training, we need to quantize four variables: activation, error, weight, and weight gradient. In our experiments, we apply the quantized training scheme detailed in Section 2.2 to all of these variables, as depicted in Fig. 6. As in previous studies on 8-bit training, the inputs of GEMM are all quantized into 8 bits. Additional functions are applied to the GEMM results in the forward and backward paths. ReLU, tanh, and sigmoid functions are performed directly on the input, whereas the input of BatchNorm is re-quantized.

4.2 8-BIT LOW-PRECISION TRAINING

In Section 2.4, we found FP134 to be the optimal format for low-precision training using the proposed performance prediction method. We measure the training performance of this format and compare it against other 8-bit data formats from recent studies by applying those formats to the training of various neural network models. More details on the experimental setup are provided in Appendix A.5. The performance of the proposed data format is summarized in Table 1. Overall, 8-bit training using FP134 achieves nearly the same performance as full-precision training on all models. Even on MobileNetV2, which is known to be sensitive to quantization due to its small number of parameters, only 0.3% degradation occurred. Sun et al. (2019) show that HFP8 also exhibits only 0.2% accuracy degradation on MobileNetV2 (71.81% vs. 71.61%), but they quantize the BatchNorm input into 16 bits instead of 8 bits, roughly doubling the memory access and computational complexity. Additionally, since the forward and backward paths employ different data formats, HFP8 is actually implemented using 9-bit MAC units in hardware (Agrawal et al., 2021).

Table 2 compares the training performance of various data formats for ResNet-18 training. The columns w, x, dw, dx, and acc refer to the weight, activation, weight gradient, error, and GEMM accumulation, respectively. Our FP134 format exhibits no accuracy drop compared to full-precision training. HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) demonstrate similar performance, but they both use higher precision to represent BatchNorm inputs, and different formats are adopted in the forward and backward paths, necessitating complex computation units in hardware, as described above. In addition, BM8 assumes block-wise sharing of the exponent bias, incurring additional overhead in memory access and data alignment. FP8-SEB (Park et al., 2021) addresses this issue by employing layer-wise exponent bias sharing and multi-way MAC units, but it results in a 0.7% accuracy drop for ResNet-18 training. In contrast, our data format shows no performance degradation, while deeply quantizing BatchNorm inputs into the same format and allowing a simple datapath by using an identical data format in the forward and backward paths.

4.3 ULTRA-LOW-PRECISION TRAINING WITH 4-BIT LOGARITHMIC WEIGHTS

Elhoushi et al. (2021) recently demonstrated that 4-bit logarithmic weights could be used for network quantization. Fine-tuning a pre-trained model showed only 0.2% accuracy degradation, but from-scratch training of the same model resulted in a 4.5% accuracy drop for ResNet-18 (Table 3).
Similarly, our experiments show 2.1% lower accuracy when training ResNet-18 using 4-bit logarithmic weights and the FP134 format for the other variables. However, using hysteresis quantization greatly improves the training performance and reduces the accuracy degradation to 0.2%. This matches the training performance achieved by fine-tuning a pre-trained model in Elhoushi et al. (2021), confirming that hysteresis quantization effectively solves the issue of sub-optimal solutions in from-scratch training. In addition, Table 4 demonstrates that hysteresis quantization improves training performance on all target models. Note that in these experiments we quantized all trainable weights except the BatchNorm parameters into 4 bits; the training performance could be further improved by using higher precision for error-sensitive parts such as the first/last layers and residual connections.

5 CONCLUSION

In low-precision training, the dynamic range of a tensor is data-dependent, and hence an optimal data format depends on various factors such as the model, dataset, and quantization scheme. We showed that the training performance of a specific data format for activation and error can be predicted by observing the errors introduced in the weight gradients. Based on this observation, we determined an optimal 8-bit format for low-precision training very efficiently, without numerous training runs. The proposed FP134 format achieves similar or better accuracy compared to prior works, while allowing efficient hardware implementation by quantizing BatchNorm inputs and using a unified data format in both the forward and backward paths. In addition, we proposed the hysteresis quantization scheme for network quantization, which improves training performance by suppressing undesired fluctuations and stabilizing the training process. In ultra-low-precision training with 4-bit logarithmic weights, hysteresis quantization significantly improves training performance by mitigating sub-optimal solutions, closely matching the performance obtained by fine-tuning a pre-trained model. We expect these two schemes to complement each other and enable practical low-precision training on various models and tasks.

ACKNOWLEDGMENTS

This work was supported by the National Research Foundation of Korea (Grant No. NRF-2022R1C1C1006880). The EDA tool was supported by the IC Design Education Center.

A APPENDIX

A.1 VARIOUS FORMATS ANALYZED IN SECTION 2

In this paper, we made three assumptions on the quantization formats that were analyzed. First, one bit is allocated as a sign bit, so only symmetric formats are allowed. Second, the number of valid bits of a value with a larger absolute magnitude must be greater than or equal to that of a value with a smaller absolute magnitude. Last, the base does not exceed 2. Under these assumptions, we provide a systematic approach for generating the quantization methods used for analysis in Section 2, covering methods that trade off dynamic range against the number of valid bits. A quantization method is expressed by the following items: i) a list of decreasing positive real numbers P that contains the interval points (Eq. 7), and ii) a non-increasing integer list L that accompanies the interval list, with each item representing the number of valid bits (Eq. 8). Here, s is the shared exponent bias.
$$P = \{2^{s+1}, 2^s, 2^{s-1}, \ldots, 2^{s-K+1}\}, \quad s \in \mathbb{Z} \quad (7)$$

$$L = \{l_0, l_1, \ldots, l_{K-1}\}, \quad l_k \in \mathbb{N}, \; i < j \Rightarrow l_i \ge l_j \quad (8)$$

The quantization points Q are generated in each of the intervals, which is sliced into 2^(l_k−1) evenly distributed points. If the interval is [2^s, 2^(s+1)), the quantization points Q can be expressed by Eq. 9:

$$Q = \left\{2^s,\; 2^s\!\left(1 + \frac{1}{2^{l_k-1}}\right),\; 2^s\!\left(1 + \frac{2}{2^{l_k-1}}\right),\; \ldots,\; 2^s\!\left(1 + \frac{2^{l_k-1}-1}{2^{l_k-1}}\right)\right\} \quad (9)$$

Note that L for an α-bit quantization must satisfy

$$2^{\alpha-1} = 1 + \sum_{k=0}^{K-1} 2^{l_k-1} \quad (10)$$

Since the format is symmetric, only half of the data points are assigned to positive numbers, so the exponent in Eq. 10 is α − 1 instead of α; the added 1 accounts for the zero value. For example, when the shared exponent bias is −1, an 8-bit fixed-point quantization is expressed as follows:

$$P = \{2^0, 2^{-1}, 2^{-2}, 2^{-3}, 2^{-4}, 2^{-5}, 2^{-6}, 2^{-7}\} \quad (11)$$

$$L = \{7, 6, 5, 4, 3, 2, 1\} \quad (12)$$

The first interval, from 1 to 0.5, is evenly sliced into 2^(7−1) points; the next interval, from 0.5 to 0.25, into 2^(6−1); and so on. Various cases are shown in Fig. 7, with P plotted on the x-axis and L on the y-axis. Since P represents the range of values due to the shared exponent bias, which is independent of the data format, L alone can represent all of the data formats considered in this paper. When selecting 8-bit formats, we excluded formats in which intervals with fewer than 3 valid bits appear for more than two digits, as they have an unnecessarily large dynamic range; this reduces the search space. Thus, formats such as [7,6,5,4,2,2,2,1] were excluded. Considering all of the generation rules, we selected 166 distinct 8-bit formats with different dynamic ranges and valid bits, from [7,6,5,4,3,2,1] to [3,3,3,...,3,2,1]. After the number of valid bits for an 8-bit format is selected, 1 or 2 is subtracted from each value to create the corresponding 7-bit and 6-bit formats. For example, for the [6,5,5,5,5,4,4,4,3,2,1] 8-bit format, the corresponding 7-bit format is [5,4,4,4,4,3,3,3,2,1] and the corresponding 6-bit format is [4,3,3,3,3,2,2,2,1]. From the 166 generated 8-bit formats, the 7-bit and 6-bit formats were generated using this rule.

A.2 SOFTWARE IMPLEMENTATION DETAILS

To support quantized training for various formats, we wrote custom C++ and CUDA code to emulate quantized data. Our custom C++ and CUDA extension code performs quantization-related functions through the Python APIs in PyTorch, enabling extensible research while maintaining high performance. We emulate the quantized values using custom code in the parts of the network that need quantization, and PyTorch built-in functions are used for computation kernels such as convolution and matrix multiplication. We created a package named lptorch, short for low-precision PyTorch; the code can be found in the supplementary material.

A.3 ANGLE VS. MAGNITUDE TO PREDICT PERFORMANCE

In addition to the misalignment angles of G_w (∠(G_w, G_w + N_ΔA) and ∠(G_w, G_w + N_ΔE)) defined in Section 2.3, we used the magnitude of the noise (|N_ΔA| and |N_ΔE|) to predict the final trained performance; the results are shown in Fig. 8. Fig. 3 and Fig. 8 show that both the error magnitude and the misalignment angle are good metrics for determining the optimal data format. For the six target models, both metrics suggest FP134 as the best format. However, the misalignment angle still better captures the training performance. For instance, in Fig. 8(e), although FP134 shows a smaller noise magnitude, the actual training loss is smaller for FP143.
Similarly, in Fig. 8(b), (c), and (f), although INT8 failed and FP152 succeeded in training, the absolute value of the noise did not clearly distinguish the two formats. Based on these observations, we conclude that the misalignment angles are more suitable for predicting training performance than the absolute value of the noise.

A.4 HYSTERESIS QUANTIZATION WITH INTEGER WEIGHTS

In addition to 4-bit logarithmic weights, we also tested the hysteresis quantization scheme on a low-precision integer format (INT4) that uses uniform quantization. The results are shown in Table 5. Experimental results show that using hysteresis improves the performance in most cases. In addition, in MobileNetV2 training with INT4 weights, training initially failed, but using hysteresis enables reliable training, which suggests that hysteresis quantization not only helps the network reach the optimal point but also prevents divergence in an unwanted direction during the training process. However, it is interesting that hysteresis quantization is less effective on the LSTM model for the INT4 format. We suspect this is due to the weight distribution characteristics of the LSTM model. As shown in Fig. 9, most of the weights have a relatively large magnitude in the LSTM model when normalized, contrary to ResNet-18, in which the weights are more evenly distributed. In logarithmic formats, the relative quantization error is similar for all values, whereas in uniform quantization it is smaller for large values. Therefore, the weight parameters of the LSTM are more severely affected by fluctuation in logarithmic formats, making our hysteresis quantization scheme more effective in those formats than in uniform quantization.

A.5 EXPERIMENTAL DETAILS

A.5.1 RESNET-18 (IMAGENET)

We conducted ImageNet experiments using SGD with a momentum of 0.9 for 90 epochs, with a batch size of 256 images and an initial learning rate of 0.1 decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-18 architecture from the official PyTorch implementation [1]. Fig. 10 shows the Top-1 training and validation accuracy graphs. Observation of the training graphs indicates that all of the results are within 0.2% of the baseline, with the exception of FP130 without hysteresis quantization.

A.5.2 RESNET-101 (IMAGENET)

We trained ResNet-101 with the same training method as ResNet-18: SGD with a momentum of 0.9 for 90 epochs, a batch size of 256 images, and an initial learning rate of 0.1 decayed by a factor of 10 at the 30th and 60th epochs. We used the ResNet-101 architecture from the official PyTorch implementation [2]. Fig. 11 shows the Top-1 training and validation accuracy graphs. Observation of the training graphs indicates that all of the results are close to the baseline, with less than a 0.3% performance drop, except for FP130 without hysteresis quantization.

A.5.3 MOBILENETV2 (IMAGENET)

We conducted ImageNet experiments using SGD with a momentum of 0.9 for 270 epochs, with a batch size of 256 images and cosine annealing with an initial learning rate of 0.05. We used the MobileNetV2 architecture from the official PyTorch implementation [3]. Fig. 12 shows the Top-1 training and validation accuracy graphs.
Observation of the training graphs indicates that FP130 without hysteresis leads to very unstable fluctuations throughout training. In contrast, with FP130 with hysteresis, training is less susceptible to fluctuations and follows the baseline (FP32) training closely until the learning rate decreases toward the latter part of training, where both FP130 with hysteresis and FP134 show some degradation from the baseline. This appears to be a limitation due to the low precision of each format.

[1] https://github.com/pytorch/examples/tree/master/imagenet
[2] https://github.com/pytorch/examples/tree/master/imagenet
[3] https://github.com/pytorch/examples/tree/master/imagenet

A.5.4 2-LAYER LSTM (PTB)

We adopted the 2-layer Long Short-Term Memory (LSTM) network from PyTorch Examples [4] for language modeling on the Penn Treebank dataset (Marcus et al., 1993). We ran experiments in batches of 20 sentences with an initial learning rate of 20, decayed by a factor of 4 at epochs 11, 16, 26, 31, and 37. The embedding and hidden dimensions are 650 and the sequence length is 35. Fig. 13 shows the training and validation perplexity.

A.5.5 TRANSFORMER MODEL (IWSLT)

We adopted the Transformer Base model from the FairSeq repository [5] for the IWSLT'14 German-to-English translation task. We used the Adam optimizer and the default training parameters found in the repository, and trained from scratch for 25 epochs. BLEU scores were calculated using the script from the repository.

A.5.6 MOBILENETV2 + SSDLITE (VOC)

We adopted a PyTorch implementation of SSDLite from the online repository [6]. The base network is MobileNetV2, pretrained with each format as in Appendix A.5.3. The entire network is trained on the VOC2012 and VOC2007 trainval datasets and evaluated on the VOC2007 validation dataset. We used SGD with a momentum of 0.9 for 200 epochs in batches of 32 images, with cosine annealing and an initial learning rate of 0.01. Fig. 14 shows the validation loss at every 5 epochs. In this experiment as well, the loss fluctuates significantly for FP130 without hysteresis, whereas with FP130 with hysteresis, learning proceeds much more stably. FP134 showed results similar to the baseline regardless of hysteresis quantization.

[4] https://github.com/pytorch/examples/tree/master/word_language_model
[5] https://github.com/pytorch/fairseq
[6] https://github.com/qfgaohao/pytorch-ssd

A.6 MODEL QUANTIZATION METHODS

We quantized the GEMM input and BatchNorm input in all quantized training experiments. Among the six models used in the experiments, the quantization details for three representative structures are shown in Fig. 15. In each structure in the figure, inputs such as x, c, h, V, K, and Q are all quantized to 8 bits.

A.7 HARDWARE EVALUATION

For hardware implementation cost comparisons, we implemented a conventional MAC unit and a multi-way MAC unit with integer-based accumulation (Tambe et al., 2020; Park et al., 2021) that support the data formats presented in Section 4.2. For accumulation, we use FP169 with chunk-based accumulation (Wang et al., 2018).
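Chunk-based accumulation splits a long dot product into fixed-size chunks, sums each chunk, and then sums the chunk results, so that no single accumulation chain grows long enough for small addends to be swamped. Below is a minimal PyTorch emulation at chunk granularity; the chunk size and the identity `quantize` placeholder (which would round to FP169 in the paper's setup) are our assumptions:

```python
import torch
import torch.nn.functional as F

def chunk_accumulate(products: torch.Tensor, chunk: int = 64,
                     quantize=lambda t: t) -> torch.Tensor:
    """Sum a 1-D tensor of elementwise products with chunk-based
    accumulation (Wang et al., 2018) to reduce rounding error."""
    pad = (-products.numel()) % chunk           # zero-pad to a multiple
    padded = F.pad(products, (0, pad))
    partial = quantize(padded.reshape(-1, chunk).sum(dim=1))
    return quantize(partial.sum())
```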
Experimental results in Table 6 show that FP134 exhibits lower cost than FP143 and the other formats from previous studies.

Table 6: Implementation cost of conventional and multi-way MAC units (left group: area; right group: power).

| Structure | FP134 | FP143 [1] | HFP8 [2] | BM8 [3] | Flex16+5 [4] | FP134 | FP143 [1] | HFP8 [2] | BM8 [3] | Flex16+5 [4] |
|---|---|---|---|---|---|---|---|---|---|---|
| Conventional | 1355 | 1320 | 1308 | 1460 | 3800 | 122 | 116 | 106 | 141 | 537 |
| Multi-way, 2-input | 1335 | 1480 | 2342 | 1865 | 2268 | 178 | 178 | 283 | 258 | 371 |
| Multi-way, 4-input | 888 | 1034 | 1615 | 1296 | 1885 | 120 | 135 | 205 | 184 | 351 |
| Multi-way, 8-input | 678 | 836 | 1343 | 1074 | 1672 | 97 | 123 | 194 | 168 | 342 |
| Multi-way, 16-input | 571 | 698 | 1065 | 957 | 1540 | 95 | 114 | 170 | 155 | 329 |
| Multi-way, 32-input | 511 | 668 | 994 | 898 | 1485 | 87 | 111 | 170 | 152 | 326 |
| Multi-way, 64-input | 509 | 638 | 955 | 856 | 1450 | 88 | 110 | 172 | 149 | 326 |

[1] Park et al. (2021) [2] Sun et al. (2019) [3] Fox et al. (2020) [4] Köster et al. (2017)

Note that HFP8 (Sun et al., 2019) and BM8 (Fox et al., 2020) employ different formats for activation and error. Therefore, they must be implemented in FP153 and FP145, respectively, to support all operations with a single MAC unit (Agrawal et al., 2021). Since Flex16+5 (Köster et al., 2017) requires 16-bit multiplication, its cost is significantly higher than the other 8-bit formats.

A conventional MAC unit consists of a multiplier and an accumulator. In the multiplier, the exponents of the two input operands are summed while their mantissas are multiplied. The mantissa multiplication is the more complex part and dominates the area of the multiplier. As a result, the multiplier is larger when more bits are allocated to the mantissa. In the accumulator, a floating-point adder adds the multiplication results to a partial sum in FP169. The adder decomposes into a shifter that aligns the mantissa by the exponent difference, an integer adder that sums the aligned mantissas, and a quantization unit that converts the result back to FP169. Since the result is re-quantized into FP169, the addition of the aligned mantissas does not need to be lossless. The FP169 format has a 10-bit mantissa including one hidden bit. We only need to accurately calculate the upper 10 bits, which necessitates a 12-bit adder considering rounding. Shifting by more than 12 bits is not needed even if the multiplier result has a larger exponent range. Therefore, the shifter, adder, and quantization unit, which are the components of the accumulator, are not affected by the input format. There are minor differences, such as the adder that calculates the difference between exponents and a shifter with a different input bit width, but their costs are negligible.

In contrast, a multi-way MAC consists of a multiplier, a shifter for alignment, an adder tree, a normalization unit, and a final accumulator. The multiplier and the final accumulator are identical to those of the conventional MAC. However, since one normalization unit and one final accumulator are shared across multiple inputs, their implementation cost becomes insignificant for a larger number of inputs. The shifter for alignment converts the multiplier output to an integer format, since the cost of integer addition is lower than that of floating-point addition. Then, the adder tree sums those integer values, and the normalization unit converts the result back to a floating-point format. The cost of the alignment shifter, adder tree, and normalization unit is determined by the integer bit width, and the larger the exponent range of the input operands, the larger the required bit width, as shown in Fig. 16. For FP134, FP143, and FP152, the minimum integer bit widths are 23, 37, and 67 bits, respectively. Since the bit width is sufficiently large, the cost difference of these units exceeds the cost difference of the multiplier.
Therefore, the cost of a multi-way MAC increases with the number of exponent bits. When designing a neural network training processor, some parts of the hardware (e.g., batch normalization, non-linear activation functions such as tanh and sigmoid, and the softmax function) are typically implemented with higher precision to avoid a performance drop. Hence, we need to consider data format conversion overheads when comparing different formats.

Table 7: Implementation cost of conversion units between each low-precision format and FP32 (left group: area; right group: power).

| Direction | FP134 | FP143 | HFP8 [1] | BM8 [2] | Flex16+5 [3] | FP134 | FP143 | HFP8 [1] | BM8 [2] | Flex16+5 [3] |
|---|---|---|---|---|---|---|---|---|---|---|
| To FP32 | 155 | 141 | 145 | 176 | 330 | 28 | 26 | 27 | 30 | 53 |
| From FP32 | 139 | 144 | 152 | 162 | 427 | 19 | 20 | 22 | 23 | 55 |

[1] Sun et al. (2019) [2] Fox et al. (2020) [3] Köster et al. (2017)

If we consider various 8-bit data formats with different representation methods, as in Table 6, and assume that computations other than MAC operations are implemented in full precision, the processing architecture (except for the MAC units) will be identical for all formats. In addition, the on/off-chip memory space, control logic, and on-chip interconnects will remain the same. The only differences are the low-precision MAC units and the data conversion units between full-precision and low-precision formats. However, the cost of conversion between low-precision and high-precision floating-point formats is typically very low and does not vary much with the low-precision format. For low-precision to high-precision conversion, we only have to add a bias-correction term to the exponent and append zeros to the mantissa. For high-precision to low-precision conversion, we need to add a bias-correction term to the exponent, clamp overflowed values to the maximum, and round off the mantissa (a sketch of this conversion appears at the end of the appendix). The cost is very low compared to a MAC operation, and the cost difference between different low-precision formats is negligible. We synthesized the conversion units for the different formats, and their costs are presented in Table 7. The experimental results confirm that the overhead of data format conversion is significantly lower than that of MAC operations. In addition, all formats except Flexpoint exhibit similar conversion costs.

In addition to the synthesis results for ASIC implementation in Table 6, we measured the hardware overhead of MAC units for the different data formats on an FPGA. Table 8 shows the synthesis results on a Xilinx Artix-7 FPGA (XC7A100TCSG324-1). These MAC units do not need block RAMs (BRAMs), and we used a compiler directive to avoid using DSP modules for fair comparison. Table 8 shows a similar trend to Table 6: the cost of one MAC gradually decreases as the number of inputs increases in the multi-way MAC. Also, due to the integer-based addition in the adder tree, FP134, which has the smallest dynamic range, exhibits lower costs than the other formats.

A.8 EFFECT OF QUANTIZATION NOISE ON DATA FLOW QUANTIZATION

Table 9 shows the training results when both activation and error are quantized in various data formats. If an appropriate amount of noise is introduced into the network during training, it will increase the training loss but reduce the validation loss, suggesting that the model has been improved by the regularization effect. However, if the noise level continues to increase, the model's performance will start to degrade at some point. For instance, when MobileNetV2 is quantized in FP134, its performance is improved through the regularization effect, since the training loss increases while the validation loss decreases compared to FP32.
However, in most cases both the training and validation losses increase when quantized, resulting in lower accuracy. This suggests that using a very low-precision data format already introduces a large amount of noise into the network, incurring performance degradation. Hence, reducing error in the network is necessary to improve the training performance in low-precision training.
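As referenced in Appendix A.7, the sketch below emulates the FP32-to-low-precision conversion steps (exponent bias correction, overflow clamping, mantissa rounding) in PyTorch; the symmetric exponent-range convention and the omission of subnormal handling are our simplifying assumptions:

```python
import torch

def quantize_fp(x: torch.Tensor, exp_bits: int = 3,
                man_bits: int = 4, bias_corr: int = 0) -> torch.Tensor:
    """Emulate FP32 -> FP1xy conversion: exponent bias correction,
    clamping of overflowed values to the maximum magnitude, and
    mantissa rounding (Appendix A.7)."""
    y = x * 2.0 ** bias_corr                 # exponent bias correction
    man, exp = torch.frexp(y)                # |man| in [0.5, 1)
    step = 2.0 ** (man_bits + 1)             # man_bits fraction + hidden bit
    man = torch.round(man * step) / step     # round off the mantissa
    e_max = 2 ** (exp_bits - 1)              # assumed symmetric exponent range
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** (e_max - 1)
    y = torch.ldexp(man, exp).clamp(-max_val, max_val)  # clamp overflow
    return y * 2.0 ** -bias_corr
```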
1. What is the focus of the paper regarding efficient deep neural network training using quantized networks?
2. What are the strengths of the proposed approach, particularly the hysteresis quantization scheme?
3. Are there any concerns or weaknesses regarding the experiment settings and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are the questions raised by the reviewer regarding the paper's methodology, results, and compatibility with other approaches?
Summary Of The Paper

The submission starts from an interesting point: various quantized training environments may require different formats for training accurate deep neural networks. They present a metric based on the misalignment between ∂ℓ/∂w + noise and ∂ℓ/∂w to determine the optimal format. To mitigate the fluctuation issue caused by network quantization, they propose a hysteresis quantization scheme to avoid frequent changes of quantized points. The experimental results fully support the effectiveness of the proposed methods.

Review

Pros:
- This paper addresses a very relevant topic. Though model compression has been widely discussed in the recent literature, efficient training with quantized networks is an approach worth exploring.
- The technical part of this paper is simple and easy to follow.
- It is clear that the hysteresis setting contributes to a consistent improvement over the standard approach on both image recognition and NLP tasks.

Cons:
- Some related works are missing from the comparisons, such as Flexpoint [1] and HFINT [2]. In particular, HFINT first notices the difference in weight distributions between CNN and NLP models/tasks and then presents an adaptive floating-point format with hardware implementations. Compared with the static FP143/FP134 mode used in the current draft, they use an on-the-fly adaptive format. In light of this, it would be better to include a HARDWARE EVALUATION section to verify the effectiveness of the proposed "data flow quantization".
- Please denote "weight gradients" as G_W or G_w; WG commonly refers to matrix multiplication.
- Figure 2: How can the 8-, 7-, and 6-bit quantization formats be distinguished? It seems that Figure 8 (in A.3) has almost the same tendency as Figure 2. In general, FP134 performs the best. Why introduce a more complicated metric to determine the optimal format? Note that the authors conduct all experiments under the setting of FP134. How are G_W + N_ΔE and G_W + N_ΔA obtained?
- Table 1: Why does "1-input" consume a much larger area than the conventional MAC? Besides, why does the conventional MAC consume a smaller area as the mantissa shrinks, while the multi-way MAC behaves in the opposite way? I expect more results on LUTs, DSPs, BRAMs, and Power in addition to Area.
- Is hysteresis quantization compatible with uniform quantization? Does FP134 still involve a scaling factor s during training?

References:
[1] Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks. NIPS 2017.
[2] Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference. DAC 2020.

Supplementary:
- Please include a README file in the supplementary material.
- How should the magic number 555555543210 used for "data format for data flow quantization" be understood?
ICLR
Title: Certified Robustness on Structural Graph Matching

Abstract: The vulnerability of graph matching (GM) to adversarial attacks has received increasing attention from emerging empirical studies, while the certified robustness of GM has not been explored. Inspired by the technique of randomized smoothing, in this paper, for the first time to our best knowledge, the certified robustness on GM is defined and a new certification strategy is designed, called Structure-based Certified Robustness of Graph Matching (SCR-GM). Structural prior information of nodes is used to construct a joint smoothing distribution matrix with physical significance, which certifies a wider range than those obtained by previous iterative optimization methods. Furthermore, we propose a certified space that can be used to derive a strictly certified radius and two extra radii for evaluation. Experimental results on GM datasets reveal that our strategy achieves state-of-the-art ℓ2 certified accuracy and regions. Source code will be made publicly available.

1 INTRODUCTION

As a well-known NP-hard problem in its general form (Yan et al., 2016), with wide applications in, e.g., computer vision and pattern recognition, graph matching (GM) refers to establishing correspondences among two (Cho et al., 2010) or multiple graphs (Jiang et al., 2021). Consider two input graphs G1 = {V1, E1} and G2 = {V2, E2} with two sets of annotated nodes z1 ∈ R^(n1×2) and z2 ∈ R^(n2×2) (assumed to lie in Euclidean space in this paper). Here, V1 ∈ R^(dv×n1) and E1 ∈ R^(de×m1) represent the feature matrices of the n1 nodes and m1 edges (likewise for V2 and E2). The similarities between nodes and edges are formulated into a global affinity matrix K ∈ R^(n1n2×n1n2), whose diagonal and off-diagonal elements store the node-to-node and edge-to-edge affinities. GM aims to maximize the overall affinity score J of the matching nodes and edges (Leordeanu & Hebert, 2005) in the form of a quadratic assignment problem (QAP) (Loiola et al., 2007):

$$\max_X J(X) = \mathrm{vec}(X)^\top K \,\mathrm{vec}(X), \quad \text{s.t. } X \in \{0,1\}^{n_1 \times n_2}, \; X\mathbf{1}_{n_2} = \mathbf{1}_{n_1}, \; X^\top \mathbf{1}_{n_1} \le \mathbf{1}_{n_2} \quad (1)$$

where vec(X) denotes the column-wise vectorization of the matching solution X ∈ {0,1}^(n1×n2), which can be a partial permutation matrix when n1 < n2. One common approach is to relax X's binary constraint into a continuous one (between [0, 1]), especially in the form of a (partial) doubly-stochastic matrix S ∈ [0,1]^(n1×n2) whose rows/columns sum to 1 (or zero in the partial case). The final X can be obtained by the Hungarian algorithm (Burkard & Dell'Amico, 2009): X = Hung(S); see the sketch below. Eq. 1 can also directly incorporate deep networks to obtain a learned affinity matrix K by learning from the raw attributes of the graphs, e.g., CNNs for images from which visual graphs are extracted, as well as learning the structure via graph neural networks (GNNs) (Wang et al., 2019): K = NN(G1, G2).

Studies on the robustness of machine learning models have attracted wide attention, while the robustness of combinatorial solvers is an emerging and immature topic (Geisler et al., 2021; Lu et al., 2021). Under the deep GM paradigm, Ren et al. (2022) reveal that combinatorial GM algorithms can also be sensitive to (additive) noise perturbations, not only in appearance but also in structure, similar to node classification models (Dai et al., 2018; Sun et al., 2018), and an empirical defense algorithm via an appearance-aware regularizer is proposed. So far, there is still no principled certified defense providing theoretical robustness guarantees for GM (let alone other combinatorial problems).
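As a concrete reference point for Eq. 1 and the rounding step X = Hung(S), the sketch below evaluates the QAP objective and binarizes a relaxed solution with SciPy's Hungarian solver; the function names and the use of `linear_sum_assignment` are our choices, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def affinity_score(X: np.ndarray, K: np.ndarray) -> float:
    """QAP objective of Eq. 1: vec(X)^T K vec(X), with column-wise vec()."""
    x = X.flatten(order="F")
    return float(x @ K @ x)

def binarize(S: np.ndarray) -> np.ndarray:
    """X = Hung(S): project a (partial) doubly-stochastic relaxation S
    onto a permutation-like binary matrix; handles n1 <= n2."""
    rows, cols = linear_sum_assignment(-S)  # maximize the matching score
    X = np.zeros_like(S)
    X[rows, cols] = 1.0
    return X
```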
In fact, existing certified robustness mechanisms (including randomized smoothing) in the graph domain (Rong et al., 2019; Bojchevski et al., 2020; Zügner & Günnemann, 2020; Jia et al., 2020) are confined to unconstrained node- or graph-level classification/prediction within a single graph, and cannot be readily adopted for cross-graph, combinatorial problems with structured output like the permutation matrix in GM. Certifiable robustness studies solvers whose prediction at any point x is verifiably constant within some set around x (Wong & Kolter, 2018). As a recent promising approach to certified defense, randomized smoothing (RS) (Lecuyer et al., 2019; Cohen et al., 2019) provides a general robustness guarantee applicable to large-scale neural networks against arbitrary attacks. Given an input x and a base classifier, randomized smoothing constructs a 'smoothed classifier' which is certifiable within the region characterized by x and the smoothing distribution D. RS has been used to certify different models, e.g., image classification (Yang et al., 2020) and object detection in vision (Chiang et al., 2020).

As an initiative for applying RS to GM [1], in this paper we mainly consider two major challenges. C1: varying size of input graphs. It is not suitable to certify graphs of different sizes using an identical smoothing distribution. C2: dependency of nodes in a graph. The graph structure as a whole carries important information for certification. For the first challenge, we can refer to data-dependent certified robustness methods for image classification. Several data-dependent methods (Alfarra et al., 2022; Eiras et al., 2021; Hong & Hong, 2022; Labarbarie et al., 2022) have recently been proposed to vary and optimize the smoothing distribution D for a larger certification region; these can also be used to construct varying smoothing distributions for graphs of varying sizes. For the second challenge, we want smoothing distributions that capture the correlations between nodes in a graph, which current randomized smoothing lacks. Data-dependent methods consider little of the heterogeneity and structure of the inputs. For example, Alfarra et al. (2022) treat all pixels in an image equally, and Eiras et al. (2021) treat pixels differently but cannot reveal their correlation. Thus neither can overcome the second challenge.

In this paper, we aim to solve the certified robustness of GM by analyzing the matching robustness of each individual node, instead of the variation of the whole output matching matrix X in Eq. 1. In particular, we study the node classification task that arises when converting the relaxed solution S into the final matching X (see Eq. 1 and the discussion therein), as the RS-type certification phase can be naturally introduced at the classification stage. Specifically, we propose Structure-based Certified Robustness of Graph Matching (SCR-GM), which adopts a joint Gaussian distribution instead of an independent homogeneous distribution to construct the smoothed solvers. As adversarial attacks tend to perturb strongly correlated nodes at the same time, additive noise sampled from a joint distribution with structural information and physical meaning can reveal this correlation (see the sketch after this paragraph). According to our theoretical analysis, we obtain a robustness guarantee on GM which describes a certified ℓ2-norm space and its lower-bound radius. In addition, we propose another two radii to help evaluate the robustness more comprehensively.
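A minimal NumPy sketch of the smoothed-matcher idea follows: a Monte-Carlo vote under correlated keypoint noise. The base-matcher interface `f`, the vote counting, and the sample count are our assumptions, and the actual certification additionally requires the probability bounds on p_A and p_B used in Theorem 1:

```python
import numpy as np

def smoothed_match(f, z1, z2, B, n_samples=1000, seed=0):
    """Monte-Carlo estimate of the smoothed matcher g_i: perturb the
    keypoints z1 with joint Gaussian noise eps ~ N(0, Sigma), where
    Sigma = B^T B encodes node correlations, and return the most-voted
    match of each node. f maps (z1, z2) to n1 hard match indices."""
    rng = np.random.default_rng(seed)
    n1, d = z1.shape
    votes = np.zeros((n1, z2.shape[0]), dtype=int)
    for _ in range(n_samples):
        iid = rng.standard_normal((n1, d))   # iid N(0, I) noise
        match = f(z1 + B.T @ iid, z2)        # correlated noise, cov B^T B
        votes[np.arange(n1), match] += 1
    return votes.argmax(axis=1), votes
```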
We evaluate our strategy on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and on a simulation dataset with random node sets. Experimental results reveal that our strategy outperforms previous works (Cohen et al., 2019; Alfarra et al., 2022; Eiras et al., 2021) on structural GM in terms of ℓ2 certified accuracy and regions. Our contributions are as follows: 1) We propose a general framework for incorporating existing RS-based techniques to certify graph matching solvers, as long as (which is often the case for both learning-based and classic solvers) the solver involves a post-binarization step that converts the relaxed matching S (produced by an arbitrary relaxed GM solver) into node matchings. 2) Based on our proposed framework, we present the first definition, to our best knowledge (see Eq. 5), of certified robustness for a graph matching solver. 3) We propose a certification method dubbed Structure-based Certified Robustness of GM (SCR-GM) (see Sec. 4.3). It uses jointly distributed noise to model dependent node matching certification. 4) A certified space and lower-bound radius are derived to guarantee the robustness of graph matching. Two additional radii are also devised for a more complete evaluation of robustness, complementing potentially safe regions and the largest feasible perturbations.

[1] Another challenge is how to better handle the constraints of X, which relates to extending the certification of the specific GM problem to other combinatorial solvers; we leave this for future work.

2 RELATED WORK

We discuss works on certified robustness related to randomized smoothing and on the robustness of GM.

Certified Robustness related to Randomized Smoothing: Lecuyer et al. (2019) first propose randomized smoothing as a certified adversarial defense, and use it to train the first certifiably robust classifier for ImageNet. However, its guarantees are loose; Cohen et al. (2019) then show that adding Gaussian noise to classifiers enjoys a strict ℓ2 certification radius, with follow-ups presenting new RS-type techniques, such as optimal perturbations at different norms and certified robustness definitions for different tasks. Alfarra et al. (2022) show that the variance of the Gaussian distributions can be optimized at each input so as to maximize the certification region. Meanwhile, Eiras et al. (2021) extend isotropic smoothing distributions to generalized anisotropic counterparts. Hong & Hong (2022) adopt the same anisotropic definition and further design a noise generator to efficiently fine-tune the distributions. A recent work (Labarbarie et al., 2022) that relies on information-geometry techniques manages to prove larger regions than previous methods. However, all previous smoothing distributions D forgo favorable prior knowledge, which in GM mainly refers to node locations and graph structure. Moreover, all of them at most certify a single image or graph and do not consider the combinatorial nature of the prediction as in GM.

Robustness of Graph Matching: Approximate GM solvers have been developed over the decades, from traditional learning-free methods (Emmert-Streib et al., 2016) to learning-based ones (Yan et al., 2020). The seminal work (Zanfir & Sminchisescu, 2018) proposes a deep neural network based pipeline for visual GM, in which the visual appearance features are learned via CNN, with subsequent variants (Wang et al., 2019; Rolínek et al., 2020), among which a major improvement is to explore structural information using different techniques, e.g.,
GNNs, rather than relying only on appearance features for node/edge attributes as done in (Zanfir & Sminchisescu, 2018). Our work treats the GM solver as a black box, regardless of whether it is learning-based or not, as long as it involves a continuous relaxation that yields an intermediate doubly-stochastic matrix. There is also an emerging line of research on adversarial attack and defense for (deep) GM. The earlier work (Yu et al., 2019b) proposes a robust graph matching (RGM) model to improve robustness against perturbations, e.g., distortion, rotation, outliers and noise. Zhang et al. (2020) devise an adversarial attack model for deep GM networks, which uses kernel density estimation to construct dense regions such that neighboring nodes are indistinguishable. Ren et al. (2021) devise two specific topology attacks for GM, inter-graph dispersion and intra-graph combination, and propose a resilient defense model. Ren et al. (2022) design an attack that perturbs input images and their hidden graphs together for deep (visual) GM, and further propose appearance-aware regularizers that enlarge the disparity among similar keypoints for defense. However, the above defense methods are all heuristic and lack robustness certification in the face of other, unseen attacks.

3 PRELIMINARIES ON RANDOMIZED SMOOTHING

The original RS (Cohen et al., 2019) can transform an arbitrary base classifier f into a smoothed classifier g that is certifiably robust under the ℓ2 norm. For any input x, the smoothed classifier g returns the most probable prediction of f for the random input $\mathcal{N}(x, \sigma^2 I)$:

$$g(x) = \arg\max_{c \in \mathcal{Y}} \mathbb{P}(f(x+\varepsilon) = c), \quad (2)$$

where $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$ is isotropic Gaussian noise perturbing the input x. The certified radius within which the output is unchanged, i.e. $g(x+\delta) = c_A$, measuring the certified robustness, is then

$$\|\delta\|_2 < R = \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (3)$$

where the most probable class $c_A$ is returned with probability $p_A$ and the 'runner-up' class with probability $p_B$; $\underline{p_A}$ and $\overline{p_B}$ are a lower bound on $p_A$ and an upper bound on $p_B$ respectively, and $\Phi^{-1}$ is the inverse of the standard Gaussian cumulative distribution function. The smoothed classifier g is robust around x within the ℓ2 radius in Eq. 3. To enhance the certification, Alfarra et al. (2022) and Eiras et al. (2021) propose optimized isotropic and anisotropic distributions, respectively, to maximize the certified region. However, neither can explicitly encode prior information about the inputs (e.g., the graph topology in GM), which means their distributions are randomly initialized. Differently, we propose a correlation matrix that reveals the structural information in graphs, and in turn construct a joint Gaussian distribution to replace the single Gaussian distribution, which not only makes the initial distribution physically meaningful but also eliminates the optimization process of finding the largest certified region.
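As a concrete reference point for Eqs. 2 and 3, the following is a minimal Monte Carlo sketch of the isotropic RS baseline; the base solver `f`, the sample count, and the clipping constants are our assumptions and not part of any released implementation, and a rigorous certificate would replace the empirical probabilities with one-sided confidence bounds.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_radius(f, x, sigma, n_samples=1000, seed=0):
    """Monte Carlo sketch of Eq. 2 and Eq. 3 (isotropic RS baseline).

    f is a placeholder base classifier mapping an input array to a class id.
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        eps = rng.normal(0.0, sigma, size=x.shape)   # eps ~ N(0, sigma^2 I)
        c = f(x + eps)
        counts[c] = counts.get(c, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    p_a = min(ranked[0][1] / n_samples, 1.0 - 1e-6)  # avoid ppf(1) = inf
    p_b = ranked[1][1] / n_samples if len(ranked) > 1 else 1.0 - p_a
    p_b = max(p_b, 1e-6)
    radius = 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))  # Eq. 3
    return ranked[0][0], max(radius, 0.0)
```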
4 METHODOLOGY

We first define the smoothed GM solver (either a neural network or a traditional solver) and propose a robustness guarantee. We then devise a new certification strategy, SCR-GM, using a physically meaningful joint smoothing distribution. We also give two new radii to aid the evaluation of robustness.

4.1 PROBLEM FORMULATION

For pairwise GM with input $(G^1, G^2, z^1, z^2)$, we mainly focus on the effect of perturbing the two sets of annotated nodes $z^1 \in \mathbb{R}^{n_1 \times 2}$ and $z^2 \in \mathbb{R}^{n_2 \times 2}$. For visual GM (Zanfir & Sminchisescu, 2018; Ren et al., 2022), as widely considered in the literature, $z^1$ and $z^2$ are node coordinates obtained by human annotation or keypoint detectors. When certifying against node perturbations, we consider the node coordinates as the input while keeping the node/edge attributes unchanged; the robustness guarantees for perturbed features are given in Appendix B. As discussed in Sec. 3, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. In this paper, we convert the whole matching problem into a set F of binary classification functions based on the intermediate matrix S:

$$F = \{f_i \mid f_i : (G^1, G^2, z^1, z^2) \to r_j,\ i \in [n_1],\ j \in [n_2]\},$$

where $f_i$ expresses that the i-th node in $z^1$ matches the j-th node $r_j$ in $z^2$. Such a conversion allows us to certify the matching robustness of a single node, avoiding an imprecise certification of the entire matching matrix. The smoothed network $g_i$ returns whichever node in $z^2$ is most likely to match the i-th node in $z^1$ when the input is perturbed by the joint smoothing noise:

$$g_i = \arg\max_{r_j \in z^2} \mathbb{P}(f_i(G^1, G^2, z^1 + \varepsilon, z^2) = r_j), \quad \varepsilon \sim \mathcal{N}(0, \Sigma),\ i \in [n_1],\ j \in [n_2]. \quad (4)$$

For convenience, we abbreviate $f_i(G^1, G^2, z^1, z^2)$ as $f_i(z^1)$ and derive the results by perturbing $z^1$ only, as this is equivalent to robustness certification under a joint perturbation of $z^1$ and $z^2$. Furthermore, we propose a method that defines the smoothed function for certifying the whole X, introduced in Appendix E. The noise ε follows a joint Gaussian distribution whose covariance represents the correlation between nodes. In addition, Σ is a hyperparameter of the certified function which controls a robustness/accuracy trade-off and will be detailed in Sec. 4.3. Note that for robustness certification, we only consider nodes for which the argmax in Eq. 4 has a unique solution.

4.2 ROBUSTNESS GUARANTEE

Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ in $z^2$ is returned with probability $p_A = \max_{s_i \in S_i} s_i$, where $S_i$ is the i-th row of S. Similarly, the probability of the "runner-up" node $r_B$ in $z^2$ is $p_B = \max_{s_i \in S_i,\, r_B \neq r_A} s_i$. We adopt an ℓ2 certified space to guarantee the robustness of graph matching.

Theorem 1 (ℓ2 certified space) Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy

$$\mathbb{P}(f_i(z^1 + \varepsilon) = r_A) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}(f_i(z^1 + \varepsilon) = r_B), \quad (5)$$

then $g_i(z^1 + \delta) = r_A$ for every additive perturbation δ in the certified ℓ2 space

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (6)$$

where $B^\top B = \Sigma$, $B \in \mathbb{R}^{n_1 \times n_1}$ is a full-rank, real symmetric matrix built from the node correlations in $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$. The settings and properties of B and Σ are described and illustrated in Sec. 4.3. The complete proof of Theorem 1 is presented in Appendix A.

Lemma 1 (Eigenvalue Comparison) For a real symmetric matrix $A \in \mathbb{R}^{n \times n}$ with maximum and minimum eigenvalues $\lambda_{\max}$ and $\lambda_{\min}$, we have $\lambda_{\min} X^\top X \le X^\top A X \le \lambda_{\max} X^\top X$ for all $X \in \mathbb{R}^n$.

Based on Lemma 1 and the certified space in Eq. 6, we can further obtain a certified ℓ2-norm radius:

$$\|\delta^\top B^{-1}\|^2 = \delta^\top \Sigma^{-1} \delta, \quad (7)$$
$$\delta^\top \Sigma^{-1} \delta \le \lambda_{\max}\, \delta^\top \delta, \quad (8)$$
$$\|\delta\|_{lower} < \frac{1}{2\sqrt{\lambda_{\max}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (9)$$

where $\lambda_{\max}$ is the maximum eigenvalue of $\Sigma^{-1}$. Bounding $\delta^\top \Sigma^{-1} \delta$ via Eq. 8 and requiring the bound to satisfy the constraint of Eq. 6 yields the lower bound $\|\delta\|_{lower}$ on $\|\delta\|$. Eq. 6 is an exact constraint on the perturbation space, which is a hyperellipsoid, while Eq. 9 describes the minor axis of that hyperellipsoid. Both are general expressions for arbitrary GM solvers and joint Gaussian smoothing distributions, as will be shown in Sec. 4.3.
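For concreteness, a small numpy sketch of the Eq. 6 membership test and the Eq. 9 lower-bound radius; the function names are ours, and B, Σ and the confidence bounds are assumed to be given.

```python
import numpy as np
from scipy.stats import norm

def half_gap(p_a_lower, p_b_upper):
    """r = (Phi^{-1}(pA_lower) - Phi^{-1}(pB_upper)) / 2, cf. Eqs. 6 and 9."""
    return 0.5 * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))

def in_certified_space(delta, B, p_a_lower, p_b_upper):
    """Eq. 6: a perturbation delta is certified iff ||delta^T B^{-1}|| < r."""
    return np.linalg.norm(delta @ np.linalg.inv(B)) < half_gap(p_a_lower, p_b_upper)

def lower_radius(Sigma, p_a_lower, p_b_upper):
    """Eq. 9: ||delta||_lower = r / sqrt(lambda_max), with lambda_max from Sigma^{-1}."""
    lam_max = np.linalg.eigvalsh(np.linalg.inv(Sigma)).max()
    return half_gap(p_a_lower, p_b_upper) / np.sqrt(lam_max)
```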
4.3 JOINT SMOOTHING DISTRIBUTION

In contrast to the isotropic (Alfarra et al., 2022) and anisotropic (Eiras et al., 2021) distributions, SCR-GM reflects the structure of the graph while remaining efficient by avoiding gradient-based optimization. We first construct the correlation matrix B from the similarity between nodes in $z^1$. B is a full-rank, real symmetric matrix whose element $b_{mn}$ denotes the correlation between the m-th and n-th nodes in $z^1$. We define a similarity based on Euclidean distance:

$$b_{mn} = \frac{1}{1 + d_{mn}/\gamma}, \quad (10)$$

where $d_{mn}$ is the Euclidean distance between the m-th and n-th nodes, and γ is a normalization coefficient that controls the degree of correlation. We also use three other similarity measures to construct B, namely cosine, Pearson and Dice similarity, as detailed in Appendix C. Nodes in close proximity are more susceptible to perturbations of similar intensity, while perturbations added to distant nodes are almost independent. The diagonal elements of B indicate the intensity of the perturbation at each node, while the off-diagonal elements reveal the correlation between nodes. From $B^\top B = \Sigma$ we then obtain the smoothing distribution used to sample the additive input noise. Σ is a positive definite matrix, which guarantees that the radii derived in this work are well defined. In contrast, the distribution in (Eiras et al., 2021) is a diagonal matrix with distinct diagonal elements, which cannot represent the correlation between nodes, and the distribution in (Alfarra et al., 2022) is a diagonal matrix with identical diagonal elements, which treats all nodes indistinguishably. When inter-node correlations and differences in noise intensity are neglected, Σ degenerates into those two distributions; Σ is thus a generalized setting that allows all the distributions to be compared within one framework. For comparison, we keep Σ at the same order of magnitude as the three previous distributions (Cohen et al., 2019; Eiras et al., 2021; Alfarra et al., 2022), taking a strategy similar to that of (Eiras et al., 2021) to ensure that

$$\min_i \frac{1}{\lambda_i^x}\, r(x, \Sigma^x) \ge \min_i \theta_i^x\, r(x, \Theta^x), \quad (11)$$

where $\lambda_i^x$ is the i-th eigenvalue of $(\Sigma^x)^{-1}$, $\Theta^x$ is the distribution of (Eiras et al., 2021) with diagonal elements $\theta_i^x$, and $r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))$. The four distributions mentioned above can therefore be computed and analyzed incrementally. The visualization of the four distributions calculated from the same original σ (Cohen et al., 2019) is shown in Fig. 1(a). Moreover, $\Sigma^x$ trades off certified accuracy against radius: the eigenvalues $\lambda_i^x$ are positively correlated with the certified accuracy and negatively correlated with the certified radius. A minimal sketch of this construction follows.
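The sketch below builds B from Eq. 10 and Σ = BᵀB, assuming 2-D node coordinates; the paper defines the noise at node level (how the two coordinates of a node share the node-level noise is an implementation choice), and the rescaling of Eq. 11 against the reference distribution is omitted here.

```python
import numpy as np

def build_joint_distribution(z1, gamma=5.0):
    """Eq. 10: b_mn = 1 / (1 + d_mn / gamma); then Sigma = B^T B.

    z1: (n1, 2) node coordinates. B is symmetric with unit diagonal
    (d_mm = 0); Sigma is PSD, and positive definite when B is full rank,
    as assumed in Sec. 4.3.
    """
    d = np.linalg.norm(z1[:, None, :] - z1[None, :, :], axis=-1)  # pairwise distances
    B = 1.0 / (1.0 + d / gamma)
    Sigma = B.T @ B
    return B, Sigma
```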
4.4 EVALUATING CERTIFICATES

In Sec. 4.2, Eq. 6 reveals the certified space, which is however difficult to quantify and compare. Eq. 9 gives a certified and quantifiable form, but it ignores a large portion of the certified space. We therefore propose two further radii to help evaluate the robustness: Eq. 9 certifies the worst case of the input, Eq. 13 certifies all cases, and Eq. 12 reveals the maximum potential for immunity to perturbations. Combining the three radii allows a complete evaluation of the robustness of solvers. By Lemma 1 and Eq. 7, we define a maximum radius of the certified space:

$$\|\delta\|_{max} = \frac{1}{2\sqrt{\lambda_{\min}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (12)$$

where $\lambda_{\min}$ is the minimum eigenvalue of $\Sigma^{-1}$ and $\delta^\top \Sigma^{-1} \delta \ge \lambda_{\min}\, \delta^\top \delta$; $\|\delta\|_{max}$ denotes the maximum ℓ2 norm over all certifiable perturbations. Inspired by (Eiras et al., 2021), we can also measure the certified space in terms of ellipsoidal volume. Using the volume of an ellipsoid, $V = \frac{\pi^{n/2}}{\Gamma(n/2+1)} \prod_{i=1}^n \xi_i$ (Kendall, 2004), where $\xi_i$ is the i-th semi-axis of the ellipsoid, we obtain a proxy radius $\|\delta\|_{volume}$:

$$\|\delta\|_{volume} = r\sqrt{\pi} \Big/ \sqrt[n]{\Gamma(n/2+1)} \cdot \sqrt[2n]{1 \Big/ \prod_{i=1}^n \lambda_i}, \quad (13)$$

where $r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))$ and $\lambda_i$ are the eigenvalues of $\Sigma^{-1}$. When all $\lambda_i$ are equal, the certification result coincides with the traditional method (Cohen et al., 2019). As described in Sec. 4.2, the certified space is geometrically a hyperellipsoid: $\|\delta\|_{lower}$ is its minor axis, $\|\delta\|_{max}$ its major axis, and $\|\delta\|_{volume}$ the radius of a hypersphere with the same volume as the hyperellipsoid. The whole certification process is shown in Algorithm 1.

Algorithm 1 Graph Matching Robustness Certification with SCR-GM.
Input: graph pair (G¹, G²) with node matrices z¹ and z²; set of base classifiers F; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling count k₀.
Output: matching set M and radius set ∆.
1: Obtain the data-dependent σ*_x by adapting an off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting (see Appendix C);
2: Obtain the anisotropic Θˣ by adapting an off-the-shelf ANCER method (Eiras et al., 2021) (see Appendix C);
3: Obtain B and the regularized Σ of Sec. 4.3 according to Eq. 10 and Eq. 11;
4: Sample k₀ noisy copies of the left node matrix: z¹_(1), …, z¹_(k₀) ∼ N(z¹, Σ);
5: Compute the matching result for the nodes in z¹: M = {m_i | argmax_{r_j ∈ z²} Σ_{k=1}^{k₀} 1{f_i(z¹_(k)) = r_j}};
6: Sample k = 10·k₀ noisy copies of z¹: z¹_(1), …, z¹_(k) ∼ N(z¹, Σ);
7: Using M, calculate the one-sided confidence bounds p̲_A and p̄_B for every node in z¹ as described in (Cohen et al., 2019), forming the sets P_A and P_B;
8: for each pair (p̲_A, p̄_B) in (P_A, P_B) do
9:   if p̲_A < 1/2 then
10:    m_i ← ABSTAIN; set ∥δ_i∥_lower = ∥δ_i∥_max = ∥δ_i∥_volume = 0 and append to ∆; // discard nodes with low matching confidence
11:  else
12:    compute ∥δ_i∥_lower, ∥δ_i∥_max and ∥δ_i∥_volume as described in Sec. 4.4 and append to ∆;
13:  end if
14: end for
15: return M, ∆
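Line 12 of Algorithm 1 evaluates the three radii of Sec. 4.4; a minimal numpy sketch (our function name), assuming Σ and the confidence bounds are already computed, might be:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import norm

def three_radii(Sigma, p_a_lower, p_b_upper):
    """Evaluate ||delta||_lower, ||delta||_max and ||delta||_volume (Eqs. 9, 12, 13).

    Assumes p_a_lower > p_b_upper so that the half-gap r is positive.
    """
    r = 0.5 * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))
    lam = np.linalg.eigvalsh(np.linalg.inv(Sigma))   # eigenvalues of Sigma^{-1}
    n = lam.size
    r_lower = r / np.sqrt(lam.max())                 # Eq. 9, minor axis
    r_max = r / np.sqrt(lam.min())                   # Eq. 12, major axis
    # Eq. 13 evaluated in log space so large n does not overflow Gamma(n/2 + 1):
    log_r_vol = (np.log(r) + 0.5 * np.log(np.pi)
                 - gammaln(n / 2.0 + 1.0) / n
                 - 0.5 * np.log(lam).sum() / n)
    return r_lower, r_max, np.exp(log_r_vol)
```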
5 EXPERIMENTS

We evaluate our strategy in three respects: i) for deep graph matching, we compare the three radii of Eq. 9, Eq. 12 and Eq. 13 obtained by different certification methods on four GM networks; ii) for non-learning GM methods, we perform synthetic experiments on the widely used solver RRWM (Cho et al., 2010); iii) we reveal the impact of Σ on the certification results through an ablation study.

5.1 EVALUATION SETTINGS

Following the GM literature (Wang et al., 2021), we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). All experiments are conducted on a CPU (Intel(R) Core(TM) i7-7820X @ 3.60GHz) and a GPU (GTX 2080 Ti). We validate the certified robustness on four representative deep GM models, GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), and on the non-deep method RRWM (Cho et al., 2010). Data processing and parameter settings are the same as in the original papers unless otherwise specified. The compared methods are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). Since the anisotropic method of (Hong & Hong, 2022) is the same as that of (Eiras et al., 2021) and (Hong & Hong, 2022) provides no code, we compare with (Eiras et al., 2021). We follow a procedure as similar as possible to that of (Cohen et al., 2019), where the certified accuracy (CA) is defined as

$$CA(R) = \mathbb{E}_{x,y}\left[\mathbb{1}(\|\delta\| \ge R)\, \mathbb{1}\{g(x) = y\}\right].$$

In our method, g is the smoothed function defined in Eq. 4, x is an input node of the test set, and y is its ground-truth matching node; ∥δ∥ is the certified radius calculated by Eq. 9, Eq. 12 or Eq. 13, R is the scale on the x-axis, and 𝟙 is the indicator function. To quantify the improvement, we use the Average Certified Radius (ACR) of (Zhai et al., 2020):

$$\mathbb{E}_{x,y}\left[\|\delta\|\, \mathbb{1}\{g(x) = y\}\right].$$

We write ℓ2^lower, ℓ2^max and ℓ2^Σ for ∥δ∥_lower, ∥δ∥_max and ∥δ∥_volume in the experiments.
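Given per-node radii and correctness indicators produced by Algorithm 1, the two metrics might be computed as follows (a sketch; the array names are ours, and abstentions are assumed to carry radius 0):

```python
import numpy as np

def certified_accuracy(radii, correct, R):
    """CA(R) = E[ 1(||delta|| >= R) * 1{g(x) = y} ] over the test nodes."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean((radii >= R) & correct))

def average_certified_radius(radii, correct):
    """ACR = E[ ||delta|| * 1{g(x) = y} ] (Zhai et al., 2020)."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean(radii * correct))
```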
5.2 EXPERIMENTS ON DEEP GRAPH MATCHING

We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20}, and calculate the smoothing distributions σ*_x of DDRS and Θˣ of ANCER, with the iteration number of DDRS and ANCER set to 100. We then set the normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCR-GM. Fig. 1(b) shows the certified radius ∥δ∥_lower and the ACR for one sample under our method and the baselines, indicating that the overall certified robustness of our method is superior. We then evaluate our strategy on the four deep GM methods; the relationship between top-1 certified accuracy and the three radii (ℓ2^lower, ℓ2^max and ℓ2^Σ) is plotted in Fig. 2 for the case σ = 5. For the same radius on the x-axis, a higher certified accuracy on the y-axis means better certified robustness. The certified accuracy of our method is sometimes slightly lower than the baselines when ∥δ∥_lower is small; however, when ∥δ∥_lower is large, the accuracy of the baselines decreases significantly or even collapses entirely, while our method maintains a respectable accuracy. When evaluating with ∥δ∥_max and ∥δ∥_volume, the advantages of our method are more obvious. We report the ACR of ∥δ∥_lower for the four RS-type methods (σ = 5) and the four GM methods in Tab. 1, showing that our method achieves better certified robustness over the whole dataset. To show the impact of certified robustness on solver accuracy, Tab. 2 reports the accuracy of the base function, the standard accuracy, and the certified accuracy at different certified radii ∥δ∥_lower for the NGMv2 algorithm on the Pascal VOC dataset. More results are detailed in Appendix D.1.

Table 1: The ACR ∥δ∥_lower of four RS-type methods (σ = 5) and four GM methods on the Pascal VOC dataset.
Method    NGMv2    CIE-H    PCA-GM   GMN
RS        4.189    2.880    2.745    2.037
DDRS      5.936    3.505    3.307    2.741
ANCER     6.300    3.367    3.179    2.517
SCR-GM    7.107*   3.726*   3.455*   2.745*

Table 2: The accuracy of the base function (BA) of NGMv2, and the standard accuracy (SA) and certified accuracy (CA) at different certified radii ∥δ∥_lower using the NGMv2 algorithm (σ = 5) on the Pascal VOC dataset.
Method    BA (%)   SA (%)   CA (%) R=3.5   CA (%) R=7.0   CA (%) R=10.5
SCR-GM    77.3     75.6     63.7           51.5*          36.4*
ANCER     77.3     76.5     64.2           49.1           23.8
DDRS      77.3     77.4*    66.6           50.5           18.2
RS        77.3     76.7     66.9*          0.0            0.0

5.3 EXPERIMENTS ON NON-LEARNING GM METHODS

For non-learning GM, we certify the effectiveness of SCR-GM with simulation experiments on the widely used classic solver RRWM (Cho et al., 2010). We first randomly generate two sets of node matrices and calculate their affinity matrix K using a Gaussian kernel affinity function. We then obtain the robustness results by perturbing node locations and edge features respectively, using the RS and SCR-GM smoothing distributions, with σ = 0.5 and σ = 0.004 respectively in Figs. 3(a) and 3(b). Our method achieves performance similar to the baseline at the same ∥δ∥_lower, and performs better in the other two cases, indicating that the space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment because DDRS and ANCER require gradient optimization of networks and are therefore not applicable to non-learning GM solvers.

5.4 THE EFFECT OF JOINT SMOOTHING DISTRIBUTION

First, we simplify B by retaining only the larger correlation values according to a correlation ratio p and setting the other values to 0. The ratio is set to p ∈ {0%, 20%, 40%, 60%, 80%, 100%}, where 100% represents SCR-GM retaining all correlation coefficients and 0% represents ANCER without correlation coefficients; a sketch of this sparsification is given below. The results in Fig. 4(a) demonstrate the effectiveness of Σ, which yields better certified robustness properties. We then verify the impact of the initial σ on Σ; the results are plotted in Fig. 4(b). The hyperparameter σ determines the scale of Σ, which controls a trade-off between certified robustness and accuracy.
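The sparsification used in this ablation might look as follows (a sketch under our assumption that off-diagonal entries are ranked by magnitude):

```python
import numpy as np

def sparsify_correlations(B, keep_ratio):
    """Zero out all but the strongest off-diagonal entries of B (Sec. 5.4 ablation).

    keep_ratio = 1.0 keeps the full SCR-GM matrix; keep_ratio = 0.0 leaves a
    diagonal matrix, i.e. the uncorrelated ANCER-like setting.
    """
    Bs = B.copy()
    off_diag = ~np.eye(B.shape[0], dtype=bool)
    if keep_ratio <= 0.0:
        Bs[off_diag] = 0.0
        return Bs
    threshold = np.quantile(np.abs(Bs[off_diag]), 1.0 - keep_ratio)
    Bs[off_diag & (np.abs(Bs) < threshold)] = 0.0   # symmetry is preserved
    return Bs
```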
6 CONCLUSION AND OUTLOOK

We have proposed a definition of certified robustness for structural graph matching and designed a method, SCR-GM, that utilizes the correlation between nodes to construct a joint smoothing distribution. We obtain an ℓ2-norm certified space and radius for certification, and for evaluation we propose two additional radii based on eigenvalue properties. Experiments on deep GM networks and classic solvers show that our method achieves a state-of-the-art robustness guarantee.

Potential impact & limitations. The current technique is confined to graphs in Euclidean space (and specifically 2D graphs in the experiments); a more general formulation is QAP, where the perturbation may be added directly to the affinity matrix K. A significant direction is enabling robustness certification for combinatorial solvers in general, of which GM is one case. We hope this work can inspire subsequent research in this promising area, where theoretical results are welcome given the recent intensive empirical studies (Bengio et al., 2021; Yan et al., 2020).

A PROOFS OF THEOREM 1

Here we provide the complete proof of Theorem 1. We first prove Lemma 2, which is inspired by the Neyman-Pearson lemma for Gaussians derived in (Cohen et al., 2019), and introduce Lemma 3, which renders a random vector independent after a linear transformation.

Lemma 2 (Neyman-Pearson for Joint Gaussian Noise) Let $X \sim \mathcal{N}(x, \Sigma)$ and $Y \sim \mathcal{N}(x+\delta, \Sigma)$, and let $h: \mathbb{R}^d \to \{0,1\}$ be any deterministic or random function. Then:
1. If $S = \{k \in \mathbb{R}^d : \delta^\top \Sigma^{-1} k \le \beta\}$ for some β and $\mathbb{P}(h(X)=1) \ge \mathbb{P}(X \in S)$, then $\mathbb{P}(h(Y)=1) \ge \mathbb{P}(Y \in S)$.
2. If $S = \{k \in \mathbb{R}^d : \delta^\top \Sigma^{-1} k \ge \beta\}$ for some β and $\mathbb{P}(h(X)=1) \le \mathbb{P}(X \in S)$, then $\mathbb{P}(h(Y)=1) \le \mathbb{P}(Y \in S)$.

Proof. This lemma is the special case of Neyman-Pearson when X and Y are joint Gaussians with means x and x + δ. It suffices to show that for any β there is some t > 0 for which

$$\{k : \delta^\top \Sigma^{-1} k \le \beta\} = \{k : \mu_Y(k)/\mu_X(k) \le t\}, \qquad \{k : \delta^\top \Sigma^{-1} k \ge \beta\} = \{k : \mu_Y(k)/\mu_X(k) \ge t\}. \quad (14)$$

The likelihood ratio for this choice of X and Y is

$$\frac{\mu_Y(k)}{\mu_X(k)} = \frac{\exp\left(-\frac{1}{2}(k-(x+\delta))^\top \Sigma^{-1}(k-(x+\delta))\right)}{\exp\left(-\frac{1}{2}(k-x)^\top \Sigma^{-1}(k-x)\right)} = \exp\left(\delta^\top \Sigma^{-1} k - \delta^\top \Sigma^{-1} x - \frac{1}{2}\delta^\top \Sigma^{-1} \delta\right) = \exp\left(\delta^\top \Sigma^{-1} k + b\right),$$

where $b = -\delta^\top \Sigma^{-1} x - \frac{1}{2}\delta^\top \Sigma^{-1} \delta$ is a constant. Therefore, given any β, we may take $t = \exp(\beta + b)$ and obtain

$$\delta^\top \Sigma^{-1} k \le \beta \iff \exp(\delta^\top \Sigma^{-1} k + b) \le t, \qquad \delta^\top \Sigma^{-1} k \ge \beta \iff \exp(\delta^\top \Sigma^{-1} k + b) \ge t. \quad (15)$$

Lemma 3 (Joint Gaussian Distribution) Let $X \sim \mathcal{N}(\mu, \Sigma)$ be a random vector with mean $\mu \in \mathbb{R}^n$ and positive definite, real symmetric covariance $\Sigma \in \mathbb{S}^{n \times n}_{++}$. Then there is a full-rank matrix $B \in \mathbb{R}^{n \times n}$ with $B^\top B = \Sigma$ such that $X = BZ + \mu$ with $Z \sim \mathcal{N}(0, I)$.
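Lemma 3 is also what makes sampling from the joint distribution cheap in practice; a minimal sketch, assuming the symmetric full-rank B of Sec. 4.3:

```python
import numpy as np

def sample_joint_noise(B, mu, k, seed=0):
    """Lemma 3 in code: X = B Z + mu with Z ~ N(0, I), so X ~ N(mu, B^T B).

    Since B is symmetric, B B^T = B^T B = Sigma. Returns k samples of shape (k, n).
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((k, mu.shape[0]))
    return Z @ B.T + mu   # each row is one draw of B z + mu
```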
We can now prove Theorem 1; recall:

Theorem 1. Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy

$$\mathbb{P}(f_i(z^1+\varepsilon) = r_A) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}(f_i(z^1+\varepsilon) = r_B), \quad (16)$$

then $g_i(z^1+\delta) = r_A$ for every additive perturbation δ in the certified ℓ2 space

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (17)$$

where $B^\top B = \Sigma$, $B \in \mathbb{R}^{n_1 \times n_1}$ is a full-rank, real symmetric matrix based on the physical relationships in the node matrix $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively.

To show that $g_i(z^1+\delta) = r_A$, it follows from the definition of $g_i$ that we need to show $\mathbb{P}(f_i(z^1+\delta+\varepsilon) = r_A) \ge \mathbb{P}(f_i(z^1+\delta+\varepsilon) = r_B)$. In the derivation, $r_B$ is in fact not only the "runner-up" node but any node different from $r_A$. Define the random variables

$$X := z^1 + \varepsilon \sim \mathcal{N}(z^1, \Sigma), \qquad Y := z^1 + \delta + \varepsilon \sim \mathcal{N}(z^1+\delta, \Sigma).$$

We know that

$$\mathbb{P}(f_i(X) = r_A) \ge \underline{p_A}, \qquad \mathbb{P}(f_i(X) = r_B) \le \overline{p_B}. \quad (18)$$

Our goal is to show that

$$\mathbb{P}(f_i(Y) = r_A) > \mathbb{P}(f_i(Y) = r_B). \quad (19)$$

Following Lemma 2, define the half-spaces

$$A = \left\{k : \delta^\top \Sigma^{-1}(k - z^1) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p_A})\right\}, \qquad B = \left\{k : \delta^\top \Sigma^{-1}(k - z^1) \ge \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(1-\overline{p_B})\right\}.$$

Claim 1 shows that $\mathbb{P}(X \in A) = \underline{p_A}$, so $\mathbb{P}(f_i(X) = r_A) \ge \mathbb{P}(X \in A)$, and applying Lemma 2 with $h(z) := \mathbb{1}[f_i(z) = r_A]$ gives

$$\mathbb{P}(f_i(Y) = r_A) \ge \mathbb{P}(Y \in A). \quad (20)$$

Similarly, by Claim 2 we obtain $\mathbb{P}(f_i(X) = r_B) \le \mathbb{P}(X \in B)$, and applying Lemma 2 with $h(z) := \mathbb{1}[f_i(z) = r_B]$ gives

$$\mathbb{P}(f_i(Y) = r_B) \le \mathbb{P}(Y \in B). \quad (21)$$

Combining Eq. 20 and Eq. 21, Eq. 19 holds whenever

$$\mathbb{P}(f_i(Y) = r_A) \ge \mathbb{P}(Y \in A) > \mathbb{P}(Y \in B) \ge \mathbb{P}(f_i(Y) = r_B). \quad (22)$$

By Claim 3 and Claim 4,

$$\mathbb{P}(Y \in A) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right), \qquad \mathbb{P}(Y \in B) = \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right). \quad (23)$$

Hence $\mathbb{P}(Y \in A) > \mathbb{P}(Y \in B)$ if and only if

$$\frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|} < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right).$$

Substituting $\Sigma = B^\top B$ and using that B is real symmetric ($B^\top = B$), the left-hand side equals $\|\delta^\top B^{-1}\|$, and we finally get

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right),$$

which recovers the theorem statement.

A.1 LINEAR TRANSFORMATION AND DERIVATION

We obtain four equations based on the linear transformation of Lemma 3.

Claim 1. $\mathbb{P}(X \in A) = \underline{p_A}$.
Proof. Recall $A = \{k : \delta^\top \Sigma^{-1}(k - z^1) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p_A})\}$ and $X \sim \mathcal{N}(z^1, \Sigma)$. By Lemma 3, $X - z^1 = B\,\mathcal{N}(0, I)$, so

$$\mathbb{P}(X \in A) = \mathbb{P}\left(\delta^\top \Sigma^{-1} B\,\mathcal{N}(0, I) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p_A})\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p_A})\right) = \Phi\left(\Phi^{-1}(\underline{p_A})\right) = \underline{p_A}.$$

Claim 2. $\mathbb{P}(X \in B) = \overline{p_B}$.
Proof. Recall $B = \{k : \delta^\top \Sigma^{-1}(k - z^1) \ge \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(1-\overline{p_B})\}$. By the same transformation,

$$\mathbb{P}(X \in B) = \mathbb{P}\left(\mathcal{N}(0,1) \ge \Phi^{-1}(1-\overline{p_B})\right) = 1 - \Phi\left(\Phi^{-1}(1-\overline{p_B})\right) = \overline{p_B}.$$

Claim 3. $\mathbb{P}(Y \in A) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right)$.
Proof. Here $Y \sim \mathcal{N}(z^1+\delta, \Sigma)$, i.e. $Y - z^1 = B\,\mathcal{N}(0, I) + \delta$, so

$$\mathbb{P}(Y \in A) = \mathbb{P}\left(\delta^\top \Sigma^{-1} B\,\mathcal{N}(0,I) + \delta^\top \Sigma^{-1}\delta \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p_A})\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p_A}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right).$$

Claim 4. $\mathbb{P}(Y \in B) = \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right)$.
Proof. By the same transformation applied to the set B,

$$\mathbb{P}(Y \in B) = \mathbb{P}\left(\mathcal{N}(0,1) \ge \Phi^{-1}(1-\overline{p_B}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le \Phi^{-1}(\overline{p_B}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right).$$

B ROBUSTNESS GUARANTEE WHEN PERTURBING FEATURES

For GM with input $(G^1, G^2, z^1, z^2)$ and matching prediction X, we now focus on the effect of perturbing node features. Recall that the set F can be expressed as $F = \{f_i \mid f_i : (G^1, G^2, z^1, z^2) \to r_j,\ i \in [n_1],\ j \in [n_2]\}$, where $G^1 = \{V_1, E_1\}$ and $G^2 = \{V_2, E_2\}$, $f_i$ expresses that the i-th node in $z^1$ matches the j-th node $r_j$ in $z^2$. We define a new smoothed network $g_i$ that returns whichever node in $z^2$ is most likely to match the i-th node in $z^1$ when the node features $V_1 \in \mathbb{R}^{d_v \times n_1}$ are perturbed by the joint smoothing noise:

$$g_i = \arg\max_{r_j \in z^2} \mathbb{P}(f_i(V_1+\varepsilon, E_1, V_2, E_2, z^1, z^2) = r_j), \quad \varepsilon \sim \mathcal{N}(0, \Sigma),\ i \in [n_1],\ j \in [n_2]. \quad (24)$$

For notational convenience, we abbreviate $f_i(V_1+\varepsilon, E_1, V_2, E_2, z^1, z^2)$ as $f_i(V_1)$. Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ is returned with probability $p_A = \max_{s_i \in S_i} s_i$, where $S_i$ is the i-th row of S, and the "runner-up" node $r_B$ with probability $p_B = \max_{s_i \in S_i,\, r_B \neq r_A} s_i$. Similarly, we obtain an ℓ2 certified space that guarantees the robustness of graph matching when perturbing features.

Theorem 2 (ℓ2 certified space when perturbing features) Let $f_i(V_1)$ be the node matching function, $g_i$ be defined as in Eq. 24, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy

$$\mathbb{P}(f_i(V_1+\varepsilon) = r_A) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}(f_i(V_1+\varepsilon) = r_B), \quad (25)$$

then $g_i(V_1+\delta) = r_A$ for every additive perturbation δ in the certified ℓ2 space

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (26)$$

where $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively. We set $B^\top B = \Sigma$, where $B \in \mathbb{R}^{(d_v n_1) \times (d_v n_1)}$ is a diagonal matrix. Unlike the correlation matrix of Eq. 10, B is here diagonal, similar to (Eiras et al., 2021); however, B is obtained from structure-based prior knowledge rather than through the optimization process of (Eiras et al., 2021). We divide the node features $V_1$ into $n_1$ parts and add independent, identically distributed noise of the same intensity (denoted $b_m$, $m \in [n_1]$) to each part. The noise intensity of the m-th part is defined as $b_m = \frac{d_m}{d}\sigma$, where d is the total distance between the nodes in $z^1$, $d_m$ is the distance between the m-th node and the other nodes, and the original σ is as described in (Cohen et al., 2019). This setting indicates that outlier points are more resistant to perturbation. Finally, we can derive the same radius forms as in Eqs. 9, 12 and 13.
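A minimal sketch of this per-node noise intensity, under our assumption that d is taken as the sum of the per-node distances $d_m$ (the paper leaves the exact normalization implicit):

```python
import numpy as np

def feature_noise_scales(z1, sigma):
    """Appendix B sketch: b_m = (d_m / d) * sigma for each node-feature block.

    d_m sums the distances from node m to all other nodes; outliers
    (large d_m) receive larger smoothing noise.
    """
    dist = np.linalg.norm(z1[:, None, :] - z1[None, :, :], axis=-1)
    d_m = dist.sum(axis=1)
    return sigma * d_m / d_m.sum()   # one scale per node's feature slice
```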
C EXPERIMENTAL SETUP

In this work, we evaluate our strategy on deep graph matching networks and a classic non-learning solver. The procedures used to obtain the baselines and the evaluation methods are detailed below.

C.1 BASELINES OF CERTIFICATION METHODS

For certification, the baselines we consider are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We adapt the off-the-shelf DDRS and ANCER to obtain the data-dependent distribution σ*_x and the anisotropic distribution Θˣ for graph matching. We add noise to the graphs and use $p_A = \max_{s_i \in S_i} s_i$ and $p_B = \max_{s_i \in S_i,\, r_B \neq r_A} s_i$ to calculate the gap value $\Phi^{-1}(p_A) - \Phi^{-1}(p_B)$ ($S_i$ is the i-th row of S). The optimization equations and parameters remain the same as in the original algorithms. We then use SCR-GM to obtain our joint distribution Σ. Finally, we use the Monte Carlo algorithms of (Cohen et al., 2019) to sample noise from the different distributions and output the three radii derived in Secs. 4.2 and 4.4. The sample numbers n and n₀ are set to 1000 and 100 given the efficiency of graph matching networks; the other parameters are the same as in the original settings (Cohen et al., 2019). We also use a hypothesis test (Hung & Fithian, 2019), as in (Cohen et al., 2019), with α denoting the probability of returning an incorrect matching result. In this paper we set α = 0.001, so the certification holds with high probability (99.9%); since α can be made arbitrarily small, our method is in theory highly reliable.

C.2 EVALUATION ON DEEP GRAPH MATCHING

For deep graph matching, we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). We follow the protocol of (Wang et al., 2021) and filter out poorly annotated images. In the experiments, we use 100 inputs (containing approximately 650 nodes) from 20 categories to certify the matching robustness. We examine our strategy on four representative deep graph matching methods, GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), using the checkpoints of these GM models collected by ThinkMatch (https://github.com/Thinklab-SJTU/ThinkMatch). We directly evaluate the certified robustness of these networks without fine-tuning.

C.3 EVALUATION ON THE NON-LEARNING METHOD

For the non-learning method, we mainly evaluate on simulation data consisting of randomly generated node pairs. In the experiments, we use 100 inputs (each containing 5-10 nodes at random) and evaluate the strategy on the classic solver RRWM (Cho et al., 2010). For evaluation, we extract node features and calculate the affinity matrix K using a Gaussian kernel affinity function. We then perturb node locations and features separately and obtain the certified robustness results.
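The one-sided confidence bounds used throughout Appendix C.1 are standard Clopper-Pearson binomial bounds; a minimal sketch (cf. Cohen et al., 2019, where the upper bound on $p_B$ is typically taken as $1 - \underline{p_A}$):

```python
from scipy.stats import beta

def lower_confidence_bound(successes, n, alpha=0.001):
    """One-sided (1 - alpha) Clopper-Pearson lower bound on a binomial proportion,

    used to turn Monte Carlo counts into the certified pA lower bound.
    """
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, n - successes + 1))
```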
C.4 SIMILARITY MEASURES

In addition to Eq. 10, we also use three other similarity measures to construct B: cosine, Pearson and Dice similarity. For two points $A = (a_1, a_2, \dots, a_n)$ and $B = (b_1, b_2, \dots, b_n)$ in the Euclidean space $\mathbb{R}^n$, cosine similarity is defined as

$$\text{Cosine Similarity}(A, B) = \frac{A \cdot B}{\|A\|_2\, \|B\|_2} = \frac{\sum_{i=1}^n a_i b_i}{\sqrt{\sum_{i=1}^n a_i^2}\, \sqrt{\sum_{i=1}^n b_i^2}} \in [-1, 1]. \quad (27)$$

Pearson similarity is defined as

$$\text{Pearson Similarity}(A, B) = \frac{\mathrm{cov}(A, B)}{\sigma_A\, \sigma_B} = \frac{\sum_{i=1}^n (a_i - \bar{A})(b_i - \bar{B})}{\sqrt{\sum_{i=1}^n (a_i - \bar{A})^2}\, \sqrt{\sum_{i=1}^n (b_i - \bar{B})^2}} \in [-1, 1], \quad (28)$$

where $\bar{A} = \sum_{i=1}^n a_i / n$ and $\bar{B} = \sum_{i=1}^n b_i / n$. Dice similarity is defined as

$$\text{Dice Similarity}(A, B) = \frac{2\sum_{i=1}^n a_i b_i}{\sum_{i=1}^n (a_i^2 + b_i^2)}, \quad (29)$$

where A and B cannot both be the zero point.
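Minimal implementations of Eqs. 27-29 (sketches; inputs are assumed to be 1-D numpy arrays):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))        # Eq. 27

def pearson_similarity(a, b):
    ac, bc = a - a.mean(), b - b.mean()                                  # centered inputs
    return float(ac @ bc / (np.linalg.norm(ac) * np.linalg.norm(bc)))    # Eq. 28

def dice_similarity(a, b):
    return float(2.0 * (a @ b) / (a @ a + b @ b))                        # Eq. 29
```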
D EXPERIMENTAL RESULTS

D.1 CERTIFICATION RESULTS OF DEEP GRAPH MATCHING

D.1.1 PERTURBING NODE LOCATION

For perturbed node locations, we report certified accuracy at the ℓ2^lower, ℓ2^max and ℓ2^Σ radii for each certification method (RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022), ANCER (Eiras et al., 2021) and SCR-GM), each network (GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021)), and each original σ (σ = 1, 5, 10, 15 and 20). Figures 5, 6, 7 and 8 show the certified results on the different graph matching networks. In addition, we certify the effect of the normalization parameter γ; Fig. 12 shows the results for the NGMv2 (Wang et al., 2021) algorithm with σ set to 5. Tab. 3 shows the impact of different choices for constructing B on the certified robustness: B constructed from Euclidean distance or Dice similarity performs better, and the advantage of the Euclidean construction is more obvious at larger radii.

Table 3: The impact of different similarity measures for constructing B on the certified robustness.
Similarity   SA (%)   CA (%) R=3.0   CA (%) R=6.0   CA (%) R=9.0   CA (%) R=12.0
Euclidean    75.2     64.3           53.3*          42.0*          24.1*
Dice         75.6*    65.5*          52.3           41.0           23.5
Cosine       75.6     65.1           52.0           41.0           23.5
Pearson      75.6     65.2           51.8           40.7           23.6

[Figure 5: Top-1 certified accuracy under ℓ2^lower, ℓ2^max and ℓ2^Σ certification by different RS-type methods on NGMv2; the hyperparameter σ trades off certified accuracy against the radii.]

D.1.2 PERTURBING FEATURES

For perturbed node features, we only compare our strategy with RS (Cohen et al., 2019), owing to the excessive inefficiency of DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We set the original σ to σ = 0.25, 0.5, 1, 1.5 and 2; the other settings are as in Appendix D.1.1. Fig. 9 shows the certified results on the different graph matching networks when perturbing node features.

D.2 CERTIFICATION RESULTS OF NON-LEARNING METHODS

In this section, we report certified accuracy at the ℓ2^lower, ℓ2^max and ℓ2^Σ radii for the certification method of (Cohen et al., 2019) and SCR-GM on RRWM (Cho et al., 2010). We set the original σ to σ = 0.3, 0.4 and 0.5 when perturbing node locations, and to σ = 0.001, 0.004 and 0.006 when perturbing features. Figs. 13 and 14 show the certified results on the classic solver.

E CERTIFIED ROBUSTNESS OF THE SOLUTION X'S STRUCTURE

In Sec. 4, we focus on the certified robustness of node matching results in the graph rather than of the whole graph matching result. Our work treats the GM solver as a black box that yields the relaxed matching S, and then uses a post-binarization step to produce the output X and the node matching function set F, whose robustness we certify. However, we can also certify the robustness of the full matrix X, which utilizes more graph structure information and fully accounts for the constraints in Eq. 1.

E.1 DEFINITION

Consider a graph matching problem from the input space to the set of partial permutation matrices $\mathcal{X}$. As discussed above, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. When queried at the input $(G^1, G^2, z^1, z^2)$, the smoothed function g returns whichever matrix X the base function f is most likely to return when $z^1$ is perturbed by noise:

$$g = \arg\max_{X \in \mathcal{X}} \mathbb{P}(f(G^1, G^2, z^1+\varepsilon, z^2) = X), \quad \varepsilon \sim \mathcal{N}(0, \Sigma). \quad (30)$$

The additive noise ε follows a joint Gaussian distribution whose covariance Σ represents the correlations between nodes; Σ is again a hyperparameter of the certified function, controlling the robustness/accuracy trade-off.

E.2 ROBUSTNESS GUARANTEE FOR X

We define a robustness guarantee with confidence c ∈ [0, 1], which ensures that the similarity between the output matrix of g and its ground truth matrix $X_g$ is not less than c. Suppose that when the base function f solves $(G^1, G^2, z^1+\varepsilon, z^2)$, the output matrices whose similarity to $X_g$ is at least c are returned with probability p:

$$\mathcal{X}' = \left\{X_i \,\middle|\, \frac{X_i \cdot X_g}{X_g \cdot X_g} \ge c,\ X_i \in \mathcal{X}\right\}, \qquad p = \mathbb{P}(f(G^1, G^2, z^1+\varepsilon, z^2) \in \mathcal{X}'). \quad (31)$$

Our main result is that the smoothed function g is robust within an ℓ2 certified space; the result also holds if p is replaced by a lower bound $\underline{p}$.

Theorem 3 (ℓ2 certified space for X) Let f be a matching function, g be defined as in Eq. 30, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. Suppose $X_A \in \mathcal{X}'$ and $\underline{p} \in (\frac{1}{2}, 1]$ satisfy

$$\mathbb{P}(f(G^1, G^2, z^1+\varepsilon, z^2) = X_A,\ X_A \in \mathcal{X}') \ge \underline{p}. \quad (32)$$

Then we obtain the certified ℓ2 space for the additive noise δ:

$$\|\delta^\top B^{-1}\| < \Phi^{-1}(\underline{p}), \quad (33)$$

which guarantees $g(G^1, G^2, z^1+\delta, z^2) \in \mathcal{X}'$. As in Eq. 6, $B^\top B = \Sigma$ and $B \in \mathbb{R}^{n_1 \times n_1}$ is a full-rank, real symmetric matrix based on the node correlations in $z^1$; the settings and properties of B and Σ are the same as in Sec. 4.3. Based on Lemma 1 and the certified space in Eq. 33, we can further obtain a certified ℓ2-norm radius

$$\|\delta\|_{lower} < \frac{1}{\sqrt{\lambda_{\max}}}\, \Phi^{-1}(\underline{p}), \quad (34)$$

where $\lambda_{\max}$ is the maximum eigenvalue of $\Sigma^{-1}$, and a maximum radius of the certified space

$$\|\delta\|_{max} = \frac{1}{\sqrt{\lambda_{\min}}}\, \Phi^{-1}(\underline{p}), \quad (35)$$

where $\lambda_{\min}$ is the minimum eigenvalue of $\Sigma^{-1}$. The proxy radius $\|\delta\|_{volume}$ becomes

$$\|\delta\|_{volume} = r\sqrt{\pi} \Big/ \sqrt[n]{\Gamma(n/2+1)} \cdot \sqrt[2n]{1 \Big/ \prod_{i=1}^n \lambda_i}, \quad (36)$$

with $r = \Phi^{-1}(\underline{p})$. The whole robustness certification process is shown in Algorithm 2.

Algorithm 2 Graph Matching Robustness Certification for X with SCR-GM.
Input: graph pair (G¹, G²) with node matrices z¹ and z²; base graph matching function f; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling count k₀; matrix similarity confidence c.
Output: matching result X̂_g and radius set R.
1: Obtain the data-dependent σ*_x by adapting an off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting (see Appendix C);
2: Obtain the anisotropic Θˣ by adapting an off-the-shelf ANCER method (Eiras et al., 2021) (see Appendix C);
3: Obtain B and the regularized Σ of Sec. 4.3 according to Eq. 10 and Eq. 11;
4: Sample k₀ noisy copies of G¹'s node matrix: z¹_(1), …, z¹_(k₀) ∼ N(z¹, Σ);
5: Compute the approximate ground truth matrix X̂_g;
6: Sample k = 10·k₀ noisy copies of G¹'s node matrix: z¹_(1), …, z¹_(k) ∼ N(z¹, Σ), and collect the approximate output set X̂;
7: Calculate the one-sided confidence lower bound p̲ using X̂ and Eq. 31;
8: if p̲ < 1/2 then
9:   X ← ABSTAIN; set ∥δ∥_lower = ∥δ∥_max = ∥δ∥_volume = 0 and append to R; // discard matching results with low confidence
10: else
11:  compute ∥δ∥_lower, ∥δ∥_max and ∥δ∥_volume as described in Sec. 4.4 and append to R;
12: end if
13: return X̂_g, R
In fact, the real $X_g$ and $\mathcal{X}$ are not available during the certification stage, so we estimate them by Monte Carlo sampling. We first sample $f(G^1, G^2, z^1+\varepsilon, z^2)$ n₀ times and sum the sampled permutation matrices to obtain $X_s$; we then apply the Sinkhorn and Hungarian algorithms to approximate $X_g$. During certification, if the approximation $\hat{X}_g$ differs from the ground truth matrix $X_g$, we consider the certification of that sample to have failed. We then sample $f(G^1, G^2, z^1+\varepsilon, z^2)$ n times and collect all occurring matrices into the set $\hat{\mathcal{X}}$ to approximate $\mathcal{X}$; when n is large, $\hat{\mathcal{X}}$ is close to $\mathcal{X}$. A sketch of this projection step is given below.
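The estimation of $\hat{X}_g$ described above, averaging sampled permutation matrices, Sinkhorn-normalizing, then binarizing with the Hungarian algorithm, might look as follows; square matchings are assumed for simplicity, and the smoothing constant and iteration count are our choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approximate_ground_truth(X_samples, n_iters=20):
    """Average sampled matchings, Sinkhorn-normalize, then binarize (Hungarian)."""
    S = np.mean(X_samples, axis=0) + 1e-9      # soft average of permutation matrices
    for _ in range(n_iters):                   # alternate row/column normalization
        S = S / S.sum(axis=1, keepdims=True)
        S = S / S.sum(axis=0, keepdims=True)
    rows, cols = linear_sum_assignment(-S)     # Hungarian step, maximizing the score
    X_hat = np.zeros_like(S)
    X_hat[rows, cols] = 1.0
    return X_hat

def matrix_confidence(X_i, X_g):
    """Similarity <X_i, X_g> / <X_g, X_g> of Eq. 31, compared against c."""
    return float((X_i * X_g).sum() / (X_g * X_g).sum())
```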
E.3 EXPERIMENTS

We evaluate our methods on deep graph matching networks and non-learning solvers. The evaluation settings are the same as in Sec. 5.1.

E.3.1 EXPERIMENTS ON DEEP GRAPH MATCHING

We focus on certifying the robustness of node locations and compare ℓ2^lower, ℓ2^max and ℓ2^Σ certification using the four certification methods on four deep GM algorithms. We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20} and the confidence to c = 0.9, and calculate the smoothing distributions σ*_x of DDRS and Θˣ of ANCER, with the iteration number of DDRS and ANCER set to 100. We then set the normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCR-GM. We evaluate our strategy on the four deep GM methods; the relationship between top-1 certified accuracy and the three radii (ℓ2^lower, ℓ2^max and ℓ2^Σ) is plotted in Fig. 10. For the same radius on the x-axis, a higher certified accuracy on the y-axis indicates better certified robustness. Our method outperforms the baselines on the NGMv2 algorithm, i.e., the certified accuracy is higher at equal radii (ℓ2^lower, ℓ2^max and ℓ2^Σ). On the CIE-H and PCA-GM algorithms, the certified accuracy of our method is sometimes slightly lower than the baselines when the ℓ2^lower radius is small; however, when the ℓ2^lower radius is large, the accuracy of the baselines decreases significantly or even collapses entirely, while our method maintains a respectable accuracy. When evaluating with the ℓ2^max and ℓ2^Σ radii, the certified results of our method are similar to the baselines. On the GMN algorithm, our certification results are slightly worse than ANCER. In short, the certified robustness advantage of our method is more pronounced on algorithms with better matching accuracy.

E.3.2 EXPERIMENTS ON NON-LEARNING GM METHODS

For non-learning GM, we certify the effectiveness of SCR-GM in simulation experiments on the classic non-learning solver RRWM. We first randomly generate two sets of node matrices and calculate their affinity matrix K using a Gaussian kernel affinity function. We then obtain the robustness results by perturbing node locations and edge features respectively, using the RS and SCR-GM smoothing distributions, with σ = 0.1 and σ = 0.0001 respectively in Figs. 11(a) and 11(b). Our method achieves certified accuracy similar to the baseline at the same ∥δ∥_lower, but performs better on ∥δ∥_volume and ∥δ∥_max, indicating that the space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment because DDRS and ANCER require gradient optimization of networks and are not applicable to non-learning GM solvers.

E.4 PROOF

To show that $g(G^1, G^2, z^1+\delta, z^2) \in \mathcal{X}'$, it follows from the definition of g that we need to show

$$\mathbb{P}(f(G^1, G^2, z^1+\varepsilon+\delta, z^2) = X_A,\ X_A \in \mathcal{X}') \ge \mathbb{P}(f(G^1, G^2, z^1+\varepsilon+\delta, z^2) = X_B,\ X_B \notin \mathcal{X}').$$

Define the random variables

$$I := (G^1, G^2, z^1+\varepsilon, z^2), \qquad O := (G^1, G^2, z^1+\varepsilon+\delta, z^2),$$

whose $z^1$ components follow $\mathcal{N}(z^1, \Sigma)$ and $\mathcal{N}(z^1+\delta, \Sigma)$ respectively. We know that

$$\mathbb{P}(f(I) = X_A,\ X_A \in \mathcal{X}') \ge \underline{p}. \quad (37)$$

Our goal is to show that

$$\mathbb{P}(f(O) = X_A,\ X_A \in \mathcal{X}') > \mathbb{P}(f(O) = X_B,\ X_B \notin \mathcal{X}'). \quad (38)$$

Following Lemma 2, define the half-spaces (with k ranging over the perturbed node matrix)

$$A = \left\{k : \delta^\top \Sigma^{-1}(k - z^1) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\right\}, \qquad B = \left\{k : \delta^\top \Sigma^{-1}(k - z^1) \ge \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\right\}.$$

Claim 1 shows that $\mathbb{P}(I \in A) = \underline{p}$, hence $\mathbb{P}(f(I) = X_A, X_A \in \mathcal{X}') \ge \mathbb{P}(I \in A)$, and applying Lemma 2 gives

$$\mathbb{P}(f(O) = X_A,\ X_A \in \mathcal{X}') \ge \mathbb{P}(O \in A). \quad (39)$$

Similarly, we obtain $\mathbb{P}(f(I) = X_B, X_B \notin \mathcal{X}') \le \mathbb{P}(I \in B)$, and applying Lemma 2 gives

$$\mathbb{P}(f(O) = X_B,\ X_B \notin \mathcal{X}') \le \mathbb{P}(O \in B). \quad (40)$$

Combining Eqs. 39 and 40, Eq. 38 holds whenever

$$\mathbb{P}(f(O) = X_A, X_A \in \mathcal{X}') \ge \mathbb{P}(O \in A) > \mathbb{P}(O \in B) \ge \mathbb{P}(f(O) = X_B, X_B \notin \mathcal{X}'). \quad (41)$$

By Claims 3 and 4,

$$\mathbb{P}(O \in A) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right), \qquad \mathbb{P}(O \in B) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right). \quad (42)$$

Hence $\mathbb{P}(O \in A) > \mathbb{P}(O \in B)$ if and only if

$$\frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|} < \Phi^{-1}(\underline{p}).$$

Since $\Sigma = B^\top B$ and B is real symmetric ($B^\top = B$), the left-hand side equals $\|\delta^\top B^{-1}\|$, and we finally get

$$\|\delta^\top B^{-1}\| < \Phi^{-1}(\underline{p}),$$

which recovers the theorem statement.

E.4.1 LINEAR TRANSFORMATION AND DERIVATION

We obtain four equations based on the linear transformation of Lemma 3, writing $I_{z^1}$ and $O_{z^1}$ for the $z^1$ components of I and O.

Claim 1. $\mathbb{P}(I \in A) = \underline{p}$.
Proof. Recall $A = \{k : \delta^\top \Sigma^{-1}(k - z^1) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\}$ and $I_{z^1} \sim \mathcal{N}(z^1, \Sigma)$. By Lemma 3, $I_{z^1} - z^1 = B\,\mathcal{N}(0, I)$, so

$$\mathbb{P}(I \in A) = \mathbb{P}\left(\delta^\top \Sigma^{-1} B\,\mathcal{N}(0, I) \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p})\right) = \Phi\left(\Phi^{-1}(\underline{p})\right) = \underline{p}.$$

Claim 2. $\mathbb{P}(I \in B) = 1 - \underline{p}$.
Proof. By the same transformation,

$$\mathbb{P}(I \in B) = \mathbb{P}\left(\mathcal{N}(0,1) \ge \Phi^{-1}(\underline{p})\right) = 1 - \Phi\left(\Phi^{-1}(\underline{p})\right) = 1 - \underline{p}.$$

Claim 3. $\mathbb{P}(O \in A) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right)$.
Proof. Since $O_{z^1} \sim \mathcal{N}(z^1+\delta, \Sigma)$, i.e. $O_{z^1} - z^1 = B\,\mathcal{N}(0, I) + \delta$,

$$\mathbb{P}(O \in A) = \mathbb{P}\left(\delta^\top \Sigma^{-1} B\,\mathcal{N}(0, I) + \delta^\top \Sigma^{-1}\delta \le \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right).$$

Claim 4. $\mathbb{P}(O \in B) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right)$.
Proof.
Recall $B = \{k : \delta^\top \Sigma^{-1}(k - z^1) \ge \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\}$ and $O_{z^1} \sim \mathcal{N}(z^1+\delta, \Sigma)$, i.e. $O_{z^1} - z^1 = B\,\mathcal{N}(0, I) + \delta$. By Lemma 3,

$$\mathbb{P}(O \in B) = \mathbb{P}\left(\delta^\top \Sigma^{-1} B\,\mathcal{N}(0, I) + \delta^\top \Sigma^{-1}\delta \ge \|\delta^\top \Sigma^{-1} B\|\, \Phi^{-1}(\underline{p})\right) = \mathbb{P}\left(\mathcal{N}(0,1) \ge \Phi^{-1}(\underline{p}) - \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \mathbb{P}\left(\mathcal{N}(0,1) \le -\Phi^{-1}(\underline{p}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top \Sigma^{-1}\delta}{\|\delta^\top \Sigma^{-1} B\|}\right).$$
1. What is the focus and contribution of the paper regarding probabilistic robustness in graph matching?
2. What are the strengths and weaknesses of the proposed approach for building a probabilistically robust classifier?
3. Do you have any concerns or questions about the motivation and problem setting of the paper?
4. How does randomized smoothing impact inference speed and accuracy in the context of graph matching?
5. Are there any tradeoffs between certified robustness and accuracy in graph matching, and how does the author's approach affect these factors?
6. What are the limitations of the paper regarding its experimental results and comparisons with other works?
7. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper considers the problem of the probabilistic robustness of graph-matching (GM) algorithms against norm-based adversarial perturbations. The authors develop new algorithms for randomized smoothing for GM by exploiting the structure of the problem. They start by decomposing the robust matching problem defined over the whole graph into subproblems defined over the individual nodes. Next, the noise for building the smoothed classifier is sampled from a joint Gaussian distribution which is set up in a way to capture correlations between the different nodes in the graph. Experimental evaluation is performed on the popular Pascal-VOC dataset showing the effectiveness of the proposed method.

Strengths And Weaknesses

Strengths:
- The technical contributions for building a probabilistically robust classifier for GM presented here are novel and solid. I wonder if structured distributions can also be used for improving the certified radius of vision classifiers.
- The experimental results show that the proposed method can certify larger radii than standard baselines.

Weaknesses:
- The problem and the approach are not well-motivated. While the robustness of vision models is necessary when deployed in safety-critical scenarios such as self-driving cars, it is not clear what security implications the non-robustness of graph matching can cause. The authors do not motivate the need for solving their problem but instead choose to start describing their approach in the introduction.
- Randomized smoothing makes inference slow due to the need to sample multiple points, and also makes its prediction non-deterministic. Why is this a good approach for proving certified robustness in the GM setting?
- The authors do not describe the impact of certified robustness on the accuracy of the classifier. Is there a tradeoff here?
- For vision models, humans are the oracle and the goal is to achieve the same level of robustness as them. I do not see such an oracle for GM, as even non-learned models are not robust. What level of accuracy and robustness is good in this setting?

Clarity, Quality, Novelty And Reproducibility

Clarity and quality: Besides the issues with clarity mentioned above, I have the following comments for the authors:
1. Randomized smoothing for neural networks was introduced by https://arxiv.org/abs/1802.03471 and not Cohen et al. The authors do not cite this.
2. Σ is an important hyperparameter controlling the robustness/accuracy tradeoff, but the authors do not elaborate on the principles behind its construction (e.g., what properties it should satisfy). Eq. (10) seems a bit random to me; why would any other equation not work? The authors should consider performing a study on the impact of different choices for constructing Σ on the certified robustness.
3. In the equation for calculating p_B, the authors use s_i ≠ p_A, which is semantically not correct.
4. In Figure 2, how are the y-axis values computed? Are they averaged over the whole dataset?
5. Would the radii be larger if one considered the robustness of three nodes at a time instead of a single node as considered here? In general, considering the robustness of the full matrix should reveal more correlations, right? Which should yield larger radii. Did you choose single-node decompositions because that makes the problem tractable?
6. The authors do not discuss the impact of sampling on the inference speed.

Novelty: The technical ideas are sufficiently novel.
Reproducibility: Code is provided to ensure reproducibility, although I did not try to run it!
ICLR
Title Certified Robustness on Structural Graph Matching Abstract The vulnerability of graph matching (GM) to adversarial attacks has received increasing attention from emerging empirical studies, while the certified robustness of GM has not been explored. Inspired by the technique of randomized smoothing, in this paper, for the first time to our best knowledge, the certified robustness of GM is defined and a new certification strategy is designed, called Structure-based Certified Robustness of Graph Matching (SCR-GM). Structural prior information of nodes is used to construct a joint smoothing distribution matrix with physical significance, which certifies a wider range than those obtained by previous iterative optimization methods. Furthermore, we propose a certified space that can be used to derive a strictly certified radius and two extra radii for evaluation. Experimental results on GM datasets reveal that our strategy achieves state-of-the-art ℓ2 certified accuracy and regions. Source code will be made publicly available. 1 INTRODUCTION As a well-known NP-hard problem in its general form (Yan et al., 2016) with wide applications e.g. in computer vision and pattern recognition, graph matching (GM) refers to establishing correspondences among two (Cho et al., 2010) or multiple graphs (Jiang et al., 2021). Consider two input graphs $G^1 = \{V_1, E_1\}$ and $G^2 = \{V_2, E_2\}$ with two sets of annotated nodes $z^1 \in \mathbb{R}^{n_1 \times 2}$ and $z^2 \in \mathbb{R}^{n_2 \times 2}$ (assumed to lie in Euclidean space in this paper). Here, $V_1 \in \mathbb{R}^{d_v \times n_1}$ and $E_1 \in \mathbb{R}^{d_e \times m_1}$ are the feature matrices of the $n_1$ nodes and $m_1$ edges (likewise for $V_2$ and $E_2$). The similarities between nodes and edges are collected into a global affinity matrix $K \in \mathbb{R}^{n_1 n_2 \times n_1 n_2}$, whose diagonal and off-diagonal elements store the node-to-node and edge-to-edge affinities. GM aims to maximize the overall affinity score J of the matched nodes and edges (Leordeanu & Hebert, 2005) in the form of a quadratic assignment problem (QAP) (Loiola et al., 2007):

$$\max_X J(X) = \mathrm{vec}(X)^\top K\, \mathrm{vec}(X), \quad \text{s.t. } X \in \{0,1\}^{n_1 \times n_2},\ X\mathbf{1}_{n_2} = \mathbf{1}_{n_1},\ X^\top \mathbf{1}_{n_1} \le \mathbf{1}_{n_2}, \quad (1)$$

where vec(X) denotes the column-wise vectorization of the matching solution $X \in \{0,1\}^{n_1 \times n_2}$, which can be a partial permutation matrix when $n_1 < n_2$. One common approach is to relax X's raw binary constraint into a continuous one (between [0, 1]), especially in the form of a (partial) doubly-stochastic matrix $S \in [0,1]^{n_1 \times n_2}$ whose rows/columns sum to 1 (or at most 1 in the partial case). The final X can be obtained by the Hungarian algorithm (Burkard & Dell'Amico, 2009): X = Hung(S). Eq. 1 can also directly incorporate deep networks to obtain a learned affinity matrix K by learning from the raw attributes of the graphs, e.g. CNNs for the images from which the visual graphs are extracted, as well as learning the structure via graph neural networks (GNNs) (Wang et al., 2019): K = NN(G¹, G²). Studies on the robustness of machine learning models have attracted wide attention, while the robustness of combinatorial solvers is an emerging and immature topic (Geisler et al., 2021; Lu et al., 2021). Under the deep GM paradigm, Ren et al. (2022) reveal that combinatorial GM algorithms can also be sensitive to (additive) noise perturbations, not only in appearance but also in structure, similar to node classification models (Dai et al., 2018; Sun et al., 2018), and an empirical defense via an appearance-aware regularizer is proposed. So far, there is still no principled certified defense that provides theoretical robustness guarantees for GM (let alone other combinatorial problems).
In fact, existing certified robustness mechanisms (including randomized smoothing) in the graph domain (Rong et al., 2019; Bojchevski et al., 2020; Zügner & Günnemann, 2020; Jia et al., 2020) are confined to unconstrained node or graph-level classification/prediction within a single graph, which cannot be readily adopted for solving the cross-graph and combinatorial problems with structured output like the permutation matrix in GM. Certifiable robustness studies solvers whose prediction at any point x is verifiably constant within some set around x (Wong & Kolter, 2018). As a recent promising approach to achieve certified defense, randomized smoothing (RS) (Lecuyer et al., 2019; Cohen et al., 2019) provides a general robust guarantee applicable to large-scale neural networks against arbitrary attacks. Given an input x and a base classifier, randomized smoothing constructs a ‘smoothed classifier’ which is certifiable within the region characterized by x and the smoothing distribution D. RS has been used in certifying different models, e.g., image classification (Yang et al., 2020) and object detection in vision (Chiang et al., 2020). As an initiative for applying RS to GM1, in this paper we mainly consider two major challenges to solve. C1: varying-size of input graphs. It is not suitable to certify graphs with different sizes by using an identical smoothing distribution. C2: dependency of nodes in graph. The graph structure as a whole carries important information for certification. For the first challenge, we could refer to data-dependent certified robustness methods on image classification task. Some data-dependent methods (Alfarra et al., 2022; Eiras et al., 2021; Hong & Hong, 2022; Labarbarie et al., 2022) are proposed recently to vary and optimize the smoothing distributions D for larger certification region. Therefore, these methods can also be used to construct varying smoothing distributions for graphs with varying sizes. For the second challenge, we expect smoothing distributions constructing correlations between nodes in a graph, which is lacking for current randomized smoothing. Datadependent methods consider little on the heterogeneity and structure of inputs. For example, Alfarra et al. (2022) treat all pixels in one image equally, Eiras et al. (2021) treat pixels differently but cannot reveal their correlation. Thus none of them can overcome the second challenge. In this paper, we aim to solve certified robustness of GM, by analyzing the individual matching robustness of each node, instead of the whole variation of the output matching matrix X in Eq. 1. In particular, we study the node classification task when converting the relaxed solution S into the final matching X (see Eq. 1 and the discussion therein), as such the RS-type certification phase can be naturally introduced during the classification stage. Specifically, we propose the Structure-based Certified Robustness of Graph Matching (SCR-GM) which adopts joint Gaussian distribution instead of independent homogeneous distribution to construct the smoothing solvers. As adversarial attacks tend to perturb the strongly correlated nodes at the same time, the additive noise sampled from joint distribution with structural information and physical meaning can reveal this correlation. According to our theoretical analysis, we obtain the robustness guarantee on GM which describes a certified ℓ2-norm space ant its lower bound radius. In addition, we propose another two radii to help evaluate the robustness more comprehensively. 
We evaluate our strategy on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and on a simulation dataset with random node sets. Experimental results reveal that our strategy outperforms previous works (Cohen et al., 2019; Alfarra et al., 2022; Eiras et al., 2021) on structural GM in terms of $\ell_2$ certified accuracy and regions. Our contributions are as follows:

1) We propose a general framework for incorporating existing RS-based techniques to certify graph matching solvers, as long as (which is often the case for both learning-based and classic solvers) the solver involves a post-binarization step that converts the relaxed matching S (by an arbitrary relaxed GM solver) to node matching.
2) Based on our proposed framework, we present the first definition, to our best knowledge (see Eq. 5), of certified robustness for a graph matching solver.
3) We propose a certification method dubbed Structure-based Certified Robustness of GM (SCR-GM) (see Sec. 4.3). It uses jointly distributed noise to model dependent node matching certification.
4) A certified space and a lower-bound radius are derived to guarantee robustness of graph matching. Two extra radii are also devised for a more complete evaluation of robustness, which complement potentially safe regions and largest feasible perturbations.

Footnote 1: Another challenge is how to better handle the constraints on X, which relates to extending the certification of the specific GM problem to other combinatorial solvers; we leave this for future work.

2 RELATED WORK

We discuss works on certified robustness related to randomized smoothing and on robustness of GM.

Certified Robustness related to Randomized Smoothing. Lecuyer et al. (2019) first propose randomized smoothing as a certified adversarial defense, and use it to train the first certifiably robust classifier for ImageNet. However, its guarantees are loose; Cohen et al. (2019) then show that adding Gaussian noise to classifiers enjoys a strict $\ell_2$ certification radius, with follow-ups presenting new RS-type techniques, such as optimal perturbations at different norms and certified robustness definitions for different tasks. Alfarra et al. (2022) show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification region. Meanwhile, Eiras et al. (2021) extend isotropic smoothing distributions to generalized anisotropic counterparts. Hong & Hong (2022) adopt the same anisotropic definition and further design a noise generator to efficiently fine-tune the distributions. A recent work (Labarbarie et al., 2022) that relies on information-geometry techniques manages to prove larger regions than previous methods. However, all previous smoothing distributions D discard the favorable prior knowledge, which in GM mainly refers to the node locations and graph structure. Moreover, all of them at most certify a single image or graph and do not consider the combinatorial nature of the prediction as in GM.

Robustness of Graph Matching. Approximate GM solvers have been developed over the decades, from traditional learning-free methods (Emmert-Streib et al., 2016) to learning-based ones (Yan et al., 2020). The seminal work (Zanfir & Sminchisescu, 2018) proposes a deep neural network based pipeline for visual GM, in which the visual appearance features are learned via CNN, with subsequent variants (Wang et al., 2019; Rolínek et al., 2020), among which a major improvement is to explore the structural information using different techniques, e.g.
GNNs, rather than only appearance features for node/edge attributes as done in (Zanfir & Sminchisescu, 2018). Our work treats the GM solver as a black box regardless of whether it is learning-based or not, as long as it involves a continuous relaxation to obtain the intermediate doubly-stochastic matrix.

There is also an emerging line of research on adversarial attack and defense for (deep) GM. The earlier work (Yu et al., 2019b) proposes a robust graph matching (RGM) model to improve the robustness against perturbations, e.g. distortion, rotation, outliers and noise. Zhang et al. (2020) devise an adversarial attack model for deep GM networks, which uses kernel density estimation to construct dense regions such that neighboring nodes are indistinguishable. Ren et al. (2021) devise two specific topology attacks in GM, inter-graph dispersion and intra-graph combination, and propose a resilient defense model. Ren et al. (2022) design an attack perturbing input images and their hidden graphs together for deep (visual) GM, and further propose appearance-aware regularizers to enlarge the disparity among similar keypoints for defense. However, the above defense methods are all heuristic and lack robustness certification in the face of other unseen attacks.

3 PRELIMINARIES ON RANDOMIZED SMOOTHING

The original RS (Cohen et al., 2019) can transform an arbitrary base classifier f into a smoothed classifier g that is certifiably robust under the $\ell_2$ norm. For any input x, the smoothed classifier g returns the most probable prediction of f under the random input $\mathcal{N}(x,\sigma^2 I)$, defined by:

$$g(x) = \arg\max_{c\in\mathcal{Y}} \mathbb{P}(f(x+\varepsilon)=c), \quad (2)$$

where $\varepsilon\sim\mathcal{N}(0,\sigma^2 I)$ is isotropic Gaussian noise perturbing the input x. The certified radius within which the output is unchanged, i.e. $g(x+\delta)=c_A$, measuring the certified robustness, is:

$$R = \|\delta\|_2 < \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (3)$$

where the most probable class $c_A$ is returned with probability $p_A$ and the 'runner-up' class is returned with probability $p_B$; $\underline{p_A}$ and $\overline{p_B}$ are a lower bound on $p_A$ and an upper bound on $p_B$, respectively, and $\Phi^{-1}$ is the inverse of the standard Gaussian cumulative distribution function. The smoothed classifier g is robust around x within the $\ell_2$ radius in Eq. 3.
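For concreteness, a minimal sketch (ours, not the authors') of evaluating the radius in Eq. 3 from the bounded probabilities, using SciPy's inverse Gaussian CDF:

```python
from scipy.stats import norm

def rs_radius(sigma: float, pA_lower: float, pB_upper: float) -> float:
    """Certified l2 radius of Cohen et al. (2019), Eq. 3:
    R = sigma/2 * (Phi^-1(pA_lower) - Phi^-1(pB_upper))."""
    return 0.5 * sigma * (norm.ppf(pA_lower) - norm.ppf(pB_upper))

print(rs_radius(sigma=0.5, pA_lower=0.9, pB_upper=0.1))  # ~0.641
```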
To enhance the certification, Alfarra et al. (2022) and Eiras et al. (2021) propose input-dependent isotropic and anisotropic distributions, respectively, to maximize the certified region. However, none of them can explicitly encode the prior information of the inputs (e.g. the graph topology in GM), which means their distributions are randomly initialized. Differently, we propose a correlation matrix to reveal the structural information in graphs, and in turn construct a joint Gaussian distribution to replace the single Gaussian distribution, which not only makes the initial distribution physically meaningful, but also eliminates the optimization process of finding the largest certified region.

4 METHODOLOGY

We first define the smoothed GM solver (either a neural network or a traditional solver) and propose a robustness guarantee. We then devise a new certification strategy, SCR-GM, using a physically meaningful joint smoothing distribution. We also give two new radii to aid evaluating robustness.

4.1 PROBLEM FORMULATION

For pairwise GM with input $(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)$, we mainly focus on the effect of perturbing the two sets of annotated nodes $z^1\in\mathbb{R}^{n_1\times 2}$ and $z^2\in\mathbb{R}^{n_2\times 2}$. For visual GM (Zanfir & Sminchisescu, 2018; Ren et al., 2022), as widely considered in the literature, $z^1$ and $z^2$ are node coordinates obtained by human annotation or keypoint detectors. During certification for perturbed nodes, we consider the node coordinates as the input while keeping the node/edge attributes unchanged. The robustness guarantees for perturbing features are given in Appendix B.

As discussed in Sec. 3, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. In this paper, we technically convert a whole matching problem into a set F of per-node classifications based on the intermediate matrix S. The set F can be expressed as:

$$F = \{f_i \mid f_i : (\mathcal{G}_1,\mathcal{G}_2,z^1,z^2) \to r_j,\ i \in [n_1],\ j \in [n_2]\},$$

where $f_i$ matches the i-th node in $z^1$ to a node in $z^2$, and $r_j$ denotes the j-th node in $z^2$. Such a conversion allows us to certify the matching robustness of a single node, avoiding an imprecise certification of the entire matching matrix. The smoothed network $g_i$ returns whichever node in $z^2$ is most likely to match the node in $z^1$ when the input is perturbed by joint smoothing noise:

$$g_i = \arg\max_{r_j \in z^2} \mathbb{P}\big(f_i(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2) = r_j\big), \quad \varepsilon \sim \mathcal{N}(0,\Sigma),\ i \in [n_1],\ j \in [n_2]. \quad (4)$$

For convenience, we simplify $f_i(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)$ to $f_i(z^1)$ and derive the results by perturbing $z^1$ only, as this is equivalent to robustness certification under joint perturbation of $z^1$ and $z^2$. Furthermore, we propose a method defining the smoothed function for certifying the whole X, as introduced in Appendix E. The noise $\varepsilon$ follows a joint Gaussian distribution whose covariance represents the correlations between nodes. In addition, $\Sigma$ is a hyperparameter of the certified function which controls a robustness/accuracy trade-off and will be detailed in Sec. 4.3. Note that for robustness certification, we only consider nodes for which the argmax in Eq. 4 has a unique solution.
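Below is a minimal Monte Carlo sketch of the smoothed per-node prediction in Eq. 4. It assumes a black-box `solver` returning the relaxed matrix S, and assumes (our simplification) that one joint-Gaussian sample is drawn independently for each coordinate column of $z^1$; both names and the sampling convention are illustrative, not the authors' implementation.

```python
import numpy as np

def smoothed_node_matches(solver, z1, Sigma, k0=100, rng=None):
    """Estimate g_i (Eq. 4): for each node i in z1, return the node j in z2
    chosen most often when z1 is perturbed by eps ~ N(0, Sigma).
    solver(z1_noisy) is a black-box GM solver returning the relaxed S (n1 x n2)."""
    rng = np.random.default_rng() if rng is None else rng
    n1 = z1.shape[0]
    counts = None
    for _ in range(k0):
        # One joint-Gaussian sample per coordinate column of z1 (shape n1 x 2).
        eps = rng.multivariate_normal(np.zeros(n1), Sigma, size=2).T
        S = solver(z1 + eps)
        if counts is None:
            counts = np.zeros_like(S)
        counts[np.arange(n1), S.argmax(axis=1)] += 1  # per-node argmax vote
    return counts.argmax(axis=1), counts
```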
4.2 ROBUSTNESS GUARANTEE

Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ in $z^2$ is returned with probability $p_A=\max_{s_i\in S_i} s_i$, where $S_i$ is the i-th row of S. Similarly, the probability of the "runner-up" node $r_B$ in $z^2$ is denoted $p_B=\max_{s_i\in S_i,\,r_B\neq r_A} s_i$. We adopt an $\ell_2$ certified space to guarantee the robustness of graph matching.

Theorem 1 ($\ell_2$ certified space). Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon\sim\mathcal{N}(0,\Sigma)$. If $\underline{p_A}\in[0,1]$ and $\overline{p_B}\in[0,1]$ satisfy:

$$\mathbb{P}\big(f_i(z^1+\varepsilon)=r_A\big) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}\big(f_i(z^1+\varepsilon)=r_B\big), \quad (5)$$

then for $g_i(z^1+\delta)=r_A$, we obtain the certified $\ell_2$ space for the additive noise $\delta$:

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (6)$$

where $B^\top B=\Sigma$, $B\in\mathbb{R}^{n_1\times n_1}$ is a full-rank, real symmetric matrix based on the node correlations in the node matrix $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$. The detailed settings and properties of B and $\Sigma$ are described in Section 4.3. The complete proof of Theorem 1 is presented in Appendix A.

Lemma 1 (Eigenvalue comparison). For a real symmetric matrix $A\in\mathbb{R}^{n\times n}$ with maximum and minimum eigenvalues $\lambda_{max}$ and $\lambda_{min}$, it holds that $\lambda_{min}X^\top X \le X^\top A X \le \lambda_{max}X^\top X$ for all $X\in\mathbb{R}^n$.

Based on Lemma 1 and the certified space in Eq. 6, we can further obtain a certified $\ell_2$-norm radius:

$$\|\delta^\top B^{-1}\|^2 = \delta^\top\Sigma^{-1}\delta, \quad (7)$$
$$\delta^\top\Sigma^{-1}\delta \le \lambda_{max}\,\delta^\top\delta, \quad (8)$$
$$\|\delta\|_{lower} < \frac{1}{2\sqrt{\lambda_{max}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (9)$$

where $\lambda_{max}$ is the maximum eigenvalue of $\Sigma^{-1}$. We let the upper bound of $\delta^\top\Sigma^{-1}\delta$ satisfy the constraint of Eq. 6, so that a lower bound on $\|\delta\|$ can be obtained as $\|\delta\|_{lower}$. Eq. 6 is an exact constraint on the perturbation space, which is a hyperellipsoid, while Eq. 9 describes the minor axis of the hyperellipsoid. Both are general expressions for arbitrary GM solvers and joint Gaussian smoothing distributions, as will be shown in Sec. 4.3.

4.3 JOINT SMOOTHING DISTRIBUTION

In contrast to isotropic (Alfarra et al., 2022) and anisotropic (Eiras et al., 2021) distributions, SCR-GM reflects the structure of the graph while achieving efficiency by avoiding gradient optimization. We first construct the correlation matrix B based on the similarity between nodes in the matrix $z^1$. B is a full-rank, real symmetric matrix whose element $b_{mn}$ denotes the correlation between the m-th and n-th nodes in $z^1$. We define a similarity using the Euclidean distance as follows:

$$b_{mn} = \frac{1}{1 + d_{mn}/\gamma}, \quad (10)$$

where $d_{mn}$ is the Euclidean distance between the m-th and n-th nodes, and $\gamma$ is the normalization coefficient which controls the degree of correlation. We also use three other similarity measures to construct B, namely cosine similarity, Pearson similarity and Dice similarity, as in Appendix C. Nodes in close proximity are more susceptible to perturbations of similar intensity, while perturbations added to nodes at larger distances are almost independent. The diagonal elements of B indicate the intensity of perturbation at each node, while the off-diagonal elements reveal the correlations between nodes. Then, via $B^\top B=\Sigma$, we obtain the smoothing distribution $\Sigma$ used to sample the additive noise for the input. $\Sigma$ is a positive definite matrix, which guarantees the feasibility of the radii derived in this work.

In contrast, the distribution in (Eiras et al., 2021) is a diagonal matrix with different diagonal elements, which cannot represent the correlations between nodes; and the distribution in (Alfarra et al., 2022) is a diagonal matrix with identical diagonal elements, which treats all nodes without distinction. In fact, when inter-node correlations and differences of noise intensity are neglected, $\Sigma$ degenerates into the above two distributions. Therefore, $\Sigma$ is a generalized setting that allows all distributions to be compared in the same framework. For comparison, we need to keep $\Sigma$ at the same order of magnitude as the previous three distributions (Cohen et al., 2019; Eiras et al., 2021; Alfarra et al., 2022). We take a strategy similar to that in (Eiras et al., 2021) to ensure that:

$$\min_i \frac{1}{\lambda_i^x}\, r(x,\Sigma^x) \ge \min_i \theta_i^x\, r(x,\Theta^x), \quad (11)$$

where $\lambda_i^x$ is an eigenvalue of $(\Sigma^x)^{-1}$, $\Theta^x$ is the distribution in (Eiras et al., 2021), $\theta_i^x$ is a diagonal element of $\Theta^x$, and $r=\frac{1}{2}(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B}))$. Therefore, the four distributions mentioned above can be calculated and analyzed incrementally. A visualization of the four distributions calculated from the same original $\sigma$ (Cohen et al., 2019) is shown in Fig. 1(a). Moreover, $\Sigma^x$ trades off certified accuracy and radius: the eigenvalue $\lambda_i^x$ is positively correlated with the certified accuracy and negatively correlated with the certified radius.
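A minimal sketch of the construction above: B from pairwise Euclidean distances via Eq. 10, then $\Sigma=B^\top B$. The `sigma` rescaling is our stand-in assumption for the magnitude-matching regularization of Eq. 11, which in the paper is computed against the baseline distributions.

```python
import numpy as np

def joint_smoothing_cov(z1, gamma=5.0, sigma=1.0):
    """Eq. 10: b_mn = 1 / (1 + d_mn / gamma) from pairwise node distances,
    then Sigma = B^T B (B is symmetric by construction)."""
    d = np.linalg.norm(z1[:, None, :] - z1[None, :, :], axis=-1)  # (n1, n1) distances
    B = sigma / (1.0 + d / gamma)   # symmetric; full rank for generic node layouts
    Sigma = B.T @ B                 # positive (semi-)definite by construction
    return B, Sigma
```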
4.4 EVALUATING CERTIFICATES

In Sec. 4.2, Eq. 6 reveals the certified space, which is however difficult to quantify and compare. Though Eq. 9 gives a certified and quantifiable form, it ignores a large portion of the certified space. We therefore propose two more effective radii to help evaluate the robustness. Eq. 9 is the certification for the worst case of the input, Eq. 13 is the certification for all cases, and Eq. 12 reveals the maximum potential for immunity to perturbations. Combining the three radii allows a complete evaluation of the robustness of solvers.

By Lemma 1 and Eq. 7, we define a maximum radius of the certified space:

$$\|\delta\|_{max} = \frac{1}{2\sqrt{\lambda_{min}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (12)$$

where $\lambda_{min}$ is the minimum eigenvalue of $\Sigma^{-1}$ and $\delta^\top\Sigma^{-1}\delta \ge \lambda_{min}\,\delta^\top\delta$. $\|\delta\|_{max}$ denotes the maximum $\ell_2$-norm value over all possible perturbations. Inspired by (Eiras et al., 2021), we can also measure the certified space in terms of ellipsoidal volume. Using the formula for the volume of an ellipsoid, $V(R)=r^n\sqrt{\pi^n}/\Gamma(n/2+1)\prod_{i=1}^n \xi_i$ (Kendall, 2004), where $\xi_i$ is the i-th radius of the ellipsoid, we get a proxy radius $\|\delta\|_{volume}$:

$$\|\delta\|_{volume} = r\,\frac{\sqrt{\pi}}{\sqrt[n]{\Gamma(n/2+1)}}\,\sqrt[2n]{1\Big/\prod_{i=1}^n \lambda_i}, \quad (13)$$

where $r=\frac{1}{2}(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B}))$ and $\lambda_i$ is the i-th eigenvalue of $\Sigma^{-1}$. When all $\lambda_i$ are the same, the certification result coincides with the traditional method (Cohen et al., 2019). As described in Section 4.2, the certified space is geometrically a hyperellipsoid: $\|\delta\|_{lower}$ represents the minor axis, $\|\delta\|_{max}$ the major axis, and $\|\delta\|_{volume}$ is the proxy radius of a hypersphere with the same volume as the hyperellipsoid. The whole certification process is shown in Algorithm 1.

Algorithm 1: Graph Matching Robustness Certification with SCR-GM
Input: graph pair $(\mathcal{G}_1,\mathcal{G}_2)$ with node matrices $z^1$ and $z^2$; set of base classifiers F; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original $\sigma$; normalization coefficient $\gamma$; sampling times $k_0$.
Output: matching set M and radius set $\Delta$.
1: Obtain the data-dependent $\sigma^*_x$ by adapting (see details in Appendix C) the off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting;
2: Obtain the anisotropic $\Theta^x$ by adapting (see details in Appendix C) the off-the-shelf ANCER method (Eiras et al., 2021);
3: Obtain B and the regularized $\Sigma$ described in Sec. 4.3 according to Eqs. 10 and 11;
4: Sample $k_0$ noisy samples for the left node matrix: $z^1_1{}',\dots,z^1_{k_0}{}' \sim \mathcal{N}(z^1,\Sigma)$;
5: Compute the matching result for the nodes in $z^1$: $M=\{m_i \mid \arg\max_{r_j\in z^2}\sum_{k=1}^{k_0}\mathbf{1}\{f_i(z^1_k{}')=r_j\}\}$;
6: Sample $k$ ($k=10k_0$) noisy samples for $\mathcal{G}_1$'s node matrix: $z^1_1{}',\dots,z^1_k{}' \sim \mathcal{N}(z^1,\Sigma)$;
7: Calculate the one-sided confidence bounds $\underline{p_A}$ and $\overline{p_B}$ using M as described in (Cohen et al., 2019) for every node in $z^1$, obtaining sets $P_A$ and $P_B$;
8: for $\underline{p_A}$ and $\overline{p_B}$ in $P_A$ and $P_B$ do
9:   if $\underline{p_A} < \frac{1}{2}$ then
10:     $m_i \leftarrow$ ABSTAIN; set $\|\delta_i\|_{lower}=\|\delta_i\|_{max}=\|\delta_i\|_{volume}=0$, append to $\Delta$; // discard nodes with low matching confidence
11:   else
12:     compute the radii $\|\delta_i\|_{lower}$, $\|\delta_i\|_{max}$ and $\|\delta_i\|_{volume}$ described in Sec. 4.4, append to $\Delta$;
13:   end if
14: end for
15: return M, $\Delta$
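The following is a minimal sketch (ours) of the three radii of Eqs. 9, 12 and 13, computed from the eigenvalues of $\Sigma^{-1}$; logs are used in the volume term for numerical stability.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammaln

def scr_gm_radii(Sigma, pA_lower, pB_upper):
    """Eqs. 9, 12, 13: lower / max / volume radii from the joint smoothing covariance."""
    r = 0.5 * (norm.ppf(pA_lower) - norm.ppf(pB_upper))
    lam = 1.0 / np.linalg.eigvalsh(Sigma)  # eigenvalues of Sigma^{-1}
    n = len(lam)
    lower = r / np.sqrt(lam.max())         # Eq. 9: minor axis of the hyperellipsoid
    upper = r / np.sqrt(lam.min())         # Eq. 12: major axis
    # Eq. 13: r * sqrt(pi) / Gamma(n/2+1)^(1/n) * (prod_i 1/lam_i)^(1/(2n))
    volume = r * np.sqrt(np.pi) * np.exp(-gammaln(n / 2 + 1) / n
                                         - np.log(lam).sum() / (2 * n))
    return lower, upper, volume
```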
We validate the certified robustness on four representative deep GM models: GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a), NGMv2 (Wang et al., 2021) and also a non-deep method RRWM (Cho et al., 2010). In this work, data processing and parameter settings are the same as the original papers unless otherwise specified. The compared methods include RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). Since the anisotropic method in (Hong & Hong, 2022) is the same as in (Eiras et al., 2021) and (Hong & Hong, 2022) does not provide any code, we choose to compare with (Eiras et al., 2021). We follow the procedure as much similar as possible to that in (Cohen et al., 2019). In (Cohen et al., 2019), the certified accuracy (CA) is defined as: CA(R) = Ex,y [1(∥δ∥ ≥ R)1{g(x) = y}] . In our method, g represents the smoothed function defined in Eq. 4, x denotes the input node in test set, and y is its ground truth matching node. ∥δ∥ denotes the certified radius calculated by Eq. 9, Eq. 12, Eq. 13, R is the scale of x-axis, 1 is an indicator function. To quantify the improvement, we use Average Certified Radius (ACR) in (Zhai et al., 2020): Ex,y [∥δ∥1{g(x) = y}] . We use ℓlower2 , ℓmax2 and ℓΣ2 to express ∥δ∥lower, ∥δ∥max and ∥δ∥volume in the experiments. 5.2 EXPERIMENTS ON DEEP GRAPH MATCHING We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20}, and calculate the smoothing distribution of σ∗x in DDRS and Θ x in ANCER, where iteration number in DDRS and ANCER is equal to 100. Then we set normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCRGM. Fig. 1(b) shows the certified radius ∥δ∥lower and ACR on a sample of our method and baselines which indicates that the overall certified robustness of our methods is superior to the baselines. Then we evaluate our strategy on four deep GM methods, the relationship of top-1 certified accuracy and three radii (ℓlower2 , ℓ max 2 and ℓ Σ 2 ) are plotted in Fig. 2, which only shows the case of σ = 5. When the radius on x-axis is the same, the higher the certified accuracy on y-axis, the better the certified robustness. The certified accuracy of our method is slightly lower sometimes than baselines when ∥δ∥lower is small. However, when ∥δ∥lower is large, the accuracy of baseline decreases significantly or even fails completely while our method maintains a more respectable accuracy. When evaluating using ∥δ∥max and ∥δ∥volume, the advantages of our method are more obvious. We calculate the ACR ∥δ∥lower of four different RS-type methods (σ = 5) and four GM methods as shown in Tab. 1, which indicates that our method shows a better certified robustness performance over the whole dataset. To show the impact of certified robustness on the accuracy of the solvers, we use Tab. 2 to show the accuracy of base function, the standard accuracy and certified accuracy of different certified radius ∥δ∥lower using NGMv2 algorithm on Pascal VOC dataset. More results are detailed in Appendix D.1. 5.3 EXPERIMENTS ON NON-LEARNING GM METHODS For non-learning GM, we certify the effectiveness of SCR-GM using simulation experiments on classic non-learning solver RRWM. First we randomly generate two sets of node matrices and calculate their affinity matrix K using Gaussian kernel affinity function. Then we obtain the robustness results by perturbing node locations and edge features respectively using RS and SCR-GM smoothing distributions. We set σ = 0.5 and σ = 0.004 respectively in Fig. 
5.2 EXPERIMENTS ON DEEP GRAPH MATCHING

We first set the initial $\sigma$ of RS to $\sigma\in\{1,5,10,15,20\}$, and calculate the smoothing distributions $\sigma^*_x$ of DDRS and $\Theta^x$ of ANCER, where the iteration number in DDRS and ANCER equals 100. Then we set the normalization coefficient $\gamma=5$ and compute the joint distribution matrix $\Sigma$ of SCR-GM. Fig. 1(b) shows the certified radius $\|\delta\|_{lower}$ and ACR on a sample for our method and the baselines, indicating that the overall certified robustness of our method is superior to the baselines.

Then we evaluate our strategy on four deep GM methods; the relationship between top-1 certified accuracy and the three radii ($\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$) is plotted in Fig. 2, which shows only the case of $\sigma=5$. When the radius on the x-axis is the same, a higher certified accuracy on the y-axis indicates better certified robustness. The certified accuracy of our method is sometimes slightly lower than the baselines when $\|\delta\|_{lower}$ is small. However, when $\|\delta\|_{lower}$ is large, the accuracy of the baselines decreases significantly or even collapses entirely, while our method maintains a more respectable accuracy. When evaluating with $\|\delta\|_{max}$ and $\|\delta\|_{volume}$, the advantages of our method are more obvious. We report the ACR of $\|\delta\|_{lower}$ for the four RS-type methods ($\sigma=5$) and four GM methods in Tab. 1, which indicates that our method shows better certified robustness over the whole dataset. To show the impact of certified robustness on solver accuracy, Tab. 2 reports the accuracy of the base function, the standard accuracy, and the certified accuracy at different certified radii $\|\delta\|_{lower}$ using the NGMv2 algorithm on Pascal VOC. More results are detailed in Appendix D.1.

5.3 EXPERIMENTS ON NON-LEARNING GM METHODS

For non-learning GM, we certify the effectiveness of SCR-GM using simulation experiments on the classic non-learning solver RRWM. First, we randomly generate two sets of node matrices and calculate their affinity matrix K using a Gaussian kernel affinity function. Then we obtain the robustness results by perturbing node locations and edge features, respectively, using the RS and SCR-GM smoothing distributions. We set $\sigma=0.5$ and $\sigma=0.004$, respectively, in Fig. 3(a) and 3(b). Our method has performance similar to the baseline at the same $\|\delta\|_{lower}$. Moreover, it performs better in the other two cases, which indicates that the guaranteed space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment, because DDRS and ANCER require gradient optimization of networks and are not applicable to non-learning GM solvers.

5.4 THE EFFECT OF THE JOINT SMOOTHING DISTRIBUTION

First, we simplify B by retaining only the higher correlation values in the matrix according to a correlation ratio p, setting the other values to 0. The ratio is set to $p\in\{0\%,20\%,40\%,60\%,80\%,100\%\}$, where 100% represents SCR-GM retaining all correlation coefficients and 0% represents ANCER without correlation coefficients. Results in Fig. 4(a) demonstrate the effectiveness of $\Sigma$, which can be used to obtain better certified robustness properties. Then, we verify the impact of the initial $\sigma$ on $\Sigma$; the results are plotted in Fig. 4(b). The hyperparameter $\sigma$ determines the scale of $\Sigma$, which controls a trade-off between certified robustness and accuracy.

6 CONCLUSION AND OUTLOOK

We have proposed a definition of certified robustness on structural graph matching and designed a method, SCR-GM, that utilizes the correlations between nodes to construct a joint smoothing distribution. We obtain an $\ell_2$-norm certified space and radius for certification. For evaluation, we propose two additional radii based on eigenvalue properties. Experiments on deep GM networks and classic solvers show that our method achieves a state-of-the-art robustness guarantee.

Potential impact & limitations. The current technique is confined to graphs in Euclidean space (and specifically 2D graphs in our experiments); a more general formulation is QAP, where the perturbation may be directly added to the affinity matrix K. A significant direction is enabling robustness certification for combinatorial solvers, of which GM is one case. We hope this work can inspire subsequent research in this promising area, where theoretical results are welcome given the recent intensive empirical studies (Bengio et al., 2021; Yan et al., 2020).

A PROOFS OF THEOREM 1

Here we provide the complete proof of Theorem 1. We first prove the following Lemma 2, which is inspired by the Neyman-Pearson-for-Gaussians lemma derived in (Cohen et al., 2019), and introduce Lemma 3, which makes a random vector independent after a linear transformation.

Lemma 2 (Neyman-Pearson for joint Gaussian noise). Let $X\sim\mathcal{N}(x,\Sigma)$ and $Y\sim\mathcal{N}(x+\delta,\Sigma)$. Let $h:\mathbb{R}^d\to\{0,1\}$ be any deterministic or random function. Then:
1. If $S=\{k\in\mathbb{R}^d : \delta^T\Sigma^{-1}k \le \beta\}$ for some $\beta$ and $\mathbb{P}(h(X)=1)\ge\mathbb{P}(X\in S)$, then $\mathbb{P}(h(Y)=1)\ge\mathbb{P}(Y\in S)$.
2. If $S=\{k\in\mathbb{R}^d : \delta^T\Sigma^{-1}k \ge \beta\}$ for some $\beta$ and $\mathbb{P}(h(X)=1)\le\mathbb{P}(X\in S)$, then $\mathbb{P}(h(Y)=1)\le\mathbb{P}(Y\in S)$.

Proof. This lemma is the special case of Neyman-Pearson when X and Y are joint Gaussian variables with means $x$ and $x+\delta$. It suffices to show that for any $\beta$ there is some $t>0$ for which:

$$\{k : \delta^T\Sigma^{-1}k \le \beta\} = \Big\{k : \frac{\mu_Y(k)}{\mu_X(k)} \le t\Big\}, \qquad \{k : \delta^T\Sigma^{-1}k \ge \beta\} = \Big\{k : \frac{\mu_Y(k)}{\mu_X(k)} \ge t\Big\}. \quad (14)$$

For ease of representation, we write $\Sigma^{-1}$ as $S\in\mathbb{R}^{d\times d}$ with elements $s_{ij}$.
The likelihood ratio for this choice of X and Y turns out to be:

$$\begin{aligned}
\frac{\mu_Y(k)}{\mu_X(k)} &= \frac{\exp\big(-\frac{1}{2}(k-(x+\delta))^T\Sigma^{-1}(k-(x+\delta))\big)}{\exp\big(-\frac{1}{2}(k-x)^T\Sigma^{-1}(k-x)\big)} \\
&= \frac{\exp\big(-\frac{1}{2}\sum_i^d\sum_j^d (k_i-(x_i+\delta_i))\,s_{ij}\,(k_j-(x_j+\delta_j))\big)}{\exp\big(-\frac{1}{2}\sum_i^d\sum_j^d (k_i-x_i)\,s_{ij}\,(k_j-x_j)\big)} \\
&= \exp\Big(\delta^T\Sigma^{-1}k - \delta^T\Sigma^{-1}x - \frac{1}{2}\delta^T\Sigma^{-1}\delta\Big) = \exp\big(\delta^T\Sigma^{-1}k + b\big),
\end{aligned}$$

where b is a constant, specifically $b=-\delta^T\Sigma^{-1}x-\frac{1}{2}\delta^T\Sigma^{-1}\delta$. Therefore, given any $\beta$, we may take $t=\exp(\beta+b)$ and obtain:

$$\delta^T\Sigma^{-1}k \le \beta \iff \exp\big(\delta^T\Sigma^{-1}k+b\big) \le t, \qquad \delta^T\Sigma^{-1}k \ge \beta \iff \exp\big(\delta^T\Sigma^{-1}k+b\big) \ge t. \quad (15)$$

Lemma 3 (Joint Gaussian distribution). Let $X\sim\mathcal{N}(\mu,\Sigma)$ be a random vector, where $\mu\in\mathbb{R}^n$ is the mean vector and the positive definite real symmetric matrix $\Sigma\in\mathbb{S}^{n\times n}_{++}$ is the covariance matrix of X. Then there is a full-rank matrix $B\in\mathbb{R}^{n\times n}$ with $B^\top B=\Sigma$ such that $X=BZ+\mu$, $Z\sim\mathcal{N}(0,I)$.

Then we can prove Theorem 1, which we recall:

Theorem 1. Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon\sim\mathcal{N}(0,\Sigma)$. If $\underline{p_A}\in[0,1]$ and $\overline{p_B}\in[0,1]$ satisfy:

$$\mathbb{P}\big(f_i(z^1+\varepsilon)=r_A\big) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}\big(f_i(z^1+\varepsilon)=r_B\big), \quad (16)$$

then for $g_i(z^1+\delta)=r_A$ we obtain the certified $\ell_2$ space for the additive noise $\delta$:

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad (17)$$

where $B^\top B=\Sigma$, $B\in\mathbb{R}^{n_1\times n_1}$ is a full-rank, real symmetric matrix based on the physical relationships in the node matrix $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively.

To show that $g_i(z^1+\delta)=r_A$, it follows from the definition of $g_i$ that we need to show $\mathbb{P}(f_i(z^1+\delta+\varepsilon)=r_A) \ge \mathbb{P}(f_i(z^1+\delta+\varepsilon)=r_B)$. In the derivation, $r_B$ is actually not just the "runner-up" node, but any node different from $r_A$. We define the random variables:

$$X := z^1+\varepsilon \sim \mathcal{N}(z^1,\Sigma), \qquad Y := z^1+\delta+\varepsilon \sim \mathcal{N}(z^1+\delta,\Sigma).$$

We know that:

$$\mathbb{P}(f_i(X)=r_A) \ge \underline{p_A}, \qquad \mathbb{P}(f_i(X)=r_B) \le \overline{p_B}. \quad (18)$$

Our goal is to show that

$$\mathbb{P}(f_i(Y)=r_A) > \mathbb{P}(f_i(Y)=r_B). \quad (19)$$

According to Lemma 2, we can define the half-spaces:

$$A = \big\{k : \delta^T\Sigma^{-1}(k-z^1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big\}, \qquad B = \big\{k : \delta^T\Sigma^{-1}(k-z^1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big\}.$$

Claim 1 shows that $\mathbb{P}(X\in A)=\underline{p_A}$, therefore $\mathbb{P}(f_i(X)=r_A) \ge \mathbb{P}(X\in A)$. Hence we may apply Lemma 2 with $h(z):=\mathbf{1}[f_i(z)=r_A]$ to conclude:

$$\mathbb{P}(f_i(Y)=r_A) \ge \mathbb{P}(Y\in A). \quad (20)$$

Similarly, we obtain $\mathbb{P}(f_i(X)=r_B) \le \mathbb{P}(X\in B)$. Hence we may apply Lemma 2 with $h(z):=\mathbf{1}[f_i(z)=r_B]$ to conclude:

$$\mathbb{P}(f_i(Y)=r_B) \le \mathbb{P}(Y\in B). \quad (21)$$

Combining Eq. 20 and Eq. 21, we get the conditions for Eq. 19:

$$\mathbb{P}(f_i(Y)=r_A) \ge \mathbb{P}(Y\in A) > \mathbb{P}(Y\in B) \ge \mathbb{P}(f_i(Y)=r_B). \quad (22)$$

According to Claim 3 and Claim 4, we can express $\mathbb{P}(Y\in A)$ and $\mathbb{P}(Y\in B)$ as:

$$\mathbb{P}(Y\in A) = \Phi\Big(\Phi^{-1}(\underline{p_A}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big), \qquad \mathbb{P}(Y\in B) = \Phi\Big(\Phi^{-1}(\overline{p_B}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big). \quad (23)$$

Finally, we obtain that $\mathbb{P}(Y\in A) > \mathbb{P}(Y\in B)$ if and only if:

$$\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|} < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})\right), \quad \text{i.e.} \quad \frac{\delta^T(B^TB)^{-1}\delta}{\|\delta^T(B^TB)^{-1}B\|} < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})\right).$$

Because B is a real symmetric matrix ($B^T=B$), we finally get:

$$\|\delta^T B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})\right),$$

which recovers the theorem statement.

A.1 LINEAR TRANSFORMATION AND DERIVATION

We obtain four equations based on linear transformations.

Claim 1. $\mathbb{P}(X\in A)=\underline{p_A}$.

Proof. Recall that $A=\{k : \delta^T\Sigma^{-1}(k-z^1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\}$ and $X\sim\mathcal{N}(z^1,\Sigma)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(X\in A) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(X-z^1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}\mathcal{N}(0,\Sigma) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}B\,\mathcal{N}(0,I) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \Phi\big(\Phi^{-1}(\underline{p_A})\big) = \underline{p_A}.
\end{aligned}$$

Claim 2. $\mathbb{P}(X\in B)=\overline{p_B}$.

Proof.
Recall that $B=\{k : \delta^T\Sigma^{-1}(k-z^1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\}$ and $X\sim\mathcal{N}(z^1,\Sigma)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(X\in B) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(X-z^1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}B\,\mathcal{N}(0,I) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big) \\
&= 1-\Phi\big(\Phi^{-1}(1-\overline{p_B})\big) = \overline{p_B}.
\end{aligned}$$

Claim 3. $\mathbb{P}(Y\in A)=\Phi\Big(\Phi^{-1}(\underline{p_A})-\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big)$.

Proof. Recall that $A=\{k : \delta^T\Sigma^{-1}(k-z^1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\}$ and $Y\sim\mathcal{N}(z^1+\delta,\Sigma)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(Y\in A) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(Y-z^1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}(B\,\mathcal{N}(0,I)+\delta) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A}) - \delta^T\Sigma^{-1}\delta\big) \\
&= \mathbb{P}\Big(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p_A}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big) = \Phi\Big(\Phi^{-1}(\underline{p_A}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big).
\end{aligned}$$

Claim 4. $\mathbb{P}(Y\in B)=\Phi\Big(\Phi^{-1}(\overline{p_B})+\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big)$.

Proof. Recall that $B=\{k : \delta^T\Sigma^{-1}(k-z^1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\}$ and $Y\sim\mathcal{N}(z^1+\delta,\Sigma)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(Y\in B) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(Y-z^1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}(B\,\mathcal{N}(0,I)+\delta) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B}) - \delta^T\Sigma^{-1}\delta\big) \\
&= \mathbb{P}\Big(\mathcal{N}(0,1) \le \Phi^{-1}(\overline{p_B}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big) = \Phi\Big(\Phi^{-1}(\overline{p_B}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big).
\end{aligned}$$

B ROBUSTNESS GUARANTEE WHEN PERTURBING FEATURES

For GM with input $(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)$ and matching prediction X, we now focus on the effect of perturbing node features. Recall that the set F can be expressed as $F=\{f_i \mid f_i : (\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)\to r_j,\ i\in[n_1],\ j\in[n_2]\}$, where $\mathcal{G}_1=\{V_1,E_1\}$ and $\mathcal{G}_2=\{V_2,E_2\}$, $f_i$ matches the i-th node in $z^1$, and $r_j$ is the j-th node in $z^2$. We now define a new smoothed network $g_i$ that returns whichever node in $z^2$ is most likely to match the node in $z^1$ when the node features $V_1\in\mathbb{R}^{d_v\times n_1}$ are perturbed by joint smoothing noise:

$$g_i = \arg\max_{r_j\in z^2} \mathbb{P}\big(f_i(V_1+\varepsilon,E_1,V_2,E_2,z^1,z^2)=r_j\big), \quad \varepsilon\sim\mathcal{N}(0,\Sigma),\ i\in[n_1],\ j\in[n_2]. \quad (24)$$

For notational convenience, we simplify $f_i(V_1+\varepsilon,E_1,V_2,E_2,z^1,z^2)$ to $f_i(V_1)$. Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ is returned with probability $p_A=\max_{s_i\in S_i} s_i$, where $S_i$ is the i-th row of S. The probability of the "runner-up" node $r_B$ is denoted $p_B=\max_{s_i\in S_i,\,r_B\neq r_A} s_i$. Similarly, we obtain an $\ell_2$ certified space that guarantees the robustness of graph matching when perturbing features, as follows.

Theorem 2 ($\ell_2$ certified space when perturbing features). Let $f_i(V_1)$ be the node matching function, $g_i$ be defined as in Eq. 24, and $\varepsilon\sim\mathcal{N}(0,\Sigma)$. If $\underline{p_A}\in[0,1]$ and $\overline{p_B}\in[0,1]$ satisfy:

$$\mathbb{P}\big(f_i(V_1+\varepsilon)=r_A\big) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}\big(f_i(V_1+\varepsilon)=r_B\big), \quad (25)$$

then for $g_i(V_1+\delta)=r_A$ we obtain the certified $\ell_2$ space for the additive noise $\delta$:

$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})\right), \quad (26)$$

where $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively. We set $B^\top B=\Sigma$, where $B\in\mathbb{R}^{(d_v\times n_1)\times(d_v\times n_1)}$ is a diagonal matrix. Unlike the correlation matrix in Eq. 10, B here is a diagonal matrix similar to (Eiras et al., 2021). However, B is obtained from structure-based prior knowledge rather than the optimization process of (Eiras et al., 2021). We divide the node features $V_1$ into $n_1$ parts and add independent and identically distributed noise of the same intensity (denoted $b_m$, $m\in[n_1]$) to each part.
The noise intensity of the m-th part is defined as $b_m=\frac{d_m}{d}\sigma$, where d is the total distance between nodes in $z^1$, $d_m$ is the distance between the m-th node and the other nodes, and the original $\sigma$ is as described in (Cohen et al., 2019). This setting indicates that outlier points are more resistant to perturbation. Finally, we can derive the same radius forms as Eqs. 9, 12 and 13.

C EXPERIMENTAL SETUP

In this work, we evaluate our strategy on deep graph matching networks and a classic non-learning solver. The procedures to obtain the baseline networks and the evaluation methods are detailed as follows.

C.1 BASELINE CERTIFICATION METHODS

For certification, the baselines we consider are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We adapt the off-the-shelf DDRS and ANCER to obtain the data-dependent distribution $\sigma^*_x$ and the anisotropic distribution $\Theta^x$ for graph matching. We add noise to the graphs and use $p_A=\max_{s_i\in S_i} s_i$ and $p_B=\max_{s_i\in S_i,\,r_B\neq r_A} s_i$ to calculate the gap value $\Phi^{-1}(p_A)-\Phi^{-1}(p_B)$ ($S_i$ is the i-th row of S). The optimization equations and parameters remain the same as in the original algorithms. Then we use SCR-GM to get our joint distribution $\Sigma$. Finally, we use the Monte Carlo algorithms of (Cohen et al., 2019) to sample noise according to the different distributions and output the three radii derived in Sec. 4.2 and 4.4. The sample numbers n and $n_0$ are set to 1000 and 100 due to the efficiency of graph matching networks, and the other parameters are the same as in the original settings (Cohen et al., 2019). We also use a hypothesis test (Hung & Fithian, 2019) as in (Cohen et al., 2019), with $\alpha$ representing the probability of returning an incorrect matching result. In this paper we set $\alpha=0.001$, so there is a high probability (99.9% in this paper) of a valid certificate. Since $\alpha$ can be set arbitrarily small, in theory our method is highly reliable.
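A minimal sketch (ours) of the one-sided confidence bounds used above, following the Clopper-Pearson construction of Cohen et al. (2019) via statsmodels; counts are illustrative.

```python
from statsmodels.stats.proportion import proportion_confint

def bound_pA_pB(nA: int, nB: int, n: int, alpha: float = 0.001):
    """One-sided (1 - alpha) Clopper-Pearson bounds as in Cohen et al. (2019):
    lower-bound pA and upper-bound pB from Monte Carlo counts nA, nB out of n."""
    pA_lower = proportion_confint(nA, n, alpha=2 * alpha, method="beta")[0]
    pB_upper = proportion_confint(nB, n, alpha=2 * alpha, method="beta")[1]
    return pA_lower, pB_upper

print(bound_pA_pB(nA=900, nB=80, n=1000))  # approximately (0.87, 0.11)
```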
C.2 EVALUATION ON DEEP GRAPH MATCHING

For deep graph matching, we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). We follow the protocol of (Wang et al., 2021) and filter out poorly annotated images. In the experiments, we use 100 inputs (containing approximately 650 nodes) from 20 categories to certify the matching robustness. We check our strategy on four representative deep graph matching methods, GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), using the checkpoints of these GM models collected by ThinkMatch (https://github.com/Thinklab-SJTU/ThinkMatch). We directly evaluate the certified robustness of these networks without fine-tuning.

C.3 EVALUATION ON A NON-LEARNING METHOD

For the non-learning method, we mainly evaluate our approach on simulation data containing randomly generated node pairs. In the experiments, we use 100 inputs (each containing 5-10 random nodes) and evaluate the strategy on the classic solver RRWM (Cho et al., 2010). For evaluation, we extract node features and calculate the affinity matrix K using a Gaussian kernel affinity function. Then we perturb node locations and features separately and obtain the certified robustness results.

C.4 SIMILARITY MEASURES

In addition to Eq. 10, we also use three other similarity measures to construct B: cosine similarity, Pearson similarity and Dice similarity, as follows. For two points $A=(a_1,a_2,\dots,a_n)$ and $B=(b_1,b_2,\dots,b_n)$ in the Euclidean space $\mathbb{R}^n$, cosine similarity is defined as:

$$\mathrm{CosineSimilarity}(A,B) = \frac{A\cdot B}{\|A\|_2\,\|B\|_2} = \frac{\sum_{i=1}^n a_i b_i}{\sqrt{\sum_{i=1}^n a_i^2}\,\sqrt{\sum_{i=1}^n b_i^2}} \in [-1,1]. \quad (27)$$

Pearson similarity is defined as:

$$\mathrm{PearsonSimilarity}(A,B) = \frac{\mathrm{cov}(A,B)}{\sigma_A\,\sigma_B} = \frac{\sum_{i=1}^n (a_i-\bar{A})(b_i-\bar{B})}{\sqrt{\sum_{i=1}^n (a_i-\bar{A})^2}\,\sqrt{\sum_{i=1}^n (b_i-\bar{B})^2}} \in [-1,1], \quad (28)$$

where $\bar{A}=\sum_{i=1}^n a_i/n$ and $\bar{B}=\sum_{i=1}^n b_i/n$. Dice similarity is defined as:

$$\mathrm{DiceSimilarity}(A,B) = \frac{2\sum_{i=1}^n a_i b_i}{\sum_{i=1}^n (a_i^2+b_i^2)}, \quad (29)$$

where A and B cannot both be the zero point.

Table 1: The ACR $\|\delta\|_{lower}$ of four RS-type methods ($\sigma=5$) and four GM methods on the Pascal VOC dataset.

Method   | NGMv2  | CIE-H  | PCA-GM | GMN
RS       | 4.189  | 2.880  | 2.745  | 2.037
DDRS     | 5.936  | 3.505  | 3.307  | 2.741
ANCER    | 6.300  | 3.367  | 3.179  | 2.517
SCR-GM   | 7.107* | 3.726* | 3.455* | 2.745*

Table 2: The accuracy of the base function (BA) of NGMv2, and the standard accuracy (SA) and certified accuracy (CA) at different certified radii $\|\delta\|_{lower}$ using the NGMv2 algorithm ($\sigma=5$) on the Pascal VOC dataset.

Method   | BA (%) | SA (%) | CA (%) R=3.5 | CA (%) R=7.0 | CA (%) R=10.5
SCR-GM   | 77.3   | 75.6   | 63.7         | 51.5*        | 36.4*
ANCER    | 77.3   | 76.5   | 64.2         | 49.1         | 23.8
DDRS     | 77.3   | 77.4*  | 66.6         | 50.5         | 18.2
RS       | 77.3   | 76.7   | 66.9*        | 0.0          | 0.0
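A minimal sketch (ours) of the three similarity measures of Eqs. 27-29, directly on NumPy vectors:

```python
import numpy as np

def cosine_sim(a, b):
    """Eq. 27: cosine similarity, in [-1, 1]."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson_sim(a, b):
    """Eq. 28: Pearson correlation, in [-1, 1]."""
    a0, b0 = a - a.mean(), b - b.mean()
    return a0 @ b0 / (np.linalg.norm(a0) * np.linalg.norm(b0))

def dice_sim(a, b):
    """Eq. 29: Dice similarity (a and b must not both be zero)."""
    return 2 * (a @ b) / ((a * a).sum() + (b * b).sum())
```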
D EXPERIMENTAL RESULTS

D.1 CERTIFICATION RESULTS OF DEEP GRAPH MATCHING

D.1.1 PERTURBING NODE LOCATION

For perturbing node locations, we report the certified accuracy at the $\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$ radii, for each certification method, RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022), ANCER (Eiras et al., 2021) and SCR-GM, each network, GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), and each original $\sigma$ ($\sigma=1,5,10,15,20$). Figures 5, 6, 7 and 8 show the certified results on the different graph matching networks, respectively. In addition, we certify the effect of the normalization parameter $\gamma$; Fig. 12 shows the results for the NGMv2 (Wang et al., 2021) algorithm with $\sigma=5$. Tab. 3 shows the impact of different choices for constructing B on the certified robustness. B constructed by Euclidean distance and Dice similarity performs better; the advantage of B constructed by Euclidean distance is more obvious when the radius is larger.

Table 3: The impact of different similarity measures for constructing B on the certified robustness.

Measure   | SA (%) | CA (%) R=3.0 | CA (%) R=6.0 | CA (%) R=9.0 | CA (%) R=12.0
Euclidean | 75.2   | 64.3         | 53.3*        | 42.0*        | 24.1*
Dice      | 75.6*  | 65.5*        | 52.3         | 41.0         | 23.5
Cosine    | 75.6   | 65.1         | 52.0         | 41.0         | 23.5
Pearson   | 75.6   | 65.2         | 51.8         | 40.7         | 23.6

Figure 5: Top-1 certified accuracy at the $\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$ radii for different RS-type methods on NGMv2. The hyperparameter $\sigma$ trades off the certified accuracy and radii.

D.1.2 PERTURBING FEATURES

For perturbing node features, we only compare our strategy with RS (Cohen et al., 2019), due to the excessive inefficiency of DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We set the original $\sigma$ to $\sigma=0.25,0.5,1,1.5,2$; the other settings are the same as in Appendix D.1.1. Fig. 9 shows the certified results on the different graph matching networks when perturbing node features.

D.2 CERTIFICATION RESULTS OF NON-LEARNING METHODS

In this section, we report the certified accuracy at the $\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$ radii, for the certification method of (Cohen et al., 2019) and SCR-GM on RRWM (Cho et al., 2010). We set the original $\sigma$ to $\sigma=0.3,0.4,0.5$ when perturbing node locations, and $\sigma=0.001,0.004,0.006$ when perturbing features. Figs. 13 and 14 show the certified results on the classic solver.

E CERTIFIED ROBUSTNESS OF THE SOLUTION X'S STRUCTURE

In Sec. 4, we focus on the certified robustness of node matching results in the graph rather than the whole graph matching result. Our work treats the GM solver as a black box to get the relaxed matching S, then uses a post-binarization step to modify the output format X and obtain the node matching function set F, whose robustness we certify. However, we can also certify the robustness of the full matrix X, which utilizes more graph structure information and fully accounts for the constraints in Eq. 1.

E.1 DEFINITION

Consider a graph matching problem mapping the input space to partial permutation matrices $\mathcal{X}$. As discussed above, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. When queried at the input $(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)$, the smoothed function g returns whichever matrix X the base function f is most likely to return when $z^1$ is perturbed by noise:

$$g = \arg\max_{X\in\mathcal{X}} \mathbb{P}\big(f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2)=X\big), \quad \varepsilon\sim\mathcal{N}(0,\Sigma). \quad (30)$$

The additive noise $\varepsilon$ follows a joint Gaussian distribution whose covariance $\Sigma$ represents the correlations between nodes. In addition, $\Sigma$ is a hyperparameter of the certified function which controls the robustness/accuracy trade-off.

E.2 ROBUSTNESS GUARANTEE FOR X

We define a robustness guarantee with confidence $c\in[0,1]$, which ensures that the similarity between the output matrix calculated by g and its ground-truth matrix $X_g$ is not less than the confidence c. Suppose that when the base function f solves $(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2)$, output matrices whose similarity to $X_g$ is not less than c are returned with probability p:

$$\mathcal{X}' = \Big\{X_i \,\Big|\, \frac{X_i\cdot X_g}{X_g\cdot X_g} \ge c,\ X_i\in\mathcal{X}\Big\}, \qquad p = \mathbb{P}(X_i\in\mathcal{X}'). \quad (31)$$

Our main result is that the smoothed function g is robust within an $\ell_2$ certified space, which also holds if we replace p with a lower bound $\underline{p}$.

Theorem 3 ($\ell_2$ certified space for $\mathcal{X}$). Let f be a matching function, g be defined as in Eq. 30, and $\varepsilon\sim\mathcal{N}(0,\Sigma)$. Suppose $X_A\in\mathcal{X}'$ and $\underline{p}\in(\frac{1}{2},1]$ satisfy:

$$\mathbb{P}\big(f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2)=X_A,\ X_A\in\mathcal{X}'\big) \ge \underline{p}. \quad (32)$$

Then we obtain the certified $\ell_2$ space for the additive noise $\delta$:

$$\|\delta^\top B^{-1}\| < \Phi^{-1}(\underline{p}), \quad (33)$$

which guarantees $g(\mathcal{G}_1,\mathcal{G}_2,z^1+\delta,z^2)\in\mathcal{X}'$. As in Eq. 6, $B^\top B=\Sigma$ and $B\in\mathbb{R}^{n_1\times n_1}$ is a full-rank, real symmetric matrix based on the node correlations in the node matrix $z^1$. The detailed settings and properties of B and $\Sigma$ are the same as in Section 4.3. Based on Lemma 1 and the certified space in Eq. 33, we can further obtain a certified $\ell_2$-norm radius:

$$\|\delta\|_{lower} < \frac{1}{\sqrt{\lambda_{max}}}\,\Phi^{-1}(\underline{p}), \quad (34)$$

where $\lambda_{max}$ is the maximum eigenvalue of $\Sigma^{-1}$. We can also define a maximum radius of the certified space:

$$\|\delta\|_{max} = \frac{1}{\sqrt{\lambda_{min}}}\,\Phi^{-1}(\underline{p}), \quad (35)$$

where $\lambda_{min}$ is the minimum eigenvalue of $\Sigma^{-1}$.
The proxy radius $\|\delta\|_{volume}$ is as follows:

$$\|\delta\|_{volume} = r\,\frac{\sqrt{\pi}}{\sqrt[n]{\Gamma(n/2+1)}}\,\sqrt[2n]{1\Big/\prod_{i=1}^n \lambda_i}, \quad (36)$$

with $r=\Phi^{-1}(\underline{p})$. The whole robustness certification process is shown in Algorithm 2.

Algorithm 2: Graph Matching Robustness Certification for X with SCR-GM
Input: graph pair $(\mathcal{G}_1,\mathcal{G}_2)$ with node matrices $z^1$ and $z^2$; base function f of graph matching; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original $\sigma$; normalization coefficient $\gamma$; sampling times $k_0$; matrix similarity confidence c.
Output: matching result $\hat{X}_g$ and radius set R.
1: Obtain the data-dependent $\sigma^*_x$ by adapting (see details in Appendix C) the off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting;
2: Obtain the anisotropic $\Theta^x$ by adapting (see details in Appendix C) the off-the-shelf ANCER method (Eiras et al., 2021);
3: Obtain B and the regularized $\Sigma$ described in Sec. 4.3 according to Eqs. 10 and 11;
4: Sample $k_0$ noisy samples for $\mathcal{G}_1$'s node matrix: $z^1_1{}',\dots,z^1_{k_0}{}' \sim \mathcal{N}(z^1,\Sigma)$;
5: Compute the approximate ground-truth matrix $\hat{X}_g$;
6: Sample $k$ ($k=10k_0$) noisy samples for $\mathcal{G}_1$'s node matrix: $z^1_1{}',\dots,z^1_k{}' \sim \mathcal{N}(z^1,\Sigma)$ and obtain an approximate output set $\hat{\mathcal{X}}$;
7: Calculate the one-sided confidence lower bound $\underline{p}$ using the set $\hat{\mathcal{X}}$ and Eq. 31;
8: if $\underline{p} < \frac{1}{2}$ then
9:   X $\leftarrow$ ABSTAIN; set $\|\delta\|_{lower}=\|\delta\|_{max}=\|\delta\|_{volume}=0$, append to R; // discard matching results with low confidence
10: else
11:   compute the radii $\|\delta\|_{lower}$, $\|\delta\|_{max}$ and $\|\delta\|_{volume}$ described in Sec. 4.4, append to R;
12: end if
13: return $\hat{X}_g$, R

In fact, we cannot obtain the real $X_g$ and $\mathcal{X}$ during the certification stage, so we use Monte Carlo sampling to estimate them. We first sample $f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2)$ $n_0$ times and sum all permutation matrices to get $X_s$, then use the Sinkhorn and Hungarian algorithms to approximate $X_g$. During certification, if the approximate $\hat{X}_g$ differs from the ground-truth matrix $X_g$, we consider the certification of this sample to have failed. Then we sample $f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2)$ n times and put all returned matrices into the set $\hat{\mathcal{X}}$ to approximate $\mathcal{X}$. When n is large, $\hat{\mathcal{X}}$ and $\mathcal{X}$ are relatively close.

E.3 EXPERIMENTS

We evaluate our method on deep graph matching networks and non-learning solvers. The evaluation settings are the same as in Sec. 5.1.

E.3.1 EXPERIMENTS ON DEEP GRAPH MATCHING

We focus on certifying the robustness of node locality and compare the $\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$ certification of four certification methods on four deep GM algorithms. We first set the initial $\sigma$ of RS to $\sigma\in\{1,5,10,15,20\}$ and the confidence to $c=0.9$, and calculate the smoothing distributions $\sigma^*_x$ of DDRS and $\Theta^x$ of ANCER, where the iteration number in DDRS and ANCER equals 100. Then we set the normalization coefficient $\gamma=5$ and compute the joint distribution matrix $\Sigma$ of SCR-GM.

Then we evaluate our strategy on the four deep GM methods; the relationship between top-1 certified accuracy and the three radii ($\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$) is plotted in Fig. 10. When the radius on the x-axis is the same, a higher certified accuracy on the y-axis indicates better certified robustness. Our method outperforms the baselines on the NGMv2 algorithm, i.e. the certified accuracy is higher at the same radii ($\ell_2^{lower}$, $\ell_2^{max}$ and $\ell_2^{\Sigma}$). On the CIE-H and PCA-GM algorithms, the certified accuracy of our method is sometimes slightly lower than the baselines when the $\ell_2^{lower}$ radius is small. However, when the $\ell_2^{lower}$ radius is large, the accuracy of the baselines decreases significantly or even collapses entirely, while our method maintains a more respectable accuracy. When evaluating with the $\ell_2^{max}$ and $\ell_2^{\Sigma}$ radii, the certified results of our method are similar to the baselines. On the GMN algorithm, our certification results are slightly worse than ANCER. In short, the certified robustness advantage of our method is more obvious on algorithms with better matching accuracy.

E.3.2 EXPERIMENTS ON NON-LEARNING GM METHODS

For non-learning GM, we certify the effectiveness of SCR-GM using simulation experiments on the classic non-learning solver RRWM. First, we randomly generate two sets of node matrices and calculate their affinity matrix K using a Gaussian kernel affinity function. Then we obtain the robustness results by perturbing node locations and edge features, respectively, using the RS and SCR-GM smoothing distributions.
We set $\sigma=0.1$ and $\sigma=0.0001$, respectively, in Fig. 11(a) and 11(b). Our method has certified accuracy similar to the baseline at the same $\|\delta\|_{lower}$. However, it performs better on $\|\delta\|_{volume}$ and $\|\delta\|_{max}$, which indicates that the guaranteed space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment, because DDRS and ANCER require gradient optimization of networks and are not applicable to non-learning GM solvers.

E.4 PROOF

To show that $g(\mathcal{G}_1,\mathcal{G}_2,z^1+\delta,z^2)\in\mathcal{X}'$, it follows from the definition of g that we need to show:

$$\mathbb{P}\big(f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon+\delta,z^2)=X_A,\ X_A\in\mathcal{X}'\big) \ge \mathbb{P}\big(f(\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon+\delta,z^2)=X_B,\ X_B\notin\mathcal{X}'\big).$$

We define two random variables:

$$I := (\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon,z^2) = (\mathcal{G}_1,\mathcal{G}_2,\mathcal{N}(z^1,\Sigma),z^2), \qquad O := (\mathcal{G}_1,\mathcal{G}_2,z^1+\varepsilon+\delta,z^2) = (\mathcal{G}_1,\mathcal{G}_2,\mathcal{N}(z^1+\delta,\Sigma),z^2).$$

We know that:

$$\mathbb{P}(f(I)=X_A,\ X_A\in\mathcal{X}') \ge \underline{p}. \quad (37)$$

Our goal is to show that

$$\mathbb{P}(f(O)=X_A,\ X_A\in\mathcal{X}') > \mathbb{P}(f(O)=X_B,\ X_B\notin\mathcal{X}'). \quad (38)$$

According to Lemma 2, we can define the half-spaces:

$$A = \big\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big\}, \qquad B = \big\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big\}.$$

Claim 1 shows that $\mathbb{P}(I\in A)=\underline{p}$, therefore $\mathbb{P}(f(I)=X_A,\ X_A\in\mathcal{X}') \ge \mathbb{P}(I\in A)$. Hence we may apply Lemma 2 to conclude:

$$\mathbb{P}(f(O)=X_A,\ X_A\in\mathcal{X}') \ge \mathbb{P}(O\in A). \quad (39)$$

Similarly, we obtain $\mathbb{P}(f(I)=X_B,\ X_B\notin\mathcal{X}') \le \mathbb{P}(I\in B)$. Hence we may apply Lemma 2 to conclude:

$$\mathbb{P}(f(O)=X_B,\ X_B\notin\mathcal{X}') \le \mathbb{P}(O\in B). \quad (40)$$

Combining Eq. 39 and Eq. 40, we get the conditions for Eq. 38:

$$\mathbb{P}(f(O)=X_A,\ X_A\in\mathcal{X}') \ge \mathbb{P}(O\in A) > \mathbb{P}(O\in B) \ge \mathbb{P}(f(O)=X_B,\ X_B\notin\mathcal{X}'). \quad (41)$$

According to Claim 3 and Claim 4, we can express $\mathbb{P}(O\in A)$ and $\mathbb{P}(O\in B)$ as:

$$\mathbb{P}(O\in A) = \Phi\Big(\Phi^{-1}(\underline{p}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big), \qquad \mathbb{P}(O\in B) = \Phi\Big(-\Phi^{-1}(\underline{p}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big). \quad (42)$$

Finally, we obtain that $\mathbb{P}(O\in A) > \mathbb{P}(O\in B)$ if and only if:

$$\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|} < \Phi^{-1}(\underline{p}), \quad \text{i.e.} \quad \frac{\delta^T(B^TB)^{-1}\delta}{\|\delta^T(B^TB)^{-1}B\|} < \Phi^{-1}(\underline{p}).$$

Since B is a real symmetric matrix ($B^T=B$), we finally get:

$$\|\delta^T B^{-1}\| < \Phi^{-1}(\underline{p}),$$

which recovers the theorem statement.

E.4.1 LINEAR TRANSFORMATION AND DERIVATION

We obtain four equations based on linear transformations.

Claim 1. $\mathbb{P}(I\in A)=\underline{p}$.

Proof. Recall that $A=\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(I\in A) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(I-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}B\,\mathcal{N}(0,I) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \Phi\big(\Phi^{-1}(\underline{p})\big) = \underline{p}.
\end{aligned}$$

Claim 2. $\mathbb{P}(I\in B)=1-\underline{p}$.

Proof. Recall that $B=\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(I\in B) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(I-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= 1-\Phi\big(\Phi^{-1}(\underline{p})\big) = 1-\underline{p}.
\end{aligned}$$

Claim 3. $\mathbb{P}(O\in A)=\Phi\Big(\Phi^{-1}(\underline{p})-\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big)$.

Proof. Recall that $A=\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$ and $O\sim(\mathcal{G}_1,\mathcal{G}_2,\mathcal{N}(z^1+\delta,\Sigma),z^2)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(O\in A) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(O-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}(B\,\mathcal{N}(0,I)+\delta) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \le \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p}) - \delta^T\Sigma^{-1}\delta\big) \\
&= \mathbb{P}\Big(\mathcal{N}(0,1) \le \Phi^{-1}(\underline{p}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big) = \Phi\Big(\Phi^{-1}(\underline{p}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big).
\end{aligned}$$

Claim 4. $\mathbb{P}(O\in B)=\Phi\Big(-\Phi^{-1}(\underline{p})+\frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big)$.

Proof.
Recall that $B=\{k : \delta^T\Sigma^{-1}(k-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$ and $O\sim(\mathcal{G}_1,\mathcal{G}_2,\mathcal{N}(z^1+\delta,\Sigma),z^2)$; according to Lemma 3, we get:

$$\begin{aligned}
\mathbb{P}(O\in B) &= \mathbb{P}\big(\delta^T\Sigma^{-1}(O-(\mathcal{G}_1,\mathcal{G}_2,z^1,z^2)) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\delta^T\Sigma^{-1}(B\,\mathcal{N}(0,I)+\delta) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\big) \\
&= \mathbb{P}\big(\|\delta^T\Sigma^{-1}B\|\,\mathcal{N}(0,1) \ge \|\delta^T\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p}) - \delta^T\Sigma^{-1}\delta\big) \\
&= \mathbb{P}\Big(\mathcal{N}(0,1) \ge \Phi^{-1}(\underline{p}) - \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big) \\
&= \mathbb{P}\Big(\mathcal{N}(0,1) \le -\Phi^{-1}(\underline{p}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big) = \Phi\Big(-\Phi^{-1}(\underline{p}) + \frac{\delta^T\Sigma^{-1}\delta}{\|\delta^T\Sigma^{-1}B\|}\Big).
\end{aligned}$$
1. What is the focus and contribution of the paper on certified robustness for graph matching?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and experimental results?
3. What are the weaknesses of the paper, especially regarding its potential applications and limitations in interpreting the results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper provides a new result on certified robustness for graph matching. It's based on randomized smoothing (Cohen 2019), but the authors use a correlation matrix based on the graph information to construct a joint Gaussian distribution for smoothing (vs. the standard single Gaussian distribution). The experimental results seem to show some effect, but it's very hard to quantify with the provided results (basically guessing about one line vs. another). This heavily detracts from the work.

Strengths And Weaknesses
Strengths:
- New results on the certified robustness for graph matching.
- Experimental results seem to show a benefit.
- Generally good write-up, but weak experimental analysis.

Weaknesses:
- It's not obvious that there will ever be any application of these ideas (quite possibly low significance).
- Information provided on experiments makes it hard to quantify the improvement. Need metrics like AUC to quantify the performance better. (Is AUC even the right metric?)
- Authors don't have a good explanation for interpreting results either.

Clarity, Quality, Novelty And Reproducibility
Clarity: Fairly clear for the material as far as the theory goes. Experimental results need more.
Quality: Generally high quality.
Novelty: Harder to judge, but I think there's some novelty. Significance might be low though.
Reproducibility: Code provided.
Title Certified Robustness on Structural Graph Matching Abstract The vulnerability of graph matching (GM) to adversarial attacks has received increasing attention from emerging empirical studies, while the certified robustness of GM has not been explored. Inspired by the technique of randomized smoothing, in this paper, for the first time to our best knowledge, the certified robustness on GM is defined and a new certification strategy is designed called Structure-based Certified Robustness of Graph Matching (SCR-GM). Structural prior information of nodes is used to construct a joint smoothing distribution matrix with physical significance, which certifies a wider range than those obtained by previous iterative optimization methods. Furthermore, we propose a certified space that can be used to derive a strictly certified radius and two extra radii for evaluation. Experimental results on GM datasets reveal that our strategy achieves state-of-the-art l2 certified accuracy and regions. Source code will be made publicly available. 1 INTRODUCTION As a well-known NP-hard problem in its general form (Yan et al., 2016) with wide applications e.g. in computer vision and pattern recognition, graph matching (GM) refers to establishing correspondences among two (Cho et al., 2010) or multiple graphs (Jiang et al., 2021). Given two input graphs G1 = {V1,E1} and G2 = {V2,E2} with two sets of annotated nodes z1 ∈ Rn1×2 and z2 ∈ Rn2×2 (assumed in Euclidean space in this paper). Here, V1 ∈ Rdv×n1 and E1 ∈ Rde×m1 represent the feature matrix of n1 nodes and m1 edges (likewise for V2 and E2). The similarities between nodes and edges are formulated into a global affinity matrix K ∈ Rn1n2×n1n2 , whose diagonal and off-diagonal elements store the node-to-node and edge to-edge affinities. It aims to maximize the overall affinity score J of the matching nodes and the edges (Leordeanu & Hebert, 2005) in the form of quadratic assignment problem (QAP) (Loiola et al., 2007): max X J(X) = vec(X)⊤K vec(X), s.t. X ∈ {0, 1}n1×n2 ,X1n1 = 1n1 ,X⊤1n2 ≤ 1n2 , (1) where vec(X) denotes the column-wise vector of the matching solution X ∈ {0, 1}n1×n2 which can be a partial permutation matrix when n1 < n2. One common approach is to relax X’s raw binary constraint into a continuous one (between [0,1]), especially in the form of (partial) doubly-stochastic matrix S ∈ [0, 1]n1×n2 of which the sum of rows/columns is 1 (or zero for partial case). The final X can be obtained by the Hungarian algorithm (Burkard & Dell’Amico, 2009): X = Hung(S). Eq. 1 can also directly incorporate deep nets to obtain the learned affinity matrix K by learning the raw attributes of the graphs e.g. CNNs for images from which the visual graphs are extracted, as well as learning the structure via graph neural networks (GNNs) (Wang et al., 2019): K=NN(G1,G2). Studies on robustness of machine learning models have attracted wide attention, while the robustness of combinatorial solvers is an emergning and unmatured topic (Geisler et al., 2021; Lu et al., 2021). Under the deep GM paradigm, Ren et al. (2022) reveal that the combinatorial GM algorithms can also be sensitive to (additive) noise perturbations not only in appearance but also for structure, similar to the node classification models (Dai et al., 2018; Sun et al., 2018), and an empirical defense algorithm via an appearance-aware regularizer is proposed. So far, there still lacks principled certified defense to provide theoretical robustness guarantees for GM (let alone other combinatorial problems). 
In fact, existing certified robustness mechanisms (including randomized smoothing) in the graph domain (Rong et al., 2019; Bojchevski et al., 2020; Zügner & Günnemann, 2020; Jia et al., 2020) are confined to unconstrained node or graph-level classification/prediction within a single graph, which cannot be readily adopted for solving the cross-graph and combinatorial problems with structured output like the permutation matrix in GM. Certifiable robustness studies solvers whose prediction at any point x is verifiably constant within some set around x (Wong & Kolter, 2018). As a recent promising approach to achieve certified defense, randomized smoothing (RS) (Lecuyer et al., 2019; Cohen et al., 2019) provides a general robust guarantee applicable to large-scale neural networks against arbitrary attacks. Given an input x and a base classifier, randomized smoothing constructs a ‘smoothed classifier’ which is certifiable within the region characterized by x and the smoothing distribution D. RS has been used in certifying different models, e.g., image classification (Yang et al., 2020) and object detection in vision (Chiang et al., 2020). As an initiative for applying RS to GM1, in this paper we mainly consider two major challenges to solve. C1: varying-size of input graphs. It is not suitable to certify graphs with different sizes by using an identical smoothing distribution. C2: dependency of nodes in graph. The graph structure as a whole carries important information for certification. For the first challenge, we could refer to data-dependent certified robustness methods on image classification task. Some data-dependent methods (Alfarra et al., 2022; Eiras et al., 2021; Hong & Hong, 2022; Labarbarie et al., 2022) are proposed recently to vary and optimize the smoothing distributions D for larger certification region. Therefore, these methods can also be used to construct varying smoothing distributions for graphs with varying sizes. For the second challenge, we expect smoothing distributions constructing correlations between nodes in a graph, which is lacking for current randomized smoothing. Datadependent methods consider little on the heterogeneity and structure of inputs. For example, Alfarra et al. (2022) treat all pixels in one image equally, Eiras et al. (2021) treat pixels differently but cannot reveal their correlation. Thus none of them can overcome the second challenge. In this paper, we aim to solve certified robustness of GM, by analyzing the individual matching robustness of each node, instead of the whole variation of the output matching matrix X in Eq. 1. In particular, we study the node classification task when converting the relaxed solution S into the final matching X (see Eq. 1 and the discussion therein), as such the RS-type certification phase can be naturally introduced during the classification stage. Specifically, we propose the Structure-based Certified Robustness of Graph Matching (SCR-GM) which adopts joint Gaussian distribution instead of independent homogeneous distribution to construct the smoothing solvers. As adversarial attacks tend to perturb the strongly correlated nodes at the same time, the additive noise sampled from joint distribution with structural information and physical meaning can reveal this correlation. According to our theoretical analysis, we obtain the robustness guarantee on GM which describes a certified ℓ2-norm space ant its lower bound radius. In addition, we propose another two radii to help evaluate the robustness more comprehensively. 
We evaluate our strategy on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and on a simulation dataset with random node sets. Experimental results reveal that our strategy outperforms previous works (Cohen et al., 2019; Alfarra et al., 2022; Eiras et al., 2021) on structural GM in terms of ℓ2 certified accuracy and regions. Our contributions are as follows: 1) We propose a general framework for incorporating existing RS-based techniques to certify graph matching solvers, as long as the solver involves a post-binarization step converting the relaxed matching S (produced by an arbitrary relaxed GM solver) into node matchings, which is often the case for both learning-based and classic solvers. 2) Based on our proposed framework, we present the first definition, to our best knowledge (see Eq. 5), of certified robustness for a graph matching solver. 3) We propose a certification method dubbed Structure-based Certified Robustness of GM (SCR-GM) (see Sec. 4.3), which uses jointly distributed noise to model dependent node matching certification. 4) A certified space and a lower-bound radius are derived to guarantee the robustness of graph matching. Two further radii are devised for a more complete evaluation of robustness, covering potentially safe regions and the largest feasible perturbations.

¹ Another challenge is how to better handle the constraints on X, which relates to extending the certification from the specific GM problem to other combinatorial solvers; we leave this for future work.

2 RELATED WORK We discuss works on certified robustness related to randomized smoothing and on the robustness of GM. Certified Robustness related to Randomized Smoothing Lecuyer et al. (2019) first propose randomized smoothing as a certified adversarial defense and use it to train the first certifiably robust classifier for ImageNet. However, its guarantees are loose; Cohen et al. (2019) later show that adding Gaussian noise to classifiers enjoys a strict ℓ2 certification radius, with follow-ups presenting new RS-type techniques such as optimal perturbations at different norms and certified robustness definitions for different tasks. Alfarra et al. (2022) show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification region. Meanwhile, Eiras et al. (2021) extend isotropic smoothing distributions to generalized anisotropic counterparts. Hong & Hong (2022) adopt the same anisotropic definition and further design a noise generator to efficiently fine-tune the distributions. A recent work (Labarbarie et al., 2022) that relies on information-geometry techniques manages to prove larger regions than previous methods. However, all previous smoothing distributions D discard favorable prior knowledge, which in GM mainly refers to node locations and graph structure. Moreover, they at most certify a single image or graph and do not consider the combinatorial nature of the prediction in GM. Robustness of Graph Matching Approximate GM solvers have been developed over the decades, from traditional learning-free methods (Emmert-Streib et al., 2016) to learning-based ones (Yan et al., 2020). The seminal work (Zanfir & Sminchisescu, 2018) proposes a deep neural network based pipeline for visual GM, in which the visual appearance features are learned via CNN, with subsequent variants (Wang et al., 2019; Rolínek et al., 2020), among which a major improvement is to explore the structural information using different techniques, e.g.
GNNs, rather than only appearance features for node/edge attributes as done in (Zanfir & Sminchisescu, 2018). Our work treats the GM solver as a black box, regardless of whether it is learning-based or not, as long as it involves a continuous relaxation yielding the intermediate doubly-stochastic matrix. There is also an emerging line of research on adversarial attack and defense for (deep) GM. The earlier work (Yu et al., 2019b) proposes a robust graph matching (RGM) model to improve robustness against perturbations such as distortion, rotation, outliers and noise. Zhang et al. (2020) devise an adversarial attack model for deep GM networks that uses kernel density estimation to construct dense regions in which neighboring nodes are indistinguishable. Ren et al. (2021) devise two topology attacks specific to GM, inter-graph dispersion and intra-graph combination, and propose a resilient defense model. Ren et al. (2022) design an attack perturbing input images and their hidden graphs together for deep (visual) GM, and further propose appearance-aware regularizers that enlarge the disparity among similar keypoints for defense. However, the above defense methods are all heuristic and lack robustness certification in the face of unseen attacks.

3 PRELIMINARIES ON RANDOMIZED SMOOTHING The original RS (Cohen et al., 2019) can transform an arbitrary base classifier f into a smoothed classifier g that is certifiably robust under the ℓ2 norm. For any input x, the smoothed classifier g returns the most probable prediction of f for the random variable $\mathcal{N}(x, \sigma^2 I)$, defined by:
$$g(x) = \arg\max_{c\in\mathcal{Y}} \mathbb{P}(f(x+\varepsilon) = c), \tag{2}$$
where $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$ is isotropic Gaussian noise perturbing the input x. The certified radius within which the output is unchanged, i.e., $g(x+\delta) = c_A$, measuring the certified robustness, is then:
$$R = \|\delta\|_2 < \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{3}$$
where the most probable class $c_A$ is returned with probability $p_A$ and the 'runner-up' class with probability $p_B$; $\underline{p_A}$ and $\overline{p_B}$ are a lower bound on $p_A$ and an upper bound on $p_B$, respectively, and $\Phi^{-1}$ is the inverse of the standard Gaussian cumulative distribution function. The smoothed classifier g is robust around x within the ℓ2 radius in Eq. 3. To enhance the certification, Alfarra et al. (2022) and Eiras et al. (2021) propose isotropic and anisotropic distributions, respectively, that maximize the certified region. However, neither can explicitly encode prior information about the inputs (e.g., the graph topology in GM), i.e., their distributions are randomly initialized. Differently, we propose a correlation matrix that captures the structural information in graphs, and in turn construct a joint Gaussian distribution to replace the single Gaussian distribution, which not only makes the initial distribution physically meaningful but also eliminates the optimization process of finding the largest certified region.

4 METHODOLOGY We first define the smoothed GM solver (either a neural network or a traditional solver) and propose a robustness guarantee. We then devise a new certification strategy, SCR-GM, using a physically meaningful joint smoothing distribution, and give two new radii to aid in evaluating robustness. 4.1 PROBLEM FORMULATION For pairwise GM with input $(G_1, G_2, z^1, z^2)$, we mainly focus on the effect of perturbing the two sets of annotated nodes $z^1 \in \mathbb{R}^{n_1\times 2}$ and $z^2 \in \mathbb{R}^{n_2\times 2}$.
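Before specializing RS to GM, a minimal Monte Carlo sketch of the vanilla procedure from Sec. 3 (Eq. 2 and 3) may help fix ideas. It is a sketch under stated assumptions: `base_classifier` is a hypothetical black-box predictor, the sampling budget and confidence level are illustrative, and the Clopper-Pearson lower bound with the simplification $\overline{p_B} = 1 - \underline{p_A}$ follows the common practice of Cohen et al. (2019).

```python
import numpy as np
from scipy.stats import norm, beta

def smoothed_predict_and_radius(base_classifier, x, sigma, n=1000, alpha=0.001):
    """Vanilla RS (Cohen et al., 2019): estimate g(x) of Eq. 2 and the
    l2 radius of Eq. 3, upper-bounding p_B by 1 - p_A."""
    noise = np.random.randn(n, *x.shape) * sigma
    preds = np.array([base_classifier(x + e) for e in noise])
    classes, counts = np.unique(preds, return_counts=True)
    c_A = classes[np.argmax(counts)]         # empirical top-1 class
    n_A = counts.max()
    # One-sided (1 - alpha) Clopper-Pearson lower bound on p_A.
    p_A_lower = beta.ppf(alpha, n_A, n - n_A + 1)
    if p_A_lower <= 0.5:
        return None, 0.0                     # ABSTAIN: confidence too low
    # With p_B_upper = 1 - p_A_lower, Eq. 3 reduces to sigma * Phi^{-1}(p_A_lower).
    radius = sigma * norm.ppf(p_A_lower)
    return c_A, radius
```

SCR-GM keeps this overall estimate-then-bound structure but replaces the isotropic noise `sigma * randn(...)` with samples from the joint Gaussian $\mathcal{N}(0, \Sigma)$ constructed in Sec. 4.3.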
For visual GM (Zanfir & Sminchisescu, 2018; Ren et al., 2022), as widely considered in the literature, $z^1$ and $z^2$ are node coordinates obtained by human annotation or keypoint detectors. During certification under node perturbation, we take the node coordinates as the input while keeping the node/edge attributes unchanged; robustness guarantees for perturbed features are given in Appendix B. As discussed in Sec. 3, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. In this paper, we technically convert a whole matching problem into a set F of binary classification problems based on the intermediate matrix S:
$$F = \{f_i \mid f_i : (G_1, G_2, z^1, z^2) \to r_j,\ i \in n_1,\ j \in n_2\},$$
where $f_i$ denotes that the i-th node in $z^1$ matches the j-th node $r_j$ in $z^2$. Such a conversion allows us to certify the matching robustness of a single node, avoiding an imprecise certification of the entire matching matrix. The smoothed network $g_i$ returns whichever node in $z^2$ is most likely to match the node in $z^1$ when the input is perturbed by joint smoothing noise:
$$g_i = \arg\max_{r_j \in z^2} \mathbb{P}\left(f_i(G_1, G_2, z^1+\varepsilon, z^2) = r_j\right), \quad \varepsilon \sim \mathcal{N}(0, \Sigma),\ i \in n_1,\ j \in n_2. \tag{4}$$
For convenience, we abbreviate $f_i(G_1, G_2, z^1, z^2)$ as $f_i(z^1)$ and derive the results by perturbing $z^1$ only, as this is equivalent to robustness certification under joint perturbation of $z^1$ and $z^2$. Furthermore, we propose a method that defines the smoothed function for certifying the whole X, introduced in Appendix E. The noise ε follows a joint Gaussian distribution whose covariance encodes the correlation between nodes. Σ is a hyperparameter of the certified function which controls a robustness/accuracy trade-off and is detailed in Sec. 4.3. Note that for robustness certification we only consider those nodes for which the argmax in Eq. 4 has a unique solution.

4.2 ROBUSTNESS GUARANTEE Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ in $z^2$ is returned with probability $p_A = \max_{s_i \in S_i} s_i$, where $S_i$ is the i-th row of S. Similarly, the probability of the 'runner-up' node $r_B$ in $z^2$ is $p_B = \max_{s_i \in S_i, r_B \ne r_A} s_i$. We adopt an ℓ2 certified space to guarantee the robustness of graph matching.

Theorem 1 (ℓ2 certified space) Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon \sim \mathcal{N}(0,\Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy:
$$\mathbb{P}\left(f_i(z^1+\varepsilon) = r_A\right) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}\left(f_i(z^1+\varepsilon) = r_B\right), \tag{5}$$
then for $g_i(z^1+\delta) = r_A$ we obtain the certified ℓ2 space for the additive noise δ:
$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{6}$$
where $B^\top B = \Sigma$, $B \in \mathbb{R}^{n_1\times n_1}$ is a full-rank real symmetric matrix based on the node correlations in $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$. The detailed settings and properties of B and Σ are described in Sec. 4.3. The complete proof of Theorem 1 is presented in Appendix A.

Lemma 1 (Eigenvalue Comparison) For a real symmetric matrix $A \in \mathbb{R}^{n\times n}$ with maximum and minimum eigenvalues $\lambda_{\max}$ and $\lambda_{\min}$, it holds for all $X \in \mathbb{R}^n$ that $\lambda_{\min} X^\top X \le X^\top A X \le \lambda_{\max} X^\top X$.

Based on Lemma 1 and the certified space in Eq. 6, we can further obtain a certified ℓ2-norm radius:
$$\|\delta^\top B^{-1}\|^2 = \delta^\top \Sigma^{-1} \delta, \tag{7}$$
$$\delta^\top \Sigma^{-1} \delta \le \lambda_{\max}\, \delta^\top \delta, \tag{8}$$
$$\|\delta\|_{\text{lower}} < \frac{1}{2\sqrt{\lambda_{\max}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{9}$$
where $\lambda_{\max}$ is the maximum eigenvalue of $\Sigma^{-1}$. We let the upper bound of $\delta^\top\Sigma^{-1}\delta$ satisfy the constraint of Eq.
6, and therefore a lower bound on ∥δ∥ can be obtained as $\|\delta\|_{\text{lower}}$. Eq. 6 is an exact constraint on the perturbation space, which is a hyperellipsoid, while Eq. 9 describes the minor axis of this hyperellipsoid. Both are general expressions valid for arbitrary GM solvers and joint Gaussian smoothing distributions, as shown in Sec. 4.3.

4.3 JOINT SMOOTHING DISTRIBUTION In contrast to isotropic (Alfarra et al., 2022) and anisotropic (Eiras et al., 2021) distributions, SCR-GM reflects the structure of the graph while achieving efficiency by avoiding gradient-based optimization. We first construct the correlation matrix B based on the similarity between the nodes in $z^1$. B is a full-rank real symmetric matrix whose element $b_{mn}$ denotes the correlation between the m-th and n-th nodes in $z^1$. We define a similarity based on Euclidean distance:
$$b_{mn} = \frac{1}{1 + d_{mn}/\gamma}, \tag{10}$$
where $d_{mn}$ is the Euclidean distance between the m-th and n-th nodes, and γ is a normalization coefficient controlling the degree of correlation. We also use three other similarity measures to construct B (cosine, Pearson and Dice similarity), as detailed in Appendix C. Nodes in close proximity are more susceptible to perturbations of similar intensity, while perturbations added to nodes at larger distances are almost independent. The diagonal elements of B indicate the perturbation intensity at each node, while the off-diagonal elements capture the correlation between nodes. From $B^\top B = \Sigma$ we obtain the smoothing distribution Σ used to sample the additive input noise. Σ is a positive definite matrix, which guarantees the feasibility of the radii derived in this work. In contrast, the distribution in (Eiras et al., 2021) is a diagonal matrix with distinct diagonal elements, which cannot represent inter-node correlation, and the distribution in (Alfarra et al., 2022) is a diagonal matrix with identical diagonal elements, which treats all nodes indistinguishably. In fact, when inter-node correlations and differences in noise intensity are neglected, Σ degenerates into these two distributions. Σ is therefore a generalized setting that allows all distributions to be compared within the same framework. For comparison, we keep Σ at the same order of magnitude as the three previous distributions (Cohen et al., 2019; Eiras et al., 2021; Alfarra et al., 2022). We take a strategy similar to that of (Eiras et al., 2021) to ensure that:
$$\min_i \frac{1}{\lambda^x_i}\, r(x, \Sigma^x) \ \ge\ \min_i \theta^x_i\, r(x, \Theta^x), \tag{11}$$
where $\lambda^x_i$ is an eigenvalue of $(\Sigma^x)^{-1}$, $\Theta^x$ is the distribution of (Eiras et al., 2021), $\theta^x_i$ is a diagonal element of $\Theta^x$, and $r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))$. The four distributions mentioned above can thus be computed and analyzed incrementally. A visualization of the four distributions computed from the same original σ (Cohen et al., 2019) is shown in Fig. 1(a). Moreover, $\Sigma^x$ trades off certified accuracy against certified radius: the eigenvalue $\lambda^x_i$ is positively correlated with certified accuracy and negatively correlated with certified radius.

4.4 EVALUATING CERTIFICATES In Sec. 4.2, Eq. 6 describes the certified space, which is however difficult to quantify and compare. Although Eq. 9 gives a certified and quantifiable form, it ignores a large portion of the certified space. We therefore propose two additional, more informative radii to help evaluate robustness. Eq. 9 certifies the worst case of the input, Eq. 13 certifies all cases, and Eq.
12 reveals the maximum potential for immunity to perturbations. Combining the three radii allows a complete evaluation of a solver's robustness. By Lemma 1 and Eq. 7, we define the maximum radius of the certified space:
$$\|\delta\|_{\max} = \frac{1}{2\sqrt{\lambda_{\min}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{12}$$
where $\lambda_{\min}$ is the minimum eigenvalue of $\Sigma^{-1}$ and $\delta^\top\Sigma^{-1}\delta \ge \lambda_{\min}\,\delta^\top\delta$. $\|\delta\|_{\max}$ denotes the maximum ℓ2 norm over all possible perturbations. Inspired by (Eiras et al., 2021), we can also measure the certified space in terms of ellipsoidal volume. Using the volume formula $V(R) = r^n \sqrt{\pi^n}/\Gamma(n/2+1)\,\prod_{i=1}^n \xi_i$ (Kendall, 2004), where $\xi_i$ is the i-th radius of the ellipsoid, we obtain the proxy radius $\|\delta\|_{\text{volume}}$:
$$\|\delta\|_{\text{volume}} = r\,\frac{\sqrt{\pi}}{\sqrt[n]{\Gamma(n/2+1)}}\ \sqrt[2n]{1\Big/\prod_{i=1}^n \lambda_i}, \tag{13}$$
where $r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))$ and $\lambda_i$ are the eigenvalues of $\Sigma^{-1}$. When all $\lambda_i$ are equal, the certification result coincides with the traditional method (Cohen et al., 2019). As described in Sec. 4.2, the certified space is geometrically a hyperellipsoid: $\|\delta\|_{\text{lower}}$ is its minor axis, $\|\delta\|_{\max}$ its major axis, and $\|\delta\|_{\text{volume}}$ the proxy radius of a hypersphere with the same volume as the hyperellipsoid. The whole certification process is shown in Algorithm 1.

5 EXPERIMENTS We evaluate our strategy in three aspects: i) for deep graph matching, we compare the three radii of Eq. 9, Eq. 12 and Eq. 13 obtained by different certification methods on four GM networks; ii) for non-learning GM methods, we perform synthetic experiments on the widely used solver RRWM (Cho et al., 2010); iii) we reveal the impact of Σ on the certification results via an ablation study.

5.1 EVALUATION SETTINGS Following the GM literature (Wang et al., 2021), we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). All experiments are conducted on CPU (Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz) and GPU (GTX 2080 Ti).

Algorithm 1 Graph Matching Robustness Certification with SCR-GM.
Input: graph pair (G1, G2) with node sets z1 and z2; set of base classifiers F; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling count k0.
Output: matching set M and radius set ∆.
1: Obtain the data-dependent σ*_x by adapting an off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting (see details in Appendix C);
2: Obtain the anisotropic Θ^x by adapting an off-the-shelf ANCER method (Eiras et al., 2021) (see details in Appendix C);
3: Obtain B and the regularized Σ described in Sec. 4.3 according to Eq. 10 and Eq. 11;
4: Sample k0 noisy copies of the left node matrix: $z^1_1{}', \ldots, z^1_{k_0}{}' \sim \mathcal{N}(z^1, \Sigma)$;
5: Compute the matching result for the nodes in z1: $M = \{m_i \mid \arg\max_{r_j\in z^2} \sum_{k=1}^{k_0} \mathbb{1}\{f_i(z^1_k{}') = r_j\}\}$;
6: Sample k (k = 10 k0) noisy copies of G1's node matrix: $z^1_1{}', \ldots, z^1_k{}' \sim \mathcal{N}(z^1, \Sigma)$;
7: Calculate the one-sided confidence bounds $\underline{p_A}$ and $\overline{p_B}$ using M, as described in (Cohen et al., 2019), for every node in z1, yielding sets $P_A$ and $P_B$;
8: for $\underline{p_A}$ and $\overline{p_B}$ in $P_A$ and $P_B$ do
9:   if $\underline{p_A} < \frac{1}{2}$ then
10:    $m_i$ ← ABSTAIN; set $\|\delta_i\|_{\text{lower}} = \|\delta_i\|_{\max} = \|\delta_i\|_{\text{volume}} = 0$ and append to ∆;  // discard nodes with low matching confidence
11:  else
12:    compute the radii $\|\delta_i\|_{\text{lower}}$, $\|\delta_i\|_{\max}$ and $\|\delta_i\|_{\text{volume}}$ described in Sec. 4.4 and append to ∆;
13:  end if
14: end for
15: return M, ∆
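As a companion to Algorithm 1, the following sketch builds the correlation matrix B of Eq. 10 from 2D node coordinates, forms Σ = BᵀB, and computes the three radii of Eq. 9, 12 and 13 from the eigenvalues of Σ⁻¹. The coordinates, γ, and the confidence bounds are illustrative assumptions, and the Eq. 11 rescaling against ANCER's Θˣ is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammaln

def build_joint_distribution(z1, gamma=5.0):
    """Eq. 10: b_mn = 1 / (1 + d_mn / gamma); Sigma = B^T B.
    B is symmetric by construction and assumed full-rank as in the paper."""
    d = np.linalg.norm(z1[:, None, :] - z1[None, :, :], axis=-1)  # pairwise distances
    B = 1.0 / (1.0 + d / gamma)
    return B, B.T @ B

def certified_radii(Sigma, pA_lower, pB_upper):
    """Radii of Eq. 9 (lower), Eq. 12 (max) and Eq. 13 (volume proxy)."""
    lam = np.linalg.eigvalsh(np.linalg.inv(Sigma))   # eigenvalues of Sigma^{-1}
    n = len(lam)
    r = 0.5 * (norm.ppf(pA_lower) - norm.ppf(pB_upper))
    lower = r / np.sqrt(lam.max())                   # minor axis, Eq. 9
    upper = r / np.sqrt(lam.min())                   # major axis, Eq. 12
    # Eq. 13 in log space: r * sqrt(pi) * Gamma(n/2+1)^{-1/n} * (prod lam_i)^{-1/(2n)}
    volume = r * np.sqrt(np.pi) * np.exp(-gammaln(n / 2 + 1) / n
                                         - np.sum(np.log(lam)) / (2 * n))
    return lower, upper, volume

# Toy usage with illustrative coordinates and confidence bounds.
z1 = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 4.0]])
B, Sigma = build_joint_distribution(z1, gamma=5.0)
print(certified_radii(Sigma, pA_lower=0.9, pB_upper=0.05))
```

Nearby nodes (small $d_{mn}$) yield off-diagonal entries close to 1 and hence strongly correlated noise, while distant nodes receive nearly independent noise, matching the intuition stated in Sec. 4.3.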
We validate certified robustness on four representative deep GM models: GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), as well as the non-deep method RRWM (Cho et al., 2010). In this work, data processing and parameter settings are the same as in the original papers unless otherwise specified. The compared methods are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). Since the anisotropic method of (Hong & Hong, 2022) is the same as that of (Eiras et al., 2021) and (Hong & Hong, 2022) provides no code, we compare with (Eiras et al., 2021). We follow a procedure as similar as possible to that of (Cohen et al., 2019), where the certified accuracy (CA) is defined as:
$$CA(R) = \mathbb{E}_{x,y}\left[\mathbb{1}(\|\delta\| \ge R)\,\mathbb{1}\{g(x) = y\}\right].$$
In our method, g is the smoothed function defined in Eq. 4, x denotes an input node in the test set, and y is its ground-truth matching node. ∥δ∥ denotes the certified radius computed by Eq. 9, Eq. 12 or Eq. 13, R is the scale on the x-axis, and 1 is the indicator function. To quantify improvement, we use the Average Certified Radius (ACR) of (Zhai et al., 2020): $\mathbb{E}_{x,y}\left[\|\delta\|\,\mathbb{1}\{g(x) = y\}\right]$. We write $\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ for $\|\delta\|_{\text{lower}}$, $\|\delta\|_{\max}$ and $\|\delta\|_{\text{volume}}$ in the experiments.

5.2 EXPERIMENTS ON DEEP GRAPH MATCHING We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20} and compute the smoothing distributions $\sigma^*_x$ of DDRS and $\Theta^x$ of ANCER, with the iteration number in DDRS and ANCER set to 100. We then set the normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCR-GM. Fig. 1(b) shows the certified radius $\|\delta\|_{\text{lower}}$ and ACR on a sample for our method and the baselines, indicating that the overall certified robustness of our method is superior. We then evaluate our strategy on the four deep GM methods; the relationship between top-1 certified accuracy and the three radii ($\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$) is plotted in Fig. 2 for σ = 5. For the same radius on the x-axis, higher certified accuracy on the y-axis means better certified robustness. The certified accuracy of our method is sometimes slightly lower than the baselines when $\|\delta\|_{\text{lower}}$ is small. However, when $\|\delta\|_{\text{lower}}$ is large, the accuracy of the baselines decreases significantly or even collapses entirely, while our method maintains respectable accuracy. When evaluating with $\|\delta\|_{\max}$ and $\|\delta\|_{\text{volume}}$, the advantages of our method are more obvious. The ACR $\|\delta\|_{\text{lower}}$ of the four RS-type methods (σ = 5) on the four GM methods is shown in Tab. 1, indicating that our method achieves better certified robustness over the whole dataset. To show the impact of certified robustness on solver accuracy, Tab. 2 reports the accuracy of the base function, the standard accuracy, and the certified accuracy at different certified radii $\|\delta\|_{\text{lower}}$ for the NGMv2 algorithm on Pascal VOC. More results are given in Appendix D.1.

5.3 EXPERIMENTS ON NON-LEARNING GM METHODS For non-learning GM, we certify the effectiveness of SCR-GM with simulation experiments on the classic solver RRWM. First, we randomly generate two sets of node matrices and compute their affinity matrix K using a Gaussian kernel affinity function. We then obtain robustness results by perturbing node locations and edge features, respectively, using the RS and SCR-GM smoothing distributions. We set σ = 0.5 and σ = 0.004 respectively in Fig.
3(a) and 3(b). Our method achieves performance similar to the baseline at the same $\|\delta\|_{\text{lower}}$. Moreover, it performs better in the other two cases, which indicates that the space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment, because DDRS and ANCER require gradient-based optimization of networks and are therefore not applicable to non-learning GM solvers.

5.4 THE EFFECT OF THE JOINT SMOOTHING DISTRIBUTION First, we simplify B by retaining only the higher correlation values in the matrix according to a correlation ratio p and setting the remaining values to 0. The ratio is set to p ∈ {0%, 20%, 40%, 60%, 80%, 100%}, where 100% corresponds to SCR-GM retaining all correlation coefficients and 0% corresponds to ANCER without correlation coefficients. The results in Fig. 4(a) demonstrate the effectiveness of Σ for obtaining better certified robustness. Next, we verify the impact of the initial σ on Σ; the results are plotted in Fig. 4(b). The hyperparameter σ determines the scale of Σ, which controls a trade-off between certified robustness and accuracy.

6 CONCLUSION AND OUTLOOK We have proposed a definition of certified robustness for structural graph matching and designed a method, SCR-GM, that utilizes the correlation between nodes to construct a joint smoothing distribution. We obtain an ℓ2-norm certified space and radius for certification, and for evaluation we propose two additional radii based on eigenvalue properties. Experiments on deep GM networks and classic solvers show that our method achieves a state-of-the-art robustness guarantee. Potential impact & limitations. The current technique is confined to graphs in Euclidean space (specifically 2D graphs in the experiments); a more general formulation is QAP, where the perturbation may be added directly to the affinity matrix K. A significant direction is enabling robustness certification for combinatorial solvers, of which GM is one case. We hope this work can inspire subsequent research in this promising area, where theoretical results are welcome given the recent intensive empirical studies (Bengio et al., 2021; Yan et al., 2020).

A PROOFS OF THEOREM 1 Here we provide the complete proof of Theorem 1. We first prove the following Lemma 2, which is inspired by the Neyman-Pearson lemma for Gaussians derived in (Cohen et al., 2019), and introduce Lemma 3, which makes a random vector independent after a linear transformation.

Lemma 2 (Neyman-Pearson for Joint Gaussian Noise) Let $X \sim \mathcal{N}(x, \Sigma)$ and $Y \sim \mathcal{N}(x+\delta, \Sigma)$. Let $h : \mathbb{R}^d \to \{0,1\}$ be any deterministic or random function. Then:
1. If $S = \{k \in \mathbb{R}^d : \delta^\top\Sigma^{-1}k \le \beta\}$ for some β and $\mathbb{P}(h(X)=1) \ge \mathbb{P}(X \in S)$, then $\mathbb{P}(h(Y)=1) \ge \mathbb{P}(Y \in S)$.
2. If $S = \{k \in \mathbb{R}^d : \delta^\top\Sigma^{-1}k \ge \beta\}$ for some β and $\mathbb{P}(h(X)=1) \le \mathbb{P}(X \in S)$, then $\mathbb{P}(h(Y)=1) \le \mathbb{P}(Y \in S)$.

Proof. This lemma is the special case of Neyman-Pearson in which X and Y are joint Gaussians with means x and x+δ. It suffices to show that for any β there is some t > 0 for which:
$$\{k : \delta^\top\Sigma^{-1}k \le \beta\} = \left\{k : \frac{\mu_Y(k)}{\mu_X(k)} \le t\right\}, \qquad \{k : \delta^\top\Sigma^{-1}k \ge \beta\} = \left\{k : \frac{\mu_Y(k)}{\mu_X(k)} \ge t\right\}. \tag{14}$$
For ease of representation, we write $S \in \mathbb{R}^{d\times d}$ (with elements $s_{ij}$) for $\Sigma^{-1}$.
The likelihood ratio for this choice of X and Y is:
$$\frac{\mu_Y(k)}{\mu_X(k)} = \frac{\exp\left(-\frac{1}{2}(k-(x+\delta))^\top\Sigma^{-1}(k-(x+\delta))\right)}{\exp\left(-\frac{1}{2}(k-x)^\top\Sigma^{-1}(k-x)\right)} = \frac{\exp\left(-\frac{1}{2}\sum_{i}^{d}\sum_{j}^{d}(k_i-(x_i+\delta_i))\,s_{ij}\,(k_j-(x_j+\delta_j))\right)}{\exp\left(-\frac{1}{2}\sum_{i}^{d}\sum_{j}^{d}(k_i-x_i)\,s_{ij}\,(k_j-x_j)\right)} = \exp\left(\delta^\top\Sigma^{-1}k - \delta^\top\Sigma^{-1}x - \frac{1}{2}\delta^\top\Sigma^{-1}\delta\right) = \exp\left(\delta^\top\Sigma^{-1}k + b\right),$$
where b is the constant $b = -\delta^\top\Sigma^{-1}x - \frac{1}{2}\delta^\top\Sigma^{-1}\delta$. Therefore, given any β, we may take $t = \exp(\beta + b)$ and obtain:
$$\delta^\top\Sigma^{-1}k \le \beta \iff \exp\left(\delta^\top\Sigma^{-1}k + b\right) \le t, \qquad \delta^\top\Sigma^{-1}k \ge \beta \iff \exp\left(\delta^\top\Sigma^{-1}k + b\right) \ge t. \tag{15}$$

Lemma 3 (Joint Gaussian Distribution) Let $X \sim \mathcal{N}(\mu, \Sigma)$ be a random vector, where $\mu \in \mathbb{R}^n$ is the mean vector and the positive definite real symmetric matrix $\Sigma \in \mathbb{S}^{n\times n}_{++}$ is the covariance matrix of X. Then there is a full-rank matrix $B \in \mathbb{R}^{n\times n}$ such that $X = BZ + \mu$ with $Z \sim \mathcal{N}(0, I)$ and $B^\top B = \Sigma$.

We can now prove Theorem 1; recall:

Theorem 1. Let $f_i(z^1)$ be the node matching function, $g_i$ be defined as in Eq. 4, and $\varepsilon \sim \mathcal{N}(0,\Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy:
$$\mathbb{P}\left(f_i(z^1+\varepsilon) = r_A\right) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}\left(f_i(z^1+\varepsilon) = r_B\right), \tag{16}$$
then for $g_i(z^1+\delta) = r_A$ we obtain the certified ℓ2 space for the additive noise δ:
$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{17}$$
where $B^\top B = \Sigma$, $B \in \mathbb{R}^{n_1\times n_1}$ is a full-rank real symmetric matrix based on the physical relationships in $z^1$, and $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively.

To show that $g_i(z^1+\delta) = r_A$, it follows from the definition of $g_i$ that we need to show $\mathbb{P}(f_i(z^1+\delta+\varepsilon) = r_A) \ge \mathbb{P}(f_i(z^1+\delta+\varepsilon) = r_B)$. In the derivation, $r_B$ is in fact not just the 'runner-up' node but any node different from $r_A$. Define the random variables:
$$X := z^1 + \varepsilon \sim \mathcal{N}(z^1, \Sigma), \qquad Y := z^1 + \delta + \varepsilon \sim \mathcal{N}(z^1+\delta, \Sigma).$$
We know that:
$$\mathbb{P}(f_i(X) = r_A) \ge \underline{p_A}, \qquad \mathbb{P}(f_i(X) = r_B) \le \overline{p_B}. \tag{18}$$
Our goal is to show that
$$\mathbb{P}(f_i(Y) = r_A) > \mathbb{P}(f_i(Y) = r_B). \tag{19}$$
Following Lemma 2, define the half-spaces:
$$A = \left\{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\right\}, \qquad B = \left\{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\right\}.$$
Claim 1 shows that $\mathbb{P}(X \in A) = \underline{p_A}$, so $\mathbb{P}(f_i(X) = r_A) \ge \mathbb{P}(X \in A)$. Hence we may apply Lemma 2 with $h(z) := \mathbb{1}[f_i(z) = r_A]$ to conclude:
$$\mathbb{P}(f_i(Y) = r_A) \ge \mathbb{P}(Y \in A). \tag{20}$$
Similarly, $\mathbb{P}(f_i(X) = r_B) \le \mathbb{P}(X \in B)$, so applying Lemma 2 with $h(z) := \mathbb{1}[f_i(z) = r_B]$ gives:
$$\mathbb{P}(f_i(Y) = r_B) \le \mathbb{P}(Y \in B). \tag{21}$$
Combining Eq. 20 and 21, Eq. 19 holds whenever:
$$\mathbb{P}(f_i(Y) = r_A) \ge \mathbb{P}(Y \in A) > \mathbb{P}(Y \in B) \ge \mathbb{P}(f_i(Y) = r_B). \tag{22}$$
By Claims 3 and 4:
$$\mathbb{P}(Y \in A) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right), \qquad \mathbb{P}(Y \in B) = \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right). \tag{23}$$
Finally, $\mathbb{P}(Y \in A) > \mathbb{P}(Y \in B)$ if and only if:
$$\frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|} < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \quad \text{i.e.,}\quad \frac{\delta^\top(B^\top B)^{-1}\delta}{\|\delta^\top(B^\top B)^{-1}B\|} < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right).$$
Since B is a real symmetric matrix ($B^\top = B$), we finally obtain:
$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right),$$
which recovers the theorem statement.

A.1 LINEAR TRANSFORMATION AND DERIVATION We establish four identities based on linear transformations.

Claim 1. $\mathbb{P}(X \in A) = \underline{p_A}$.
Proof. Recall that $A = \{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\}$ and $X \sim \mathcal{N}(z^1, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(X \in A) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(X - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\right) = \mathbb{P}\left(\delta^\top\Sigma^{-1}B\,\mathcal{N}(0, I) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\right) = \Phi\left(\Phi^{-1}(\underline{p_A})\right) = \underline{p_A}.\end{aligned}$$

Claim 2. $\mathbb{P}(X \in B) = \overline{p_B}$.
Proof.
Recall that $B = \{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\}$ and $X \sim \mathcal{N}(z^1, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(X \in B) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(X - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\right) = \mathbb{P}\left(\delta^\top\Sigma^{-1}B\,\mathcal{N}(0, I) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\right) = 1 - \Phi\left(\Phi^{-1}(1-\overline{p_B})\right) = \overline{p_B}.\end{aligned}$$

Claim 3. $\mathbb{P}(Y \in A) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right)$.
Proof. Recall that $A = \{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\}$ and $Y \sim \mathcal{N}(z^1+\delta, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(Y \in A) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(B\,\mathcal{N}(0, I) + \delta) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p_A}) - \delta^\top\Sigma^{-1}\delta\right) = \Phi\left(\Phi^{-1}(\underline{p_A}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right).\end{aligned}$$

Claim 4. $\mathbb{P}(Y \in B) = \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right)$.
Proof. Recall that $B = \{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\}$ and $Y \sim \mathcal{N}(z^1+\delta, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(Y \in B) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(B\,\mathcal{N}(0, I) + \delta) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(1-\overline{p_B})\right)\\ &= \mathbb{P}\left(\mathcal{N}(0, 1) \ge \Phi^{-1}(1-\overline{p_B}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right) = \mathbb{P}\left(\mathcal{N}(0, 1) \le \Phi^{-1}(\overline{p_B}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right)\\ &= \Phi\left(\Phi^{-1}(\overline{p_B}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right).\end{aligned}$$

B ROBUSTNESS GUARANTEE WHEN PERTURBING FEATURES For GM with input $(G_1, G_2, z^1, z^2)$ and matching prediction X, we now focus on the effect of perturbing node features. Recall that the set F can be expressed as $F = \{f_i \mid f_i : (G_1, G_2, z^1, z^2) \to r_j,\ i \in n_1,\ j \in n_2\}$, where $G_1 = \{V_1, E_1\}$ and $G_2 = \{V_2, E_2\}$, $f_i$ denotes that the i-th node in $z^1$ matches the j-th node in $z^2$, and $r_j$ is the j-th node in $z^2$. We now define a new smoothed network $g_i$ that returns whichever node in $z^2$ is most likely to match the node in $z^1$ when the node features $V_1 \in \mathbb{R}^{d_v\times n_1}$ are perturbed by joint smoothing noise:
$$g_i = \arg\max_{r_j \in z^2} \mathbb{P}\left(f_i(V_1+\varepsilon, E_1, V_2, E_2, z^1, z^2) = r_j\right), \quad \varepsilon \sim \mathcal{N}(0, \Sigma),\ i \in n_1,\ j \in n_2. \tag{24}$$
For notational convenience, we abbreviate $f_i(V_1+\varepsilon, E_1, V_2, E_2, z^1, z^2)$ as $f_i(V_1)$. Suppose that when the base function $f_i$ solves for the optimal matching of node i in $z^1$, the most probable node $r_A$ is returned with probability $p_A = \max_{s_i \in S_i} s_i$, where $S_i$ is the i-th row of S, and the probability of the 'runner-up' node $r_B$ is $p_B = \max_{s_i \in S_i, r_B \ne r_A} s_i$. Similarly, we obtain an ℓ2 certified space guaranteeing the robustness of graph matching under feature perturbation.

Theorem 2 (ℓ2 certified space when perturbing features) Let $f_i(V_1)$ be the node matching function, $g_i$ be defined as in Eq. 24, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. If $\underline{p_A} \in [0,1]$ and $\overline{p_B} \in [0,1]$ satisfy:
$$\mathbb{P}(f_i(V_1+\varepsilon) = r_A) \ge \underline{p_A} \ge \overline{p_B} \ge \mathbb{P}(f_i(V_1+\varepsilon) = r_B), \tag{25}$$
then for $g_i(V_1+\delta) = r_A$ we obtain the certified ℓ2 space for the additive noise δ:
$$\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{26}$$
where $\underline{p_A}$ and $\overline{p_B}$ are the lower bound of $p_A$ and the upper bound of $p_B$, respectively. We set $B^\top B = \Sigma$, where $B \in \mathbb{R}^{(d_v n_1)\times(d_v n_1)}$ is a diagonal matrix. Unlike the correlation matrix in Eq. 10, B here is diagonal, similar to (Eiras et al., 2021); however, B is obtained from structure-based prior knowledge rather than the optimization process of (Eiras et al., 2021). We divide the node features $V_1$ into $n_1$ parts and add independent, identically distributed noise of the same intensity (denoted $b_m$, $m \in n_1$) to each part.
The noise intensity of the m-th part is defined as $b_m = \frac{d_m}{d}\,\sigma$, where d is the total distance between the nodes in $z^1$, $d_m$ is the distance between the m-th node and the other nodes, and the original σ is as described in (Cohen et al., 2019). This setting indicates that outlier points are more resistant to perturbation. Finally, we derive the same radius forms as Eq. 9, 12 and 13.

C EXPERIMENTAL SETUP In this work, we evaluate our strategy on deep graph matching networks and a classic non-learning solver. The procedures for obtaining the baseline networks and the evaluation methods are detailed below.

C.1 BASELINES OF CERTIFICATION METHODS In terms of certification, the baselines we consider are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We adapt the off-the-shelf DDRS and ANCER to obtain the data-dependent distribution $\sigma^*_x$ and the anisotropic distribution $\Theta^x$ for graph matching. We add noise to graphs and use $p_A = \max_{s_i\in S_i} s_i$ and $p_B = \max_{s_i\in S_i, r_B\ne r_A} s_i$ to calculate the gap value $\Phi^{-1}(p_A) - \Phi^{-1}(p_B)$ ($S_i$ is the i-th row of S). The optimization equations and parameters remain the same as in the original algorithms. We then use SCR-GM to obtain our joint distribution Σ. Finally, we use the Monte Carlo algorithms of (Cohen et al., 2019) to sample noise according to the different distributions and output the three radii derived in Sec. 4.2 and 4.4. The sample numbers n and n0 are set to 1000 and 100 given the efficiency of graph matching networks; other parameters are the same as in the original settings (Cohen et al., 2019). We also use a hypothesis test (Hung & Fithian, 2019), as in (Cohen et al., 2019), with α denoting the probability of obtaining an incorrect matching result. In this paper we set α = 0.001, so the certification holds with high probability (99.9%); since α can be made arbitrarily small, in theory our method is highly reliable.

C.2 EVALUATION ON DEEP GRAPH MATCHING For deep graph matching, we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). We follow the protocol of (Wang et al., 2021) and filter out poorly annotated images. In the experiments, we use 100 inputs (containing approximately 650 nodes) from 20 categories to certify matching robustness. We test our strategy on four representative deep graph matching methods: GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), using the checkpoints of these GM models collected by ThinkMatch (https://github.com/Thinklab-SJTU/ThinkMatch). We directly evaluate the certified robustness of these networks without fine-tuning.

C.3 EVALUATION ON THE NON-LEARNING METHOD For the non-learning method, we mainly evaluate on simulation data consisting of randomly generated node pairs. In the experiments, we use 100 inputs (each containing 5-10 nodes, chosen randomly) and evaluate the strategy on the classic solver RRWM (Cho et al., 2010). For evaluation, we extract node features and compute the affinity matrix K using a Gaussian kernel affinity function. We then perturb node locations and features separately and obtain the certified robustness results.

C.4 SIMILARITY MEASURES In addition to Eq. 10, we also use three other similarity measures to construct B: cosine similarity, Pearson similarity and Dice similarity, defined as follows.
For two points in the Euclidean space $\mathbb{R}^n$, $A = (a_1, a_2, \cdots, a_n)$ and $B = (b_1, b_2, \cdots, b_n)$, cosine similarity is defined as:
$$\text{Cosine Similarity}(A, B) = \frac{A\cdot B}{\|A\|_2\,\|B\|_2} = \frac{\sum_{i=1}^n a_i b_i}{\sqrt{\sum_{i=1}^n a_i^2}\,\sqrt{\sum_{i=1}^n b_i^2}} \in [-1, 1]. \tag{27}$$

Table 1: The ACR ∥δ∥lower of four different RS-type methods (σ = 5) and four GM methods on the Pascal VOC dataset.
          NGMv2    CIE-H    PCA-GM   GMN
RS        4.189    2.880    2.745    2.037
DDRS      5.936    3.505    3.307    2.741
ANCER     6.300    3.367    3.179    2.517
SCR-GM    7.107*   3.726*   3.455*   2.745*

Table 2: The accuracy of the base function (BA) of NGMv2, standard accuracy (SA) and certified accuracy (CA) at different certified radii ∥δ∥lower using the NGMv2 algorithm (σ = 5) on the Pascal VOC dataset.
          BA (%)   SA (%)   CA (%) R=3.5   CA (%) R=7.0   CA (%) R=10.5
SCR-GM    77.3     75.6     63.7           51.5*          36.4*
ANCER     77.3     76.5     64.2           49.1           23.8
DDRS      77.3     77.4*    66.6           50.5           18.2
RS        77.3     76.7     66.9*          0.0            0.0

Pearson similarity is defined as:
$$\text{Pearson Similarity}(A, B) = \frac{\mathrm{cov}(A, B)}{\sigma_A\,\sigma_B} = \frac{\sum_{i=1}^n (a_i - \bar{A})(b_i - \bar{B})}{\sqrt{\sum_{i=1}^n (a_i - \bar{A})^2}\,\sqrt{\sum_{i=1}^n (b_i - \bar{B})^2}} \in [-1, 1], \tag{28}$$
where $\bar{A} = \sum_{i=1}^n a_i / n$ and $\bar{B} = \sum_{i=1}^n b_i / n$. Dice similarity is defined as:
$$\text{Dice Similarity}(A, B) = \frac{2\sum_{i=1}^n a_i b_i}{\sum_{i=1}^n (a_i^2 + b_i^2)}, \tag{29}$$
where A and B cannot both be the zero point.

D EXPERIMENTAL RESULTS D.1 CERTIFICATION RESULTS OF DEEP GRAPH MATCHING D.1.1 PERTURBING NODE LOCATION For perturbed node locations, we report certified accuracy at the $\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ radii for each certification method, RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022), ANCER (Eiras et al., 2021) and SCR-GM; each network, GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021); and each original σ (σ = 1, 5, 10, 15 and 20). Figures 5, 6, 7 and 8 show the certified results on the respective graph matching networks. In addition, we certify the effect of the normalization parameter γ; Fig. 12 shows the results on the NGMv2 algorithm (Wang et al., 2021) with σ = 5. Tab. 3 shows the impact of different choices for constructing B on certified robustness: B constructed via Euclidean distance or Dice similarity performs better, and the advantage of Euclidean distance is more obvious at larger radii.

D.1.2 PERTURBING FEATURES For perturbed node features, we only compare our strategy with RS (Cohen et al., 2019), given the excessive inefficiency of DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We set the original σ to σ = 0.25, 0.5, 1, 1.5 and 2; other settings are as in Appendix D.1.1. Fig. 9 shows the certified results on the different graph matching networks when perturbing node features.

Table 3: The impact of different similarity measures for constructing B on certified robustness.
           SA (%)   CA (%) R=3.0   CA (%) R=6.0   CA (%) R=9.0   CA (%) R=12.0
Euclidean  75.2     64.3           53.3*          42.0*          24.1*
Dice       75.6*    65.5*          52.3           41.0           23.5
Cosine     75.6     65.1           52.0           41.0           23.5
Pearson    75.6     65.2           51.8           40.7           23.6

Figure 5: Top-1 certified accuracy for $\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ certification by different RS-type methods on NGMv2. The hyperparameter σ trades off certified accuracy and radii.

D.2 CERTIFICATION RESULTS OF NON-LEARNING METHODS In this section, we report certified accuracy at the $\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ radii for the certification method of (Cohen et al., 2019) and SCR-GM on RRWM (Cho et al., 2010). We set the original σ to 0.3, 0.4 and 0.5 when perturbing node locations, and to 0.001, 0.004 and 0.006 when perturbing features. Fig.
13 and 14 show the certified results on the classic solver.

E CERTIFIED ROBUSTNESS OF THE SOLUTION X'S STRUCTURE In Sec. 4, we focus on the certified robustness of node-level matching results rather than of the whole graph matching result. Our work treats the GM solver as a black box to obtain the relaxed matching S, then uses a post-binarization step to produce the output X and obtain the node matching function set F, whose robustness we certify. However, we can also certify the robustness of the full matrix X, which exploits more graph structure information and fully accounts for the constraints in Eq. 1.

E.1 DEFINITION Consider a graph matching problem mapping the input space to partial permutation matrices $\mathcal{X}$. As discussed above, randomized smoothing (RS) constructs a smoothed function g from an arbitrary base function f. When queried at the input $(G_1, G_2, z^1, z^2)$, the smoothed function g returns whichever matrix X the base function f is most likely to return when $z^1$ is perturbed by noise:
$$g = \arg\max_{X\in\mathcal{X}} \mathbb{P}\left(f(G_1, G_2, z^1+\varepsilon, z^2) = X\right), \quad \varepsilon \sim \mathcal{N}(0, \Sigma). \tag{30}$$
The additive noise ε follows a joint Gaussian distribution whose covariance Σ represents the correlations between nodes; Σ is a hyperparameter of the certified function controlling the robustness/accuracy trade-off.

E.2 ROBUSTNESS GUARANTEE FOR X We define a robustness guarantee with confidence $c \in [0,1]$, which ensures that the similarity between the output matrix of g and its ground-truth matrix $X_g$ is not less than c. Suppose that when the base function f solves $(G_1, G_2, z^1+\varepsilon, z^2)$, output matrices whose similarity to $X_g$ is not less than c are returned with probability p:
$$\mathcal{X}' = \left\{X_i \,\Big|\, \frac{X_i\cdot X_g}{X_g\cdot X_g} \ge c,\ X_i \in \mathcal{X}\right\}, \qquad p = \mathbb{P}(X_i \mid X_i \in \mathcal{X}'). \tag{31}$$
Our main result is that the smoothed function g is robust within an ℓ2 certified space, which also holds if we replace p with a lower bound $\underline{p}$.

Theorem 3 (ℓ2 certified space for X) Let f be a matching function, g be defined as in Eq. 30, and $\varepsilon \sim \mathcal{N}(0, \Sigma)$. Suppose $X_A \in \mathcal{X}'$ and $\underline{p} \in (\frac{1}{2}, 1]$ satisfy:
$$\mathbb{P}\left(f(G_1, G_2, z^1+\varepsilon, z^2) = X_A,\ X_A \in \mathcal{X}'\right) \ge \underline{p}. \tag{32}$$
Then we obtain the certified ℓ2 space for the additive noise δ:
$$\|\delta^\top B^{-1}\| < \Phi^{-1}(\underline{p}), \tag{33}$$
which guarantees $g(G_1, G_2, z^1+\delta, z^2) \in \mathcal{X}'$. As in Eq. 6, $B^\top B = \Sigma$ and $B \in \mathbb{R}^{n_1\times n_1}$ is a full-rank real symmetric matrix based on the node correlations in $z^1$; the detailed settings and properties of B and Σ are the same as in Sec. 4.3. Based on Lemma 1 and the certified space in Eq. 33, we further obtain a certified ℓ2-norm radius:
$$\|\delta\|_{\text{lower}} < \frac{1}{\sqrt{\lambda_{\max}}}\,\Phi^{-1}(\underline{p}), \tag{34}$$
where $\lambda_{\max}$ is the maximum eigenvalue of $\Sigma^{-1}$. We can also define a maximum radius of the certified space:
$$\|\delta\|_{\max} = \frac{1}{\sqrt{\lambda_{\min}}}\,\Phi^{-1}(\underline{p}), \tag{35}$$
where $\lambda_{\min}$ is the minimum eigenvalue of $\Sigma^{-1}$. The proxy radius $\|\delta\|_{\text{volume}}$ is:
$$\|\delta\|_{\text{volume}} = r\,\frac{\sqrt{\pi}}{\sqrt[n]{\Gamma(n/2+1)}}\ \sqrt[2n]{1\Big/\prod_{i=1}^n \lambda_i}. \tag{36}$$
The whole robustness certification process is shown in Algorithm 2.

Algorithm 2 Graph Matching Robustness Certification for X with SCR-GM.
Input: graph pair (G1, G2) with node sets z1 and z2; base graph matching function f; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling count k0; matrix similarity confidence c.
Output: matching result X̂_g and radius set R.
1: Obtain the data-dependent σ*_x by adapting an off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting (see details in Appendix C);
2: Obtain the anisotropic Θ^x by adapting an off-the-shelf ANCER method (Eiras et al., 2021) (see details in Appendix C);
3: Obtain B and the regularized Σ described in Sec. 4.3 according to Eq. 10 and 11;
4: Sample k0 noisy copies of G1's node matrix: $z^1_1{}', \ldots, z^1_{k_0}{}' \sim \mathcal{N}(z^1, \Sigma)$;
5: Compute the approximate ground-truth matrix $\hat{X}_g$;
6: Sample k (k = 10 k0) noisy copies of G1's node matrix: $z^1_1{}', \ldots, z^1_k{}' \sim \mathcal{N}(z^1, \Sigma)$, yielding an approximate output set $\hat{\mathcal{X}}$;
7: Calculate the one-sided confidence lower bound $\underline{p}$ using $\hat{\mathcal{X}}$ and Eq. 31;
8: if $\underline{p} < \frac{1}{2}$ then
9:   X ← ABSTAIN; set $\|\delta\|_{\text{lower}} = \|\delta\|_{\max} = \|\delta\|_{\text{volume}} = 0$ and append to R;  // discard matching results with low confidence
10: else
11:   compute the radii $\|\delta\|_{\text{lower}}$, $\|\delta\|_{\max}$ and $\|\delta\|_{\text{volume}}$ described in Sec. 4.4 and append to R;
12: end if
13: return $\hat{X}_g$, R

In fact, we cannot access the true $X_g$ and $\mathcal{X}$ during the certification stage, so we estimate them by Monte Carlo sampling. We first sample $f(G_1, G_2, z^1+\varepsilon, z^2)$ n0 times and sum the resulting permutation matrices to obtain $X_s$, then apply the Sinkhorn and Hungarian algorithms to approximate $X_g$. During certification, if the approximated $\hat{X}_g$ differs from the ground-truth matrix $X_g$, we consider the certification of that sample to have failed. We then sample $f(G_1, G_2, z^1+\varepsilon, z^2)$ n times and collect all resulting matrices into a set $\hat{\mathcal{X}}$ to approximate $\mathcal{X}$; when n is large, $\hat{\mathcal{X}}$ and $\mathcal{X}$ are relatively close.

E.3 EXPERIMENTS We evaluate our method on deep graph matching networks and non-learning solvers, with the same evaluation settings as in Sec. 5.1.

E.3.1 EXPERIMENTS ON DEEP GRAPH MATCHING We focus on certifying the robustness of node locality and compare $\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ certification using the four certification methods on four deep GM algorithms. We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20} and the confidence to c = 0.9, and compute the smoothing distributions $\sigma^*_x$ of DDRS and $\Theta^x$ of ANCER, with the iteration number in DDRS and ANCER set to 100. We then set the normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCR-GM. We then evaluate our strategy on the four deep GM methods; the relationship between top-1 certified accuracy and the three radii ($\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$) is plotted in Fig. 10. For the same radius on the x-axis, higher certified accuracy on the y-axis means better certified robustness. Our method outperforms the baselines on the NGMv2 algorithm, i.e., its certified accuracy is higher at equal radii ($\ell_2^{\text{lower}}$, $\ell_2^{\max}$ and $\ell_2^{\Sigma}$). On the CIE-H and PCA-GM algorithms, the certified accuracy of our method is sometimes slightly lower than the baselines when the $\ell_2^{\text{lower}}$ radius is small; however, when the $\ell_2^{\text{lower}}$ radius is large, the accuracy of the baselines decreases significantly or even fails completely while our method maintains respectable accuracy. When evaluating with the $\ell_2^{\max}$ and $\ell_2^{\Sigma}$ radii, the certified results of our method are similar to the baselines. On the GMN algorithm, our certification results are slightly worse than ANCER. In short, the certified robustness advantage of our method is more obvious for algorithms with better matching accuracy.

E.3.2 EXPERIMENTS ON NON-LEARNING GM METHODS For non-learning GM, we certify the effectiveness of SCR-GM with simulation experiments on the classic solver RRWM. First, we randomly generate two sets of node matrices and compute their affinity matrix K using a Gaussian kernel affinity function. We then obtain robustness results by perturbing node locations and edge features, respectively, using the RS and SCR-GM smoothing distributions.
We set σ = 0.1 and σ = 0.0001 respectively in Fig. 11(a) and 11(b). Our method achieves certified accuracy similar to the baseline at the same $\|\delta\|_{\text{lower}}$. However, it performs better on $\|\delta\|_{\text{volume}}$ and $\|\delta\|_{\max}$, which indicates that the space certified by our method is wider and its overall robustness is better. We only compare RS and SCR-GM in this experiment, because DDRS and ANCER require gradient-based optimization of networks and are not applicable to non-learning GM solvers.

E.4 PROOF To show that $g(G_1, G_2, z^1+\delta, z^2) \in \mathcal{X}'$, it follows from the definition of g that we need to show:
$$\mathbb{P}\left(f(G_1, G_2, z^1+\varepsilon+\delta, z^2) = X_A,\ X_A \in \mathcal{X}'\right) \ge \mathbb{P}\left(f(G_1, G_2, z^1+\varepsilon+\delta, z^2) = X_B,\ X_B \notin \mathcal{X}'\right).$$
We define two random variables:
$$I := (G_1, G_2, z^1+\varepsilon, z^2) = (G_1, G_2, \mathcal{N}(z^1, \Sigma), z^2), \qquad O := (G_1, G_2, z^1+\varepsilon+\delta, z^2) = (G_1, G_2, \mathcal{N}(z^1+\delta, \Sigma), z^2).$$
We know that:
$$\mathbb{P}(f(I) = X_A,\ X_A \in \mathcal{X}') \ge \underline{p}. \tag{37}$$
Our goal is to show that:
$$\mathbb{P}(f(O) = X_A,\ X_A \in \mathcal{X}') > \mathbb{P}(f(O) = X_B,\ X_B \notin \mathcal{X}'). \tag{38}$$
Following Lemma 2 (with a slight abuse of notation, the linear functional acts on the perturbed component $z^1$ of the input tuple), define the half-spaces:
$$A = \left\{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right\}, \qquad B = \left\{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right\}.$$
Claim 1 shows that $\mathbb{P}(I \in A) = \underline{p}$, therefore $\mathbb{P}(f(I) = X_A,\ X_A \in \mathcal{X}') \ge \mathbb{P}(I \in A)$. Hence we may apply Lemma 2 to conclude:
$$\mathbb{P}(f(O) = X_A,\ X_A \in \mathcal{X}') \ge \mathbb{P}(O \in A). \tag{39}$$
Similarly, $\mathbb{P}(f(I) = X_B,\ X_B \notin \mathcal{X}') \le \mathbb{P}(I \in B)$, and applying Lemma 2 gives:
$$\mathbb{P}(f(O) = X_B,\ X_B \notin \mathcal{X}') \le \mathbb{P}(O \in B). \tag{40}$$
Combining Eq. 39 and 40 yields the condition for Eq. 38:
$$\mathbb{P}(f(O) = X_A,\ X_A \in \mathcal{X}') \ge \mathbb{P}(O \in A) > \mathbb{P}(O \in B) \ge \mathbb{P}(f(O) = X_B,\ X_B \notin \mathcal{X}'). \tag{41}$$
By Claims 3 and 4:
$$\mathbb{P}(O \in A) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right), \qquad \mathbb{P}(O \in B) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right). \tag{42}$$
Finally, $\mathbb{P}(O \in A) > \mathbb{P}(O \in B)$ if and only if:
$$\frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|} < \Phi^{-1}(\underline{p}), \quad \text{i.e.,}\quad \frac{\delta^\top(B^\top B)^{-1}\delta}{\|\delta^\top(B^\top B)^{-1}B\|} < \Phi^{-1}(\underline{p}).$$
Since B is a real symmetric matrix ($B^\top = B$), we finally obtain $\|\delta^\top B^{-1}\| < \Phi^{-1}(\underline{p})$, which recovers the theorem statement.

E.4.1 LINEAR TRANSFORMATION AND DERIVATION We establish four identities based on linear transformations.

Claim 1. $\mathbb{P}(I \in A) = \underline{p}$.
Proof. Recall that $A = \{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(I \in A) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}\mathcal{N}(0, \Sigma) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right) = \mathbb{P}\left(\delta^\top\Sigma^{-1}B\,\mathcal{N}(0, I) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right) = \Phi\left(\Phi^{-1}(\underline{p})\right) = \underline{p}.\end{aligned}$$

Claim 2. $\mathbb{P}(I \in B) = 1 - \underline{p}$.
Proof. Recall that $B = \{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(I \in B) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}B\,\mathcal{N}(0, I) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right) = \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right)\\ &= 1 - \Phi\left(\Phi^{-1}(\underline{p})\right) = 1 - \underline{p}.\end{aligned}$$

Claim 3. $\mathbb{P}(O \in A) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right)$.
Proof. Recall that $A = \{k : \delta^\top\Sigma^{-1}(k - z^1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$ and the perturbed component of O follows $\mathcal{N}(z^1+\delta, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(O \in A) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(B\,\mathcal{N}(0, I) + \delta) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \le \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p}) - \delta^\top\Sigma^{-1}\delta\right) = \Phi\left(\Phi^{-1}(\underline{p}) - \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right).\end{aligned}$$

Claim 4. $\mathbb{P}(O \in B) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right)$.
Proof. Recall that $B = \{k : \delta^\top\Sigma^{-1}(k - z^1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\}$ and the perturbed component of O follows $\mathcal{N}(z^1+\delta, \Sigma)$. By Lemma 3:
$$\begin{aligned}\mathbb{P}(O \in B) &= \mathbb{P}\left(\delta^\top\Sigma^{-1}(B\,\mathcal{N}(0, I) + \delta) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p})\right)\\ &= \mathbb{P}\left(\|\delta^\top\Sigma^{-1}B\|\,\mathcal{N}(0, 1) \ge \|\delta^\top\Sigma^{-1}B\|\,\Phi^{-1}(\underline{p}) - \delta^\top\Sigma^{-1}\delta\right)\\ &= \mathbb{P}\left(\mathcal{N}(0, 1) \le -\Phi^{-1}(\underline{p}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right) = \Phi\left(-\Phi^{-1}(\underline{p}) + \frac{\delta^\top\Sigma^{-1}\delta}{\|\delta^\top\Sigma^{-1}B\|}\right).\end{aligned}$$
1. What is the focus and contribution of the paper regarding structural graph matching?
2. How does the proposed approach apply the generic randomized smoothing technique to graph matching?
3. What are the strengths and weaknesses of the paper, particularly in its formulation and application?
4. Do you have any questions or concerns regarding the paper's clarity, quality, novelty, and reproducibility?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies the problem of certifying the robustness of structural graph matching, where the goal is to return a matching (or assignment) between the nodes of two input graphs G1 and G2, based on node and edge attributes. A robust graph matching mechanism is supposed to be resilient to adversarial perturbations of the input. The approach taken in this paper is to cast the graph matching problem as a collection of classification problems, where each node in G1 corresponds to a classification problem, and the nodes of G2 are the possible classes. Thus, if v_i is a node in G1 and u_j is a node in G2, then classifying v_i into the class u_j means that the matching matches v_i to u_j. This formulation allows the paper to apply the generic randomized smoothing (RS) technique to graph matching. RS is a general technique for turning non-robust classification mechanisms into certifiably robust ones, by adding some perturbation noise to the point that needs to be classified, and classifying it to the most likely class (according to the input non-robust mechanism) of the perturbed point. Casting graph matching as a collection of classification problems as above allows applying RS, by adding noise to the nodes in G1. Furthermore, the covariance matrix of the Gaussian noise is chosen to reflect the structure of the input graphs, allowing for correlation between nodes, rather than adding independent noise, which according to the paper was the approach taken in prior work.
Strengths And Weaknesses
The paper appears to study a meaningful problem, and the proposed solution makes sense, up to some clarifications listed below.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well-written and I could largely understand and follow it with almost no background on the subject, yet some parts were harder to follow. For example, I'm not sure why in Section 4.1 the point attributes become 2-dimensional whereas the original formulation allowed an arbitrary dimension d, and whether this matters for the results in the paper. In the definition of F (same section) it was not clear what the range of the f_i's is (namely that r_j is a node in G2) until discerning this indirectly from the later derivations. I am not sure what I'm supposed to be seeing in Figure 1(b), which looks like four near-identical figures. And finally, I was not sure if and how your method makes sure that the robust matching mechanism maintains the marginal constraints of the relaxed matching (the last two constraints in Equation (1)).
Title Certified Robustness on Structural Graph Matching Abstract The vulnerability of graph matching (GM) to adversarial attacks has received increasing attention from emerging empirical studies, while the certified robustness of GM has not been explored. Inspired by the technique of randomized smoothing, in this paper, for the first time to our best knowledge, the certified robustness on GM is defined and a new certification strategy is designed called Structure-based Certified Robustness of Graph Matching (SCR-GM). Structural prior information of nodes is used to construct a joint smoothing distribution matrix with physical significance, which certifies a wider range than those obtained by previous iterative optimization methods. Furthermore, we propose a certified space that can be used to derive a strictly certified radius and two extra radii for evaluation. Experimental results on GM datasets reveal that our strategy achieves state-of-the-art l2 certified accuracy and regions. Source code will be made publicly available. 1 INTRODUCTION As a well-known NP-hard problem in its general form (Yan et al., 2016) with wide applications e.g. in computer vision and pattern recognition, graph matching (GM) refers to establishing correspondences among two (Cho et al., 2010) or multiple graphs (Jiang et al., 2021). Given two input graphs G1 = {V1,E1} and G2 = {V2,E2} with two sets of annotated nodes z1 ∈ Rn1×2 and z2 ∈ Rn2×2 (assumed in Euclidean space in this paper). Here, V1 ∈ Rdv×n1 and E1 ∈ Rde×m1 represent the feature matrix of n1 nodes and m1 edges (likewise for V2 and E2). The similarities between nodes and edges are formulated into a global affinity matrix K ∈ Rn1n2×n1n2 , whose diagonal and off-diagonal elements store the node-to-node and edge to-edge affinities. It aims to maximize the overall affinity score J of the matching nodes and the edges (Leordeanu & Hebert, 2005) in the form of quadratic assignment problem (QAP) (Loiola et al., 2007): max X J(X) = vec(X)⊤K vec(X), s.t. X ∈ {0, 1}n1×n2 ,X1n1 = 1n1 ,X⊤1n2 ≤ 1n2 , (1) where vec(X) denotes the column-wise vector of the matching solution X ∈ {0, 1}n1×n2 which can be a partial permutation matrix when n1 < n2. One common approach is to relax X’s raw binary constraint into a continuous one (between [0,1]), especially in the form of (partial) doubly-stochastic matrix S ∈ [0, 1]n1×n2 of which the sum of rows/columns is 1 (or zero for partial case). The final X can be obtained by the Hungarian algorithm (Burkard & Dell’Amico, 2009): X = Hung(S). Eq. 1 can also directly incorporate deep nets to obtain the learned affinity matrix K by learning the raw attributes of the graphs e.g. CNNs for images from which the visual graphs are extracted, as well as learning the structure via graph neural networks (GNNs) (Wang et al., 2019): K=NN(G1,G2). Studies on robustness of machine learning models have attracted wide attention, while the robustness of combinatorial solvers is an emergning and unmatured topic (Geisler et al., 2021; Lu et al., 2021). Under the deep GM paradigm, Ren et al. (2022) reveal that the combinatorial GM algorithms can also be sensitive to (additive) noise perturbations not only in appearance but also for structure, similar to the node classification models (Dai et al., 2018; Sun et al., 2018), and an empirical defense algorithm via an appearance-aware regularizer is proposed. So far, there still lacks principled certified defense to provide theoretical robustness guarantees for GM (let alone other combinatorial problems). 
In fact, existing certified robustness mechanisms (including randomized smoothing) in the graph domain (Rong et al., 2019; Bojchevski et al., 2020; Zügner & Günnemann, 2020; Jia et al., 2020) are confined to unconstrained node or graph-level classification/prediction within a single graph, which cannot be readily adopted for solving the cross-graph and combinatorial problems with structured output like the permutation matrix in GM. Certifiable robustness studies solvers whose prediction at any point x is verifiably constant within some set around x (Wong & Kolter, 2018). As a recent promising approach to achieve certified defense, randomized smoothing (RS) (Lecuyer et al., 2019; Cohen et al., 2019) provides a general robust guarantee applicable to large-scale neural networks against arbitrary attacks. Given an input x and a base classifier, randomized smoothing constructs a ‘smoothed classifier’ which is certifiable within the region characterized by x and the smoothing distribution D. RS has been used in certifying different models, e.g., image classification (Yang et al., 2020) and object detection in vision (Chiang et al., 2020). As an initiative for applying RS to GM1, in this paper we mainly consider two major challenges to solve. C1: varying-size of input graphs. It is not suitable to certify graphs with different sizes by using an identical smoothing distribution. C2: dependency of nodes in graph. The graph structure as a whole carries important information for certification. For the first challenge, we could refer to data-dependent certified robustness methods on image classification task. Some data-dependent methods (Alfarra et al., 2022; Eiras et al., 2021; Hong & Hong, 2022; Labarbarie et al., 2022) are proposed recently to vary and optimize the smoothing distributions D for larger certification region. Therefore, these methods can also be used to construct varying smoothing distributions for graphs with varying sizes. For the second challenge, we expect smoothing distributions constructing correlations between nodes in a graph, which is lacking for current randomized smoothing. Datadependent methods consider little on the heterogeneity and structure of inputs. For example, Alfarra et al. (2022) treat all pixels in one image equally, Eiras et al. (2021) treat pixels differently but cannot reveal their correlation. Thus none of them can overcome the second challenge. In this paper, we aim to solve certified robustness of GM, by analyzing the individual matching robustness of each node, instead of the whole variation of the output matching matrix X in Eq. 1. In particular, we study the node classification task when converting the relaxed solution S into the final matching X (see Eq. 1 and the discussion therein), as such the RS-type certification phase can be naturally introduced during the classification stage. Specifically, we propose the Structure-based Certified Robustness of Graph Matching (SCR-GM) which adopts joint Gaussian distribution instead of independent homogeneous distribution to construct the smoothing solvers. As adversarial attacks tend to perturb the strongly correlated nodes at the same time, the additive noise sampled from joint distribution with structural information and physical meaning can reveal this correlation. According to our theoretical analysis, we obtain the robustness guarantee on GM which describes a certified ℓ2-norm space ant its lower bound radius. In addition, we propose another two radii to help evaluate the robustness more comprehensively. 
We evaluate our strategy on Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and simulation dataset with random node sets. Experimental results reveal that our strategy outperforms the previous works (Cohen et al., 2019; Alfarra et al., 2022; Eiras et al., 2021) on structural GM for ℓ2 certified accuracy and regions. Our contributions are as follows: 1) We propose a general framework for incorporating existing RS-based techniques for certifying graph matching solvers, as long as (which is often the case for both learning-based and classic solvers) it involves a post-binarization step that converts the relaxed matching S (by an arbitrary relaxed GM solver) to node matching. 2) Based our proposed framework, we present the first definition, to our best knowledge (see Eq. 5) of certified robustness for a graph matching solver. 3) We propose a certification method dubbed structure-based certified robustness of GM (SCR-GM) (see Sec. 4.3). It uses jointly distributed noise to model dependent node matching certification. 4) A certified space and lower bound radius are derived to guarantee robustness of graph matching. Two radii are also devised for more complete evaluation of robustness, which complements potentially safe regions and largest feasible perturbations. 1Another challenge is how to better handle the constraints of X, which is related to how to extend the certification of the specific GM problem to other combinatorial solvers, which we leave for future work. 2 RELATED WORK We discuss works on certified robustness related to randomized smoothing and robustness of GM. Certified Robustness related to Randomized Smoothing Lecuyer et al. (2019) propose randomized smoothing firstly as a certified adversarial defense, and use it to train the first certifiably robust classifier for ImageNet. However, its guarantees are loose, then Cohen et al. (2019) shows that adding Gaussian noise to classifiers enjoys a strict ℓ2 certification radius, with follow-ups presenting new RS-type techniques, such as optimal perturbations at different norms, and certified robustness definitions for different tasks. Alfarra et al. (2022) show that the variance of the Gaussian distributions can be optimized at each input so as to maximize the certification region. Meanwhile, Eiras et al. (2021) extend isotropic smoothing distributions to generalized anisotropic counterparts. Hong & Hong (2022) adopt the same anisotropic defination and further design a noise generator to efficiently fine-tune the distributions. A recent work (Labarbarie et al., 2022) that relies on information geometry techniques manages to prove larger regions than previous methods. However, all previous smoothing distributions D deprive the favorable prior knowledge which mainly refers to the node location and graph structure in GM. Moreover, all of them at most certify a single image or graph but do not consider the combinatorial nature of the prediction as in GM. Robustness of Graph Matching Approximate GM solvers have been developed over the decades from traditional learning-free methods (Emmert-Streib et al., 2016) to learning-based ones (Yan et al., 2020). The seminal work (Zanfir & Sminchisescu, 2018) proposes a deep neural network based pipeline for visual GM, in which the visual appearance features are learned via CNN, with subsequent variants (Wang et al., 2019; Rolı́nek et al., 2020), among which a major improvement is to explore the structural information using different techniques e.g. 
GNN, rather than only appearance features for node/edge attributes as done in (Zanfir & Sminchisescu, 2018). Our work treats the GM solver as blackbox regardless it is learning-based or not, as long as it involves a continuous relaxation to obtain the intermediate double-stochastic matrix. There is also an emerging line of research on adversarial attack and defense on (deep) GM. The earlier work (Yu et al., 2019b) proposes a robust graph matching (RGM) model to improve the robustness against perturbations e.g. distortion, rotation, outliers and noise. Zhang et al. (2020) devise an adversarial attack model for deep GM networks, which uses kernel density estimation to construct dense regions such that the neighboring nodes are indistinguishable. Ren et al. (2021) devise two specific topology attacks in GM: inter-graph dispersion and intra-graph combination attacks, and propose a resilient defense model. Ren et al. (2022) design an attack perturbing input images and their hidden graphs together for deep (visual) GM, and further propose appearanceaware regularizers to enlarge the disparity among similar keypoints for defense. However, the above defense methods are all heuristic and lacks robustness certification in face of other unseen attacks. 3 PRELIMINARIES ON RANDOMIZED SMOOTHING The original RS (Cohen et al., 2019) can transform an arbitrary base classifier f into a smoothed classifier g that is certifiably robust under ℓ2 norm. For any input x, the smoothed classifier g returns the most probable prediction of f for the random variable N (x;σ2I), which is defined by: g(x) = argmax c∈Y P(f(x+ ε) = c), (2) where ε ∼ N ( 0, σ2I ) is isotropic Gaussian noise perturbing the input x. Then the certified radius within which the output is unchanged for g(x+ δ) = cA that measures the certified robustness is: R = ∥δ∥2 < σ 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) , (3) where the most probable class cA is returned with probability pA and the ‘runner-up’ class is returned with probability pB . pA and pB are lower bound and upper bound of pA and pB respectively, and Φ−1 is the inverse of the standard Gaussian cumulative distribution function. The smoothed classifier g is robust around x within the ℓ2 radius in Eq. 3. To enhance the certification, Alfarra et al. (2022) and Eiras et al. (2021) propose isotropic and anisotropic distributions to maximize the certified region respectively. However, none of them can explicitly encode the prior information of the inputs (e.g. the graph topology in GM) which means their distributions are randomly initialized. Differently, we propose a correlation matrix to reveal the structural information in graphs, and in turn construct a joint Gaussian distribution to replace the single Gaussian distribution, which not only makes the initial distribution physically meaningful, but also eliminates the optimization process of finding the largest certified region. 4 METHODOLOGY We first define the smoothed GM solver (be either neural network or traditional solver) and propose a robustness guarantee. We then devise a new certification strategy SCR-GM using a physically meaningful joint smoothing distribution. We also give two new radii to aid evaluating robustness. 4.1 PROBLEM FORMULATION For pairwise GM with input ( G1,G2, z1, z2 ) , we mainly focus on the effect of perturbing two sets of annotated nodes z1 ∈ Rn1×2 and z2 ∈ Rn2×2. 
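As a reference point for what follows, the isotropic certificate of Eq. 3 can be estimated by Monte Carlo over perturbed node coordinates. A hedged sketch (plain estimates of pA and pB for illustration; the actual evaluation replaces them with one-sided confidence bounds as in Cohen et al. (2019)):

```python
import numpy as np
from scipy.stats import norm

def certify_isotropic(f_i, z1, sigma, n=1000, seed=0):
    """Estimate the smoothed prediction and the RS radius of Eq. 3 for one
    node-matching function f_i under eps ~ N(0, sigma^2 I).
    f_i maps perturbed coordinates z1' to the matched index for node i."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        j = f_i(z1 + rng.normal(0.0, sigma, size=z1.shape))
        counts[j] = counts.get(j, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    pA = ranked[0][1] / n
    pB = ranked[1][1] / n if len(ranked) > 1 else 0.0
    pA, pB = min(pA, 1 - 1e-6), max(pB, 1e-6)  # keep Phi^{-1} finite
    radius = 0.5 * sigma * (norm.ppf(pA) - norm.ppf(pB))
    return ranked[0][0], radius
```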
For visual GM (Zanfir & Sminchisescu, 2018; Ren et al., 2022), as widely considered in the literature, z1 and z2 are node coordinates obtained by human annotation or keypoint detectors. During certification for node perturbations, we consider the node coordinates as the input while keeping the node/edge attributes unchanged. The robustness guarantees for perturbing features are given in Appendix B. As discussed in Sec. 3, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f. In this paper, we convert the whole matching problem into a set F of node-level classification problems based on the intermediate matrix S. The set F can be expressed as F = {f_i | f_i : (G1, G2, z1, z2) → r_j, i ∈ [n1], j ∈ [n2]}, where f_i denotes that the i-th node in z1 matches the j-th node in z2, and r_j denotes the j-th node in z2. Such a conversion allows us to certify the matching robustness of a single node, avoiding an imprecise certification of the entire matching matrix. The smoothed network g_i returns whichever node in z2 is most likely to match the i-th node in z1 when the input is perturbed by joint smoothing noise:

\[
g_i = \arg\max_{r_j \in z^2} \mathbb{P}\big(f_i(\mathcal{G}_1, \mathcal{G}_2, z^1 + \varepsilon, z^2) = r_j\big), \quad \varepsilon \sim \mathcal{N}(0, \Sigma),\ i \in [n_1],\ j \in [n_2]. \tag{4}
\]

For convenience, we simplify f_i(G1, G2, z1, z2) to f_i(z1) and derive the results by perturbing z1 only, as this is equivalent to robustness certification under joint perturbation of z1 and z2. Furthermore, we propose a method that defines the smoothed function for certifying the whole X, as introduced in Appendix E. The noise ε follows a joint Gaussian distribution whose covariance represents the correlation between nodes. In addition, Σ is a hyperparameter of the certified function which controls a robustness/accuracy trade-off and will be detailed in Sec. 4.3. Note that for robustness certification, we only consider those nodes that obtain a unique argmax solution in Eq. 4.

4.2 ROBUSTNESS GUARANTEE

Suppose that when the base function f_i solves for the optimal matching of node i in z1, the most probable node r_A in z2 is returned with probability p_A = max_{s_i ∈ S_i} s_i, where S_i is the i-th row of S. Similarly, the probability of the "runner-up" node r_B in z2 is denoted as p_B = max_{s_i ∈ S_i, r_B ≠ r_A} s_i. We adopt an ℓ2 certified space to guarantee the robustness of graph matching.

Theorem 1 (ℓ2 certified space) Let f_i(z1) be the node matching function, g_i be defined as in Eq. 4, and ε ∼ N(0, Σ). If a lower bound \(\underline{p_A} \in [0,1]\) and an upper bound \(\overline{p_B} \in [0,1]\) satisfy

\[
\mathbb{P}\big(f_i(z^1 + \varepsilon) = r_A\big) \geq \underline{p_A} \geq \overline{p_B} \geq \mathbb{P}\big(f_i(z^1 + \varepsilon) = r_B\big), \tag{5}
\]

then for g_i(z1 + δ) = r_A, we obtain the certified ℓ2 space for the additive perturbation δ:

\[
\|\delta^\top B^{-1}\| < \frac{1}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{6}
\]

where B^⊤B = Σ, B ∈ R^{n1×n1} is a full-rank, real symmetric matrix based on the node correlation in the node matrix z1, and \(\underline{p_A}\) and \(\overline{p_B}\) are a lower bound of p_A and an upper bound of p_B, respectively. The detailed settings and properties of B and Σ are described and illustrated in Section 4.3. The complete proof of Theorem 1 is presented in Appendix A.

Lemma 1 (Eigenvalue Comparison) For a real symmetric matrix A ∈ R^{n×n} with maximum and minimum eigenvalues λ_max and λ_min, it holds for all X ∈ R^n that λ_min X^⊤X ≤ X^⊤AX ≤ λ_max X^⊤X.

Based on Lemma 1 and the certified space in Eq. 6, we can further obtain a certified ℓ2-norm radius:

\[
\|\delta^\top B^{-1}\|^2 = \delta^\top \Sigma^{-1}\delta, \tag{7}
\]
\[
\delta^\top \Sigma^{-1}\delta \leq \lambda_{\max}\,\delta^\top\delta, \tag{8}
\]
\[
\|\delta\|_{\text{lower}} < \frac{1}{2\sqrt{\lambda_{\max}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{9}
\]

where λ_max is the maximum eigenvalue of Σ^{-1}. We let the upper bound of δ^⊤Σ^{-1}δ satisfy the constraint of Eq. 6, therefore a lower bound on ∥δ∥ is obtained as ∥δ∥_lower. Eq. 6 is an exact constraint on the perturbation space, which is a hyperellipsoid, while Eq. 9 describes the minor axis of this hyperellipsoid. Both are general expressions for arbitrary GM solvers and joint Gaussian smoothing distributions, as will be shown in Sec. 4.3.

4.3 JOINT SMOOTHING DISTRIBUTION

In contrast to the isotropic (Alfarra et al., 2022) and anisotropic (Eiras et al., 2021) distributions, SCR-GM reflects the structure of the graph while achieving efficiency by avoiding gradient-based optimization. We first construct the correlation matrix B based on the similarity between nodes in the matrix z1. B is a full-rank, real symmetric matrix whose element b_mn denotes the correlation between the m-th and n-th nodes in z1. We define a similarity based on the Euclidean distance as follows:

\[
b_{mn} = \frac{1}{1 + d_{mn}/\gamma}, \tag{10}
\]

where d_mn is the Euclidean distance between the m-th and n-th nodes, and γ is a normalization coefficient controlling the degree of correlation. We also use three other similarity measures to construct B, namely cosine, Pearson, and Dice similarity, as detailed in Appendix C. Nodes in close proximity are more susceptible to perturbations of similar intensity, while perturbations added to nodes at larger distances are almost independent. The diagonal elements of B indicate the perturbation intensity at each node, while the off-diagonal elements reveal the correlation between nodes. Then, via B^⊤B = Σ, we obtain the smoothing distribution Σ used to sample the additive noise for the input. Σ is a positive definite matrix, which guarantees the feasibility of the radii derived in this work. In contrast, the distribution in (Eiras et al., 2021) is a diagonal matrix with different diagonal elements, which cannot represent the correlation between nodes, and the distribution in (Alfarra et al., 2022) is a diagonal matrix with identical diagonal elements, which treats all nodes indistinguishably. In fact, when inter-node correlations and differences in noise intensity are neglected, Σ degenerates into the above two distributions. Therefore, Σ is a generalized setting that allows all distributions to be compared within the same framework.

For comparison, we need to keep Σ at the same order of magnitude as the previous three distributions (Cohen et al., 2019; Eiras et al., 2021; Alfarra et al., 2022). We take a strategy similar to that in (Eiras et al., 2021) to ensure that

\[
\min_i \frac{1}{\lambda_i^x}\, r(x, \Sigma^x) \geq \min_i \theta_i^x\, r(x, \Theta^x), \tag{11}
\]

where λ_i^x is an eigenvalue of (Σ^x)^{-1}, Θ^x is the distribution in (Eiras et al., 2021), θ_i^x is a diagonal element of Θ^x, and \(r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))\). Therefore, the four distributions mentioned above can be calculated and analyzed incrementally. A visualization of the four distributions computed from the same original σ (Cohen et al., 2019) is shown in Fig. 1(a). Moreover, Σ^x trades off certified accuracy against the certified radius: the eigenvalues λ_i^x are positively correlated with the certified accuracy and negatively correlated with the certified radius.

4.4 EVALUATING CERTIFICATES

In Sec. 4.2, Eq. 6 reveals the certified space, which is however difficult to quantify and compare. Though Eq. 9 gives a certified and quantifiable form, it ignores a large portion of the certified space. We therefore propose two more effective radii to help evaluate the robustness.
Eq. 9 is the certification for the worst case of the input, Eq. 13 is the certification for all cases, and Eq. 12 reveals the maximum potential for immunity to perturbations. Combining the three radii allows a complete evaluation of the robustness of solvers.

By Lemma 1 and Eq. 7, we define a maximum radius of the certified space:

\[
\|\delta\|_{\max} = \frac{1}{2\sqrt{\lambda_{\min}}}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right), \tag{12}
\]

where λ_min is the minimum eigenvalue of Σ^{-1} and δ^⊤Σ^{-1}δ ≥ λ_min δ^⊤δ. ∥δ∥_max denotes the maximum ℓ2 norm over all possible perturbations. Inspired by (Eiras et al., 2021), we can also measure the certified space in terms of ellipsoidal volume. Using the formula for the volume of an ellipsoid, \(V(R) = \frac{r^n \sqrt{\pi^n}}{\Gamma(n/2+1)} \prod_{i=1}^{n} \xi_i\) (Kendall, 2004), where ξ_i is the i-th radius of the ellipsoid, we obtain a proxy radius ∥δ∥_volume:

\[
\|\delta\|_{\text{volume}} = r\,\frac{\sqrt{\pi}}{\sqrt[n]{\Gamma(n/2+1)}}\,\sqrt[2n]{1\Big/\prod_{i=1}^{n}\lambda_i}, \tag{13}
\]

where \(r = \frac{1}{2}(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B}))\) and λ_i are the eigenvalues of Σ^{-1}. When all λ_i are equal, the certification result coincides with the traditional method (Cohen et al., 2019). As described in Section 4.2, the certified space is geometrically a hyperellipsoid: ∥δ∥_lower is its minor axis, ∥δ∥_max its major axis, and ∥δ∥_volume a proxy radius of a hypersphere with the same volume as the hyperellipsoid. The whole certification process is shown in Algorithm 1.

Algorithm 1 Graph Matching Robustness Certification with SCR-GM.
Input: Graph pair (G1, G2) with node matrices z1 and z2; set of base classifiers F; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling times k0.
Output: Matching set M and radius set ∆.
1: Obtain the data-dependent σ*_x by adapting (see Appendix C) the off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting;
2: Obtain the anisotropic Θ^x by adapting (see Appendix C) the off-the-shelf ANCER method (Eiras et al., 2021);
3: Obtain B and the regularized Σ described in Sec. 4.3 according to Eqs. 10 and 11;
4: Sample k0 noisy samples for the left node matrix: z1_(1), ..., z1_(k0) ∼ N(z1, Σ);
5: Compute the matching result for the nodes in z1: M = {m_i | argmax_{r_j ∈ z2} Σ_{k=1}^{k0} 1{f_i(z1_(k)) = r_j}};
6: Sample k (= 10 k0) noisy samples for G1's node matrix: z1_(1), ..., z1_(k) ∼ N(z1, Σ);
7: Calculate the one-sided confidence lower bound \(\underline{p_A}\) and upper bound \(\overline{p_B}\) using M, as described in (Cohen et al., 2019), for every node in z1; collect them in sets P_A and P_B;
8: for each pair (\(\underline{p_A}\), \(\overline{p_B}\)) in (P_A, P_B) do
9:    if \(\underline{p_A}\) < 1/2 then
10:       m_i ← ABSTAIN; set ∥δ_i∥_lower = ∥δ_i∥_max = ∥δ_i∥_volume = 0 and append to ∆; // discard nodes with low matching confidence
11:   else
12:       Compute the radii ∥δ_i∥_lower, ∥δ_i∥_max and ∥δ_i∥_volume described in Sec. 4.4 and append to ∆;
13:   end if
14: end for
15: return M, ∆

5 EXPERIMENTS

We evaluate our strategy in three aspects: i) for deep graph matching, we compare the three radii in Eq. 9, Eq. 12 and Eq. 13 obtained by different certification methods on four GM networks; ii) for non-learning GM methods, we perform synthetic experiments on the widely-used solver RRWM (Cho et al., 2010); iii) we reveal the impact of Σ on the certification results via an ablation study.

5.1 EVALUATION SETTINGS

Following the GM literature (Wang et al., 2021), we mainly evaluate our method on the Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). All experiments are conducted on CPU (Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz) and GPU (GTX 2080 Ti GPU).
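Steps 3, 4 and 12 of Algorithm 1 are straightforward to realize. A minimal numpy sketch under our assumptions: the Eq. 11 rescaling is omitted, B from Eq. 10 is assumed full rank (as in the paper), and the per-node covariance is applied independently to each coordinate dimension.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import norm

def build_B(z1, gamma=5.0):
    """Correlation matrix of Eq. 10: b_mn = 1/(1 + d_mn/gamma), with d_mn
    the Euclidean distance between nodes m and n of z1 (shape (n1, 2)).
    Symmetric by construction; assumed full rank."""
    d = np.linalg.norm(z1[:, None, :] - z1[None, :, :], axis=-1)
    return 1.0 / (1.0 + d / gamma)

def sample_joint_noise(B, n_dims=2, rng=None):
    """Draw eps ~ N(0, Sigma) with Sigma = B^T B, independently for the
    x- and y-coordinates (our assumption); cov(Bg) = Sigma since B = B^T."""
    rng = np.random.default_rng(rng)
    return B @ rng.standard_normal((B.shape[0], n_dims))

def scr_gm_radii(B, pA_low, pB_up):
    """The three radii of Eqs. 9, 12, 13 from the spectrum of Sigma^{-1}."""
    sig_eigs = np.linalg.eigvalsh(B.T @ B)  # eigenvalues of Sigma
    lam = 1.0 / sig_eigs                    # eigenvalues of Sigma^{-1}
    r = 0.5 * (norm.ppf(pA_low) - norm.ppf(pB_up))
    n = lam.size
    lower = r / np.sqrt(lam.max())          # Eq. 9  (minor axis)
    upper = r / np.sqrt(lam.min())          # Eq. 12 (major axis)
    volume = (r * np.sqrt(np.pi)            # Eq. 13 (volume proxy)
              / np.exp(gammaln(n / 2 + 1) / n)
              * np.exp(-np.sum(np.log(lam)) / (2 * n)))
    return lower, upper, volume
```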
We validate the certified robustness on four representative deep GM models: GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a), NGMv2 (Wang et al., 2021) and also a non-deep method RRWM (Cho et al., 2010). In this work, data processing and parameter settings are the same as the original papers unless otherwise specified. The compared methods include RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). Since the anisotropic method in (Hong & Hong, 2022) is the same as in (Eiras et al., 2021) and (Hong & Hong, 2022) does not provide any code, we choose to compare with (Eiras et al., 2021). We follow the procedure as much similar as possible to that in (Cohen et al., 2019). In (Cohen et al., 2019), the certified accuracy (CA) is defined as: CA(R) = Ex,y [1(∥δ∥ ≥ R)1{g(x) = y}] . In our method, g represents the smoothed function defined in Eq. 4, x denotes the input node in test set, and y is its ground truth matching node. ∥δ∥ denotes the certified radius calculated by Eq. 9, Eq. 12, Eq. 13, R is the scale of x-axis, 1 is an indicator function. To quantify the improvement, we use Average Certified Radius (ACR) in (Zhai et al., 2020): Ex,y [∥δ∥1{g(x) = y}] . We use ℓlower2 , ℓmax2 and ℓΣ2 to express ∥δ∥lower, ∥δ∥max and ∥δ∥volume in the experiments. 5.2 EXPERIMENTS ON DEEP GRAPH MATCHING We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20}, and calculate the smoothing distribution of σ∗x in DDRS and Θ x in ANCER, where iteration number in DDRS and ANCER is equal to 100. Then we set normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCRGM. Fig. 1(b) shows the certified radius ∥δ∥lower and ACR on a sample of our method and baselines which indicates that the overall certified robustness of our methods is superior to the baselines. Then we evaluate our strategy on four deep GM methods, the relationship of top-1 certified accuracy and three radii (ℓlower2 , ℓ max 2 and ℓ Σ 2 ) are plotted in Fig. 2, which only shows the case of σ = 5. When the radius on x-axis is the same, the higher the certified accuracy on y-axis, the better the certified robustness. The certified accuracy of our method is slightly lower sometimes than baselines when ∥δ∥lower is small. However, when ∥δ∥lower is large, the accuracy of baseline decreases significantly or even fails completely while our method maintains a more respectable accuracy. When evaluating using ∥δ∥max and ∥δ∥volume, the advantages of our method are more obvious. We calculate the ACR ∥δ∥lower of four different RS-type methods (σ = 5) and four GM methods as shown in Tab. 1, which indicates that our method shows a better certified robustness performance over the whole dataset. To show the impact of certified robustness on the accuracy of the solvers, we use Tab. 2 to show the accuracy of base function, the standard accuracy and certified accuracy of different certified radius ∥δ∥lower using NGMv2 algorithm on Pascal VOC dataset. More results are detailed in Appendix D.1. 5.3 EXPERIMENTS ON NON-LEARNING GM METHODS For non-learning GM, we certify the effectiveness of SCR-GM using simulation experiments on classic non-learning solver RRWM. First we randomly generate two sets of node matrices and calculate their affinity matrix K using Gaussian kernel affinity function. Then we obtain the robustness results by perturbing node locations and edge features respectively using RS and SCR-GM smoothing distributions. We set σ = 0.5 and σ = 0.004 respectively in Fig. 
3(a) and 3(b). Our method has similar performance corresponding to the same ∥δ∥lower as the baseline. Moreover, it performs better on the other two cases which indicates that the guarantee space certified by our method is wider and its overall robustness is better. We only compare the results using RS and SCR-GM in this experiment, because DDRS and ANCER require the gradient optimization of networks, and they are not applicable to non-learning GM solvers. 5.4 THE EFFECT OF JOINT SMOOTHING DISTRIBUTION First, we simplify B by retaining only the higher correlation values in the matrix according to the correlation radio p and setting other values to 0. The radio is set to p ∈ {0%, 20%, 40%, 60%, 80%, 100%} where 100% represents SCR-GM retaining all the correlation coefficients and 0% represents ANCER without correlation coefficients. Results in Fig. 4(a) demonstrate the effectiveness of the Σ which can be used to get a better certified robustness properties. Then, we verify the impact of initial σ for Σ and the results are plotted in Fig. 4(b). Hyperparameter σ determines the scale of Σ which controls a trade-off between certified robustness and accuracy. 6 CONCLUSION AND OUTLOOK We have proposed a definition of certified robustness on structural graph matching and design a method SCR-GM that utilizes the correlation between nodes to construct a joint smooth distribution. We obtain ℓ2 norm certified space and radius for certification. For evaluation, we propose two additional radii by eigenvalue properties. Experiments on deep GM networks and classic solvers show that our method achieves a state-of-art robustness guarantee. Potential impact & limitations. The currently technique is confined with the graph in Euclidean space (and specifically 2D graphs for experiments), a more general formulation is QAP where the perturb may be directly added on the affinity matrix K. A significant direction is enabling robustness certification on the combinatorial solvers whereby GM is one of such cases. We hope this work can inspire subsequent research in this promising area where theoretical results are welcomed given the recent intensive empirical studies (Bengio et al., 2021; Yan et al., 2020). A PROOFS OF THEOREM 1 Here we provide the complete proof for Theorem 1. We first prove the following Lemma 2 which is inspired by the Neyman-Pearson for Gaussians lemma derived in (Cohen et al., 2019) and introduce Lemma 3 which makes random vector independent after linear transformation. Lemma 2 (Neyman-Pearson for Joint Gaussian Noise) Let X ∼ N (x,Σ) and Y ∼ N (x+ δ,Σ). Let h : Rd → {0, 1} be any deterministic or random function. Then: 1. If S = { k ∈ Rd : δTΣ−1k ≤ β } for some β and P(h(X) = 1) ≥ P(X ∈ S), then P(h(Y ) = 1) ≥ P(Y ∈ S). 2. If S = { k ∈ Rd : δTΣ−1k ≥ β } for some β and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y ) = 1) ≤ P(Y ∈ S). Proof. This lemma is the special case of Neyman-Pearson when X and Y are joint Gaussian noises with means x and x+ δ. It suffices to simply show that for any β, there is some t > 0 for which:{ k : δTΣ−1k ≤ β } = { z : µY (k) µX(k) ≤ t } , { k : δTΣ−1k ≥ β } = { z : µY (k) µX(k) ≥ t } . (14) For ease of representation, we use S ∈ Rd×d (with element sij) instead of Σ−1. 
The likelihood ratio for this choice of X and Y turns out to be: uY (k) uX(k) = exp ( − 12 (k − (x+ δ)) TΣ−1(k − (x+ δ)) ) exp ( − 12 (k − x)TΣ−1(k − x) ) = exp ( − 12 ∑d i ∑d j (ki − (xi + δi)) sij (kj − (xj + δj)) ) exp ( − 12 ∑d i ∑d j (ki − xi) sij (kj − xj) ) = exp ( δTΣ−1k − δTΣ−1x− 1 2 δTΣ−1δ ) = exp ( δTΣ−1k + b ) ≤ t, where b is a constant, specifically b = −δTΣ−1x− 12δ TΣ−1δ. Therefore given any β, we may take t = exp(β + b) and get this correlation: δTΣ−1k ≤ β ⇐⇒ exp (β + b) ≤ t, δTΣ−1k ≥ β ⇐⇒ exp (β + b) ≥ t. (15) Lemma 3 (Joint Gaussian Distribution) If there is a random vector X ∼ N (µ,Σ), where µ ∈ Rn is the mean vector. A positive semi-definite real symmetric matrix Σ ∈ Sn×n++ is the covariance matrix of X . There is a full rank matrix B ∈ Rn×n, which makes X = BZ + µ, Z ∼ N (0, I) and B⊤B = Σ. Then we can prove Theorem 1, recall: Theorem 1. Let fi(z1) be node matching function, gi be defined as in Eq. 4, and ε ∼ N (0,Σ). If pA ∈ [0, 1] and pB ∈ [0, 1] satisfy: P ( fi(z 1 + ε) = rA ) ≥ pA ≥ pB ≥ P(fi(z1 + ε) = rB). (16) Then for gi(z1 + δ) = rA, we can get the certified ℓ2 space for the addictive noise δ: ∥δ⊤B−1∥ < 1 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) , (17) where B⊤B = Σ, B ∈ Rn1×n1 is a full rank and real symmetric matrix based on the physical relationships in node matrix z1, and pA and pB are the lower bound of pA and the upper bound of pB , respectively. To show that gi(z1 + δ) = rA, it follows from the definition of gi that we need to show that P ( fi(z 1 + δ + ε) = rA ) ≥ P(fi(z1 + δ + ε) = rB). In the derivation, rB is actually not just “runner-up” node, but any node that is different from rA. We define the random variables: X := z1 + ε = N ( z1,Σ ) , Y := z1 + δ + ε = N ( z1 + δ,Σ ) . We know that: P (fi(X) = rA) ≥ pA, P (fi(X) = rB) ≤ pB . (18) Our goal is to show that P (fi(Y ) = rA) > P (fi(Y ) = rB) . (19) According to lemma 2, we can define the half-spaces: A = { k : δTΣ−1(k − z1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )} , B = { k : δTΣ−1(k − z1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) } . Claim 1 shows that P(X ∈ A) = pA, therefore we can get P (fi(X) = rA) ≥ P(X ∈ A). Hence we may apply Lemma 2 with h(z) := 1 [fi(z) = rA] to conclude: P (fi(Y ) = rA) ≥ P(Y ∈ A). (20) Similarly, we obtain P (fi(X) = rB) ≤ P(X ∈ B). Hence we may apply Lemma 2 with h(z) := 1 [fi(z) = rB ] to conclude: P (fi(Y ) = rB) ≤ P(Y ∈ B). (21) Combining Eq. 20 and 21, we can get the conditions of Eq. 19: P (f(Y ) = rA) ≥ P(Y ∈ A) > P(Y ∈ B) ≥ P (f(Y ) = rB) . (22) According to Claim 3 and Claim 4, we can get P(Y ∈ A) and P(Y ∈ B) as: P(Y ∈ A) = Φ ( Φ−1 ( pA ) − δ TΣ−1δ ∥δTΣ−1B∥ ) , P(Y ∈ B) = Φ ( Φ−1 (pB) + δTΣ−1δ ∥δTΣ−1B∥ ) . (23) Finally, we obtain that P(Y ∈ A) > P(Y ∈ B) if and only if: δTΣ−1δ ∥δTΣ−1B∥ < 1 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) , δT (BTB)−1δ ∥δT (BTB)−1B∥ < 1 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) . Because B is a real symmetric matrix (BT = B), we can finally get: ∥δTB−1∥ < 1 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) , which recovers the theorem statement. A.1 LINEAR TRANSFORMATION AND DERIVATION We obtain four equations based on linear transformation: Claim 1. P(X ∈ A) = pA Proof. Recall that A = { k : δTΣ−1(k − z1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )} and X ∼ N (z1,Σ), according to lemma 3, we can get: P(X ∈ A) = P ( δTΣ−1(X − z1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( δTΣ−1N (0,Σ) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( δTΣ−1BN (0, I) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( ∥δTΣ−1B∥N (0, 1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = Φ ( Φ−1 ( pA )) = pA. Claim 2. P(X ∈ B) = pB Proof. 
Recall that B = { k : δTΣ−1(k − z1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) } and X ∼ N (z1,Σ), according to lemma 3, we can get: P(X ∈ B) = P ( δTΣ−1(X − z1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( δTΣ−1N (0,Σ) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( δTΣ−1BN (0, I) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( ∥δTΣ−1B∥N (0, 1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = 1− Φ ( Φ−1 (1− pB) ) = pB . Claim 3. P(Y ∈ A) = Φ ( Φ−1 ( pA ) − δ TΣ−1δ ∥δTΣ−1B∥ ) Proof. Recall that A = { k : δTΣ−1(k − z1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )} and Y ∼ N (z1 + δ,Σ), according to lemma 3, we can get: P(Y ∈ A) = P ( δTΣ−1(Y − z1) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( δTΣ−1N (δ,Σ) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( δTΣ−1(BN (0, I) + δ) ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( δTΣ−1BN (0, I) + δTΣ−1δ ≤ ∥δTΣ−1B∥Φ−1 ( pA )) = P ( ∥δTΣ−1B∥N (0, 1) ≤ ∥δTΣ−1B∥Φ−1 ( pA ) − δTΣ−1δ ) = P ( N (0, 1) ≤ Φ−1 ( pA ) − δ TΣ−1δ ∥δTΣ−1B∥ ) = Φ ( Φ−1 ( pA ) − δ TΣ−1δ ∥δTΣ−1B∥ ) . Claim 4. P(Y ∈ B) = Φ ( Φ−1 (pB) + δTΣ−1δ ∥δTΣ−1B∥ ) Proof. Recall that B = { k : δTΣ−1(k − z1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) } and Y ∼ N (z1 + δ,Σ), according to lemma 3, we can get: P(Y ∈ B) = P ( δTΣ−1(Y − z1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( δTΣ−1N (δ,Σ) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( δTΣ−1(BN (0, I) + δ) ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( δTΣ−1BN (0, I) + δTΣ−1δ ≥ ∥δTΣ−1B∥Φ−1 (1− pB) ) = P ( ∥δTΣ−1B∥N (0, 1) ≥ ∥δTΣ−1B∥Φ−1 (1− pB)− δTΣ−1δ ) = P ( N (0, 1) ≥ Φ−1 (1− pB)− δTΣ−1δ ∥δTΣ−1B∥ ) = P ( N (0, 1) ≤ Φ−1 (pB) + δTΣ−1δ ∥δTΣ−1B∥ ) = Φ ( Φ−1 (pB) + δTΣ−1δ ∥δTΣ−1B∥ ) B ROBUSTNESS GUARANTEE WHEN PERTURBING FEATURES For GM with input ( G1,G2, z1, z2 ) for matching prediction X, we now focus on the effect of per- turbing node features. Recall that the set F can be expressed as: F = {fi|fi : ( G1,G2, z1, z2 ) → rj , i ∈ n1, j ∈ n2} where G1 = {V1,E1} and G2 = {V2,E2}, fi represents that the i-th node in z1 matches the j-th node in z2, rj is the j-th node in z2. Now we define a new smoothed network gi that returns whichever node in z2 is most likely to match the node in z1 when perturbing node features V1 ∈ Rdv×n1 by joint smoothing distribution noise: gi = argmax rj∈z2 P(fi ( V1 + ε,E1,V2,E2, z 1, z2 ) = rj), where ε ∼ N (0,Σ) , i ∈ n1, j ∈ n2. (24) For notational convenience, we simplify fi ( V1 + ε,E1,V2,E2, z 1, z2 ) to fi(V1). Suppose that when the base function fi solves for the optimal matching of node i in z1, the most probable node rA is returned with probability pA = maxsi∈Si si, where Si is the i-th row of S. The probability of ”runner-up” node rB is denoted as pB , pB = maxsi∈Si,rB ̸=rA si. Similarly, we obtain an ℓ2 certified space to guarantee robustness of graph matching when perturbing features as follows. Theorem 2 (ℓ2 certified space when perturbing features) Let fi(V1) be node matching function, gi be defined as in Eq. 24, and ε ∼ N (0,Σ). If pA ∈ [0, 1] and pB ∈ [0, 1] satisfy: P (fi(V1 + ε) = rA) ≥ pA ≥ pB ≥ P(fi(V1 + ε) = rB), (25) then for gi(V1 + δ) = rA, we can get the certified ℓ2 space for the addictive noise δ: ∥δ⊤B−1∥ < 1 2 ( Φ−1 ( pA ) − Φ−1 (pB) ) , (26) where pA and pB are the lower bound of pA and the upper bound of pB respectively. We set B⊤B = Σ where B ∈ R(dv×n1)×(dv×n1) is a diagonal matrix. Different from the correlation matrix in Eq. 10, B is a diagonal matrix similar as (Eiras et al., 2021). However, B is obtained by structure-based prior knowledge rather than the optimization process in (Eiras et al., 2021). We divide the node feature V1 into n1 parts and add independent identically distributed noise of the same intensity (denoted by bm,m ∈ n1) to each part. 
The noise intensity of m-th part bm is defined as bm = dmd σ where d is the whole distance between nodes in z 1, dm is the distance between the m-th node and other nodes, the original σ is the same as described in (Cohen et al., 2019). This setting indicates that outlier points are more resistant to perturbation. Finally we can derive the same radius forms as Eq. 9, 12 and 13. C EXPERIMENTAL SETUP In this work, we evaluate our strategy on deep graph matching networks and a classic non-learning solver. The procedures to obtain the baseline networks and the evaluation methods are detailed as follows. C.1 BASELINE OF CERTIFICATION METHODS In terms of certification, the baselines we considered are RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We adapt the off-the-shelf DDRS and ANCER to obtain the data-dependent distribution σ∗x and anisotropic distribution Θ x for graph matching. We add noise to graphs and use pA = maxsi∈Si si and pB = maxsi∈Si,rB ̸=rA si to calculate the gap value Φ−1(pA) − Φ−1(pB) (Si is the i-th row of S). The optimization equations and parameters remain the same as the original algorithms. Then we use SCR-GM to get our joint distribution Σ. Finally, we use the Monte Carlo algorithms in (Cohen et al., 2019) to sample noises according to different distributions and output three radii derived in Sec. 4.2 and 4.4. The sample number n and n0 are set to 1000 and 100 due to the efficiency of graph matching networks, and other parameters are the same as the original network settings (Cohen et al., 2019). We also use hypothesis test (Hung & Fithian, 2019) as in (Cohen et al., 2019) by using α to represent the probability of getting incorrect matching results. In this paper, we set α = 0.001, so there is a high probability (99.9% in this paper) to ensure the certification. α can be set arbitrarily small hence in theory our method is highly reliable. C.2 EVALUATION ON DEEP GRAPH MATCHING For deep graph matching, we mainly evaluate our method on Pascal VOC dataset (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009). We follow the protocol of (Wang et al., 2021) and filter out poorly annotated images. In the experiment, we use 100 inputs (containing approximately 650 nodes) of 20 categories to certify the matching robustness. We check our strategy on four representative deep graph matching methods: GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), while use the checkpoints of these GM models collected by ThinkMatch (https: //github.com/Thinklab-SJTU/ThinkMatch). We directly evaluate the certified robustness of these networks without fine-tune training. C.3 EVALUATION ON NON-LEARNING METHOD For non-learning method, we mainly evaluate our method on simulation data which contains randomly generated node pairs. In the experiment, we use 100 inputs (each contains 5-10 nodes randomly) and evaluate the strategy on classic solver RRWM (Cho et al., 2010). For evaluation, we extract node features and calculate the affinity matrix K using Gaussian kernel affinity function. Then we perturb node locations and features separately and obtain the certified robustness results. C.4 SIMILARITY MEASURES In addition to Eq. 10, we also uses other three similarity measures to construct B including cosine similarity, pearson similarity and dice similarity as follows. 
For two points in the Euclidean space Rn: A = (a1, a2, · · · , an) and B = (b1, b2, · · · , bn), cosine similarity is defined as follows: Cosine Similarity(A,B) = A ·B ∥A∥2 · ∥B∥2 = ∑n i=1 aibi√∑n i=1 a 2 i · √∑n i=1 b 2 i ∈ [−1, 1]. (27) Table 1: The ACR ∥δ∥lower of four different RS-type methods (σ = 5) and four GM methods on Pascal VOC dataset. NGMv2 CIE-H PCA-GM GMN RS 4.189 2.880 2.745 2.037 DDRS 5.936 3.505 3.307 2.741 ANCER 6.300 3.367 3.179 2.517 SCR-GM 7.107* 3.726* 3.455* 2.745* Table 2: The accuracy of base function (BA) of NGMv2, standard accuracy (SA) and certified accuracy (CA) of different certified radius ∥δ∥lower using NGMv2 algorithm (σ = 5) on Pascal VOC dataset. BA (%) SA (%) CA (%) R=3.5 CA (%) R=7.0 CA (%) R=10.5 SCR-GM 77.3 75.6 63.7 51.5* 36.4* ANCER 77.3 76.5 64.2 49.1 23.8 DDRS 77.3 77.4* 66.6 50.5 18.2 RS 77.3 76.7 66.9* 0.0 0.0 Pearson similarity is defined as follows: Pearson Similarity(A,B) = cov(A,B) σA · σB = ∑ i=1 ( ai − Ā ) · ( bi − B̄ )√∑n i=1 ( ai − Ā )2 ·√∑ni=1 (bi − B̄)2 ∈ [−1, 1], (28) where Ā = ∑n i=1 ai/n, B̄ = ∑n i=1 bi/n. Dice similarity is defined as follows: Dice Similarity(A,B) = 2 ∑n i=1 aibi∑n i=1 (a 2 i + b 2 i ) , (29) where A and B can not be zero point at the same time. D EXPERIMENTAL RESULTS D.1 CERTIFICATION RESULTS OF DEEP GRAPH MATCHING D.1.1 PERTURBING NODE LOCATION For perturbing node location, we report certified accuracy at ℓlower2 , ℓ max 2 and ℓ Σ 2 radii, for each certified method RS (Cohen et al., 2019), DDRS (Alfarra et al., 2022), ANCER (Eiras et al., 2021) and SCR-GM, each network GMN (Zanfir & Sminchisescu, 2018), PCA-GM (Wang et al., 2019), CIE-H (Yu et al., 2019a) and NGMv2 (Wang et al., 2021), each original σ (σ = 1, 5, 10, 15 and 20). Figures 5, 6, 7 and 8 show certified results on different graph matching networks, respectively. In addition, we certify the effect of the normalization parameter γ, and Fig. 12 shows the results on NGMv2 (Wang et al., 2021) algorithm and σ is set to 5. Tab. 3 shows the impact of different choices for constructing B on the certified robustness. B constructed by Euclidean distance and Dice similarity perform better on the certified robustness. The advantage of B constructed by Euclidean distance is more obvious when the radius is larger. D.1.2 PERTURBING FEATURES For perturbing node features, we only compare our strategy with RS (Cohen et al., 2019) due to the excessive inefficiency of DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021). We set original σ as σ = 0.25, 0.5, 1, 1.5 and 2, other settings are the same as Appendix D.1.1. Fig. 9 shows certified results on different graph matching networks when perturbing node features. Table 3: The impact of different similarity measures for constructing B on the certified robustness. SA (%) CA (%) R=3.0 CA (%) R=6.0 CA (%) R=9.0 CA (%) R=12.0 Euclidean 75.2 64.3 53.3* 42.0* 24.1* Dice 75.6* 65.5* 52.3 41.0 23.5 Cosine 75.6 65.1 52.0 41.0 23.5 Pearson 75.6 65.2 51.8 40.7 23.6 Figure 5: Top-1 certified accuracy on ℓlower2 , ℓ max 2 and ℓ Σ 2 certification by different RS-type methods on NGMv2 methods. Hyperparameter σ trade-off the certified accuracy and radii. D.2 CERTIFICATION RESULTS OF NON-LEARNING METHODS In this section, we report certified accuracy at ℓlower2 , ℓ max 2 and ℓ Σ 2 radii, for certified method (Cohen et al., 2019) and SCR-GM on RRWM (Cho et al., 2010). We set original σ as σ = 0.3, 0.4 and 0.5 when perturbing node locations, while we set σ = 0.001, 0.004 and 0.006 when perturbing features. Fig. 
13 and 14 show certified results on the classic solver. E CERTIFIED ROBUSTNESS OF THE SOLUTION X ’S STRUCTURE In Sec. 4, we focus on the certified robustness of node matching results in the graph rather than the whole graph matching result. Our work treats the GM solver as blackbox to get the relaxed matching S, then uses a post-binarization step to to modify the output format X and get the node matching function set F. Then we certify the robustness of F. However, we can also certify the robustness of the full matrix X which is able to utilize more graph structure information, as well as fully consider the constrains in Eq. 1. E.1 DEFINITION Consider a graph matching problem from input space to partial permutation matrices X . As discussed above, randomized smoothing (RS) is a technique for constructing a smoothed function g from an arbitrary base function f . When queried at the input ( G1,G2, z1, z2 ) , the smoothed function g returns whichever matrix X the base function f is most likely to return when z1 is perturbed by noise: g = argmax X∈X P(f ( G1,G2, z1 + ε, z2 ) = X), where ε ∼ N (0,Σ) . (30) The distribution of additive noise ε is a joint Gaussian distribution matrix whose variance Σ represents the correlations between nodes. In addition, Σ is a hyperparameter for certified function which controls the robustness/accuracy trade-off. E.2 ROBUSTNESS GUARANTEE FOR X We define a robustness guarantee with confidence c ∈ [0, 1], which ensures that the similarity between the output matrix calculated by g and its ground truth matrix Xg is not less than a confidence c. Suppose that when the base function f solves ( G1,G2, z1 + ε, z2 ) , its output matrices whose similarity to Xg is not less than c are returned with probability p: X ′ = { Xi ∣∣∣∣∣Xi ·XgXg ·Xg ≥ c,Xi ∈ X } , p = P(Xi|Xi ∈ X ′) (31) Our main result is that smoothed function g is robust within a ℓ2 certified space, which also holds if we replace p with a lower bound p. Theorem 3 (ℓ2 certified space for X ) Let f be a matching function, g be defined as in Eq. 30, and ε ∼ N (0,Σ). Suppose XA ∈ X ′ and p ∈ ( 12 , 1] satisfy: P(f ( G1,G2, z1 + ε, z2 ) = XA, XA ∈ X ′) ≥ p. (32) Then we can get the certified ℓ2 space for the addictive noise δ: ∥δ⊤B−1∥ < Φ−1 ( p ) , (33) which guarantees g ( G1,G2, z1 + δ, z2 ) ∈ X ′. In Eq. 6, B⊤B = Σ and B ∈ Rn1×n1 is a full rank and real symmetric matrix based on the node correlation in node matrix z1. The detail settings and properties of B and Σ are the same as in Section 4.3. Based on Lemma 1 and the certified space in Eq. 33, we can further obtain a certified ℓ2 norm radius: ∥δ∥lower < 1√ λmax ( Φ−1 ( p )) , (34) where λmax is the maximum eigenvalue of Σ−1. We can define a maximum radius of the certified space: ∥δ∥max = 1√ λmin ( Φ−1 ( p )) ), (35) Algorithm 2 Graph Matching Robustness Certification for X with SCR-GM. Input: Graph pair (G1,G2) of size z1 and z2; base function f of graph matching; DDRS (Alfarra et al., 2022) and ANCER (Eiras et al., 2021); original σ; normalization coefficient γ; sampling times k0; matrix similarity confidence c. Output: Matching result X̂g and radius R. 1: Obtain data-dependent σ∗x by adapting (see details in Appendix C) an off-the-shelf DDRS method (Alfarra et al., 2022) to the graph setting; 2: Obtain Anisotropic Θx by adapting (see details in Appendix C)) an off-the-shelf ANCER method (Eiras et al., 2021); 3: Obtain B and regularized Σ described in Sec. 4.3 according to Eq. 10 and 11; 4: Sample k0 noisy samples for G1’s node matrix:z11 ′ , . . . 
, z1k0 ′ ∼ N ( z1,Σ ) . 5: Compute the approximate ground truth matrix X̂g . 6: Sample k(k = 10k0) noisy samples for G1’s node matrix:z11 ′ , . . . , z1k ′ ∼ N ( z1,Σ ) and get an approximate output set X̂ . 7: Calculate one-sided confidence lower bound p using set X̂ and Eq. 31. 8: if p < 12 then 9: X ABSTAIN; set ∥δi∥lower=∥δi∥max=∥δi∥volume=0, append R; 10: //Discard matching result with low confidence. 11: else 12: Compute radius ∥δi∥lower, ∥δi∥max and ∥δi∥volume described in Sec. 4.4, append R. 13: end if 14: return X̂g , R where λmin is the minimum eigenvalue of Σ−1. The proxy radius ∥δ∥volume is as follows: ∥δ∥volume = r √ π/ n√Γ(n/2 + 1) 2n √√√√1/ n∏ i λi . (36) The whole robustness certification process is shown in Algorithm 2. In fact, we cannot get the real Xg and X during certification stage, so we use Monte Carlo sampling to estimate it. We first sample f ( G1,G2, z1 + ε, z2 ) with n0 times and add all permutation matrices to get Xs, then we use Sinkhorn and Hungarian algorithm to approximate Xg . During certification, if the approximated X̂g is not the same as the ground truth matrix Xg , we consider that the certification for this sample has failed. Then we sample f ( G1,G2, z1 + ε, z2 ) with n times and put all possible matrices into set X̂ to approximate X . When n is large, X̂ and X are relatively close. E.3 EXPERIMENTS We evaluate our methods on deep graph matching networks and non-learning solvers. The evaluation settings are the same as in Sec. 5.1. E.3.1 EXPERIMENTS ON DEEP GRAPH MATCHING We focus on certifying the robustness of node locality and compare ℓlower2 , ℓ max 2 , ℓ Σ 2 certification using four certified methods on four deep GM algorithms. We first set the initial σ of RS to σ ∈ {1, 5, 10, 15, 20}, the confidence c = 0.9 and calculate the smoothing distribution of σ∗x in DDRS and Θ x in ANCER, where iteration number in DDRS and ANCER is equal to 100. Then we set normalization coefficient γ = 5 and compute the joint distribution matrix Σ of SCR-GM. Then we evaluate our strategy on four deep GM methods, the relationship of top-1 certified accuracy and three radii (ℓlower2 , ℓ max 2 and ℓ Σ 2 ) is plotted in Fig. 10. When the radius on x-axis is the same, the higher the certified accuracy on y-axis, the better the certified robustness. Our method outperforms the baseline on NGMv2 algorithm, which means that the certified accuracy is higher when the radii (ℓlower2 , ℓ max 2 and ℓ Σ 2 ) is the same. On CIE-H and PCA-GM algorithms, the certified accuracy of our method is slightly lower sometimes than baselines when ℓlower2 radius is small. However, when ℓlower2 radius is large, the accuracy of baselines decrease significantly or even fail completely while our method maintains a more respectable accuracy. When evaluating using ℓmax2 and ℓ Σ 2 radii, the certified results of our method are similar as the baselines. On GMN algorithm, our certification results are a bit worse than ANCER. In short, the certified robustness advantage of our method is more obvious on the algorithm with better matching accuracy itself. E.3.2 EXPERIMENTS ON NON-LEARNING GM METHODS For non-learning GM, we certify the effectiveness of SCR-GM using simulation experiments on classic non-learning solver RRWM. First we randomly generate two sets of node matrices and calculate their affinity matrix K using Gaussian kernel affinity function. Then we obtain the robustness results by perturbing node locations and edge features respectively using RS and SCR-GM smoothing distributions. 
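For reproducing this synthetic setup, the Gaussian-kernel affinity can be formed from pairwise distances. A sketch under our assumptions (the paper does not specify its exact kernel bandwidth or indexing convention):

```python
import numpy as np

def gaussian_kernel_affinity(z1, z2, beta=1.0):
    """Affinity K in R^{(n1 n2) x (n1 n2)} for two 2D node sets:
    K[(i,a),(j,b)] = exp(-beta * (d1[i,j] - d2[a,b])^2), comparing
    intra-graph pairwise distances (edge-to-edge affinities).
    Row-major pairing (i,a) -> i*n2 + a; adapt to your vec(.) order."""
    n1, n2 = len(z1), len(z2)
    d1 = np.linalg.norm(z1[:, None] - z1[None, :], axis=-1)
    d2 = np.linalg.norm(z2[:, None] - z2[None, :], axis=-1)
    diff = d1[:, None, :, None] - d2[None, :, None, :]
    return np.exp(-beta * diff ** 2).reshape(n1 * n2, n1 * n2)
```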
We set σ = 0.1 and σ = 0.0001 respectively in Fig. 11(a) and 11(b). Our method has similar performance of the certified accuracy corresponding to the same ∥δ∥lower with the baseline. However, our method performs better on ∥δ∥volume and ∥δ∥max which indicates that the guarantee space certified by our method is wider and its overall robustness is better. We only compare the results using RS and SCR-GM in this experiment, because DDRS and ANCER require the gradient optimization of networks, and they are not applicable to non-learning GM solvers. E.4 PROOF To show that g ( G1,G2, z1 + δ, z2 ) ∈ X ′, it follows from the definition of g that we need to show that: P(f ( G1,G2, z1 + ε+ δ, z2 ) = XA, XA ∈ X ′) ≥ P(f ( G1,G2, z1 + ε+ δ, z2 ) = XB , XB /∈ X ′). We define two random variables: I := ( G1,G2, z1 + ε, z2 ) = ( G1,G2,N ( z1,Σ ) , z2 ) O := ( G1,G2, z1 + ε+ δ, z2 ) = ( G1,G2,N ( z1 + δ,Σ ) , z2 ) . We know that: P(f(I) = XA, XA ∈ X ′) ≥ p. (37) Our goal is to show that P(f(O) = XA, XA ∈ X ′) > P(f(O) = XB , XB /∈ X ′). (38) According to lemma 2, we can define the half-spaces: A = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≤ ∥δTΣ−1B∥Φ−1 ( p )} , B = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≥ ∥δTΣ−1B∥Φ−1 ( p )} . Claim 1 shows that P(I ∈ A) = p, therefore we can get P(f(I) = XA, XA ∈ X ′) ≥ P(I ∈ A). Hence we may apply Lemma 2 to conclude: P(f(O) = XA, XA ∈ X ′) ≥ P(O ∈ A). (39) Similarly, we obtain P(f(I) = XB , XB /∈ X ′) ≤ P(I ∈ B). Hence we may apply Lemma 2 to conclude: P(f(O) = XB , XB /∈ X ′) ≤ P(O ∈ B). (40) Combining Eq. 39 and 40, we can get the conditions of Eq. 38: P(f(O) = XA, XA ∈ X ′) ≥ P(O ∈ A) > P(O ∈ B) ≥ P(f(O) = XB , XB /∈ X ′). (41) According to Claim 3 and Claim 4, we can get P(O ∈ A) and P(O ∈ B) as: P(O ∈ A) = Φ ( Φ−1 ( p ) − δ TΣ−1δ ∥δTΣ−1B∥ ) , P(O ∈ B) = Φ ( −Φ−1 ( p ) + δTΣ−1δ ∥δTΣ−1B∥ ) . (42) Finally, we obtain that P(O ∈ A) > P(O ∈ B) if and only if: δTΣ−1δ ∥δTΣ−1B∥ < Φ−1 ( p) ) , δT (BTB)−1δ ∥δT (BTB)−1B∥ < Φ−1 ( p) ) . Since B is a real symmetric matrix (BT = B), we can finally get: ∥δTB−1∥ < Φ−1 ( p) ) , which recovers the theorem statement. E.4.1 LINEAR TRANSFORMATION AND DERIVATION We obtain four equations based on linear transformation: Claim 1. P(I ∈ A) = p Proof. Recall that A = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≤ ∥δTΣ−1B∥Φ−1 ( p )} , according to lemma 3, we can get: P(I ∈ A) = P ( δTΣ−1(I − ( G1,G2, z1, z2 ) ) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1N (0,Σ) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1BN (0, I) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( ∥δTΣ−1B∥N (0, 1) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = Φ ( Φ−1 ( p )) = p. Claim 2. P(I ∈ B) = 1− p Proof. Recall that B = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≥ ∥δTΣ−1B∥Φ−1 ( p )} , according to lemma 3, we can get: P(I ∈ B) = P ( δTΣ−1(I − ( G1,G2, z1, z2 ) ) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1N (0,Σ) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1BN (0, I) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( ∥δTΣ−1B∥N (0, 1) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = 1− Φ ( Φ−1 ( p )) = 1− p. Claim 3. P(O ∈ A) = Φ ( Φ−1 ( p ) − δ TΣ−1δ ∥δTΣ−1B∥ ) Proof. Recall that A = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≤ ∥δTΣ−1B∥Φ−1 ( p )} and O ∼( G1,G2,N ( z1 + δ,Σ ) , z2 ) , according to lemma 3, we can get: P(O ∈ A) = P ( δTΣ−1(O − ( G1,G2, z1, z2 ) ) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1N (δ,Σ) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1(BN (0, I) + δ) ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1BN (0, I) + δTΣ−1δ ≤ ∥δTΣ−1B∥Φ−1 ( p )) = P ( ∥δTΣ−1B∥N (0, 1) ≤ ∥δTΣ−1B∥Φ−1 ( p ) − δTΣ−1δ ) = P ( N (0, 1) ≤ Φ−1 ( p ) − δ TΣ−1δ ∥δTΣ−1B∥ ) = Φ ( Φ−1 ( p ) − δ TΣ−1δ ∥δTΣ−1B∥ ) . Claim 4. P(O ∈ B) = Φ ( −Φ−1 ( p ) + δ TΣ−1δ ∥δTΣ−1B∥ ) Proof. 
Recall that B = { k : δTΣ−1(k − ( G1,G2, z1, z2 ) ) ≥ ∥δTΣ−1B∥Φ−1 ( p )} and O ∼( G1,G2,N ( z1 + δ,Σ ) , z2 ) , according to lemma 3, we can get: P(O ∈ B) = P ( δTΣ−1((O − ( G1,G2, z1, z2 ) ) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1N (δ,Σ) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1(BN (0, I) + δ) ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( δTΣ−1BN (0, I) + δTΣ−1δ ≥ ∥δTΣ−1B∥Φ−1 ( p )) = P ( ∥δTΣ−1B∥N (0, 1) ≥ ∥δTΣ−1B∥Φ−1 ( p ) − δTΣ−1δ ) = P ( N (0, 1) ≥ Φ−1 ( p ) − δ TΣ−1δ ∥δTΣ−1B∥ ) = P ( N (0, 1) ≤ −Φ−1 ( p ) + δTΣ−1δ ∥δTΣ−1B∥ ) = Φ ( −Φ−
1. What is the focus of the paper regarding graph matching?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. Do you have any concerns about the similarity of the technique used in the paper to other works?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper considers the certified robustness of graph matching. Graph matching aims to find a matching between the nodes of two graphs that maximizes the overall affinity score. This paper defines certified robustness on graph matching by applying randomized smoothing to the node classification stage. Specifically, they consider randomized smoothing for the matching function of each node while using a joint smoothing distribution matrix. They derive a strict certified radius and two extra radii for evaluation. Finally, their experimental results show that their certified robustness method performs better than previous methods on real-world datasets.

Strengths And Weaknesses
Strengths: This paper proposes a new setting for certified robustness of graph matching, which is a significant problem and can be used in many applications. The authors provide solid theoretical results on the strict certified radius.
Weaknesses: The techniques used to derive the theoretical results seem quite similar to the original randomized smoothing method (Cohen et al., 2019). The main difference is that the noise distribution here depends on the correlation between nodes, which is handled with standard linear algebra analysis. In Figure 1(b) it is hard for me to detect differences between the methods.

Clarity, Quality, Novelty And Reproducibility
This paper is well-written and easy to follow. The certified robustness on graph matching considered in this paper is a new setting. I think the results in this paper are original and reproducible.
ICLR
Title Self-Supervised Variational Auto-Encoders

Abstract Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), that utilizes deterministic and discrete transformations of data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where a transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade-off memory for better data quality, and vice-versa. We present the performance of our approach on three benchmark image data (Cifar10, Imagenette64, and CelebA).

1 INTRODUCTION

The framework of variational autoencoders (VAEs) provides a principled approach for learning latent-variable models. As it utilizes a meaningful low-dimensional latent space with density estimation capabilities, it forms an attractive solution for generative modelling tasks. However, its performance in terms of the test log-likelihood and quality of generated samples is often disappointing, thus, many modifications were proposed. In general, one can obtain a tighter lower bound, and, thus, a more powerful and flexible model, by advancing over the following three components: the encoder (Rezende et al., 2014; van den Berg et al., 2018; Hoogeboom et al., 2020; Maaløe et al., 2016), the prior (or marginal over latents) (Chen et al., 2016; Habibian et al., 2019; Lavda et al., 2020; Lin & Clark, 2020; Tomczak & Welling, 2017) and the decoder (Gulrajani et al., 2016). Recent studies have shown that by employing deep hierarchical architectures and by carefully designing building blocks of the neural networks, VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods (Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020).
In this work, we present a novel class of VAEs, called self-supervised Variational Auto-Encoders, where we introduce additional variables to VAEs that result from discrete and deterministic transformations of observed images. Since the transformations are deterministic, and they provide a specific aspect of images (e.g., contextual information through detecting edges or downscaling), we refer to them as self-supervised representations. The introduction of the discrete and deterministic variables allows to train deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller and conditional distributions. In this way, the model allows to integrate the prior knowledge about the data, but still enables to synthesize unconditional samples. Furthermore, the discrete and deterministic variables could be used to conditionally reconstruct data, which could be of great use in data compression and super-resolution tasks. We make the following contributions: i) We propose an extension of the VAE framework by incorporating self-supervised representations of the data. ii) We analyze the impact of modelling natural images with different data transformations as self-supervised representations. iii) This new type of generative model (self-supervised Variational Auto-Encoders), which is able to perform both conditional and unconditional sampling, demonstrate improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks. 2 BACKGROUND 2.1 VARIATIONAL AUTO-ENCODERS Let x 2 XD be a vector of observable variables, where X ✓ R or X ✓ Z, and z 2 RM denote a vector of latent variables. Since calculating p#(x) = R p#(x, z)dz is computationally intractable for non-linear stochastic dependencies, a variational family of distributions could be used for approximate inference. Then, the following objective function could be derived, namely, the evidence lower bound (ELBO) (Jordan et al., 1999): ln p#(x) Eq (z|x) [ln p✓(x|z) + ln p (z) ln q (z|x)] , (1) where q (z|x) is the variational posterior (or the encoder), p✓(x|z) is the conditional likelihood function (or the decoder) and p (z) is the prior (or marginal), , ✓ and denote parameters. The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators. The models are parameterized by neural networks. This generative framework is known as Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014). 2.2 VAES WITH BIJECTIVE PRIORS Even though the lower-bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While being relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution p(z) by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code z, a base distribution pV (v) over latent variables v 2 RM , and f : RM ! 
2.2 VAES WITH BIJECTIVE PRIORS Even though the lower bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While being relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution $p(\mathbf{z})$ by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code $\mathbf{z}$, a base distribution $p_V(\mathbf{v})$ over latent variables $\mathbf{v} \in \mathbb{R}^M$, and $f : \mathbb{R}^M \to \mathbb{R}^M$ consisting of a sequence of $L$ diffeomorphic (i.e., invertible and differentiable) transformations, where $f_i(\mathbf{v}_{i-1}) = \mathbf{v}_i$, $\mathbf{v}_0 = \mathbf{v}$ and $\mathbf{v}_L = \mathbf{z}$, the change-of-variables formula can be applied sequentially to express the distribution of $\mathbf{z}$ as a function of $\mathbf{v}$ as follows:

$$\log p(\mathbf{z}) = \log p_V(\mathbf{v}) - \sum_{i=1}^{L} \log \left| \det \frac{\partial f_i(\mathbf{v}_{i-1})}{\partial \mathbf{v}_{i-1}} \right|, \qquad (2)$$

where $\det \frac{\partial f_i(\mathbf{v}_{i-1})}{\partial \mathbf{v}_{i-1}}$ is the Jacobian-determinant of the $i$-th transformation. Thus, using the bijective prior yields the following lower bound:

$$\ln p(\mathbf{x}) \geq \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[ \log p_\theta(\mathbf{x}|\mathbf{z}) - \log q_\phi(\mathbf{z}|\mathbf{x}) + \log p_V(\mathbf{v}_0) + \sum_{i=1}^{L} \log \left| \det \frac{\partial f_i^{-1}(\mathbf{v}_i)}{\partial \mathbf{v}_i} \right| \right]. \qquad (3)$$

In this work, we utilize RealNVP (Dinh et al., 2016) as the prior; however, any other flow-based model could be used (Kingma & Dhariwal, 2018; Hoogeboom et al., 2020). For the experiments and an ablation study that shows the impact of the bijective prior on VAEs, we refer to appendix A.1.

3 METHOD

3.1 MOTIVATION The idea of self-supervised learning is to utilize original unlabeled data to create additional context information. This can be achieved in multiple manners, e.g., by adding noise to data (Vincent et al., 2008) or masking data during training (Zhang et al., 2017). Self-supervised learning can also be seen as turning an unsupervised model into a supervised one by, e.g., treating the prediction of next pixels as a classification task (Hénaff et al., 2019; Oord et al., 2018). These are only a few examples of a quickly growing research line (Liu et al., 2020).

[Figure 1: Stochastic dependencies of (i) the self-supervised VAE and (ii) the hierarchical ssVAE.]

Here, we propose to use non-trainable transformations to obtain information about image data. Our main hypothesis is that, since working with high-quality images is challenging, we can alleviate this problem by additionally considering partial information about them. Fitting a model to images of lower quality, and then enhancing them to match the target distribution, seems to be an easier task overall (Chang et al., 2004; Gatopoulos et al., 2020). By incorporating compressed transformations (i.e., the self-supervised representations) that still contain global information, with the premise that they are easier to approximate, the process of modelling a high-dimensional complex density breaks down into simpler tasks. In this way, the expressivity of the model grows and gradually results in richer, better generations. A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations without losing its unconditional generative functionality. Overall, we end up with a two-level VAE with three latent variables, where one is a data transformation that can be obtained in a self-supervised fashion. In Figure 1, a schematic representation of the proposed approach with downscaling is presented. A number of exemplary image transformations are presented in Figure 2. We notice that, even though these transformations discard a lot of information, the global structure is preserved. As a result, in practice the model should have the ability to extract a general concept of the data and add local information afterwards. In this work, we focus on downscaling (Figure 2.b, c & d) and edge detection or sketching (Figure 2.i).

3.2 MODEL FORMULATION In our model, we consider representations that result from deterministic and discrete transformations of an image. Formally, we introduce a transformation $d : \mathcal{X}^D \to \mathcal{X}^C$ that takes $\mathbf{x}$ and returns an image representation $\mathbf{y}$, e.g., a downscaled image.
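For concreteness, the snippet below sketches two such non-trainable maps d(·). The paper does not pin down the exact operators, so average-pooling for downscaling and a Sobel filter for edge detection are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def downscale(x: torch.Tensor, factor: int = 2) -> torch.Tensor:
    # d(x) as downscaling: average-pool a batch of images (B, C, H, W)
    # down to resolution (H / factor, W / factor).
    return F.avg_pool2d(x, kernel_size=factor)

def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    # d(x) as edge detection ("sketching"): gradient magnitude of the
    # intensity channel computed with 3x3 Sobel kernels.
    kx = torch.tensor([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]], device=x.device)
    ky = kx.t()
    gray = x.mean(dim=1, keepdim=True)            # collapse RGB to intensity
    gx = F.conv2d(gray, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(gray, ky.view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)   # eps avoids NaN gradients at 0
```

Both maps are deterministic and parameter-free, so y = d(x) can be computed on the fly during training.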
Since we lose information about the original image, $\mathbf{z}$ can be seen as a variable that compensates for the details lost in $\mathbf{x}$. Further, we propose to introduce an additional latent variable, $\mathbf{u} \in \mathbb{R}^N$, to model $\mathbf{y}$ and $\mathbf{z}$. We can define the joint distribution of $\mathbf{x}$ and $\mathbf{y}$ as follows: $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{y}|\mathbf{x})\, p(\mathbf{x})$, where $p(\mathbf{y}|\mathbf{x}) = \delta(\mathbf{y} - d(\mathbf{x}))$ due to the deterministic transformation $d(\cdot)$, and $\delta(\cdot)$ is the Kronecker delta. Thus, the empirical distribution is $\delta(\mathbf{y} - d(\mathbf{x}))\, p_{\text{data}}(\mathbf{x})$. However, since we are interested in decomposing the problem of modeling a complex distribution $p(\mathbf{x})$, we propose to model $p(\mathbf{x}|\mathbf{y})\, p(\mathbf{y})$ instead, and to utilize variational inference of the form $Q(\mathbf{u}, \mathbf{z}|\mathbf{x}, \mathbf{y}) = q(\mathbf{u}|\mathbf{y})\, q(\mathbf{z}|\mathbf{x})$, which yields:

$$\ln p(\mathbf{x}, \mathbf{y}) \geq \mathbb{E}_{Q}\left[ \ln p_\theta(\mathbf{x}|\mathbf{y}, \mathbf{z}) + \ln p(\mathbf{z}|\mathbf{u}, \mathbf{y}) + \ln p(\mathbf{y}|\mathbf{u}) + \ln p(\mathbf{u}) - \ln q(\mathbf{z}|\mathbf{x}) - \ln q(\mathbf{u}|\mathbf{y}) \right]. \qquad (4)$$

Intuitively, the premise for selfVAE is that the latents $\mathbf{u}$ will capture the global structure of the input data, and the latents $\mathbf{z}$ will encode the missing information between $\mathbf{y}$ and $\mathbf{x}$, guiding the model to discover the distribution of the target observations. In order to highlight the self-supervised part of our model, we refer to it as the self-supervised Variational Auto-Encoder (or selfVAE for short). Further, we propose to choose the following distributions:

$$p(\mathbf{v}) = \mathcal{N}(\mathbf{v}|\mathbf{0}, \mathbf{I})$$
$$p_\lambda(\mathbf{u}) = p(\mathbf{v}) \prod_{i=1}^{F} \left| \det \frac{\partial f_i(\mathbf{v}_{i-1})}{\partial \mathbf{v}_{i-1}} \right|^{-1}$$
$$p_{\theta_1}(\mathbf{y}|\mathbf{u}) = \sum_{i=1}^{I} \pi_i^{(\mathbf{u})}\, D_{\text{logistic}}\big(\mu_i^{(\mathbf{u})}, s_i^{(\mathbf{u})}\big)$$
$$q_{\phi_1}(\mathbf{u}|\mathbf{y}) = \mathcal{N}\big(\mathbf{u}\,|\,\mu_{\phi_1}(\mathbf{y}), \mathrm{diag}(\sigma_{\phi_1}(\mathbf{y}))\big)$$
$$q_{\phi_2}(\mathbf{z}|\mathbf{x}) = \mathcal{N}\big(\mathbf{z}\,|\,\mu_{\phi_2}(\mathbf{x}), \mathrm{diag}(\sigma_{\phi_2}(\mathbf{x}))\big)$$
$$p_{\theta_2}(\mathbf{z}|\mathbf{y}, \mathbf{u}) = \mathcal{N}\big(\mathbf{z}\,|\,\mu_{\theta_2}(\mathbf{y}, \mathbf{u}), \mathrm{diag}(\sigma_{\theta_2}(\mathbf{y}, \mathbf{u}))\big)$$
$$p_{\theta_3}(\mathbf{x}|\mathbf{z}, \mathbf{y}) = \sum_{i=1}^{I} \pi_i^{(\mathbf{z}, \mathbf{y})}\, D_{\text{logistic}}\big(\mu_i^{(\mathbf{z}, \mathbf{y})}, s_i^{(\mathbf{z}, \mathbf{y})}\big)$$

where $D_{\text{logistic}}$ is the discretized logistic distribution (Salimans et al., 2017), and we utilize a flow-based model for $p_\lambda(\mathbf{u})$. Notice that we use the discretized logistic distribution because images are represented by values between 0 and 255; for integer-valued random variables, continuous distributions such as the Gaussian are inappropriate.

3.3 GENERATION AND RECONSTRUCTION IN SELFVAE As generative models, VAEs can be used to synthesize novel content through the following process: $\mathbf{z} \sim p(\mathbf{z}) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{z})$, but also to reconstruct a data sample $\mathbf{x}^*$ using the following scheme: $\mathbf{z} \sim q(\mathbf{z}|\mathbf{x}^*) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{z})$. Interestingly, our approach allows us to utilize more operations for data generation and reconstruction. First, analogously to VAEs, the selfVAE can generate data by applying the following hierarchical sampling process (generation): $\mathbf{u} \sim p(\mathbf{u}) \to \mathbf{y} \sim p(\mathbf{y}|\mathbf{u}) \to \mathbf{z} \sim p(\mathbf{z}|\mathbf{u}, \mathbf{y}) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{y}, \mathbf{z})$. However, we can instead use the ground-truth $\mathbf{y}$ (i.e., $\mathbf{y}^* = d(\mathbf{x}^*)$) and either infer or sample $\mathbf{z}$. The generative process for the former (conditional generation) is: $\mathbf{z} \sim q(\mathbf{z}|\mathbf{x}^*) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{y}^*, \mathbf{z})$, and for the latter (conditional reconstruction): $\mathbf{u} \sim q(\mathbf{u}|\mathbf{y}^*) \to \mathbf{z} \sim p(\mathbf{z}|\mathbf{u}, \mathbf{y}^*) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{y}^*, \mathbf{z})$. If $\mathbf{y}$ is a downscaling transformation of the input image, selfVAE can be used in a manner similar to super-resolution (Gatopoulos et al., 2020). Alternatively, we can sample (or generate) $\mathbf{y}$ instead, and choose to sample or infer $\mathbf{z}$. In this way, we can reconstruct an image in two further ways, namely, reconstruction 1: $\mathbf{y}^* = d(\mathbf{x}^*) \to \mathbf{u} \sim q(\mathbf{u}|\mathbf{y}^*) \to \mathbf{y} \sim p(\mathbf{y}|\mathbf{u}) \to \mathbf{z} \sim p(\mathbf{z}|\mathbf{u}, \mathbf{y}) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{z}, \mathbf{y})$, and reconstruction 2: $\big(\mathbf{y}^* = d(\mathbf{x}^*) \to \mathbf{u} \sim q(\mathbf{u}|\mathbf{y}^*) \to \mathbf{y} \sim p(\mathbf{y}|\mathbf{u})\big)$, then $\mathbf{z} \sim q(\mathbf{z}|\mathbf{x}^*) \to \mathbf{x} \sim p(\mathbf{x}|\mathbf{y}, \mathbf{z})$. The presented versions of generating and reconstructing images can be useful in the compression task. As we will see in the experiments, each option gives a different trade-off between reconstruction quality and the memory we need to allocate to send information: every inferred variable needs to be sent, thus more sampling corresponds to lower memory requirements.
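The sampling modes above translate almost line by line into code. The sketch below assumes a hypothetical `model` object exposing the learned conditionals as `torch.distributions`-style objects; the handles `prior_u`, `p_y_given_u`, `p_z_given_uy`, `p_x_given_yz`, `q_u_given_y` and `q_z_given_x` are illustrative names, not the paper's API.

```python
import torch

@torch.no_grad()
def generate(model, n):
    # Unconditional generation: u ~ p(u) -> y ~ p(y|u) -> z ~ p(z|u,y) -> x ~ p(x|y,z)
    u = model.prior_u.sample((n,))
    y = model.p_y_given_u(u).sample()
    z = model.p_z_given_uy(u, y).sample()
    return model.p_x_given_yz(y, z).sample()

@torch.no_grad()
def conditional_generation(model, x_star, d):
    # Keep the ground-truth representation y* = d(x*) and infer z from x*.
    y_star = d(x_star)
    z = model.q_z_given_x(x_star).sample()
    return model.p_x_given_yz(y_star, z).sample()

@torch.no_grad()
def conditional_reconstruction(model, x_star, d):
    # Keep y* = d(x*), but obtain z through u; z is sampled rather than
    # inferred, so it would not need to be transmitted in a compression setting.
    y_star = d(x_star)
    u = model.q_u_given_y(y_star).sample()
    z = model.p_z_given_uy(u, y_star).sample()
    return model.p_x_given_yz(y_star, z).sample()
```

Reconstructions 1 and 2 follow the same pattern, replacing y* with a sample y ~ p(y|u).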
3.4 HIERARCHICAL SELF-SUPERVISED VAE

[Figure 3: Hierarchical selfVAE, showing (a) the generative model and (b) the inference model.]

The proposed approach can be further extended and generalized by introducing multiple transformations, as illustrated in Figure 3. By incorporating a single (or multiple) self-supervised representation(s) of the data, the process of modelling a high-dimensional complex density breaks down into $K$ simpler modeling tasks. Thus, we obtain a $K$-level VAE architecture, where the overall expressivity of the model grows even further and gradually results in generations of higher quality. Some transformations cannot be applied multiple times (e.g., edge detection); however, others can be used sequentially, e.g., downscaling. We take $K$ self-supervised data transformations $d_k(\cdot)$ that give $K$ representations denoted by $\mathbf{y}_{1:K} = [\mathbf{y}_1, \ldots, \mathbf{y}_K]$, and the following variational distributions:

$$Q(\mathbf{u}, \mathbf{z}|\mathbf{x}, \mathbf{y}_{1:K}) = q(\mathbf{u}|\mathbf{y}_K)\, q(\mathbf{z}_1|\mathbf{x}) \prod_{k=1}^{K-1} q(\mathbf{z}_{k+1}|\mathbf{y}_k), \qquad (5)$$

which yields the following objective:

$$\ln p(\mathbf{x}, \mathbf{y}_{1:K}) \geq \mathbb{E}_{Q}\Big[ \ln p_\theta(\mathbf{x}|\mathbf{y}_1, \mathbf{z}_1) + \sum_{k=1}^{K-1} \big( \ln p(\mathbf{z}_k|\mathbf{y}_k, \mathbf{z}_{k+1}) + \ln p(\mathbf{y}_k|\mathbf{y}_{k+1}, \mathbf{z}_{k+1}) \big) + \ln p(\mathbf{z}_K|\mathbf{u}, \mathbf{y}_K) + \ln p(\mathbf{y}_K|\mathbf{u}) + \ln p(\mathbf{u}) - \ln q(\mathbf{u}|\mathbf{y}_K) - \ln q(\mathbf{z}_1|\mathbf{x}) - \sum_{k=1}^{K-1} \ln q(\mathbf{z}_{k+1}|\mathbf{y}_k) \Big]. \qquad (6)$$
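To unpack Eq. (6), the sketch below accumulates its terms for a K-level ladder of representations. All model handles (`q_z`, `q_u`, `p_x`, `p_z`, `p_y`, `p_zK`, `p_yK`, `p_u`) are hypothetical callables returning `torch.distributions` objects whose `log_prob` is assumed to be summed over event dimensions (e.g., via `torch.distributions.Independent`); this is a schematic rendering of the objective, not the authors' code.

```python
def hierarchical_elbo(model, x, transforms):
    # Representation ladder: y_1 = d_1(x), y_2 = d_2(y_1), ..., y_K = d_K(y_{K-1}).
    ys, h = [], x
    for d in transforms:
        h = d(h)
        ys.append(h)
    K = len(ys)

    # Variational samples following Eq. (5):
    # z_1 ~ q(z_1|x), z_{k+1} ~ q(z_{k+1}|y_k) for k < K, and u ~ q(u|y_K).
    q_z = [model.q_z[0](x)] + [model.q_z[k + 1](ys[k]) for k in range(K - 1)]
    zs = [q.rsample() for q in q_z]
    q_u = model.q_u(ys[-1])
    u = q_u.rsample()

    # Accumulate the bound of Eq. (6), term by term.
    elbo = model.p_x(ys[0], zs[0]).log_prob(x)                         # ln p(x|y_1, z_1)
    for k in range(K - 1):
        elbo = elbo + model.p_z[k](ys[k], zs[k + 1]).log_prob(zs[k])     # ln p(z_k|y_k, z_{k+1})
        elbo = elbo + model.p_y[k](ys[k + 1], zs[k + 1]).log_prob(ys[k]) # ln p(y_k|y_{k+1}, z_{k+1})
        elbo = elbo - q_z[k + 1].log_prob(zs[k + 1])                     # -ln q(z_{k+1}|y_k)
    elbo = elbo + model.p_zK(u, ys[-1]).log_prob(zs[-1])               # ln p(z_K|u, y_K)
    elbo = elbo + model.p_yK(u).log_prob(ys[-1])                       # ln p(y_K|u)
    elbo = elbo + model.p_u.log_prob(u)                                # ln p(u)
    elbo = elbo - q_u.log_prob(u)                                      # -ln q(u|y_K)
    elbo = elbo - q_z[0].log_prob(zs[0])                               # -ln q(z_1|x)
    return elbo
```

Note that the `rsample` calls keep the estimator reparameterized, so the bound can be maximized directly with gradient ascent.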
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Datasets We evaluate the proposed model on CIFAR-10, Imagenette64 and CelebA. CIFAR-10 is a well-known image benchmark containing 60,000 training examples and 10,000 validation examples. From the training data, we put aside 15% randomly selected images as the test set. We augment the training data using random horizontal flips and random affine transformations, and normalize the data uniformly in the range (0, 1). Imagenette64 (https://github.com/fastai/imagenette) is a subset of 10 classes from the downscaled ImageNet dataset, which we downscaled to 64px × 64px images. As for CIFAR-10, we put aside 15% of randomly selected training images as the test set and used the same data augmentation. The Large-scale CelebFaces Attributes (CelebA) dataset consists of 202,599 images of celebrities. We cropped the original images using a crop box whose top-left corner lies at the 40th vertical and 15th horizontal pixel, with height and width of 148. Besides the uniform normalization of the images, no other augmentation was applied.

Architectures Encoders and decoders consist of building blocks composed of DenseNets (Huang et al., 2016), channel-wise attention (Zhang et al., 2018), and ELUs (Clevert et al., 2015) as activation functions. The dimensionality of all latent variables was kept at 8 × 8 × 16 = 1024, and all models were trained using AdaMax (Kingma & Ba, 2014) with data-dependent initialization (Salimans & Kingma, 2016). Regarding the selfVAEs, on CIFAR-10 we used an architecture with a single downscaling transformation (selfVAE-downscale), while on the remaining two datasets (CelebA and Imagenette64) we used a hierarchical 3-level selfVAE with downscaling, and a selfVAE with sketching. All models employed the bijective prior (RealNVP) and were comparable in terms of the number of parameters (from 32M to 42M). For more details, please refer to appendix A.2.

Evaluation We approximate the negative log-likelihood using 512 importance-weighted samples (Burda et al., 2015) and express the scores in bits per dimension (bpd). Additionally, for CIFAR-10, we use the Fréchet Inception Distance (FID) (Heusel et al., 2017).

4.2 QUANTITATIVE RESULTS We present the results of the experiments on the benchmark datasets in Table 1. First, we notice that on CIFAR-10 our implementation of the VAE still lags behind other generative models in terms of bpd, but it is better or comparable in terms of FID. The selfVAE-downscale achieves worse bpd than the VAE. A possible explanation may lie in the small image size (32 × 32), as the benefits of breaking the learning process down into two or more steps are not obvious given the small target dimensional space. Nevertheless, the selfVAE-downscale achieves significantly better FID scores than the other generative models. This result could follow from the fact that downscaling allows the model to maintain context information about the original image and, as a result, the general coherence is of higher quality. Interestingly, on the two other datasets, the three-level selfVAE-downscale achieves significantly better bpd scores than the VAE with the bijective prior. This indicates the benefit of employing a multi-leveled self-supervised framework over the VAE on higher-dimensional data, where the plain model fails to scale efficiently. It seems that the hierarchical structure of self-supervised random variables allows the model to encode the missing information more efficiently in the z_k, in contrast to the vanilla VAE, where all information about an image must be coded in z. This result is promising and indicates that the proposed approach is of great potential for generative modelling.

4.3 QUALITATIVE RESULTS We present generations on CIFAR-10 and Imagenette64 in Figure 4 and on CelebA in Figure 5, and reconstructions on CIFAR-10 and CelebA in Figure 6. We first notice that the generations from the selfVAE seem to be more coherent, in contrast to those from the VAE, which are overall more contextless and distorted. This result seems to be in line with the FID scores. Especially for CelebA, we observe impressive synthesis quality, great sampling diversity and coherent generations (Figure 5). On the Imagenette64 dataset, we can also observe crisper generations for our method compared to the VAE (Figure 4). Furthermore, the hierarchical selfVAE seems to hold great potential for compression purposes. In contrast to the VAE, which is restricted to a single way of reconstructing an image, the selfVAE allows four different options with different quality/memory ratios (Figure 6). In the selfVAE-sketch, we can retrieve the image with high accuracy by using only 16% of the original data, as the model manages to encode all the texture of the image in z (Figure 11). This shows the advantage of incorporating prior knowledge into the learning process. Lastly, the latents learn to add extra information, which defines the end result, and we can alter details of an image such as facial expressions (Figure 12.ii).

5 CONCLUSION In this paper, we showed that using deterministic and discrete transformations results in coherent generations of high visual quality, and allows the model to integrate prior knowledge without losing its unconditional generative functionality.
The experimental results seem to confirm that hierarchical architectures perform better, yielding both better bpd scores and better generations and reconstructions. In the experiments, we considered two classes of image transformations, namely downscaling and edge detection (sketching). However, there is a vast number of other possible transformations (see Figure 2), and we leave investigating them for future work. Moreover, we find the proposed approach interesting for the compression task. A similar approach with a multi-scale auto-encoder for image compression was proposed, e.g., by Mentzer et al. (2019) and Razavi et al. (2019). However, we still use a probabilistic framework and indicate that various non-trainable image transformations (not only multiple scales) could be of great potential.

ACKNOWLEDGMENTS Anonymized for the double-blind review.
1. What are the novel aspects introduced by the paper regarding VAEs?
2. What are the concerns regarding the transformation from x to y?
3. How does the reviewer assess the hierarchical self-supervised VAE and its potential impact on the inference process?
4. Are there any questions about the necessity of conditional information during the test phase?
5. Do the provided experiments effectively demonstrate the model's abilities?
6. Can the reviewer elaborate on the relationship between the proposed method and the general conditional VAE model mentioned in [1]?
Review
This paper targets richer and higher-quality generation with VAEs. Two techniques are adopted to achieve this goal: 1) a bijective model to enrich data generation with a flexible prior, and 2) presenting compressed variants of the input data, i.e., self-supervision as an additional condition y, for reconstruction. The two techniques interact through a hierarchical sampling process, ... y ∼ p(y|u) → z ∼ p(z|u, y), thus benefiting VAE generation with a data-dependent prior and conditional generation. The idea is novel and reasonable, and the paper is clearly presented. Here are some of my concerns.

The authors specifically argue that the transformation x → y should be 'non-trainable', i.e., the mapping between x and y is deterministic. BUT, will modeling q(x|z, y) with a discretized logistic distribution affect the generation quality, since the likelihood is classically assumed to be Gaussian distributed?

The hierarchical self-supervised VAE is presented to show that the model can adopt multi-scale information to benefit generation step by step. However, I am afraid that, in this way, inference would be much more difficult, since the flow-based bijective operation is already hard to train.

Is the conditional information, e.g., the sketches, also needed in the test phase, or is an unconditional generation setting adopted here?

It seems the experiments are not conducted on high-quality datasets. To me, the presented results cannot clearly demonstrate the achievements of the model.

Can you please explain the connection between your self-supervised VAE and the general conditional VAE model in [1]?

[1] Sohn, Kihyuk, Honglak Lee, and Xinchen Yan. "Learning structured output representation using deep conditional generative models." Advances in Neural Information Processing Systems. 2015.
1. What is the focus of the paper, particularly regarding the proposed self-supervised variational autoencoder?
2. What are the unique aspects of the method introduced in the paper, such as downscaling and edge detection?
3. How does the reviewer assess the clarity and quality of the paper's content, including its organization and figure discussions?
4. Are there any grammatical errors or areas for improvement in the writing style?
5. What is the overall outcome of the review, and what suggestions does the reviewer provide for improving the paper?
Review
Summary The paper presents a self-supervised variational auto-encoder called selfVAE. The work proposes the use of downscaling and edge detection as simpler representations of the input images to be reconstructed. The model should then learn to improve the low-dimensional approximations to recover the higher-dimensional ones in a hierarchical fashion.

Quality & Clarity The paper is generally quite difficult to follow, and its purpose, contributions and experiments are not presented clearly enough. The figures are not discussed in order, and the paper often references figures that are far away. There are a number of grammatical errors in the paper.

Outcome The message of the paper was generally quite unclear, and it could do with restructuring to assist readers.
ICLR
Title Self-Supervised Variational Auto-Encoders Abstract Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), that utilizes deterministic and discrete transformations of data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where a transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade-off memory for better data quality, and vice-versa. We present the performance of our approach on three benchmark image data (Cifar10, Imagenette64, and CelebA). N/A Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), that utilizes deterministic and discrete transformations of data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where a transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade-off memory for better data quality, and vice-versa. We present the performance of our approach on three benchmark image data (Cifar10, Imagenette64, and CelebA). 1 INTRODUCTION The framework of variational autoencoders (VAEs) provides a principled approach for learning latentvariable models. As it utilizes a meaningful low-dimensional latent space with density estimation capabilities, it forms an attractive solution for generative modelling tasks. However, its performance in terms of the test log-likelihood and quality of generated samples is often disappointing, thus, many modifications were proposed. In general, one can obtain a tighter lower bound, and, thus, a more powerful and flexible model, by advancing over the following three components: the encoder (Rezende et al., 2014; van den Berg et al., 2018; Hoogeboom et al., 2020; Maaløe et al., 2016), the prior (or marginal over latents) (Chen et al., 2016; Habibian et al., 2019; Lavda et al., 2020; Lin & Clark, 2020; Tomczak & Welling, 2017) and the decoder (Gulrajani et al., 2016). Recent studies have shown that by employing deep hierarchical architectures and by carefully designing building blocks of the neural networks, VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods (Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020). 
In this work, we present a novel class of VAEs, called self-supervised Variational Auto-Encoders, where we introduce additional variables to VAEs that result from discrete and deterministic transformations of observed images. Since the transformations are deterministic, and they provide a specific aspect of images (e.g., contextual information through detecting edges or downscaling), we refer to them as self-supervised representations. The introduction of the discrete and deterministic variables allows to train deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller and conditional distributions. In this way, the model allows to integrate the prior knowledge about the data, but still enables to synthesize unconditional samples. Furthermore, the discrete and deterministic variables could be used to conditionally reconstruct data, which could be of great use in data compression and super-resolution tasks. We make the following contributions: i) We propose an extension of the VAE framework by incorporating self-supervised representations of the data. ii) We analyze the impact of modelling natural images with different data transformations as self-supervised representations. iii) This new type of generative model (self-supervised Variational Auto-Encoders), which is able to perform both conditional and unconditional sampling, demonstrate improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks. 2 BACKGROUND 2.1 VARIATIONAL AUTO-ENCODERS Let x 2 XD be a vector of observable variables, where X ✓ R or X ✓ Z, and z 2 RM denote a vector of latent variables. Since calculating p#(x) = R p#(x, z)dz is computationally intractable for non-linear stochastic dependencies, a variational family of distributions could be used for approximate inference. Then, the following objective function could be derived, namely, the evidence lower bound (ELBO) (Jordan et al., 1999): ln p#(x) Eq (z|x) [ln p✓(x|z) + ln p (z) ln q (z|x)] , (1) where q (z|x) is the variational posterior (or the encoder), p✓(x|z) is the conditional likelihood function (or the decoder) and p (z) is the prior (or marginal), , ✓ and denote parameters. The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators. The models are parameterized by neural networks. This generative framework is known as Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014). 2.2 VAES WITH BIJECTIVE PRIORS Even though the lower-bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While being relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution p(z) by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code z, a base distribution pV (v) over latent variables v 2 RM , and f : RM ! 
RM consisting of a sequence of L diffeomorphic transformations1, where fi(vi 1) = vi, v0 = v and vL = z, the change of variable can be used sequentially to express the distribution of z as a function of v as follows: log p(z) = log pV (v) LX i=1 log @fi(vi 1) @vi 1 , (2) where @fi(vi 1)@vi 1 is the Jacobian-determinant of the ith transformation. Thus, using the bijective prior yields the following lower-bound: ln p(x) Eq (z|x) h log p✓(x|z) log q (z|x) + log pV (v0) + LX i=1 log @f 1i (vi) @vi i . (3) In this work, we utilize RealNVP (Dinh et al., 2016) as the prior, however, any other flow-based model could be used (Kingma & Dhariwal, 2018; Hoogeboom et al., 2020). For the experiments and ablation study that shows the impact of the bijective prior on VAEs, we refer to the appendix A.1. 3 METHOD 3.1 MOTIVATION The idea of self-supervised learning is about utilizing original unlabeled data to create additional context information. It could be achieved in multiple manners, e.g., by adding noise to data (Vincent et al., 2008) or masking data during training (Zhang et al., 2017). Self-supervised learning could also be seen as turning an unsupervised model into a supervised by, e.g., treating predicting next pixels as a classification task (Hénaff et al., 2019; Oord et al., 2018). These are only a few examples of a quickly growing research line (Liu et al., 2020). 1That is, invertible and differentiable transformations. i) Stochastic dependencies of self-supervised VAE ii) Hierarchical ssVAE self-supervised VAE Here, we propose to use non-trainable transformations to obtain information about image data. Our main hypothesis is that since working with highly-quality images is challenging, we could alleviate this problem by additionally considering partial information about them. Fitting a model to images of lower quality, and then enhancing them to match the target distribution seems to be overall an easier task (Chang et al., 2004; Gatopoulos et al., 2020). By incorporating compressed transformations (i.e., the self-supervised representations) that still contain global information, with the premise that it would be easier to approximate, the process of modelling a high-dimensional complex density breaks down into simpler tasks. In this way, the expressivity of the model will grow and gradually result into richer, better generations. A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations, without losing its uncon- ditional generative functionality. Overall, we end up with a two-level VAE with three latent variables, where one is a data transformation that can be obtained in a self-supervised fashion. In Figure 1 a schematic representation of the proposed approach with downscaling is presented. A number of exemplary image transformations are presented in Figure 2. We notice that with these transformations, even though they discard a lot of information, the global structure is preserved. As a result, in practice the model should have the ability to extract a general concept of the data, and add local information afterwards. In this work, we focus on downscaling (Figure 2.b, c & d) and edge detection or sketching (Fig. 2.i). 3.2 MODEL FORMULATION In our model, we consider representations that result from deterministic and discrete transformations of an image. Formally, we introduce a transformation d : XD ! XC that takes x and returns an image representation y, e.g., a downscaled image. 
Since we lose information about the original image, z could be seen as a variable that compensates lost details in x. Further we propose to introduce an additional latent variable, u 2 RN to model y and z. We can define the joint distribution of x and y as follows: p(x,y) = p(y|x)p(x), where p(y|x) = (y d(x)) due to the deterministic transformation d(·), where (·) is the Kronecker delta. Thus, the empirical distribution is (y d(x))pdata(x). However, since we are interested in decomposing the problem of modeling a complex distribution p(x), we propose to model p(x|y)p(y) instead, and utilize the variational inference of the form Q(u, z|x,y) = q(u|y)q(z|x) that yields: ln p(x,y) EQ ⇥ ln p✓(x|y, z) + ln p(z|u,y) + ln p(y|u) + ln p(u) ln q(z|x) ln q(u|y) ⇤ . (4) Intuitively, the premise for selfVAE is that the latents u will capture the global structure of the input data and the latents z will encode the missing information between y and x, guiding the model to discover the distribution of the target observations. In order to highlight the self-supervised part in our model, we refer to it as the self-supervised Variational Auto-Encoder (or selfVAE for short). Further, we propose to choose the following distributions: p(v) = N (v|0,1) p (u ) = p(v) FY i=1 det @fi(vi 1) @vi 1 1 p✓1 (y|u ) = IX i=1 ⇡(u)i Dlogistic ⇣ µ(u)i , s (u) i ⌘ q 1 (u|y ) = N (u|µ 1(y), diag ( 1(y)) ) q 2 (z|x ) = N (z|µ 2(x), diag ( 2(x)) ). p✓2 (z|y,u ) = N (z|µ✓2(y,u), diag ( ✓2(y,u) ) ) p✓3 (x|z,y ) = IX i=1 ⇡(z,y)i Dlogistic ⇣ µ(z,y)i , s (z,y) i ⌘ where Dlogistic is defined as the discretized logistic distribution (Salimans et al., 2017), and we utilize a flow-based model for p (u ). Notice that we use the discretized logistic distribution, because images are represented by values between 0 and 255. For integer-valued random variables, other distributions like Gaussian are inappropriate. 3.3 GENERATION AND RECONSTRUCTION IN SELFVAE As generative models, VAEs can be used to synthesize novel content through the following process: z ⇠ p(z) ! x ⇠ p(x|z), but also to reconstruct a data sample x⇤ by using the following scheme: z ⇠ q(z|x⇤) ! x ⇠ p(x|z). Interestingly, our approach allows to utilize more operations regarding data generation and reconstruction. First, analogously to VAEs, the selfVAE allows to generate data by applying the following hierarchical sampling process (generation): u ⇠ p(u) ! y ⇠ p(y|u) ! z ⇠ p(z|u,y) ! x ⇠ p(x|y, z). However, we can use the ground-truth y (i.e, y⇤ = d(x⇤)), and sample or infer z. Then, the generative process for the former (conditional generation) is: z ⇠ q(z|x⇤) ! x ⇠ p(x|y⇤, z), and for the latter (conditional reconstruction): u ⇠ q(u|y⇤) ! z ⇠ p(z|u,y⇤), ! x ⇠ p(x|y⇤, z). If y is a downscaling transformation of the input image, selfVAE can be used in a manner similar to the super-resolution (Gatopoulos et al., 2020). Alternatively, we can sample (or generate) y instead, and choose to sample or infer z. In this way, we can reconstruct an image in two ways, namely, reconstruction 1: y⇤ = d(x⇤) ! u ⇠ q(u|y⇤) ! y ⇠ p(y|u) ! z ⇠ p(z|u,y) ! x ⇠ p(x|z,y), and reconstruction 2: ⇣ y⇤ = d(x⇤) ! u ⇠ q(u|y⇤) ! y ⇠ p(y|u) ⌘ , then z ⇠ q(z|x⇤) ! x ⇠ p(x|y, z). The presented versions of generating and reconstructing images could be useful in the compression task. As we will see in the experiments, each option creates a different ratio of the reconstruction quality against the memory that we need to allocate to send information. 
However, every inferred variable needs to be sent, thus, more sampling corresponds to lower memory requirements. 3.4 HIERARCHICAL SELF-SUPERVISED VAE u … … x a) Generative Model yK u … …… x yK1 yK y1 yK1 y1 b) Inference Model u z y x i) Stochastic dependencies of self-supervised VAE ii) Stochastic dependencies of the hierarchical ssVAE zK1 zK z1z1 zK1 zK d d d Figure 3: Hierarchical selfVAE. The proposed approach can be further extended and generalized by introducing multiple transformations, in the way that it is illustrated in Figure 3. By incorporating a single (or multiple) self-supervised representation(s) of the data, the process of modelling a high-dimensional complex density breaks down into K simpler modeling tasks. Thus, we obtain a K-level VAE architecture, where the overall expressivity of the model grows even further and gradually results into generations of higher quality. Some transformations cannot be applied multiple times (e.g., edge detection), however, others could be used sequentially, e.g., downscaling. We take K self-supervised data transformations dk(·) that give K representations denoted by y1:K = [y1, . . . ,yK ], and the following variational distributions: Q(u, z|x,y1:K) = q(u|yK)q(z1|x) K 1Y k=1 q(zk+1|yk), (5) that yields the following objective: ln p(x,y1:K) EQ ⇥ ln p✓(x|y1, z1) + K 1X k=1 ln p(zk|yk, zk+1) + ln p(yk|yk+1, zk+1) + + ln p(zK |u,yK) + ln p(yK |u) + ln p(u) ln q(u|yK) ln q(z1|x) K 1X k=1 ln q(zk+1|yk) ⇤ . (6) 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP Datasets We evaluate the proposed model on CIFAR-10, Imagenette64 and CelebA: CIFAR-10 The CIFAR-10 dataset is a well-known image benchmark data containing 60.000 training examples and 10.000 validation examples. From the training data, we put aside 15% randomly selected images as the test set. We augment the training data by using random horizontal flips and random affine transformations and normalize the data uniformly in the range (0, 1). Imagenette64 Imagenette642 is a subset of 10 classes from the downscaled Imagenet dataset. We downscaled the dataset to 64px ⇥ 64px images. Similarly to CIFAR-10, we put aside 15% randomly selected training images as the test set. We used the same data augmentation as in CIFAR-10 CelebA The Large-scale CelebFaces Attributes (CelebA) Dataset consists of 202.599 images of celebrities. We cropped original images on the 40 vertical and 15 horizontal component of the top left corner of the crop box, which height and width were cropped to 148. Besides the uniform normalization of the image, no other augmentation was applied. Architectures Encoders and decoders consist of building blocks composed of DenseNets (Huang et al., 2016), channel-wise attention (Zhang et al., 2018), and ELUs (Clevert et al., 2015) as activation functions. The dimensionality of all the latent variables were kept at 8 ⇥ 8 ⇥ 16 = 1024 and all models were trained using AdaMax (Kingma & Ba, 2014) with data-dependent initialization (Salimans & Kingma, 2016). Regarding the selfVAEs, in CIFAR-10 we used an architecture with a single downscaled transformation (selfVAE-downscale), while on the remaining two datasets (CelebA 2https://github.com/fastai/imagenette and Imagenette64) we used a hierarchical 3-leveled selfVAE with downscaling, and a selfVAE with sketching. All models were employed with the bijective prior (RealNVP) comparable in terms of the number of parameters (the range of the weights of all models was from 32M to 42M). For more details, please refer to the appendix section A.2. 
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Datasets We evaluate the proposed model on CIFAR-10, Imagenette64 and CelebA:

CIFAR-10 The CIFAR-10 dataset is a well-known image benchmark containing 60,000 training examples and 10,000 validation examples. From the training data, we put aside 15% randomly selected images as the test set. We augment the training data by using random horizontal flips and random affine transformations, and normalize the data uniformly in the range (0, 1).

Imagenette64 Imagenette64 (https://github.com/fastai/imagenette) is a subset of 10 classes from the downscaled ImageNet dataset. We downscaled the dataset to 64px × 64px images. Similarly to CIFAR-10, we put aside 15% randomly selected training images as the test set. We used the same data augmentation as in CIFAR-10.

CelebA The Large-scale CelebFaces Attributes (CelebA) dataset consists of 202,599 images of celebrities. We cropped the original images with a crop box whose top-left corner is offset by 40 pixels vertically and 15 pixels horizontally, and whose height and width are 148 pixels. Besides the uniform normalization of the images, no other augmentation was applied.

Architectures Encoders and decoders consist of building blocks composed of DenseNets (Huang et al., 2016), channel-wise attention (Zhang et al., 2018), and ELUs (Clevert et al., 2015) as activation functions. The dimensionality of all the latent variables was kept at 8 × 8 × 16 = 1024, and all models were trained using AdaMax (Kingma & Ba, 2014) with data-dependent initialization (Salimans & Kingma, 2016). Regarding the selfVAEs, on CIFAR-10 we used an architecture with a single downscaling transformation (selfVAE-downscale), while on the remaining two datasets (CelebA and Imagenette64) we used a hierarchical 3-leveled selfVAE with downscaling, and a selfVAE with sketching. All models employed the bijective prior (RealNVP) and were comparable in terms of the number of parameters (the weights of all models ranged from 32M to 42M). For more details, please refer to the appendix, section A.2.

Evaluation We approximate the negative log-likelihood using 512 IW-samples (Burda et al., 2015) and express the scores in bits per dimension (bpd). Additionally, for CIFAR-10, we use the Fréchet Inception Distance (FID) (Heusel et al., 2017).

4.2 QUANTITATIVE RESULTS

We present the results of the experiments on the benchmark datasets in Table 1. First, we notice that on CIFAR-10 our implementation of the VAE still lags behind other generative models in terms of bpd; however, it is better or comparable in terms of FID. The selfVAE-downscale achieves worse bpd than the VAE. A possible explanation may lie in the small image size (32 × 32), as the benefits of breaking down the learning process into two or more steps are not obvious given the small target dimensional space. Nevertheless, the selfVAE-downscale achieves significantly better FID scores than the other generative models. This result could follow from the fact that downscaling allows the model to maintain context information about the original image and, as a result, the general coherence is of higher quality. Interestingly, on the two other datasets, a three-level selfVAE-downscale achieves significantly better bpd scores than the VAE with the bijective prior. This indicates the benefit of employing a multi-leveled self-supervised framework over the VAE on higher-dimensional data, where the plain model fails to scale efficiently. It seems that the hierarchical structure of self-supervised random variables allows the missing information to be encoded more efficiently in z_k, in contrast to the vanilla VAE, where all information about images must be coded in z. This result is promising and indicates that the proposed approach is of great potential for generative modelling.

4.3 QUALITATIVE RESULTS

We present generations on CIFAR-10 and Imagenette64 in Figure 4 and on CelebA in Figure 5, and reconstructions on CIFAR-10 and CelebA in Figure 6. We first notice that the generations from selfVAE seem to be more coherent, in contrast with those from the VAE, which produces overall more contextless and distorted generations. This result seems to be in line with the FID scores. Especially for CelebA, we observe impressive synthesis quality, great sampling diversity and coherent generations (Figure 5). On the Imagenette64 dataset, we can also observe crisper generations for our method compared to the VAE (Figure 4). Furthermore, the hierarchical selfVAE seems to be of great potential for compression purposes. In contrast to the VAE, which is restricted to a single way of reconstructing an image, the selfVAE allows four different options with different quality/memory ratios (Figure 6). In the selfVAE-sketch, we can retrieve the image with high accuracy by using only 16% of the original data, as it manages to encode all the texture of the image in z (Figure 11). This shows the advantage of incorporating prior knowledge into the learning process. Lastly, the latents learn to add extra information, which defines the end result, and we can alter details of an image like facial expressions (Figure 12.ii).

5 CONCLUSION

In this paper, we showed that taking deterministic and discrete transformations results in coherent generations of high visual quality, and allows the model to integrate prior knowledge without losing its unconditional generative functionality.
The experimental results seem to confirm that hierarchical architectures perform better and allow us to obtain both better bpd scores and better generations and reconstructions. In the experiments, we considered two classes of image transformations, namely, downscaling and edge detection (sketching). However, there is a vast number of other possible transformations (see Figure 2), and we leave investigating them for future work. Moreover, we find the proposed approach interesting for the compression task. A similar approach with a multi-scale auto-encoder for image compression was proposed, e.g., by Mentzer et al. (2019) or Razavi et al. (2019). However, we still use a probabilistic framework and indicate that various non-trainable image transformations (not only multiple scales) could be of great potential.

ACKNOWLEDGMENTS

Anonymized for the double-blind review.
1. What are the strengths and weaknesses of the proposed approach in addressing the problem of VAEs ignoring some dimensions of the latent code? 2. How does the addition of self-supervised tasks improve the latent representation, and what is the effect of their performance on the quality of the representations? 3. How were the operations in Figure 3 balanced during training, and how did the authors ensure that the self-supervised tasks were not overfitting? 4. Why were the FID scores on CelebA and ImageNet-64 not provided, and why was the comparison with previous literature not made? 5. What are RE and KL in the table, and why were the values for previous methods omitted? 6. Are there any specific issues or concerns regarding the experimental design, data usage, or presentation of results in the paper?
Review
Review ################################### Pros: ∙ VAEs can ignore some dimensions of the latent code. Enforcing the posterior distributions to consider desired factors of variation in the input can be fulfilled either by making the latent code more structured (i.e., quantization as in VQ-VAE-2) or by introducing additional constraints. This paper tackles this problem by applying the latter, with two self-supervised tasks: edge maps and downscaled versions of inputs. ∙ The idea of adding self-supervised tasks to improve the latent representation is very interesting. When learning a more structured latent representation, image super-resolution or sketch-to-image networks are also trained. ################################### Cons: ∙ Hierarchical self-supervised tasks: In Section 3.4, multiple transformations are explained. However, none of the experiments are conducted as a consecutive set of transformations. Does 3-level downscale mean a single downscaling applied three times, or generating from u using four different networks and matching each of these levels with z? If yes, why is the selfVAE-sketch model not applied hierarchically in a similar way? ∙ How to train/balance operations: In Figure 3, several modes of operation are given. How did you balance these modes during training? ∙ Performance of self-supervised tasks: What is the effect of the self-supervised tasks' performance on the quality of latent representations? Considering the literature on image super-resolution and sketch-to-image, did you use a pretrained auxiliary generator? ∙ What are RE and KL in the table? Are they the summation of both reconstruction (RE_x, RE_y) and KL divergence (KL_z, KL_u) terms in the loss (Eq. 2)? The reason why previous methods' RE/KL values were omitted should be stated. Similarly, why were the FID scores on CelebA and ImageNet-64 not given? Furthermore, the state-of-the-art FID scores on CIFAR-10 are better than those of the methods compared in Table 1. For instance, some examples of FID scores on CIFAR-10 are 18.9 in MoML [1], 29.3 in WP-GAN [2], 29.3 in spectrally normalized GAN [3], 26.4 in adversarial score matching [4], and so on. [1] https://arxiv.org/pdf/1806.11006.pdf [2] https://arxiv.org/pdf/1706.08500.pdf [3] https://arxiv.org/pdf/1802.05957.pdf [4] https://arxiv.org/pdf/2009.05475.pdf As the results on CelebA and ImageNet-64 were not compared with previous literature, it is difficult to understand whether the contribution w.r.t. the vanilla VAE is due to the self-supervised task or merely the use of an additional stochastic variable (u) and networks. Minor issue: "Imagenette64" might cause confusion; I did not see this dataset name before. I suppose that it is "ImageNet resized to 64x64" as in the PixelCNN paper. References for all datasets should be added. ################################### Reasons for score: Overall, I rate towards rejection. Even though the idea of bijective priors and realizing it through self-supervised tasks is a novel approach, my major concern is that it is behind the state of the art on CIFAR-10 and not compared to any other method on CelebA and ImageNet-64. Hopefully, the authors address my concerns above in the rebuttal period.
ICLR
Title Self-Supervised Variational Auto-Encoders Abstract Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), that utilizes deterministic and discrete transformations of data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade off memory for better data quality, and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).

1 INTRODUCTION

The framework of variational auto-encoders (VAEs) provides a principled approach for learning latent-variable models. As it utilizes a meaningful low-dimensional latent space with density estimation capabilities, it forms an attractive solution for generative modelling tasks. However, its performance in terms of the test log-likelihood and the quality of generated samples is often disappointing; thus, many modifications have been proposed. In general, one can obtain a tighter lower bound, and, thus, a more powerful and flexible model, by advancing over the following three components: the encoder (Rezende et al., 2014; van den Berg et al., 2018; Hoogeboom et al., 2020; Maaløe et al., 2016), the prior (or marginal over latents) (Chen et al., 2016; Habibian et al., 2019; Lavda et al., 2020; Lin & Clark, 2020; Tomczak & Welling, 2017) and the decoder (Gulrajani et al., 2016). Recent studies have shown that by employing deep hierarchical architectures and by carefully designing the building blocks of the neural networks, VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods (Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020).
In this work, we present a novel class of VAEs, called self-supervised Variational Auto-Encoders, where we introduce additional variables to VAEs that result from discrete and deterministic transformations of observed images. Since the transformations are deterministic, and they provide a specific aspect of images (e.g., contextual information through detecting edges or downscaling), we refer to them as self-supervised representations. The introduction of the discrete and deterministic variables allows us to train deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller, conditional distributions. In this way, the model allows us to integrate prior knowledge about the data, while still enabling the synthesis of unconditional samples. Furthermore, the discrete and deterministic variables could be used to conditionally reconstruct data, which could be of great use in data compression and super-resolution tasks. We make the following contributions: i) We propose an extension of the VAE framework by incorporating self-supervised representations of the data. ii) We analyze the impact of modelling natural images with different data transformations as self-supervised representations. iii) This new type of generative model (self-supervised Variational Auto-Encoders), which is able to perform both conditional and unconditional sampling, demonstrates improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks.

2 BACKGROUND

2.1 VARIATIONAL AUTO-ENCODERS

Let x ∈ X^D be a vector of observable variables, where X ⊆ R or X ⊆ Z, and let z ∈ R^M denote a vector of latent variables. Since calculating p_ϑ(x) = ∫ p_ϑ(x, z) dz is computationally intractable for non-linear stochastic dependencies, a variational family of distributions could be used for approximate inference. Then, the following objective function could be derived, namely, the evidence lower bound (ELBO) (Jordan et al., 1999):

ln p_ϑ(x) ≥ E_{q_φ(z|x)}[ ln p_θ(x|z) + ln p_λ(z) − ln q_φ(z|x) ], (1)

where q_φ(z|x) is the variational posterior (or the encoder), p_θ(x|z) is the conditional likelihood function (or the decoder) and p_λ(z) is the prior (or marginal); φ, θ and λ denote parameters. The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators. The models are parameterized by neural networks. This generative framework is known as the Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014).
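As a concrete illustration of Eq. (1), a one-sample ELBO estimate can be written in a few lines of PyTorch. The interfaces are assumptions: encoder(x) and decoder(z) are taken to return torch.distributions objects whose log_prob yields one value per sample (e.g., via Independent), and prior is any distribution over z:

```python
import torch

def elbo_estimate(x, encoder, decoder, prior):
    # One-sample Monte Carlo estimate of Eq. (1).
    # Swapping in a flow-based p(z) would only change prior.log_prob.
    q_z = encoder(x)                     # q_phi(z|x)
    z = q_z.rsample()                    # reparameterization trick
    return (decoder(z).log_prob(x)       # ln p_theta(x|z)
            + prior.log_prob(z)          # ln p_lambda(z)
            - q_z.log_prob(z))           # - ln q_phi(z|x)
```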
2.2 VAES WITH BIJECTIVE PRIORS

Even though the lower bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While being relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution p(z) by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code z, a base distribution p_V(v) over latent variables v ∈ R^M, and f : R^M → R^M consisting of a sequence of L diffeomorphic transformations¹, where f_i(v_{i−1}) = v_i, v_0 = v and v_L = z, the change of variables can be used sequentially to express the distribution of z as a function of v as follows:

log p(z) = log p_V(v) − ∑_{i=1..L} log |∂f_i(v_{i−1}) / ∂v_{i−1}|, (2)

where |∂f_i(v_{i−1}) / ∂v_{i−1}| is the Jacobian determinant of the i-th transformation. Thus, using the bijective prior yields the following lower bound:

ln p(x) ≥ E_{q_φ(z|x)}[ log p_θ(x|z) − log q_φ(z|x) + log p_V(v_0) + ∑_{i=1..L} log |∂f_i^{−1}(v_i) / ∂v_i| ]. (3)

In this work, we utilize RealNVP (Dinh et al., 2016) as the prior; however, any other flow-based model could be used (Kingma & Dhariwal, 2018; Hoogeboom et al., 2020). For the experiments and an ablation study that shows the impact of the bijective prior on VAEs, we refer to appendix A.1.

3 METHOD

3.1 MOTIVATION

The idea of self-supervised learning is about utilizing original unlabeled data to create additional context information. It could be achieved in multiple manners, e.g., by adding noise to data (Vincent et al., 2008) or masking data during training (Zhang et al., 2017). Self-supervised learning could also be seen as turning an unsupervised model into a supervised one by, e.g., treating the prediction of next pixels as a classification task (Hénaff et al., 2019; Oord et al., 2018). These are only a few examples of a quickly growing research line (Liu et al., 2020).

¹That is, invertible and differentiable transformations.

[Figure 1: i) Stochastic dependencies of the self-supervised VAE; ii) the hierarchical self-supervised VAE.]

Here, we propose to use non-trainable transformations to obtain information about image data. Our main hypothesis is that, since working with high-quality images is challenging, we could alleviate this problem by additionally considering partial information about them. Fitting a model to images of lower quality, and then enhancing them to match the target distribution, seems to be an easier task overall (Chang et al., 2004; Gatopoulos et al., 2020). By incorporating compressed transformations (i.e., the self-supervised representations) that still contain global information, with the premise that they would be easier to approximate, the process of modelling a high-dimensional complex density breaks down into simpler tasks. In this way, the expressivity of the model will grow and gradually result in richer, better generations. A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations, without losing its unconditional generative functionality. Overall, we end up with a two-level VAE with three latent variables, where one is a data transformation that can be obtained in a self-supervised fashion. In Figure 1, a schematic representation of the proposed approach with downscaling is presented. A number of exemplary image transformations are presented in Figure 2. We notice that these transformations, even though they discard a lot of information, preserve the global structure. As a result, in practice the model should have the ability to extract a general concept of the data, and add local information afterwards. In this work, we focus on downscaling (Figure 2.b, c & d) and edge detection or sketching (Fig. 2.i).

3.2 MODEL FORMULATION

In our model, we consider representations that result from deterministic and discrete transformations of an image. Formally, we introduce a transformation d : X^D → X^C that takes x and returns an image representation y, e.g., a downscaled image.
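For concreteness, two plausible choices of d(·) matching the ones used in the paper - average-pool downscaling and a crude gradient-magnitude "sketch" - might look as follows. This is a NumPy sketch under our own assumptions; the exact transformations used by the authors may differ:

```python
import numpy as np

def downscale(x: np.ndarray, factor: int = 2) -> np.ndarray:
    # Average-pool an HxWxC image by `factor` (one possible choice of d).
    h, w, c = x.shape
    h, w = h - h % factor, w - w % factor
    x = x[:h, :w]
    return x.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def edges(x: np.ndarray) -> np.ndarray:
    # Crude gradient-magnitude "sketch" of an image (another possible choice of d).
    g = x.mean(axis=-1)                                  # to greyscale
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return np.clip(gx + gy, 0.0, 255.0)
```

Both are deterministic and discard information, while preserving the global structure of the input, which is exactly what the model formulation below requires.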
Since we lose information about the original image, z could be seen as a variable that compensates for the lost details in x. Further, we propose to introduce an additional latent variable, u ∈ R^N, to model y and z. We can define the joint distribution of x and y as follows: p(x,y) = p(y|x)p(x), where p(y|x) = δ(y − d(x)) due to the deterministic transformation d(·), where δ(·) is the Kronecker delta. Thus, the empirical distribution is δ(y − d(x)) p_data(x). However, since we are interested in decomposing the problem of modeling a complex distribution p(x), we propose to model p(x|y)p(y) instead, and utilize variational inference of the form Q(u, z|x, y) = q(u|y) q(z|x), which yields:

ln p(x,y) ≥ E_Q[ ln p_θ(x|y,z) + ln p(z|u,y) + ln p(y|u) + ln p(u) − ln q(z|x) − ln q(u|y) ]. (4)

Intuitively, the premise for selfVAE is that the latents u will capture the global structure of the input data and the latents z will encode the missing information between y and x, guiding the model to discover the distribution of the target observations. In order to highlight the self-supervised part in our model, we refer to it as the self-supervised Variational Auto-Encoder (or selfVAE for short). Further, we propose to choose the following distributions:

p(v) = N(v | 0, I)
p_λ(u) = p(v) ∏_{i=1..F} |det ∂f_i(v_{i−1}) / ∂v_{i−1}|^{−1}
p_θ1(y|u) = ∑_{i=1..I} π_i(u) D_logistic(μ_i(u), s_i(u))
q_φ1(u|y) = N(u | μ_φ1(y), diag(σ_φ1(y)))
q_φ2(z|x) = N(z | μ_φ2(x), diag(σ_φ2(x)))
p_θ2(z|y,u) = N(z | μ_θ2(y,u), diag(σ_θ2(y,u)))
p_θ3(x|z,y) = ∑_{i=1..I} π_i(z,y) D_logistic(μ_i(z,y), s_i(z,y))

where D_logistic is defined as the discretized logistic distribution (Salimans et al., 2017), and we utilize a flow-based model for p_λ(u). Notice that we use the discretized logistic distribution because images are represented by values between 0 and 255. For integer-valued random variables, other distributions like the Gaussian are inappropriate.

3.3 GENERATION AND RECONSTRUCTION IN SELFVAE

As generative models, VAEs can be used to synthesize novel content through the following process: z ∼ p(z) → x ∼ p(x|z), but also to reconstruct a data sample x* by using the following scheme: z ∼ q(z|x*) → x ∼ p(x|z). Interestingly, our approach allows us to utilize more operations regarding data generation and reconstruction. First, analogously to VAEs, the selfVAE allows generating data by applying the following hierarchical sampling process (generation): u ∼ p(u) → y ∼ p(y|u) → z ∼ p(z|u,y) → x ∼ p(x|y,z). However, we can use the ground-truth y (i.e., y* = d(x*)), and sample or infer z. Then, the generative process for the former (conditional generation) is: z ∼ q(z|x*) → x ∼ p(x|y*,z), and for the latter (conditional reconstruction): u ∼ q(u|y*) → z ∼ p(z|u,y*) → x ∼ p(x|y*,z). If y is a downscaling transformation of the input image, selfVAE can be used in a manner similar to super-resolution (Gatopoulos et al., 2020). Alternatively, we can sample (or generate) y instead, and choose to sample or infer z. In this way, we can reconstruct an image in two ways, namely, reconstruction 1: y* = d(x*) → u ∼ q(u|y*) → y ∼ p(y|u) → z ∼ p(z|u,y) → x ∼ p(x|z,y), and reconstruction 2: (y* = d(x*) → u ∼ q(u|y*) → y ∼ p(y|u)), then z ∼ q(z|x*) → x ∼ p(x|y,z). The presented ways of generating and reconstructing images could be useful in the compression task. As we will see in the experiments, each option trades off reconstruction quality against the memory that we need to allocate to send information. However, every inferred variable needs to be sent; thus, more sampling corresponds to lower memory requirements.
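As a side note to the distribution choices above, a hedged PyTorch sketch of the mixture-of-discretized-logistics log-pmf (Salimans et al., 2017) used for p_θ1(y|u) and p_θ3(x|z,y) is given below; the tensor layout (mixture components on the last axis) is our own assumption:

```python
import torch
import torch.nn.functional as F

def discretized_logistic_logpmf(x, mu, log_s):
    # log P(x) under a discretized logistic for integer x in {0, ..., 255};
    # bins have width 1, and the edge bins absorb the tails of the CDF.
    inv_s = torch.exp(-log_s)
    plus = torch.sigmoid(inv_s * (x + 0.5 - mu))
    minus = torch.sigmoid(inv_s * (x - 0.5 - mu))
    cdf_plus = torch.where(x >= 255, torch.ones_like(plus), plus)
    cdf_minus = torch.where(x <= 0, torch.zeros_like(minus), minus)
    return torch.log((cdf_plus - cdf_minus).clamp_min(1e-12))

def mixture_logpmf(x, logit_pi, mu, log_s):
    # log-pmf of an I-component mixture; the mixture axis is the last one.
    log_pi = F.log_softmax(logit_pi, dim=-1)
    comp = discretized_logistic_logpmf(x.unsqueeze(-1), mu, log_s)
    return torch.logsumexp(log_pi + comp, dim=-1)
```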
3.4 HIERARCHICAL SELF-SUPERVISED VAE

[Figure 3: Hierarchical selfVAE. a) Generative model; b) Inference model.]

The proposed approach can be further extended and generalized by introducing multiple transformations, in the way illustrated in Figure 3. By incorporating a single (or multiple) self-supervised representation(s) of the data, the process of modelling a high-dimensional complex density breaks down into K simpler modeling tasks. Thus, we obtain a K-level VAE architecture, where the overall expressivity of the model grows even further and gradually results in generations of higher quality. Some transformations cannot be applied multiple times (e.g., edge detection); however, others could be used sequentially, e.g., downscaling. We take K self-supervised data transformations d_k(·) that give K representations denoted by y_{1:K} = [y_1, ..., y_K], and the following variational distributions:

Q(u, z|x, y_{1:K}) = q(u|y_K) q(z_1|x) ∏_{k=1..K−1} q(z_{k+1}|y_k), (5)

which yields the following objective:

ln p(x, y_{1:K}) ≥ E_Q[ ln p_θ(x|y_1, z_1) + ∑_{k=1..K−1} ( ln p(z_k|y_k, z_{k+1}) + ln p(y_k|y_{k+1}, z_{k+1}) ) + ln p(z_K|u, y_K) + ln p(y_K|u) + ln p(u) − ln q(u|y_K) − ln q(z_1|x) − ∑_{k=1..K−1} ln q(z_{k+1}|y_k) ]. (6)

4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Datasets We evaluate the proposed model on CIFAR-10, Imagenette64 and CelebA:

CIFAR-10 The CIFAR-10 dataset is a well-known image benchmark containing 60,000 training examples and 10,000 validation examples. From the training data, we put aside 15% randomly selected images as the test set. We augment the training data by using random horizontal flips and random affine transformations, and normalize the data uniformly in the range (0, 1).

Imagenette64 Imagenette64 (https://github.com/fastai/imagenette) is a subset of 10 classes from the downscaled ImageNet dataset. We downscaled the dataset to 64px × 64px images. Similarly to CIFAR-10, we put aside 15% randomly selected training images as the test set. We used the same data augmentation as in CIFAR-10.

CelebA The Large-scale CelebFaces Attributes (CelebA) dataset consists of 202,599 images of celebrities. We cropped the original images with a crop box whose top-left corner is offset by 40 pixels vertically and 15 pixels horizontally, and whose height and width are 148 pixels. Besides the uniform normalization of the images, no other augmentation was applied.

Architectures Encoders and decoders consist of building blocks composed of DenseNets (Huang et al., 2016), channel-wise attention (Zhang et al., 2018), and ELUs (Clevert et al., 2015) as activation functions. The dimensionality of all the latent variables was kept at 8 × 8 × 16 = 1024, and all models were trained using AdaMax (Kingma & Ba, 2014) with data-dependent initialization (Salimans & Kingma, 2016). Regarding the selfVAEs, on CIFAR-10 we used an architecture with a single downscaling transformation (selfVAE-downscale), while on the remaining two datasets (CelebA and Imagenette64) we used a hierarchical 3-leveled selfVAE with downscaling, and a selfVAE with sketching. All models employed the bijective prior (RealNVP) and were comparable in terms of the number of parameters (the weights of all models ranged from 32M to 42M). For more details, please refer to the appendix, section A.2.
Evaluation We approximate the negative log-likelihood using 512 IW-samples (Burda et al., 2015) and express the scores in bits per dimension (bpd). Additionally, for CIFAR-10, we use the Fréchet Inception Distance (FID) (Heusel et al., 2017).

4.2 QUANTITATIVE RESULTS

We present the results of the experiments on the benchmark datasets in Table 1. First, we notice that on CIFAR-10 our implementation of the VAE still lags behind other generative models in terms of bpd; however, it is better or comparable in terms of FID. The selfVAE-downscale achieves worse bpd than the VAE. A possible explanation may lie in the small image size (32 × 32), as the benefits of breaking down the learning process into two or more steps are not obvious given the small target dimensional space. Nevertheless, the selfVAE-downscale achieves significantly better FID scores than the other generative models. This result could follow from the fact that downscaling allows the model to maintain context information about the original image and, as a result, the general coherence is of higher quality. Interestingly, on the two other datasets, a three-level selfVAE-downscale achieves significantly better bpd scores than the VAE with the bijective prior. This indicates the benefit of employing a multi-leveled self-supervised framework over the VAE on higher-dimensional data, where the plain model fails to scale efficiently. It seems that the hierarchical structure of self-supervised random variables allows the missing information to be encoded more efficiently in z_k, in contrast to the vanilla VAE, where all information about images must be coded in z. This result is promising and indicates that the proposed approach is of great potential for generative modelling.

4.3 QUALITATIVE RESULTS

We present generations on CIFAR-10 and Imagenette64 in Figure 4 and on CelebA in Figure 5, and reconstructions on CIFAR-10 and CelebA in Figure 6. We first notice that the generations from selfVAE seem to be more coherent, in contrast with those from the VAE, which produces overall more contextless and distorted generations. This result seems to be in line with the FID scores. Especially for CelebA, we observe impressive synthesis quality, great sampling diversity and coherent generations (Figure 5). On the Imagenette64 dataset, we can also observe crisper generations for our method compared to the VAE (Figure 4). Furthermore, the hierarchical selfVAE seems to be of great potential for compression purposes. In contrast to the VAE, which is restricted to a single way of reconstructing an image, the selfVAE allows four different options with different quality/memory ratios (Figure 6). In the selfVAE-sketch, we can retrieve the image with high accuracy by using only 16% of the original data, as it manages to encode all the texture of the image in z (Figure 11). This shows the advantage of incorporating prior knowledge into the learning process. Lastly, the latents learn to add extra information, which defines the end result, and we can alter details of an image like facial expressions (Figure 12.ii).

5 CONCLUSION

In this paper, we showed that taking deterministic and discrete transformations results in coherent generations of high visual quality, and allows the model to integrate prior knowledge without losing its unconditional generative functionality.
The experimental results seem to confirm that hierarchical architectures perform better and allow us to obtain both better bpd scores and better generations and reconstructions. In the experiments, we considered two classes of image transformations, namely, downscaling and edge detection (sketching). However, there is a vast number of other possible transformations (see Figure 2), and we leave investigating them for future work. Moreover, we find the proposed approach interesting for the compression task. A similar approach with a multi-scale auto-encoder for image compression was proposed, e.g., by Mentzer et al. (2019) or Razavi et al. (2019). However, we still use a probabilistic framework and indicate that various non-trainable image transformations (not only multiple scales) could be of great potential.

ACKNOWLEDGMENTS

Anonymized for the double-blind review.
1. What is the main contribution of the paper, and how does it differ from previous works? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its technical aspects and derivations? 3. How does the reviewer assess the novelty and effectiveness of the proposed approach compared to other state-of-the-art methods? 4. Are there any concerns or suggestions regarding the experimental results and their presentation? 5. Can you provide further explanations or clarifications regarding some parts of the paper that may be difficult to understand?
Review
Review This paper focuses on the task of generating high-quality data with generative models. To be specific, the authors propose a variant of the variational autoencoder (VAE) model, named self-supervised VAE. The intuition behind this model is that by breaking down the complex generation task into simpler/smaller ones, complex models can be trained steadily with guidance from the simpler-level task. To this end, a hierarchical generative model with multiple levels of latent variables is proposed, in which lower-level latent variables are governed by lower-level data features. The lower-level feature is generally obtained by a deterministic and discrete transformation, like downscaling. In addition, to further improve the modeling capability, a flow-based prior is proposed to fit the data distribution. Experiments were conducted to evaluate the performance of the proposed generative model.

Strength: The idea of guiding the complex image generation with easier tasks is interesting, and is maybe the right way to accomplish complex tasks. The ELBO derived in Eq. 2 is intuitive and insightful. It also provides me theoretical support for the fact that employing two-level modeling and a downscaling transformation to generate a more vivid image is reasonable.

Weakness: From a technical perspective, the proposed method is just the combination of a flow-based VAE and an auxiliary VAE. By using 3 auxiliary variables, the authors infer one of them by a discrete and deterministic variational distribution q(y|x) to simplify the training objective, where the downscaled image y plays an important role in this model. My question is why not regard y as observed data and then model the joint distribution p(x,y). There are some mistakes in the derivation of Eq. 2. In appendix A.4, while computing the entropy of q(w|x), the authors express it as E_{q(w|x)}[\log q(w|x)] = E_{q(z|y,x)}[\log q(z|y,x)] + ... . However, the first term on the RHS is completely wrong. Actually, it should be E_{q(z|y,x)q(y|x)}[\log q(z|y,x)]. It seems that the authors use an equation in many places, that is E_{q(z|y,x)}[\log q(z|y,x)] = E_{q(z|y,x)q(u|y)q(y|x)}[\log q(z|y,x)]. But this equation is not true, because the term q(z|y,x) inside the expectation depends on the variable y. Besides, in the choice of the distributions p(y|u) and p(x|z,y), they are set to be mixtures of discretized logistic distributions. For each image x or y, are their pixels assumed to be i.i.d.? If so, you are missing \prod_{y_j \in y} outside the \sum_{i=1}^{I} in the distribution definition. The bijective prior (RealNVP) was proposed in other works, and simply employing it here should not be regarded as a contribution of this paper. Moreover, the authors only compare the effectiveness of different priors (i.e. Gaussian, mixture of Gaussians, and RealNVP) on the vanilla VAE and confirm the superiority of using an adaptive prior. However, I want to know what the performance of the self-supervised VAE is if only a standard Gaussian prior is used. Section 3.3 is not presented well and the idea behind the sentences is hard to follow. What are the differences between these generation and reconstruction methods, and what application scenarios correspond to them? They are just simply listed, without any analysis of the logic behind them. The experimental results cannot support the superiority of the proposed model in either the quantitative or the qualitative comparisons.
From the generated images, I cannot see much difference between the selfVAE and the vanilla VAE model, not to mention more capable generative models like GLOW, GANs, etc. Also, for the quantitative comparison, the model is only compared with the outdated vanilla VAE on CelebA and ImageNet64; more recent generative models should be included here.
ICLR
Title PANDA - Adapting Pretrained Features for Anomaly Detection Abstract Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration; ii) elastic regularization inspired by continual learning. In addition, we conduct a thorough investigation of ImageNet-pretrained features for one-class anomaly detection. Our method, PANDA, outperforms the state-of-the-art in the one-class and outlier exposure settings (CIFAR10: 96.2% vs. 90.1% and 98.9% vs. 95.6%).

1 INTRODUCTION

Detecting anomalous patterns in data is of key importance in science and industry. In the computational anomaly detection task, the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous. There are multiple anomaly detection settings investigated in the literature, corresponding to different training conditions. In this work, we deal with two settings: i) when only normal images are used for training, and ii) Outlier Exposure (OE), where an external dataset simulating the anomalies is available. In recent years, deep learning methods have been introduced for anomaly detection, typically extending classical methods with deep neural networks. Different auxiliary tasks (e.g. autoencoders or rotation classification) are used to learn representations of the data, while a great variety of anomaly criteria are then used to determine if a given sample is normal or anomalous. An important issue for current methods is the reliance on limited normal training data for representation learning, which limits the quality of learned representations. A solution, which we will investigate in this work, is to pre-train features on a large external dataset, and use the features for anomaly detection. As there is likely to be some mismatch between the external dataset and the task of anomaly detection on the target distribution, feature adaptation is an attractive option. Unfortunately, feature adaptation for anomaly detection often suffers from catastrophic collapse - a form of deterioration of the pre-trained features, where all the samples, including anomalous ones, are mapped to the same point. DeepSVDD (Ruff et al., 2018) proposed to overcome collapse by removing biases from the model architecture, but this restricts the network expressivity and limits the pre-trained models that can be borrowed off-the-shelf. Perera & Patel (2019) proposed to jointly train anomaly detection with the original task, which has several limitations and achieves only limited adaptation success. We propose two techniques to overcome catastrophic collapse: i) an adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion; ii) an elastic regularization, motivated by continual learning, that postpones the collapse. We also provide an extensive evaluation of ImageNet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that we outperform the state-of-the-art by a wide margin: e.g. CIFAR10 results: 96.2% vs. 90.1% without outlier exposure and 98.9% vs. 95.6% with outlier exposure.
We present several insightful critical analyses: i) We show that pre-trained features strictly dominate current self-supervised RotNet-based feature learning methods. We discuss the relative merits of each paradigm and conclude that for most practical purposes, using pre-trained features is preferable. ii) We analyse the results of the popular method, DeepSVDD. We discover that the feature adaptation of its current architecture, which is designed to prevent collapse, does not improve over simple data whitening. iii) We show that collapse can be avoided using early stopping, and suggest an appropriate unsupervised criterion. We also show it can be mitigated using continual learning.

1.1 RELATED WORK

Classical anomaly detection: The main categories of classical anomaly detection methods are: i) reconstruction-based: compress the training data using a bottleneck, and use a reconstruction loss as an anomaly criterion (e.g. (Candès et al., 2011; Jolliffe, 2011), K nearest neighbors (Eskin et al., 2002) and K-means (Hartigan & Wong, 1979)); ii) probabilistic: modeling the probability density function and labeling unlikely samples as anomalous (e.g. ensembles of Gaussian Mixture Models (Glodek et al., 2013), kernel density estimation (Latecki et al., 2007)); iii) one-class classification (OCC): finding a separating manifold between normal data and the rest of the input space (e.g. One-class SVM (Scholkopf et al., 2000)).

Deep learning methods: The introduction of deep learning has affected image anomaly detection in two ways: extension of classical methods with deep representations, and novel self-supervised deep methods. Reconstruction-based methods have been enhanced by learning deep autoencoder-based bottlenecks (D'Oro et al., 2019), which can provide better models of image data. Deep methods extended classical methods by creating better representations of the data for parametric assumptions about probabilities, by a combination of reconstruction and probabilistic methods (such as DAGMM (Zong et al., 2018)), or in combination with an OCC (Ruff et al., 2018). Novel deep methods have also been proposed for anomaly detection, including GAN-based methods (Zong et al., 2018). Another set of novel deep methods uses auxiliary self-supervised learning for anomaly detection. The seminal work by Golan & El-Yaniv (2018) was later extended by Hendrycks et al. (2019b) and Bergman & Hoshen (2020).

Transferring pretrained representations: Learning deep features requires extensive datasets, preferably with labels. An attractive property of deep neural networks is that representations learned on very extensive datasets can be transferred to data-poor tasks. Specifically, deep neural representations trained on the ImageNet dataset have been shown by Huh et al. (2016) to significantly boost performance on other datasets that are only vaguely related to some of the ImageNet classes. This can be performed with and without finetuning. Although much recent progress has been made on self-supervised feature learning (Gidaris et al., 2018; Chen et al., 2020), such methods are typically outperformed by transferred pretrained features. Transferring ImageNet pre-trained features for out-of-distribution detection has been proposed by Hendrycks et al. (2019a). Similar pre-training for one-class classification has been proposed by Perera & Patel (2019); however, they require joint optimization with the original task.
2 BACKGROUND: FEATURE ADAPTATION FOR ANOMALY DETECTION

2.1 A THREE-STAGE FRAMEWORK

We present our general framework, in which we examine several adaptation-based anomaly detection methods, including our method. Let us assume that we are given a set Dtrain of normal training samples: x1, x2, ..., xN. The framework consists of three steps:

Feature extractor pretraining: A pre-trained feature extractor ψ0 is typically learned using self-supervised learning (auto-encoding, rotation or jigsaw prediction). We denote the loss function of the auxiliary task Lpretrain. The auxiliary task can be learned either on the training set Dtrain or on an external dataset Dpretrain (such as ImageNet). In the latter case, the pretrained extractor can be obtained off-the-shelf. We will investigate and analyse the merits of each choice in Sec. 4.2.

Feature adaptation: Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data. We denote the feature extractor after adaptation ψ.

Anomaly scoring: Having adapted the features for anomaly detection, we extract the features ψ(x1), ψ(x2), ..., ψ(xN) of the training set samples, and we proceed to learn a scoring function, which describes how anomalous a sample is. Typically, the scoring function seeks to measure the density of normal data around the test sample ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low-density regions.

2.2 EXISTING FEATURE-ADAPTATION METHODS

In this section, we review two seminal methods that use feature adaptation for anomaly detection:

DeepSVDD: Ruff et al. (2018) suggest to first train an autoencoder E on the normal-only train images. The encoder is then used as the initial feature extractor ψ0(x) = E(x). As the features of the encoder are not specifically adapted to anomaly detection, DeepSVDD adapts ψ on the training data. The adaptation takes place by minimizing the compactness loss:

Lcompact = ∑_{x∈Dtrain} ‖ψ(x) − c‖², (1)

where c is a constant vector, typically the average of ψ0(x) on the training set. However, the authors were concerned about the trivial solution ψ = c, and suggested architectural restrictions to mitigate it, most importantly removing the biases from all layers. We empirically show that the adaptation of the features in DeepSVDD does not outperform simple feature whitening (see Sec. 4.2.2).

Joint optimization (JO): Perera & Patel (2019) proposed to use a deep feature extractor trained for object classification on the ImageNet dataset. Due to fear of "learning a trivial solution due to the absence of a penalty for miss-classification", the method does not adapt by finetuning on the compactness loss alone. Instead, they relaxed the task setting, by assuming that a number (∼50k) of labelled original ImageNet images, Dpretrain, are still available at adaptation time. They proposed to train the features ψ under the compactness loss jointly with the original ImageNet classification linear layer W and its classification loss, here the cross-entropy loss with the true label, ℓpretrain(p, y) = −log(p_y):

LJoint = ∑_{(x,y)∈Dpretrain} ℓpretrain(softmax(Wψ(x)), y) + α ∑_{x∈Dtrain} ‖ψ(x) − c‖², (2)

where W is the final linear classification layer and α is a hyper-parameter weighting the two losses. We note that the method has two main weaknesses: i) it requires retaining a significant number of the original training images, which can be storage-intensive; ii) jointly training the two tasks may reduce the anomaly detection accuracy, which is the only task of interest in this context. Our proposed method, PANDA, is able to sidestep these issues.
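For concreteness, the compactness-loss adaptation stage shared by these methods (Eq. 1) can be sketched as follows. The learning rate and the loader interface (batches of normal images, features of shape (batch, dim)) are our assumptions; 2.3k steps is the constant early-stopping budget used later in the paper:

```python
import torch

def adapt_features(psi, train_loader, lr=1e-4, steps=2300):
    # Sketch of compactness-loss adaptation (Eq. 1) of a pretrained extractor psi.
    with torch.no_grad():  # c: mean of the initial features over the train set
        c = torch.cat([psi(x) for x in train_loader]).mean(dim=0)
    opt = torch.optim.SGD(psi.parameters(), lr=lr)
    step = 0
    while step < steps:
        for x in train_loader:
            loss = ((psi(x) - c) ** 2).sum(dim=1).mean()  # compactness loss
            opt.zero_grad(); loss.backward(); opt.step()
            step += 1
            if step >= steps:
                break
    return psi, c
```

Run unregularized, this loop is exactly the procedure that risks catastrophic collapse, which motivates the safeguards introduced next.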
3 PANDA: FEATURE ADAPTATION FOR ANOMALY DETECTION

We present PANDA (Pre-trained Anomaly Detection Adaptation), a new method for anomaly detection in images. The core of our method lies in adapting general pre-trained features to anomaly detection on the target distribution.

Pre-trained feature extractor: Our method is agnostic to the specific pretrained feature extractor. We investigated different choices of the initial pre-trained feature extractor ψ0 and found that ImageNet-pretrained features achieve better results. The assumption of the availability of the ImageNet-trained feature extractor and its merits will be discussed at length in Sec. 4.2.

Feature adaptation: Similarly to SVDD and Joint Optimization, we also use the compactness loss (Eq. 1) to adapt the general pre-trained features to the task of anomaly detection on the target distribution. Instead of constraining the architecture or introducing external data into the adaptation procedure, we tackle catastrophic collapse directly. The main issue is that the optimal solution of the compactness loss can result in "collapse", where all possible input values are mapped to the same point (ψ(x) = c, ∀x). Learning such features will not be useful for anomaly detection, as both normal and anomalous images will be mapped to the same output, preventing separability. The issue is broader than the trivial "collapsed" solution reached after full convergence; it is the more general issue of feature deterioration, where the original good properties of the pretrained features are lost. Even a non-trivial solution might not retain the full discriminative ability of the original features, which is nonetheless important for anomaly detection. To avoid this collapse, we suggest two options: (i) finetuning the pretrained extractor with the compactness loss (Eq. 1) and using sample-wise early stopping; (ii) when collapse happens prematurely, before any significant adaptation happens, mitigating it using a continual-learning-inspired adaptive regularization.

Sample-wise early stopping (PANDA-SES): Early stopping is one of the simplest methods used to regularize neural networks. While stopping the training process after a constant number of iterations (we use 2.3k minibatches) helps to control the collapse of the original features on most examined datasets (Sec. 4.2), in other cases collapse occurs earlier in the training process - the best number of early stopping iterations may vary between datasets. We thus propose "sample-wise early stopping" (SES). The intuition for the method can be obtained from Fig. 1. We can see that anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images. We thus propose to save checkpoints of our network at fixed intervals during the training process - corresponding to different early stopping iterations (ψ1, ψ2, ..., ψT). For each network ψt, we compute the average loss st on the training set images. During inference, we score a target image x using each model ψt(x) = ft, and normalize the score by the relevant average score st. We set the maximal normalized score as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features ft and the normal train set average score st, without seeing the labels of any other test set samples.
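A sketch of SES scoring follows, under the assumption that each checkpoint ψt was saved together with its center c_t and the average train-set score s_t recorded during adaptation:

```python
import torch

def ses_score(x, checkpoints, centers, train_avgs):
    # PANDA-SES sketch: score x with every saved checkpoint psi_t, normalize
    # by the average train-set score s_t of that checkpoint, take the maximum.
    scores = []
    with torch.no_grad():
        for psi_t, c_t, s_t in zip(checkpoints, centers, train_avgs):
            f_t = psi_t(x)                                # features of this batch
            scores.append(((f_t - c_t) ** 2).sum(dim=1) / s_t)
    return torch.stack(scores, dim=0).max(dim=0).values   # per-sample anomaly score
```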
Continual Learning (PANDA-EWC): We propose a new solution for overcoming premature feature collapse that draws inspiration from the field of continual learning. The task of continual learning tackles learning new tasks without forgetting the previously learned ones. We note, however, that our task is not identical to standard continual learning as: i) we deal with the one-class classification setting, whereas continual learning typically deals with multi-class classification; ii) we aim to avoid forgetting the expressivity of the features, but do not particularly care if the actual classification performance on the old task is degraded. A simple solution for preventing feature collapse is to regularize the change in value of the weights of the feature extractor ψ relative to those of the pre-trained extractor ψ0. However, this solution is lacking, as the features are more sensitive to some weights than others, and this can be "exploited" by the adaptation method. Following ideas from continual learning, we use elastic weight consolidation (EWC) (Kirkpatrick et al., 2017). Using a number of minibatches (we use 100) of pretraining on the auxiliary task, we compute the diagonal of the Fisher information matrix F for all weight parameters of the network. Note that this only needs to happen once, at the end of the pretraining stage, and does not need to be repeated. The value of the Fisher matrix for diagonal element θ′ is given by:

F_θ′ = E_{(x,y)∈Dpretrain}[ ( ∂/∂θ′ Lpretrain(x, y; θ) )² |_θ ]. (3)

We follow Kirkpatrick et al. (2017) in using the diagonal of the Fisher information matrix F_θi to weight the Euclidean distance of the change between each network parameter θi ∈ ψ0 and its corresponding parameter θ*i ∈ ψ. This weighted distance can be interpreted as a measure of the curvature of the loss landscape as a function of the parameters - larger values imply high curvature, i.e., inelastic weights. We use this regularization in combination with the compactness loss; the losses are weighted by the factor λ, which is a hyperparameter of the method (we always use λ = 10⁴):

L_θ = Lcompact(θ) + (λ/2) ∑_i F_θi (θi − θ*i)². (4)

The network ψ is initialized with the parameters of the pretrained extractor ψ0 and trained with SGD.
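A minimal sketch of the EWC machinery of Eqs. (3)-(4); `head` and `loss_fn` stand for the (assumed) pretraining classifier and its loss, `pretrain_batches` is assumed to be a list of ~100 minibatches, and the squared-gradient estimate below is the usual empirical approximation of the Fisher diagonal:

```python
import torch

def fisher_diagonal(psi, head, loss_fn, pretrain_batches):
    # Empirical diagonal Fisher estimate of Eq. (3), computed once at the end
    # of pretraining by accumulating squared gradients of the auxiliary loss.
    fisher = {n: torch.zeros_like(p) for n, p in psi.named_parameters()}
    for x, y in pretrain_batches:
        psi.zero_grad()
        loss_fn(head(psi(x)), y).backward()
        for n, p in psi.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(pretrain_batches) for n, f in fisher.items()}

def ewc_penalty(psi, psi0_params, fisher):
    # sum_i F_i (theta_i - theta*_i)^2; the caller applies the (lambda/2) factor.
    return sum((fisher[n] * (p - psi0_params[n]) ** 2).sum()
               for n, p in psi.named_parameters())

# Usage sketch, following Eq. (4) with lambda = 1e4:
# psi0_params = {n: p.detach().clone() for n, p in psi.named_parameters()}
# loss = compactness_loss + (1e4 / 2) * ewc_penalty(psi, psi0_params, fisher)
```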
Anomaly scoring: Given strong features and appropriate adaptation, our transformed data typically follows the standard anomaly detection assumption, i.e., high density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. Our method performs better with strong non-parametric anomaly scoring methods. We evaluate several anomaly scoring methods: i) the Euclidean distance to the mean of the training features; ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images; iii) computing the K-means of the training set features, and computing the distance between the target sample features and the nearest mean. See Sec. 4.2.3 for comparison results.

Outlier Exposure: An extension of the typical image anomaly detection task (Hendrycks et al., 2018) assumes the existence of an auxiliary dataset of images DOE, which are more similar to the anomalies than the normal data. In case such information is available, we simply train a linear classification layer w together with the features ψ under a logistic regression loss (Eq. 5). As before, ψ is initialized with the weights from ψ0. After training ψ and w, we use w · ψ(x) as the anomaly score. Results and critical analysis of this setting are presented in Sec. 4.2.

LOE = ∑_{x∈Dtrain} log(σ(1 − w · ψ(x))) + ∑_{x∈DOE} log(σ(w · ψ(x))). (5)
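A sketch of one OE training step follows. We write it as standard binary cross-entropy with normal images labeled 0 and OE images labeled 1, which is a conventional counterpart of the logistic loss in Eq. (5) rather than a literal transcription; `w` is assumed to be a linear layer producing one logit:

```python
import torch
import torch.nn.functional as F

def oe_step(psi, w, x_norm, x_oe, opt):
    # One step of outlier-exposure training: the linear head w on top of psi
    # separates normal train images from OE images; w(psi(x)) is then the
    # anomaly score at test time.
    logits = torch.cat([w(psi(x_norm)), w(psi(x_oe))]).squeeze(-1)
    labels = torch.cat([torch.zeros(len(x_norm)),
                        torch.ones(len(x_oe))]).to(logits.device)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss
```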
4 IMAGE ANOMALY DETECTION

4.1 HIGH-LEVEL RESULTS

In this section, we present high-level results of our method PANDA-EWC (PANDA-SES can be found in Sec. 4.2), compared to the state-of-the-art: One-class SVM (Scholkopf et al., 2000), DeepSVDD (Ruff et al., 2018), and Multi-Head RotNet (Hendrycks et al., 2019b). We also compare our method to raw (unadapted) pretrained features. As Joint Optimization requires extra data, we did not add it to this table, but we compare to it and outperform it in Tab. 4. We compare our PANDA-OE to the OE baseline in Hendrycks et al. (2019b) on CIFAR10, as the code or results for other classes were unavailable. To investigate performance in domains significantly different from the dataset used to pretrain the features, we evaluated our method across a large range of datasets: standard datasets (CIFAR10/100, CatsVsDogs), a black-and-white dataset (Fashion MNIST), small fine-grained datasets (Birds200/Oxford Flowers), a medical dataset (WBC), very fine-grained anomalies (MVTec), and aerial images (DIOR). A detailed description of the datasets is found in the appendix, Sec. C, and representative frames are shown in Fig. 3. For outlier exposure (OE), we followed Hendrycks et al. (2018) and used 50k randomly sampled images from 80M Tiny Images. Implementation details are reported in Appendix D.

The main results are: i) pre-trained features achieve significantly better results than self-supervised features on all datasets; ii) feature adaptation significantly improves the performance on larger datasets; iii) outlier exposure can further improve performance in the case where the given outliers are more similar to the anomalies than the normal data. OE achieves near-perfect performance on CIFAR10/100 but hurts performance for Fashion MNIST/CatsVsDogs, which are less similar to the 80M Tiny Images dataset. A detailed analysis of the reason for the better performance of each of these methods and an examination of its appropriateness will be presented in Sec. 4.2.

4.2 ANALYSIS AND FURTHER EVALUATION

In this section we analyze the factors of variation in performance between different methods:

4.2.1 AN ANALYSIS OF THE CHOICE OF FEATURE REPRESENTATION

A comparison of self-supervised and pre-trained features: In Tab. 1 and Tab. 2, we present a comparison between methods that use self-supervised and pre-trained feature representations. We see that the autoencoder used by DeepSVDD is particularly poor. The results of the MHRotNet as a feature extractor are better, but still underperform PANDA methods (see App. A for more details). The performance of the raw deep ResNet features without adaptation significantly outperforms all methods, including on Fashion MNIST and DIOR, which have significant differences from the ImageNet dataset. We can therefore conclude that ImageNet-pretrained features typically have significant advantages over self-supervised features. Tab. 2 shows that self-supervised methods do not perform well on small datasets, as such methods require large numbers of normal samples in order to learn strong features. On the other hand, ImageNet-pretrained features obtain very strong results.

Do pretrained features generalize to anomaly detection on domains far from the pretraining dataset? The results in Tab. 2 on FMNIST, DIOR, WBC and MVTec suggest that they do. We evaluated the ImageNet-pretrained features on datasets of various sizes, domains, resolutions and symmetries. On all those datasets pretrained features outperformed the SOTA. These datasets include objects that differ significantly from those of ImageNet, but also fine-grained intra-object anomalies, and represent a spectrum of data types: aerial images, microscopy, industrial images. This shows that one of the main concerns about using pre-trained features, namely generalizing to distant domains, is not an issue in practice.

On the different supervision settings for one-class anomaly detection: Anomaly detection methods employ different levels of supervision. Within the one-class classification task, one may use outlier exposure (OE, i.e., an external dataset simulating anomalies), pretrained features, or no external supervision at all. The most extensive supervision is used by OE, which requires a large external dataset at training time, and performs well only when such a dataset is from a similar domain to the anomalies (see Tab. 1). In cases where the dataset used for OE has significantly different properties, the network may not learn to distinguish between normal and anomalous data, as the normal and anomalous data may have more in common with each other than with the OE dataset. E.g., both the normal and anomalous classes of Fashion MNIST are greyscale, so OE using 80M Tiny Images will not be helpful. Pretrained features further improve OE in cases where it is suitable, e.g. CIFAR10. Pretraining, like outlier exposure, also relies on an external labelled dataset, but differently from OE, the external dataset is only required once - at the pretraining stage - and is not used again. Additionally, the same features are applicable to image domains very different from that of the pretraining dataset (e.g. Fashion MNIST - greyscale images, DIOR - aerial images, WBC - medical images, MVTec - industrial images). Self-supervised feature learning requires no external dataset at all, which can potentially be an advantage. While there might be image anomaly detection tasks where ImageNet-pretrained weights are not applicable, we saw no evidence for such cases after examining a broad spectrum of domains and datasets (Tab. 8). This indicates that the extra supervision of the ImageNet-pretrained weights comes at virtually no cost.

Can pretrained features boost the performance of RotNet-based methods? We did not find evidence that pretrained features improve the performance of RotNet-based AD methods such as Hendrycks et al. (2019b) (CIFAR10: 90.1% vs. 86.6% without and with pretraining). As can be seen in Tab. 3, pretrained features improve the auxiliary task performance on the normal data, but also on the anomalous samples. As such methods rely on a generalization gap between normal and anomalous samples, deep features actually reduce this gap, as a solution to the auxiliary task becomes feasible for both types of images. For a more detailed analysis see Appendix A.

4.2.2 FEATURE ADAPTATION METHODS

Benefits of feature adaptation: Feature adaptation aims to make the distribution of the normal samples more compact w.r.t. the anomalous samples. Our approach of finetuning pretrained features for compactness under EWC regularization significantly improves the performance over "raw" pretrained features (see Tab. 1).
While the distance from the center of the normal train samples is reduced for both normal and anomalous test samples (see Fig. 1), the average distance of anomalous test samples from the center typically remains larger than that of normal samples, in relative terms. This makes anomalies easier to detect by standard classifiers such as kNN. While PANDA-EWC may train for more than 7.8k minibatches without catastrophic collapse on CIFAR10, the performance of training without regularization usually peaks higher but collapses earlier. We therefore set our constant early stopping epoch such that the net trains for 2.3k minibatches on all datasets for comparison. Our PANDA-SES method usually achieves an anomaly score not far from the unregularized early stopping peak performance, but is most important in cases where unregularized training fails completely.

A comparison of PANDA against other adaptation methods: In Tab. 4 we compare PANDA against (i) JO (Perera & Patel, 2019) - co-training compactness with ImageNet classification, which requires ImageNet data at training time. We can see that PANDA-EWC always outperforms JO feature adaptation. (ii) PANDA early stopping (ImageNet pretraining + adaptation, with early stopping after a constant number of iterations) generally has higher performance than PANDA-EWC, but has severe collapse issues on some classes. (iii) PANDA-SES is similar to early stopping, but PANDA-SES does not collapse as badly on the CatsVsDogs dataset. We note that weighting equally the changes in all parameters (Σ_i (θ_i − θ*_i)²) achieves similar results to early stopping.

Which are the best layers to finetune? Fine-tuning all the layers is prone to feature collapse, even with continual learning (see Tab. 5). Finetuning Blocks 3 & 4, or 2, 3 & 4, results in similar performance. Finetuning only block 4 results in a very similar performance to linear whitening of the features according to the train samples (94.6 with whitening vs. 94.8 with finetuning only the last block). A similar effect can be seen in the original DeepSVDD architecture (see also Tab. 7, Appendix B). We therefore recommend finetuning Blocks 3 & 4.

DeepSVDD architectural changes: DeepSVDD (Ruff et al., 2018) proposes various architectural changes, such as removing the bias parameters from the network, to prevent collapse to trivial features. We found empirically that the results obtained by the constrained architecture were about the same as those achieved with simple whitening of the data (64.8% vs. 64.6%, see Tab. 7). We also ablated DeepSVDD by (re-)adding the biases into its LeNet architecture; this did not deteriorate its anomaly detection performance. Architectural modifications are not the focus of this work; further investigation into architectures less prone to feature collapse is left for future work.

4.2.3 ANOMALY SCORING FUNCTIONS

Does kNN improve over distance to the center? kNN achieves an improvement of around 2% on average w.r.t. distance to the center (CIFAR10: 94.2% vs 96.2%).

Can we improve over the linear complexity of kNN? A naive implementation of kNN has linear runtime complexity in the number of training samples. K-means with a small number of clusters gives a ∼1% decrease (CIFAR10: 94.9% vs 96.2%, with 10 means). We note that even for very large datasets, or many thousands of means, both kNN and K-means can run faster than real-time.

5 CONCLUSION AND OUTLOOK

We proposed an anomaly detection method that adapts pretrained features and mitigates or avoids catastrophic collapse.
We showed that our results significantly outperform current methods while addressing their limitations. We analysed the reasons for the strong performance of our method and related popular methods to the different stages of our framework. The main limitation of this work is the requirement for strong pretrained feature extractors. Much work was done on transferable image and text features, and it is likely that current extractors can be effective for obtaining features for time series and audio as well. Generic feature extractors are not currently available for tabular data; their development is an exciting direction for future work.

A PRETRAINED FEATURES, ROTNET AUXILIARY TASKS AND GENERALIZATION

Let us take a closer look at the application of RotNet-based methods for image anomaly detection. We will venture to understand why initializing RotNets with pretrained features may actually impair their anomaly detection performance. In such cases, a network for rotation classification is trained on normal samples, and used to classify the rotation (and translations) applied to a test rotated image. To score an anomaly, the image is deemed anomalous if its rotation prediction accuracy is worse than that of a typical normal image.

To correctly classify a rotation of a new image, the network may use traits within the image that are associated with its correct alignment. Such features may be associated with the normal class, or with the entire dataset (common to the normal and anomalous classes together). For illustrative purposes, let us consider a normal class with images containing a deer, and an anomalous class with images containing a horse. The horns of the deer may indicate the "upward" direction, but so does the position of the sky in the image, which is often sufficient to classify the rotation correctly. As shown in Tab. 3, when initialized with pretrained features, the RotNet achieves very good performance on the auxiliary tasks, both within and outside the normal class, indicating the use of more general traits that are common to more classes.

Although at first sight it may appear that the improved auxiliary task performance should improve the performance on anomaly detection, this is in fact not the case! The reason is that features that generalize better achieve better performance on the auxiliary task for anomalous data. The gap between the performance on the auxiliary tasks will therefore be smaller than with randomly-initialized networks - leading to degraded anomaly detection performance. For example, consider the illustrative example described above. A RotNet that "overfits" to work only on the normal class deer, relying on the horns of the deer, would classify rotations more accurately on deer images than horse images (as its main feature is horns). On the other hand, a RotNet that also uses more general traits can use the sky position for rotation angle prediction. In this case, it will achieve higher accuracy for both deer and horse images. The gap in performance is likely to be reduced, leading to lower anomaly detection success.

The above argument can be formulated using mutual information: in cases where the additional traits unique to the class do not add much information regarding the correct rotation over the general features common to many classes, the class will have limited mutual information with the predicted rotation as well (conditional on the information already given by traits common to the entire dataset).
When the conditional mutual information between the predicted rotation and the class traits decreases, we expect the predicted rotation to be less discriminative for anomaly detection, as we indeed see in Tab. 6.

It is interesting to note that using RotNet features for our transfer learning approach achieves inferior results to both MHRot and our method. Only through an ensemble of all rotations, as MHRot does, does it achieve strong performance comparable to the MHRot performance. MHRot achieved 89.7% in our re-implementation. Using the MHRot features as ψ0, we compute the kNN distance of the unadapted features between test set images and train set images transformed by the same transformation. Ensembling the 36 transformations - using the average kNN distance - yields 88.7%. Another metric is computing the average kNN distance between test data transformed under a specific transformation and the training set transformed by another transformation. Using the average same-transformation kNN distance minus the average different-transformation kNN distance achieves 89.8% - a little better than the RotNet performance.

B FEATURE ADAPTATION, DEEPSVDD AND FEATURE COLLAPSE

To understand whether DeepSVDD gains its significant performance from its pretrained features or from its feature adaptation, we tried to replace its feature adaptation by closed-form linear data whitening. For both pretrained features and anomaly scoring, we used the DeepSVDD original code (Ruff et al., 2018). We can see that a linear method such as data whitening achieves comparable results (Tab. 7). We believe that large architectures are required for meaningful feature adaptation.

C DETAILED DESCRIPTION OF DATASETS

Standard datasets: We evaluate our method on a set of commonly used datasets: CIFAR10 (Krizhevsky et al., 2009): Consists of RGB images of 10 object classes. Fashion MNIST (Xiao et al., 2017): Consists of grayscale images of 10 fashion item classes. CIFAR100 (Krizhevsky et al., 2009): We use the coarse-grained version that consists of 20 classes. DogsVsCats: High resolution color images of two classes: cats and dogs. The data were extracted from the ASIRRA dataset (Elson et al., 2007); we split each class into the first 10,000 images as train and the last 2,500 as test.

Small datasets: To further extend our results, we compared the methods on a number of small datasets from different domains: 102 Category Flowers & Caltech-UCSD Birds 200 (Nilsback & Zisserman, 2008; Wah et al., 2011): For each of those datasets we evaluated the methods using only each of the first 20 classes as normal, and using the entire test set for evaluation. MVTec (Bergmann et al., 2019): This dataset contains 15 different industrial products, with normal images of proper products for train and 2-9 types of manufacturing errors as anomalies. The anomalies in MVTec are in-class, i.e. the anomalous images come from the same class as the normal images, with subtle variations. As can be seen in the results in Tab. 2, self-supervised methods performed quite poorly on these datasets as they require many images to learn strong features. Simply using pretrained features was sufficient to obtain high accuracy (Tab. 2).

Symmetric datasets: We evaluated our method on datasets that contain symmetries, such as images that have no preferred angle (microscopy, aerial images; see Fig. 3): WBC (Zheng et al., 2018): We used the 4 big classes in "Dataset 1" of microscopy images of white blood cells, and an 80%/20% train-test split.
DIOR (Li et al., 2020): We preprocessed the DIOR aerial image dataset by taking the segmented objects in classes that have more than 50 images with size larger than 120 × 120 pixels. We see in Tab. 12 that for both symmetric datasets our method outperformed MHRot even more significantly. This experiment illustrates a weakness in self-supervised methods that need to exploit specific properties of the data, e.g. rotational symmetry. When such properties do not exist in the data, the performance of self-supervised methods is reduced. In this case, rotation prediction conveys no information on rotationally invariant images, and presumably all the prediction performance of MHRot comes from the translation prediction task, which can be less accurate.

D IMPLEMENTATION DETAILS

PANDA Optimization: We finetune the two last blocks of an ImageNet pretrained ResNet152 using an SGD optimizer with weight decay of w = 5 · 10^-5 and momentum of m = 0.9. We use gradient clipping at G = 10^-3. To have a comparable amount of training on the different datasets, we define the duration of each of our training runs using a constant number of minibatches, 32 samples each.

EWC: We use the Fisher information matrix as obtained by Kirkpatrick et al. (2017), as explained in Sec. 3. We weight the EWC loss with λ = 10^4. After obtaining the EWC regularization, we train our net on 7.8k minibatches.

Early stopping/Sample-wise early stopping: We save a copy of the net every 5 epochs. For early stopping we used the copy trained on 2.3k minibatches. For sample-wise early stopping we try all copies trained on up to 150k image samples.

Anomaly scoring: Unless specified otherwise, we score the anomalies according to the kNN method with k = 2 nearest neighbours. When comparing different networks as in the PANDA-SES method, we normalize each set of features by the typical kNN distance of its normal train features. To obtain the typical normal distance, we would like to compute the average on the normal samples. However, computing the distance between normal training data has the issue that each point is its own nearest neighbour. Instead, we split the train set features (90% vs. 10%), and compute the kNN distance between the 10% validation images and the 90% gallery images.

PANDA Outlier Exposure: The method was described in Sec. 3. For synthetic outlier images, we used the first 48k images of 80 Million Tiny Images (Torralba et al., 2008) with CIFAR10 & CIFAR100 images removed. We finetune the last block of an ImageNet pretrained ResNet152 with an SGD optimizer using 75 epochs and the following parameters: learning rate is 0.1 with gradient clipping, momentum is 0.9, and no weight decay.

Baselines: We compare to the following methods: OC-SVM: One-class SVM with the RBF kernel. The hyper-parameters (ν ∈ {0.1, ..., 0.9}, γ ∈ {2^-7, ..., 2^2}) were optimized to maximize AUROC. DeepSVDD: We resize all the images to 32 × 32 pixels and use the official PyTorch implementation with the CIFAR10 configuration. MHRot (Hendrycks et al., 2019b): An improved version of the original RotNet approach. For high-resolution images we used the current GitHub implementation. For low resolution images, we modified the code to the architecture described in the paper, replicating the numbers in the paper on CIFAR10. Outlier Exposure (MHRot): We use the outlier exposure performance as reported in Hendrycks et al. (2019b).
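Putting the Appendix D details together, the following is a hedged sketch of how PANDA-SES scoring could be implemented: each saved checkpoint scores a test image by its kNN distance, the score is normalized by the typical normal train distance obtained from a 90%/10% split, and the maximum over checkpoints is taken. Function and variable names are illustrative, not from the authors' code.

```python
# Illustrative sketch of PANDA-SES scoring (Sec. 3 and Appendix D).
# `checkpoints` is a list of feature extractors saved at fixed intervals
# during adaptation; for clarity, features are computed in one call here,
# though in practice extraction would be batched.
import torch

@torch.no_grad()
def ses_scores(checkpoints, train_images, test_images, k=2):
    per_checkpoint = []
    for psi in checkpoints:
        train_f = psi(train_images)              # normal train features
        test_f = psi(test_images)
        # Typical normal kNN distance from a 10%/90% validation/gallery
        # split, avoiding each point being its own nearest neighbour.
        n_val = max(1, len(train_f) // 10)
        val, gallery = train_f[:n_val], train_f[n_val:]
        s_t = torch.cdist(val, gallery).topk(k, dim=1, largest=False).values.mean()
        d = torch.cdist(test_f, train_f).topk(k, dim=1, largest=False).values.mean(dim=1)
        per_checkpoint.append(d / s_t)           # normalized per-sample score
    # Final anomaly score: maximum normalized score over all checkpoints.
    return torch.stack(per_checkpoint).max(dim=0).values
```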
1. What is the focus of the paper regarding image anomaly detection?
2. What are the strengths of the proposed approach, particularly in addressing catastrophic collapse?
3. What are the weaknesses of the paper, especially regarding the novelty of the proposed solutions?
4. Do you have any concerns or suggestions regarding the experiments and comparisons with other works?
5. Are there any minor issues or typos in the paper that need attention?
Review
Review

Methodology: The paper studies the problem of pre-trained model adaptation for image anomaly detection. It is argued that previous model adaptation schemes either lost model capacity (DeepSVDD) or required extra data with marginal improvement (joint optimization). To alleviate these issues, the paper proposes to regularize the model adaptation process by either 1) sample-wise early stopping, or 2) regularizing the model deviation from the initial one. For sample-wise stopping, a sequence of adapted models is recorded at fixed intervals during the adaptation process to produce anomaly scores for each query example, and each score is further normalized with those of the normal examples in the training set using the same model. The final anomaly score is the maximum of these normalized scores of the query example. For model deviation regularization, an extra term is added to the original compactness loss to quantify the deviation in the model natural gradient space. Results on several image anomaly benchmarks show that the proposed method improved on both self-supervised and naive model adaptation methods.

Pros: The paper approaches the problem of catastrophic collapse for image anomaly detection from a reasonable perspective, and the proposed solutions are shown to achieve the desired effects.

Cons: The overall strategy and most observations are mostly known from other similar applications. Early stopping is one of the basic tricks for deep model fine-tuning, and the use of the natural gradient for model deviation (elastic weight consolidation (EWC)) was proposed in (Kirkpatrick et al., 2017). All the rest are off-the-shelf implementations for image anomaly detection. Major observations are also not surprising: "We can therefore conclude that ImageNet-pretrained features typically have significant advantages over self-supervised features." "This shows that one of the main concerns of using pre-trained features, namely, generalizing to distant domains is not an issue in practice." These are all well known results in many other image modeling applications, and I have a hard time finding much novelty in these aspects. Interesting cases are not tested enough to justify the advantages of the proposed methods. The proposed sample-wise early stopping (and EWC too) compares only slightly favorably to the naive one that stops after a fixed number of iterations (Fig. 4). What about some other baseline adaptive early stopping schemes? E.g., cross-validating the number of fine-tuning iterations on a validation set, and so forth. I suspect that these easier early stopping methods would work comparably well at a far lower cost than the proposed SES, as the latter needs to do T times the number of inference passes for a single example. If so, why bother using the more expensive SES?

Minor comments: It looks to me that there are typos in some critical parts of the paper. eq (3), it should be L(x, y; \theta), instead of L(x,y); \theta, right? The latter does not make sense to me… eq (5), it should be log(1-\sigma(w\phi(x))) instead of log(\sigma(1-w\phi(x))), right?
ICLR
Title
PANDA - Adapting Pretrained Features for Anomaly Detection

Abstract
Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration; ii) elastic regularization inspired by continual learning. In addition, we conduct a thorough investigation of ImageNet-pretrained features for one-class anomaly detection. Our method, PANDA, outperforms the state-of-the-art in the one-class and outlier exposure settings (CIFAR10: 96.2% vs. 90.1% and 98.9% vs. 95.6%).

1 INTRODUCTION

Detecting anomalous patterns in data is of key importance in science and industry. In the computational anomaly detection task, the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous. There are multiple anomaly detection settings investigated in the literature, corresponding to different training conditions. In this work, we deal with two settings: i) when only normal images are used for training; ii) Outlier Exposure (OE), where an external dataset simulating the anomalies is available.

In recent years, deep learning methods have been introduced for anomaly detection, typically extending classical methods with deep neural networks. Different auxiliary tasks (e.g. autoencoders or rotation classification) are used to learn representations of the data, while a great variety of anomaly criteria are then used to determine if a given sample is normal or anomalous. An important issue for current methods is the reliance on limited normal training data for representation learning, which limits the quality of learned representations. A solution, which we investigate in this work, is to pre-train features on a large external dataset and use the features for anomaly detection. As there is likely to be some mismatch between the external dataset and the task of anomaly detection on the target distribution, feature adaptation is an attractive option. Unfortunately, feature adaptation for anomaly detection often suffers from catastrophic collapse - a form of deterioration of the pre-trained features, where all the samples, including anomalous, are mapped to the same point. DeepSVDD (Ruff et al., 2018) proposed to overcome collapse by removing biases from the model architecture, but this restricts network expressivity and limits the pre-trained models that can be borrowed off-the-shelf. Perera & Patel (2019) proposed to jointly train anomaly detection with the original task, which has several limitations and achieves only limited adaptation success.

We propose two techniques to overcome catastrophic collapse: i) an adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion; ii) an elastic regularization, motivated by continual learning, that postpones the collapse. We also provide an extensive evaluation of ImageNet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that we outperform the state-of-the-art by a wide margin: e.g. CIFAR10 results: 96.2% vs.
90.1% without outlier exposure and 98.9% vs. 95.6% with outlier exposure. We present several insightful critical analyses: i) We show that pre-trained features strictly dominate current self-supervised RotNet-based feature learning methods. We discuss the relative merits of each paradigm and conclude that for most practical purposes, using pre-trained features is preferable. ii) We analyse the results of the popular method, DeepSVDD. We discover that the feature adaptation of its current architecture, which is designed to prevent collapse, does not improve over simple data whitening. iii) We show that collapse can be avoided using early stopping, and suggest an appropriate unsupervised criterion. We also show it can be mitigated using continual learning.

1.1 RELATED WORK

Classical anomaly detection: The main categories of classical anomaly detection methods are: i) reconstruction-based: compress the training data using a bottleneck, and use a reconstruction loss as an anomaly criterion (e.g. (Candès et al., 2011; Jolliffe, 2011), K nearest neighbors (Eskin et al., 2002) and K-means (Hartigan & Wong, 1979)); ii) probabilistic: modeling the probability density function and labeling unlikely samples as anomalous (e.g. ensembles of Gaussian Mixture Models (Glodek et al., 2013), kernel density estimation (Latecki et al., 2007)); iii) one-class classification (OCC): finding a separating manifold between normal data and the rest of the input space (e.g. One-class SVM (Scholkopf et al., 2000)).

Deep learning methods: The introduction of deep learning has affected image anomaly detection in two ways: extension of classical methods with deep representations, and novel self-supervised deep methods. Reconstruction-based methods have been enhanced by learning deep autoencoder-based bottlenecks (D'Oro et al., 2019), which can provide better models of image data. Deep methods extended classical methods by creating better representations of the data for parametric assumptions about probabilities, a combination of reconstruction and probabilistic methods (such as DAGMM (Zong et al., 2018)), or in a combination with an OCC (Ruff et al., 2018). Novel deep methods have also been proposed for anomaly detection, including GAN-based methods (Zong et al., 2018). Another set of novel deep methods uses auxiliary self-supervised learning for anomaly detection. The seminal work by Golan & El-Yaniv (2018) was later extended by Hendrycks et al. (2019b) and Bergman & Hoshen (2020).

Transferring pretrained representations: Learning deep features requires extensive datasets, preferably with labels. An attractive property of deep neural networks is that representations learned on very extensive datasets can be transferred to data-poor tasks. Specifically, deep neural representations trained on the ImageNet dataset have been shown by Huh et al. (2016) to significantly boost performance on other datasets that are only vaguely related to some of the ImageNet classes. This can be performed with and without finetuning. Although much recent progress has been made on self-supervised feature learning (Gidaris et al., 2018; Chen et al., 2020), such methods are typically outperformed by transferred pretrained features. Transferring ImageNet pre-trained features for out-of-distribution detection has been proposed by Hendrycks et al. (2019a). Similar pre-training for one-class classification has been proposed by Perera & Patel (2019); however, they require joint optimization with the original task.
2 BACKGROUND: FEATURE ADAPTATION FOR ANOMALY DETECTION

2.1 A THREE-STAGE FRAMEWORK

We present our general framework in which we examine several adaptation-based anomaly detection methods, including our method. Let us assume that we are given a set D_train of normal training samples: x_1, x_2, ..., x_N. The framework consists of three steps:

Feature extractor pretraining: A pre-trained feature extractor ψ0 is typically learned using self-supervised learning (auto-encoding, rotation or jigsaw prediction). We denote the loss function of the auxiliary task L_pretrain. The auxiliary task can be learned either on the training set D_train or on an external dataset D_pretrain (such as ImageNet). In the latter case, the pretrained extractor can be obtained off-the-shelf. We will investigate and analyse the merits of each choice in Sec. 4.2.

Feature adaptation: Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data. We denote the feature extractor after adaptation ψ.

Anomaly scoring: Having adapted the features for anomaly detection, we extract the features ψ(x_1), ψ(x_2), ..., ψ(x_N) of the training set samples, and we proceed to learn a scoring function, which describes how anomalous a sample is. Typically, the scoring function seeks to measure the density of normal data around the test sample ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low density regions.

2.2 EXISTING FEATURE-ADAPTATION METHODS

In this section, we review two seminal methods that use feature adaptation for anomaly detection:

DeepSVDD: Ruff et al. (2018) suggest first training an autoencoder E on the normal-only train images. The encoder is then used as the initial feature extractor ψ0(x) = E(x). As the features of the encoder are not specifically adapted to anomaly detection, DeepSVDD adapts ψ on the training data. The adaptation takes place by minimizing the compactness loss:

L_compact = Σ_{x∈D_train} ‖ψ(x) − c‖²   (1)

Where c is a constant vector, typically the average of ψ0(x) on the training set. However, the authors were concerned about the trivial solution ψ = c, and suggested architectural restrictions to mitigate it, most importantly removing the biases from all layers. We empirically show that the effect of adaptation of the features in DeepSVDD does not outperform simple feature whitening (see Sec. 4.2.2).

Joint optimization (JO): Perera & Patel (2019) proposed to use a deep feature extractor trained for object classification on the ImageNet dataset. Due to fear of "learning a trivial solution due to the absence of a penalty for miss-classification", the method does not adapt by finetuning on the compactness loss only. Instead, they relaxed the task setting by assuming that a number (∼50k) of labelled original ImageNet images, D_pretrain, are still available at adaptation time. They proposed to train the features ψ under the compactness loss jointly with the original ImageNet classification linear layer W and its classification loss, here the CE loss with the true label ℓ_pretrain(p, y) = −log(p_y):

L_Joint = Σ_{(x,y)∈D_pretrain} ℓ_pretrain(softmax(Wψ(x)), y) + α Σ_{x∈D_train} ‖ψ(x) − c‖²   (2)

Where W is the final linear classification layer and α is a hyper-parameter weighting the two losses.
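For concreteness, a minimal sketch of the compactness-loss adaptation (Eq. 1), shared by the methods above, is given below. The center c is the average of the initial features on the training set; the optimizer settings and step count are illustrative placeholders, not the exact DeepSVDD or JO recipe.

```python
# Sketch of compactness-loss adaptation (Eq. 1). Hyper-parameters here
# (learning rate, step count) are placeholders, not the paper's values.
import torch

def adapt_compactness(psi, train_loader, n_steps=2300, lr=1e-2):
    with torch.no_grad():  # c: average of the initial features psi_0(x)
        c = torch.cat([psi(x) for x in train_loader]).mean(dim=0)
    opt = torch.optim.SGD(psi.parameters(), lr=lr, momentum=0.9)
    step = 0
    while step < n_steps:
        for x in train_loader:
            loss = ((psi(x) - c) ** 2).sum(dim=1).mean()  # L_compact
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= n_steps:
                break
    return psi, c
```

Run as-is, this loop is exactly the procedure that is prone to catastrophic collapse; the regularization and early stopping of Sec. 3 are added on top of it.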
We note that the JO method has two main weaknesses: i) it requires retaining a significant number of the original training images, which can be storage intensive; ii) jointly training the two tasks may reduce the anomaly detection task accuracy, which is the only task of interest in this context. Our proposed method, PANDA, is able to sidestep these issues.

3 PANDA: FEATURE ADAPTATION FOR ANOMALY DETECTION

We present PANDA (Pre-trained Anomaly Detection Adaptation), a new method for anomaly detection in images. The core of our method lies in adapting general pre-trained features to anomaly detection on the target distribution.

Pre-trained feature extractor: Our method is agnostic to the specific pretrained feature extractor. We investigated different choices of the initial pre-trained feature extractor ψ0 and found that ImageNet pretrained features achieve better results. The assumption of the availability of the ImageNet trained feature extractor and its merits will be discussed at length in Sec. 4.2.

Feature Adaptation: Similarly to SVDD and Joint Optimization, we also use the compactness loss (Eq. 1) to adapt the general pre-trained features to the task of anomaly detection on the target distribution. Instead of constraining the architecture or introducing external data into the adaptation procedure, we tackle catastrophic collapse directly. The main issue is that the optimal solution of the compactness loss can result in "collapse", where all possible input values are mapped to the same point (ψ(x) = c, ∀x). Learning such features will not be useful for anomaly detection, as both normal and anomalous images will be mapped to the same output, preventing separability. The issue is broader than the trivial "collapsed" solution after full convergence; it is the more general issue of feature deterioration, where the original good properties of the pretrained features are lost. Even a non-trivial solution might not retain the full discriminative ability of the original features, which are nonetheless important for anomaly detection.

To avoid this collapse, we suggest two options: (i) finetuning the pretrained extractor with the compactness loss (Eq. 1) and using sample-wise early stopping; (ii) when collapse happens prematurely, before any significant adaptation happens, we suggest mitigating it using a continual-learning-inspired adaptive regularization.

Sample-wise early stopping (PANDA-SES): Early stopping is one of the simplest methods used to regularize neural networks. While stopping the training process after a constant number of iterations (we use 2.3k minibatches) helps to control the collapse of the original features in most examined datasets (Sec. 4.2), in other cases collapse occurs earlier in the training process - the best number of early stopping iterations may vary between datasets. We thus propose "sample-wise early stopping" (SES). The intuition for the method can be obtained from Fig. 1. We can see that anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images. We thus propose to save checkpoints of our network at fixed intervals during the training process - corresponding to different early stopping iterations (ψ1, ψ2, ..., ψT); for each network ψt we compute the average loss st on the training set images. During inference, we score a target image x using each model, ft = ψt(x), and normalize the score by the relevant average score st.
We set the maximal normalized score as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features ft and the normal train set average score st, without seeing the labels of any other test set samples.

Continual Learning (PANDA-EWC): We propose a new solution for overcoming premature feature collapse that draws inspiration from the field of continual learning. The task of continual learning tackles learning new tasks without forgetting the previously learned ones. We note however that our task is not identical to standard continual learning as: i) we deal with the one-class classification setting, whereas continual learning typically deals with multi-class classification; ii) we aim to avoid forgetting the expressivity of the features, but do not particularly care if the actual classification performance on the old task is degraded. A simple solution for preventing feature collapse is regularization of the change in value of the weights of the feature extractor ψ from those of the pre-trained extractor ψ0. However, this solution is lacking, as the features are more sensitive to some weights than others and this can be "exploited" by the adaptation method.

Following ideas from continual learning, we use elastic weight consolidation (EWC) (Kirkpatrick et al., 2017). Using a number of mini-batches (we use 100) of pretraining on the auxiliary task, we compute the diagonal of the Fisher information matrix F for all weight parameters of the network. Note that this only needs to happen once at the end of the pretraining stage and does not need to be repeated. The value of the Fisher matrix for diagonal element θ′ is given by:

F_θ′ = E_{(x,y)∈D_pretrain} [ ( ∂/∂θ L_pretrain(x, y); θ′ )² |_θ ]   (3)

We follow (Kirkpatrick et al., 2017) in using the diagonal of the Fisher information matrix F_θi to weight the Euclidean distance of the change between each network parameter θi ∈ ψ0 and its corresponding parameter θ*i ∈ ψ. This weighted distance can be interpreted as a measure of the curvature of the loss landscape as a function of the parameters - larger values imply high curvature, inelastic weights. We use this regularization in combination with the compactness loss; the losses are weighted by the factor λ, which is a hyperparameter of the method (we always use λ = 10^4):

L_θ = L_compact(θ) + (λ/2) · Σ_i F_θi (θi − θ*i)²   (4)

Network ψ is initialized with the parameters of the pretrained extractor ψ0 and trained with SGD.

Anomaly scoring: Given strong features and appropriate adaptation, our transformed data typically follow the standard anomaly detection assumption, i.e. high density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. Our method performs better with strong non-parametric anomaly scoring methods. We evaluate several anomaly scoring methods: i) the Euclidean distance to the mean of the training features; ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images; iii) computing the K-means of the training set features, and computing the distance from the target sample features to the nearest mean. See Sec. 4.2.3 for comparison results.

Outlier Exposure: An extension of the typical image anomaly detection task (Hendrycks et al., 2018) assumes the existence of an auxiliary dataset of images D_OE, which are more similar to the anomalies than the normal data.
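A hedged sketch of the EWC machinery of Eqs. 3-4 above follows: the diagonal Fisher is accumulated once from a few pretraining minibatches, and the weighted quadratic penalty is then added to the compactness loss at every adaptation step. `pretrain_loss` stands in for the auxiliary pretraining objective and, like all names here, is an assumption rather than the authors' code.

```python
# Illustrative sketch of EWC-regularized adaptation (Eqs. 3-4).
# `pretrain_loss(psi, x, y)` is a placeholder for L_pretrain.
import torch

def fisher_diagonal(psi, pretrain_loss, batches):
    # Diagonal of the Fisher information matrix (Eq. 3), estimated once
    # at the end of pretraining from a small number of minibatches.
    fisher = {n: torch.zeros_like(p) for n, p in psi.named_parameters()}
    for x, y in batches:  # the paper uses 100 minibatches
        psi.zero_grad()
        pretrain_loss(psi, x, y).backward()
        for n, p in psi.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / len(batches)
    return fisher

def ewc_objective(psi, psi0_params, fisher, compact_loss, lam=1e4):
    # Eq. 4: compactness loss plus an elastic penalty on parameter drift,
    # weighted by the Fisher diagonal (lambda = 1e4 in the paper).
    penalty = sum((fisher[n] * (p - psi0_params[n]) ** 2).sum()
                  for n, p in psi.named_parameters())
    return compact_loss + (lam / 2) * penalty
```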
1. How does the paper define normal and anomaly in images?
2. Why does the paper argue that adapting pre-trained features for anomaly detection suffers from catastrophic collapse?
3. Can the proposed method work for other computer vision tasks or very related tasks, such as video anomaly detection?
4. What is the technical contribution of the paper, and how significant is it?
5. Is the performance of the proposed method on five benchmarks due to the method's technical contribution or other factors?
Review
Review

The paper lacks a clear definition of normal and anomaly in images. For example, this could be illustrated in the intro or in each dataset separately. Qualitative examples could also be helpful. Otherwise, it is hard to understand why each proposed method works, and quantitative results alone are not enough to understand the proposed method.

The paper argues that adapting pre-trained features for anomaly detection suffers from catastrophic collapse, but why this happens in anomaly detection, or why this problem is specific to anomaly detection, is only explained in a few sentences. Another way to ask the question: is it possible that the proposed method also works for other computer vision tasks or a very related task (such as video anomaly detection)? Why?

The proposed method achieved good results on five benchmarks; however, its technical contribution is not significant. The contributions mentioned in the paper are more about providing some good practice and analysis about adapting pretrained features for image anomaly detection, and its generalization ability is also not clear.

Overall, I would lean to rating 5 before the rebuttal.
ICLR
Title PANDA - Adapting Pretrained Features for Anomaly Detection

Abstract Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration ii) elastic regularization inspired by continual learning. In addition, we conduct a thorough investigation of ImageNet-pretrained features for one-class anomaly detection. Our method, PANDA, outperforms the state-of-the-art in the one-class and outlier exposure settings (CIFAR10: 96.2% vs. 90.1% and 98.9% vs. 95.6%).

1 INTRODUCTION

Detecting anomalous patterns in data is of key importance in science and industry. In the computational anomaly detection task, the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous. There are multiple anomaly detection settings investigated in the literature, corresponding to different training conditions. In this work, we deal with two settings: i) when only normal images are used for training ii) Outlier Exposure (OE), where an external dataset simulating the anomalies is available.

In recent years, deep learning methods have been introduced for anomaly detection, typically extending classical methods with deep neural networks. Different auxiliary tasks (e.g. autoencoders or rotation classification) are used to learn representations of the data, while a great variety of anomaly criteria are then used to determine if a given sample is normal or anomalous. An important issue for current methods is the reliance on limited normal training data for representation learning, which limits the quality of learned representations. A solution, which we investigate in this work, is to pre-train features on a large external dataset, and use the features for anomaly detection. As there is likely to be some mismatch between the external dataset and the task of anomaly detection on the target distribution, feature adaptation is an attractive option. Unfortunately, feature adaptation for anomaly detection often suffers from catastrophic collapse - a form of deterioration of the pre-trained features, where all the samples, including anomalous ones, are mapped to the same point. DeepSVDD (Ruff et al., 2018) proposed to overcome collapse by removing biases from the model architecture, but this restricts network expressiveness and limits the pre-trained models that can be borrowed off-the-shelf. Perera & Patel (2019) proposed to jointly train anomaly detection with the original task, which has several limitations and achieves only limited adaptation success.

We propose two techniques to overcome catastrophic collapse: i) an adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion ii) an elastic regularization, motivated by continual learning, that postpones the collapse. We also provide an extensive evaluation of ImageNet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that we outperform the state-of-the-art by a wide margin: e.g. CIFAR10 results: 96.2% vs.
90.1% without outlier exposure and 98.9% vs. 95.6% with outlier exposure.

We present several insightful critical analyses: i) We show that pre-trained features strictly dominate current self-supervised RotNet-based feature learning methods. We discuss the relative merits of each paradigm and conclude that for most practical purposes, using pre-trained features is preferable. ii) We analyse the results of the popular method, DeepSVDD. We discover that the feature adaptation of its current architecture, which is designed to prevent collapse, does not improve over simple data whitening. iii) We show that collapse can be avoided using early stopping, and suggest an appropriate unsupervised criterion. We also show it can be mitigated using continual learning.

1.1 RELATED WORK

Classical anomaly detection: The main categories of classical anomaly detection methods are: i) reconstruction-based: compress the training data using a bottleneck, and use a reconstruction loss as an anomaly criterion (e.g. (Candès et al., 2011; Jolliffe, 2011), K nearest neighbors (Eskin et al., 2002) and K-means (Hartigan & Wong, 1979)), ii) probabilistic: modeling the probability density function and labeling unlikely samples as anomalous (e.g. Ensembles of Gaussian Mixture Models (Glodek et al., 2013), kernel density estimation (Latecki et al., 2007)) iii) one-class classification (OCC): finding a separating manifold between normal data and the rest of input space (e.g. One-class SVM (Scholkopf et al., 2000)).

Deep learning methods: The introduction of deep learning has affected image anomaly detection in two ways: extension of classical methods with deep representations, and novel self-supervised deep methods. Reconstruction-based methods have been enhanced by learning deep autoencoder-based bottlenecks (D'Oro et al., 2019), which can provide better models of image data. Deep methods extended classical methods by creating better representations of the data for parametric assumptions about probabilities, a combination of reconstruction and probabilistic methods (such as DAGMM (Zong et al., 2018)), or a combination with an OCC (Ruff et al., 2018). Novel deep methods have also been proposed for anomaly detection, including GAN-based methods (Zong et al., 2018). Another set of novel deep methods uses auxiliary self-supervised learning for anomaly detection. The seminal work by Golan & El-Yaniv (2018) was later extended by Hendrycks et al. (2019b) and Bergman & Hoshen (2020).

Transferring pretrained representations: Learning deep features requires extensive datasets, preferably with labels. An attractive property of deep neural networks is that representations learned on very extensive datasets can be transferred to data-poor tasks. Specifically, deep neural representations trained on the ImageNet dataset have been shown by Huh et al. (2016) to significantly boost performance on other datasets that are only vaguely related to some of the ImageNet classes. This can be performed with and without finetuning. Although much recent progress has been made on self-supervised feature learning (Gidaris et al., 2018; Chen et al., 2020), such methods are typically outperformed by transferred pretrained features. Transferring ImageNet pre-trained features for out-of-distribution detection has been proposed by Hendrycks et al. (2019a). Similar pre-training for one-class classification has been proposed by Perera & Patel (2019); however, they require joint optimization with the original task.
2 BACKGROUND: FEATURE ADAPTATION FOR ANOMALY DETECTION

2.1 A THREE-STAGE FRAMEWORK

We present our general framework in which we examine several adaptation-based anomaly detection methods, including our method. Let us assume that we are given a set Dtrain of normal training samples: x1, x2, ..., xN. The framework consists of three steps:

Feature extractor pretraining: A pre-trained feature extractor ψ0 is typically learned using self-supervised learning (auto-encoding, rotation or jigsaw prediction). We denote the loss function of the auxiliary task Lpretrain. The auxiliary task can be learned either on the training set Dtrain or on an external dataset Dpretrain (such as ImageNet). In the latter case, the pretrained extractor can be obtained off-the-shelf. We will investigate and analyse the merits of each choice in Sec. 4.2.

Feature adaptation: Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data. We denote the feature extractor after adaptation ψ.

Anomaly scoring: Having adapted the features for anomaly detection, we extract the features ψ(x1), ψ(x2), ..., ψ(xN) of the training set samples and proceed to learn a scoring function, which describes how anomalous a sample is. Typically, the scoring function seeks to measure the density of normal data around the test sample ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low-density regions.

2.2 EXISTING FEATURE-ADAPTATION METHODS

In this section, we review two seminal methods that use feature adaptation for anomaly detection:

DeepSVDD: Ruff et al. (2018) suggest to first train an autoencoder E on the normal-only train images. The encoder is then used as the initial feature extractor ψ0(x) = E(x). As the features of the encoder are not specifically adapted to anomaly detection, DeepSVDD adapts ψ on the training data. The adaptation takes place by minimizing the compactness loss:

$$\mathcal{L}_{compact} = \sum_{x \in D_{train}} \|\psi(x) - c\|^2 \quad (1)$$

where c is a constant vector, typically the average of ψ0(x) on the training set. However, the authors were concerned about the trivial solution ψ = c, and suggested architectural restrictions to mitigate it, most importantly removing the biases from all layers. We empirically show that the adaptation of the features in DeepSVDD does not outperform simple feature whitening (see Sec. 4.2.2).

Joint optimization (JO): Perera & Patel (2019) proposed to use a deep feature extractor trained for object classification on the ImageNet dataset. Due to fear of "learning a trivial solution due to the absence of a penalty for miss-classification", the method does not adapt by finetuning on the compactness loss only. Instead, they relaxed the task setting by assuming that a number (∼50k) of labelled original ImageNet images, Dpretrain, are still available at adaptation time. They proposed to train the features ψ under the compactness loss jointly with the original ImageNet classification linear layer W and its classification loss, here the cross-entropy loss with the true label $\ell_{pretrain}(p, y) = -\log(p_y)$:

$$\mathcal{L}_{Joint} = \sum_{(x,y) \in D_{pretrain}} \ell_{pretrain}(\mathrm{softmax}(W\psi(x)), y) + \alpha \sum_{x \in D_{train}} \|\psi(x) - c\|^2 \quad (2)$$

where W is the final linear classification layer and α is a hyper-parameter weighting the two losses.
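Both methods above revolve around the compactness term of Eq. 1. As an illustration, a minimal PyTorch sketch of adaptation under this loss follows. It is our own hedged reconstruction, not the authors' code: the extractor, the loader (assumed to yield (image, label) batches), and the single-pass loop are placeholders, and unregularized training like this will eventually collapse.

```python
import copy
import torch

def adapt_compactness(psi0, train_loader, n_batches, lr=1e-2):
    """Finetune a copy of the pretrained extractor psi0 with the compactness loss (Eq. 1)."""
    psi0 = psi0.eval()
    psi = copy.deepcopy(psi0).train()
    # c is fixed to the mean of the initial features over the training set.
    with torch.no_grad():
        c = torch.cat([psi0(x) for x, _ in train_loader]).mean(dim=0)
    opt = torch.optim.SGD(psi.parameters(), lr=lr, momentum=0.9)
    for i, (x, _) in enumerate(train_loader):  # loop over multiple epochs in practice
        if i >= n_batches:
            break
        loss = ((psi(x) - c) ** 2).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return psi, c
```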
We note that the JO method has two main weaknesses: i) it requires retaining a significant number of the original training images, which can be storage intensive; ii) jointly training the two tasks may reduce the anomaly detection accuracy, which is the only task of interest in this context. Our proposed method, PANDA, is able to sidestep these issues.

3 PANDA: FEATURE ADAPTATION FOR ANOMALY DETECTION

We present PANDA (Pre-trained Anomaly Detection Adaptation), a new method for anomaly detection in images. The core of our method lies in adapting general pre-trained features to anomaly detection on the target distribution.

Pre-trained feature extractor: Our method is agnostic to the specific pretrained feature extractor. We investigated different choices of the initial pre-trained feature extractor ψ0 and found that ImageNet-pretrained features achieve better results. The assumption of the availability of the ImageNet-trained feature extractor and its merits will be discussed at length in Sec. 4.2.

Feature adaptation: Similarly to SVDD and Joint Optimization, we also use the compactness loss (Eq. 1) to adapt the general pre-trained features to the task of anomaly detection on the target distribution. Instead of constraining the architecture or introducing external data into the adaptation procedure, we tackle catastrophic collapse directly. The main issue is that the optimal solution of the compactness loss can result in "collapse", where all possible input values are mapped to the same point (ψ(x) = c, ∀x). Learning such features will not be useful for anomaly detection, as both normal and anomalous images will be mapped to the same output, preventing separability. The issue is broader than the trivial "collapsed" solution after full convergence; it is the more general issue of feature deterioration, where the original good properties of the pretrained features are lost. Even a non-trivial solution might not require the full discriminative ability of the original features, which are nonetheless important for anomaly detection.

To avoid this collapse, we suggest two options: (i) finetuning the pretrained extractor with the compactness loss (Eq. 1) and using sample-wise early stopping; (ii) when collapse happens prematurely, before any significant adaptation happens, we suggest mitigating it using a continual-learning-inspired adaptive regularization.

Sample-wise early stopping (PANDA-SES): Early stopping is one of the simplest methods used to regularize neural networks. While stopping the training process after a constant number of iterations (we use 2.3k minibatches) helps to control the collapse of the original features in most examined datasets (Sec. 4.2), in other cases collapse occurs earlier in the training process - the best number of early stopping iterations may vary between datasets. We thus propose "sample-wise early stopping" (SES). The intuition for the method can be obtained from Fig. 1. We can see that anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images. We thus propose to save checkpoints of our network at fixed intervals during the training process - corresponding to different early stopping iterations (ψ1, ψ2, ..., ψT); for each network ψt we compute the average loss st on the training set images. During inference, we score a target image x using each model ψt(x) = ft, and normalize the score by the relevant average score st.
We set the maximal normalized score as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features ft and the normal train set average score st, without seeing the labels of any other test set samples.

Continual Learning (PANDA-EWC): We propose a new solution for overcoming premature feature collapse that draws inspiration from the field of continual learning. The task of continual learning tackles learning new tasks without forgetting the previously learned ones. We note however that our task is not identical to standard continual learning as: i) we deal with the one-class classification setting whereas continual learning typically deals with multi-class classification ii) we aim to avoid forgetting the expressivity of the features but do not particularly care if the actual classification performance on the old task is degraded. A simple solution for preventing feature collapse is regularization of the change in value of the weights of the feature extractor ψ from those of the pre-trained extractor ψ0. However, this solution is lacking, as the features are more sensitive to some weights than others, and this can be "exploited" by the adaptation method.

Following ideas from continual learning, we use elastic weight consolidation (EWC) (Kirkpatrick et al., 2017). Using a number of mini-batches (we use 100) of pretraining on the auxiliary task, we compute the diagonal of the Fisher information matrix F for all weight parameters of the network. Note that this only needs to happen once, at the end of the pretraining stage, and does not need to be repeated. The value of the Fisher matrix for diagonal element θ′ is given by:

$$F_{\theta'} = \mathbb{E}_{(x,y) \in D_{pretrain}} \left[ \left( \frac{\partial}{\partial \theta'} \mathcal{L}_{pretrain}(x, y) \right)^2 \Big|_{\theta} \right] \quad (3)$$

We follow Kirkpatrick et al. (2017) in using the diagonal of the Fisher information matrix Fθi to weight the Euclidean distance of the change between each network parameter θi ∈ ψ0 and its corresponding parameter θ∗i ∈ ψ. This weighted distance can be interpreted as a measure of the curvature of the loss landscape as a function of the parameters - larger values imply high curvature, inelastic weights. We use this regularization in combination with the compactness loss; the losses are weighted by the factor λ, which is a hyperparameter of the method (we always use λ = 10^4):

$$\mathcal{L}_\theta = \mathcal{L}_{compact}(\theta) + \frac{\lambda}{2} \cdot \sum_i F_{\theta_i} (\theta_i - \theta_i^*)^2 \quad (4)$$

The network ψ is initialized with the parameters of the pretrained extractor ψ0 and trained with SGD.
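A hedged sketch of the EWC-regularized adaptation of Eqs. 3-4 is given below. It is illustrative only: the pretraining loss function, the loader, and the dictionary-based bookkeeping are our assumptions, not the authors' implementation.

```python
import torch

def fisher_diagonal(psi, pretrain_loss, loader, n_batches=100):
    """Diagonal Fisher estimate (Eq. 3): averaged squared gradients of the
    pretraining loss, computed once at the end of the pretraining stage."""
    fisher = {n: torch.zeros_like(p) for n, p in psi.named_parameters()}
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        psi.zero_grad()
        pretrain_loss(psi, x, y).backward()
        for n, p in psi.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / n_batches
    return fisher

def ewc_penalty(psi, psi0_params, fisher):
    """EWC term of Eq. 4: Fisher-weighted squared distance to the pretrained weights."""
    return sum((fisher[n] * (p - psi0_params[n]) ** 2).sum()
               for n, p in psi.named_parameters())

# psi0_params = {n: p.detach().clone() for n, p in psi0.named_parameters()}
# Total adaptation loss per Eq. 4, with lam = 1e4:
# loss = compactness_loss + (lam / 2) * ewc_penalty(psi, psi0_params, fisher)
```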
Anomaly scoring: Given strong features and appropriate adaptation, our transformed data typically follows the standard anomaly detection assumption, i.e. high density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. Our method performs better with strong non-parametric anomaly scoring methods. We evaluate several anomaly scoring methods: i) Euclidean distance to the mean of the training features ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images iii) computing the K-means of the training set features, and computing the distance from the target sample features to the nearest mean. See Sec. 4.2.3 for comparison results.

Outlier Exposure: An extension of the typical image anomaly detection task (Hendrycks et al., 2018) assumes the existence of an auxiliary dataset of images DOE, which are more similar to the anomalies than normal data. In case such information is available, we simply train a linear classification layer w together with the features ψ under a logistic regression loss (Eq. 5). As before, ψ is initialized with the weights from ψ0. After training ψ and w, we use w · ψ(x) as the anomaly score. Results and critical analysis of this setting are presented in Sec. 4.2.

$$\mathcal{L}_{OE} = \sum_{x \in D_{train}} \log(\sigma(1 - w \cdot \psi(x))) + \sum_{x \in D_{OE}} \log(\sigma(w \cdot \psi(x))) \quad (5)$$
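The outlier-exposure objective of Eq. 5 amounts to binary logistic regression: normal images are pushed to low scores w · ψ(x), OE images to high scores. A minimal sketch under that reading follows; we write it in the standard BCE-with-logits form, so the exact offset inside the sigmoid and all names are our simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def oe_loss(psi, w, x_normal, x_oe):
    """Binary logistic loss in the spirit of Eq. 5; the anomaly score is w . psi(x)."""
    s_normal = psi(x_normal) @ w      # logits for normal training images
    s_oe = psi(x_oe) @ w              # logits for outlier-exposure images
    logits = torch.cat([s_normal, s_oe])
    labels = torch.cat([torch.zeros_like(s_normal), torch.ones_like(s_oe)])
    return F.binary_cross_entropy_with_logits(logits, labels)
```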
4 IMAGE ANOMALY DETECTION

4.1 HIGH-LEVEL RESULTS

In this section, we present high-level results of our method PANDA-EWC (PANDA-SES can be found in Sec. 4.2) compared to the state-of-the-art: One-class SVM (Scholkopf et al., 2000), DeepSVDD (Ruff et al., 2018), and Multi-Head RotNet (Hendrycks et al., 2019b). We also compare our method to raw (unadapted) pretrained features. As Joint Optimization requires extra data, we did not add it to this table, but compare and outperform it in Tab. 4. We compare our PANDA-OE to the OE baseline in Hendrycks et al. (2019b) on CIFAR10, as the code or results for other classes were unavailable. To investigate performance in domains significantly different from the dataset used to pretrain the features, we evaluated our method across a large range of datasets: standard datasets (CIFAR10/100, CatsVsDogs), a black-and-white dataset (Fashion MNIST), small fine-grained datasets (Birds200/Oxford Flowers), a medical dataset (WBC), very fine-grained anomalies (MVTec), and aerial images (DIOR). A detailed description of the datasets is found in the appendix, Sec. C, and representative frames are shown in Fig. 3. For outlier exposure (OE), we followed Hendrycks et al. (2018) and used 50k randomly sampled images from 80M Tiny Images. Implementation details are reported in Appendix D.

The main results are: i) pre-trained features achieve significantly better results than self-supervised features on all datasets. ii) Feature adaptation significantly improves the performance on larger datasets. iii) Outlier exposure can further improve performance in the case where the given outliers are more similar to the anomalies than the normal data. OE achieves near-perfect performance on CIFAR10/100 but hurts performance for Fashion MNIST/CatsVsDogs, which are less similar to the 80M Tiny Images dataset. A detailed analysis of the reason for better performance for each of these methods, and an examination of its appropriateness, will be presented in Sec. 4.2.

4.2 ANALYSIS AND FURTHER EVALUATION

In this section we analyze the factors of variation in performance between different methods:

4.2.1 AN ANALYSIS OF THE CHOICE OF FEATURE REPRESENTATION

A comparison of self-supervised and pre-trained features: In Tab. 1 and Tab. 2, we present a comparison between methods that use self-supervised and pre-trained feature representations. We see that the autoencoder used by DeepSVDD is particularly poor. The results of the MHRotNet as a feature extractor are better, but still underperform PANDA methods (see App. A for more details). The performance of the raw deep ResNet features without adaptation significantly outperforms all methods, including on Fashion MNIST and DIOR, which have significant differences from the ImageNet dataset. We can therefore conclude that ImageNet-pretrained features typically have significant advantages over self-supervised features. Tab. 2 shows that self-supervised methods do not perform well on small datasets, as such methods require large numbers of normal samples in order to learn strong features. On the other hand, ImageNet-pretrained features obtain very strong results.

Do pretrained features generalize to anomaly detection on domains far from the pretraining dataset? The results in Tab. 2 on FMNIST, DIOR, WBC and MVTec suggest that they do. We evaluated the ImageNet-pretrained features on datasets of various sizes, domains, resolutions and symmetries. On all those datasets pretrained features outperformed the SOTA. These datasets include significantly different objects from those of ImageNet, but also fine-grained intra-object anomalies, and represent a spectrum of data types: aerial images, microscopy, industrial images. This shows that one of the main concerns of using pre-trained features, namely generalizing to distant domains, is not an issue in practice.

On the different supervision settings for one-class anomaly detection: Anomaly detection methods employ different levels of supervision. Within the one-class classification task, one may use outlier exposure (OE) - an external dataset (e.g. ImageNet), pretrained features, or no external supervision at all. The most extensive supervision is used by OE, which requires a large external dataset at training time, and performs well only when such a dataset is from a similar domain to the anomalies (see Tab. 1). In cases where the dataset used for OE has significantly different properties, the network may not learn to distinguish between normal and anomalous data, as the normal and anomalous data may have more in common than with the OE dataset. E.g., as both the normal and anomalous classes of Fashion MNIST are greyscale, OE using 80M Tiny Images will not be helpful. Pretrained features further improve OE in cases where it is suitable, e.g. CIFAR10. Pretraining, like Outlier Exposure, is also achieved through an external labelled dataset, but differently from OE, the external dataset is only required once - at the pretraining stage - and is not used again. Additionally, the same features are applicable to image domains very different from that of the pretraining dataset (e.g. Fashion MNIST - greyscale images, DIOR - aerial images, WBC - medical images, MVTec - industrial images). Self-supervised feature learning requires no external dataset at all, which can potentially be an advantage. While there might be image anomaly detection tasks where ImageNet-pretrained weights are not applicable, we saw no evidence for such cases after examining a broad spectrum of domains and datasets (Tab. 8). This indicates that the extra supervision of the ImageNet-pretrained weights comes at virtually no cost.

Can pretrained features boost the performance of RotNet-based methods? We did not find evidence that pretrained features improve the performance of RotNet-based AD methods such as Hendrycks et al. (2019b) (CIFAR10: 90.1% vs. 86.6% without and with pretraining). As can be seen in Tab. 3, pretrained features improve the auxiliary task performance not only on the normal data, but also on the anomalous samples. As such methods rely on a generalization gap between normal and anomalous samples, deep features actually reduce this gap, as a solution to the auxiliary task becomes feasible for both types of images. For a more detailed analysis see Appendix A.

4.2.2 FEATURE ADAPTATION METHODS

Benefits of feature adaptation: Feature adaptation aims to make the distribution of the normal samples more compact w.r.t. the anomalous samples. Our approach of finetuning pretrained features for compactness under EWC regularization significantly improves the performance over "raw" pretrained features (see Tab. 1).
While the distance from the center of the normal train samples is reduced for both normal and anomalous test samples (see Fig. 1), the average distance from the center of anomalous test samples is typically larger than that of normal samples, in relative terms. This makes anomalies easier to detect by standard classifiers such as kNN. While PANDA-EWC may train for more than 7.8k minibatches without catastrophic collapse on CIFAR10, the performance of training without regularization usually peaks higher but collapses earlier. We therefore set our constant early stopping point such that the network trains for 2.3k minibatches on all datasets for comparison. Our PANDA-SES method usually achieves an anomaly score not far from the unregularized early stopping peak performance, but is most important in cases where unregularized training fails completely.

A comparison of PANDA against other adaptation methods: In Tab. 4 we compare PANDA against (i) JO (Perera & Patel, 2019) - co-training compactness with ImageNet classification, which requires ImageNet data at training time. We can see that PANDA-EWC always outperforms JO feature adaptation. (ii) PANDA early stopping (ImageNet pretraining + adaptation, with early stopping after a constant number of iterations) generally has higher performance than PANDA-EWC, but has severe collapse issues on some classes. (iii) PANDA-SES is similar to early stopping, but PANDA-SES does not collapse as badly on the CatsVsDogs dataset. We note that weighting the changes in all parameters equally ($\sum_i (\theta_i - \theta_i^*)^2$) achieves similar results to early stopping.

Which are the best layers to finetune? Fine-tuning all the layers is prone to feature collapse, even with continual learning (see Tab. 5). Finetuning Blocks 3 & 4, or 2, 3 & 4, results in similar performance. Finetuning only Block 4 results in very similar performance to linear whitening of the features according to the train samples (94.6 with whitening vs. 94.8 with finetuning only the last block). A similar effect can be seen in the original DeepSVDD architecture (see also Tab. 7, Appendix B). We therefore recommend finetuning Blocks 3 & 4.

DeepSVDD architectural changes: DeepSVDD (Ruff et al., 2018) proposes various architectural changes, such as removing the bias parameters from the network, to prevent collapse to trivial features. We found empirically that the results obtained by the constrained architecture were about the same as those achieved with simple whitening of the data (64.8% vs. 64.6%, see Tab. 7). We also ablated DeepSVDD: (re-)adding the biases into its LeNet architecture did not deteriorate its anomaly detection performance. Architectural modifications are not the focus of this work; further investigation into architectures less prone to feature collapse is left for future work.

4.2.3 ANOMALY SCORING FUNCTIONS

Does kNN improve over distance to the center? kNN achieves an improvement of around 2% on average w.r.t. distance to the center (CIFAR10: 94.2% vs 96.2%).

Can we improve over the linear complexity of kNN? A naive implementation of kNN has linear runtime complexity in the number of training samples. K-means with a small number of clusters gives a ∼1% decrease (CIFAR10: 94.9% vs 96.2%, with 10 means). We note that even for very large datasets, or many thousands of means, both kNN and K-means can run faster than real-time.

5 CONCLUSION AND OUTLOOK

We proposed an anomaly detection method that adapts pretrained features and mitigates or avoids catastrophic collapse.
We showed that our results significantly outperform current methods while addressing their limitations. We analysed the reasons for the strong performance of our method and related popular methods to the different stages of our framework. The main limitation of this work is the requirement for strong pretrained feature extractors. Much work has been done on transferable image and text features, and it is likely that current extractors can be effective for obtaining features for time series and audio as well. Generic feature extractors are not currently available for tabular data; their development is an exciting direction for future work.

A PRETRAINED FEATURES, ROTNET AUXILIARY TASKS AND GENERALIZATION

Let us take a closer look at the application of RotNet-based methods for image anomaly detection. We will venture to understand why initializing RotNets with pretrained features may actually impair their anomaly detection performance. In such cases, a network for rotation classification is trained on normal samples, and used to classify the rotation (and translations) applied to a test rotated image. To score an anomaly, the image is deemed anomalous if its rotation prediction accuracy is worse than that of a typical normal image.

To correctly classify a rotation of a new image, the network may use traits within the image that are associated with its correct alignment. Such features may be associated with the normal class, or with the entire dataset (common to both the normal and anomalous classes). For illustrative purposes, let us consider a normal class with images containing a deer, and an anomalous class with images containing a horse. The horns of the deer may indicate the "upward" direction, but so does the position of the sky in the image, which is often sufficient to classify the rotation correctly.

As shown in Tab. 3, when initialized with pretrained features, the RotNet achieves very good performance on the auxiliary tasks, both within and outside the normal class, indicating the use of more general traits that are common to more classes. Although at first sight it may appear that the improved auxiliary task performance should improve the performance on anomaly detection, this is in fact not the case! The reason is that features that generalize better achieve better performance on the auxiliary task for anomalous data. The gap between the performance on the auxiliary tasks will therefore be smaller than with randomly-initialized networks - leading to degraded anomaly detection performance. For example, consider the illustrative example described above. A RotNet that "overfits" to work only on the normal deer class, relying on the horns of the deer, would classify rotations more accurately on deer images than horse images (as its main feature is horns). On the other hand, a RotNet that also uses more general traits can use the sky position for rotation angle prediction. In this case, it will achieve higher accuracy for both deer and horse images. The gap in performance is likely to be reduced, leading to lower anomaly detection success.

The above argument can be formulated using mutual information: in cases where the additional traits unique to the class do not add much information regarding the correct rotation over the general features common to many classes, the class will have limited mutual information with the predicted rotation as well (conditioned on the information already given by traits common to the entire dataset).
When the conditional mutual information between the predicted rotation and the class traits decreases, we expect the predicted rotation to be less discriminative for anomaly detection, as we indeed see in Tab. 6.

It is interesting to note that using RotNet features for our transfer learning approach achieves inferior results to both MHRot and our method. Only through an ensemble of all rotations, as MHRot does, does it achieve strong performance comparable to that of MHRot. MHRot achieved 89.7% in our re-implementation. Using the MHRot features as ψ0, we compute the kNN distance of the unadapted features between test set images and train set images transformed by the same transformation. Ensembling the 36 transformations, using the average kNN distance, yields 88.7%. Another metric is computing the average kNN distance between test data transformed under a specific transformation and the training set transformed by another transformation. Using the average same-transformation kNN distance minus the average different-transformation kNN distance achieves 89.8% - a little better than the RotNet performance.

B FEATURE ADAPTATION, DEEPSVDD AND FEATURE COLLAPSE

To understand whether DeepSVDD gains its significant performance from its pretrained features or from its feature adaptation, we tried to replace its feature adaptation by closed-form linear data whitening. For both pretrained features and anomaly scoring, we used the original DeepSVDD code (Ruff et al., 2018). We can see that a linear method such as data whitening achieves comparable results (Tab. 7). We believe that large architectures are required for meaningful feature adaptation.

C DETAILED DESCRIPTION OF DATASETS

Standard datasets: We evaluate our method on a set of commonly used datasets: CIFAR10 (Krizhevsky et al., 2009): Consists of RGB images of 10 object classes. Fashion MNIST (Xiao et al., 2017): Consists of grayscale images of 10 fashion item classes. CIFAR100 (Krizhevsky et al., 2009): We use the coarse-grained version that consists of 20 classes. DogsVsCats: High-resolution color images of two classes: cats and dogs. The data were extracted from the ASIRRA dataset (Elson et al., 2007); we split each class into the first 10,000 images as train and the last 2,500 as test.

Small datasets: To further extend our results, we compared the methods on a number of small datasets from different domains: 102 Category Flowers & Caltech-UCSD Birds 200 (Nilsback & Zisserman, 2008; Wah et al., 2011): For each of those datasets we evaluated the methods using each of the first 20 classes as normal, and using the entire test set for evaluation. MVTec (Bergmann et al., 2019): This dataset contains 15 different industrial products, with normal images of proper products for training and 2-9 types of manufacturing errors as anomalies. The anomalies in MVTec are in-class, i.e. the anomalous images come from the same class as normal images, with subtle variations. As can be seen in the results in Tab. 2, self-supervised methods performed quite poorly on these datasets as they require many images to learn strong features. Simply using pretrained features was sufficient to obtain high accuracy (Tab. 2).

Symmetric datasets: We evaluated our method on datasets that contain symmetries, such as images that have no preferred angle (microscopy, aerial images; see Fig. 3): WBC (Zheng et al., 2018): We used the 4 big classes in "Dataset 1" of microscopy images of white blood cells, and an 80%/20% train-test split.
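Appendix B above replaces DeepSVDD's learned adaptation with closed-form whitening. A minimal sketch of such feature whitening follows; the paper does not specify the exact variant, so the ZCA form, the epsilon, and all names here are our assumptions.

```python
import torch

def fit_whitening(train_feats, eps=1e-5):
    """Closed-form (ZCA) whitening fitted on the normal train features."""
    mu = train_feats.mean(dim=0)
    x = train_feats - mu
    cov = x.T @ x / (x.shape[0] - 1)
    evals, evecs = torch.linalg.eigh(cov)  # symmetric eigendecomposition
    w = evecs @ torch.diag((evals + eps).rsqrt()) @ evecs.T
    return mu, w

def whiten(feats, mu, w):
    # Whitened features can then be scored with distance-to-center or kNN.
    return (feats - mu) @ w
```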
1. What is the focus of the paper, and what are the proposed contributions? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its practical application? 3. Are there any concerns or questions regarding the effectiveness or novelty of the proposed method? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Review
This paper proposed an algorithm for anomaly detection. The core of the approach includes two parts: one is sample-wise early stopping and the other is a new type of loss. Although the proposed algorithm achieves better performance on some datasets such as MNIST and CIFAR10 in the experiments, the paper does not provide an insightful understanding of where the performance gain comes from. Further, the proposed method is rather a practical assembly of various existing methods that improves the results. For these reasons, I do not think this paper is ready to publish.
ICLR
Title PANDA - Adapting Pretrained Features for Anomaly Detection Abstract Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that dynamically learns the stopping iteration ii) elastic regularization inspired by continual learning. In addition, we conduct a thorough investigation of Imagenet-pretrained features for one-class anomaly detection. Our method, PANDA, outperforms the state-of-the-art in the one-class and outlier exposure settings (CIFAR10: 96.2% vs. 90.1% and 98.9% vs. 95.6%) . 1 INTRODUCTION Detecting anomalous patterns in data is of key importance in science and industry. In the computational anomaly detection task, the learner observes a set of training examples. The learner is then tasked to classify novel test samples as normal or anomalous. There are multiple anomaly detection settings investigated in the literature, corresponding to different training conditions. In this work, we deal with two settings: i) when only normal images are used for training ii) Outlier Exposure (OE) where an external dataset simulating the anomalies is available. In recent years, deep learning methods have been introduced for anomaly detection, typically extending classical methods with deep neural networks. Different auxiliary tasks (e.g. autoencoders or rotation classification) are used to learn representations of the data, while a great variety of anomaly criteria are then used to determine if a given sample is normal or anomalous. An important issue for current methods is the reliance on limited normal training data for representation learning, which limits the quality of learned representations. A solution, that we will investigate in this work, is to pre-train features on a large external dataset, and use the features for anomaly detection. As there is likely to be some mismatch between the external dataset and the task of anomaly detection on the target distribution, feature adaptation is an attractive option. Unfortunately, feature adaptation for anomaly detection often suffers from catastrophic collapse - a form of deterioration of the pre-trained features, where all the samples, including anomalous, are mapped to the same point. DeepSVDD (Ruff et al., 2018) proposed to overcome collapse by removing biases from the model architecture, but this restricts network expressively and limits the pre-trained models that can be borrowed off-the-shelf. Perera & Patel (2019) proposed to jointly train anomaly detection with the original task which has several limitations and achieves only limited adaptation success. We propose two techniques to overcome catastrophic collapse: i) an adaptive early stopping method that selects the stopping iteration per-sample, using a novel generalization criterion ii) an elastic regularization, motivated by continual learning, that postpones the collapse. We also provide an extensive evaluation of Imagenet-pretrained features on one-class anomaly detection. Thorough experiments demonstrate that we outperform the state-of-the-art by a wide margin: e.g. CIFAR10 results: 96.2% vs. 
90.1% without outlier exposure and 98.9% vs. 95.6% with outlier exposure. We present several insightful critical analyses: i) We show that pre-trained features strictly dominate current self-supervised RotNet-based feature learning methods. We discuss the relative merits of each paradigm and conclude that for most practical purposes, using pre-trained features is preferable. ii) We analyse the results of the popular method, DeepSVDD. We discover that the feature adaptation of its current architecture, which is designed to prevent collapse, does not improve over simple data whitening. iii) We show that collapse can be avoided using early stopping, and suggest an appropriate unsupervised criterion. We also show it can be mitigated using continual learning. 1.1 RELATED WORK Classical anomaly detection: The main categories of classical anomaly detection methods are: i) reconstruction-based: compress the training data using a bottleneck, and use a reconstruction loss as an anomaly criterion (e.g. (Candès et al., 2011; Jolliffe, 2011), K nearest neighbors (Eskin et al., 2002) and K-means (Hartigan & Wong, 1979)), ii) probabilistic: modeling the probability density function and labeling unlikely sampled as anomalous (e.g. Ensembles of Gaussian Mixture Models (Glodek et al., 2013), kernel density estimate (Latecki et al., 2007)) iii) one-class classification (OCC): finding a separating manifold between normal data and the rest of input space (e.g. Oneclass SVM (Scholkopf et al., 2000)). Deep learning methods: The introduction of deep learning has affected image anomaly detection in two ways: extension of classical methods with deep representations and novel self-supervised deep methods. Reconstruction-based methods have been enhanced by learning deep autoencoder-based bottlenecks (D’Oro et al., 2019) which can provide better models of image data. Deep methods extended classical methods by creating a better representations of the data for parametric assumptions about probabilities, a combination of reconstruction and probabilistic methods (such as DAGMM (Zong et al., 2018)), or in a combination with an OCC (Ruff et al., 2018). Novel deep methods have also been proposed for anomaly detection including GAN-based methods (Zong et al., 2018). Another set of novel deep methods use auxiliary self-supervised learning for anomaly detection. The seminal work by Golan & El-Yaniv (2018) was later extended by Hendrycks et al. (2019b) and Bergman & Hoshen (2020). Transferring pretrained representations: Learning deep features requires extensive datasets, preferably with labels. An attractive property of deep neural networks, is that representations learned on very extensive datasets, can be transferred to data-poor tasks. Specifically deep neural representations trained on the ImageNet dataset have been shown by Huh et al. (2016) to significantly boost performance on other datasets that are only vaguely related to some of the ImageNet classes. This can be performed with and without finetuning. Although much recent progress has been performed on self-supervised feature learning (Gidaris et al., 2018; Chen et al., 2020), such methods are typically outperformed by transferred pretrained features. Transferring ImageNet pre-trained features for out-of-distribution detection has been proposed by Hendrycks et al. (2019a). Similar pre-training has been proposed for one-class classification has been proposed by Perera & Patel (2019), however they require joint optimization with the original task. 
2 BACKGROUND: FEATURE ADAPTATION FOR ANOMALY DETECTION 2.1 A THREE-STAGE FRAMEWORK We present our general framework in which we examine several adaptation-based anomaly detection methods, including our method. Let us assume that we are given a set Dtrain of normal training samples: x1, x2..xN . The framework consists of three steps: Feature extractor pretraining: A pre-trained feature extractor ψ0 is typically learned using selfsupervised learning (auto-encoding, rotation or jigsaw prediction). We denote the loss function of the auxiliary task Lpretrain. The auxiliary task can be learned either on the training set Dtrain or on an external dataset Dpretrain (such as ImageNet). In the latter case, the pretrained extractor can be obtained off-the-shelf. We will investigate and analyse the merits of each choice in Sec. 4.2. Feature adaptation: Features trained on auxiliary tasks or datasets may require adaptation before being used for anomaly scoring on the target data. This can be seen as a finetuning stage of the pre-trained features on the target training data. We denote the feature extractor after adaptation ψ. Anomaly scoring: Having adapted the features for anomaly detection, we extract the features ψ(x1), ψ(x2)..ψ(xN ) of the training set samples, we proceed to learn a scoring function, which describes how anomalous a sample is. Typically, the scoring function seeks to measure the density of normal data around the test sample ψ(x) (either by direct estimation or via some auxiliary task) and assign a high anomaly score to low density regions. 2.2 EXISTING FEATURE-ADAPTATION METHODS In this section, we review two seminal methods that use feature adaptation for anomaly detection: DeepSVDD: Ruff et al. (2018) suggest to first train an autoencoder E on the normal-only train images. The encoder is then used as the initial feature extractor ψ0(x) = E(x). As the features of the encoder are not specifically adapted to anomaly detection, DeepSVDD adapts ψ on the training data. The adaptation takes place by minimizing the compactness loss: Lcompact = ∑ x∈Dtrain ‖ψ(x)− c‖2 (1) Where c is a constant vector, typically the average of ψ0(x) on the training set. However, the authors were concerned of the trivial solution ψ = c, and suggested architectural restrictions to mitigate it, most importantly removing the biases from all layers. We empirically show that the effect of adaptation of the features in DeepSVDD does not outperform simple feature whitening (see Sec. 4.2.2). Joint optimization (JO): Perera & Patel (2019) proposed to use a deep feature extractor trained for object classification on the ImageNet dataset. Due to fear of ”learning a trivial solution due to the absence of a penalty for miss-classification”, the method do not adapt by finetuning on the compactness loss only. Instead, they relaxed the task setting, by assuming that a number (∼ 50k) of labelled original ImageNet images, Dpretrain, are still available at adaptation time. They proposed to train the features ψ under the compactness loss jointly with the original ImageNet classification linear layer W and its classification loss, here the CE loss with the true label `pretrain(p, y) = − log(py): LJoint = ∑ (x,y)∈Dpretrain `pretrain(softmax(Wψ(x)), y) + α ∑ x∈Dtrain ‖ψ(x)− c‖2 (2) Where W is the final linear classification layer and α is a hyper-parameter weighting the two losses. 
We note that the method has two main weaknesses: i) it requires retaining a significant number of the original training images which can be storage intensive ii) jointly training the two tasks may reduce the anomaly detection task accuracy, which is the only task of interest in this context. Our proposed method, PANDA, is able to sidestep these issues. 3 PANDA: FEATURE ADAPTATION FOR ANOMALY DETECTION We present PANDA (Pre-trained Anomaly Detection Adaptation), a new method for anomaly detection in images. The core of our method lies in adapting general pre-trained features to anomaly detection on the target distribution. Pre-trained feature extractor: Our method is agnostic to the specific pretrained feature extractor. We investigated different choices of the initial pre-trained feature extractor ψ0 and found that ImageNet pretrained features achieve better results. The assumption of the availability of the ImageNet trained feature extractor and its merits will be discussed at length in Sec. 4.2. Feature Adaptation: Similarly to SVDD and Joint Optimization, we also use the compactness loss (Eq. 1) to adapt the general pre-trained features to the task of anomaly detection on the target distribution. Instead of constraining the architecture or introducing external data into the adaptation procedure we tackle catastrophic collapse directly. The main issue is that the optimal solution of the compactness loss can result in ”collapse”, where all possible input values are mapped to the same point (ψ(x) = c, ∀x). Learning such features will not be useful for anomaly detection, as both normal and anomalous images will be mapped to the same output, preventing separability. The issue is broader than the trivial ”collapsed” solution after full convergence, but rather the more general issue of feature deterioration, where the original good properties of the pretrained features are lost. Even a non-trivial solution might not require the full discriminative ability of the original features which are none-the-less important for anomaly detection. To avoid this collapse, we suggest two options: (i) finetuning the pretrained extractor with compactness loss (Eq.1) and using sample-wise early stopping (ii) when collapse happens prematurely, before any significant adaptation happens, we suggest mitigating it using a Continual Learninginspired adaptive regularization. Sample-wise early stopping (PANDA-SES): Early stopping is one of the simplest methods used to regularize neural network. While stopping the training process after constant number of iterations (we use 2.3k minibatches) helps to control the collapse of the original features in most examined datasets (Sec. 4.2), in other cases, collapse occurs earlier in the training process - the best number of early stopping iterations may vary between datasets. We thus propose ”samplewise early stopping” (SES). The intuition for the method can be obtained from Fig. 1. We can see that anomaly detection accuracy is correlated to the ratio between the average compactness loss of test set anomalies and the average compactness loss of training set normal images. We thus propose to save checkpoints of our network at fixed intervals during the training process - corresponding to different early stopping iterations (ψ1, ψ2..ψT ), for each network ψt we compute the average loss on the training set images st. During inference, we score a target image x using each model ψt(x) = ft, and normalize the score by the relevant average score st. 
We set the maximal normalized score, as the anomaly score of this sample, as this roughly estimates the model that achieves the best separation between normal and anomalous samples. Note that each sample is scored using only its features ft, and the normal train set average score st, without seeing the labels of any other test set samples. Continual Learning (PANDA-EWC): We propose a new solution for overcoming premature feature collapse that draws inspiration from the field of continual learning. The task of continual learning tackles learning new tasks without forgetting the previously learned ones. We note however that our task is not identical to standard continual learning as: i) we deal with the one-class classification setting whereas continual-learning typically deals with multi-class classification ii) we aim to avoid forgetting the expressivity of the features but do not particularly care if the actual classification performance on the old task is degraded. A simple solution for preventing feature collapse is by regularization of the change in value of the weights of the feature extractor ψ from those of the pre-trained extractor ψ0. However, this solution is lacking as the features are more sensitive to some weights than others and this can be ”exploited” by the adaptation method. Following ideas from continual learning, we use elastic weight consolidation (EWC) (Kirkpatrick et al., 2017). Using a number of mini-batches (we use 100) of pretraining on the auxiliary task, we compute the diagonal of the Fisher information matrix F for all weight parameters of the network. Note that this only needs to happen once at the end of the pretraining stage and does not need to be repeated. The value of the Fisher matrix for diagonal element θ′ is given by: Fθ′ = E(x,y)∈Dpretrain [( ∂ ∂θ Lpretrain(x, y); θ ′ )2 |θ ] (3) We follow (Kirkpatrick et al., 2017) in using the diagonal of the Fisher information matrix Fθi , to weight the Euclidean distance of the change of each network parameter θi ∈ ψ0 and its corresponding parameter θ∗i ∈ ψ. This weighted distance can be interpreted as a measure of the curvature of the loss landscape as function of the parameters - larger values imply high curvature, inelastic weights. We use this regularization in combination with the compactness loss, the losses are weighted by the factor λ, which is a hyperparameter of the method (we always use λ = 104): Lθ = Lcompact(θ) + λ 2 · ∑ i Fθi(θi − θ∗i )2 (4) Network ψ is initialized with the parameters of the pretrained extractor ψ0 and trained with SGD. Anomaly scoring: Given strong features and appropriate adaptation, our transformed data typically follows the standard anomaly detection assumption i.e. high-density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. Our method performs better with strong non-parametric anomaly scoring methods. We evaluate several anomaly scoring methods: i) Euclidean Distance to the mean of the training features ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images iii) Computing the K-means of the training set features, and computing the distance between the target sample features to the nearest mean. See Sec. 4.2.3 for comparison results. Outlier Exposure: An extension of the typical image anomaly detection task (Hendrycks et al., 2018), assumes the existence of an auxiliary dataset of images DOE , which are more similar to the anomalies than normal data. 
Anomaly scoring: Given strong features and appropriate adaptation, our transformed data typically follow the standard anomaly detection assumption, i.e., high density in regions of normal data. As in classical anomaly detection, scoring can be done by density estimation. Our method performs better with strong non-parametric anomaly scoring methods. We evaluate several anomaly scoring methods: i) Euclidean distance to the mean of the training features; ii) the K nearest-neighbor distance between the target (test set) features and the features of the training set images; iii) computing the K-means of the training set features, and computing the distance between the target sample features and the nearest mean. See Sec. 4.2.3 for comparison results.

Outlier exposure: An extension of the typical image anomaly detection task (Hendrycks et al., 2018) assumes the existence of an auxiliary dataset of images D_OE which are more similar to the anomalies than normal data. In case such information is available, we simply train a linear classification layer w together with the features ψ under a logistic regression loss (Eq. 5). As before, ψ is initialized with the weights from ψ0. After training ψ and w, we use w · ψ(x) as the anomaly score. Results and critical analysis of this setting are presented in Sec. 4.2.

L_{OE} = \sum_{x \in D_{\text{train}}} \log\left(\sigma(1 - w \cdot \psi(x))\right) + \sum_{x \in D_{OE}} \log\left(\sigma(w \cdot \psi(x))\right) \quad (5)
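The outlier-exposure objective amounts to logistic regression on the linear score w · ψ(x). The sketch below is ours and fixes a sign convention (normal as class 0, OE as class 1) consistent with using w · ψ(x) as the anomaly score; Eq. (5) above is the authors' formulation.

```python
import torch
import torch.nn.functional as F

def oe_loss(w: torch.Tensor,
            feats_train: torch.Tensor,
            feats_oe: torch.Tensor) -> torch.Tensor:
    """Logistic-regression loss in the spirit of Eq. (5) - a sketch.

    w: linear classifier weights, shape (d,).
    feats_train: psi(x) for normal training images, shape (n_train, d).
    feats_oe: psi(x) for outlier-exposure images, shape (n_oe, d).
    """
    logits_train = feats_train @ w  # should be low on normal data
    logits_oe = feats_oe @ w        # should be high on OE data
    loss = F.binary_cross_entropy_with_logits(
        logits_train, torch.zeros_like(logits_train))
    loss = loss + F.binary_cross_entropy_with_logits(
        logits_oe, torch.ones_like(logits_oe))
    return loss
```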
4 IMAGE ANOMALY DETECTION

4.1 HIGH-LEVEL RESULTS

In this section, we present high-level results of our method PANDA-EWC (PANDA-SES can be found in Sec. 4.2) compared to the state of the art: one-class SVM (Scholkopf et al., 2000), DeepSVDD (Ruff et al., 2018), and Multi-Head RotNet (Hendrycks et al., 2019b). We also compare our method to raw (unadapted) pretrained features. As Joint Optimization requires extra data, we did not add it to this table, but compare to and outperform it in Tab. 4. We compare our PANDA-OE to the OE baseline in Hendrycks et al. (2019b) on CIFAR10, as the code or results for other classes were unavailable. To investigate performance in domains significantly different from the dataset used to pretrain the features, we evaluated our method across a large range of datasets: standard datasets (CIFAR10/100, CatsVsDogs), a black-and-white dataset (Fashion MNIST), small fine-grained datasets (Birds200/Oxford Flowers), a medical dataset (WBC), very fine-grained anomalies (MVTec), and aerial images (DIOR). A detailed description of the datasets is found in the appendix, Sec. C, and representative frames are shown in Fig. 3. For outlier exposure (OE), we followed Hendrycks et al. (2018) and used 50k randomly sampled images from 80M Tiny Images. Implementation details are reported in Appendix D. The main results are: i) pre-trained features achieve significantly better results than self-supervised features on all datasets; ii) feature adaptation significantly improves the performance on larger datasets; iii) outlier exposure can further improve performance in the case where the given outliers are more similar to the anomalies than the normal data. OE achieves near-perfect performance on CIFAR10/100 but hurts performance for Fashion MNIST/CatsVsDogs, which are less similar to the 80M Tiny Images dataset. A detailed analysis of the reasons for the better performance of each of these methods, and an examination of their appropriateness, will be presented in Sec. 4.2.

4.2 ANALYSIS AND FURTHER EVALUATION

In this section we analyze the factors of variation in performance between different methods:

4.2.1 AN ANALYSIS OF THE CHOICE OF FEATURE REPRESENTATION

A comparison of self-supervised and pre-trained features: In Tab. 1 and Tab. 2, we present a comparison between methods that use self-supervised and pre-trained feature representations. We see that the autoencoder used by DeepSVDD is particularly poor. The results of the MHRotNet as a feature extractor are better, but still underperform PANDA methods (see App. A for more details). The performance of the raw deep ResNet features without adaptation significantly outperforms all methods, including on Fashion MNIST and DIOR, which have significant differences from the ImageNet dataset. We can therefore conclude that ImageNet-pretrained features typically have significant advantages over self-supervised features. Tab. 2 shows that self-supervised methods do not perform well on small datasets, as such methods require large numbers of normal samples in order to learn strong features. On the other hand, ImageNet-pretrained features obtain very strong results.

Do pretrained features generalize to anomaly detection on domains far from the pretraining dataset? The results in Tab. 2 on FMNIST, DIOR, WBC, and MVTec suggest that they do. We evaluated the ImageNet-pretrained features on datasets of various sizes, domains, resolutions, and symmetries. On all those datasets pretrained features outperformed the SOTA. These datasets include significantly different objects from those of ImageNet, but also fine-grained intra-object anomalies, and represent a spectrum of data types: aerial images, microscopy, industrial images. This shows that one of the main concerns of using pre-trained features, namely generalizing to distant domains, is not an issue in practice.

On the different supervision settings for one-class anomaly detection: Anomaly detection methods employ different levels of supervision. Within the one-class classification task, one may use outlier exposure (OE) - an external dataset (e.g. ImageNet), pretrained features, or no external supervision at all. The most extensive supervision is used by OE, which requires a large external dataset at training time and performs well only when such a dataset is from a similar domain to the anomalies (see Tab. 1). In cases where the dataset used for OE has significantly different properties, the network may not learn to distinguish between normal and anomalous data, as the normal and anomalous data may have more in common than with the OE dataset. E.g., both normal and anomalous classes of Fashion MNIST are greyscale, so OE using 80M Tiny Images will not be helpful. Pretrained features further improve OE in cases where it is suitable, e.g. CIFAR10. Pretraining, like outlier exposure, is also achieved through an external labelled dataset, but differently from OE, the external dataset is only required once - at the pretraining stage - and is not used again. Additionally, the same features are applicable to image domains very different from that of the pretraining dataset (e.g. Fashion MNIST - greyscale images, DIOR - aerial images, WBC - medical images, MVTec - industrial images). Self-supervised feature learning requires no external dataset at all, which can potentially be an advantage. While there might be image anomaly detection tasks where ImageNet-pretrained weights are not applicable, we saw no evidence for such cases after examining a broad spectrum of domains and datasets (Tab. 8). This indicates that the extra supervision of the ImageNet-pretrained weights comes at virtually no cost.

Can pretrained features boost the performance of RotNet-based methods? We did not find evidence that pretrained features improve the performance of RotNet-based AD methods such as Hendrycks et al. (2019b) (CIFAR10: 90.1% vs. 86.6% without and with pretraining). As can be seen in Tab. 3, pretrained features improve the auxiliary task performance on the normal data, but also on the anomalous samples. As such methods rely on a generalization gap between normal and anomalous samples, deep features actually reduce this gap, as a solution to the auxiliary task becomes feasible for both types of images. For a more detailed analysis see Appendix A.

4.2.2 FEATURE ADAPTATION METHODS

Benefits of feature adaptation: Feature adaptation aims to make the distribution of the normal samples more compact w.r.t. the anomalous samples. Our approach of finetuning pretrained features for compactness under EWC regularization significantly improves the performance over "raw" pretrained features (see Tab. 1).
While the distance from the center of the normal train samples is reduced for both normal and anomalous test samples (see Fig. 1), the average distance from the center of anomalous test samples is typically larger than that of normal samples in relative terms. This makes anomalies easier to detect by standard classifiers such as kNN. While PANDA-EWC may train for more than 7.8k minibatches without catastrophic collapse on CIFAR10, the performance of training without regularization usually peaks higher but collapses earlier. We therefore set our constant early stopping epoch such that the network trains for 2.3k minibatches on all datasets for comparison. Our PANDA-SES method usually achieves an anomaly score not far from the unregularized early-stopping peak performance, but is most important in cases where unregularized training fails completely.

A comparison of PANDA against other adaptation methods: In Tab. 4 we compare PANDA against (i) JO (Perera & Patel, 2019) - co-training compactness with ImageNet classification, which requires ImageNet data at training time. We can see that PANDA-EWC always outperforms JO feature adaptation. (ii) PANDA early stopping (ImageNet pretraining + adaptation, with early stopping after a constant number of iterations) generally has higher performance than PANDA-EWC, but has severe collapse issues on some classes. (iii) PANDA-SES is similar to early stopping, but PANDA-SES does not collapse as badly on the CatsVsDogs dataset. We note that weighting the changes in all parameters equally (\sum_i (\theta_i - \theta^*_i)^2) achieves similar results to early stopping.

Which are the best layers to finetune? Finetuning all the layers is prone to feature collapse, even with continual learning (see Tab. 5). Finetuning blocks 3 & 4, or 2, 3 & 4, results in similar performance. Finetuning only block 4 results in performance very similar to linear whitening of the features according to the train samples (94.6 with whitening vs. 94.8 with finetuning only the last block). A similar effect can be seen in the original DeepSVDD architecture (see also Tab. 7, Appendix B). We therefore recommend finetuning blocks 3 & 4.

DeepSVDD architectural changes: DeepSVDD (Ruff et al., 2018) proposes various architectural changes, such as removing the bias parameters from the network, to prevent collapse to trivial features. We found empirically that the results obtained by the constrained architecture were about the same as those achieved with simple whitening of the data (64.8% vs. 64.6%, see Tab. 7). We also ablated DeepSVDD by (re-)adding the biases into its LeNet architecture; this did not deteriorate its anomaly detection performance. Architectural modifications are not the focus of this work; further investigation into architectures less prone to feature collapse is left for future work.

4.2.3 ANOMALY SCORING FUNCTIONS

Does kNN improve over distance to the center? kNN achieves an improvement of around 2% on average w.r.t. distance to the center (CIFAR10: 94.2% vs. 96.2%).

Can we improve over the linear complexity of kNN? A naive implementation of kNN has runtime complexity linear in the number of training samples. K-means with a small number of clusters gives a ∼1% decrease (CIFAR10: 94.9% vs. 96.2%, with 10 means). We note that even for very large datasets, or many thousands of means, both kNN and K-means can run faster than real-time.
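Both scoring functions compared above reduce to a few lines; this sketch (names ours) assumes the feature matrices have already been extracted:

```python
import torch

def knn_score(test_feats: torch.Tensor, train_feats: torch.Tensor,
              k: int = 2) -> torch.Tensor:
    """kNN anomaly score: mean distance to the k nearest normal train features."""
    d = torch.cdist(test_feats, train_feats)  # (n_test, n_train) distances
    return d.topk(k, dim=1, largest=False).values.mean(dim=1)

def kmeans_score(test_feats: torch.Tensor, means: torch.Tensor) -> torch.Tensor:
    """K-means variant: distance to the nearest of the K train-feature means."""
    return torch.cdist(test_feats, means).min(dim=1).values
```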
5 CONCLUSION AND OUTLOOK

We proposed an anomaly detection method that adapts pretrained features and mitigates or avoids catastrophic collapse. We showed that our results significantly outperform current methods while addressing their limitations. We analysed the reasons for the strong performance of our method and related popular methods to the different stages of our framework. The main limitation of this work is the requirement for strong pretrained feature extractors. Much work has been done on transferable image and text features, and it is likely that current extractors can be effective for obtaining features for time series and audio as well. Generic feature extractors are not currently available for tabular data; their development is an exciting direction for future work.

A PRETRAINED FEATURES, ROTNET AUXILIARY TASKS AND GENERALIZATION

Let us take a closer look at the application of RotNet-based methods for image anomaly detection. We will venture to understand why initializing RotNets with pretrained features may actually impair their anomaly detection performance. In such cases, a network for rotation classification is trained on normal samples and used to classify the rotation (and translations) applied to a rotated test image. To score an anomaly, the image is deemed anomalous if its rotation prediction accuracy is worse than that of a typical normal image.

To correctly classify a rotation of a new image, the network may use traits within the image that are associated with its correct alignment. Such features may be associated with the normal class, or with the entire dataset (common to normal and anomalous classes alike). For illustrative purposes, let us consider a normal class with images containing a deer, and an anomalous class with images containing a horse. The horns of the deer may indicate the "upward" direction, but so does the position of the sky in the image, which is often sufficient to classify the rotation correctly. As shown in Tab. 3, when initialized with pretrained features, the RotNet achieves very good performance on the auxiliary tasks, both within and outside the normal class, indicating the use of the more general traits that are common to more classes.

Although at first sight it may appear that the improved auxiliary task performance should improve the performance on anomaly detection, this is in fact not the case! The reason is that features that generalize better achieve better performance on the auxiliary task for anomalous data. The gap between the performance on the auxiliary tasks will therefore be smaller than with randomly-initialized networks - leading to degraded anomaly detection performance. For example, consider the illustrative example described above. A RotNet that "overfits" to work only on the normal class deer, relying on the horns of the deer, would classify rotations more accurately on deer images than horse images (as its main feature is horns). On the other hand, a RotNet that also uses more general traits can use the sky position for rotation angle prediction. In this case, it will achieve higher accuracy for both deer and horse images. The gap in performance is likely to be reduced, leading to lower anomaly detection success.

The above argument can be formulated using mutual information: in cases where the additional traits unique to the class do not add much information regarding the correct rotation over the general features common to many classes, the class will have limited mutual information with the predicted rotation as well (conditioned on the information already given by traits common to the entire dataset).
When the conditional mutual information between the predicted rotation and the class traits decreases, we expect the predicted rotation to be less discriminative for anomaly detection, as we indeed see in Tab. 6.

It is interesting to note that using RotNet features for our transfer learning approach achieves inferior results to both MHRot and our method. Only through an ensemble of all rotations, as MHRot does, does it achieve strong performance comparable to the MHRot performance. MHRot achieved 89.7% in our re-implementation. Using the MHRot features as ψ0, we compute the kNN distance of the unadapted features between test set images and train set images transformed by the same transformation. Ensembling the 36 transformations - using the average kNN distance - yields 88.7%. Another metric is computing the average kNN distance between test data transformed under a specific transformation and the training set transformed by another transformation. Using the average same-transformation kNN distance minus the average different-transformation kNN distance achieves 89.8% - a little better than the RotNet performance.

B FEATURE ADAPTATION, DEEPSVDD AND FEATURE COLLAPSE

To understand whether DeepSVDD gains its significant performance from its pretrained features or from its feature adaptation, we tried to replace its feature adaptation by closed-form linear data whitening. For both pretrained features and anomaly scoring, we used the DeepSVDD original code (Ruff et al., 2018). We can see that a linear method such as data whitening achieves comparable results (Tab. 7). We believe that large architectures are required for meaningful feature adaptation.

C DETAILED DESCRIPTION OF DATASETS

Standard datasets: We evaluate our method on a set of commonly used datasets: CIFAR10 (Krizhevsky et al., 2009): consists of RGB images of 10 object classes. Fashion MNIST (Xiao et al., 2017): consists of grayscale images of 10 fashion item classes. CIFAR100 (Krizhevsky et al., 2009): we use the coarse-grained version that consists of 20 classes. DogsVsCats: high-resolution color images of two classes, cats and dogs. The data were extracted from the ASIRRA dataset (Elson et al., 2007); we split each class into the first 10,000 images as train and the last 2,500 as test.

Small datasets: To further extend our results, we compared the methods on a number of small datasets from different domains: 102 Category Flowers (Nilsback & Zisserman, 2008) and Caltech-UCSD Birds 200 (Wah et al., 2011): for each of these datasets we evaluated the methods using each of the first 20 classes as normal, and the entire test set for evaluation. MVTec (Bergmann et al., 2019): this dataset contains 15 different industrial products, with normal images of proper products for training and 2-9 types of manufacturing errors as anomalies. The anomalies in MVTec are in-class, i.e. the anomalous images come from the same class as normal images, with subtle variations. As can be seen in the results in Tab. 2, self-supervised methods performed quite poorly on these datasets as they require many images to learn strong features. Simply using pretrained features was sufficient to obtain high accuracy (Tab. 2).

Symmetric datasets: We evaluated our method on datasets that contain symmetries, such as images that have no preferred angle (microscopy, aerial images; see Fig. 3): WBC (Zheng et al., 2018): we used the 4 big classes in "Dataset 1" of microscopy images of white blood cells, and an 80%/20% train-test split.
DIOR (Li et al., 2020): we preprocessed the DIOR aerial image dataset by taking the segmented objects in classes that have more than 50 images with size larger than 120 × 120 pixels. We see in Tab. 12 that for both symmetric datasets our method outperformed MHRot even more significantly. This experiment illustrates a weakness in self-supervised methods that need to exploit specific properties of the data, e.g. rotational symmetry. When such properties do not exist in the data, the performance of self-supervised methods is reduced. In this case, rotation prediction conveys no information on rotationally invariant images, and presumably all the prediction performance of MHRot comes from the translation prediction task, which can be less accurate.

D IMPLEMENTATION DETAILS

PANDA optimization: We finetune the two last blocks of an ImageNet-pretrained ResNet152 using an SGD optimizer with weight decay of w = 5 · 10−5 and momentum of m = 0.9. We use gradient clipping at G = 10−3. To have a comparable amount of training on the different datasets, we define the duration of each training run using a constant number of minibatches, of 32 samples each.

EWC: We use the Fisher information matrix as obtained by Kirkpatrick et al. (2017), as explained in Sec. 3. We weight the EWC loss with λ = 10^4. After obtaining the EWC regularizer, we train our network for 7.8k minibatches.

Early stopping/sample-wise early stopping: We save a copy of the network every 5 epochs. For early stopping we use the copy trained for 2.3k minibatches. For sample-wise early stopping we try all copies trained on up to 150k image samples.

Anomaly scoring: Unless specified otherwise, we score the anomalies according to the kNN method with k = 2 nearest neighbours. When comparing different networks, as in the PANDA-SES method, we normalize each set of features by the typical kNN distance of its normal train features. To obtain the typical normal distance we would like to compute the average over the normal samples. However, computing the distance between normal training data points has the issue that each point is its own nearest neighbour. Instead, we split the train set features (90% vs. 10%), and compute the kNN distance between the 10% validation images and the 90% gallery images (a sketch is given below).

PANDA outlier exposure: The method was described in Sec. 3. For synthetic outlier images, we used the first 48k images of 80 Million Tiny Images (Torralba et al., 2008) with CIFAR10 & CIFAR100 images removed. We finetune the last block of an ImageNet-pretrained ResNet152 with an SGD optimizer using 75 epochs and the following parameters: learning rate 0.1 with gradient clipping, momentum 0.9, and no weight decay.

Baselines: We compare to the following methods: OC-SVM: one-class SVM with the RBF kernel. The hyperparameters (ν ∈ {0.1, ..., 0.9}, γ ∈ {2^−7, ..., 2^2}) were optimized to maximize AUROC. DeepSVDD: we resize all the images to 32 × 32 pixels and use the official PyTorch implementation with the CIFAR10 configuration. MHRot (Hendrycks et al., 2019b): an improved version of the original RotNet approach. For high-resolution images we used the current GitHub implementation. For low-resolution images, we modified the code to the architecture described in the paper, replicating the numbers in the paper on CIFAR10. Outlier Exposure (MHRot): we use the outlier exposure performance as reported in Hendrycks et al. (2019b).
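The kNN-distance normalization used when comparing networks in PANDA-SES can be sketched as follows; this is our illustrative reconstruction of the split-based procedure described above, not the released code:

```python
import torch

def typical_normal_distance(train_feats: torch.Tensor,
                            k: int = 2, val_frac: float = 0.1) -> float:
    """Estimate the typical kNN distance of normal train features - a sketch.

    Splits the train features into a 10% validation part and a 90% gallery,
    then computes kNN distances across the split, avoiding the degenerate
    case where each point is its own nearest neighbour.
    """
    n_val = max(1, int(val_frac * len(train_feats)))
    val, gallery = train_feats[:n_val], train_feats[n_val:]
    d = torch.cdist(val, gallery)
    return d.topk(k, dim=1, largest=False).values.mean().item()
```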
1. What are the strengths and weaknesses of the paper regarding its contributions to combating feature collapse during model adaptation in anomaly detection?
2. How does the reviewer assess the effectiveness and efficiency of the two proposed approaches, adaptive sample-based early stopping (SES) and continual learning by elastic weight consolidation (EWC)?
3. What are the implications of the paper's findings regarding pre-trained models versus self-supervised models for domain adaptation in anomaly detection tasks?
4. Are there any concerns or questions regarding the paper's experimental design, such as the choice of datasets, comparison with baseline methods, and evaluation metrics?
5. Does the paper provide sufficient information and analysis to support its claims and conclusions, particularly regarding the computational complexity and training time required by the proposed methods?
Review
This paper proposes a method to combat feature collapse during model adaptation in anomaly detection (AD), while maintaining performance gains. Feature collapse happens during fine-tuning adaptation of a pretrained model when using a compactness loss, and results in all samples, even the anomalous ones, being mapped to the same point. Previous approaches include: removing the bias in all network units, as was proposed in DeepSVDD [Ruff 2018]; joint optimization (JO) using some of the data used to train the pre-trained model [Perera 2019]. This paper proposes 2 novel approaches: 1) adaptive sample-based early stopping (SES), based on a set of checkpoint-saved models after different numbers of iterations and an inference-time selection of the max anomaly score; 2) continual learning by elastic weight consolidation (EWC). These methods are compared to the baselines DeepSVDD and JO and shown to outperform them on several AD tasks. Another study compares self-supervised approaches to the pretrained approach and shows the latter to be more performant. Outlier exposure (OE) can be added to the proposed method, resulting in new SOTA performance for several datasets.

PROS:
- The paper shows convincingly that pre-trained models are superior to self-supervised models, even for datasets where the domain is far from the one used for pretraining (e.g. MVTec, DIOR, FMNIST on ImageNet).
- The paper proposes two techniques for domain adaptation of the pre-trained weights that mitigate the collapse problem. Those techniques are relatively simple and thus should be reproducible without too much difficulty.
- The results show that adaptation of the pre-trained weights does provide an extra boost in performance.
- The paper also studies which ResNet blocks need to be adapted for best results. It is shown that on CIFAR10, adapting only blocks 3+4 of the ResNet results in the best performance.

CONS:
- The paper does not study the computational complexity of the proposed methods. For example, the SES approach requires multiple inferences with several models and thus becomes much more costly. It is also unclear how the training time is affected by the EWC method.
- It is not clear which anomaly scoring function is used in the experiments. It is stated that several scoring functions are evaluated: Euclidean distance, kNN, K-means. However, nowhere is it said which one is finally used.
- It is not specified which baseline results are taken from published papers, and which have been obtained by the authors. The numbers for JO, for example, seem to be obtained by the authors, as the JO paper did not use the same datasets (CIFAR, etc.). On the other hand, the DeepSVDD numbers seem to come from the published paper.

Overall, this paper provides useful findings and novel approaches to weight adaptation in domain transfer for anomaly detection tasks. It is well written and contains a lot of relevant empirical experiments. The novel methods are clearly explained and should be reproducible. SOTA is achieved on several AD benchmarks. A clear accept.
ICLR
Title
TaskSet: A Dataset of Optimization Tasks

Abstract
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings, including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.¹

1 INTRODUCTION

As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms has only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a), or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks.

1.1 TASKSET: A SET OF TASKS

We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers, or development of new learned learning algorithms. We call this suite of tasks TaskSet. Much in the same way that learned features in computer vision outpaced hand-designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data-driven approaches to discovering optimization algorithms will replace their hand-designed counterparts, resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned optimizer research. In this setting, a single "example" is an entire training procedure for a task defined by data, loss function, and architecture.

¹ redacted url
Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute, as almost all tasks can be trained for 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1.

1.2 AMORTIZING HYPERPARAMETER SEARCH

Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). Unfortunately, most modern machine learning models also require extensive hyperparameter tuning. Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019). The most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads.

We choose the elements in this list by leveraging the diversity of tasks in TaskSet, meta-learning a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks. Although learning the list of hyperparameters is costly (in total we train ∼29 million models consisting of over 4,000 distinct hyperparameter configurations), our final published list is now available as a good starting guess for new tasks. Furthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release them as part of this work. We additionally release code in TensorFlow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list and can be easily applied to any model.

2 TASKSET: A SET OF TASKS

How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and that are representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural-network-based tasks.
Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today); as a result most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased "task diversity" by including many different kinds of training algorithms, architectures, and datasets - inspired by past work in reinforcement learning which has demonstrated that large numbers of problems and increased diversity around some domain of interest are useful for both training and generalization (Heess et al., 2017; Tobin et al., 2017; Cobbe et al., 2018; OpenAI et al., 2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997).

Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization. Specifically, we define a task as a set of 4 functions (a code sketch of this interface is given below):

• Initialization: () → parameter initial values
• Data generator: data split (e.g. train / valid / test) → batch of data
• Forward pass: (batch of data, params) → loss
• Gradient function: (input data, params) → gradients (d loss / d params)

A task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first-order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned. We augment a set of "fixed" tasks, which have been designed by hand, with "sampled" tasks that are randomly generated task instances.

2.1 SAMPLED FAMILIES OF TASKS

Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic), and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks.
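A minimal sketch of the four-function task interface, together with a toy instance in the spirit of the quadratic family; all names and the NamedTuple packaging are our own illustration, not the released API:

```python
from typing import Any, Callable, NamedTuple
import numpy as np

class Task(NamedTuple):
    """The four functions that define a task - a sketch of the interface."""
    init: Callable[[], Any]               # () -> initial parameter values
    data_generator: Callable[[str], Any]  # split ("train"/"valid"/"test") -> batch
    forward: Callable[[Any, Any], float]  # (batch, params) -> loss
    gradient: Callable[[Any, Any], Any]   # (batch, params) -> d loss / d params

def make_quadratic_task(dim: int = 10, seed: int = 0) -> Task:
    """A toy deterministic quadratic task: loss = 0.5 * params^T H params."""
    rng = np.random.RandomState(seed)
    A = rng.randn(dim, dim)
    H = A.T @ A  # positive semi-definite Hessian

    def init():
        return rng.randn(dim)

    def data_generator(split):
        return None  # deterministic task: no data needed

    def forward(batch, params):
        return float(0.5 * params @ H @ params)

    def gradient(batch, params):
        return H @ params

    return Task(init, data_generator, forward, gradient)
```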
These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3. 2.3 AGGREGATE STATISTICS OF TASKSET In Figure 1a we show histograms of compute times for all problems and find almost all problems train under an hour (see Appendix C for per task family histograms). In Figure 1c we plot a histogram of the number of parameters per tasks. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A 3 AMORTIZED HYPERPARAMETER SEARCH As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists. This idea of learning lists of hyper parameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g. learning rate). While sometimes practitioners use a single optimizer – e.g. Adam (Kingma & Ba, 2014) with default hyperparameters – most practitioners will often run multiple optimizers and use a validation set to select the best performer. 3.1 OPTIMIZER FAMILIES We define different parameterizations of hand designed optimizers as an optimizer family. The optimizer families we consider consist of: • Adam1p: One hyperparameter, the fixed learning rate α • Adam4p: Four Adam hyperparameters, α, β1, β2, and • Adam6p: Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays • Adam8p: The hyperparameters in Adam6p plus two additional hyperparameters for `1 and `2 regularization terms • NAdamW: A 10 hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay, and weight decay. For the full update equations see Appendix D.1 for Adam and D.2 for NadamW. We chose Adam based on its use in existing work, and NAdam based on performance shown in (Choi et al., 2019). 3.2 LEARNED HYPERPARAMETER LISTS Traditionally researchers tune hyperparameters on a per model basis. While this often results in performance gains; it comes at the cost of immense compute, and researchers are almost never able to expend enough compute to saturate model performance (Shallue et al., 2018). As an alternative to per-problem tuning, we proposes instead tuning the search strategy itself on a dataset of tasks and transferring the knowledge gained to new tasks of interest. This idea is already implicitly done by humans – e.g. we don’t start a hyperparameter search with a learning rate of 106 – we use values that the community has found useful. This dataset-based tuning has a number of desirable properties. First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline. Second, we are less restricted by the number of optimizer parameters we search over or by needing to define reasonable search spaces. For example, if there are redundant regions of search space, our learned optimizer will be less likely to sample them repeatedly, unlike random search. 
If there is a region of hyperparameter space that performs poorly on all problems, the learned search strategy will avoid it. In this work we parameterize the learned search strategy as an ordered list of optimizers to try (i.e. a list of hyperparameter configurations). Given a fixed number of task evaluations, we would like to achieve the best possible performance on all tasks in the training set of tasks. For a length-k list of optimizers we define our loss as:

J(\theta_{1,...,k}) = \sum_{\tau \in \text{tasks}} \left[ \min_{i \in 1..k} f(\tau, \theta_i) \right], \quad (1)

where θ_i are the optimizer hyperparameters for element i in the list, and f is an appropriately normalized loss computed after training task τ. We seek to find an optimal list of optimizers as (similar to (Wistuba et al., 2015b)):

\theta^*_{1,...,k} = \arg\min_{\theta_{1,...,k}} J(\theta_{1,...,k}). \quad (2)

This is meant to serve as an example task, illustrative of the sort of research enabled by TaskSet. More advanced hyperparameter search strategies would no doubt yield even more performant results.

3.3 SCORING AN OPTIMIZER BY AVERAGING OVER TASKS

To score a task, we initialize the parameters of the task and run 10,000 iterations of an optimizer. We monitor loss on each data split (train, validation, test) every 200 steps using an average over 50 minibatches per evaluation. For all data presented in this paper we also compute averages over 5 random task parameter initializations. A side effect of the diverse task dataset is that losses span multiple orders of magnitude, making direct aggregation of performance problematic. To remedy this we normalize the loss values for all tasks linearly between 0 and 1, where 1 is the validation loss at initialization and 0 is the lowest validation loss achieved by any tested optimizer. Loss values greater than the loss at initialization are clipped to 1. To collapse an entire normalized training curve into a scalar cost, we compute the mean normalized loss over the 10,000 iterations. We find empirically that this choice is similar to taking the minimum (Appendix B.5). We leave exploring alternative methods such as performance profiles (Dolan & Moré, 2002) and Nash averaging (Balduzzi et al., 2018) for future work.

3.4 GREEDY LEARNING FROM RANDOM SEARCH

Optimizing Eq. 2 is combinatorially expensive. To tractably solve this optimization problem, we introduce two approximations (Wistuba et al., 2015b). First, we shift the unconstrained search over the full space of optimizers to a search over a finite set of optimizers, Θ. This finite set can be computed ahead of time and decouples the expensive procedure of training each task with an optimizer from training the learned search space. Separating data and training in this way has been done for both hyperparameter search (Eggensperger et al., 2015) and neural architecture search (Klein & Hutter, 2019; Ying et al., 2019). In total we trained 1,000 optimizer configurations for each of Adam1p, Adam4p, Adam6p, Adam8p, and NAdamW on all 1,162 tasks with 5 random seeds per pair. Second, we use a greedy heuristic to approximate the combinatorial search over sets of k optimizers. For a single optimizer trial, k = 1, we select the best performing optimizer on average across all training tasks. We then continue to select optimizer parameters such that the minimum over all optimizer parameters per task, aggregated over all tasks, is minimized. This shifts the complexity from exponential in k to linear. Finding a length-k set of optimizers can thus be efficiently computed as follows:

\theta^*_1 = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} f(\tau, \theta) \right] \quad (3)

\theta^*_k = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} \min\left(b, f(\tau, \theta)\right) \right] \quad \text{where} \quad b = \min_{i \in 1..(k-1)} f(\tau, \theta^*_i). \quad (4)
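Given the precomputed matrix of normalized scores f(τ, θ) for every task and every candidate configuration in Θ, the greedy selection of Eqs. (3)-(4) is a few lines of NumPy; this sketch (names ours) assumes the scores are already normalized as in Sec. 3.3:

```python
import numpy as np

def greedy_hparam_list(scores: np.ndarray, k: int) -> list:
    """Greedy construction of a length-k hyperparameter list (Eqs. 3-4).

    scores: array of shape (n_tasks, n_optimizers) holding the normalized
        loss f(tau, theta) of every precomputed configuration on every task.
    Returns the indices of the k chosen configurations, in order.
    """
    n_tasks, _ = scores.shape
    best_so_far = np.full(n_tasks, np.inf)  # b in Eq. (4), one value per task
    chosen = []
    for _ in range(k):
        # Score each candidate by the per-task min(b, f(tau, theta)), summed
        # over tasks; the first iteration (b = inf) reduces to Eq. (3).
        cand = np.minimum(best_so_far[:, None], scores).sum(axis=0)
        i = int(np.argmin(cand))
        chosen.append(i)
        best_so_far = np.minimum(best_so_far, scores[:, i])
    return chosen
```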
We note that the first argument of the outer min, b, can be computed once per set of hyperparameters as it does not depend on θ. Finally, as our tasks are stochastic, we order optimizers based on validation loss and report test loss (Van Hasselt et al., 2016). (This technically means that increasing the number of optimizers could potentially decrease performance, but we find this rarely happens in practice.) This training strategy requires an original search space from which to collect data and build Θ. The search space we use is described in Appendix E.2. While large, we find that the optimal parameters for each task end up covering almost the entire space. At some point, no improvement can be obtained on any of the tasks in the dataset. At this point, we simply randomly order the remaining optimizers, though we expect more sophisticated methods could be employed.

4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS

With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration, we hope to give a flavor of the types of research possible with TaskSet. Our main tool to show performance are figures that sweep the number of optimizer configurations on the x-axis and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1).

4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH

To demonstrate the impact of learning a search space, we take the 1,162 tasks and split them into even train and test tasks. We then learn a search strategy using optimizers from the Adam8p family following Eq. 4 on the train tasks. Results are in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyperparameters (Rand: Adam4p), as well as the Adam 8-dimensional search space (Rand: Adam8p). To better get a sense of performance, we show two additional "refined" baselines which involve random sampling from a better search space. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is the difficulty of running online hyperparameter optimization methods such as Bayesian optimization without running additional compute. Future work will explore offline Bayesian methods.

4.2 MORE TASKS LEAD TO BETTER GENERALIZATION

We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different sizes, and train hyperparameter lists using Eq. 4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Fig. 3. We find that a large number of tasks (more than 100) is required to achieve near-optimal test performance.
This is surprising to us given how simple our learned search strategy is (simply a list of hyperparameters), but not wholly so given past work studying generalization in RL (Cobbe et al., 2018).

4.3 GENERALIZATION TO DIFFERENT TYPES OF PROBLEM

For learned algorithms to be generally useful, some amount of generalization to unseen task families is required. To test this, we split our data into disjoint task types. We perform two splits: testing on RNN tasks and training on all others, and testing on autoencoder tasks and training on all others. As a best-case baseline we additionally train search spaces on the test task families directly. We find an order of magnitude better sample efficiency than random search for both cases, and find our learned search space is close in performance to search spaces trained on just the testing tasks (Fig. 3).

5 EXPERIMENTS: REALISTIC PROBLEMS

In §4.3 and §B.1 we explored generalization of learned hyperparameter lists to held-out tasks within the TaskSet dataset. While useful for analysis, these tasks are still far from the workloads commonly employed to solve real problems. In this section, we explore the performance of our learned search space on a number of state-of-the-art models. These models drastically differ from the training set of tasks in parameter count and compute cost. We see these experiments as evidence that the tasks presented in TaskSet capture enough of the structure of "realistic" problems that TaskSet can be used to improve larger-scale workloads. For all experiments in this section we take the optimizer ordering obtained with the NAdamW optimizer family on all TaskSet tasks, then apply the resulting search space to the target problem. The final ordered list of hyperparameters used is in Appendix G. We show results for ResNet50 on ImageNet, and Transformers on LM1B. Additional results with reinforcement learning using PPO are in Appendix B.2.

First we explore ImageNet classification using a ResNet50. We take the TPU implementation with default settings from the official TensorFlow models repository (Tensorflow, 2019), and swap out different optimizers. We show accuracy computed over the course of training as well as best performance for a given hyperparameter budget in Figure 4. We find that the learned search space vastly outperforms learning-rate-tuned Adam.

Next we explore language modeling on LM1B with a Transformer. We take the Transformer (Vaswani et al., 2017) example implemented in Jax (Bradbury et al., 2018) with Flax (Flax Developers, 2020). We train using a 2x2 TPU V2 configuration for 100k iterations. Once again we take all other hyperparameters as is and simply swap the optimizer implementation. We find the learned hyperparameter list dramatically outperforms the default optimizer setting and the fixed learning rate baseline. Nevertheless, we emphasize that our method does not require any knowledge of the underlying problem to achieve faster results. See Appendix B.3 for this same Transformer with a budget of 20k iterations.

6 RELATED WORK

The idea of sets of tasks has been explored throughout machine learning. The majority of these suites are for use in evaluation, whereas our suite is targeted for meta-learning. The closest family of optimization tasks for evaluation to those presented here is DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Our task suite focuses on smaller problems and contains 50x more tasks.
Outside of evaluation, task suites in reinforcement learning such as Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018) focus on training algorithms that work across a variety of settings. The creation of TaskSet was motivated by the goal of learning learning algorithms, or meta-learning (Schmidhuber, 1987; 1995; Hochreiter et al., 2001), and in particular learned optimizers (Bengio et al., 1990; Andrychowicz et al., 2016; Bello et al., 2017; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). This use case is explored with this dataset in (Metz et al., 2020). In this work we do not use this task suite to train learned optimizers, but instead focus on learning a hyperparameter search strategy. Tuning hyperparameters by leveraging multiple tasks has been explored within the contexts of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as meta-learning (Reif et al., 2012; Gomes et al., 2012; Feurer et al., 2014; Wistuba et al., 2015b;a; Chen et al., 2017; Pfisterer et al., 2018). See Appendix F.1 for a full discussion of sets of tasks in machine learning, Appendix F.2 for more info on optimization in machine learning, and Appendix F.3 for a discussion of existing hyperparameter search methods.

7 DISCUSSION

Learning optimization algorithms represents a promising direction for accelerating machine learning research. For the resulting algorithms to become useful tools, however, we must further understand the relationships between training tasks, meta-optimization, and both iid and out-of-distribution generalization. This work takes steps towards this goal by introducing a significantly larger set of optimization tasks than ever previously considered. As an example use case, we provide a thorough analysis of how TaskSet enables meta-optimization of simple, but performant, hyperparameter lists. Despite this approach's simplicity, the training of learned learning algorithms is computationally expensive. We hope to explore alternative parameterizations which will increase efficiency by, e.g., leveraging previous evaluations or partial model training (Swersky et al., 2014; Li et al., 2016). We are releasing the optimal hyperparameter list we have found as a drop-in replacement optimizer in a variety of deep learning frameworks (TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018)) in the hope that the research community finds them useful. We believe this represents a new set of reasonable optimizer defaults for new problems. Finally, we hope TaskSet encourages more standardized research on general-purpose optimizers.

A TASKSET VISUALIZATION

For a qualitative view, we constructed a feature space consisting of performance measurements for each task+optimizer pair (see §3.3). This forms a dense matrix of size number of tasks by number of optimizers. We then perform t-SNE (Maaten & Hinton, 2008; Van Der Maaten, 2014) to reduce the dimensionality to two and plot the results, coloring by task family (Figure S1). Clusters in this space correspond to tasks that work well with similar optimizers. We find diversity of tasks, with clusters occurring around similar families of tasks.

A.1 TSNE OF TASKSET

B ADDITIONAL EXPERIMENTS

B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS

Training learned algorithms on large models is often infeasible for computational reasons.
As such, one form of generalization needed when building learned algorithms is the ability to transfer to different-sized models. As shown in Figure 1 the tasks in this suite contain a wide range of parameter counts, and can thus be used to test this kind of generalization. We split the tasks into 8 groups - one group per order of magnitude in parameter count - and train hyperparameter lists on one range and test on the rest. In Figure S2 we plot the fraction of the training loss achieved by the test loss on the target parameter range. We find peak performance around the model sizes used for training, and a smooth falloff as the testing tasks become more dissimilar as measured by parameter count. We note that our problems are not evenly distributed across these groups, thus each group will contain a different percentage of the underlying tasks. While this potentially confounds these results, we believe a similar bias occurs in realistic workloads as well.

Figure S2: We show learned search space generalization, measured as a ratio of the loss achieved in training and testing, versus the number of task parameters used during search space training. Generalization falls off as one moves further away from the training regime. In black we show that a uniform mixture of the 7 parameter buckets does not fall off.

B.2 REINFORCEMENT LEARNING WITH PPO

Figure S3: We find our learned hyperparameter list performs about as well as random search on the NAdam search space, and worse than random search on the learning-rate-tuned Adam search space.

We test the learned hyperparameter lists on two continuous control reinforcement learning environments, half cheetah and humanoid, from Gym's Mujoco environments (Todorov et al., 2012; Brockman et al., 2016). We use TF-Agents (Guadarrama et al., 2018) with all non-optimizer hyperparameters set via searching a mixture of environments. In Figure S3 we find our learned hyperparameter list achieves comparable or slightly worse performance, and does not outperform learning rate tuning of Adam in either efficiency or final performance. To diagnose this behavior we ran all 1k optimizers for both problems and found the learned hyperparameter list performs comparably to random search in the underlying space. To probe further, we computed the Spearman correlation of the performance of each optimizer as compared to the rest of the tasks in the task suite. We found considerably worse correlations than were present for tasks in the TaskSet. This is not surprising, as TaskSet contains no reinforcement learning problems.

B.3 LM1B TARGETING 20K ITERATIONS

We show a Transformer on LM1B similar to that shown in §5, except run for only 20k iterations, a fifth of the steps. Results are in Figure S4. We find the learned hyperparameter lists are much more efficient than either of the baselines.

Figure S4: We find our learned hyperparameter lists outperform learning-rate-tuned Adam with both a constant and a fixed learning rate schedule on a 53M parameter Transformer trained on LM1B. Left: Learning curves for the best of the optimizers. Right: Number of optimizers tried vs. best test loss.

B.4 PROBING SHORT HORIZON

Often the goal when training a learned optimizer is to minimize performance after training some number of iterations. This is extremely computationally expensive and in practice approximations must be used.
One common family of approximations is short-horizon-based methods. These methods rely upon somehow truncating training so that updates can be made to the learned optimizer more frequently. This is commonly done via truncated backprop (Werbos, 1990; Wichrowska et al., 2017; Metz et al., 2019a; Wu et al., 2016), or proxy objectives such as only training for a handful of epochs (Zoph & Le, 2017). While this short-horizon proxy is certainly not optimal (Wu et al., 2016), the performance gains are immense, and in practice this is what makes meta-training optimizers feasible. In our task suite, we test this short-horizon learning by training hyperparameter lists using only some finite amount of training iterations per task and testing in the full training regime (10k steps). Results are in Figure S5. We find that even when learning the hyperparameter list on a mere 200 steps, our hyperparameter list continues to generalize and outperform random search on Adam8p. This is promising, as it suggests that training the learned hyperparameter list can be done with 1/50th of the total compute. This result is surprising to us, as prior work indicates the effect of this bias can be severe (Wu et al., 2016; Metz et al., 2019a). We suspect it is due to the simplicity of the learned parameter space, but leave a thorough analysis of this for future work.

Figure S5: Hyperparameter lists trained on short-horizon data generalize remarkably well. On the y-axis we show performance evaluated on the full 10k training iterations for a given number of optimizers tried (x-axis). In color we show different numbers of steps used when evaluating task optimizer performance when training the hyperparameter list.

Figure S6: Left: Aggregate performance (y-axis) vs. number of optimizers tried (x-axis) for different normalization and aggregation techniques. In each curve we train the hyperparameter list with a different normalization and aggregation strategy and test with the default normalization and aggregation technique described in 3.3. We find some strategies are near-identical in performance (e.g. min norm), while others perform significantly worse - e.g. last quantile norm. In both cases, however, we still perform better than the underlying random search. Center: Correlation between the default normalization and the quantile-based normalization strategy. Correlation is quite low - 0.193 Pearson's correlation. Right: Correlation between the default normalization using a mean to aggregate validation loss over the course of training vs. using a min over validation loss over the course of training. We find a much higher correlation of 0.911.

B.5 CHOICE OF NORMALIZATION FUNCTION

There is no easy way to define a single metric for optimizer performance over a mixture of tasks. This paper picks a single normalization strategy based on the minimum validation loss and the validation loss at initialization, presented in §3.3. In this section we show the impact of choosing a different normalization and/or aggregation technique. First, instead of computing the mean over learning curves as described in §3.3, we compute a min.
First, instead of computing the mean over learning curves as described in §3.3, we compute a min. Second, instead of rescaling based on init and min, we linearly rescale based on the 95th percentile of validation loss and the min validation loss achieved at the end of training each task. In Figure S6 we show learned hyperparameter list training and testing performance as a function of the number of optimizers tried when training with different normalization techniques. We find using the min instead of the mean results in a negligible change, while using the percentile loss hurts performance more significantly. This difference can be explained by Figure S6b and S6c, where we show correlations between the two losses. We find the percentile loss has a much weaker correlation to the default normalizer. We suspect this difference is due to the fact that many optimizers diverge on tasks. By using the 95th percentile we upweight optimizers that do not diverge.

B.6 TASK FAMILIES ARE DIVERSE

To show the effects of diversity we train and test hyperparameter lists on each pair of task families. We additionally normalize each column from 0-1 to account for different mean losses across tasks. Results are in Figure S7. While we do find some similarity between tasks – e.g. between MAF and NVP models – no two task families exhibit the same performance characteristics (no duplicate columns), suggesting that each task family is providing a different contribution to the space of all tasks. We also find that training on certain “far away” tasks, e.g. the quadratic family, yields poor performance on most other task families.

Figure S7: Learning hyperparameter lists using one task family and testing on the remainder of task families. We normalize each column from 0-1 to account for different mean losses across tasks. Lower loss means better performance. We find some groups of similar tasks, but in general no two task families behave identically.

B.7 EFFECTS OF THE META-TRAINING SEARCH SPACE SIZE

Our offline learning technique described in §3.4 hinges on a finite set of optimizers collected via random search. This set is denoted by Θ in Eq. 4. In this section we probe the impact of its size. We take different sized subsets of the thousand Adam8p optimizer configurations and train and test search spaces on different iid splits of tasks. We then plot performance as a function of this number of optimizers in Figure S9. Moving right in this figure corresponds to increasing the compute needed to train the learned hyperparameter list. We find performance continues to improve as the size of Θ grows. Given the high dimension of our meta-parameters, 8, this is not a surprise, as the number of evaluations needed to explore the space will grow exponentially. We find that the full thousand trials are needed to outperform learning rate tuned Adam when only given a single optimizer evaluation. We find around 100 optimizers (size of Θ) are needed in the case of 10 optimizer trials (k = 10). Overall this suggests that random search might not be the most efficient learning method for creating hyperparameter lists. This is especially true as we work with optimizer families that have more hyperparameters. Other approximate learning methods should likely be explored, such as truncated backprop through time as used by the learned optimizer community (Metz et al., 2019a), and/or population based methods (Balduzzi et al., 2019).
Figure S8: Timings computed for each task family (x-axis: task family; y-axis: time to train 10k steps, ranging from about 1 second to 2 hours). We find most task families have a narrow distribution of compute times.

Figure S9: Performance continues to improve as more and more optimizers are used when training the search spaces. On the x-axis we show the number of optimizers (the size of Θ, i.e. the number of hyperparameter evaluations used in training the learned hyperparameter list) and on the y-axis the test loss achieved when applying the learned search space for a given fixed length (different values of k, shown in color, for both Adam8p and Adam4p). We plot the median, with the 25-75 percentile range shaded, over different random optimizer samples and iid task splits. Stars (with horizontal guide lines) denote the best result of learning rate tuned Adam (searched in half orders of magnitude) for the corresponding number of hyperparameter trials.

C TASK TIMINGS

In Figure S8 we show box plots of training times for each problem. For each task we use the median step time recorded over a mixture of different physical devices, multiplied by 10k to estimate a full training time. Future versions of this dataset of tasks will contain more variation within each task family.

D OPTIMIZER FAMILY UPDATE EQUATIONS

D.1 ADAM8P UPDATE EQUATIONS

The 8 meta-parameters are: the learning rate α; the first and second moment momentum, β1, β2; the numerical stability term ε; the ℓ2 and ℓ1 regularization strengths; and the learning rate schedule constants λexp_decay and λlinear_decay. For Adam6p, we set ℓ1 and ℓ2 to zero.

\phi^{(0)} = \text{problem specified random initialization}   (S1)
m^{(0)} = 0   (S2)
v^{(0)} = 0   (S3)
g^{(t)} = \frac{d}{d\phi^{(t)}} \left( f(x; \phi^{(t)}) + \ell_2 \|\phi^{(t)}\|_2^2 + \ell_1 \|\phi^{(t)}\|_1 \right)   (S4)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)} (1 - \beta_1)   (S5)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2 (1 - \beta_2)   (S6)
\hat{m}^{(t)} = m^{(t)} / (1 - \beta_1^{t+1})   (S7)
\hat{v}^{(t)} = v^{(t)} / (1 - \beta_2^{t+1})   (S8)
u^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S9)
s_{\text{linear}}^{(t)} = \max(1 - t \lambda_{\text{linear\_decay}}, 0)   (S10)
s_{\text{exp}}^{(t)} = \exp(-t \lambda_{\text{exp\_decay}})   (S11)
\phi^{(t+1)} = \phi^{(t)} - \alpha \, s_{\text{linear}}^{(t)} s_{\text{exp}}^{(t)} u^{(t)}   (S12)
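As an illustration, a single Adam8p step, (S4)-(S12), can be sketched as follows (a minimal NumPy sketch; names are ours, and the task is assumed to supply the raw gradient):

    import numpy as np

    def adam8p_step(phi, m, v, grad, t, lr, beta1, beta2, eps,
                    l1, l2, lam_linear, lam_exp):
        # (S4): add the l2/l1 regularization gradients to the task gradient.
        g = grad + 2.0 * l2 * phi + l1 * np.sign(phi)
        # (S5)-(S6): update first and second moment estimates.
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g
        # (S7)-(S8): bias correction.
        m_hat = m / (1.0 - beta1 ** (t + 1))
        v_hat = v / (1.0 - beta2 ** (t + 1))
        # (S9): Adam update direction.
        u = m_hat / (np.sqrt(v_hat) + eps)
        # (S10)-(S12): apply linear and exponential learning rate decay.
        s = max(1.0 - t * lam_linear, 0.0) * np.exp(-t * lam_exp)
        phi = phi - lr * s * u
        return phi, m, v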
D.2 NADAMW UPDATE EQUATIONS

This optimizer family has 10 hyperparameters: the base learning rate αbase; the first and second moment momentum, β1, β2; the numerical stability term ε; the ℓ2 regularization strength ℓ2WD; the AdamW style weight decay ℓ2AdamW; and a boolean buse nesterov to switch between NAdam and Adam. The learning rate schedule is based off of a single cycle cosine decay with a warmup, controlled by 3 additional parameters – cwarmup, cconstant, and cmin learning rate mult. The learning rate is defined by:

u = \mathbb{1}[c_{\text{warmup}} T > t]   (S13)
\alpha_{\text{decay\&constant}} = (\alpha_{\text{base}} - c_{\text{min learning rate mult}}) \left( 0.5 \cos\!\left(t\pi / (T - c_{\text{constant}})\right) + 0.5 \right) + c_{\text{min learning rate mult}}   (S14-S16)
\alpha_{\text{warmup}} = t / (T c_{\text{warmup}})   (S17)
\alpha = (1 - u)\, \alpha_{\text{decay\&constant}} + u\, \alpha_{\text{warmup}}   (S18)

The update equations of NAdamW are quite similar to those of Adam8p. For clarity we list the full update here.

\phi^{(0)} = \text{problem specified random initialization}   (S19)
m^{(0)} = 0   (S20)
v^{(0)} = 0   (S21)
g^{(t)} = \frac{d}{d\phi^{(t)}} \left( f(x; \phi^{(t)}) + \ell_{2\text{wd}} \|\phi^{(t)}\|_2^2 \right)   (S22)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)} (1 - \beta_1)   (S23)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2 (1 - \beta_2)   (S24)
\hat{m}^{(t)} = m^{(t)} / (1 - \beta_1^{t+1})   (S25)
\hat{v}^{(t)} = v^{(t)} / (1 - \beta_2^{t+1})   (S26)
u_{\text{heavy ball}}^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S27)
u_{\text{nesterov}}^{(t)} = \left( \beta_1 \hat{m}^{(t)} + (1 - \beta_1) g^{(t)} \right) / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S28)
\phi^{(t+1)} = \phi^{(t)} - \alpha \left( (1 - b_{\text{use nesterov}})\, u_{\text{heavy ball}}^{(t)} + b_{\text{use nesterov}}\, u_{\text{nesterov}}^{(t)} \right) - \alpha\, \ell_{2\text{AdamW}}\, \phi^{(t)}   (S29-S30)

E OPTIMIZER FAMILY SEARCH SPACES

E.1 SEARCH SPACE CONSIDERATIONS

The performance of random search critically depends on the boundaries of the original search space. Without prior knowledge about the problems, however, picking a good search space is difficult. To explore this we additionally choose search spaces after collecting and looking at the data. We then use these search spaces to simulate random search within the constraints via rejection sampling. To find these search spaces we take the best hyperparameters for each task and construct new hyperparameter ranges with min and max values determined by the smallest and largest values of each hyperparameter which were the best hyperparameter for some task. This removes regions of the search space not used by any task. We also tested bounds based on the 5th and 95th percentile of best performing hyperparameters computed over all tasks. In the case of min and max, we find the optimal hyperparameters cover nearly all of the existing space, whereas the percentile based search spaces reduce the volume of the search hypercube by more than 90%, leaving us with only ∼100 hyperparameter configurations. In Figure 3, we find, in all cases, learning the hyperparameter list is much more efficient.

E.2 ADAM8P, ADAM6P, ADAM4P, ADAM1P SEARCH SPACES

For Adam1p, Adam4p, Adam6p, and Adam8p we sample the learning rate logarithmically between 1e-8 and 10. We parametrize β1 and β2 as 1 − x and sample x logarithmically between 1e-4 and 1, and between 1e-6 and 1, respectively. For learning rate schedules we sample the linear decay logarithmically between 1e-7 and 1e-4, and the exponential decay logarithmically between 1e-6 and 1e-3. We sample both ℓ1 and ℓ2 logarithmically between 1e-8 and 1e1.

E.3 NADAMW SEARCH SPACE

This search space was chosen heuristically in an effort to generalize to new problems. We would like to emphasize that it was not tuned; we used our insight from Adam based optimizer families and chose this. No iterations were done. We expect more iterations will improve not only in-distribution performance, but also generalization performance. The initial learning rate αbase is sampled logarithmically between 1e−5 and 1.0. 1 − β1 is sampled logarithmically between 1e−3 and 1.0. 1 − β2 is sampled logarithmically between 1e−5 and 1.0. ε is sampled logarithmically between 1e−8 and 1e4. We sample using nesterov (buse nesterov) 50% of the time. We sample ℓ2WD and ℓ2AdamW logarithmically between 1e−5 and 1e−1; with equal probabilities of a third, we either use both terms, zero out ℓ2WD, or zero out ℓ2AdamW. With 50% probability we use a nonzero minimum learning rate multiplier sampled logarithmically between 1e−5 and 1.0. With 50% probability we sample the warmup fraction cwarmup logarithmically between 1e-5 and 1e-1, otherwise it is set to zero. Finally, we uniformly sample the amount of time the learning rate is held constant (cconstant) between 0 and 1. A minimal sketch of this sampler is given below.
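The following is a minimal sketch of this sampling distribution, assuming NumPy (helper and key names are ours, not from the released code):

    import numpy as np

    def log_uniform(lo, hi, rng):
        # Sample logarithmically (uniform in log space) between lo and hi.
        return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

    def sample_nadamw_config(rng=None):
        if rng is None:
            rng = np.random.default_rng()
        cfg = {
            "learning_rate": log_uniform(1e-5, 1.0, rng),
            "beta1": 1.0 - log_uniform(1e-3, 1.0, rng),
            "beta2": 1.0 - log_uniform(1e-5, 1.0, rng),
            "epsilon": log_uniform(1e-8, 1e4, rng),
            "use_nesterov": bool(rng.uniform() < 0.5),
        }
        # With probability 1/3 each: keep both regularizers, or zero one out.
        l2_wd = log_uniform(1e-5, 1e-1, rng)
        l2_adamw = log_uniform(1e-5, 1e-1, rng)
        which = rng.integers(3)
        cfg["l2_wd"] = 0.0 if which == 1 else l2_wd
        cfg["l2_adamw"] = 0.0 if which == 2 else l2_adamw
        # Optional minimum learning rate multiplier and warmup fraction.
        cfg["min_lr_mult"] = log_uniform(1e-5, 1.0, rng) if rng.uniform() < 0.5 else 0.0
        cfg["warmup_frac"] = log_uniform(1e-5, 1e-1, rng) if rng.uniform() < 0.5 else 0.0
        # Fraction of training during which the learning rate is held constant.
        cfg["constant_frac"] = float(rng.uniform(0.0, 1.0))
        return cfg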
F EXTENDED RELATED WORK

F.1 SETS OF TASKS

Benchmarks consisting of multiple tasks are becoming an increasingly common technique for measuring improvement in algorithm design. Reinforcement learning has Atari (Bellemare et al., 2013), DMLab (Beattie et al., 2016), gym (Brockman et al., 2016), and dm_control (Tassa et al., 2018). Natural language processing has evaluation sets such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), and the NLP Decathlon (McCann et al., 2018). In computer vision there is (Zhai et al., 2019), which studies transfer learning of image features. In black box optimization there is Nevergrad (Rapin & Teytaud, 2018), COmparing Continuous Optimizers (COCO) (Hansen et al., 2016), and a number of tasks to test Bayesian hyperparameter optimization presented in (Dewancker et al., 2016). For first order gradient methods there are unit tests for stochastic optimization (Schaul et al., 2013), which studies toy optimization functions, and DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Hyperparameter tuning practices on these benchmarks vary between tuning on each task separately and tuning one set of hyperparameters for all problems. In Atari (Bellemare et al., 2013), for example, it is common practice to tune hyperparameters on a subset of tasks and evaluate on the full set. This protocol can further be extended by leveraging unseen levels or games at test time, as done in Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018). We believe generalization to unseen tasks is key for learned algorithms to be useful; thus our learned search space experiments mirror this setting by making use of hold out tasks.

Existing meta-learning datasets share similar goals to our work but focus on different domains. In few shot learning there is MiniImageNet (Vinyals et al., 2016), which is built procedurally from the ImageNet dataset (Russakovsky et al., 2015). Meta-Dataset (Triantafillou et al., 2019) takes this further and also focuses on generalization by constructing few shot learning tasks using images from a number of different domains for evaluation purposes. The automated machine learning community has OpenML (Vanschoren et al., 2013), with a focus on selecting and tuning non-neural algorithms. For learning optimizers, the use of task suites has been limited and ad-hoc. Many works use a single or small number of standard machine learning tasks (Andrychowicz et al., 2016; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a). Wichrowska et al. (2017) use a set of synthetic problems meant to emulate many different kinds of loss surfaces. While existing collections of tasks exist for optimizer evaluation, e.g. (Schneider et al., 2019), they contain too small a number of tasks to act as a comprehensive training set for learning algorithms, and many of their tasks are additionally too computationally expensive to be useful during learning.

F.2 HAND DESIGNED AND LEARNED OPTIMIZERS

Optimization is core to machine learning and thus the focus of extensive work. Methods such as Nesterov momentum (Nesterov, 1983), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014) have all shown considerable improvements in both the speed of optimization and ease of use, by exposing more robust and easier to tune hyperparameters than SGD (Sivaprasad et al., 2019). Adaptive step size methods in particular have emerged at the forefront, with many works building on them, including AdamW (Loshchilov & Hutter, 2017), RAdam (Liu et al., 2019), Novograd (Ginsburg et al., 2019), and NAdam (Dozat, 2016).
Recently, there has been a focus on comparing optimizers either for best performance or ease of use (Wilson et al., 2017; Choi et al., 2019; Schneider et al., 2019; Sivaprasad et al., 2019). This has proven difficult as performance is heavily dependent on the choice of search space for optimization hyperparameters (Choi et al., 2019). Learned optimizers represent a parallel thread in the development of optimizers. By learning as opposed to hand-designing optimizers, researchers hope to not only increase performance but also ease of use (e.g. minimize the number of hyperparameters required or lower hyperparameter sensitivity) (Bengio et al., 1990; Schmidhuber, 1995; Hochreiter et al., 2001). Recently, there has been renewed interest in parameterizing learning algorithms with neural networks and learning these optimizers on neural network based losses (Andrychowicz et al., 2016; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). Other approaches learn symbolic parameterizations for new optimizers (Bello et al., 2017). These various methods are all trained and evaluated on different distributions of tasks, making comparison across papers challenging. The dataset of tasks presented here will hopefully aid in the ability to compare and evaluate progress in learned optimizer research. In this work, we develop a much more minimal type of “learned optimizer” than previous work, which developed new functional forms for the optimizer. Optimization involves not only the functional form of the optimizer, but also the rules for choosing hyperparameters and applying the optimizer. We focus on this second aspect of optimization and learn a hyperparameter search space to improve the performance of existing hand designed methods.

F.3 HYPERPARAMETER SEARCH

Hyperparameter search is a key component in machine learning. Considerable improvements have been made in language (Melis et al., 2017), computer vision (Snoek et al., 2012), and RL (Chen et al., 2018) simply by tuning better. Often no single hyperparameter configuration works well across all tasks for existing optimization methods. Most current hyperparameter search methods involve trying a very large number of hyperparameters for every new task, which is computationally infeasible for large tasks and additionally can severely limit the number of hyperparameters that can be tuned. Many common techniques, such as random search (Bergstra & Bengio, 2012; Bousquet et al., 2017), Bayesian optimization (Snoek et al., 2012; 2015), tree parzen estimators (Bergstra et al., 2011), or sequential halving (Kumar et al., 2018), require setting a hyperparameter search space by hand, which is not only difficult but often wildly inefficient. Learning hyperparameters or search strategies by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018), as well as under the term meta-learning in Chen et al. (2017), in which an LSTM is meta-trained to produce function locations to query. The cost of hyperparameter search is often large, as each evaluation requires training a model to completion. Often multi-fidelity based approaches are used which leverage “simpler” tasks and transfer the resulting hyperparameters (Hutter et al., 2018). Common approaches include training on partial function evaluations (Swersky et al., 2014; Domhan et al., 2015; Li et al., 2016; Klein et al., 2016; Falkner et al., 2018), or leveraging simplified data and models (Petrak, 2000; Zoph & Le, 2016; Brock et al., 2017).
Our dataset of tasks serves as: a “simpler” set of tasks to train on; a large and diverse enough set of problems that optimization algorithms trained on it may be expected to generalize; and a framework to test transfer across different types of problems.

G LIST OF NADAM HPARAMS

Idx | Lr | warmup | constant | Min LR mult | beta1 | beta2 | epsilon | nesterov | l2 reg | l2 weight decay
0 | 1.24e-3 | 0.000 | 0.477 | 1.01e-3 | 0.94666 | 0.94067 | 8.114e-8 | False | 0.000e+00 | 7.258e-5
1 | 5.33e-3 | 0.000 | 0.172 | 0.0 | 0.96047 | 0.99922 | 8.665e-8 | True | 0.000e+00 | 5.563e-3
2 | 2.12e-4 | 0.000 | 0.210 | 1.39e-3 | 0.62297 | 0.97278 | 1.540e-7 | False | 0.000e+00 | 5.361e-2
3 | 4.06e-1 | 0.000 | 0.324 | 0.0 | 0.99724 | 0.98680 | 1.079e+02 | True | 0.000e+00 | 1.562e-2
4 | 2.05e-2 | 0.000 | 0.885 | 1.57e-5 | 0.35731 | 0.86043 | 8.874e-5 | True | 0.000e+00 | 7.217e-2
5 | 5.95e-4 | 0.008 | 0.378 | 0.0 | 0.89130 | 0.99983 | 1.483e-7 | True | 0.000e+00 | 4.087e-2
6 | 7.53e-3 | 0.000 | 0.422 | 9.55e-4 | 0.69192 | 0.98434 | 3.593e-8 | False | 0.000e+00 | 3.060e-4
7 | 4.69e-3 | 0.000 | 0.509 | 0.0 | 0.99639 | 0.98820 | 2.056e-5 | False | 0.000e+00 | 3.552e-2
8 | 2.95e-1 | 0.000 | 0.201 | 0.0 | 0.99678 | 0.99981 | 7.498e+00 | False | 3.792e-4 | 3.463e-4
9 | 2.04e-3 | 0.000 | 0.527 | 0.0 | 0.49995 | 0.99755 | 5.630e-8 | True | 0.000e+00 | 2.796e-2
10 | 7.39e-1 | 0.001 | 0.556 | 3.31e-3 | 0.99691 | 0.80639 | 2.900e+03 | False | 0.000e+00 | 7.851e-2
11 | 8.12e-3 | 0.000 | 0.207 | 0.0 | 0.17785 | 0.96033 | 7.971e-2 | False | 0.000e+00 | 1.489e-2
12 | 3.33e-2 | 0.000 | 0.369 | 0.0 | 0.69592 | 0.99997 | 5.510e-6 | True | 0.000e+00 | 1.362e-5
13 | 6.95e-3 | 0.000 | 0.014 | 0.0 | 0.99412 | 0.99305 | 4.352e-7 | False | 0.000e+00 | 3.142e-5
14 | 1.88e-1 | 0.000 | 0.205 | 1.08e-1 | 0.98597 | 0.56531 | 3.335e+00 | True | 1.265e-5 | 3.868e-3
15 | 9.47e-4 | 0.007 | 0.452 | 0.0 | 0.43977 | 0.09422 | 2.120e-7 | False | 0.000e+00 | 6.902e-3
16 | 3.75e-3 | 0.000 | 0.184 | 0.0 | 0.87756 | 0.96128 | 3.163e-3 | True | 7.468e-5 | 2.627e-3
17 | 7.25e-1 | 0.000 | 0.495 | 0.0 | 0.99800 | 0.99781 | 3.608e+00 | True | 1.656e-5 | 3.911e-2
18 | 4.58e-3 | 0.000 | 0.107 | 3.66e-1 | 0.42294 | 0.99963 | 4.174e-6 | True | 0.000e+00 | 4.446e-3
19 | 3.07e-4 | 0.007 | 0.518 | 0.0 | 0.57863 | 0.99625 | 9.881e-6 | False | 0.000e+00 | 5.521e-2
20 | 2.94e-5 | 0.000 | 0.830 | 8.27e-5 | 0.96916 | 0.99896 | 7.782e-7 | True | 3.364e-4 | 3.416e-3
21 | 1.65e-4 | 0.002 | 0.457 | 2.70e-1 | 0.95280 | 0.04565 | 2.832e-6 | True | 0.000e+00 | 1.141e-2
22 | 9.17e-1 | 0.010 | 0.897 | 2.67e-2 | 0.45061 | 0.99244 | 4.945e-1 | False | 1.253e-3 | 0.000e+00
23 | 2.36e-3 | 0.000 | 0.986 | 0.0 | 0.98560 | 0.99997 | 1.080e-8 | True | 0.000e+00 | 3.023e-3
24 | 2.14e-2 | 0.000 | 0.128 | 0.0 | 0.98741 | 0.99336 | 1.266e-4 | False | 0.000e+00 | 5.194e-4
25 | 5.91e-2 | 0.000 | 0.062 | 0.0 | 0.99794 | 0.99383 | 3.447e+02 | True | 0.000e+00 | 3.935e-2
26 | 1.57e-3 | 0.000 | 0.251 | 0.0 | 0.91820 | 0.99991 | 4.675e-5 | False | 0.000e+00 | 4.112e-5
27 | 4.43e-1 | 0.000 | 0.702 | 0.0 | 0.94375 | 0.93551 | 2.335e-8 | True | 0.000e+00 | 8.325e-5
28 | 2.98e-3 | 0.008 | 0.046 | 0.0 | 0.68612 | 0.94232 | 6.614e-2 | False | 6.489e-5 | 0.000e+00
29 | 1.65e-2 | 0.004 | 0.082 | 4.92e-4 | 0.95717 | 0.99789 | 3.068e+01 | True | 0.000e+00 | 8.920e-2
30 | 5.58e-3 | 0.000 | 0.538 | 0.0 | 0.97559 | 0.99990 | 3.238e-8 | True | 0.000e+00 | 4.896e-4
31 | 8.54e-1 | 0.000 | 0.229 | 0.0 | 0.93129 | 0.50200 | 2.051e-2 | False | 2.068e-4 | 2.801e-2
32 | 7.38e-3 | 0.000 | 0.722 | 8.78e-2 | 0.21456 | 0.99752 | 2.862e-2 | False | 0.000e+00 | 8.439e-2
33 | 4.26e-4 | 0.001 | 0.923 | 2.06e-1 | 0.47239 | 0.99974 | 8.221e-5 | False | 1.248e-5 | 0.000e+00
34 | 6.04e-3 | 0.000 | 0.698 | 0.0 | 0.97849 | 0.91449 | 1.806e+00 | False | 3.183e-3 | 1.762e-2
35 | 8.86e-3 | 0.000 | 0.104 | 1.66e-1 | 0.98967 | 0.99720 | 1.493e-2 | True | 0.000e+00 | 2.253e-2
36 | 1.51e-2 | 0.000 | 0.431 | 1.99e-3 | 0.80488 | 0.97878 | 2.538e-8 | True | 0.000e+00 | 2.269e-5
37 | 2.50e-3 | 0.000 | 0.009 | 0.0 | 0.98127 | 0.99988 | 1.799e-7 | False | 0.000e+00 | 1.303e-2
38 | 3.42e-4 | 0.000 | 0.827 | 6.38e-1 | 0.25217 | 0.96572 | 2.928e-7 | True | 0.000e+00 | 1.318e-3
39 | 6.94e-5 | 0.000 | 0.085 | 0.0 | 0.98674 | 0.42709 | 2.387e-7 | False | 0.000e+00 | 2.071e-4
40 | 3.03e-2 | 0.001 | 0.313 | 0.0 | 0.90610 | 0.99997 | 4.449e-3 | True | 0.000e+00 | 2.813e-5
41 | 4.64e-3 | 0.000 | 0.495 | 2.26e-5 | 0.64658 | 0.54108 | 3.528e-8 | False | 0.000e+00 | 2.996e-5
42 | 2.25e-3 | 0.000 | 0.722 | 0.0 | 0.97967 | 0.97518 | 1.488e-7 | True | 1.812e-5 | 2.180e-2
43 | 6.66e-4 | 0.000 | 0.632 | 2.79e-5 | 0.65968 | 0.99997 | 6.848e-6 | True | 0.000e+00 | 3.130e-3
44 | 3.31e-3 | 0.000 | 0.146 | 0.0 | 0.90447 | 0.99970 | 6.618e-6 | True | 0.000e+00 | 2.184e-2
45 | 7.84e-4 | 0.016 | 0.124 | 0.0 | 0.95065 | 0.99685 | 2.141e-2 | False | 0.000e+00 | 4.024e-5
46 | 6.16e-3 | 0.016 | 0.623 | 0.0 | 0.98823 | 0.98744 | 1.616e-6 | False | 0.000e+00 | 1.544e-2
47 | 3.26e-4 | 0.000 | 0.738 | 1.61e-4 | 0.78425 | 0.99998 | 3.468e-3 | False | 0.000e+00 | 4.709e-2
48 | 4.12e-3 | 0.001 | 0.205 | 0.0 | 0.99561 | 0.75382 | 2.390e-6 | True | 0.000e+00 | 3.631e-2
49 | 6.26e-1 | 0.000 | 0.932 | 2.52e-3 | 0.99401 | 0.83521 | 2.431e+00 | True | 0.000e+00 | 1.048e-2

Top 50 hyperparameters found using the NAdamW search space. We find diverse learning rates, with very little warmup used. We additionally find most good performing optimizers make use of AdamW style weight decay. Finally, matching insight from (Choi et al., 2019), we find large values of ε.

H DESCRIPTION OF TASKS IN TASK SUITE

In this section we detail the task distribution used throughout this work. In addition to this text, a Tensorflow (Abadi et al., 2016) implementation is also released at github.com/google-research/google-research/tree/master/task_set.

H.1 SAMPLED TASKS

H.1.1 DEFAULT SAMPLED COMPONENTS

As many of the sampled tasks are neural networks, we define common sampling routines used by all the sampled tasks; a sketch of these routines follows the listings below.

Activation functions: We define a distribution over activation functions, sampled according to the following listing of names and weights. These are a mix of standard functions (relu, tanh) and less standard ones (cos).
• relu: 6
• tanh: 3
• cos: 1
• elu: 1
• sigmoid: 1
• swish (Ramachandran et al., 2017): 1
• leaky relu (with α = 0.4): 1
• leaky relu (with α = 0.2): 1
• leaky relu (with α = 0.1): 1

Initializations: We sample initializers according to a weighted distribution. Each initialization sample also optionally samples hyperparameters (e.g. for random normal initializers we sample the standard deviation of the underlying distribution).
• he normal (He et al., 2015): 2
• he uniform (He et al., 2015): 2
• glorot normal (Glorot & Bengio, 2010): 2
• glorot uniform (Glorot & Bengio, 2010): 2
• orthogonal: 1. We sample the “gain”, or multiplier of the orthogonal matrix, logarithmically between [0.1, 10].
• random uniform: 1.0. This is defined between [−s, s] where s is sampled logarithmically between [0.1, 10].
• random normal: 1.0. The std is sampled logarithmically between (0.1, 10).
• truncated normal: 1.0. The std is sampled logarithmically between (0.1, 10).
• variance scaling: 1.0. The scale is sampled logarithmically between (0.1, 10).

RNN Cores: We define a distribution over different types of RNN cores used by the sequential tasks. With equal probability we sample either a vanilla RNN (Elman, 1990), GRU (Chung et al., 2014), or LSTM (Hochreiter & Schmidhuber, 1997). For each cell we either sample 1 shared initialization method or sample a different initialization method per parameter vector, with a 4:1 ratio. We sample the core hidden dimension logarithmically between [32, 128].
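As referenced above, a minimal sketch of these weighted-choice and log-uniform sampling routines (names are ours, assuming NumPy):

    import numpy as np

    # Weighted distribution over activation functions, as listed above.
    ACTIVATIONS = {"relu": 6, "tanh": 3, "cos": 1, "elu": 1, "sigmoid": 1,
                   "swish": 1, "leaky_relu_0.4": 1, "leaky_relu_0.2": 1,
                   "leaky_relu_0.1": 1}

    def sample_weighted(table, rng):
        # Sample a key with probability proportional to its listed weight.
        names = list(table)
        weights = np.array([table[n] for n in names], dtype=float)
        return names[rng.choice(len(names), p=weights / weights.sum())]

    def sample_log_uniform(lo, hi, rng):
        # Sample uniformly in log space, e.g. the RNN hidden size in [32, 128].
        return np.exp(rng.uniform(np.log(lo), np.log(hi)))

    rng = np.random.default_rng(0)
    activation = sample_weighted(ACTIVATIONS, rng)
    hidden_dim = int(sample_log_uniform(32, 128, rng))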
H.1.2 SAMPLED DATASETS

Image Datasets: We sample uniformly from the following image datasets. Each dataset additionally has sampled parameters. For all datasets we make use of four data splits: train, valid-inner, valid-outer, and test. Train is used to train models. Valid-inner is used while training models to allow for modification of the training procedure (e.g. if validation loss doesn’t increase, drop the learning rate). Valid-outer is used to select meta-parameters. Test should not be used during meta-training. For all datasets, we sample a switch with low probability (10% of the time) to only use training data and thus not test generalization. This ensures that our learned optimizers are capable of optimizing a loss as opposed to a mix of optimizing and generalizing.

Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (LeCun, 1998).

Fashion Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (Xiao et al., 2017).

Cifar10: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

Cifar100: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

{food101_32x32, coil100_32x32, deep_weeds_32x32, sun397_32x32}: These datasets take the original set of images and resize them to 32x32 using OpenCV’s (Bradski, 2000) cubic interpolation. We ignore aspect ratio for this resize. Batch size is sampled logarithmically between [8, 256] (Bossard et al., 2014; Nene et al., 1996; Olsen et al., 2019; Xiao et al., 2010).

Imagenet32x32 / Imagenet16x16: The ImageNet 32x32 and 16x16 datasets as created by Chrabaszcz et al. (2017). Batch size is sampled logarithmically between [8, 256].

H.1.3 TEXT CLASSIFICATION

IMDB sentiment classification: We use text from the IMDB movie reviews dataset (Maas et al., 2011) and tokenize into subwords using a vocab size of 8k (Sennrich et al., 2015). We then take a length-s random slice from each example, where s is sampled logarithmically between [8, 64]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. We sample the number of training examples logarithmically between [1000, 55000], and with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization.

H.1.4 CHARACTER AND WORD LANGUAGE MODELING

For the character and word language modeling datasets we make use of the following data sources: imdb movie reviews (Maas et al., 2011), amazon product reviews (ama) using the Books, Camera, Home, and Video subsets each as separate datasets, LM1B (Chelba et al., 2013), and Wikipedia (Foundation) taken from the 20190301 dump using the zh, ru, ja, hab, and en language codes. We split each article by new lines and only keep resulting examples that contain more than 5 characters. For infrastructure reasons, we only use a million articles from each language and only 200k examples to build the tokenizer.

Byte encoding: We take length-s random slices of each example, where s is sampled logarithmically between [10, 160]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. With probability 0.2 we restrict the number of training examples to a number sampled logarithmically between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization. A sketch of this slicing and batching pipeline is given below.
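A minimal sketch of the slicing and batching described above (function names and the toy byte-level examples are ours):

    import numpy as np

    def make_batch(examples, slice_len, batch_size, rng):
        # Draw batch_size examples, then take a random window of slice_len
        # bytes from each (padding short examples with zeros).
        batch = np.zeros((batch_size, slice_len), dtype=np.uint8)
        for i, idx in enumerate(rng.integers(len(examples), size=batch_size)):
            ex = np.frombuffer(examples[idx].encode("utf-8"), dtype=np.uint8)
            if len(ex) > slice_len:
                start = rng.integers(len(ex) - slice_len + 1)
                ex = ex[start:start + slice_len]
            batch[i, :len(ex)] = ex
        return batch

    rng = np.random.default_rng(0)
    examples = ["the cat sat on the mat", "a longer movie review goes here..."]
    slice_len = int(np.exp(rng.uniform(np.log(10), np.log(160))))
    batch_size = int(np.exp(rng.uniform(np.log(8), np.log(512))))
    batch = make_batch(examples, slice_len, batch_size, rng)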
Subword encoding: We encode the text as subwords with a vocab size of 8k (Sennrich et al., 2015). We then take length-s random slices of each example, where s is sampled logarithmically between [10, 256]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. With probability 0.2 we restrict the number of training examples to a number sampled logarithmically between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization.

H.2 SAMPLED TASKS

H.2.1 MLP

This task family consists of a multi layer perceptron trained on flattened image data. The number of layers is sampled uniformly from [1, 6]. Layer hidden unit sizes are sampled logarithmically between [16, 128], with a different number of hidden units per layer. One activation function is chosen for the whole network and is chosen as described in H.1.1. One shared initializer strategy is also sampled. The image dataset used is also sampled. Two sampled configurations are shown below.

    {
      "layer_sizes": [
        71
      ],
      "activation": "leaky_relu2",
      "w_init": [
        "he_normal",
        null
      ],
      "dataset": [
        "sun397_32x32",
        {
          "bs": 32,
          "just_train": false,
          "num_train": null
        },
        {
          "crop_amount": 0,
          "flip_left_right": false,
          "flip_
1. What is the main contribution of the paper regarding the learned optimizers?
2. What are the strengths and weaknesses of the proposed dataset of tasks for evaluating learned optimizers?
3. Do you have any concerns about the choices made in creating the dataset and how they impact future research?
4. How does the reviewer assess the focus of the paper and its relevance to the community?
5. Are there any typos or minor errors in the review that should be addressed?
Review
Review Summary: This paper proposes a dataset of tasks to help evaluate learned optimizers. The learned optimizers are evaluated by the loss that they achieve on held-out tasks after 10k steps. Using this dataset, the main strategy considered is to use search spaces that parametrize optimizers and learn a list of hyperparameter configurations for the optimizer that are tried sequentially. The authors show that the learned hyperparameter configuration list achieves better performance than (constrained) random search on multiple optimizer search spaces. Finally, they show that the learned hyperparameter lists transfer well to realistic problems such as training a ResNet-50 model on ImageNet and training a transformer architecture on LM1B, outperforming reasonable baselines.

Pros:
- Creating a dataset of tasks for learning optimizers is an interesting and useful goal. While there have been some sets of tasks used in the learned optimizers literature, there isn't a standard dataset for this task.
- A large number of tasks is proposed.
- The hyperparameter list trained compares favorably with random search across the other tasks.
- The experiments are interesting overall and show some insights about the performance of the learned list with an increasing number of tasks.

Cons:
- While the goal of finding a good dataset of tasks for learned optimizers is a worthy one, I find that the paper does not adequately discuss and explore the choices that went into creating this dataset. Namely, how were tasks picked? What code implementations were used? What are some limitations of the current dataset that could be addressed in future research? How are the tasks represented? How can a researcher use this dataset of tasks to explore new algorithms? Most of the value of proposing a new benchmark or dataset is explaining the choices that went into creating it and packaging it well so that other researchers can use it easily. I think that this could be better realized in the paper. For example, is future research based on this dataset meant to be done offline (on the optimization curves collected) or online (by running additional configurations for these search spaces)? How are new methods to be benchmarked through this dataset? How are existing datasets for this missing important aspects? This is not adequately defined. Given this, I think that the paper could do a better job setting the stage for future research building on this dataset.
- The focus on the dataset of tasks is poorly realized in the paper, which is devoted in great part to how an ordered list of 1000 hyperparameter configurations for a learned optimizer performs well in comparison with other random search baselines. While this was well executed overall, it is not initially the focus of the paper. I believe that more value for the community could be derived by focusing on the creation of the dataset rather than on the introduction of a new heuristic.

Typo: ". on We take the TPU..." ==> ". We take the TPU..."
ICLR
Title: TaskSet: A Dataset of Optimization Tasks

Abstract: We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings, including ImageNet classification with ResNet50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.¹

1 INTRODUCTION

As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms has only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a) or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks.

1.1 TASKSET: A SET OF TASKS

We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers or development of new learned learning algorithms. We call this suite of tasks TaskSet. Much in the same way that learned features in computer vision outpaced hand designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data driven approaches to discover optimization algorithms will replace their hand designed counterparts, resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned optimizer research. In this setting, a single “example” is an entire training procedure for a task defined by data, loss function, and architecture.
Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute as almost all tasks can be trained for 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1.

¹redacted url

1.2 AMORTIZING HYPERPARAMETER SEARCH

Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). Unfortunately, most modern machine learning models also require extensive hyperparameter tuning. Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019). The most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads. We choose the elements in this list by leveraging the diversity of tasks in TaskSet, meta-learning a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks. Although learning the list of hyperparameters is costly (in total we train ∼29 million models consisting of over 4,000 distinct hyperparameter configurations), our final published list is now available as a good starting guess for new tasks. Furthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release them as part of this work. We additionally release code in Tensorflow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list and can be easily applied to any model.

2 TASKSET: A SET OF TASKS

How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and that will be representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural network based tasks.
Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today); as a result, most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased “task diversity” by including many different kinds of training algorithms, architectures, and datasets – inspired by past work in reinforcement learning which has demonstrated that large numbers of problems and increased diversity around some domain of interest are useful for both training and generalization (Heess et al., 2017; Tobin et al., 2017; Cobbe et al., 2018; OpenAI et al., 2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997).

Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization. Specifically we define a task as a set of 4 functions (a minimal sketch of this interface is given at the end of this section):
• Initialization: () → parameter initial values
• Data generator: data split (e.g. train / valid / test) → batch of data
• Forward pass: (batch of data, params) → loss
• Gradient function: (input data, params) → gradients (d loss / d params)

A task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned. We augment a set of “fixed” tasks, which have been designed by hand, with “sampled” tasks that are randomly generated task instances.

2.1 SAMPLED FAMILIES OF TASKS

Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic), and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks.
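As referenced above, here is a minimal sketch of the four-function task interface on a toy quadratic task (names are ours, not the released API):

    import numpy as np

    class QuadraticTask:
        """Toy task exposing the four functions: init, data, loss, grads."""
        def __init__(self, dim=10, seed=0):
            self._rng = np.random.default_rng(seed)
            self._a = self._rng.normal(size=(dim, dim))
            self._dim = dim

        def initialization(self):
            # () -> parameter initial values
            return self._rng.normal(size=self._dim)

        def data_generator(self, split):
            # data split -> batch of data (a random target vector here)
            del split  # this toy task ignores the split
            return self._rng.normal(size=self._dim)

        def forward(self, batch, params):
            # (batch of data, params) -> loss
            return 0.5 * np.sum((self._a @ params - batch) ** 2)

        def gradient(self, batch, params):
            # (input data, params) -> d loss / d params
            return self._a.T @ (self._a @ params - batch)

    # Coupled with an optimizer, this is all that is needed for training:
    task = QuadraticTask()
    params = task.initialization()
    for t in range(100):
        batch = task.data_generator("train")
        params -= 0.01 * task.gradient(batch, params)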
These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3.

2.3 AGGREGATE STATISTICS OF TASKSET

In Figure 1a we show histograms of compute times for all problems and find almost all problems train under an hour (see Appendix C for per task family histograms). In Figure 1c we plot a histogram of the number of parameters per task. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well, while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A.

3 AMORTIZED HYPERPARAMETER SEARCH

As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists. This idea of learning lists of hyperparameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g. learning rate). While sometimes practitioners use a single optimizer – e.g. Adam (Kingma & Ba, 2014) with default hyperparameters – most practitioners will often run multiple optimizers and use a validation set to select the best performer.

3.1 OPTIMIZER FAMILIES

We define different parameterizations of hand designed optimizers as an optimizer family. The optimizer families we consider consist of:
• Adam1p: One hyperparameter, the fixed learning rate α
• Adam4p: Four Adam hyperparameters, α, β1, β2, and ε
• Adam6p: Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays
• Adam8p: The hyperparameters in Adam6p plus two additional hyperparameters for ℓ1 and ℓ2 regularization terms
• NAdamW: A 10 hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay and weight decay

For the full update equations see Appendix D.1 for Adam and D.2 for NAdamW. We chose Adam based on its use in existing work, and NAdam based on performance shown in (Choi et al., 2019).

3.2 LEARNED HYPERPARAMETER LISTS

Traditionally researchers tune hyperparameters on a per model basis. While this often results in performance gains, it comes at the cost of immense compute, and researchers are almost never able to expend enough compute to saturate model performance (Shallue et al., 2018). As an alternative to per-problem tuning, we propose instead tuning the search strategy itself on a dataset of tasks and transferring the knowledge gained to new tasks of interest. This idea is already implicitly used by humans – e.g. we don’t start a hyperparameter search with a learning rate of 10⁶ – we use values that the community has found useful. This dataset-based tuning has a number of desirable properties. First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline. Second, we are less restricted by the number of optimizer parameters we search over or by needing to define reasonable search spaces. For example, if there are redundant regions of search space, our learned optimizer will be less likely to sample them repeatedly, unlike random search.
If there is a region of hyperparameter space that performs poorly on all problems, the learned search strategy will avoid it. In this work we parameterize the learned search strategy as an ordered list of optimizers to try (i.e. a list of hyperparameter configurations). Given a fixed number of task evaluations we would like to achieve the best possible performance on all tasks in the training set of tasks. For a length k list of optimizers we define our loss as:

J(\theta_{1,\dots,k}) = \sum_{\tau \in \text{tasks}} \left[ \min_{i \in 1..k} f(\tau, \theta_i) \right],   (1)

where θi are the optimizer hyperparameters for element i in the list, and f is an appropriately normalized loss computed after training task τ. We seek to find an optimal list of optimizers as (similar to (Wistuba et al., 2015b)):

\theta^*_{1,\dots,k} = \arg\min_{\theta_{1,\dots,k}} J(\theta_{1,\dots,k}).   (2)

This is meant to serve as an example task, illustrative of the sort of research enabled by TaskSet. More advanced hyperparameter search strategies would no doubt yield even more performant results.

3.3 SCORING AN OPTIMIZER BY AVERAGING OVER TASKS

To score a task, we initialize the parameters of the task and run 10,000 iterations of an optimizer. We monitor loss on each data split (train, validation, test) every 200 steps using an average over 50 mini-batches per evaluation. For all data presented in this paper we also compute averages over 5 random task parameter initializations. A side effect of the diverse task dataset is that losses span multiple orders of magnitude, making direct aggregation of performance problematic. To remedy this we normalize the loss values for all tasks linearly between 0 and 1, where 1 is the validation loss at initialization and zero is the lowest validation loss achieved by any tested optimizer. Loss values greater than the loss at initialization are clipped to 1. To collapse an entire normalized training curve into a scalar cost, we compute the mean normalized loss over the 10,000 iterations. We find empirically that this choice is similar to taking the minimum (Appendix B.5). We leave exploring alternative methods such as performance profiles (Dolan & Moré, 2002) and Nash averaging (Balduzzi et al., 2018) for future work.

3.4 GREEDY LEARNING FROM RANDOM SEARCH

Optimizing Eq. 2 is combinatorially expensive. To tractably solve this optimization problem, we introduce two approximations (Wistuba et al., 2015b). First, we shift the unconstrained search over the full space of optimizers to a search over a finite set of optimizers, Θ. This finite set can be computed ahead of time and decouples the expensive procedure of training each task with an optimizer from training the learned search space. Separating data and training in this way has been done for both hyperparameter search (Eggensperger et al., 2015) and neural architecture search (Klein & Hutter, 2019; Ying et al., 2019). In total we trained 1,000 optimizer configurations for each of Adam1p, Adam4p, Adam6p, Adam8p, and NAdamW on all 1,162 tasks with 5 random seeds per pair. Second, we use a greedy heuristic to approximate the combinatorial search over sets of k optimizers. For a single optimizer trial, k = 1, we select the best performing optimizer on average across all training tasks. We then continue to select optimizer parameters such that the per-task minimum over the selected optimizers, aggregated over all tasks, is minimized. This shifts the complexity from exponential in k to linear. Finding a length k set of optimizers can thus be efficiently computed as follows:

\theta^*_1 = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} f(\tau, \theta) \right]   (3)

\theta^*_k = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} \min\left(b, f(\tau, \theta)\right) \right] \quad \text{where} \quad b = \min_{i \in 1..(k-1)} f(\tau, \theta^*_i).   (4)
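A minimal sketch of this greedy selection, assuming the normalized losses f(τ, θ) have been precomputed into a matrix (names are ours):

    import numpy as np

    def learn_hparam_list(losses, k):
        """Greedily build a length-k hyperparameter list (Eq. 3 and 4).

        losses: array of shape [num_optimizers, num_tasks] holding the
                normalized loss f(tau, theta) for every (theta, tau) pair.
        """
        chosen = []
        # b holds, per task, the best loss achieved by the optimizers chosen
        # so far; it starts at infinity so the first pick reduces to Eq. 3.
        b = np.full(losses.shape[1], np.inf)
        for _ in range(k):
            # For each candidate, the aggregate loss if it were added next.
            scores = np.minimum(losses, b).sum(axis=1)
            idx = int(np.argmin(scores))
            chosen.append(idx)
            b = np.minimum(b, losses[idx])
        return chosen

    # Example: 1000 random optimizer configurations on 581 training tasks.
    rng = np.random.default_rng(0)
    losses = rng.uniform(size=(1000, 581))
    hparam_list = learn_hparam_list(losses, k=10)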
We note that the first argument of the outer min, b, can be computed once per set of hyperparameters as it does not depend on θ. Finally, as our tasks are stochastic, we order optimizers based on validation loss and report test loss (Van Hasselt et al., 2016).² This training strategy requires an original search space from which to collect data and build Θ. The search space we use is described in Appendix E.2. While large, we find that the optimal parameters for each task end up covering almost the entire space. At some point, no improvement can be obtained on any of the tasks in the dataset. At this point, we simply randomly order the remaining optimizers, though we expect more sophisticated methods could be employed.

²This technically means that increasing the number of optimizers could potentially decrease performance, but we find this rarely happens in practice.

4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS

With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration, we hope to give a flavor of the types of research possible with TaskSet. Our main tool to show performance are figures that sweep the number of optimizer configurations on the x-axis, and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1).

4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH

To demonstrate the impact of learning a search space, we take the 1,162 tasks and split them into even train and test sets. We then learn a search strategy using optimizers from the Adam8p family, following Eq. 4, on the train tasks. Results are in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyperparameters (Rand: Adam4p), as well as the 8 dimensional Adam search space (Rand: Adam8p). To better get a sense of performance, we show two additional “Refined” baselines which involve random sampling from a better search space. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in Appendix E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is that online hyperparameter optimization methods such as Bayesian optimization cannot be run without spending additional compute. Future work will explore offline Bayesian methods.

4.2 MORE TASKS LEAD TO BETTER GENERALIZATION

We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different sizes and train hyperparameter lists using Eq. 4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Fig. 3. We find that a large number of tasks (more than 100) are required to achieve near-optimal test performance.
This is surprising to us given how simple our learned search strategy is (simply a list of hyperparameters), but not wholly so given past work studying generalization in RL (Cobbe et al., 2018).

4.3 GENERALIZATION TO DIFFERENT TYPES OF PROBLEM

For learned algorithms to be generally useful, some amount of generalization to unseen task families is required. To test this, we split our data into disjoint task types. We perform two splits: testing on RNN tasks and training on all others, and testing on autoencoder tasks and training on all others. As a best case baseline we additionally train search spaces on the test task families directly. We find an order of magnitude better sample efficiency than random search for both cases, and find our learned search space is close in performance to search spaces trained on just the testing tasks (Fig. 3).

5 EXPERIMENTS: REALISTIC PROBLEMS

In §4.3 and §B.1 we explored generalization of learned hyperparameter lists to held out tasks within the TaskSet dataset. While useful for analysis, these tasks are still far from the workloads commonly employed to solve real problems. In this section, we explore the performance of our learned search space on a number of state of the art models. These models drastically differ from the training set of tasks in parameter count and compute cost. We see these experiments as evidence that the tasks presented in TaskSet capture enough of the structure of “realistic” problems that TaskSet can be used to improve larger scale workloads. For all experiments in this section we take the optimizer ordering obtained using the NAdamW optimizer family on all TaskSet tasks, and then apply the resulting search space to the target problem. The final ordered list of hyperparameters used is in Appendix G. We show results for ResNet50 on ImageNet, and Transformers on LM1B. Additional results with reinforcement learning using PPO are in Appendix B.2.

First we explore ImageNet classification using a ResNet50. We take the TPU implementation with default settings from the official Tensorflow models repository (Tensorflow, 2019), and swap out different optimizers. We show accuracy computed over the course of training as well as best performance for a given hyperparameter budget in Figure 4. We find that the learned search space vastly outperforms learning rate tuned Adam.

Next we explore language modeling on LM1B with a Transformer. We take the transformer (Vaswani et al., 2017) example implemented in Jax (Bradbury et al., 2018) with Flax (Flax Developers, 2020). We train using a 2x2 TPU V2 configuration for 100k iterations. Once again we take all other hyperparameters as-is and simply swap the optimizer implementation. We find the learned hyperparameter list dramatically outperforms the default optimizer setting and the fixed learning rate baseline. Nevertheless, we emphasize that our method does not require any knowledge of the underlying problem to achieve faster results. See Appendix B.3 for this same transformer with a budget of 20k iterations.

6 RELATED WORK

The idea of sets of tasks has been explored throughout machine learning. The majority of these suites are for use in evaluation, whereas our suite is targeted for meta-learning. The closest family of optimization tasks for evaluation to those presented here is DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Our task suite focuses on smaller problems and contains 50x more tasks.
Outside of evaluation, task suites in reinforcement learning such as Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018) focus on training algorithms that work across a variety of settings. The creation of TaskSet was motivated by the goal of learning learning algorithms, or meta-learning (Schmidhuber, 1987; 1995; Hochreiter et al., 2001), and in particular learned optimizers (Bengio et al., 1990; Andrychowicz et al., 2016; Bello et al., 2017; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). This use case is explored with this dataset in (Metz et al., 2020). In this work we do not use this task suite to train learned optimizers, but instead focus on learning a hyperparameter search strategy. Tuning hyperparameters by leveraging multiple tasks has been explored within the contexts of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as meta-learning (Reif et al., 2012; Gomes et al., 2012; Feurer et al., 2014; Wistuba et al., 2015b;a; Chen et al., 2017; Pfisterer et al., 2018). See Appendix F.1 for a full discussion of sets of tasks in machine learning, Appendix F.2 for more on optimization in machine learning, and Appendix F.3 for a discussion of existing hyperparameter search methods.

7 DISCUSSION

Learning optimization algorithms represents a promising direction for accelerating machine learning research. For the resulting algorithms to become useful tools, however, we must further understand the relationships between training tasks, meta-optimization, and both iid and out of distribution generalization. This work takes steps towards this goal by introducing a significantly larger set of optimization tasks than ever previously considered. As an example use-case, we provide a thorough analysis of how TaskSet enables meta-optimization of simple, but performant, hyperparameter lists. Despite this approach’s simplicity, the training of learned learning algorithms is computationally expensive. We hope to explore alternative parameterizations which will increase efficiency by, e.g., leveraging previous evaluations or partial model training (Swersky et al., 2014; Li et al., 2016). We are releasing the optimal hyperparameter list we have found as a drop-in replacement optimizer in a variety of deep learning frameworks (Tensorflow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018)) in the hope that the research community finds it useful. We believe this represents a new set of reasonable optimizer defaults for new problems. Finally, we hope TaskSet encourages more standardized research on general purpose optimizers.

A TASKSET VISUALIZATION

For a qualitative view, we constructed a feature space consisting of performance measurements for each task+optimizer pair (see §3.3). This forms a dense matrix of size number of tasks by number of optimizers. We then perform T-SNE (Maaten & Hinton, 2008; Van Der Maaten, 2014) to reduce the dimensionality to two and plot the results, coloring by task family (Figure S1). Clusters in this space correspond to tasks that work well with similar optimizers. We find diversity of tasks, with clusters occurring around similar families of tasks.

A.1 TSNE OF TASKSET

B ADDITIONAL EXPERIMENTS

B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS

Training learned algorithms on large models is often infeasible for computational reasons.
B ADDITIONAL EXPERIMENTS

B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS

Training learned algorithms on large models is often infeasible for computational reasons. As such, one form of generalization needed when building learned algorithms is the ability to transfer to different sized models. As shown in Figure 1, the tasks in this suite contain a wide range of parameter counts, and can thus be used to test this kind of generalization. We split the tasks into 8 groups – one group per order of magnitude in parameter count – and train hyperparameter lists on one range and test on the rest. In Figure S2 we plot the ratio of the training loss to the test loss on the target parameter range. We find peak performance around the model sizes used for training, and smooth falloff as the testing tasks become more dissimilar as measured by parameter count. We note that our problems are not evenly distributed across these groups, thus each group will contain a different percentage of the underlying tasks. While this potentially confounds these results, we believe a similar bias occurs in realistic workloads as well.

Figure S2: We show learned search space generalization, measured as a ratio of the loss achieved in training and testing, versus the number of task parameters used during search space training. Generalization falls off as one moves further away from the training regime. In black we show that a uniform mixture of the 7 parameter buckets does not fall off.

B.2 REINFORCEMENT LEARNING WITH PPO

We test the learned hyperparameter lists on two continuous control reinforcement learning environments, half cheetah and humanoid, from Gym’s Mujoco environments (Todorov et al., 2012; Brockman et al., 2016). We use TF-Agents (Guadarrama et al., 2018) with all non-optimizer hyperparameters set via searching a mixture of environments. In Figure S3 we find our learned hyperparameter list achieves comparable to slightly worse performance, and does not outperform learning rate tuning of Adam in either efficiency or final performance. To diagnose this behavior we ran all 1k optimizers for both problems and found the learned hyperparameter list performs comparably to random search in the underlying space. To probe further, we computed the Spearman correlation of the performance of each optimizer on these problems against its performance on the rest of the tasks in the task suite. We found considerably worse correlations than were present for tasks within TaskSet. This is not surprising, as TaskSet contains no reinforcement learning problems.

Figure S3: We find our learned hyperparameter list performs about as well as random search on the NAdam search space, and worse than random search on the learning rate tuned Adam search space.
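The rank-correlation diagnostic above can be computed as follows; `rl_scores` and `taskset_scores` are hypothetical per-optimizer performance vectors standing in for the real measurements.

```python
# Sketch of the diagnostic: rank-correlate each optimizer's score on the RL
# problems against its aggregate score on TaskSet tasks.
import numpy as np
from scipy.stats import spearmanr

rl_scores = np.random.rand(1000)       # placeholder: score per optimizer on an RL task
taskset_scores = np.random.rand(1000)  # placeholder: mean score on TaskSet tasks
rho, pval = spearmanr(rl_scores, taskset_scores)
print(f"Spearman rho: {rho:.3f} (p = {pval:.3g})")
```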
B.3 LM1B TARGETING 20K ITERATIONS

We show a transformer on LM1B similar to that shown in §5, except run for only 20k iterations, a fifth of the steps. Results are in Figure S4. We find the learned hyperparameter lists are much more efficient than either of the baselines.

Figure S4: We find our learned hyperparameter lists outperform learning rate tuned Adam, with both a constant and a fixed learning rate schedule, on a 53M parameter Transformer trained on LM1B. Left: Learning curves for the best of the optimizers. Right: Number of optimizers tried vs best test loss.

B.4 PROBING SHORT HORIZON

Often the goal when training a learned optimizer is to minimize performance after training some number of iterations. This is extremely computationally expensive and in practice approximations must be used. One common family of approximations is short horizon based methods. These methods rely upon somehow truncating training so that updates can be made to the learned optimizer more frequently. This is commonly done via truncated backprop (Werbos, 1990; Wichrowska et al., 2017; Metz et al., 2019a; Wu et al., 2016), or proxy objectives such as only training for a handful of epochs (Zoph & Le, 2017). While this short horizon proxy is certainly not optimal (Wu et al., 2016), the performance gains are immense and in practice this is what makes meta-training optimizers feasible. In our task suite, we test this short horizon learning by training hyperparameter lists using only some finite number of training iterations per task and testing in the full training regime (10k steps). Results are in Figure S5. We find that even when learning the hyperparameter list on a mere 200 steps, our hyperparameter list continues to generalize and outperform random search on Adam8p. This is promising as it suggests that training the learned hyperparameter list can be done with 1/50th of the total compute. This result is surprising to us as prior work indicates the effect of this bias can be severe (Wu et al., 2016; Metz et al., 2019a). We suspect it is due to the simplicity of the learned parameter space, but leave a thorough analysis of this for future work.

Figure S5: Hyperparameter lists trained on short horizon data generalize remarkably well. On the y-axis we show performance evaluated on the full 10k training iterations for a given number of optimizers tried (x-axis). In color we show the number of steps used when evaluating task optimizer performance when training the hyperparameter list.
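A sketch of the truncated scoring used in this experiment follows; `curves` is a hypothetical mapping from task to its normalized loss curve, recorded every 200 steps as in §3.3.

```python
# Sketch of short-horizon scoring: score an optimizer using only the first
# `horizon_steps` of each 10k-step training curve.
import numpy as np

def short_horizon_score(curves, horizon_steps, eval_every=200):
    n = max(horizon_steps // eval_every, 1)
    # Mean normalized loss over the truncated prefix, averaged over tasks.
    return float(np.mean([np.mean(curve[:n]) for curve in curves.values()]))
```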
B.5 CHOICE OF NORMALIZATION FUNCTION

There is no easy way to define a single metric for optimizer performance over a mixture of tasks. This paper picks a single normalization strategy based on the minimum validation loss and the validation loss at initialization, presented in §3.3. In this section we show the impact of choosing a different normalization and/or aggregation technique. First, instead of computing the mean over learning curves as described in §3.3, we compute a min. Second, instead of rescaling based on init and min, we linearly rescale based on the 95th percentile of validation loss and the min validation loss achieved at the end of training each task. In Figure S6 we show learned hyperparameter list training and testing performance as a function of the number of optimizers tried when training with different normalization techniques. We find using the min instead of the mean results in a negligible change, while using the percentile loss more significantly hurts performance. This difference can be explained by Figure S6b and S6c, where we show correlations between the two losses. We find the percentile loss has a much weaker correlation to the default normalizer. We suspect this difference is due to the fact that many optimizers diverge on tasks. By using the 95th percentile we upweight optimizers that do not diverge.

Figure S6: Left: Aggregate performance (y-axis) vs number of optimizers tried (x-axis) for different normalization and aggregation techniques. In each curve we train the hyperparameter list with a different normalization and aggregation strategy and test with the default normalization and aggregation technique described in §3.3. We find some strategies are near identical in performance (e.g. min norm), while others perform significantly worse – e.g. last quantile norm. In both cases, however, we still perform better than the underlying random search. Center: Correlation between the default normalization and the quantile based normalization strategy. Correlation is quite low – 0.193 Pearson’s correlation. Right: Correlation between the default normalization using a mean to aggregate validation loss over the course of training vs using a min over the course of training. We find a much higher correlation of 0.911.

B.6 TASK FAMILIES ARE DIVERSE

To show the effects of diversity we train and test hyperparameter lists on each pair of task families. We additionally normalize each column from 0-1 to account for different mean losses across tasks. Results are in Figure S7. While we do find some similarity between tasks – e.g. between MAF and NVP models – no two task families exhibit the same performance characteristics (no duplicate columns), suggesting that each task family provides a different contribution to the space of all tasks. We also find that training on certain “far away” tasks, e.g. the quadratic family, yields poor performance on most other task families.

Figure S7: Learning hyperparameter lists using one task family and testing on the remainder of task families. We normalize each column from 0-1 to account for different mean losses across tasks. Lower loss means better performance. We find some groups of similar tasks, but in general no two task families behave identically.

B.7 EFFECTS OF THE META-TRAINING SEARCH SPACE SIZE

Our offline learning technique described in §3.4 hinges on a finite set of optimizers collected via random search. This set is denoted by Θ in Eq. 4. In this section we probe the impact of its size. We take different sized subsets of the thousand Adam8p optimizer configurations and train and test search spaces on different iid splits of tasks. We then plot performance as a function of this number of optimizers in Figure S9. Moving left in this figure corresponds to increasing the compute needed to train the learned hyperparameter list. We find performance continues to improve as the size of Θ grows. Given the high dimension of our meta-parameters, 8, this is not a surprise, as the number of evaluations needed to explore the space grows exponentially. We find that the full thousand trials are needed to outperform learning rate tuned Adam when only given a single optimizer evaluation. We find around 100 optimizers (the size of Θ) are needed in the case of 10 optimizer trials (k = 10). Overall this suggests that random search might not be the most efficient learning method for creating hyperparameter lists. This is especially true as we work with optimizer families that have more hyperparameters.
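For concreteness, the following sketches both the default normalization of §3.3 and the greedy selection of Eq. 4 over a precomputed set Θ. Array names and shapes are assumptions for illustration, not the released implementation.

```python
# Minimal sketch of the default normalization (Section 3.3) and the greedy
# list construction of Eq. 4 over a precomputed array `raw_curves` of shape
# (num_tasks, num_optimizers, num_evals). `init_loss` and `best_loss` are
# per-task scalars; all names are hypothetical.
import numpy as np

def normalize(raw_curves, init_loss, best_loss):
    # Linearly rescale so 1 is the validation loss at initialization and 0 is
    # the lowest observed loss; clip above init loss, then mean over the curve.
    scaled = (raw_curves - best_loss[:, None, None]) / (
        init_loss[:, None, None] - best_loss[:, None, None])
    return np.clip(scaled, 0.0, 1.0).mean(axis=-1)  # (num_tasks, num_optimizers)

def greedy_list(scores, k):
    # scores: (num_tasks, num_optimizers) normalized losses, lower is better.
    chosen, best_so_far = [], np.full(scores.shape[0], np.inf)
    for _ in range(k):
        # Pick the optimizer that, combined with the list so far, minimizes the
        # per-task minimum loss averaged over tasks (the greedy step of Eq. 4).
        objective = np.minimum(scores, best_so_far[:, None]).mean(axis=0)
        idx = int(np.argmin(objective))
        chosen.append(idx)
        best_so_far = np.minimum(best_so_far, scores[:, idx])
    return chosen
```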
Other approximate learning methods should likely be explored, such as truncated backprop through time as used by the learned optimizer community (Metz et al., 2019a), and/or population based methods (Balduzzi et al., 2019).

Figure S8: Timings computed for each task family. We find most task families have a narrow distribution of compute times.

Figure S9: Performance continues to improve as more and more optimizers are used when training the search spaces. On the x-axis we show the number of optimizers (the size of Θ, the number of hyperparameter evaluations used in training the learned hyperparameter list), and on the y-axis we show the test loss achieved when applying the learned search space for a given fixed length, i.e. different values of k shown in color. We plot the median with the 25-75 percentile range shaded over different random optimizer samples and iid task splits. Stars (with horizontal guide lines) denote the best search for the corresponding number of hyperparameters for learning rate tuned Adam in half orders of magnitude.

C TASK TIMINGS

In Figure S8 we show box plots of training times for each problem. For each task we use the median step time recorded over a mixture of different physical devices, multiplied by 10k to estimate a full training time. Future versions of this dataset of tasks will contain more variation within each task family.

D OPTIMIZER FAMILY UPDATE EQUATIONS

D.1 ADAM8P UPDATE EQUATIONS

The 8 meta-parameters are: the learning rate $\alpha$; the first and second moment momentum, $\beta_1$ and $\beta_2$; the numerical stability term $\epsilon$; the $\ell_2$ and $\ell_1$ regularization strengths; and the learning rate schedule constants $\lambda_{\text{exp\_decay}}$ and $\lambda_{\text{linear\_decay}}$. For Adam6p, we set $\ell_1$ and $\ell_2$ to zero.

$\phi^{(0)} = \text{problem-specified random initialization}$ (S1)
$m^{(0)} = 0$ (S2)
$v^{(0)} = 0$ (S3)
$g^{(t)} = \frac{d}{d\phi^{(t)}}\left( f(x;\phi^{(t)}) + \ell_2\|\phi^{(t)}\|_2^2 + \ell_1\|\phi^{(t)}\|_1 \right)$ (S4)
$m^{(t)} = \beta_1 m^{(t-1)} + (1-\beta_1)\, g^{(t)}$ (S5)
$v^{(t)} = \beta_2 v^{(t-1)} + (1-\beta_2)\, (g^{(t)})^2$ (S6)
$\hat{m}^{(t)} = m^{(t)} / (1 - \beta_1^{t+1})$ (S7)
$\hat{v}^{(t)} = v^{(t)} / (1 - \beta_2^{t+1})$ (S8)
$u^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)$ (S9)
$s^{(t)}_{\text{linear}} = \max(1 - t\,\lambda_{\text{linear\_decay}},\, 0)$ (S10)
$s^{(t)}_{\text{exp}} = \exp(-t\,\lambda_{\text{exp\_decay}})$ (S11)
$\phi^{(t+1)} = \phi^{(t)} - \alpha\, s^{(t)}_{\text{linear}}\, s^{(t)}_{\text{exp}}\, u^{(t)}$ (S12)
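A NumPy sketch of one Adam8p step follows. The gradient g is assumed to already include the ℓ1/ℓ2 regularization terms of Eq. S4, and we assume the standard descent sign in the final update (Eq. S12).

```python
# Sketch of a single Adam8p parameter update (Eqs. S5-S12) in NumPy.
import numpy as np

def adam8p_step(phi, m, v, g, t, alpha, beta1, beta2, eps,
                lam_linear, lam_exp):
    m = beta1 * m + (1 - beta1) * g            # Eq. S5
    v = beta2 * v + (1 - beta2) * g**2         # Eq. S6
    m_hat = m / (1 - beta1**(t + 1))           # Eq. S7
    v_hat = v / (1 - beta2**(t + 1))           # Eq. S8
    u = m_hat / (np.sqrt(v_hat) + eps)         # Eq. S9
    s_lin = max(1.0 - t * lam_linear, 0.0)     # Eq. S10
    s_exp = np.exp(-t * lam_exp)               # Eq. S11
    phi = phi - alpha * s_lin * s_exp * u      # Eq. S12 (descent step assumed)
    return phi, m, v
```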
D.2 NADAMW UPDATE EQUATIONS

This optimizer family has 10 hyperparameters: the base learning rate $\alpha_{\text{base}}$; the first and second moment momentum, $\beta_1$ and $\beta_2$; the numerical stability term $\epsilon$; the $\ell_2$ regularization strength $\ell_{2\text{WD}}$; the AdamW-style weight decay $\ell_{2\text{AdamW}}$; and a boolean to switch between NAdam and Adam, $b_{\text{use nesterov}}$. The learning rate schedule is based on a single cycle cosine decay with a warmup, and is controlled by 3 additional parameters – $c_{\text{warmup}}$, $c_{\text{constant}}$, and $c_{\text{min learning rate mult}}$. The learning rate is defined by:

$u = \mathbb{1}[\, t < c_{\text{warmup}} T \,]$ (S13)
$\alpha_{\text{decay\&constant}} = (\alpha_{\text{base}} - c_{\text{min learning rate mult}})\left(0.5\cos\!\left(t\pi/(T - c_{\text{constant}})\right) + 0.5\right) + c_{\text{min learning rate mult}}$ (S14-S16)
$\alpha_{\text{warmup}} = \frac{t}{T\, c_{\text{warmup}}}$ (S17)
$\alpha = (1-u)\,\alpha_{\text{decay\&constant}} + u\,\alpha_{\text{warmup}}$ (S18)

The update equations of NAdamW are quite similar to those of Adam8p. For clarity we list the full update here:

$\phi^{(0)} = \text{problem-specified random initialization}$ (S19)
$m^{(0)} = 0$ (S20)
$v^{(0)} = 0$ (S21)
$g^{(t)} = \frac{d}{d\phi^{(t)}}\left( f(x;\phi^{(t)}) + \ell_{2\text{wd}}\|\phi^{(t)}\|_2^2 \right)$ (S22)
$m^{(t)} = \beta_1 m^{(t-1)} + (1-\beta_1)\, g^{(t)}$ (S23)
$v^{(t)} = \beta_2 v^{(t-1)} + (1-\beta_2)\, (g^{(t)})^2$ (S24)
$\hat{m}^{(t)} = m^{(t)} / (1 - \beta_1^{t+1})$ (S25)
$\hat{v}^{(t)} = v^{(t)} / (1 - \beta_2^{t+1})$ (S26)
$u^{(t)}_{\text{heavy ball}} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)$ (S27)
$u^{(t)}_{\text{nesterov}} = \left(\beta_1 \hat{m}^{(t)} + (1-\beta_1)\, g^{(t)}\right) / (\sqrt{\hat{v}^{(t)}} + \epsilon)$ (S28)
$\phi^{(t+1)} = \phi^{(t)} - (1-b_{\text{use nesterov}})\,\alpha\, u^{(t)}_{\text{heavy ball}} - b_{\text{use nesterov}}\,\alpha\, u^{(t)}_{\text{nesterov}} - \alpha\,\ell_{2\text{AdamW}}\,\phi^{(t)}$ (S29-S30)

E OPTIMIZER FAMILY SEARCH SPACES

E.1 SEARCH SPACE CONSIDERATIONS

The performance of random search critically depends on the boundaries of the original search space. Without prior knowledge about the problems, however, picking a good search space is difficult. To explore this we additionally choose search spaces after collecting and looking at the data. We then use these search spaces to simulate random search within the constraints via rejection sampling. To find these search spaces we take the best hyperparameters for each task and construct new hyperparameter ranges with min and max values determined by the smallest and largest values of each hyperparameter that were the best for some task. This removes regions of the search space not used by any task. We also tested bounds based on the 5th and 95th percentile of best performing hyperparameters computed over all tasks. In the case of min and max, we find the optimal hyperparameters cover nearly all of the existing space, whereas the percentile based search space reduces the volume of the search hypercube by more than 90%, leaving us with only ~100 hyperparameter configurations. In Figure 3 we find, in all cases, that learning the hyperparameter list is much more efficient.

E.2 ADAM8P, ADAM6P, ADAM4P, ADAMLR SEARCH SPACES

For Adam1p, Adam4p, Adam6p, and Adam8p we sample the learning rate logarithmically between 1e-8 and 10. We parameterize $\beta_1$ and $\beta_2$ as $1 - x$ and sample $x$ logarithmically between 1e-4 and 1, and between 1e-6 and 1, respectively. For learning rate schedules, we sample the linear decay logarithmically between 1e-7 and 1e-4, and the exponential decay logarithmically between 1e-6 and 1e-3. We sample both $\ell_1$ and $\ell_2$ logarithmically between 1e-8 and 1e1.

E.3 NADAMW SEARCH SPACE

This search space was chosen heuristically in an effort to generalize to new problems. We would like to emphasize that it was not tuned: we used our insight from the Adam based optimizer families and chose it directly, with no iterations. We expect more iterations would improve not only in-distribution performance, but also generalization performance. The initial learning rate, $\alpha_{\text{base}}$, is sampled logarithmically between 1e-5 and 1.0. $1 - \beta_1$ is sampled logarithmically between 1e-3 and 1.0, and $1 - \beta_2$ is sampled logarithmically between 1e-5 and 1.0. $\epsilon$ is sampled logarithmically between 1e-8 and 1e4. We use nesterov ($b_{\text{use nesterov}}$) 50% of the time. We sample $\ell_{2\text{WD}}$ and $\ell_{2\text{AdamW}}$ logarithmically between 1e-5 and 1e-1; with equal probability (one third each) we either use both terms, zero out $\ell_{2\text{WD}}$, or zero out $\ell_{2\text{AdamW}}$. With 50% probability we use a nonzero minimum learning rate multiplier sampled logarithmically between 1e-5 and 1.0. With 50% probability we sample the warmup fraction, $c_{\text{warmup}}$, between 1e-5 and 1e-1; otherwise it is set to zero. Finally, we uniformly sample the amount of time the learning rate is held constant ($c_{\text{constant}}$) between 0 and 1.
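A sketch of this sampling procedure is below; the helper and field names are assumptions mirroring the text above, not the released search-space code.

```python
# Sketch of the NAdamW search space sampling described in E.3.
import numpy as np

def log_uniform(lo, hi, rng):
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

def sample_nadamw_config(rng):
    cfg = {
        "lr": log_uniform(1e-5, 1.0, rng),
        "one_minus_beta1": log_uniform(1e-3, 1.0, rng),
        "one_minus_beta2": log_uniform(1e-5, 1.0, rng),
        "epsilon": log_uniform(1e-8, 1e4, rng),
        "use_nesterov": rng.random() < 0.5,
        "constant_fraction": rng.uniform(0.0, 1.0),
    }
    # One third each: use both l2 terms, zero out l2_wd, or zero out l2_adamw.
    l2_wd = log_uniform(1e-5, 1e-1, rng)
    l2_adamw = log_uniform(1e-5, 1e-1, rng)
    mode = rng.integers(3)
    cfg["l2_wd"] = 0.0 if mode == 1 else l2_wd
    cfg["l2_adamw"] = 0.0 if mode == 2 else l2_adamw
    # 50% chance each of a nonzero min LR multiplier and a nonzero warmup fraction.
    cfg["min_lr_mult"] = log_uniform(1e-5, 1.0, rng) if rng.random() < 0.5 else 0.0
    cfg["warmup_fraction"] = log_uniform(1e-5, 1e-1, rng) if rng.random() < 0.5 else 0.0
    return cfg

rng = np.random.default_rng(0)
config = sample_nadamw_config(rng)
```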
F EXTENDED RELATED WORK

F.1 SETS OF TASKS

Benchmarks consisting of multiple tasks are becoming an increasingly common technique for measuring improvement in algorithm design. Reinforcement learning has Atari (Bellemare et al., 2013), DMLab (Beattie et al., 2016), gym (Brockman et al., 2016), and dm_control (Tassa et al., 2018). Natural language processing has evaluation sets such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), and the NLP Decathlon (McCann et al., 2018). In computer vision there is (Zhai et al., 2019), which studies transfer learning of image features. In black box optimization there is Nevergrad (Rapin & Teytaud, 2018), COmparing Continuous Optimizers (COCO) (Hansen et al., 2016), and a number of tasks to test Bayesian hyperparameter optimization presented in (Dewancker et al., 2016). For first order gradient methods there are unit tests for stochastic optimization (Schaul et al., 2013), which studies toy optimization functions, and DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Hyperparameter tuning practices on these benchmarks vary between tuning on each task separately and tuning one set of hyperparameters for all problems. In Atari (Bellemare et al., 2013), for example, it is common practice to tune hyperparameters on a subset of tasks and evaluate on the full set. This protocol can further be extended by leveraging unseen levels or games at test time, as done in Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018). We believe generalization to unseen tasks is key for learned algorithms to be useful; thus our learned search space experiments mirror this setting by making use of held-out tasks.

Existing meta-learning datasets share similar goals to our work but focus on different domains. In few-shot learning there is MiniImageNet (Vinyals et al., 2016), which is built procedurally from the ImageNet dataset (Russakovsky et al., 2015). Meta-Dataset (Triantafillou et al., 2019) takes this further and also focuses on generalization by constructing few-shot learning tasks using images from a number of different domains for evaluation purposes. The automated machine learning community has OpenML (Vanschoren et al., 2013), with a focus on selecting and tuning non-neural algorithms. For learning optimizers, the use of task suites has been limited and ad-hoc. Many works use a single or a small number of standard machine learning tasks (Andrychowicz et al., 2016; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a). Wichrowska et al. (2017) use a set of synthetic problems meant to emulate many different kinds of loss surfaces. While existing collections of tasks exist for optimizer evaluation, e.g. (Schneider et al., 2019), they contain too small a number of tasks to act as a comprehensive training set for learning algorithms, and many of their tasks are additionally too computationally expensive to be useful during learning.

F.2 HAND DESIGNED AND LEARNED OPTIMIZERS

Optimization is core to machine learning and thus the focus of extensive work. Methods such as Nesterov momentum (Nesterov, 1983), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014) have all shown considerable improvements in both the speed of optimization and ease of use, by exposing robust and easier to tune hyperparameters than SGD (Sivaprasad et al., 2019). Adaptive step size methods in particular have emerged at the forefront, with many works building from Adam, including AdamW (Loshchilov & Hutter, 2017), RAdam (Liu et al., 2019), Novograd (Ginsburg et al., 2019), and NAdam (Dozat, 2016).
Recently, there has been a focus on comparing optimizers either for best performance or ease of use (Wilson et al., 2017; Choi et al., 2019; Schneider et al., 2019; Sivaprasad et al., 2019). This has proven difficult, as performance is heavily dependent on the choice of search space for optimization hyperparameters (Choi et al., 2019). Learned optimizers represent a parallel thread in the development of optimizers. By learning, as opposed to hand-designing, optimizers, researchers hope to not only increase performance but also ease of use (e.g. minimize the number of hyperparameters required or lower hyperparameter sensitivity) (Bengio et al., 1990; Schmidhuber, 1995; Hochreiter et al., 2001). Recently, there has been renewed interest in parameterizing learning algorithms with neural networks and learning these optimizers on neural network based losses (Andrychowicz et al., 2016; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). Other approaches learn symbolic parameterizations for new optimizers (Bello et al., 2017). These various methods are all trained and evaluated on different distributions of tasks, making comparison across papers challenging. The dataset of tasks presented here will hopefully aid in the ability to compare and evaluate progress in learned optimizer research. In this work, we develop a much more minimal type of “learned optimizer” than previous work, which developed new functional forms for the optimizer. Optimization involves not only the functional form of the optimizer, but also the rules for choosing hyperparameters and applying the optimizer. We focus on this second aspect of optimization and learn a hyperparameter search space to improve the performance of existing hand designed methods.

F.3 HYPERPARAMETER SEARCH

Hyperparameter search is a key component in machine learning. Considerable improvements have been made in language modeling (Melis et al., 2017), computer vision (Snoek et al., 2012), and RL (Chen et al., 2018) simply by tuning better. Often no single hyperparameter configuration works well across all tasks for existing optimization methods. Most current hyperparameter search methods involve trying a very large number of hyperparameters for every new task, which is computationally infeasible for large tasks and can additionally severely limit the number of hyperparameters that can be tuned. Many common techniques such as random search (Bergstra & Bengio, 2012; Bousquet et al., 2017), Bayesian optimization (Snoek et al., 2012; 2015), tree parzen estimators (Bergstra et al., 2011), or sequential halving (Kumar et al., 2018) require setting a hyperparameter search space by hand, which is not only difficult but often wildly inefficient. Learning hyperparameters or search strategies by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018), as well as under the term meta-learning in Chen et al. (2017), in which an LSTM is meta-trained to produce function locations to query. The cost of hyperparameter search is often large, as each evaluation requires training a model to completion. Often multi-fidelity based approaches are used, which leverage “simpler” tasks and transfer the resulting hyperparameters (Hutter et al., 2018). Common approaches include training on partial function evaluations (Swersky et al., 2014; Domhan et al., 2015; Li et al., 2016; Klein et al., 2016; Falkner et al.,
2018), or leveraging simplified data and models (Petrak, 2000; Zoph & Le, 2016; Brock et al., 2017). Our dataset of tasks serves as: a “simpler” set of tasks to train on; a large and diverse enough set of problems that optimization algorithms trained on it may be expected to generalize; and a framework to test transfer across different types of problems.

G LIST OF NADAM HPARAMS

Idx | Lr | warmup | constant | Min LR mult | beta1 | beta2 | epsilon | nesterov | l2 reg | l2 weight decay
0 | 1.24e-3 | 0.000 | 0.477 | 1.01e-3 | 0.94666 | 0.94067 | 8.114e-8 | False | 0.000e+00 | 7.258e-5
1 | 5.33e-3 | 0.000 | 0.172 | 0.0 | 0.96047 | 0.99922 | 8.665e-8 | True | 0.000e+00 | 5.563e-3
2 | 2.12e-4 | 0.000 | 0.210 | 1.39e-3 | 0.62297 | 0.97278 | 1.540e-7 | False | 0.000e+00 | 5.361e-2
3 | 4.06e-1 | 0.000 | 0.324 | 0.0 | 0.99724 | 0.98680 | 1.079e+02 | True | 0.000e+00 | 1.562e-2
4 | 2.05e-2 | 0.000 | 0.885 | 1.57e-5 | 0.35731 | 0.86043 | 8.874e-5 | True | 0.000e+00 | 7.217e-2
5 | 5.95e-4 | 0.008 | 0.378 | 0.0 | 0.89130 | 0.99983 | 1.483e-7 | True | 0.000e+00 | 4.087e-2
6 | 7.53e-3 | 0.000 | 0.422 | 9.55e-4 | 0.69192 | 0.98434 | 3.593e-8 | False | 0.000e+00 | 3.060e-4
7 | 4.69e-3 | 0.000 | 0.509 | 0.0 | 0.99639 | 0.98820 | 2.056e-5 | False | 0.000e+00 | 3.552e-2
8 | 2.95e-1 | 0.000 | 0.201 | 0.0 | 0.99678 | 0.99981 | 7.498e+00 | False | 3.792e-4 | 3.463e-4
9 | 2.04e-3 | 0.000 | 0.527 | 0.0 | 0.49995 | 0.99755 | 5.630e-8 | True | 0.000e+00 | 2.796e-2
10 | 7.39e-1 | 0.001 | 0.556 | 3.31e-3 | 0.99691 | 0.80639 | 2.900e+03 | False | 0.000e+00 | 7.851e-2
11 | 8.12e-3 | 0.000 | 0.207 | 0.0 | 0.17785 | 0.96033 | 7.971e-2 | False | 0.000e+00 | 1.489e-2
12 | 3.33e-2 | 0.000 | 0.369 | 0.0 | 0.69592 | 0.99997 | 5.510e-6 | True | 0.000e+00 | 1.362e-5
13 | 6.95e-3 | 0.000 | 0.014 | 0.0 | 0.99412 | 0.99305 | 4.352e-7 | False | 0.000e+00 | 3.142e-5
14 | 1.88e-1 | 0.000 | 0.205 | 1.08e-1 | 0.98597 | 0.56531 | 3.335e+00 | True | 1.265e-5 | 3.868e-3
15 | 9.47e-4 | 0.007 | 0.452 | 0.0 | 0.43977 | 0.09422 | 2.120e-7 | False | 0.000e+00 | 6.902e-3
16 | 3.75e-3 | 0.000 | 0.184 | 0.0 | 0.87756 | 0.96128 | 3.163e-3 | True | 7.468e-5 | 2.627e-3
17 | 7.25e-1 | 0.000 | 0.495 | 0.0 | 0.99800 | 0.99781 | 3.608e+00 | True | 1.656e-5 | 3.911e-2
18 | 4.58e-3 | 0.000 | 0.107 | 3.66e-1 | 0.42294 | 0.99963 | 4.174e-6 | True | 0.000e+00 | 4.446e-3
19 | 3.07e-4 | 0.007 | 0.518 | 0.0 | 0.57863 | 0.99625 | 9.881e-6 | False | 0.000e+00 | 5.521e-2
20 | 2.94e-5 | 0.000 | 0.830 | 8.27e-5 | 0.96916 | 0.99896 | 7.782e-7 | True | 3.364e-4 | 3.416e-3
21 | 1.65e-4 | 0.002 | 0.457 | 2.70e-1 | 0.95280 | 0.04565 | 2.832e-6 | True | 0.000e+00 | 1.141e-2
22 | 9.17e-1 | 0.010 | 0.897 | 2.67e-2 | 0.45061 | 0.99244 | 4.945e-1 | False | 1.253e-3 | 0.000e+00
23 | 2.36e-3 | 0.000 | 0.986 | 0.0 | 0.98560 | 0.99997 | 1.080e-8 | True | 0.000e+00 | 3.023e-3
24 | 2.14e-2 | 0.000 | 0.128 | 0.0 | 0.98741 | 0.99336 | 1.266e-4 | False | 0.000e+00 | 5.194e-4
25 | 5.91e-2 | 0.000 | 0.062 | 0.0 | 0.99794 | 0.99383 | 3.447e+02 | True | 0.000e+00 | 3.935e-2
26 | 1.57e-3 | 0.000 | 0.251 | 0.0 | 0.91820 | 0.99991 | 4.675e-5 | False | 0.000e+00 | 4.112e-5
27 | 4.43e-1 | 0.000 | 0.702 | 0.0 | 0.94375 | 0.93551 | 2.335e-8 | True | 0.000e+00 | 8.325e-5
28 | 2.98e-3 | 0.008 | 0.046 | 0.0 | 0.68612 | 0.94232 | 6.614e-2 | False | 6.489e-5 | 0.000e+00
29 | 1.65e-2 | 0.004 | 0.082 | 4.92e-4 | 0.95717 | 0.99789 | 3.068e+01 | True | 0.000e+00 | 8.920e-2
30 | 5.58e-3 | 0.000 | 0.538 | 0.0 | 0.97559 | 0.99990 | 3.238e-8 | True | 0.000e+00 | 4.896e-4
31 | 8.54e-1 | 0.000 | 0.229 | 0.0 | 0.93129 | 0.50200 | 2.051e-2 | False | 2.068e-4 | 2.801e-2
32 | 7.38e-3 | 0.000 | 0.722 | 8.78e-2 | 0.21456 | 0.99752 | 2.862e-2 | False | 0.000e+00 | 8.439e-2
33 | 4.26e-4 | 0.001 | 0.923 | 2.06e-1 | 0.47239 | 0.99974 | 8.221e-5 | False | 1.248e-5 | 0.000e+00
34 | 6.04e-3 | 0.000 | 0.698 | 0.0 | 0.97849 | 0.91449 | 1.806e+00 | False | 3.183e-3 | 1.762e-2
35 | 8.86e-3 | 0.000 | 0.104 | 1.66e-1 | 0.98967 | 0.99720 | 1.493e-2 | True | 0.000e+00 | 2.253e-2
36 | 1.51e-2 | 0.000 | 0.431 | 1.99e-3 | 0.80488 | 0.97878 | 2.538e-8 | True | 0.000e+00 | 2.269e-5
37 | 2.50e-3 | 0.000 | 0.009 | 0.0 | 0.98127 | 0.99988 | 1.799e-7 | False | 0.000e+00 | 1.303e-2
38 | 3.42e-4 | 0.000 | 0.827 | 6.38e-1 | 0.25217 | 0.96572 | 2.928e-7 | True | 0.000e+00 | 1.318e-3
39 | 6.94e-5 | 0.000 | 0.085 | 0.0 | 0.98674 | 0.42709 | 2.387e-7 | False | 0.000e+00 | 2.071e-4
40 | 3.03e-2 | 0.001 | 0.313 | 0.0 | 0.90610 | 0.99997 | 4.449e-3 | True | 0.000e+00 | 2.813e-5
41 | 4.64e-3 | 0.000 | 0.495 | 2.26e-5 | 0.64658 | 0.54108 | 3.528e-8 | False | 0.000e+00 | 2.996e-5
42 | 2.25e-3 | 0.000 | 0.722 | 0.0 | 0.97967 | 0.97518 | 1.488e-7 | True | 1.812e-5 | 2.180e-2
43 | 6.66e-4 | 0.000 | 0.632 | 2.79e-5 | 0.65968 | 0.99997 | 6.848e-6 | True | 0.000e+00 | 3.130e-3
44 | 3.31e-3 | 0.000 | 0.146 | 0.0 | 0.90447 | 0.99970 | 6.618e-6 | True | 0.000e+00 | 2.184e-2
45 | 7.84e-4 | 0.016 | 0.124 | 0.0 | 0.95065 | 0.99685 | 2.141e-2 | False | 0.000e+00 | 4.024e-5
46 | 6.16e-3 | 0.016 | 0.623 | 0.0 | 0.98823 | 0.98744 | 1.616e-6 | False | 0.000e+00 | 1.544e-2
47 | 3.26e-4 | 0.000 | 0.738 | 1.61e-4 | 0.78425 | 0.99998 | 3.468e-3 | False | 0.000e+00 | 4.709e-2
48 | 4.12e-3 | 0.001 | 0.205 | 0.0 | 0.99561 | 0.75382 | 2.390e-6 | True | 0.000e+00 | 3.631e-2
49 | 6.26e-1 | 0.000 | 0.932 | 2.52e-3 | 0.99401 | 0.83521 | 2.431e+00 | True | 0.000e+00 | 1.048e-2

The top 50 hyperparameters found using the NAdamW search space. We find diverse learning rates, with very little warmup used. We additionally find most good performing optimizers make use of AdamW-style weight decay. Finally, matching insight from (Choi et al., 2019), we find large values of $\epsilon$.

H DESCRIPTION OF TASKS IN TASK SUITE

In this section we detail the task distribution used throughout this work. In addition to this text, a Tensorflow (Abadi et al., 2016) implementation is also released at github.com/google-research/google-research/tree/master/task_set.

H.1 SAMPLED TASKS

H.1.1 DEFAULT SAMPLED COMPONENTS

As many of the sampled tasks are neural networks, we define common sampling routines used by all the sampled tasks.

Activation functions: We define a distribution over activation functions, sampled according to the following listing of name and weight. These are a mix of standard functions (relu, tanh) and less standard ones (cos); a minimal sampling sketch appears at the end of this subsection.

• relu: 6
• tanh: 3
• cos: 1
• elu: 1
• sigmoid: 1
• swish (Ramachandran et al., 2017): 1
• leaky relu (with α = 0.4): 1
• leaky relu (with α = 0.2): 1
• leaky relu (with α = 0.1): 1

Initializations: We sample initializers according to a weighted distribution. Each initialization sample also optionally samples hyperparameters (e.g. for random normal initializers we sample the standard deviation of the underlying distribution).

• he normal (He et al., 2015): 2
• he uniform (He et al., 2015): 2
• glorot normal (Glorot & Bengio, 2010): 2
• glorot uniform (Glorot & Bengio, 2010): 2
• orthogonal: 1. We sample the “gain”, or multiplier of the orthogonal matrix, logarithmically between [0.1, 10].
• random uniform: 1. This is defined between [−s, s] where s is sampled logarithmically between [0.1, 10].
• random normal: 1. The std is sampled logarithmically between (0.1, 10).
• truncated normal: 1. The std is sampled logarithmically between (0.1, 10).
• variance scaling: 1. The scale is sampled logarithmically between (0.1, 10).

RNN Cores: We define a distribution over the types of RNN cores used by the sequential tasks. With equal probability we sample either a vanilla RNN (Elman, 1990), GRU (Chung et al., 2014), or LSTM (Hochreiter & Schmidhuber, 1997). For each cell we either sample one shared initialization method or sample a different initialization method per parameter vector, with a 4:1 ratio. We sample the core hidden dimension logarithmically between [32, 128].
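A minimal sketch of the weighted categorical sampling used for activation functions follows; the name strings are illustrative stand-ins for the released identifiers.

```python
# Sketch of the weighted sampling of H.1.1, with the weights listed above.
import numpy as np

ACTIVATIONS = {
    "relu": 6, "tanh": 3, "cos": 1, "elu": 1, "sigmoid": 1,
    "swish": 1, "leaky_relu_0.4": 1, "leaky_relu_0.2": 1, "leaky_relu_0.1": 1,
}

def sample_activation(rng):
    names = list(ACTIVATIONS)
    weights = np.array([ACTIVATIONS[n] for n in names], dtype=float)
    return rng.choice(names, p=weights / weights.sum())

rng = np.random.default_rng(0)
activation = sample_activation(rng)  # e.g. "relu" with probability 6/16
```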
H.1.2 SAMPLED DATASETS

Image Datasets: We sample uniformly from the following image datasets. Each dataset additionally has sampled parameters. For all datasets we make use of four data splits: train, valid-inner, valid-outer, and test. Train is used to train models; valid-inner is used while training models to allow for modification of the training procedure (e.g. if validation loss stops improving, drop the learning rate); valid-outer is used to select meta-parameters; test should not be used during meta-training. For all datasets, we sample a switch with low probability (10% of the time) to only use training data and thus not test generalization. This ensures that our learned optimizers are capable of optimizing a loss, as opposed to a mix of optimizing and generalizing.

Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (LeCun, 1998).

Fashion Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (Xiao et al., 2017).

Cifar10: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

Cifar100: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

{food101_32x32, coil100_32x32, deep_weeds_32x32, sun397_32x32}: These datasets take the original set of images and resize them to 32x32 using OpenCV’s (Bradski, 2000) cubic interpolation. We ignore aspect ratio for this resize. Batch size is sampled logarithmically between [8, 256] (Bossard et al., 2014; Nene et al., 1996; Olsen et al., 2019; Xiao et al., 2010).

Imagenet32x32 / Imagenet16x16: The ImageNet 32x32 and 16x16 datasets as created by Chrabaszcz et al. (2017). Batch size is logarithmically sampled between [8, 256].

H.1.3 TEXT CLASSIFICATION

IMDB sentiment classification: We use text from the IMDB movie reviews dataset (Maas et al., 2011) and tokenize into subwords using a vocab size of 8k (Sennrich et al., 2015). We then take a length-s random slice from each example, where s is sampled logarithmically between [8, 64]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. We sample the number of training examples logarithmically between [1000, 55000], and with 10% probability use only training data instead of valid/test to test pure optimization as opposed to generalization.

H.1.4 CHARACTER AND WORD LANGUAGE MODELING

For the character and word language modeling datasets we make use of the following data sources: IMDB movie reviews (Maas et al., 2011), Amazon product reviews (ama) using the Books, Camera, Home, and Video subsets each as separate datasets, LM1B (Chelba et al., 2013), and Wikipedia (Foundation) taken from the 20190301 dump using the zh, ru, ja, hab, and en language codes. We split each article by new lines and only keep resulting examples that contain more than 5 characters. For infrastructure reasons, we only use a million articles from each language and only 200k examples to build the tokenizer.

Byte encoding: We take length-s random slices of each example, where s is sampled logarithmically between [10, 160]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we use only training data instead of valid/test to test pure optimization as opposed to generalization.
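A sketch of this slicing and batching is below; `examples` is a hypothetical list of byte-encoded token arrays, and padding of short slices is an assumption made so the batch can be stacked.

```python
# Sketch of the random-slice batching used by the language modeling tasks:
# take a random length-s slice from each sampled example, then stack.
import numpy as np

def make_batch(examples, rng, s_min=10, s_max=160, bs=64):
    # Log-uniform slice length, per the byte-encoding description above.
    s = int(np.exp(rng.uniform(np.log(s_min), np.log(s_max))))
    batch = []
    for idx in rng.choice(len(examples), size=bs):
        ex = examples[idx]
        start = rng.integers(0, max(len(ex) - s, 1))
        batch.append(ex[start:start + s])
    # Zero-pad slices shorter than s so the batch stacks into one array.
    return np.stack([np.pad(b, (0, s - len(b))) for b in batch])
```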
Subword encoding: We encode the text as subwords with a vocab size of 8k (Sennrich et al., 2015). We then take length-s random slices of each example, where s is sampled logarithmically between [10, 256]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we use only training data instead of valid/test to test pure optimization as opposed to generalization.

H.2 SAMPLED TASKS

H.2.1 MLP

This task family consists of a multi-layer perceptron trained on flattened image data. The number of layers is sampled uniformly from [1, 6]. Layer hidden unit sizes are sampled logarithmically between [16, 128], with a different number of hidden units per layer. One activation function is chosen for the whole network, as described in H.1.1. One shared initializer strategy is also sampled. The image dataset used is also sampled. Two sampled configurations are shown below (the excerpt here is truncated in the source):

{
  "layer_sizes": [
    71
  ],
  "activation": "leaky_relu2",
  "w_init": [
    "he_normal",
    null
  ],
  "dataset": [
    "sun397_32x32",
    {
      "bs": 32,
      "just_train": false,
      "num_train": null
    },
    {
      "crop_amount": 0,
      "flip_left_right": false,
      "flip_
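As an illustration only, a hypothetical builder for this sampled MLP might look as follows. The field names mirror the (truncated) configuration above, while the input/output dimensions, initializer, and the mapping of "leaky_relu2" to a leaky relu with α = 0.2 are assumptions.

```python
# Hypothetical sketch: constructing an MLP task from a sampled configuration.
import numpy as np

def build_mlp(config, input_dim=32 * 32 * 3, num_classes=397):
    sizes = [input_dim] + config["layer_sizes"] + [num_classes]
    params = [(np.random.randn(a, b) * np.sqrt(2.0 / a), np.zeros(b))  # he_normal-style fan-in init
              for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (w, b) in enumerate(params):
            x = x @ w + b
            if i < len(params) - 1:
                x = np.where(x > 0, x, 0.2 * x)  # assumed "leaky_relu2" = leaky relu, alpha=0.2
        return x
    return params, forward

params, forward = build_mlp({"layer_sizes": [71]})
logits = forward(np.random.randn(32, 32 * 32 * 3))  # batch of 32 flattened images
```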
1. What is the focus of the paper, and what are the authors trying to achieve?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its practical applications?
3. Do you have any concerns or questions about the experimental design and results presented in the paper?
4. How does the reviewer assess the novelty and significance of the paper's contribution?
5. Are there any limitations or areas for improvement in the paper that the reviewer identifies?
Review
This paper proposes a new dataset which contains experiment/model details coupled with optimizer information, so as to model the behavior of optimizers and their effect on test-set performance. The paper is not very difficult to follow, but I am not super convinced of an actual practical use case. I think that the authors should provide concrete examples of real-life, test-time applications. I suppose the meta-learning algorithm for the optimizer would take the experiment definition and map this information to an optimal optimizer, but I think that it would be easier for the reader if this information could be made more explicit in the paper, perhaps with a concrete example. I also think that the comparison between the proposed meta-learning approach and regular hyperparameter search on a given dataset should be made clearer. Right now it is limited to Figure 3, and in my opinion the details on how the random search is carried out are not clear enough. What is the range of hyperparameters that are sampled? What are the distributions from which the hyperparameters come? Also, it is hard to conclude from the experiments provided in Figure 3 alone that the proposed meta-learning approach would be preferable over standard hyperparameter search, given only the two tasks explored in this particular figure. Ideally a third dimension of tasks should also be added to the figures so that we know that this meta-learning approach generalizes over a variety of tasks. (The same comments apply to Figure 4, if I understand correctly, which does similar experiments on more realistic models/tasks.) If I am missing something that is already in the paper, I apologize, but without further experimental evidence which suggests that the proposed meta-learning scheme would be clearly preferable over standard hyperparameter search, it is hard to see a clear-cut application for this paper. I appreciate the ambitious task that this paper is trying to tackle, but I feel more convincing experimental evidence, and better presentation of the experiments, is required to consolidate the case that this paper is trying to make.
ICLR
Title TaskSet: A Dataset of Optimization Tasks Abstract We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of the TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have opensourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.1 N/A We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of the TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have opensourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.1 1 INTRODUCTION As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms have only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a), or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks. 1.1 TASKSET: A SET OF TASKS We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers, or development of new learned learning algorithms. We call this suite of tasks TaskSet. Much in the same way that learned features in computer vision outpaced hand designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data driven approaches to discover optimization algorithms will replace their hand designed counterparts resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned optimizer research. In this setting, a single “example” is an entire training procedure for a task defined by data, loss function, and architecture. 
Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics, and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute as almost all tasks can be trained 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1. 1redacted url 1.2 AMORTIZING HYPERPARAMETER SEARCH Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). Unfortunately, most modern machine learning models also require extensive hyperparameter tuning. Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019). The most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads. We choose the elements in this list by leveraging the diversity of tasks in TaskSet, by meta-learning a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks. Although learning the list of hyperparameters is costly (in total we train∼29 million models consisting of over 4,000 distinct hyper parameter configurations), our final published list is now available as a good starting guess for new tasks. Furthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release it as part of this work. We additionally release code in Tensorflow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list, and can be easily applied to any model. 2 TASKSET: A SET OF TASKS How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and will be representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural network based tasks. 
Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today), as a result most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased “task diversity” by including many different kinds of training algorithms, architectures, and datasets – inspired by past work in reinforcement learning which has demonstrated large numbers of problems and increased diversity around some domain of interest is useful for both training and generalization Heess et al. (2017); Tobin et al. (2017); Cobbe et al. (2018); OpenAI et al. (2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997). Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization. Specifically we define a task as a set of 4 functions: • Initialization: ()→ parameter initial values • Data generator: data split (e.g. train / valid / test)→ batch of data • Forward pass: (batch of data, params)→ loss • Gradient function: (input data, params)→ gradients ( dlossdparams ) A task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned. We augment a set of “fixed” tasks which have been designed by hand, with “sampled” tasks that are randomly generated task instances. 2.1 SAMPLED FAMILIES OF TASKS Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic) and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks. 2.2 HAND DESIGNED TASKS In addition to the sampled tasks, we also include 107 hand designed tasks. These consist of more common tasks that both improve the coverage beyond the sampled tasks, and provide for better interpretability through a closer match to existing tasks in the literature. 
These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3. 2.3 AGGREGATE STATISTICS OF TASKSET In Figure 1a we show histograms of compute times for all problems and find almost all problems train under an hour (see Appendix C for per task family histograms). In Figure 1c we plot a histogram of the number of parameters per tasks. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A 3 AMORTIZED HYPERPARAMETER SEARCH As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists. This idea of learning lists of hyper parameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g. learning rate). While sometimes practitioners use a single optimizer – e.g. Adam (Kingma & Ba, 2014) with default hyperparameters – most practitioners will often run multiple optimizers and use a validation set to select the best performer. 3.1 OPTIMIZER FAMILIES We define different parameterizations of hand designed optimizers as an optimizer family. The optimizer families we consider consist of: • Adam1p: One hyperparameter, the fixed learning rate α • Adam4p: Four Adam hyperparameters, α, β1, β2, and • Adam6p: Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays • Adam8p: The hyperparameters in Adam6p plus two additional hyperparameters for `1 and `2 regularization terms • NAdamW: A 10 hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay, and weight decay. For the full update equations see Appendix D.1 for Adam and D.2 for NadamW. We chose Adam based on its use in existing work, and NAdam based on performance shown in (Choi et al., 2019). 3.2 LEARNED HYPERPARAMETER LISTS Traditionally researchers tune hyperparameters on a per model basis. While this often results in performance gains; it comes at the cost of immense compute, and researchers are almost never able to expend enough compute to saturate model performance (Shallue et al., 2018). As an alternative to per-problem tuning, we proposes instead tuning the search strategy itself on a dataset of tasks and transferring the knowledge gained to new tasks of interest. This idea is already implicitly done by humans – e.g. we don’t start a hyperparameter search with a learning rate of 106 – we use values that the community has found useful. This dataset-based tuning has a number of desirable properties. First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline. Second, we are less restricted by the number of optimizer parameters we search over or by needing to define reasonable search spaces. For example, if there are redundant regions of search space, our learned optimizer will be less likely to sample them repeatedly, unlike random search. 
If there is a region of hyperparameter space that performs poorly on all problems, the learned search strategy will avoid it. In this work we parameterize the learned search strategy as an ordered list of optimizers to try (i.e. a list of hyperparameter configurations). Given a fixed number of task evaluations we would like to achieve the best possible performance on all tasks in the training set of tasks. For a length k list of optimizers we define our loss as: J(θ1,...,k) = ∑ τ∈tasks [ min i∈1..k f(τ, θi) ] , (1) where θi are the optimizer hyperparameters for element i in the list, and f is an appropriately normalized loss computed after training task τ . We seek to find an optimal list of optimizers as (similar to (Wistuba et al., 2015b)): θ∗1,...,k = arg min θ1,...,k J(θ1,...,k). (2) This is meant to serve as an example task, illustrative of the sort of research enabled by TaskSet. More advanced hyperparameter search strategies would no doubt yield even more performant results. 3.3 SCORING AN OPTIMIZER BY AVERAGING OVER TASKS To score a task, we initialize the parameters of the task and run 10,000 iterations of an optimizer. We monitor loss on each data split (train, validation, test) every 200 steps using an average over 50 mini-batches per evaluation. For all data presented in this paper we also compute averages over 5 random task parameter initializations. A side effect of the diverse task dataset is that losses span multiple orders of magnitude, making direct aggregation of performance problematic. To remedy this we normalize the loss values for all tasks linearly between 0 and 1 where 1 is validation loss at initialization and zero is the lowest validation loss achieved by any tested optimizer. Loss values greater than the loss at initialization are clipped to 1. To collapse an entire normalized training curve into a scalar cost, we compute the mean normalized loss over the 10,000 iterations. We find empirically that this choice is similar to taking the minimum (Appendix B.5). We leave exploring alternative methods such as performance profiles (Dolan & Moré, 2002) and Nash averaging (Balduzzi et al., 2018) for future work. 3.4 GREEDY LEARNING FROM RANDOM SEARCH Optimizing Eq. 2 is combinatorially expensive. To tractably solve this optimization problem, we introduce two approximations (Wistuba et al., 2015b). First, we shift the unconstrained search over the full space of optimizers to search over a finite set of optimizers, Θ. This finite set can be computed ahead of time and decouples the expensive procedure of training each task with an optimizer from training the learned search space. Separating data and training in this way has been done for both hyperparameter search (Eggensperger et al., 2015), and neural architecture search (Klein & Hutter, 2019; Ying et al., 2019). In total we trained 1,000 optimizer configurations for each of Adam1p, Adam4p, Adam6p, Adam8p, and NAdamW on all 1,162 tasks with 5 random seeds per pair. Second, we use a greedy heuristic to approximate the combinatorial search over sets of k optimizers. For a single optimizer trial, k = 1, we select the best performing optimizer on average across all training tasks. We then continue to select optimizer parameters such that the minimum of all optimizer-parameters per task, aggregated over all tasks is minimized. This shifts the complexity from exponential in k to linear. 
Finding a length k set of optimizers can thus be efficiently computed as follows: θ∗1 = arg min θ∈Θ [ ∑ τ∈tasks f(τ, θ) ] (3) θ∗k = arg min θ∈Θ [ ∑ τ∈tasks [min (b, f(τ, θ))] ] where b = min i∈1..(k−1) f(τ, θ∗i ). (4) We note that the first argument of the outer min, b, can be computed once per set of hyperparameters as it does not depend on θ. Finally, as our tasks are stochastic, we order optimizers based on validation loss and report test loss (Van Hasselt et al., 2016).2 This training strategy requires an original search space from which to collect data and build Θ. The search space we use is described in Appendix E.2. While large, we find that the optimal parameters for each task end up covering almost the entire space. At some point, no improvement can be obtained on any of the tasks in the dataset. At this point, we simply randomly order the remaining optimizers though expect more sophisticated methods could be employed. 4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration, 2This technically means that increasing the number of optimizes could potentially decrease performance, but we find this rarely happens in practice. we hope to give a flavor of the types of research possible with TaskSet. Our main tool to show performance are figures that sweep the number of optimizers configurations on the x-axis, and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1). 4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH To demonstrate the impact of learning a search space, we take the 1,162 tasks split them into even train and test tasks. We then learn a search strategy using optimizers from the Adam8p family following Eq. 4 on the train tasks. Results in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyper parameters (Rand: Adam4p), as well as the Adam 8 dimensional search space (Rand: Adam8p). To better get a sense of performance, we show two additional “Refined” baselines which involve random sampling from better search space. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is the difficulty of running online hyperparameter optimization methods such as Bayesian Optimization without running additional compute. Future work will explore offline Bayesian methods. 4.2 MORE TASKS LEAD TO BETTER GENERALIZATION We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different size, and train hyperparameter lists using Eq.4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Fig. 3. We find that a large number of tasks (more than 100) are required to achieve near-optimal test performance. 
4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS
With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration, we hope to give a flavor of the types of research possible with TaskSet. Our main tools to show performance are figures that sweep the number of optimizer configurations on the x-axis and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1).
4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH
To demonstrate the impact of learning a search space, we take the 1,162 tasks and split them evenly into train and test tasks. We then learn a search strategy using optimizers from the Adam8p family following Eq. 4 on the train tasks. Results are shown in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyperparameters (Rand: Adam4p), as well as the 8-dimensional Adam search space (Rand: Adam8p). To get a better sense of performance, we show two additional “Refined” baselines, which involve random sampling from better search spaces. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so that 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in Appendix E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is that online hyperparameter optimization methods such as Bayesian optimization cannot be run without additional compute. Future work will explore offline Bayesian methods.
4.2 MORE TASKS LEAD TO BETTER GENERALIZATION
We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different sizes, and train hyperparameter lists using Eq. 4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Fig. 3. We find that a large number of tasks (more than 100) are required to achieve near-optimal test performance. This is surprising to us given how simple our learned search strategy is (simply a list of hyperparameters), but not wholly so given past work studying generalization in RL (Cobbe et al., 2018).
4.3 GENERALIZATION TO DIFFERENT TYPES OF PROBLEMS
For learned algorithms to be generally useful, some amount of generalization to unseen task families is required. To test this, we split our data into disjoint task types. We perform two splits: testing on RNN tasks and training on all others, and testing on autoencoder tasks and training on all others. As a best case baseline we additionally train search spaces on the test task families directly. We find an order of magnitude better sample efficiency than random search for both cases, and find our learned search space is close in performance to search spaces trained on just the testing tasks (Fig. 3).
5 EXPERIMENTS: REALISTIC PROBLEMS
In §4.3 and §B.1 we explored generalization of learned hyperparameter lists to held-out tasks within the TaskSet dataset. While useful for analysis, these tasks are still far from the workloads commonly employed to solve real problems. In this section, we explore the performance of our learned search space on a number of state of the art models. These models drastically differ from the training set of tasks in parameter count and compute cost. We see these experiments as evidence that the tasks presented in TaskSet capture enough of the structure of “realistic” problems that TaskSet can be used to improve larger scale workloads. For all experiments in this section we take the optimizer ordering obtained with the NAdamW optimizer family on all TaskSet tasks and then apply the resulting search space to the target problem. The final ordered list of hyperparameters used is in Appendix G. We show results for ResNet50 on ImageNet, and Transformers on LM1B. Additional results with reinforcement learning using PPO are in Appendix B.2. First we explore ImageNet classification using a ResNet50. We take the TPU implementation with default settings from the official Tensorflow models repository (Tensorflow, 2019), and swap out different optimizers. We show accuracy computed over the course of training as well as best performance for a given hyperparameter budget in Figure 4. We find that the learned search space vastly outperforms learning rate tuned Adam. Next we explore language modeling on LM1B with a Transformer. We take the transformer (Vaswani et al., 2017) example implemented in Jax (Bradbury et al., 2018) with Flax (Flax Developers, 2020). We train using a 2x2 TPU V2 configuration for 100k iterations. Once again we take all other hyperparameters as is and simply swap the optimizer implementation. We find the learned hyperparameter list dramatically outperforms the default optimizer setting and the fixed learning rate baseline. Moreover, we emphasize that our method does not require any knowledge of the underlying problem to achieve faster results. See Appendix B.3 for this same transformer with a budget of 20k iterations.
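Concretely, applying the learned list to a target problem such as the ResNet50 or Transformer above is just a budgeted sequential sweep over the ordered configurations. A minimal sketch, assuming a user-supplied train_and_evaluate routine and the ordered NAdamW configurations from Appendix G (both names are our own placeholders, not a released API):

def sweep_learned_list(hyperparameter_list, train_and_evaluate, budget: int):
    """Try the first `budget` configurations in order; keep the best by validation loss."""
    best_config, best_val = None, float("inf")
    for config in hyperparameter_list[:budget]:
        val_loss = train_and_evaluate(config)  # trains the target model with this config
        if val_loss < best_val:
            best_config, best_val = config, val_loss
    return best_config, best_val

Because the list is ordered by expected usefulness, even a small budget (a handful of configurations) recovers most of the benefit of a much larger random search.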
6 RELATED WORK
The idea of sets of tasks has been explored throughout machine learning. The majority of these suites are for use in evaluation, whereas our suite is targeted at meta-learning. The evaluation suite of optimization tasks closest to those presented here is DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Our task suite focuses on smaller problems and contains 50x more tasks. Outside of evaluation, task suites in reinforcement learning such as Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018) focus on training algorithms that work across a variety of settings. The creation of TaskSet was motivated by the goal of learning learning algorithms, or meta-learning (Schmidhuber, 1987; 1995; Hochreiter et al., 2001), and in particular learned optimizers (Bengio et al., 1990; Andrychowicz et al., 2016; Bello et al., 2017; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). This use case is explored with this dataset in (Metz et al., 2020). In this work we do not use this task suite to train learned optimizers, but instead focus on learning a hyperparameter search strategy. Tuning hyperparameters by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as meta-learning (Reif et al., 2012; Gomes et al., 2012; Feurer et al., 2014; Wistuba et al., 2015b;a; Chen et al., 2017; Pfisterer et al., 2018). See Appendix F.1 for a full discussion of sets of tasks in machine learning, Appendix F.2 for more info on optimization in machine learning, and Appendix F.3 for a discussion of existing hyperparameter search methods.
7 DISCUSSION
Learning optimization algorithms represents a promising direction for accelerating machine learning research. For the resulting algorithms to become useful tools, however, we must further understand the relationships between training tasks, meta-optimization, and both iid and out-of-distribution generalization. This work takes steps towards this goal by introducing a significantly larger set of optimization tasks than ever previously considered. As an example use-case, we provide a thorough analysis of how TaskSet enables meta-optimization of simple, but performant, hyperparameter lists. Despite this approach’s simplicity, the training of learned learning algorithms is computationally expensive. We hope to explore alternative parameterizations which will increase efficiency by, e.g., leveraging previous evaluations or partial model training (Swersky et al., 2014; Li et al., 2016). We are releasing the optimal hyperparameter list we have found as a drop-in replacement optimizer in a variety of deep learning frameworks (Tensorflow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018)) in the hopes that the research community finds it useful. We believe this represents a new set of reasonable optimizer defaults for new problems. Finally, we hope TaskSet encourages more standardized research on general purpose optimizers.
A TASKSET VISUALIZATION
For a qualitative view, we constructed a feature space consisting of performance measurements for each task+optimizer pair (see §3.3). This forms a dense matrix of size number of tasks by number of optimizers. We then perform t-SNE (Maaten & Hinton, 2008; Van Der Maaten, 2014) to reduce the dimensionality to two and plot the results, coloring by task family (Figure S1). Clusters in this space correspond to tasks that work well with similar optimizers. We find diverse tasks, with clusters occurring around similar families of tasks.
A.1 TSNE OF TASKSET
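A sketch of how the Figure S1 embedding can be reproduced with scikit-learn, assuming the per-task, per-optimizer performance matrix from §3.3 is available; the file and variable names below are our own placeholders, not the released pipeline:

import numpy as np
from sklearn.manifold import TSNE

# perf[t, o]: normalized loss of optimizer o on task t (see Section 3.3).
perf = np.load("task_by_optimizer_losses.npy")  # hypothetical file name
embedding = TSNE(n_components=2, perplexity=30.0).fit_transform(perf)
# embedding[t] is a 2-D point per task; color points by task family when plotting.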
B ADDITIONAL EXPERIMENTS
B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS
Training learned algorithms on large models is often infeasible for computational reasons. As such, one form of generalization needed when building learned algorithms is the ability to transfer to different sized models. As shown in Figure 1, the tasks in this suite contain a wide range of parameter counts, and can thus be used to test this kind of generalization. We split the tasks into 8 groups – one group per order of magnitude in parameter count – and train hyperparameter lists on one range and test on the rest. In Figure S2 we plot the ratio of the loss achieved on the training tasks to that achieved on the testing tasks for each target parameter range. We find peak performance around the model sizes used for training, and a smooth falloff as the testing tasks become more dissimilar as measured by parameter count. We note that our problems are not evenly distributed across these groups, thus each group will contain a different percentage of the underlying tasks. While this potentially confounds these results, we believe a similar bias occurs in realistic workloads as well.
Figure S2: We show learned search space generalization, measured as a ratio of the loss achieved in training and testing (train J / test J, y-axis), versus the number of task parameters (log10, x-axis) used during search space training. Generalization falls off as one moves further away from the training regime. In black we show that a uniform mixture of the 7 parameter buckets does not fall off.
B.2 REINFORCEMENT LEARNING WITH PPO
Figure S3: We find our learned hyperparameter lists perform about as well as random search on the NAdam search space, and worse than random search on the learning rate tuned Adam search space.
We test the learned hyperparameter lists on two continuous control reinforcement learning environments, half cheetah and humanoid, from Gym's Mujoco environments (Todorov et al., 2012; Brockman et al., 2016). We use TF-Agents (Guadarrama et al., 2018) with all non-optimizer hyperparameters set via searching a mixture of environments. In Figure S3 we find our learned hyperparameter lists achieve comparable or slightly worse performance, outperforming learning rate tuning of Adam in neither efficiency nor final performance. To diagnose this behavior we ran all 1k optimizers for both problems and found the learned hyperparameter list performs comparably to random search in the underlying space. To probe further, we computed the Spearman correlation between the performance of each optimizer on these problems and on the rest of the tasks in the task suite. We found considerably worse correlations than were present for tasks within TaskSet. This is not surprising as TaskSet contains no reinforcement learning problems.
B.3 LM1B TARGETING 20K ITERATIONS
We show a transformer on LM1B similar to that shown in §5, except run for only 20k iterations, a fifth of the steps. Results are shown in Figure S4. We find the learned hyperparameter lists are much more efficient than either of the baselines.
Figure S4: We find our learned hyperparameter lists outperform learning rate tuned Adam, with both a constant and a fixed learning rate schedule, on a 53M parameter Transformer trained on LM1B. Left: Learning curves for the best of the optimizers. Right: Number of optimizers tried vs best test loss.
B.4 PROBING SHORT HORIZON
Often the goal when training a learned optimizer is to minimize the loss reached after training for some number of iterations. This is extremely computationally expensive and in practice approximations must be used. One common family of approximations is short horizon based methods. These methods rely upon somehow truncating training so that updates can be made to the learned optimizer more frequently. This is commonly done via truncated backprop (Werbos, 1990; Wichrowska et al., 2017; Metz et al., 2019a; Wu et al., 2016), or proxy objectives such as only training for a handful of epochs (Zoph & Le, 2017). While this short horizon proxy is certainly not optimal (Wu et al., 2016), the performance gains are immense and in practice this is what makes meta-training optimizers feasible. In our task suite, we test this short horizon learning by training hyperparameter lists using only some finite number of training iterations per task and testing in the full training regime (10k steps). Results are shown in Figure S5. We find that even when learning the hyperparameter list on a mere 200 steps, our hyperparameter list continues to generalize and outperforms random search on Adam8p. This is promising as it suggests that training the learned hyperparameter list can be done with 1/50th of the total compute. This result is surprising to us as prior work indicates the effect of this bias can be severe (Wu et al., 2016; Metz et al., 2019a). We suspect it is due to the simplicity of the learned parameter space but leave a thorough analysis of this for future work.
Figure S5: Hyperparameter lists trained on short horizon data generalize remarkably well. On the y-axis we show performance evaluated on the full 10k training iterations for a given number of optimizers tried (x-axis). In color we show the number of steps used when evaluating task optimizer performance when training the hyperparameter list.
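The short-horizon experiment above can be approximated offline by truncating stored training curves before normalizing and aggregating as in §3.3. A minimal sketch under our own naming assumptions (in particular, computing the per-task best loss over the truncated horizon is our simplification):

import numpy as np

def short_horizon_score(curves: np.ndarray, steps_per_eval: int, horizon: int) -> np.ndarray:
    """Mean normalized validation loss using only the first `horizon` training steps.

    curves: [num_tasks, num_optimizers, num_evals] validation losses,
            recorded every `steps_per_eval` steps (200 in this paper).
    """
    n_evals = max(1, horizon // steps_per_eval)
    truncated = curves[:, :, :n_evals]
    init = curves[:, :, :1]                            # loss at initialization
    best = truncated.min(axis=(1, 2), keepdims=True)   # best loss seen on each task
    # Normalize so 1 = loss at init and 0 = best observed loss, clipping divergence to 1.
    normed = np.clip((truncated - best) / (init - best), 0.0, 1.0)
    return normed.mean(axis=-1)                        # [num_tasks, num_optimizers] scores

Hyperparameter lists are then learned from short_horizon_score output and evaluated against scores computed on the full 10k-step curves.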
Figure S6: Left: Aggregate performance (y-axis) vs number of optimizers tried (x-axis) for different normalization and aggregation techniques. In each curve we train the hyperparameter list with a different normalization and aggregation strategy and test with the default normalization and aggregation technique described in §3.3. We find some strategies are near identical in performance (e.g. min norm), while others perform significantly worse – e.g. last quantile norm. In both cases, however, we still perform better than the underlying random search. Center: Correlation between the default normalization and the quantile based normalization strategy. Correlation is quite low – 0.193 Pearson correlation. Right: Correlation between the default normalization, which uses a mean to aggregate validation loss over the course of training, and a variant using a min. We find a much higher correlation of 0.911.
B.5 CHOICE OF NORMALIZATION FUNCTION
There is no easy way to define a single metric for optimizer performance over a mixture of tasks. This paper picks a single normalization strategy, based on the minimum validation loss and the validation loss at initialization, presented in §3.3. In this section we show the impact of choosing a different normalization and/or aggregation technique. First, instead of computing the mean over learning curves as described in §3.3, we compute a min. Second, instead of rescaling based on init and min, we linearly rescale based on the 95th percentile of validation loss and the minimum validation loss achieved at the end of training each task. In Figure S6 we show learned hyperparameter list training and testing performance as a function of the number of optimizers tried when training with different normalization techniques. We find that using the min instead of the mean results in a negligible change, while using the percentile loss hurts performance more significantly. This difference can be explained by Figure S6 (center and right), where we show correlations between the losses. We find the percentile loss has a much weaker correlation to the default normalizer. We suspect this difference is due to the fact that many optimizers diverge on tasks; by using the 95th percentile we upweight optimizers that do not diverge.
B.6 TASK FAMILIES ARE DIVERSE
To show the effects of diversity we train and test hyperparameter lists on each pair of task families. We additionally normalize each column from 0 to 1 to account for different mean losses across tasks. Results are shown in Figure S7. While we do find some similarity between tasks – e.g. between the MAF and NVP models – no two task families exhibit the same performance characteristics (there are no duplicate columns), suggesting that each task family provides a different contribution to the space of all tasks. We also find that training on certain “far away” tasks, e.g. the quadratic family, yields poor performance on most other task families.
Figure S7: Learning hyperparameter lists using one task family (train task family, x-axis) and testing on the remainder of task families (test task family, y-axis); color denotes normalized loss. We normalize each column from 0 to 1 to account for different mean losses across tasks. Lower loss means better performance. We find some groups of similar tasks, but in general no two task families behave identically.
B.7 EFFECTS OF THE META-TRAINING SEARCH SPACE SIZE
Our offline learning technique described in §3.4 hinges on a finite set of optimizers collected via random search. This set is denoted by Θ in Eq. 4. In this section we probe the impact of its size. We take different sized subsets of the thousand Adam8p optimizer configurations and train and test search spaces on different iid splits of tasks. We then plot performance as a function of the number of optimizers in Figure S9. Moving right in this figure corresponds to increasing the compute needed to train the learned hyperparameter list. We find performance continues to improve as the size of Θ grows. Given the high dimension of our meta-parameters, 8, this is not a surprise, as the number of evaluations needed to explore the space grows exponentially. We find that the full thousand trials are needed to outperform learning rate tuned Adam when given only a single optimizer evaluation, and that around 100 optimizers (the size of Θ) are needed in the case of 10 optimizer trials (k = 10). Overall this suggests that random search might not be the most efficient learning method for creating hyperparameter lists. This is especially true as we work with optimizer families that have more hyperparameters. Other approximate learning methods should likely be explored, such as truncated backprop through time as used by the learned optimizer community (Metz et al., 2019a), and/or population based methods (Balduzzi et al., 2019).
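The Θ-size experiment in B.7 can be sketched by subsampling the optimizer pool before re-running the greedy construction of §3.4 (greedy_hparam_list as sketched after §3.4; the function and variable names are our own):

import numpy as np

def subsample_and_learn(losses, pool_sizes=(10, 100, 1000), k=10, seed=0):
    """Learn hyperparameter lists from random subsets of the optimizer pool."""
    rng = np.random.default_rng(seed)
    lists = {}
    for n in pool_sizes:
        subset = rng.choice(losses.shape[1], size=n, replace=False)
        # Greedy construction restricted to the subsampled pool.
        local = greedy_hparam_list(losses[:, subset], k)
        lists[n] = subset[local]  # map back to global optimizer indices
    return lists

Evaluating each resulting list on held-out tasks, as a function of n, reproduces the kind of sweep shown in Figure S9.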
Figure S8: Timings computed for each task family (time to train 10k steps, ranging from roughly 1 second to 2 hours, per task family). We find most task families have a narrow distribution of compute times.
Figure S9: Performance continues to improve as more and more optimizers are used when training the search spaces. On the x-axis we show the number of optimizers (the size of Θ, i.e. the number of hyperparameter evaluations used in training the learned hyperparameter list) and on the y-axis the test loss achieved when applying the learned search space for a given fixed list length (different values of k shown in color). We plot the median, with the 25-75 percentile range shaded, over different random optimizer samples and iid task splits. Stars (with horizontal guide lines) denote the best search for the corresponding number of hyperparameters for learning rate tuned Adam in half orders of magnitude.
C TASK TIMINGS
In Figure S8 we show box plots of training times for each problem. For each task we use the median step time recorded over a mixture of different physical devices, multiplied by 10k to estimate a full training time. Future versions of this dataset of tasks will contain more variation within each task family.
D OPTIMIZER FAMILY UPDATE EQUATIONS
D.1 ADAM8P UPDATE EQUATIONS
The 8 meta-parameters are: the learning rate α; the first and second moment momentum, β1, β2; the numerical stability term ε; the ℓ2 and ℓ1 regularization strengths; and the learning rate schedule constants λexp_decay and λlinear_decay. For Adam6p, we set ℓ1 and ℓ2 to zero.

\phi^{(0)} = \text{problem specified random initialization}   (S1)
m^{(0)} = 0   (S2)
v^{(0)} = 0   (S3)
g^{(t)} = \frac{d}{d\phi^{(t)}}\left(f(x;\phi^{(t)}) + \ell_2\|\phi^{(t)}\|_2^2 + \ell_1\|\phi^{(t)}\|_1\right)   (S4)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1)   (S5)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2)   (S6)
\hat{m}^{(t)} = m^{(t)} / (1-\beta_1^{t+1})   (S7)
\hat{v}^{(t)} = v^{(t)} / (1-\beta_2^{t+1})   (S8)
u^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S9)
s_{\text{linear}}^{(t)} = \max(1 - t\lambda_{\text{linear\_decay}}, 0)   (S10)
s_{\text{exp}}^{(t)} = \exp(-t\lambda_{\text{exp\_decay}})   (S11)
\phi^{(t+1)} = \phi^{(t)} - \alpha\, s_{\text{linear}}^{(t)}\, s_{\text{exp}}^{(t)}\, u^{(t)}   (S12)
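Below is a NumPy sketch of a single Adam8p step following (S4)–(S12); the function and dictionary key names are our own, and the ℓ1/ℓ2 terms of (S4) are assumed to already be folded into grad:

import numpy as np

def adam8p_step(phi, m, v, grad, t, hp):
    """One Adam8p update (S5-S12). `grad` already includes the l1/l2 terms of (S4).

    hp: dict with keys lr, beta1, beta2, eps, linear_decay, exp_decay.
    """
    m = hp["beta1"] * m + (1.0 - hp["beta1"]) * grad            # (S5)
    v = hp["beta2"] * v + (1.0 - hp["beta2"]) * grad**2         # (S6)
    m_hat = m / (1.0 - hp["beta1"] ** (t + 1))                  # (S7)
    v_hat = v / (1.0 - hp["beta2"] ** (t + 1))                  # (S8)
    u = m_hat / (np.sqrt(v_hat) + hp["eps"])                    # (S9)
    s_lin = max(1.0 - t * hp["linear_decay"], 0.0)              # (S10)
    s_exp = np.exp(-t * hp["exp_decay"])                        # (S11)
    phi = phi - hp["lr"] * s_lin * s_exp * u                    # (S12)
    return phi, m, v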
D.2 NADAMW UPDATE EQUATIONS
This optimizer family has 10 hyperparameters: the base learning rate αbase; the first and second moment momentum, β1, β2; the numerical stability term ε; ℓ2WD, an ℓ2 regularization strength; ℓ2AdamW, an AdamW-style weight decay; and a boolean, buse nesterov, to switch between NAdam and Adam. The learning rate schedule is based on a single cycle cosine decay with a warmup. It is controlled by 3 additional parameters – cwarmup, cconstant, and cmin learning rate mult. The learning rate is defined by:

u = \mathbb{1}\left[t < c_{\text{warmup}} T\right]   (S13)
\alpha_{\text{decay\&constant}} = (\alpha_{\text{base}} - c_{\text{min learning rate mult}})\left(0.5\cos\!\left(t\pi/(T - c_{\text{constant}})\right) + 0.5\right) + c_{\text{min learning rate mult}}   (S14–S16)
\alpha_{\text{warmup}} = \frac{t}{T\, c_{\text{warmup}}}   (S17)
\alpha = (1-u)\,\alpha_{\text{decay\&constant}} + u\,\alpha_{\text{warmup}}   (S18)

The update equations of NAdamW are quite similar to those of Adam8p. For clarity we list the full update here:

\phi^{(0)} = \text{problem specified random initialization}   (S19)
m^{(0)} = 0   (S20)
v^{(0)} = 0   (S21)
g^{(t)} = \frac{d}{d\phi^{(t)}}\left(f(x;\phi^{(t)}) + \ell_{2\text{wd}}\|\phi^{(t)}\|_2^2\right)   (S22)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1)   (S23)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2)   (S24)
\hat{m}^{(t)} = m^{(t)} / (1-\beta_1^{t+1})   (S25)
\hat{v}^{(t)} = v^{(t)} / (1-\beta_2^{t+1})   (S26)
u^{(t)}_{\text{heavy ball}} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S27)
u^{(t)}_{\text{nesterov}} = \left(\beta_1\hat{m}^{(t)} + (1-\beta_1)g^{(t)}\right) / (\sqrt{\hat{v}^{(t)}} + \epsilon)   (S28)
\phi^{(t+1)} = \phi^{(t)} - (1-b_{\text{use nesterov}})\,\alpha\, u^{(t)}_{\text{heavy ball}} - b_{\text{use nesterov}}\,\alpha\, u^{(t)}_{\text{nesterov}} - \alpha\,\ell_{2\text{AdamW}}\,\phi^{(t)}   (S29–S30)

E OPTIMIZER FAMILY SEARCH SPACES
E.1 SEARCH SPACE CONSIDERATIONS
The performance of random search critically depends on the boundaries of the original search space. Without prior knowledge about the problems, however, picking a good search space is difficult. To explore this we additionally choose search spaces after collecting and looking at the data. We then use these search spaces to simulate random search within the constraints via rejection sampling. To find these search spaces, we find the best hyperparameters for each task and construct new hyperparameter ranges with min and max values determined by the smallest and largest values of each hyperparameter which were the best hyperparameter for some task. This removes regions of the search space not used by any task. We also tested bounds based on the 5th and 95th percentile of best performing hyperparameters computed over all tasks. In the case of min and max, we find the optimal hyperparameters cover nearly all of the existing space, whereas the percentile based search spaces reduce the volume of the search hypercube by more than 90%, leaving us with only ∼100 hyperparameter configurations. In Figure 3, we find, in all cases, learning the hyperparameter list is much more efficient.
E.2 ADAM8P, ADAM6P, ADAM4P, ADAMLR SEARCH SPACES
For Adam1p, Adam4p, Adam6p, and Adam8p we sample the learning rate logarithmically between 1e-8 and 10. We parameterize β1 and β2 as 1 − x and sample x logarithmically between 1e-4 and 1, and between 1e-6 and 1, respectively. For learning rate schedules we sample the linear decay logarithmically between 1e-7 and 1e-4 and the exponential decay logarithmically between 1e-6 and 1e-3. We sample both ℓ1 and ℓ2 logarithmically between 1e-8 and 1e1.
E.3 NADAMW SEARCH SPACE
This search space was chosen heuristically in an effort to generalize to new problems. We would like to emphasize that it was not tuned: we used our insight from the Adam based optimizer families and chose this, and no iterations were done. We expect more iterations will improve not only in-distribution performance, but also generalization performance. The initial learning rate, αbase, is sampled from log space between 1e−5 and 1.0. 1 − β1 is sampled logarithmically between 1e−3 and 1.0. 1 − β2 is sampled logarithmically between 1e−5 and 1.0. ε is sampled logarithmically between 1e−8 and 1e4. We sample using nesterov (buse nesterov) 50% of the time. We sample ℓ2WD and ℓ2AdamW logarithmically between 1e−5 and 1e−1; with equal probabilities of a third, we either use both terms, zero out ℓ2WD, or zero out ℓ2AdamW. With 50% probability we use a nonzero minimum learning rate multiplier sampled logarithmically between 1e−5 and 1.0. With 50% probability we sample the warmup fraction, cwarmup, between 1e-5 and 1e-1; otherwise it is set to zero. Finally, we uniformly sample the amount of time the learning rate is held constant (cconstant) between 0 and 1.
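The E.3 distribution is straightforward to realize with log-uniform draws. A minimal sketch under our own naming (not the released sampling code):

import numpy as np

def log_uniform(rng, low, high):
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

def sample_nadamw_config(rng):
    cfg = {
        "lr": log_uniform(rng, 1e-5, 1.0),
        "beta1": 1.0 - log_uniform(rng, 1e-3, 1.0),
        "beta2": 1.0 - log_uniform(rng, 1e-5, 1.0),
        "epsilon": log_uniform(rng, 1e-8, 1e4),
        "use_nesterov": bool(rng.uniform() < 0.5),
        "constant_fraction": float(rng.uniform(0.0, 1.0)),
    }
    # With probability 1/3 each: use both regularizers, zero out l2WD, or zero out l2AdamW.
    l2_wd = log_uniform(rng, 1e-5, 1e-1)        # l2WD regularization strength
    adamw_wd = log_uniform(rng, 1e-5, 1e-1)     # AdamW-style weight decay
    mode = int(rng.integers(3))
    cfg["l2_reg"] = 0.0 if mode == 1 else l2_wd
    cfg["adamw_weight_decay"] = 0.0 if mode == 2 else adamw_wd
    # 50%: nonzero minimum learning rate multiplier; 50%: nonzero warmup fraction.
    cfg["min_lr_mult"] = log_uniform(rng, 1e-5, 1.0) if rng.uniform() < 0.5 else 0.0
    cfg["warmup_fraction"] = log_uniform(rng, 1e-5, 1e-1) if rng.uniform() < 0.5 else 0.0
    return cfg

rng = np.random.default_rng(0)
config = sample_nadamw_config(rng)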
F EXTENDED RELATED WORK
F.1 SETS OF TASKS
Benchmarks consisting of multiple tasks are becoming an increasingly common technique for measuring improvement in algorithm design. Reinforcement learning has Atari (Bellemare et al., 2013), DMLab (Beattie et al., 2016), gym (Brockman et al., 2016), and dm_control (Tassa et al., 2018). Natural language processing has evaluation sets such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), and the NLP Decathlon (McCann et al., 2018). In computer vision there is (Zhai et al., 2019), which studies transfer learning of image features. In black box optimization there is Nevergrad (Rapin & Teytaud, 2018), COmparing Continuous Optimizers (COCO) (Hansen et al., 2016), and a number of tasks to test Bayesian hyperparameter optimization presented in (Dewancker et al., 2016). For first order gradient methods there are unit tests for stochastic optimization (Schaul et al., 2013), which studies toy optimization functions, and DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Hyperparameter tuning practices on these benchmarks vary from tuning on each task separately to tuning one set of hyperparameters for all problems. In Atari (Bellemare et al., 2013), for example, it is common practice to tune hyperparameters on a subset of tasks and evaluate on the full set. This protocol can further be extended by leveraging unseen levels or games at test time, as done in Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018). We believe generalization to unseen tasks is key for learned algorithms to be useful, thus our learned search space experiments mirror this setting by making use of hold-out tasks.
Existing meta-learning datasets share similar goals to our work but focus on different domains. In few-shot learning there is MiniImageNet (Vinyals et al., 2016), which is built procedurally from the ImageNet dataset (Russakovsky et al., 2015). Meta-Dataset (Triantafillou et al., 2019) takes this further and also focuses on generalization by constructing few-shot learning tasks using images from a number of different domains for evaluation purposes. The automated machine learning community has OpenML (Vanschoren et al., 2013), with a focus on selecting and tuning non-neural algorithms. For learning optimizers, the use of task suites has been limited and ad-hoc. Many works use a single or small number of standard machine learning tasks (Andrychowicz et al., 2016; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a). Wichrowska et al. (2017) use a set of synthetic problems meant to emulate many different kinds of loss surfaces. While existing collections of tasks for optimizer evaluation exist, e.g. (Schneider et al., 2019), they contain too small a number of tasks to act as a comprehensive training set for learning algorithms, and many of their tasks are additionally too computationally expensive to be useful during learning.
F.2 HAND DESIGNED AND LEARNED OPTIMIZERS
Optimization is core to machine learning and thus the focus of extensive work. Methods such as Nesterov momentum (Nesterov, 1983), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014) have all shown considerable improvements in both the speed of optimization and ease of use, by exposing robust and easier to tune hyperparameters than SGD (Sivaprasad et al., 2019). Adaptive step size methods in particular have emerged at the forefront, with many works building from Adam, including AdamW (Loshchilov & Hutter, 2017), RAdam (Liu et al., 2019), Novograd (Ginsburg et al., 2019), and NAdam (Dozat, 2016).
Recently, there has been a focus on comparing optimizers either for best performance or for ease of use (Wilson et al., 2017; Choi et al., 2019; Schneider et al., 2019; Sivaprasad et al., 2019). This has proven difficult as performance is heavily dependent on the choice of search space for optimization hyperparameters (Choi et al., 2019). Learned optimizers represent a parallel thread in the development of optimizers. By learning, as opposed to hand-designing, optimizers, researchers hope to not only increase performance but also ease of use (e.g. minimize the number of hyperparameters required or lower hyperparameter sensitivity) (Bengio et al., 1990; Schmidhuber, 1995; Hochreiter et al., 2001). Recently, there has been renewed interest in parameterizing learning algorithms with neural networks and learning these optimizers on neural network based losses (Andrychowicz et al., 2016; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). Other approaches learn symbolic parameterizations for new optimizers (Bello et al., 2017). These various methods are all trained and evaluated on different distributions of tasks, making comparison across papers challenging. The dataset of tasks presented here will hopefully aid in the ability to compare and evaluate progress in learned optimizer research.
In this work, we develop a much more minimal type of “learned optimizer” than previous work, which developed new functional forms for the optimizer. Optimization involves not only the functional form of the optimizer, but also the rules for choosing hyperparameters and applying the optimizer. We focus on this second aspect of optimization and learn a hyperparameter search space to improve the performance of existing hand designed methods.
F.3 HYPERPARAMETER SEARCH
Hyperparameter search is a key component in machine learning. Considerable improvements have been made in language (Melis et al., 2017), computer vision (Snoek et al., 2012), and RL (Chen et al., 2018) simply by tuning better. Often no single hyperparameter configuration works well across all tasks for existing optimization methods. Most current hyperparameter search methods involve trying a very large number of hyperparameters for every new task, which is computationally infeasible for large tasks and additionally can severely limit the number of hyperparameters that can be tuned. Many common techniques, such as random search (Bergstra & Bengio, 2012; Bousquet et al., 2017), Bayesian optimization (Snoek et al., 2012; 2015), tree parzen estimators (Bergstra et al., 2011), or sequential halving (Kumar et al., 2018), require setting a hyperparameter search space by hand, which is not only difficult but often wildly inefficient. Learning hyperparameters or search strategies by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018), as well as under the term meta-learning in Chen et al. (2017), in which an LSTM is meta-trained to produce function locations to query. The cost of hyperparameter search is often large as each evaluation requires training a model to completion. Often multi-fidelity based approaches are used which leverage “simpler” tasks and transfer the resulting hyperparameters (Hutter et al., 2018).
Common approaches include training on partial function evaluations (Swersky et al., 2014; Domhan et al., 2015; Li et al., 2016; Klein et al., 2016; Falkner et al., 2018), or leveraging simplified data and models (Petrak, 2000; Zoph & Le, 2016; Brock et al., 2017). Our dataset of tasks serves as: a “simpler” set of tasks to train on; a large and diverse enough set of problems that optimization algorithms trained on it may be expected to generalize; and a framework to test transfer across different types of problems.
G LIST OF NADAM HPARAMS
Idx Lr warmup constant Min-LR-mult beta1 beta2 epsilon nesterov l2-reg l2-weight-decay
0 1.24e-3 0.000 0.477 1.01e-3 0.94666 0.94067 8.114e-8 False 0.000e+00 7.258e-5
1 5.33e-3 0.000 0.172 0.0 0.96047 0.99922 8.665e-8 True 0.000e+00 5.563e-3
2 2.12e-4 0.000 0.210 1.39e-3 0.62297 0.97278 1.540e-7 False 0.000e+00 5.361e-2
3 4.06e-1 0.000 0.324 0.0 0.99724 0.98680 1.079e+02 True 0.000e+00 1.562e-2
4 2.05e-2 0.000 0.885 1.57e-5 0.35731 0.86043 8.874e-5 True 0.000e+00 7.217e-2
5 5.95e-4 0.008 0.378 0.0 0.89130 0.99983 1.483e-7 True 0.000e+00 4.087e-2
6 7.53e-3 0.000 0.422 9.55e-4 0.69192 0.98434 3.593e-8 False 0.000e+00 3.060e-4
7 4.69e-3 0.000 0.509 0.0 0.99639 0.98820 2.056e-5 False 0.000e+00 3.552e-2
8 2.95e-1 0.000 0.201 0.0 0.99678 0.99981 7.498e+00 False 3.792e-4 3.463e-4
9 2.04e-3 0.000 0.527 0.0 0.49995 0.99755 5.630e-8 True 0.000e+00 2.796e-2
10 7.39e-1 0.001 0.556 3.31e-3 0.99691 0.80639 2.900e+03 False 0.000e+00 7.851e-2
11 8.12e-3 0.000 0.207 0.0 0.17785 0.96033 7.971e-2 False 0.000e+00 1.489e-2
12 3.33e-2 0.000 0.369 0.0 0.69592 0.99997 5.510e-6 True 0.000e+00 1.362e-5
13 6.95e-3 0.000 0.014 0.0 0.99412 0.99305 4.352e-7 False 0.000e+00 3.142e-5
14 1.88e-1 0.000 0.205 1.08e-1 0.98597 0.56531 3.335e+00 True 1.265e-5 3.868e-3
15 9.47e-4 0.007 0.452 0.0 0.43977 0.09422 2.120e-7 False 0.000e+00 6.902e-3
16 3.75e-3 0.000 0.184 0.0 0.87756 0.96128 3.163e-3 True 7.468e-5 2.627e-3
17 7.25e-1 0.000 0.495 0.0 0.99800 0.99781 3.608e+00 True 1.656e-5 3.911e-2
18 4.58e-3 0.000 0.107 3.66e-1 0.42294 0.99963 4.174e-6 True 0.000e+00 4.446e-3
19 3.07e-4 0.007 0.518 0.0 0.57863 0.99625 9.881e-6 False 0.000e+00 5.521e-2
20 2.94e-5 0.000 0.830 8.27e-5 0.96916 0.99896 7.782e-7 True 3.364e-4 3.416e-3
21 1.65e-4 0.002 0.457 2.70e-1 0.95280 0.04565 2.832e-6 True 0.000e+00 1.141e-2
22 9.17e-1 0.010 0.897 2.67e-2 0.45061 0.99244 4.945e-1 False 1.253e-3 0.000e+00
23 2.36e-3 0.000 0.986 0.0 0.98560 0.99997 1.080e-8 True 0.000e+00 3.023e-3
24 2.14e-2 0.000 0.128 0.0 0.98741 0.99336 1.266e-4 False 0.000e+00 5.194e-4
25 5.91e-2 0.000 0.062 0.0 0.99794 0.99383 3.447e+02 True 0.000e+00 3.935e-2
26 1.57e-3 0.000 0.251 0.0 0.91820 0.99991 4.675e-5 False 0.000e+00 4.112e-5
27 4.43e-1 0.000 0.702 0.0 0.94375 0.93551 2.335e-8 True 0.000e+00 8.325e-5
28 2.98e-3 0.008 0.046 0.0 0.68612 0.94232 6.614e-2 False 6.489e-5 0.000e+00
29 1.65e-2 0.004 0.082 4.92e-4 0.95717 0.99789 3.068e+01 True 0.000e+00 8.920e-2
30 5.58e-3 0.000 0.538 0.0 0.97559 0.99990 3.238e-8 True 0.000e+00 4.896e-4
31 8.54e-1 0.000 0.229 0.0 0.93129 0.50200 2.051e-2 False 2.068e-4 2.801e-2
32 7.38e-3 0.000 0.722 8.78e-2 0.21456 0.99752 2.862e-2 False 0.000e+00 8.439e-2
33 4.26e-4 0.001 0.923 2.06e-1 0.47239 0.99974 8.221e-5 False 1.248e-5 0.000e+00
34 6.04e-3 0.000 0.698 0.0 0.97849 0.91449 1.806e+00 False 3.183e-3 1.762e-2
35 8.86e-3 0.000 0.104 1.66e-1 0.98967 0.99720 1.493e-2 True 0.000e+00 2.253e-2
36 1.51e-2 0.000 0.431 1.99e-3 0.80488 0.97878 2.538e-8 True 0.000e+00 2.269e-5
37 2.50e-3 0.000 0.009 0.0 0.98127 0.99988 1.799e-7 False 0.000e+00 1.303e-2
38 3.42e-4 0.000 0.827 6.38e-1 0.25217 0.96572 2.928e-7 True 0.000e+00 1.318e-3
39 6.94e-5 0.000 0.085 0.0 0.98674 0.42709 2.387e-7 False 0.000e+00 2.071e-4
40 3.03e-2 0.001 0.313 0.0 0.90610 0.99997 4.449e-3 True 0.000e+00 2.813e-5
41 4.64e-3 0.000 0.495 2.26e-5 0.64658 0.54108 3.528e-8 False 0.000e+00 2.996e-5
42 2.25e-3 0.000 0.722 0.0 0.97967 0.97518 1.488e-7 True 1.812e-5 2.180e-2
43 6.66e-4 0.000 0.632 2.79e-5 0.65968 0.99997 6.848e-6 True 0.000e+00 3.130e-3
44 3.31e-3 0.000 0.146 0.0 0.90447 0.99970 6.618e-6 True 0.000e+00 2.184e-2
45 7.84e-4 0.016 0.124 0.0 0.95065 0.99685 2.141e-2 False 0.000e+00 4.024e-5
46 6.16e-3 0.016 0.623 0.0 0.98823 0.98744 1.616e-6 False 0.000e+00 1.544e-2
47 3.26e-4 0.000 0.738 1.61e-4 0.78425 0.99998 3.468e-3 False 0.000e+00 4.709e-2
48 4.12e-3 0.001 0.205 0.0 0.99561 0.75382 2.390e-6 True 0.000e+00 3.631e-2
49 6.26e-1 0.000 0.932 2.52e-3 0.99401 0.83521 2.431e+00 True 0.000e+00 1.048e-2
Top 50 hyperparameters found using the NAdamW search space. We find diverse learning rates, with very little warmup used. We additionally find most good performing optimizers make use of AdamW style weight decay. Finally, matching insight from (Choi et al., 2019), we find large values of ε.
H DESCRIPTION OF TASKS IN TASK SUITE
In this section we detail the task distribution used throughout this work. In addition to this text, a Tensorflow (Abadi et al., 2016) implementation is also released at github.com/google-research/google-research/tree/master/task_set.
H.1 SAMPLED TASKS
H.1.1 DEFAULT SAMPLED COMPONENTS
As many of the sampled tasks are neural networks, we define common sampling routines used by all the sampled tasks.
Activation functions: We define a distribution over activation functions which is sampled according to the following listing of name and weight. These are a mix of standard functions (relu, tanh) and less standard ones (cos).
• relu: 6
• tanh: 3
• cos: 1
• elu: 1
• sigmoid: 1
• swish (Ramachandran et al., 2017): 1
• leaky relu (with α = 0.4): 1
• leaky relu (with α = 0.2): 1
• leaky relu (with α = 0.1): 1
Initializations: We sample initializers according to a weighted distribution. Each initialization sample also optionally samples hyperparameters (e.g. for random normal initializers we sample the standard deviation of the underlying distribution).
• he normal (He et al., 2015): 2
• he uniform (He et al., 2015): 2
• glorot normal (Glorot & Bengio, 2010): 2
• glorot uniform (Glorot & Bengio, 2010): 2
• orthogonal: 1. We sample the “gain”, or multiplier on the orthogonal matrix, logarithmically between [0.1, 10].
• random uniform: 1.0. This is defined between [−s, s] where s is sampled logarithmically between [0.1, 10].
• random normal: 1.0. The std is sampled logarithmically between (0.1, 10).
• truncated normal: 1.0. The std is sampled logarithmically between (0.1, 10).
• variance scaling: 1.0. The scale is sampled logarithmically between (0.1, 10).
RNN Cores: We define a distribution over different types of RNN cores used by the sequential tasks. With equal probability we sample either a vanilla RNN (Elman, 1990), GRU (Chung et al., 2014), or LSTM (Hochreiter & Schmidhuber, 1997). For each cell we either sample 1 shared initialization method or sample a different initialization method per parameter vector, with a 4:1 ratio. We sample the core hidden dimension logarithmically between [32, 128].
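The weighted choices in H.1.1 above amount to sampling from normalized categorical distributions. A minimal sketch (illustrative names only, not the released sampling code):

import numpy as np

ACTIVATIONS = {"relu": 6, "tanh": 3, "cos": 1, "elu": 1, "sigmoid": 1,
               "swish": 1, "leaky_relu_0.4": 1, "leaky_relu_0.2": 1,
               "leaky_relu_0.1": 1}

def sample_weighted(rng, table):
    names = list(table)
    weights = np.array([table[n] for n in names], dtype=float)
    return names[rng.choice(len(names), p=weights / weights.sum())]

rng = np.random.default_rng(0)
activation = sample_weighted(rng, ACTIVATIONS)  # e.g. "relu" with probability 6/16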
H.1.2 SAMPLED DATASETS
Image Datasets: We sample uniformly from the following image datasets. Each dataset additionally has sampled parameters. For all datasets we make use of four data splits: train, valid-inner, valid-outer, test. Train is used to train models; valid-inner is used while training models to allow for modification of the training procedure (e.g. if validation loss doesn't increase, drop learning rate); valid-outer is used to select meta-parameters; test should not be used during meta-training. For all datasets, we sample a switch with low probability (10% of the time) to only use training data and thus not test generalization. This ensures that our learned optimizers are capable of optimizing a loss, as opposed to a mix of optimizing and generalizing.
Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (LeCun, 1998).
Fashion Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (Xiao et al., 2017).
Cifar10: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).
Cifar100: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).
{food101_32x32, coil100_32x32, deep_weeds_32x32, sun397_32x32}: These datasets take the original set of images and resize them to 32x32 using OpenCV's (Bradski, 2000) cubic interpolation. We ignore aspect ratio for this resize. Batch size is sampled logarithmically between [8, 256] (Bossard et al., 2014; Nene et al., 1996; Olsen et al., 2019; Xiao et al., 2010).
Imagenet32x32 / Imagenet16x16: The ImageNet 32x32 and 16x16 datasets as created by Chrabaszcz et al. (2017). Batch size is logarithmically sampled between [8, 256].
H.1.3 TEXT CLASSIFICATION
IMDB sentiment classification: We use text from the IMDB movie reviews dataset (Maas et al., 2011) and tokenize into subwords using a vocab size of 8k (Sennrich et al., 2015). We then take length-s random slices from each example, where s is sampled logarithmically between [8, 64]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. We sample the number of training examples logarithmically between [1000, 55000], and with 10% probability just use training data instead of valid/test, to test pure optimization as opposed to generalization.
H.1.4 CHARACTER AND WORD LANGUAGE MODELING
For the character and word language modeling datasets we make use of the following data sources: imdb movie reviews (Maas et al., 2011), amazon product reviews (ama) using the Books, Camera, Home, and Video subsets, each as separate datasets, LM1B (Chelba et al., 2013), and Wikipedia (Foundation) taken from the 20190301 dump using the zh, ru, ja, hab, and en language codes. We split each article by new lines and only keep resulting examples that contain more than 5 characters. For infrastructure reasons, we only use a million articles from each language and only 200k examples to build the tokenizer.
Byte encoding: We take length-s random slices of each example, where s is sampled logarithmically between [10, 160]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid/test, to test pure optimization as opposed to generalization.
Subword encoding: We encode the text as subwords with a vocab size of 8k (Sennrich et al., 2015).
We then take length-s random slices of each example, where s is sampled logarithmically between [10, 256]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid/test, to test pure optimization as opposed to generalization.
H.2 SAMPLED TASKS
H.2.1 MLP
This task family consists of a multi layer perceptron trained on flattened image data. The number of layers is sampled uniformly from [1, 6]. Layer hidden unit sizes are sampled logarithmically between [16, 128], with a different number of hidden units per layer. One activation function is chosen for the whole network and is chosen as described in H.1.1. One shared initializer strategy is also sampled. The image dataset used is also sampled. Two sampled configurations are shown below.

{
  "layer_sizes": [
    71
  ],
  "activation": "leaky_relu2",
  "w_init": [
    "he_normal",
    null
  ],
  "dataset": [
    "sun397_32x32",
    {
      "bs": 32,
      "just_train": false,
      "num_train": null
    },
    {
      "crop_amount": 0,
      "flip_left_right": false,
      "flip_
1. What is the main contribution of the paper regarding learned optimizers? 2. What are the strengths and weaknesses of the proposed TaskSet? 3. How does the reviewer assess the presentation and clarity of the paper's content? 4. What are the minor comments and concerns regarding Figure 1b and the application of TaskSet for meta-learning hyper-parameter initializations? 5. Is there any confusion or uncertainty regarding the motivation and utility of the TaskSet suite?
Review
Review The paper presents a suite of deep learning focused optimization problems that would facilitate the development of learned optimizers. This is very useful and can streamline research in learned optimizers while providing a benchmarking suite that can be used for training as well as evaluation. In my opinion, this is very valuable. However, the presentation of the task suite and then the subsequent application could use significant clarification. For example, it was not clear to me until I made multiple passes that the application presented in the paper is about meta-learning hyper-parameter initializations for deep learning optimizers by leveraging the suite to generate the meta-learning data set with various task executions. Moreover, given that the main advantage, to the best of my understanding, of the proposed TaskSet is in learning optimizers, the choice of tasks in the paper lacks proper qualitative or quantitative justification. As mentioned in the paper, while it is useful to have a wide set of tasks for better generalization of learned optimizers, too broad a set could hinder the meta-learning. In that case, it is not clearly discussed (at least in the main paper) why this set of tasks is selected and why/how it leads to better learned optimizers. If the suite consists of all combinations of deep learning architectures, applications and data sets without any meaningful discussion of why it is better for learning optimizers (or even for meta-learning hyper-parameter initializations) than just learning optimizers separately on each task (or even task class), it reduces the possible usefulness of the suite. Given the motivation and the utility of the suite, I am leaning towards an accept. However, the lack of proper justification for the choices (beyond just covering a laundry list of deep learning architectures, data sets and applications), to the best of my understanding, leaves me partially unsatisfied. Beyond the above, I have a few minor comments: Subsection 1.2 is very confusing. Why are we talking about hyperparameters if the taskset consists of first order optimization problems? It should be better (and more explicitly) presented as a potential application of the TaskSet suite for meta-learning hyper-parameter initializations for some optimizers. Figure 1b is confusing to me. I'm unable to wrap my head around what each line means, how the x axis is ordered, and why the fraction is always increasing. Moreover, given the nonconvex and randomized nature of many of the discussed tasks, how is it decided that an optimizer achieves a particular loss -- the optimizer might reach different levels with different restarts. This paper shows an application of TaskSet for meta-learning hyper-parameters of specific optimization algorithms. I am not sure, in the context of HPO, why one would use the TaskSet given something like the Bayesmark framework [A]. It has HPO tasks with associated data with the ability to add new tasks/data, and the task set covers a wider class of methods. [A] bayesmark.readthedocs.io
ICLR
Title TaskSet: A Dataset of Optimization Tasks
Abstract
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings, including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.1
1 INTRODUCTION
As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms has only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a), or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks.
1.1 TASKSET: A SET OF TASKS
We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers, or development of new learned learning algorithms. We call this suite of tasks TaskSet. Much in the same way that learned features in computer vision outpaced hand designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data driven approaches to discover optimization algorithms will replace their hand designed counterparts, resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned optimizer research. In this setting, a single “example” is an entire training procedure for a task defined by data, loss function, and architecture.
Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute, as almost all tasks can be trained for 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1.
1 redacted url
1.2 AMORTIZING HYPERPARAMETER SEARCH
Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). Unfortunately, most modern machine learning models also require extensive hyperparameter tuning. Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019). The most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads. We choose the elements in this list by leveraging the diversity of tasks in TaskSet, by meta-learning a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks. Although learning the list of hyperparameters is costly (in total we train ∼29 million models consisting of over 4,000 distinct hyperparameter configurations), our final published list is now available as a good starting guess for new tasks. Furthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release them as part of this work. We additionally release code in Tensorflow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list and can be easily applied to any model.
2 TASKSET: A SET OF TASKS
How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and that will be representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural network based tasks.
Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today); as a result most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased “task diversity” by including many different kinds of training algorithms, architectures, and datasets – inspired by past work in reinforcement learning which has demonstrated that large numbers of problems and increased diversity around some domain of interest are useful for both training and generalization (Heess et al., 2017; Tobin et al., 2017; Cobbe et al., 2018; OpenAI et al., 2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997). Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization. Specifically, we define a task as a set of 4 functions:
• Initialization: () → parameter initial values
• Data generator: data split (e.g. train / valid / test) → batch of data
• Forward pass: (batch of data, params) → loss
• Gradient function: (input data, params) → gradients (dloss/dparams)
A task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned. A code sketch of this interface follows below. We augment a set of “fixed” tasks, which have been designed by hand, with “sampled” tasks that are randomly generated task instances.
2.1 SAMPLED FAMILIES OF TASKS
Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic), and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks.
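The four-function task definition above maps naturally onto a small interface. The sketch below is our own illustration of that interface, not the released TaskSet API:

from typing import Any, Callable, NamedTuple

class Task(NamedTuple):
    """A task bundles everything needed for first-order optimization (Section 2)."""
    initialize: Callable[[], Any]            # () -> initial parameter values
    get_batch: Callable[[str], Any]          # data split -> batch of data
    loss: Callable[[Any, Any], float]        # (batch, params) -> scalar loss
    gradient: Callable[[Any, Any], Any]      # (batch, params) -> dloss/dparams

def train(task: Task, optimizer_step, num_steps: int = 10_000):
    """Train a task with any first-order optimizer exposing a step function."""
    params = task.initialize()
    state = None  # optimizer state, e.g. Adam moments
    for _ in range(num_steps):
        batch = task.get_batch("train")
        grads = task.gradient(batch, params)
        params, state = optimizer_step(params, grads, state)
    return params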
2.2 HAND DESIGNED TASKS
In addition to the sampled tasks, we also include 107 hand designed tasks. These consist of more common tasks that both improve the coverage beyond the sampled tasks and provide for better interpretability through a closer match to existing tasks in the literature. These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3.
2.3 AGGREGATE STATISTICS OF TASKSET
In Figure 1a we show histograms of compute times for all problems and find almost all problems train in under an hour (see Appendix C for per task family histograms). In Figure 1c we plot a histogram of the number of parameters per task. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well, while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A.
3 AMORTIZED HYPERPARAMETER SEARCH
As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists. This idea of learning lists of hyperparameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g. learning rate). While sometimes practitioners use a single optimizer – e.g. Adam (Kingma & Ba, 2014) with default hyperparameters – most practitioners will often run multiple optimizers and use a validation set to select the best performer.
3.1 OPTIMIZER FAMILIES
We define different parameterizations of hand designed optimizers as an optimizer family. The optimizer families we consider consist of:
• Adam1p: One hyperparameter, the fixed learning rate α
• Adam4p: Four Adam hyperparameters, α, β1, β2, and ε
• Adam6p: The Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays
• Adam8p: The hyperparameters in Adam6p plus two additional hyperparameters for ℓ1 and ℓ2 regularization terms
• NAdamW: A 10 hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay and weight decay
For the full update equations see Appendix D.1 for Adam and D.2 for NAdamW. We chose Adam based on its use in existing work, and NAdam based on performance shown in (Choi et al., 2019).
3.2 LEARNED HYPERPARAMETER LISTS
Traditionally researchers tune hyperparameters on a per model basis. While this often results in performance gains, it comes at the cost of immense compute, and researchers are almost never able to expend enough compute to saturate model performance (Shallue et al., 2018). As an alternative to per-problem tuning, we propose instead tuning the search strategy itself on a dataset of tasks and transferring the knowledge gained to new tasks of interest. This idea is already implicitly used by humans – e.g. we don't start a hyperparameter search with a learning rate of 10^6 – we use values that the community has found useful. This dataset-based tuning has a number of desirable properties. First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline. Second, we are less restricted by the number of optimizer parameters we search over or by needing to define reasonable search spaces. For example, if there are redundant regions of search space, our learned optimizer will be less likely to sample them repeatedly, unlike random search.
If there is a region of hyperparameter space that performs poorly on all problems, the learned search strategy will avoid it.

In this work we parameterize the learned search strategy as an ordered list of optimizers to try (i.e. a list of hyperparameter configurations). Given a fixed number of task evaluations, we would like to achieve the best possible performance on all tasks in the training set of tasks. For a length-k list of optimizers we define our loss as:

J(\theta_{1,\ldots,k}) = \sum_{\tau \in \text{tasks}} \left[ \min_{i \in 1..k} f(\tau, \theta_i) \right], \quad (1)

where \theta_i are the optimizer hyperparameters for element i in the list, and f is an appropriately normalized loss computed after training task \tau. We seek to find an optimal list of optimizers as (similar to (Wistuba et al., 2015b)):

\theta^*_{1,\ldots,k} = \arg\min_{\theta_{1,\ldots,k}} J(\theta_{1,\ldots,k}). \quad (2)

This is meant to serve as an example task, illustrative of the sort of research enabled by TaskSet. More advanced hyperparameter search strategies would no doubt yield even more performant results.

3.3 SCORING AN OPTIMIZER BY AVERAGING OVER TASKS
To score a task, we initialize the parameters of the task and run 10,000 iterations of an optimizer. We monitor loss on each data split (train, validation, test) every 200 steps using an average over 50 mini-batches per evaluation. For all data presented in this paper we also compute averages over 5 random task parameter initializations. A side effect of the diverse task dataset is that losses span multiple orders of magnitude, making direct aggregation of performance problematic. To remedy this we normalize the loss values for all tasks linearly between 0 and 1, where 1 is the validation loss at initialization and 0 is the lowest validation loss achieved by any tested optimizer. Loss values greater than the loss at initialization are clipped to 1. To collapse an entire normalized training curve into a scalar cost, we compute the mean normalized loss over the 10,000 iterations. We find empirically that this choice is similar to taking the minimum (Appendix B.5). We leave exploring alternative methods such as performance profiles (Dolan & Moré, 2002) and Nash averaging (Balduzzi et al., 2018) for future work.

3.4 GREEDY LEARNING FROM RANDOM SEARCH
Optimizing Eq. 2 is combinatorially expensive. To tractably solve this optimization problem, we introduce two approximations (Wistuba et al., 2015b). First, we shift the unconstrained search over the full space of optimizers to a search over a finite set of optimizers, Θ. This finite set can be computed ahead of time and decouples the expensive procedure of training each task with an optimizer from training the learned search space. Separating data collection and training in this way has been done for both hyperparameter search (Eggensperger et al., 2015) and neural architecture search (Klein & Hutter, 2019; Ying et al., 2019). In total we trained 1,000 optimizer configurations for each of Adam1p, Adam4p, Adam6p, Adam8p, and NAdamW on all 1,162 tasks with 5 random seeds per pair. Second, we use a greedy heuristic to approximate the combinatorial search over sets of k optimizers. For a single optimizer trial, k = 1, we select the best performing optimizer on average across all training tasks. We then continue to select optimizer parameters such that the minimum over all optimizer parameters per task, aggregated over all tasks, is minimized. This shifts the complexity from exponential in k to linear. Finding a length-k list of optimizers can thus be efficiently computed as follows:

\theta^*_1 = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} f(\tau, \theta) \right] \quad (3)

\theta^*_k = \arg\min_{\theta \in \Theta} \left[ \sum_{\tau \in \text{tasks}} \min\big(b, f(\tau, \theta)\big) \right] \quad \text{where} \quad b = \min_{i \in 1..(k-1)} f(\tau, \theta^*_i). \quad (4)

We note that the first argument of the inner min, b, can be computed once per task, as it does not depend on θ. Finally, as our tasks are stochastic, we order optimizers based on validation loss and report test loss (Van Hasselt et al., 2016). (Technically this means that increasing the number of optimizers could potentially decrease performance, but we find this rarely happens in practice.) This training strategy requires an original search space from which to collect data and build Θ. The search space we use is described in Appendix E.2. While large, we find that the optimal parameters for each task end up covering almost the entire space. At some point, no improvement can be obtained on any of the tasks in the dataset. At this point, we simply randomly order the remaining optimizers, though we expect more sophisticated methods could be employed.
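To make the scoring (§3.3) and greedy selection (Eqs. 3-4) concrete, here is a minimal NumPy sketch we wrote for illustration. It assumes a precomputed matrix of normalized losses of shape (num tasks, num optimizer configurations); the function names are ours, not those of the released code.

```python
import numpy as np

def normalize_curve(valid_losses, init_loss, best_loss):
    """Rescale a validation-loss curve so init -> 1 and the best loss seen by
    any optimizer -> 0, clip to [0, 1], then average over training (Sec. 3.3)."""
    norm = (np.asarray(valid_losses) - best_loss) / (init_loss - best_loss)
    return float(np.mean(np.clip(norm, 0.0, 1.0)))

def greedy_list(loss_matrix, k):
    """Greedily build a length-k list of optimizer indices (Eqs. 3-4).
    loss_matrix[t, o] is the normalized loss of optimizer o on task t."""
    chosen = []
    best_so_far = np.full(loss_matrix.shape[0], np.inf)  # b in Eq. 4, per task
    for _ in range(k):
        # Cost of appending each candidate: sum over tasks of min(b, f).
        scores = np.minimum(best_so_far[:, None], loss_matrix).sum(axis=0)
        idx = int(np.argmin(scores))
        chosen.append(idx)
        best_so_far = np.minimum(best_so_far, loss_matrix[:, idx])
    return chosen
```

Note that the first iteration (best_so_far = inf) reduces exactly to Eq. 3, and that once no candidate improves any task the sketch would simply repeat a configuration, which is where the random ordering of remaining optimizers mentioned above takes over.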
4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS
With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration, we hope to give a flavor of the types of research possible with TaskSet. Our main tool to show performance is figures that sweep the number of optimizer configurations on the x-axis, and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1).

4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH
To demonstrate the impact of learning a search space, we take the 1,162 tasks and split them into equal-sized train and test sets. We then learn a search strategy using optimizers from the Adam8p family following Eq. 4 on the train tasks. Results are shown in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyperparameters (Rand: Adam4p), as well as the Adam 8-dimensional search space (Rand: Adam8p). To get a better sense of performance, we show two additional “Refined” baselines which involve random sampling from a better search space. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so that 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in Appendix E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is that online hyperparameter optimization methods such as Bayesian optimization cannot be run without additional compute. Future work will explore offline Bayesian methods.

4.2 MORE TASKS LEAD TO BETTER GENERALIZATION
We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different sizes, and train hyperparameter lists using Eq. 4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Figure 3. We find that a large number of tasks (more than 100) are required to achieve near-optimal test performance. This is surprising to us given how simple our learned search strategy is (simply a list of hyperparameters), but not wholly so given past work studying generalization in RL (Cobbe et al., 2018).
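As a usage example, the task-count experiment above can be sketched by reusing the hypothetical greedy_list helper from the §3.4 sketch. This is an illustration only; the real experiment averages over multiple random splits.

```python
def train_task_count_curve(loss_matrix, k=10, sizes=(10, 30, 100, 300), seed=0):
    """Learn a length-k list on N training tasks, test on the rest (Sec. 4.2)."""
    rng = np.random.RandomState(seed)
    num_tasks = loss_matrix.shape[0]
    results = {}
    for n in sizes:
        perm = rng.permutation(num_tasks)
        train, test = perm[:n], perm[n:]
        lst = greedy_list(loss_matrix[train], k)
        # Test performance: best optimizer in the list, per held-out task.
        results[n] = float(loss_matrix[np.ix_(test, lst)].min(axis=1).mean())
    return results
```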
4.3 GENERALIZATION TO DIFFERENT TYPES OF PROBLEMS
For learned algorithms to be generally useful, some amount of generalization to unseen task families is required. To test this, we split our data into disjoint task types. We perform two splits: testing on RNN tasks and training on all others, and testing on autoencoder tasks and training on all others. As a best-case baseline, we additionally train search spaces on the test task families directly. We find an order of magnitude better sample efficiency than random search for both cases, and find our learned search space is close in performance to search spaces trained on just the testing tasks (Figure 3).

5 EXPERIMENTS: REALISTIC PROBLEMS
In §4.3 and §B.1 we explored generalization of learned hyperparameter lists to held-out tasks within the TaskSet dataset. While useful for analysis, these tasks are still far from the workloads commonly employed to solve real problems. In this section, we explore the performance of our learned search space on a number of state-of-the-art models. These models drastically differ from the training set of tasks in parameter count and compute cost. We see these experiments as evidence that the tasks presented in TaskSet capture enough of the structure of “realistic” problems that TaskSet can be used to improve larger-scale workloads. For all experiments in this section we take the optimizer ordering learned using the NAdamW optimizer family on all TaskSet tasks, and then apply the resulting search space to the target problem. The final ordered list of hyperparameters used is in Appendix G. We show results for ResNet50 on ImageNet, and Transformers on LM1B. Additional results with reinforcement learning using PPO are in Appendix B.2.

First, we explore ImageNet classification using a ResNet50. We take the TPU implementation with default settings from the official Tensorflow models repository (Tensorflow, 2019), and swap out different optimizers. We show accuracy computed over the course of training as well as best performance for a given hyperparameter budget in Figure 4. We find that the learned search space vastly outperforms learning rate tuned Adam.

Next, we explore language modeling on LM1B with a Transformer. We take the transformer (Vaswani et al., 2017) example implemented in Jax (Bradbury et al., 2018) with Flax (Flax Developers, 2020). We train using a 2x2 TPU V2 configuration for 100k iterations. Once again we take all other hyperparameters as-is and simply swap the optimizer implementation. We find the learned hyperparameter list dramatically outperforms the default optimizer setting and the fixed learning rate baseline. We emphasize that our method does not require any knowledge of the underlying problem to achieve these faster results. See Appendix B.3 for this same transformer with a budget of 20k iterations.

6 RELATED WORK
The idea of sets of tasks has been explored throughout machine learning. The majority of these suites are for use in evaluation, whereas our suite is targeted at meta-learning. The closest family of optimization tasks for evaluation to those presented here is DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Our task suite focuses on smaller problems and contains 50x more tasks.
Outside of evaluation, task suites in reinforcement learning such as Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018) focus on training algorithms that work across a variety of settings. The creation of TaskSet was motivated by the goal of learning learning algorithms, or meta-learning (Schmidhuber, 1987; 1995; Hochreiter et al., 2001), and in particular learned optimizers (Bengio et al., 1990; Andrychowicz et al., 2016; Bello et al., 2017; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). This use case is explored with this dataset in (Metz et al., 2020). In this work we do not use this task suite to train learned optimizers, but instead focus on learning a hyperparameter search strategy. Tuning hyperparameters by leveraging multiple tasks has been explored within the contexts of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as meta-learning (Reif et al., 2012; Gomes et al., 2012; Feurer et al., 2014; Wistuba et al., 2015b;a; Chen et al., 2017; Pfisterer et al., 2018). See Appendix F.1 for a full discussion of sets of tasks in machine learning, Appendix F.2 for more information on optimization in machine learning, and Appendix F.3 for a discussion of existing hyperparameter search methods.

7 DISCUSSION
Learning optimization algorithms represents a promising direction for accelerating machine learning research. For the resulting algorithms to become useful tools, however, we must further understand the relationships between training tasks, meta-optimization, and both iid and out-of-distribution generalization. This work takes steps towards this goal by introducing a significantly larger set of optimization tasks than ever previously considered. As an example use-case, we provide a thorough analysis of how TaskSet enables meta-optimization of simple, but performant, hyperparameter lists. Despite this approach’s simplicity, the training of learned learning algorithms is computationally expensive. We hope to explore alternative parameterizations which will increase efficiency by, e.g., leveraging previous evaluations or partial model training (Swersky et al., 2014; Li et al., 2016). We are releasing the optimal hyperparameter list we have found as a drop-in replacement optimizer in a variety of deep learning frameworks (Tensorflow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018)) in the hope that the research community finds it useful. We believe this represents a new set of reasonable optimizer defaults for new problems. Finally, we hope TaskSet encourages more standardized research on general purpose optimizers.

A TASKSET VISUALIZATION
For a qualitative view, we constructed a feature space consisting of performance measurements for each task+optimizer pair (see §3.3). This forms a dense matrix of size number-of-tasks by number-of-optimizers. We then perform t-SNE (Maaten & Hinton, 2008; Van Der Maaten, 2014) to reduce the dimensionality to two and plot the results, coloring by task family (Figure S1). Clusters in this space correspond to tasks that work well with similar optimizers. We find diversity of tasks, with clusters occurring around similar families of tasks.

A.1 TSNE OF TASKSET
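A minimal sketch of this visualization, assuming the per-task, per-optimizer cost matrix from §3.3 is available and using scikit-learn's t-SNE (our choice of library for illustration; the paper cites Maaten & Hinton (2008)):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_taskset_tsne(cost_matrix, family_ids):
    """cost_matrix: (num_tasks, num_optimizers) normalized costs (Sec. 3.3);
    family_ids: integer task-family label per task, used only for coloring."""
    xy = TSNE(n_components=2, random_state=0).fit_transform(cost_matrix)
    plt.scatter(xy[:, 0], xy[:, 1], c=family_ids, cmap="tab20", s=4)
    plt.title("Tasks embedded by optimizer-performance profile")
    plt.show()
```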
B ADDITIONAL EXPERIMENTS
B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS
Training learned algorithms on large models is often infeasible for computational reasons. As such, one form of generalization needed when building learned algorithms is the ability to transfer to different sized models. As shown in Figure 1, the tasks in this suite contain a wide range of parameter counts and can thus be used to test this kind of generalization. We split the tasks into 8 groups – one group per order of magnitude in parameter count – and train hyperparameter lists on one range and test on the rest. In Figure S2 we plot the ratio of the loss achieved on the training tasks to that achieved on the target parameter range. We find peak performance around the model sizes used for training, and smooth falloff as the testing tasks become more dissimilar as measured by parameter count. We note that our problems are not evenly distributed across these groups, thus each group will contain a different percentage of the underlying tasks. While this potentially confounds these results, we believe a similar bias occurs in realistic workloads as well.

Figure S2 (axes: number of parameters (log10) vs. train J / test J; series for training buckets 0-1, 3-4, 6-7, and all): We show learned search space generalization, measured as a ratio of the loss achieved in training and testing, versus the number of task parameters used during search space training. Generalization falls off as one moves further away from the training regime. In black we show that a uniform mixture of the 7 parameter buckets does not fall off.

B.2 REINFORCEMENT LEARNING WITH PPO
Figure S3: We find our learned hyperparameter lists perform about as well as random search on the NAdamW search space, and worse than random search on the learning rate tuned Adam search space.

We test the learned hyperparameter lists on two continuous control reinforcement learning environments, half cheetah and humanoid, from Gym’s Mujoco environments (Todorov et al., 2012; Brockman et al., 2016). We use TF-Agents (Guadarrama et al., 2018) with all non-optimizer hyperparameters set by searching over a mixture of environments. In Figure S3 we find our learned hyperparameter lists achieve comparable or slightly worse performance, and do not outperform learning rate tuning of Adam in either efficiency or final performance. To diagnose this behavior we ran all 1k optimizers for both problems and found the learned hyperparameter list performs comparably to random search in the underlying space. To probe further, we computed the Spearman correlation between the performance of each optimizer on these problems and its performance on the rest of the tasks in the task suite. We found considerably worse correlations than were present for tasks within TaskSet. This is not surprising, as TaskSet contains no reinforcement learning problems.

B.3 LM1B TARGETING 20K ITERATIONS
We show a transformer on LM1B similar to that shown in §5, except run for only 20k iterations, a fifth of the steps. Results are shown in Figure S4. We find the learned hyperparameter lists are much more efficient than either of the baselines.

Figure S4: We find our learned hyperparameter lists outperform learning rate tuned Adam with both a constant learning rate and a fixed learning rate schedule on a 53M-parameter Transformer trained on LM1B. Left: Learning curves for the best of the optimizers. Right: Number of optimizers tried vs best test loss.
B.4 PROBING SHORT HORIZON
Often the goal when training a learned optimizer is to minimize performance after training some number of iterations. This is extremely computationally expensive and in practice approximations must be used. One common family of approximations is short horizon based methods. These methods rely upon somehow truncating training so that updates can be made to the learned optimizer more frequently. This is commonly done via truncated backprop (Werbos, 1990; Wichrowska et al., 2017; Metz et al., 2019a; Wu et al., 2016), or proxy objectives such as only training for a handful of epochs (Zoph & Le, 2017). While this short horizon proxy is certainly not optimal (Wu et al., 2016), the performance gains are immense and in practice this is what makes meta-training optimizers feasible. In our task suite, we test this short horizon learning by training hyperparameter lists using only some finite number of training iterations per task, and then testing in the full training regime (10k steps). Results are shown in Figure S5. We find that even when learning the hyperparameter list on a mere 200 steps, our hyperparameter list continues to generalize and outperforms random search on Adam8p. This is promising, as it suggests that training the learned hyperparameter list can be done with 1/50th of the total compute. This result is surprising to us, as prior work indicates the effect of this bias can be severe (Wu et al., 2016; Metz et al., 2019a). We suspect it is due to the simplicity of the learned parameter space, but leave a thorough analysis of this for future work.

Figure S5: Hyperparameter lists trained on short horizon data generalize remarkably well. On the y-axis we show performance evaluated on the full 10k training iterations for a given number of optimizers tried (x-axis). In color we show the different numbers of steps used when evaluating task-optimizer performance when training the hyperparameter list.

Figure S6: Left: Aggregate performance (y-axis) vs number of optimizers tried (x-axis) for different normalization and aggregation techniques. In each curve we train the hyperparameter list with a different normalization and aggregation strategy and test with the default normalization and aggregation technique described in §3.3. We find some strategies are near identical in performance (e.g. min norm), while others perform significantly worse – e.g. last quantile norm. In both cases, however, we still perform better than the underlying random search. Center: Correlation between the default normalization and the quantile-based normalization strategy. Correlation is quite low – 0.193 Pearson’s correlation. Right: Correlation between the default normalization using a mean to aggregate validation loss over the course of training vs using a min. We find a much higher correlation of 0.911.

B.5 CHOICE OF NORMALIZATION FUNCTION
There is no easy way to define a single metric for optimizer performance over a mixture of tasks. This paper picks a single normalization strategy based on the minimum validation loss and the validation loss at initialization, presented in §3.3. In this section we show the impact of choosing a different normalization and/or aggregation technique. First, instead of computing the mean over learning curves as described in §3.3, we compute a min. Second, instead of rescaling based on init and min, we linearly rescale based on the 95th percentile of validation loss and the minimum validation loss achieved at the end of training each task. In Figure S6 we show learned hyperparameter list training and testing performance as a function of the number of optimizers tried when training with different normalization techniques. We find using the min instead of the mean results in a negligible change, while using the percentile loss hurts performance more significantly. This difference can be explained by Figures S6b and S6c, where we show correlations between the two losses. We find the percentile loss has a much weaker correlation to the default normalizer. We suspect this difference is due to the fact that many optimizers diverge on tasks. By using the 95th percentile we upweight optimizers that do not diverge.
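The normalization variants compared here can be sketched as small modifications of the default normalizer from §3.3. This is a sketch under our naming assumptions, not the released code.

```python
import numpy as np

def default_cost(curve, init_loss, best_loss):
    """Mean of the curve rescaled so init -> 1 and the best observed loss -> 0."""
    norm = np.clip((np.asarray(curve) - best_loss) / (init_loss - best_loss), 0, 1)
    return float(norm.mean())

def min_cost(curve, init_loss, best_loss):
    """Identical rescaling, but aggregate with a min instead of a mean."""
    norm = np.clip((np.asarray(curve) - best_loss) / (init_loss - best_loss), 0, 1)
    return float(norm.min())

def quantile_cost(curve, final_losses_all_optimizers):
    """Rescale between the 95th percentile and the min of final validation losses."""
    hi = np.percentile(final_losses_all_optimizers, 95)
    lo = np.min(final_losses_all_optimizers)
    norm = np.clip((np.asarray(curve) - lo) / (hi - lo), 0, 1)
    return float(norm.mean())
```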
B.6 TASK FAMILIES ARE DIVERSE
To show the effects of diversity we train and test hyperparameter lists on each pair of task families. We additionally normalize each column from 0-1 to account for different mean losses across tasks. Results are shown in Figure S7. We do find some similarity between tasks – e.g. between the MAF and NVP models – but no two task families exhibit the same performance characteristics (no duplicate columns), suggesting that each task family provides a different contribution to the space of all tasks. We also find that when training on certain “far away” task families, e.g. the quadratic family, performance is poor on most other task families.

Figure S7: Learning hyperparameter lists using one task family and testing on the remainder of task families. We normalize each column from 0-1 to account for different mean losses across tasks. Lower loss means better performance. We find some groups of similar tasks, but in general no two task families behave identically.

B.7 EFFECTS OF THE META-TRAINING SEARCH SPACE SIZE
Our offline learning technique described in §3.4 hinges on a finite set of optimizers collected via random search. This set is denoted by Θ in Eq. 4. In this section we probe the impact of its size. We take different sized subsets of the thousand Adam8p optimizer configurations and train and test search spaces on different iid splits of tasks. We then plot performance as a function of this number of optimizers in Figure S9. Moving right in this figure corresponds to increasing the compute needed to train the learned hyperparameter list. We find performance continues to improve as the size of Θ grows. Given the high dimension of our meta-parameters, 8, this is not a surprise, as the number of evaluations needed to explore the space grows exponentially. We find that the full thousand trials are needed to outperform learning rate tuned Adam when only given a single optimizer evaluation. We find around 100 optimizers (the size of Θ) are needed in the case of 10 optimizer trials (k = 10). Overall this suggests that random search might not be the most efficient learning method for creating hyperparameter lists. This is especially true as we work with optimizer families that have more hyperparameters. Other approximate learning methods should likely be explored, such as truncated backprop through time as used by the learned optimizer community (Metz et al., 2019a), and/or population based methods (Balduzzi et al., 2019).
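The search-space-size probe above can likewise be sketched by subsampling the optimizer pool before running the hypothetical greedy_list helper from the §3.4 sketch (illustrative code under our naming assumptions):

```python
def theta_size_sweep(loss_matrix, sizes=(10, 100, 1000), k=10, seed=0):
    """Probe the effect of |Theta| (B.7): learn lists from random subsets of
    the optimizer pool on train tasks, then evaluate on held-out tasks."""
    rng = np.random.RandomState(seed)
    t = rng.permutation(loss_matrix.shape[0])
    train, test = t[: len(t) // 2], t[len(t) // 2:]
    out = {}
    for s in sizes:
        cols = rng.choice(loss_matrix.shape[1], size=s, replace=False)
        lst = [cols[i] for i in greedy_list(loss_matrix[np.ix_(train, cols)], k)]
        out[s] = float(loss_matrix[np.ix_(test, lst)].min(axis=1).mean())
    return out
```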
Figure S8: Timings computed for each task family (x-axis: task family, from losg_tasks and quadratic through word_rnn_language_model; y-axis: time to train 10k steps, from 1 sec to 2 hr). We find most task families have a narrow distribution of compute times.

Figure S9: Performance continues to improve as more and more optimizers are used when training the search spaces. On the x-axis we show the number of optimizers (the size of Θ, the number of hyperparameter evaluations used in training the learned hyperparameter list), and on the y-axis the test loss achieved when applying the learned search space for a given fixed length (i.e. different values of k, shown in color). We plot the median, with the 25-75 percentile range shaded, over different random optimizer samples and iid task splits. Stars (with horizontal guide lines) denote the best search for the corresponding number of hyperparameters for learning rate tuned Adam in half orders of magnitude.

C TASK TIMINGS
In Figure S8 we show box plots of training times for each problem. For each task we use the median step time recorded over a mixture of different physical devices, multiplied by 10k to estimate a full training time. Future versions of this dataset of tasks will contain more variation within each task family.

D OPTIMIZER FAMILY UPDATE EQUATIONS
D.1 ADAM8P UPDATE EQUATIONS
The 8 meta-parameters are: the learning rate α; the first and second moment momentum, β1, β2; the numerical stability term, ε; the ℓ2 and ℓ1 regularization strengths; and the learning rate schedule constants λ_exp_decay and λ_linear_decay. For Adam6p, we set ℓ1 and ℓ2 to zero.

\phi^{(0)} = \text{problem-specified random initialization} \quad (S1)
m^{(0)} = 0 \quad (S2)
v^{(0)} = 0 \quad (S3)
g^{(t)} = \frac{d}{d\phi^{(t)}}\big(f(x;\phi^{(t)}) + \ell_2\|\phi^{(t)}\|_2^2 + \ell_1\|\phi^{(t)}\|_1\big) \quad (S4)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1) \quad (S5)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2) \quad (S6)
\hat{m}^{(t)} = m^{(t)} / (1-\beta_1^{t+1}) \quad (S7)
\hat{v}^{(t)} = v^{(t)} / (1-\beta_2^{t+1}) \quad (S8)
u^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon) \quad (S9)
s_{\text{linear}}^{(t)} = \max(1 - t\lambda_{\text{linear\_decay}}, 0) \quad (S10)
s_{\text{exp}}^{(t)} = \exp(-t\lambda_{\text{exp\_decay}}) \quad (S11)
\phi^{(t+1)} = \phi^{(t)} - \alpha\, s_{\text{linear}}^{(t)}\, s_{\text{exp}}^{(t)}\, u^{(t)} \quad (S12)

D.2 NADAMW UPDATE EQUATIONS
This optimizer family has 10 hyperparameters: the base learning rate, α_base; first and second moment momentum, β1, β2; the numerical stability term, ε; the ℓ2 regularization strength ℓ2WD; the AdamW-style weight decay ℓ2AdamW; and a boolean to switch between NAdam and Adam, b_use_nesterov. The learning rate schedule is based on a single cycle cosine decay with a warmup. It is controlled by 3 additional parameters – c_warmup, c_constant, and c_min_learning_rate_mult. The learning rate is defined by:

u = \mathbb{1}\big[c_{\text{warmup}} T > t\big] \quad (S13)
\alpha_{\text{decay\&constant}} = (\alpha_{\text{base}} - c_{\text{min\_learning\_rate\_mult}})\big(0.5\cos(t\pi/(T - c_{\text{constant}})) + 0.5\big) + c_{\text{min\_learning\_rate\_mult}} \quad (S14)
\alpha_{\text{warmup}} = \frac{t}{T c_{\text{warmup}}} \quad (S17)
\alpha = (1-u)\,\alpha_{\text{decay\&constant}} + u\,\alpha_{\text{warmup}} \quad (S18)

The update equations of NAdamW are quite similar to those of Adam8p. For clarity we list the full update here:

\phi^{(0)} = \text{problem-specified random initialization} \quad (S19)
m^{(0)} = 0 \quad (S20)
v^{(0)} = 0 \quad (S21)
g^{(t)} = \frac{d}{d\phi^{(t)}}\big(f(x;\phi^{(t)}) + \ell_{2\text{WD}}\|\phi^{(t)}\|_2^2\big) \quad (S22)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1) \quad (S23)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2) \quad (S24)
\hat{m}^{(t)} = m^{(t)} / (1-\beta_1^{t+1}) \quad (S25)
\hat{v}^{(t)} = v^{(t)} / (1-\beta_2^{t+1}) \quad (S26)
u_{\text{heavy ball}}^{(t)} = \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + \epsilon) \quad (S27)
u_{\text{nesterov}}^{(t)} = \big(\beta_1\hat{m}^{(t)} + (1-\beta_1)g^{(t)}\big) / (\sqrt{\hat{v}^{(t)}} + \epsilon) \quad (S28)
\phi^{(t+1)} = \phi^{(t)} - (1-b_{\text{use nesterov}})\,\alpha\, u_{\text{heavy ball}}^{(t)} - b_{\text{use nesterov}}\,\alpha\, u_{\text{nesterov}}^{(t)} - \alpha\,\ell_{2\text{AdamW}}\,\phi^{(t)} \quad (S29{-}S30)
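As a cross-check of the update equations above, here is a minimal NumPy sketch of a single Adam8p step (Eqs. S1-S12). This is our illustrative transcription, not the released implementation; the ℓ1 term uses the standard sign subgradient.

```python
import numpy as np

def adam8p_step(phi, m, v, grad_fn, t, hp):
    """One Adam8p update (Eqs. S1-S12). hp: dict of the 8 meta-parameters."""
    # Gradient of the regularized loss (Eq. S4).
    g = grad_fn(phi) + 2.0 * hp["l2"] * phi + hp["l1"] * np.sign(phi)
    m = hp["beta1"] * m + (1 - hp["beta1"]) * g          # (S5)
    v = hp["beta2"] * v + (1 - hp["beta2"]) * g ** 2     # (S6)
    m_hat = m / (1 - hp["beta1"] ** (t + 1))             # (S7)
    v_hat = v / (1 - hp["beta2"] ** (t + 1))             # (S8)
    u = m_hat / (np.sqrt(v_hat) + hp["eps"])             # (S9)
    # Combined linear and exponential learning rate decay (S10-S11).
    s = max(1 - t * hp["linear_decay"], 0.0) * np.exp(-t * hp["exp_decay"])
    return phi - hp["lr"] * s * u, m, v                  # (S12)
```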
E OPTIMIZER FAMILY SEARCH SPACES
E.1 SEARCH SPACE CONSIDERATIONS
The performance of random search critically depends on the boundaries of the original search space. Without prior knowledge about the problems, however, picking a good search space is difficult. To explore this, we additionally choose search spaces after collecting and looking at the data. We then use these search spaces to simulate random search within the constraints via rejection sampling. To find these search spaces, we find the best hyperparameters for each task and construct new hyperparameter ranges with min and max values determined by the smallest and largest values of each hyperparameter that was the best hyperparameter for some task. This removes regions of the search space not used by any task. We also tested bounds based on the 5th and 95th percentiles of the best performing hyperparameters computed over all tasks. In the case of min and max, we find the optimal hyperparameters cover nearly all of the existing space, whereas the percentile-based search space reduces the volume of the search hypercube by more than 90%, leaving us with only ∼100 hyperparameter configurations. In Figure 3 we find that, in all cases, learning the hyperparameter list is much more efficient.

E.2 ADAM8P, ADAM6P, ADAM4P, ADAMLR SEARCH SPACES
For Adam1p, Adam4p, Adam6p, and Adam8p we sample the learning rate logarithmically between 1e-8 and 10. We parameterize β1 and β2 as 1 − x and sample x logarithmically between 1e-4 and 1, and between 1e-6 and 1, respectively. For learning rate schedules we sample the linear decay logarithmically between 1e-7 and 1e-4, and the exponential decay logarithmically between 1e-6 and 1e-3. We sample both ℓ1 and ℓ2 logarithmically between 1e-8 and 1e1.

E.3 NADAMW SEARCH SPACE
This search space was chosen heuristically in an effort to generalize to new problems. We would like to emphasize that it was not tuned: we used our insight from the Adam-based optimizer families and chose it directly; no iterations were done. We expect more iteration would improve not only in-distribution performance, but also generalization performance. The initial learning rate, α_base, is sampled logarithmically between 1e−5 and 1.0. 1 − β1 is sampled logarithmically between 1e−3 and 1.0. 1 − β2 is sampled between 1e−5 and 1.0. ε is sampled logarithmically between 1e−8 and 1e4. We sample using nesterov (b_use_nesterov) 50% of the time. We sample ℓ2WD and ℓ2AdamW logarithmically between 1e−5 and 1e−1. With equal probabilities of a third, we either use both terms, zero out ℓ2WD, or zero out ℓ2AdamW. With 50% probability we use a nonzero minimum learning rate multiplier sampled logarithmically between 1e−5 and 1.0. With 50% probability we sample the warmup fraction, c_warmup, between 1e-5 and 1e-1; otherwise it is set to zero. Finally, we uniformly sample the amount of time the learning rate is held constant (c_constant) between 0 and 1.
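A sketch of this sampler in Python, transcribing the ranges above (function and key names are ours; the actual sampling code may differ):

```python
import numpy as np

def log_uniform(rng, lo, hi):
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

def sample_nadamw_config(rng):
    """Draw one NAdamW configuration from the search space in Appendix E.3."""
    cfg = {
        "lr": log_uniform(rng, 1e-5, 1.0),
        "beta1": 1.0 - log_uniform(rng, 1e-3, 1.0),
        "beta2": 1.0 - log_uniform(rng, 1e-5, 1.0),
        "eps": log_uniform(rng, 1e-8, 1e4),
        "use_nesterov": bool(rng.rand() < 0.5),
        "l2_wd": log_uniform(rng, 1e-5, 1e-1),
        "l2_adamw": log_uniform(rng, 1e-5, 1e-1),
    }
    which = rng.randint(3)  # use both decay terms, or zero out one of them
    if which == 1:
        cfg["l2_wd"] = 0.0
    elif which == 2:
        cfg["l2_adamw"] = 0.0
    cfg["min_lr_mult"] = log_uniform(rng, 1e-5, 1.0) if rng.rand() < 0.5 else 0.0
    cfg["warmup"] = log_uniform(rng, 1e-5, 1e-1) if rng.rand() < 0.5 else 0.0
    cfg["constant_frac"] = float(rng.uniform(0.0, 1.0))
    return cfg
```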
F EXTENDED RELATED WORK
F.1 SETS OF TASKS
Benchmarks consisting of multiple tasks are becoming an increasingly common technique for measuring improvement in algorithm design. Reinforcement learning has Atari (Bellemare et al., 2013), DMLab (Beattie et al., 2016), gym (Brockman et al., 2016), and dm_control (Tassa et al., 2018). Natural language processing has evaluation sets such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), and the NLP Decathlon (McCann et al., 2018). In computer vision there is (Zhai et al., 2019), which studies transfer learning of image features. In black box optimization there is Nevergrad (Rapin & Teytaud, 2018), COmparing Continuous Optimizers (COCO) (Hansen et al., 2016), and a number of tasks to test Bayesian hyperparameter optimization presented in (Dewancker et al., 2016). For first order gradient methods there are unit tests for stochastic optimization (Schaul et al., 2013), which study toy optimization functions, and DeepObs (Schneider et al., 2019), which includes 20 neural network tasks. Hyperparameter tuning practices on these benchmarks vary between tuning on each task separately and tuning one set of hyperparameters for all problems. In Atari (Bellemare et al., 2013), for example, it is common practice to tune hyperparameters on a subset of tasks and evaluate on the full set. This protocol can further be extended by leveraging unseen levels or games at test time, as done in Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018). We believe generalization to unseen tasks is key for learned algorithms to be useful; thus our learned search space experiments mirror this setting by making use of held-out tasks.

Existing meta-learning datasets share similar goals to our work but focus on different domains. In few-shot learning there is MiniImageNet (Vinyals et al., 2016), which is built procedurally from the ImageNet dataset (Russakovsky et al., 2015). Meta-Dataset (Triantafillou et al., 2019) takes this further and also focuses on generalization by constructing few-shot learning tasks using images from a number of different domains for evaluation purposes. The automated machine learning community has OpenML (Vanschoren et al., 2013), with a focus on selecting and tuning non-neural algorithms. For learning optimizers, the use of task suites has been limited and ad hoc. Many works use a single or small number of standard machine learning tasks (Andrychowicz et al., 2016; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a). Wichrowska et al. (2017) use a set of synthetic problems meant to emulate many different kinds of loss surfaces. While existing collections of tasks exist for optimizer evaluation, e.g. (Schneider et al., 2019), they contain too small a number of tasks to act as a comprehensive training set for learning algorithms, and many of their tasks are additionally too computationally expensive to be useful during learning.

F.2 HAND DESIGNED AND LEARNED OPTIMIZERS
Optimization is core to machine learning and thus the focus of extensive work. Methods such as Nesterov momentum (Nesterov, 1983), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014) have all shown considerable improvements in both the speed of optimization and ease of use, by exposing robust and easier-to-tune hyperparameters than SGD (Sivaprasad et al., 2019). Adaptive step size methods in particular have emerged at the forefront, with many works building on them, including AdamW (Loshchilov & Hutter, 2017), RAdam (Liu et al., 2019), Novograd (Ginsburg et al., 2019), and NAdam (Dozat, 2016).
Recently, there has been a focus on comparing optimizers either for best performance or for ease of use (Wilson et al., 2017; Choi et al., 2019; Schneider et al., 2019; Sivaprasad et al., 2019). This has proven difficult, as performance is heavily dependent on the choice of search space for the optimization hyperparameters (Choi et al., 2019).

Learned optimizers represent a parallel thread in the development of optimizers. By learning, as opposed to hand-designing, optimizers, researchers hope to not only increase performance but also ease of use (e.g. minimize the number of hyperparameters required or lower hyperparameter sensitivity) (Bengio et al., 1990; Schmidhuber, 1995; Hochreiter et al., 2001). Recently, there has been renewed interest in parameterizing learning algorithms with neural networks and learning these optimizers on neural network based losses (Andrychowicz et al., 2016; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). Other approaches learn symbolic parameterizations for new optimizers (Bello et al., 2017). These various methods are all trained and evaluated on different distributions of tasks, making comparison across papers challenging. The dataset of tasks presented here will hopefully aid in the ability to compare and evaluate progress in learned optimizer research.

In this work, we develop a much more minimal type of “learned optimizer” than previous work, which developed new functional forms for the optimizer. Optimization involves not only the functional form of the optimizer, but also the rules for choosing hyperparameters and applying the optimizer. We focus on this second aspect of optimization and learn a hyperparameter search space to improve the performance of existing hand designed methods.

F.3 HYPERPARAMETER SEARCH
Hyperparameter search is a key component in machine learning. Considerable improvements have been made in language modeling (Melis et al., 2017), computer vision (Snoek et al., 2012), and RL (Chen et al., 2018) simply by tuning better. Often no single hyperparameter configuration works well across all tasks for existing optimization methods. Most current hyperparameter search methods involve trying a very large number of hyperparameters for every new task, which is computationally infeasible for large tasks and can additionally severely limit the number of hyperparameters that can be tuned. Many common techniques, such as random search (Bergstra & Bengio, 2012; Bousquet et al., 2017), Bayesian optimization (Snoek et al., 2012; 2015), tree Parzen estimators (Bergstra et al., 2011), or sequential halving (Kumar et al., 2018), require setting a hyperparameter search space by hand, which is not only difficult but often wildly inefficient. Learning hyperparameters or search strategies by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018), as well as under the term meta-learning in Chen et al. (2017), in which an LSTM is meta-trained to produce function locations to query. The cost of hyperparameter search is often large, as each evaluation requires training a model to completion. Often multi-fidelity based approaches are used, which leverage “simpler” tasks and transfer the resulting hyperparameters (Hutter et al., 2018). Common approaches include training on partial function evaluations (Swersky et al., 2014; Domhan et al., 2015; Li et al., 2016; Klein et al., 2016; Falkner et al., 2018),
or leveraging simplified data and models (Petrak, 2000; Zoph & Le, 2016; Brock et al., 2017). Our dataset of tasks serves as: a “simpler” set of tasks to train on; a large and diverse enough set of problems that optimization algorithms trained on it may be expected to generalize; and a framework to test transfer across different types of problems.

G LIST OF NADAMW HPARAMS
Idx | Lr | warmup | constant | Min LR mult | beta1 | beta2 | epsilon | nesterov | l2 reg | l2 weight decay
0 | 1.24e-3 | 0.000 | 0.477 | 1.01e-3 | 0.94666 | 0.94067 | 8.114e-8 | False | 0.000e+00 | 7.258e-5
1 | 5.33e-3 | 0.000 | 0.172 | 0.0 | 0.96047 | 0.99922 | 8.665e-8 | True | 0.000e+00 | 5.563e-3
2 | 2.12e-4 | 0.000 | 0.210 | 1.39e-3 | 0.62297 | 0.97278 | 1.540e-7 | False | 0.000e+00 | 5.361e-2
3 | 4.06e-1 | 0.000 | 0.324 | 0.0 | 0.99724 | 0.98680 | 1.079e+02 | True | 0.000e+00 | 1.562e-2
4 | 2.05e-2 | 0.000 | 0.885 | 1.57e-5 | 0.35731 | 0.86043 | 8.874e-5 | True | 0.000e+00 | 7.217e-2
5 | 5.95e-4 | 0.008 | 0.378 | 0.0 | 0.89130 | 0.99983 | 1.483e-7 | True | 0.000e+00 | 4.087e-2
6 | 7.53e-3 | 0.000 | 0.422 | 9.55e-4 | 0.69192 | 0.98434 | 3.593e-8 | False | 0.000e+00 | 3.060e-4
7 | 4.69e-3 | 0.000 | 0.509 | 0.0 | 0.99639 | 0.98820 | 2.056e-5 | False | 0.000e+00 | 3.552e-2
8 | 2.95e-1 | 0.000 | 0.201 | 0.0 | 0.99678 | 0.99981 | 7.498e+00 | False | 3.792e-4 | 3.463e-4
9 | 2.04e-3 | 0.000 | 0.527 | 0.0 | 0.49995 | 0.99755 | 5.630e-8 | True | 0.000e+00 | 2.796e-2
10 | 7.39e-1 | 0.001 | 0.556 | 3.31e-3 | 0.99691 | 0.80639 | 2.900e+03 | False | 0.000e+00 | 7.851e-2
11 | 8.12e-3 | 0.000 | 0.207 | 0.0 | 0.17785 | 0.96033 | 7.971e-2 | False | 0.000e+00 | 1.489e-2
12 | 3.33e-2 | 0.000 | 0.369 | 0.0 | 0.69592 | 0.99997 | 5.510e-6 | True | 0.000e+00 | 1.362e-5
13 | 6.95e-3 | 0.000 | 0.014 | 0.0 | 0.99412 | 0.99305 | 4.352e-7 | False | 0.000e+00 | 3.142e-5
14 | 1.88e-1 | 0.000 | 0.205 | 1.08e-1 | 0.98597 | 0.56531 | 3.335e+00 | True | 1.265e-5 | 3.868e-3
15 | 9.47e-4 | 0.007 | 0.452 | 0.0 | 0.43977 | 0.09422 | 2.120e-7 | False | 0.000e+00 | 6.902e-3
16 | 3.75e-3 | 0.000 | 0.184 | 0.0 | 0.87756 | 0.96128 | 3.163e-3 | True | 7.468e-5 | 2.627e-3
17 | 7.25e-1 | 0.000 | 0.495 | 0.0 | 0.99800 | 0.99781 | 3.608e+00 | True | 1.656e-5 | 3.911e-2
18 | 4.58e-3 | 0.000 | 0.107 | 3.66e-1 | 0.42294 | 0.99963 | 4.174e-6 | True | 0.000e+00 | 4.446e-3
19 | 3.07e-4 | 0.007 | 0.518 | 0.0 | 0.57863 | 0.99625 | 9.881e-6 | False | 0.000e+00 | 5.521e-2
20 | 2.94e-5 | 0.000 | 0.830 | 8.27e-5 | 0.96916 | 0.99896 | 7.782e-7 | True | 3.364e-4 | 3.416e-3
21 | 1.65e-4 | 0.002 | 0.457 | 2.70e-1 | 0.95280 | 0.04565 | 2.832e-6 | True | 0.000e+00 | 1.141e-2
22 | 9.17e-1 | 0.010 | 0.897 | 2.67e-2 | 0.45061 | 0.99244 | 4.945e-1 | False | 1.253e-3 | 0.000e+00
23 | 2.36e-3 | 0.000 | 0.986 | 0.0 | 0.98560 | 0.99997 | 1.080e-8 | True | 0.000e+00 | 3.023e-3
24 | 2.14e-2 | 0.000 | 0.128 | 0.0 | 0.98741 | 0.99336 | 1.266e-4 | False | 0.000e+00 | 5.194e-4
25 | 5.91e-2 | 0.000 | 0.062 | 0.0 | 0.99794 | 0.99383 | 3.447e+02 | True | 0.000e+00 | 3.935e-2
26 | 1.57e-3 | 0.000 | 0.251 | 0.0 | 0.91820 | 0.99991 | 4.675e-5 | False | 0.000e+00 | 4.112e-5
27 | 4.43e-1 | 0.000 | 0.702 | 0.0 | 0.94375 | 0.93551 | 2.335e-8 | True | 0.000e+00 | 8.325e-5
28 | 2.98e-3 | 0.008 | 0.046 | 0.0 | 0.68612 | 0.94232 | 6.614e-2 | False | 6.489e-5 | 0.000e+00
29 | 1.65e-2 | 0.004 | 0.082 | 4.92e-4 | 0.95717 | 0.99789 | 3.068e+01 | True | 0.000e+00 | 8.920e-2
30 | 5.58e-3 | 0.000 | 0.538 | 0.0 | 0.97559 | 0.99990 | 3.238e-8 | True | 0.000e+00 | 4.896e-4
31 | 8.54e-1 | 0.000 | 0.229 | 0.0 | 0.93129 | 0.50200 | 2.051e-2 | False | 2.068e-4 | 2.801e-2
32 | 7.38e-3 | 0.000 | 0.722 | 8.78e-2 | 0.21456 | 0.99752 | 2.862e-2 | False | 0.000e+00 | 8.439e-2
33 | 4.26e-4 | 0.001 | 0.923 | 2.06e-1 | 0.47239 | 0.99974 | 8.221e-5 | False | 1.248e-5 | 0.000e+00
34 | 6.04e-3 | 0.000 | 0.698 | 0.0 | 0.97849 | 0.91449 | 1.806e+00 | False | 3.183e-3 | 1.762e-2
35 | 8.86e-3 | 0.000 | 0.104 | 1.66e-1 | 0.98967 | 0.99720 | 1.493e-2 | True | 0.000e+00 | 2.253e-2
36 | 1.51e-2 | 0.000 | 0.431 | 1.99e-3 | 0.80488 | 0.97878 | 2.538e-8 | True | 0.000e+00 | 2.269e-5
37 | 2.50e-3 | 0.000 | 0.009 | 0.0 | 0.98127 | 0.99988 | 1.799e-7 | False | 0.000e+00 | 1.303e-2
38 | 3.42e-4 | 0.000 | 0.827 | 6.38e-1 | 0.25217 | 0.96572 | 2.928e-7 | True | 0.000e+00 | 1.318e-3
39 | 6.94e-5 | 0.000 | 0.085 | 0.0 | 0.98674 | 0.42709 | 2.387e-7 | False | 0.000e+00 | 2.071e-4
40 | 3.03e-2 | 0.001 | 0.313 | 0.0 | 0.90610 | 0.99997 | 4.449e-3 | True | 0.000e+00 | 2.813e-5
41 | 4.64e-3 | 0.000 | 0.495 | 2.26e-5 | 0.64658 | 0.54108 | 3.528e-8 | False | 0.000e+00 | 2.996e-5
42 | 2.25e-3 | 0.000 | 0.722 | 0.0 | 0.97967 | 0.97518 | 1.488e-7 | True | 1.812e-5 | 2.180e-2
43 | 6.66e-4 | 0.000 | 0.632 | 2.79e-5 | 0.65968 | 0.99997 | 6.848e-6 | True | 0.000e+00 | 3.130e-3
44 | 3.31e-3 | 0.000 | 0.146 | 0.0 | 0.90447 | 0.99970 | 6.618e-6 | True | 0.000e+00 | 2.184e-2
45 | 7.84e-4 | 0.016 | 0.124 | 0.0 | 0.95065 | 0.99685 | 2.141e-2 | False | 0.000e+00 | 4.024e-5
46 | 6.16e-3 | 0.016 | 0.623 | 0.0 | 0.98823 | 0.98744 | 1.616e-6 | False | 0.000e+00 | 1.544e-2
47 | 3.26e-4 | 0.000 | 0.738 | 1.61e-4 | 0.78425 | 0.99998 | 3.468e-3 | False | 0.000e+00 | 4.709e-2
48 | 4.12e-3 | 0.001 | 0.205 | 0.0 | 0.99561 | 0.75382 | 2.390e-6 | True | 0.000e+00 | 3.631e-2
49 | 6.26e-1 | 0.000 | 0.932 | 2.52e-3 | 0.99401 | 0.83521 | 2.431e+00 | True | 0.000e+00 | 1.048e-2

Top 50 hyperparameters found using the NAdamW search space. We find diverse learning rates, with very little warmup used. We additionally find most good performing optimizers make use of AdamW-style weight decay. Finally, matching insight from (Choi et al., 2019), we find large values of ε.

H DESCRIPTION OF TASKS IN TASK SUITE
In this section we detail the task distribution used throughout this work. In addition to this text, a Tensorflow (Abadi et al., 2016) implementation is also released at github.com/google-research/google-research/tree/master/task_set.

H.1 SAMPLED TASKS
H.1.1 DEFAULT SAMPLED COMPONENTS
As many of the sampled tasks are neural networks, we define common sampling routines used by all the sampled tasks.

Activation functions: We define a distribution over activation functions, sampled according to the following list of names and weights. These are a mix of standard functions (relu, tanh) and less standard ones (cos).
• relu: 6
• tanh: 3
• cos: 1
• elu: 1
• sigmoid: 1
• swish (Ramachandran et al., 2017): 1
• leaky relu (with α = 0.4): 1
• leaky relu (with α = 0.2): 1
• leaky relu (with α = 0.1): 1

Initializations: We sample initializers according to a weighted distribution. Each initialization sample also optionally samples hyperparameters (e.g. for random normal initializers we sample the standard deviation of the underlying distribution).
• he normal (He et al., 2015): 2
• he uniform (He et al., 2015): 2
• glorot normal (Glorot & Bengio, 2010): 2
• glorot uniform (Glorot & Bengio, 2010): 2
• orthogonal: 1. We sample the “gain”, or multiplier of the orthogonal matrix, logarithmically between [0.1, 10].
• random uniform: 1. This is defined between [−s, s], where s is sampled logarithmically between [0.1, 10].
• random normal: 1. The std is sampled logarithmically between (0.1, 10).
• truncated normal: 1. The std is sampled logarithmically between (0.1, 10).
• variance scaling: 1. The scale is sampled logarithmically between (0.1, 10).

RNN Cores: We define a distribution over different types of RNN cores used by the sequential tasks. With equal probability we sample either a vanilla RNN (Elman, 1990), GRU (Chung et al., 2014), or LSTM (Hochreiter & Schmidhuber, 1997). For each cell we either sample one shared initialization method or sample a different initialization method per parameter vector, with a 4:1 ratio. We sample the core hidden dimension logarithmically between [32, 128].
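As an illustration, the weighted sampling of components such as activation functions can be sketched as follows (weights copied from the list above; function names are ours):

```python
import numpy as np

ACTIVATIONS = {  # name -> unnormalized sampling weight (Sec. H.1.1)
    "relu": 6, "tanh": 3, "cos": 1, "elu": 1, "sigmoid": 1,
    "swish": 1, "leaky_relu_0.4": 1, "leaky_relu_0.2": 1, "leaky_relu_0.1": 1,
}

def sample_activation(rng):
    names = list(ACTIVATIONS)
    weights = np.array([ACTIVATIONS[n] for n in names], dtype=float)
    return names[rng.choice(len(names), p=weights / weights.sum())]
```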
H.1.2 SAMPLED DATASETS
Image Datasets: We sample uniformly from the following image datasets. Each dataset additionally has sampled parameters. For all datasets we make use of four data splits: train, valid-inner, valid-outer, and test. Train is used to train models; valid-inner is used while training models to allow for modification of the training procedure (e.g. if validation loss doesn’t increase, drop the learning rate). Valid-outer is used to select meta-parameters. Test should not be used during meta-training. For all datasets, we sample a switch with low probability (10% of the time) to only use training data and thus not test generalization. This ensures that our learned optimizers are capable of optimizing a loss, as opposed to a mix of optimizing and generalizing.

Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (LeCun, 1998).
Fashion Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (Xiao et al., 2017).
Cifar10: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).
Cifar100: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).
{food101_32x32, coil100_32x32, deep_weeds_32x32, sun397_32x32}: These datasets take the original set of images and resize them to 32x32 using OpenCV’s (Bradski, 2000) cubic interpolation. We ignore aspect ratio for this resize. Batch size is sampled logarithmically between [8, 256] (Bossard et al., 2014; Nene et al., 1996; Olsen et al., 2019; Xiao et al., 2010).
Imagenet32x32 / Imagenet16x16: The ImageNet 32x32 and 16x16 datasets as created by Chrabaszcz et al. (2017). Batch size is sampled logarithmically between [8, 256].

H.1.3 TEXT CLASSIFICATION
IMDB sentiment classification: We use text from the IMDB movie reviews dataset (Maas et al., 2011) and tokenize into subwords using a vocab size of 8k (Sennrich et al., 2015). We then take a length-s random slice from each example, where s is sampled logarithmically between [8, 64]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. We sample the number of training examples logarithmically between [1000, 55000], and with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization.

H.1.4 CHARACTER AND WORD LANGUAGE MODELING
For the character and word language modeling datasets we make use of the following data sources: IMDB movie reviews (Maas et al., 2011), Amazon product reviews (ama) using the Books, Camera, Home, and Video subsets, each as a separate dataset, LM1B (Chelba et al., 2013), and Wikipedia (Foundation) taken from the 20190301 dump using the zh, ru, ja, hab, and en language codes. We split each article by newlines and only keep resulting examples that contain more than 5 characters. For infrastructure reasons, we only use a million articles from each language and only 200k examples to build the tokenizer.

Byte encoding: We take length-s random slices of each example, where s is sampled logarithmically between [10, 160]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. With probability 0.2 we restrict the number of training examples to a number sampled logarithmically between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization.

Subword encoding: We encode the text as subwords with a vocab size of 8k (Sennrich et al., 2015).
We then take length-s random slices of each example, where s is sampled logarithmically between [10, 256]. These examples are then batched into a batch size sampled logarithmically between [8, 512]. With probability 0.2 we restrict the number of training examples to a number sampled logarithmically between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test, to test pure optimization as opposed to generalization.

H.2 SAMPLED TASKS
H.2.1 MLP
This task family consists of a multi-layer perceptron trained on flattened image data. The number of layers is sampled uniformly from [1, 6]. Layer hidden unit sizes are sampled logarithmically between [16, 128], with a different number of hidden units per layer. One activation function is chosen for the whole network, chosen as described in H.1.1. One shared initializer strategy is also sampled. The image dataset used is also sampled. Two sampled configurations are shown below.

    {
      "layer_sizes": [71],
      "activation": "leaky_relu2",
      "w_init": ["he_normal", null],
      "dataset": [
        "sun397_32x32",
        {"bs": 32, "just_train": false, "num_train": null},
        {"crop_amount": 0, "flip_left_right": false, "flip_
1. What is the purpose and significance of TaskSet in optimizing learning tasks? 2. How effective is TaskSet in evaluating and choosing optimizers for various tasks? 3. Are there any concerns regarding overfitting to specific tasks when utilizing TaskSet? 4. Can we assume that an optimizer chosen using TaskSet will perform well on future tasks? 5. Does TaskSet provide a novel contribution to the field of machine learning, or is it primarily a useful tool for facilitating future research?
Review
Review This work presents TaskSet, a collection of optimization tasks consisting of different combinations of data, loss function, and network architecture. The tasks are useful when choosing and evaluating different optimizers (e.g. ADAM) for learning tasks. The usefulness of this collection is demonstrated for a hyperparameter search problem. The main question I had about this work is why is the chosen collection the right set of tasks to be considering? Do I have any assurance that an optimizer chosen using TaskSet will be any good on future tasks? How can we know that we don't overfit to these particular tasks when choosing an optimizer? Is there any notion of two tasks being drawn from the same distribution? However, my main concern with this paper is that, while TaskSet may be a useful tool for facilitating future research, it is not clear to me that it itself represents an advancement of novel research, which I think should be the bar for acceptance to a major conference. The work does not make any claims, or present any results beyond a use-case for the set of tasks. That's not to say that TaskSet isn't a useful tool, helpful for future research. But it itself does not represent such research. Because of this I recommend the work be rejected.
ICLR
Title
Visually-Augmented Language Modeling
Abstract
Human language is grounded in multimodal knowledge, including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a novel latent text-image alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning about object commonsense, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.

1 INTRODUCTION
Large-scale pre-trained language models (PLMs) have achieved great success in advancing the state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). PLM self-supervised training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g. GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of the pre-training corpus. With billions of parameters, these tremendous PLMs exhibit remarkable ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), text chunks (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. The incorporation of various knowledge resources into PLMs mitigates the drawbacks of local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks.

Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, which refers to inconsistent or false statements generated by PLMs with respect to world knowledge (Logan et al., 2019). For instance, a PLM may predict the color of the sky as red only due to statistical contextual correlations between the tokens “color” and “red” in the pre-training corpus, neglecting commonsense facts. In this paper, we propose a novel framework to enable language model pre-training to take full advantage of both local text context and corresponding visual knowledge. Recent work on joint vision-language model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and images, e.g. supervised image captioning data, which limits cross-modality fusion during fine-tuning/inference over text without accompanying images.
As a consequence, later in our experiments (Section 3), those prominent VLMs are found to achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image alignment mechanism via an image retrieval module that gathers related images for each token as visual augmentation. To achieve better language-vision grounding, we propose a visual knowledge fusion layer to enable joint attention over the visually-augmented context, including both textual tokens and retrieved images. Based on this, we build up a Visually-augmented Language Model, VALM, with flexible on-the-fly visual knowledge enhancement.

We evaluate the effectiveness of the proposed VALM on various commonsense reasoning and language-only benchmarks. Experimental results demonstrate that our model consistently outperforms the unimodal and multimodal baselines in terms of object commonsense reasoning. Remarkably, our method substantially improves accuracy by +14.50%, +17.80%, and +11.68% on the MEMORYCOLOR, RELATIVESIZE, and OBJECTSHAPE datasets, respectively. Additional experiments on natural language understanding tasks also validate that the proposed visually-augmented language modeling framework can help improve the fundamental natural language understanding capability of PLMs. Our contributions are summarized as follows:
• We propose a novel visually-augmented causal language model, VALM, to enable the language model to utilize visual knowledge flexibly and effectively. Through the proposed visual-knowledge-fused language modeling, VALM is capable of accomplishing tasks with a high demand for cross-modality knowledge, such as visual commonsense reasoning.
• We design a framework to construct flexible on-the-fly text-image alignments and to fuse the augmented images into the context of language modeling. We implement an image retrieval module that uses each token's contextual representation as a query over a large-scale cached image database and retrieves its nearest neighbors as the augmentation. With the proposed visual knowledge fusion layer, VALM can effectively take full advantage of both the language information from the local text context and the visual information from the retrieved images.
• Experimental results demonstrate that VALM effectively alleviates the hallucination problem of PLMs by introducing visual knowledge into language model pre-training. VALM achieves significant performance improvements in inferring commonsense object properties.

2 METHODS
We propose a novel multi-modal pre-trained language model augmented with retrieved images, named VALM. The architecture of VALM is presented in Figure 1. VALM augments each token in the pre-training text corpus with k retrieved related images, using an image retrieval module to retrieve the corresponding images for each token. The image retrieval module deploys a pre-trained CLIP model, which is capable of unifying the textual query and image candidates into a joint embedding space. VALM constructs a cached large-scale image knowledge base using the image encoder of CLIP, and uses the contextual representation of each token as a textual query to search for its nearest neighbors in the image knowledge base. With the help of the unified text and image embedding space provided by CLIP, the retrieved nearest-neighbor images are taken as the augmented images for each token, constructing text-image alignments. We then propose a visual knowledge fusion layer to enable the learned hidden states to attend to both texts and augmented images.
We then propose a visual knowledge fusion layer to enable the learned hidden states to attend to both texts and augmented images. 2.1 VALM: VISUALLY-AUGMENTED LANGUAGE MODELING Given an input text sequence $\{x_i\}_{i=1}^{N}$, the embedding layer first encodes the input into the embedding space and outputs the initial hidden state $H^0$ to the successive Transformer decoder layers. Then the proposed VALM model encodes $H^0$ into visual knowledge fused contextual representations at different levels, $H = \{H^l\}_{l=1}^{L}$, via $L-1$ Transformer decoder layers and one special visual knowledge fusion layer. Each Transformer decoder layer is identical to Vaswani et al. (2017), and outputs the contextual representations at different semantic levels given the representation from the previous layer: $H^l = \mathrm{Layer}_l(H^{l-1}),\ l \in [1, L]$. The visual knowledge fusion layer is proposed as a variant of the Transformer decoder layer to incorporate visual knowledge in contextual learning via joint attention on both text contexts and augmented images. The visual knowledge fusion layer is injected as the second-to-last layer of VALM. The visual knowledge is stored in the corresponding augmented image representations, obtained from the image retrieval module: $\{z_{ij}\}_{j=1}^{K} = f_{rt}(x_i)$. The visual knowledge fusion layer then takes as input both the contextual representation of the previous layer and the augmented image sets, and outputs a visual-knowledge fused contextual representation: $H^{L-1} = \mathrm{VisualLayer}(\{H^{L-2}_i, \{z_{ij}\}_{j=1}^{K}\}_{i=1}^{N})$. Finally, the output contextual representations are passed into the output projection layer, and a softmax function is used to compute the token probability $P(x_i \mid x_1, \cdots, x_{i-1}) = \mathrm{softmax}(W H^L + b)$. We conduct generative unsupervised pre-training (Radford et al., 2019) for VALM on a large-scale text corpus. The training objective of VALM is the standard left-to-right language modeling objective, which maximizes the likelihood of the next token given the left context: $$\max \sum_{x \in D} \sum_{i=1}^{|x|} \log P(x_i \mid x_1, \cdots, x_{i-1}), \quad (1)$$ where $x$ represents a sentence randomly sampled from the large-scale pre-training text corpus $D$. 2.2 IMAGE RETRIEVAL The visual knowledge corresponding to a specific token is stored in its correlated images. Therefore, to prepare the fused visual knowledge, VALM deploys an image retrieval module, denoted as $f_{rt}(\cdot)$, to retrieve augmented images. To achieve multi-modality text-image retrieval, it is of great importance to build up a discriminator that assesses the correlation of every image in an extremely large-scale open image knowledge base to a specific text representation. CLIP (Radford et al., 2021) proposed a simple-yet-effective method to connect images and texts in a unified multi-modal embedding space. We directly deploy the pre-trained CLIP model to encode the images and texts to enable nearest neighbor text-image retrieval. Specifically, the pre-trained CLIP model we use in constructing the image retrieval module includes a ResNet-50x16 (He et al., 2016) model as the image encoder and a Transformer (Vaswani et al., 2017) model as the text encoder. Here, we only use the CLIP model as the backbone of our image retrieval module, and the CLIP parameters are not updated during the pre-training process of VALM. Image Knowledge Base Creation. The image knowledge base of the retrieval module is the cache of a set of image keys, which are the high-level visual representations of images.
Given an image $z \in D_{img}$, such a visual representation is obtained by forwarding $z$ through the pre-trained CLIP image encoder. The whole image knowledge base $Z$ is then constructed by taking the output hidden states $f_{\theta_I}(z)$ as image keys: $Z = \bigcup_{z \in D_{img}} \{f_{\theta_I}(z)\}$, where $\theta_I$ represents the image encoder parameters. Textual Query. We take the contextual representation of each token as the query in the nearest neighbor search. For each sentence $x \in D$, the contextual representation of the $i$-th token is computed via $f_{\theta_T}(x_{<i})$, where $\theta_T$ represents the text encoder parameters. As the input sequence length of VALM generally exceeds the 75-token input length limitation of the CLIP text encoder, the long context $x_{<i}$ is cut off into a context-chunk $y_i$ to fit the CLIP text encoder: $$y_i = \begin{cases} x_{[t, i-1]}, & i - t < 75, \\ x_{[i-75, i-1]}, & i - t \geq 75, \end{cases}$$ where $t$ is the index of the closest stop character before the $i$-th token. The textual query for the $i$-th token is then computed as its context-chunk representation $f_{\theta_T}(y_i)$. kNN Text-Image Retrieval. The retrieval module uses the contextual representation to search the cached image knowledge base $Z$ and retrieves the $K$ nearest neighbor image keys w.r.t. dot-product distance. As the pre-trained CLIP model has learned a joint embedding space for the text and image domains, the retrieved images $\{z_{ij}\}_{j=1}^{K}$ are thus regarded as the top-$K$ relevant images to the query. 2.3 VISUAL KNOWLEDGE FUSION With the help of the image retrieval module, each token in the pre-training corpus is augmented with $K$ corresponding images, and these augmented images are represented in the joint embedding space with texts. The augmented image representations are then directly treated as auxiliary “context” in the learning process. As the conventional Transformer decoder layer uses multi-head self-attention (Vaswani et al., 2017) to learn the contextual representation, we extend it to a joint-attention mechanism and propose a novel visual knowledge fusion layer to enable each token to attend to both contexts and retrieved images jointly. In addition, due to the inconsistency in magnitude and distribution between contextual hidden states and retrieved image representations, we apply Layer Normalization (Ba et al., 2016), denoted as $\mathrm{LN}_{img}$, on the $K$ retrieved image representations to alleviate this inconsistency. Assume that the hidden state output for the $i$-th token is $h_i$ and the corresponding retrieved images are $\{z_{ij}\}_{j=1}^{K}$; the hidden state $H^{L-1}_i$ is computed as: $$Q = H^{L-2}W^Q + b^Q, \quad K = H^{L-2}W^K + b^K, \quad V = H^{L-2}W^V + b^V, \quad (2)$$ $$\dot{k}_{ik} = \mathrm{LN}_{img}(z_{ik})W^K + b^K_{img}, \quad \dot{v}_{ik} = \mathrm{LN}_{img}(z_{ik})W^V + b^V_{img}, \quad (3)$$ $$e_i = \frac{Q_i K^T}{\sqrt{d}}, \quad a_i = \frac{\exp(e_i)}{\sum_{j=1}^{|x|} \exp(e_{ij}) + \sum_{k'=1}^{K} \exp(e_{ik'})}, \quad (4)$$ $$e_{ik} = \frac{Q_i \dot{k}_{ik}^T}{\sqrt{d}}, \quad a_{ik} = \frac{\exp(e_{ik})}{\sum_{j=1}^{|x|} \exp(e_{ij}) + \sum_{k'=1}^{K} \exp(e_{ik'})}, \quad (5)$$ $$H^{L-1}_i = a_i V + \sum_{k} a_{ik} \dot{v}_{ik}, \quad (6)$$ where $Q_i, \dot{k}_{ik}, \dot{v}_{ik} \in \mathbb{R}^{E}$, $K, V \in \mathbb{R}^{|x| \times E}$, and $e_i, a_i \in \mathbb{R}^{|x|}$. The hidden state output from the previous layer, $H^{L-2}$, is linearly projected into contextual queries, keys, and values $Q, K, V$ separately. Here $K$ also denotes the number of retrieved images per token (overloading the key-matrix notation), and $E$ is the embedding dimension for both context and image representations. In order to generate image-specific attention keys and values, we adopt image-specific biases $b^K_{img}, b^V_{img}$ in the linear projections and reuse the contextual projection weights $W^K, W^V$.
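To make Equations (2)–(6) concrete, below is a minimal single-head sketch of the joint context-image attention in PyTorch. It is an illustrative reading of the equations, not the released implementation: multi-head projection, causal masking, and batching are omitted.

```python
# Minimal single-head sketch of the joint context-image attention in
# Eqs. (2)-(6). Illustrative only; the actual layer is multi-head and causal.
import torch
import torch.nn.functional as F

def visual_fusion_attention(H, Z, W_q, W_k, W_v, b_q, b_k, b_v,
                            b_k_img, b_v_img, ln_img):
    """H: (N, E) previous-layer states H^{L-2}; Z: (N, K, E) retrieved images."""
    N, E = H.shape
    Q = H @ W_q + b_q                       # Eq. (2): contextual queries
    K_ctx = H @ W_k + b_k                   # Eq. (2): contextual keys
    V_ctx = H @ W_v + b_v                   # Eq. (2): contextual values
    Z_ln = ln_img(Z)                        # LayerNorm over image features
    K_img = Z_ln @ W_k + b_k_img            # Eq. (3): shared weights, image bias
    V_img = Z_ln @ W_v + b_v_img            # Eq. (3)

    scale = E ** 0.5
    e_ctx = (Q @ K_ctx.T) / scale                          # Eq. (4) scores
    e_img = torch.einsum("ne,nke->nk", Q, K_img) / scale   # Eq. (5) scores

    # Eqs. (4)-(5): one softmax normalizes context and image scores jointly.
    a = F.softmax(torch.cat([e_ctx, e_img], dim=-1), dim=-1)
    a_ctx, a_img = a[:, :N], a[:, N:]
    # Eq. (6): mix contextual values with the token's own image values.
    return a_ctx @ V_ctx + torch.einsum("nk,nke->ne", a_img, V_img)
```

Note that `K_img` and `V_img` carry the token index $i$, so each query attends to its own retrieved images; this is exactly the property emphasized in the remark that follows.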
Moreover, it is vital to note that the image-specific attention keys and values are distinct for each query token, which differs from self-attention, where the contextual keys and values are the same for every token. A secondary subscript $k$ is used to denote the different image representations for the $i$-th token. 3 EXPERIMENTS 3.1 PRETRAINING SETUP Text Corpus. We use the English corpus of CC-100 (Conneau et al., 2020) as the pre-training text corpus for both VALM and the baseline GPT-2∗. The CC-100 corpus is one of the largest high-quality web-crawl text datasets. The English monolingual dataset of CC-100 contains about 55 billion tokens, stored in 301 GiB of disk storage. Due to the limitation of computing resources, we only consume 15% of the CC-100 English monolingual corpus for pre-training VALM and the baseline GPT-2∗. Image Data. We use the LAION Open Image Dataset (Schuhmann et al., 2021) as the image knowledge base for dense retrieval. To the best of our knowledge, the LAION Open Dataset is one of the world’s largest openly available image-text-pair datasets, with 400 million samples. Due to the disk space limitation, we randomly select half of the LAION images for the dense text-image retrieval, which is 200M images in total. Pre-training Hyperparameters. The proposed model deploys a Transformer decoder architecture with 124M trainable parameters. The hyperparameter setting and training details are presented in Appendix B.1. Retrieval Module. For the implementation of the dense text-image retrieval module, we use the faiss (Johnson et al., 2021) toolkit to construct an efficient index. The faiss index contains all 200M image keys and provides efficient nearest neighbor search. For efficiency purposes, we quantize all image keys to 32 bytes. The faiss index stores image keys in clusters to speed up the search, which requires an additional training process to learn the cluster centroids. We use 10M keys for learning 131k cluster centroids and search 32 clusters to find the nearest neighbors during inference. We load the faiss index onto the GPU to achieve efficient dense text-image retrieval. 3.2 VISUAL KNOWLEDGE INTENSIVE TASKS The visual information stored in retrieved images can play a useful role in providing relevant visual knowledge to help language models perform better grounded commonsense reasoning. Such helpful visual information can be colors, positions, sizes, spatial relations, etc. The task of object commonsense reasoning requires language models to predict the correct visual property for a given object. Excelling at these tasks typically requires models to capture and utilize intensive visual knowledge without any explicit text demonstrations or external knowledge bases. Due to reporting bias, such descriptive text about object properties rarely appears in text corpora, likely making this type of knowledge absent from language models. Thus, these visual knowledge-intensive tasks are likely challenging for both language models and vision-language models. We first compare VALM with recent baselines on four object commonsense reasoning datasets: MEMORYCOLOR (Norlund et al., 2021), COLORTERMS (Bruni et al., 2012), OBJECTSHAPE (Zhang et al., 2022a), and RELATIVESIZE (Bagherinezhad et al., 2016). In addition, we use a physical interaction question answering dataset (PIQA) (Bisk et al., 2020) to evaluate whether such visual commonsense knowledge could be implicitly encoded and utilized in the question answering process.
In Table 1, we provide examples of the different visual commonsense reasoning tasks. MEMORYCOLOR and COLORTERMS Datasets. The memory color of a concrete object is the typical color the object appears in, e.g., the color of a banana is mostly memorized as yellow. Norlund et al. (2021) proposed this dataset for evaluating visual knowledge transfer in multi-modal language models. The dataset contains 109 objects paired with their memory color labels, an illustrative picture, and a descriptor. The COLORTERMS dataset also contains a list of common items manually labeled with their commonsense color. Both datasets use a set of 11 color labels. OBJECTSHAPE Dataset. Zhang et al. (2022a) proposed a visual commonsense dataset covering a set of object attributes like shape. The object shape dataset contains 140 objects with their shape labels, spanning 12 shape categories. RELATIVESIZE Dataset. Bagherinezhad et al. (2016) proposed the RELATIVESIZE dataset, which includes a total of 486 object pairs between 41 physical objects. The task of object size reasoning requires the model to predict the size relation between two given objects, e.g., an ant is smaller than an elephant. Size information is again rarely described in text, while it is much easier to capture from images. We convert the size relation reasoning task into a binary question-answering form with “Yes”/“No” answers. PHYSICAL INTERACTION QUESTION ANSWERING. Physical Interaction Question Answering (PIQA) was proposed to investigate the physical commonsense knowledge of existing language models (Bisk et al., 2020). Completing such question answering tasks requires the language model to effectively utilize physical commonsense knowledge, i.e., knowledge of basic object properties (flexibility, curvature, and porosity). Language models are expected to first perceive the objects and then encode such physical knowledge into the language modeling process. Each data sample in PIQA contains one goal and two solutions. The model is supposed to predict the more reasonable and appropriate of the two candidate solutions. Evaluation Setting. We evaluate VALM and all baseline methods in a zero-shot manner without any task-specific tuning. Specifically, VALM takes an input consisting of a textual prompt and an object during inference and predicts the property label as the last token. The prompts used in evaluating object color, shape, and size reasoning are listed in Appendix Table 11. We use top-1 accuracy as the evaluation metric and compute the average accuracy over all listed prompts to increase evaluation robustness. For PIQA, we follow Shwartz et al. (2020) and use the cross-entropy loss as the scorer for each potential solution: $\mathrm{score}(s_{ij}) = \mathrm{CE}([g_i, s_{ij}]),\ j \in \{0, 1\}$. The solution with the lower score is then selected as the prediction. Classification accuracy is used as the evaluation metric. Baselines. We consider both pre-trained language-only and vision-language models as baselines. In particular, three strong language models are considered for comparison with VALM, including 1) GPT-2∗ (Radford et al., 2019); 2) BERT (Devlin et al., 2019); and 3) CaptionBERT (Zhang et al., 2022a), an auto-encoding language model pre-trained on Oscar’s (Li et al., 2020) caption-based text data. Here, GPT-2∗ is re-implemented and trained from scratch using the identical training data, hyper-parameter settings, and model size as the proposed VALM.
Additionally, we also compare VALM with prominent vision-language models, including 1) OSCAR (Li et al., 2020), a pre-trained vision-language model with learned representations that capture channel-invariant factors (i.e., object tags) at the semantic level; 2) VisualBERT (Li et al., 2019), a vision-language model with learned joint contextualized representations across vision and language; and 3) CLIP (Radford et al., 2021), a vision-language system with one image encoder and one text encoder that are mapped into the same cross-modal embedding space. We directly use OSCAR and VisualBERT as auto-encoding language models for zero-shot evaluations. For CLIP, we first retrieve the corresponding image using the concatenated query prompt and the given object. Then, the dot-product similarity of the retrieved image vector and each candidate-aware text vector (including the query prompt, the given object, and one candidate label) is used to rank the candidates. Finally, the top-ranked candidate label is regarded as the prediction for evaluation. Results. The main results on the four object commonsense reasoning datasets are summarized in Table 2. The two variants of VALM (K = 4, 8) significantly outperform all considered language models and vision-language models on the object color and shape reasoning datasets, with improvements of +14.50%, +13.56%, and +11.68% on MEMORYCOLOR, COLORTERMS, and OBJECTSHAPE, respectively. Moreover, the proposed VALM with K = 4 achieves an encouraging result, with a +17.80% accuracy gain over the best baseline, VisualBERT, on RELATIVESIZE. The substantial improvements on these datasets demonstrate that VALM takes full advantage of visual knowledge (object visual properties) to complete the corresponding visual commonsense reasoning. Surprisingly, the zero-shot evaluation results of all auto-encoding language models and vision-language models are below 40% accuracy on the object color and shape reasoning datasets. Although pretrained with aligned text-image pairs, these vision-language models cannot effectively utilize relevant visual knowledge from their jointly contextualized vision-language representations. Among language models, the auto-regressive PLMs significantly outperform the auto-encoding PLMs, suggesting that auto-regressive PLMs are likely better zero-shot reasoners. We also observe that retrieving more images for each token results in a performance drop for object size and shape reasoning. We attribute the degradation to the increased noise brought by augmenting with more images, which confuses the model when differentiating relevant visual information from irrelevant information. PIQA is a more challenging task that requires the model to reason about useful implicit object properties and utilize this commonsense knowledge in the question answering process. The results on PIQA are presented in Table 3. As shown, VALM outperforms all baseline language models with a +2.11% accuracy improvement. The two variants of VALM achieve almost identical performance because the selection of the correct solution is based on language modeling perplexity, indicating that the two variants demonstrate similar language modeling capability. 3.3 NATURAL LANGUAGE UNDERSTANDING AND LANGUAGE MODELING TASKS The causal language modeling pre-training task enables PLMs to naturally perform natural language understanding (NLU) and long-text modeling. Therefore, zero-shot natural language understanding and language modeling performance are widely adopted to evaluate the capability of PLMs (Radford et al., 2019).
Here, we evaluate VALM and the most relevant language model baseline, GPT-2∗, on four NLU datasets: SST-2 (Socher et al., 2013), MPQA (Wiebe et al., 2005), DBPedia (Auer et al., 2007), and AGNews (Zhang et al., 2015). Prediction accuracy is used as the evaluation metric. In addition, following Radford et al. (2019), the Wikitext-103 (Merity et al., 2017) and Lambada (Paperno et al., 2016) corpora are considered to study language modeling performance in a zero-shot manner. We report perplexity for the two corpora and also report last-word prediction accuracy for the Lambada corpus. The results on natural language understanding are summarized in Table 4. It is easy to see that VALM achieves decent improvements on all four NLU tasks, indicating that the cross-modality knowledge learned by our model is likely helpful for typical natural language understanding. Thus, our visually-augmented language modeling framework can be further explored to enhance the natural language understanding ability of PLMs. Table 5 illustrates the results on the language modeling tasks. Again, VALM slightly improves perplexity on both datasets, by 0.68 on Wikitext-103 and 0.08 on Lambada. A similar trend is observed for the last-word prediction accuracy on Lambada. Different from the previous visual knowledge-intensive commonsense reasoning tasks (subsection 3.2), we find that VALM models with different numbers of retrieved images (K = 8 vs. K = 4) perform similarly on the intrinsic language modeling task, suggesting that VALM can effectively ignore irrelevant visual information when the task is unlikely to benefit from visual knowledge. In other words, visual commonsense reasoning tasks require a more fine-grained fusion of text and image, i.e., locating the text object in the image set, extracting the relevant visual information, and verbalizing the reasoning output. In contrast, a certain portion of text from general language modeling corpora is probably not visually related. Thus, only a coarse-grained fusion is sufficient here (e.g., deciding whether the image set is useful), making the language modeling evaluation less affected by the retrieval noise from augmented images. 3.4 ABLATION STUDIES So far, we have empirically verified the effectiveness and superiority of VALM in utilizing visual knowledge for both visual knowledge-intensive tasks and traditional language understanding and modeling. To figure out how the visual information takes effect in our model, we focus on two questions here: 1) Is the model capable of using the retrieved image representations as “auxiliary” contexts? What is the effect of disabling such retrieved image representations during inference? To evaluate this, we design an ablation model which sets K = 0 and disables image retrieval and fusion during inference. 2) Does the model learn to leverage visual knowledge in the retrieved images? What is the effect of directly augmenting randomly-retrieved image representations during inference? Thus, an ablation model which retrieves random images as augmentations during inference is used for probing. The results of the two ablation models, Randomly-Retrieval and Disable-Retrieval during the inference stage, are listed in the first two rows of Table 6. As we can see, both changes to the image retrieval result in noticeable performance degradation on all evaluation tasks.
In particular, we find that disabling image retrieval and augmenting no images during inference also makes a large difference to the language modeling perplexity on the two corpora, even though language modeling is more related to the pure text corpus than to the augmented images. This suggests that VALM is able to effectively capture rich semantics from both pre-training sources, i.e., the text corpus as well as the augmented images. In other words, the improved zero-shot task transferability of VALM relies on visual information from augmented images, which complements the linguistic knowledge learned via text-based self-supervised learning. The results of the Randomly-Retrieval ablation model further illustrate that the capability of integrating visual knowledge cannot be achieved by augmenting language models with unrelated images; only context-relevant images make a true difference. VALM proposes a novel visual knowledge fusion layer with a joint context-image attention mechanism as the key component for fusing visual knowledge into the language model. The separate linear projection layers are important components that map contexts into different embedding spaces for attention keys and values. The proposed joint self-attention mechanism therefore naturally admits three variants for generating image keys and values: establishing image-specific linear projections, reusing the contextual linear projections, or only using image-specific linear biases for the augmented images. We conduct an ablation study to evaluate the effect of these three alternatives for the image linear projections. The results in Table 6 demonstrate that adopting image-specific projection biases outperforms directly sharing the contextual projection biases, while introducing additional image-specific linear projection weights does not lead to further performance improvement. Thus, we only add additional linear biases $b^K_{img}, b^V_{img}$ (Equation 3) for the augmented images and reuse the contextual linear weights when generating visual attention keys and values, for implementation convenience and parameter efficiency; in Table 6, the proposed VALM corresponds to the last row, which introduces only the image-specific biases and reuses the contextual weights in the attention key and value projection layers. 4 RELATED WORK Pre-trained Language Models. Pre-trained language models (PLMs) have revolutionized NLP research. Enabled by the attention mechanism (Bahdanau et al., 2015) and the Transformer architecture (Vaswani et al., 2017), the state-of-the-art PLMs, including BERT (Devlin et al., 2019), GPT (Radford et al., 2018; 2019), RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020), T5 (Raffel et al., 2020), and OPT (Zhang et al., 2022b), have become the dominant approach in NLP tasks via the paradigm of pre-training on large-scale text corpora and fine-tuning on downstream tasks. With the exponential scaling-up of model size, a surprising fact emerged: PLMs like GPT-3 (Brown et al., 2020) can work as few-shot or zero-shot learners. Vision-Language Models. Vision-language tasks lie at the intersection of the vision and language modalities, e.g., visual question answering (Agrawal et al., 2015) and image captioning (Chen et al., 2015). ViLBERT (Lu et al., 2019) first proposed to generate image region features via object detection and then learn joint multi-modal representations via an interacting two-stream model.
OSCAR (Li et al., 2020) proposed to introduce object tags detected in images as anchor points to mitigate the high demand for image-text alignments. Another significant pathway for VLMs is to construct a unified embedding space for texts and images and use textual prompts to extract task-specific labels during inference; representative models of this line are CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). Visually-Grounded Language Learning. Visually-grounded language learning is an emerging research topic in vision-language learning; the proposed VALM falls in this area, alongside prior works like Vokenization (Tan & Bansal, 2020), VidLanKD (Tang et al., 2021), and iACE (Lu et al., 2022). Visual information and knowledge can be memorized by PLMs via a fusion layer or concatenated inputs. However, extracting and utilizing visual information efficiently and effectively is still difficult for uni-modal language models. Vokenization concatenates tokens and token-related images as “vokens”, transferring sentence-level caption text to token-level vokens with a Vokenizer model. 5 CONCLUSION In this paper, we propose a multi-modal framework, VALM, to enable auto-regressive language modeling to effectively utilize visual knowledge. Specifically, an effective text-to-image retrieval module is designed to construct latent text-image alignments for visually-grounded language modeling. Empowered by pre-training, VALM achieves improved zero-shot task transfer on downstream tasks. Experiments on various visual knowledge-intensive tasks demonstrate the effectiveness of our model over recent vision-language models. VALM also achieves decent improvements over language models on multiple representative natural language understanding tasks. For future work, we plan to adapt the model architecture to encoder-only and encoder-decoder Transformer backbones. We are also interested in more input modalities for VALM. ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for the helpful comments. We appreciate Zewen Chi and Hangbo Bao for the fruitful discussions, and Yaru Hao for helpful suggestions on evaluation benchmarks. A ADDITIONAL RESULTS A.1 TIME-COST EFFECTS OF RETRIEVAL AND IMAGESET SIZE Introducing efficient image retrieval on GPU increases inference time cost (to about 2.1 times that of the text-only GPT-2∗ baseline), as shown in Table 7. This cost becomes negligible for larger language models, because the forward cost grows many times over while the retrieval cost does not change with model size. The retrieval cost can be further reduced by searching fewer clusters or by decreasing the number of bytes used to encode the approximate image keys, with a minor trade-off in performance. Moreover, efficient nearest neighbor search is an active research area (Guo et al., 2020), and we could try other efficient search tools to accelerate the retrieval. As the introduced retrieval time cost is proportional to the size of the image set used for dense retrieval, we provide more details on the relationship between retrieval time cost and image-set size in Table 7. As Table 7 shows, there is no significant performance decrease when shrinking the image set from the original 200M down to 10M. As the 10M set is still large and sufficient to provide enough visual knowledge, we will consider deploying a 10M-image set to train VALM for potential real-time industry applications.
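The retrieval-cost knobs discussed above (clusters searched per query, bytes per key) correspond to standard faiss IVF+PQ settings. The following sketch configures an index with the values reported in Section 3.1; the exact index type is not stated in the paper, so IVF+PQ with an inner-product metric is an assumption, and the small random training set is a stand-in to keep the sketch runnable.

```python
# Sketch of a faiss index matching the reported retrieval settings:
# ~131k IVF centroids, 32-byte PQ codes, 32 clusters probed per query.
import numpy as np
import faiss

d = 768  # CLIP feature dimension (assumed)
index = faiss.index_factory(d, "IVF131072,PQ32", faiss.METRIC_INNER_PRODUCT)

# The paper trains centroids on ~10M keys and adds all 200M image keys;
# a small random stand-in (>= nlist points) keeps this sketch runnable,
# though faiss will warn that more training points are recommended.
keys = np.random.rand(200_000, d).astype(np.float32)
index.train(keys)
index.add(keys)
faiss.extract_index_ivf(index).nprobe = 32  # clusters searched per query

# Move the index to GPU (requires faiss-gpu), as done in the paper.
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)
```

Lowering `nprobe` or the number of PQ bytes reduces the retrieval cost discussed in A.1, at a minor cost in recall.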
A.2 COMPARISONS WITH ADDITIONAL STRONG BASELINES We compare VALM with Vokenization (Tan & Bansal, 2020) on four visual-knowledge-intensive tasks, and the results are shown in Table 8. In addition, we evaluate the performance of large language models on the visual-knowledge-intensive tasks for stronger and fairer comparisons. We evaluate the OPT (1.3B parameters) (Zhang et al., 2022b) model on these visual-knowledge-intensive tasks, and the results are presented in Table 8. VALM (124M parameters) significantly outperforms OPT-1.3B on all four datasets, which further demonstrates the challenge of solving those visual-knowledge-intensive tasks and the effectiveness of our method. A.3 SCALING EFFECT OF VALM We train a 355M-parameter (GPT-2 Medium size) VALM (K=8) model to evaluate the effect of scaling up model parameters. The results are presented in Table 9; model performance is significantly improved on all four visual knowledge-intensive datasets. We will seek more computation resources to train larger VALM models. A.4 ABLATION STUDY OF K We further conduct another ablation study by setting the number of augmented images to K = 1 for VALM, which is very similar to CLIP (Radford et al., 2021) inference. The results are presented in Table 10. VALM (K=1) significantly outperforms CLIP on all visual-knowledge-intensive tasks, validating the effectiveness of our method. A.5 CASE STUDIES We provide a case study on the object color reasoning task for VALM. To reason about the correct commonsense color of the objects sky and parsley, VALM takes as input the combination of the prompt and the object, i.e., “the color of [object] is”. Figure 2 presents the top-4 images retrieved for each textual query. A.6 COLORIZATION EFFECT We conduct another interesting ablation case study to evaluate the effect of image color changes on the object color reasoning task. Specifically, VALM predicts the color label of an apple as red based on the commonsense in both the context and the retrieved images. The original prediction probability distribution is shown as the blue bars in Figure 3(b). We then replace the retrieved images with K unusual images of green apples from the OBJECTCOLORIZATION dataset (Anwar et al., 2020), shown in Figure 3(a). The predicted probability distribution over the 11 color types given the replaced images is shown as the orange bars in Figure 3(b). We observe a clear probability increase for the color green and a decrease for red, consistent with the colorization. This ablation study demonstrates that VALM is capable of extracting useful visual knowledge from retrieved object images and inferring the correct semantics based on it: given retrieved object images in different colors, VALM extracts the corresponding color knowledge and adopts it in its semantic inference. B EXPERIMENTAL DETAILS B.1 PRE-TRAINING HYPERPARAMETERS AND TRAINING DETAILS The implementation of the models and all experiments is based on the fairseq (Ott et al., 2019) toolkit. The proposed model deploys a Transformer decoder architecture with 124M trainable parameters, in which $n_{layer} = 12$, $n_{head} = 12$, $d_{embed} = 768$. We deploy the Adam (Kingma & Ba, 2015) optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.98$) and train all models with learning rate $0.0005$, $4000$ warmup steps, dropout $0.1$, batch size $128$, and sequence length $512$. The layer normalization over the retrieved image keys is initialized with $\epsilon = 10^{-5}$.
VALM reuses the identical lower-cased byte pair encoding (BPE) (Sennrich et al., 2016) representation, with the 49,152-token vocabulary of the CLIP text encoder. The proposed VALM and the re-implemented GPT-2∗ are trained for 500k steps using 16 Nvidia Tesla V100-SXM2-32GB GPUs. The encoded 200M-image knowledge base takes up 274 GiB of disk storage, and the trained faiss approximate retrieval index takes another 14 GiB. B.2 PROBE TEMPLATES We present all zero-shot query prompts and labels for the 4 object commonsense reasoning datasets and 4 natural language understanding benchmarks in Table 11.
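As an illustration of how such probe templates are consumed, the following is a small sketch of the zero-shot scoring described in Section 3.2: property labels are ranked by the LM probability of the label token completing a prompt, and PIQA candidates are ranked by sequence cross-entropy. The `lm` and `tokenize` callables are placeholders for a causal language model and its tokenizer; they are illustrative assumptions, not VALM's actual interface.

```python
# Sketch of the zero-shot evaluation protocol (Section 3.2; templates in
# Table 11). `lm` maps token ids (1, T) to next-token logits (1, T, V);
# `tokenize` maps a string to a (1, T) LongTensor. Both are placeholders.
import torch
import torch.nn.functional as F

def predict_property(lm, tokenize, prompt: str, labels: list) -> str:
    """Rank candidate labels by their probability as the prompt's last token.

    Single-token labels are assumed here for simplicity.
    """
    scores = []
    for label in labels:
        ids = tokenize(prompt + " " + label)
        logits = lm(ids[:, :-1])              # condition on all but the label
        log_probs = logits[0, -1].log_softmax(-1)
        scores.append(log_probs[ids[0, -1]].item())
    return labels[int(torch.tensor(scores).argmax())]

def piqa_predict(lm, tokenize, goal: str, solutions: list) -> int:
    """Pick the solution with the lower LM cross-entropy of [goal; solution]."""
    def sequence_ce(ids):
        logits = lm(ids)                      # (1, T, V)
        return F.cross_entropy(logits[0, :-1], ids[0, 1:]).item()
    scores = [sequence_ce(tokenize(goal + " " + s)) for s in solutions]
    return int(torch.tensor(scores).argmin())
```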
1. What is the focus and contribution of the paper regarding image-text data?
2. What are the strengths of the proposed approach, particularly in its novel components and empirical results?
3. What are the weaknesses of the paper, especially concerning the image retrieval component?
4. Do you have any concerns or suggestions for improving the method, such as conducting ablation studies?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a pre-training framework, called VALM, to jointly train on image-text data. The novelty of this work, compared to previous works in a similar field, is how the image-text pairs are created. While previous works use pre-curated image-text aligned pairs, this work instead uses images retrieved using text as a query and then jointly fuses them through attention layers. The claim is that this will help the model perform better on tasks requiring visual commonsense reasoning.
Strengths And Weaknesses
Strengths:
-- The empirical results in the paper are quite strong and surpass pre-trained text models (GPT) as well as vision-language models (VisualBERT) on 4 reasoning task datasets and several language understanding/modeling tasks. While the differences on language-only tasks are small, the gains over baselines on visual reasoning tasks are large, as claimed.
-- The proposed model (especially the visual knowledge fusion component) is novel, elegant, and simple, which will serve as motivation for follow-up works.
-- The paper is clearly written and easy to follow.
Weaknesses:
-- The main weakness in the method is the use of a frozen image retrieval component. The concern is that, if this component is not trained end-to-end with the rest of the model, model quality may be limited by the quality of the image retrieval. While the reviewer acknowledges that the empirical results show a large gap between CLIP and VALM on all tasks, it is worth wondering whether the gains are due to embedding multiple images with text in VALM (compared to the single image-text pair used originally in CLIP). An ablation with different values of k (the number of images retrieved) would be helpful here.
Clarity, Quality, Novelty And Reproducibility
The paper is quite clear, and the quality of the writing and results is high. The method is novel, and the authors mention they will be releasing the code on GitHub later.
ICLR
Title Visually-Augmented Language Modeling Abstract Human language is grounded on multimodal knowledge including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a novel latent text-image alignment method via an image retrieval module to fetch corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both text context and visual knowledge in images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning object commonsense including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM. 1 INTRODUCTION Large-scale pre-trained language models (PLMs) have achieved great success in promoting state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). PLM self-supervision training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g. GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of pre-training corpus. With billions of parameters, these tremendous PLMs exhibit incredible ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), text chunk (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. The incorporation of various knowledge resources to PLMs mitigates the drawbacks of local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks. Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, which refers to the inconsistent or false statements generated by PLMs with respect to the world knowledge (Logan et al., 2019). For instance, the PLMs may predict the color of the sky as red only due to the statistical contextual correlations between the token “color” and “red” in the pre-training corpus, neglecting the commonsense facts. In this paper, we propose a novel framework to enable language model pre-training to take full advantage of both local text context and corresponding visual knowledge. Recent work on joint visionlanguage model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and image, e.g. 
supervised image captioning data, which limits the cross-modality fusion during fine-tuning/inference over text without accompanying images. As a consequence, later in our experiments (section 3), those prominent VLMs are found to achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image alignment mechanism via an image retrieval module that gathers related images for each token as visual augmentation. To achieve better language-vision grounding, we propose a visual knowledge fusion layer to enable joint attention across visually-augmented context including both textual tokens and retrieved images. Based on this, we build up a Visually-augmented Language Model, VALM, with flexible on-the-fly visual knowledge enhancement. We evaluate the effectiveness of the proposed VALM on various commonsense reasoning and language-only benchmarks. Experimental results demonstrate that our model consistently outperforms the unimodal and multimodal baselines in terms of object commonsense reasoning. Remarkably, our method substantially improves +14.50%, +17.80%, and +11.68% accuracy on MEMORYCOLOR, RELATIVESIZE and OBJECTSHAPE datasets, respectively. Additional experiments on natural language understanding tasks also validate that the proposed visually-augmented language modeling framework could be helpful to improve the fundamental natural language understanding capability of PLMs. Our contributions are summarized as follows: • We propose a novel visually-augmented casual language model, VALM, to enable the language model to utilize visual knowledge flexibly and effectively. Through the proposed visual knowledge fused language modeling, VALM is capable of accomplishing tasks with the high demand of cross-modality knowledge, such as visual commonsense reasoning. • We design a framework to construct flexible on-the-fly text-image alignments and fuse augmented images into the context of language modeling. We implement an image retrieval module to query token-level representation in a large-scale cached image database and retrieve its nearest neighbors as the augmentation. With the proposed visual knowledge fusion layer, VALM can effectively take full advantage of both language information from local text context and visual information from retrieved images. • Experimental results demonstrate that VALM effectively alleviates the hallucination problem of PLMs via introducing visual knowledge in language model pre-training. VALM achieves significant performance improvements in inferring the commonsense object properties. 2 METHODS We propose a novel multi-modal pre-trained language model, which is augmented with retrieved images, named VALM. The architecture of VALM is presented in Figure 1. VALM augments each token in pre-training text corpus with k retrieved related images. VALM uses an image retrieval module to retrieve corresponding images for each token. The image retrieval module deploys a pre-trained CLIP model, which is capable of unifying the textual query and image candidates into a joint embedding space. VALM constructs a cached large-scale image knowledge base using image encoder of CLIP, and uses the contextual representation of each token as textual query to search its nearest neighbors in image knowledge base. With the help of the unified text and image embedding space provided by CLIP, the image nearest neighbors are taken as augmented images of each token to construct text and image alignments. 
We then propose a visual-knowledge fusion layer to enable learned hidden state to attend to both texts and augmented images. 2.1 VALM: VISUALLY-AUGMENTED LANGUAGE MODELING Given an input text sequence {xi}Ni=1, the embedding layer first encodes input vector {xi}Ni=1 into embedding space and outputs the initial hidden state H0 to the successive Transformer decoder layers. Then the proposed VALM model encodes H0 into visual knowledge fused contextual representations at difference levels H = {Hl}Ll=1 via L − 1 Transformer decoder layers and one special visual knowledge fusion layer. Each Transformer decoder layer is identical to Vaswani et al. (2017), which outputs the contextual representations at different semantic levels given the representation from the previous layer Hl = Layerl(H l−1), l ∈ [1, L]. The visual knowledge fusion layer is proposed as a variant of the Transformer decoder layer to incorporate visual knowledge in contextual learning via joint attention on both text contexts and augmented images. The visual knowledge fusion layer is injected in the second-to-last layer of VALM. The visual knowledge is stored in corresponding augmented image representations, obtained from image retrieval module {{zij}Kj=1} = frt(xi). Then the visual knowledge fusion layer takes the input including both contextual representation of the previous layer and augmented image sets and outputs a visual-knowledge fused contextual representation HL−1 = VisualLayer({HL−2i , {zij}Kj=1}Ni=1). Finally, the output contextual representations are passed into the output projection layer and a softmax function is used to compute the token probability P (xi|x1, · · · ,xi−1) = softmax(WHL + b). We conduct generative unsupervised pre-training (Radford et al., 2019) for VALM on a large-scale text corpus. The training objective of VALM is the standard left-to-right language modeling objective, which maximizes the likelihood of the next word token based on the left context: max ∑ x∈D |x|∑ i=1 logP (xi|x1, · · · ,xi−1), (1) where x represents a sentence randomly sampled from the large-scale pre-training text corpus D. 2.2 IMAGE RETRIEVAL The visual knowledge corresponding to a specific token is stored in its correlated images. Therefore, to prepare the fused visual knowledge, VALM deploys an image retrieval module to retrieve augmented images, denoted as frt(·). In order to achieve multi-modality text-image retrieval, it is of great importance to building up a discriminator to assess the correlation of every image in the extremely large-scale open image knowledge bases to the specific text representation. CLIP (Radford et al., 2021) proposed a simple-yet-effective method to connect images and texts into a unified multi-modal embedding space. We directly deploy the pre-trained CLIP model to encode the images and texts to enable a nearest neighbor text-image retrieval. Specifically, the pre-trained CLIP model we use in constructing the image retrieval module includes a ResNet-50x16 (He et al., 2016) model as an image encoder and a Transformer (Vaswani et al., 2017) model as a text encoder. Here, we only use the CLIP model as the backbone of our image retrieval module, and the CLIP parameters are not updated during the pre-training process of VALM. Image Knowledge Base Creation. The image knowledge base of the retrieval module is the cache of a set of image keys, which are the high-level visual representations of images. 
Given an image z ∈ Dimg, such visual representation can be obtained via forwarding image z to the pre-trained CLIP image encoder. Then the whole image knowledge base (Z) is constructed by taking the output hidden state fθI (x) as image keys: Z = ⋃ z∈Dimg{fθI (z)}, where θI represents the image encoder parameters. Textual Query. We take the contextual representation of each token as the query in the nearest neighbor search. For each sentence x ∈ D, the contextual representation of i-th token is computed via fθT (x<i), where θT represents the text encoder parameters. As the input sequence length of VALM generally exceeds the input length limitation of 75 tokens of CLIP text encoder, the long context x<i is cut off into a context-chunk yi for fitting in CLIP text encoder: yi = { x[t,i−1], i− t < 75, x[i−75,i−1], i− t ≥ 75, where t is the index of the closest stop character before i-th token. Then the textual query for i-th token is computed as its context-chunk representation as fθT (yi). kNN Text-Image Retrieval. The retrieval module uses the contextual representation to search the cached image knowledge base (Z) and retrieves k nearest neighbor image keys w.r.t. dot product distance. As the pre-trained CLIP model has learned a joint embedding space for text and image domain, the retrieved images {zij}Kj=1 are thus regarded as the top-k relevant images to the query. 2.3 VISUAL KNOWLEDGE FUSION With the help of the image retrieval module, each token in the pre-training corpus is augmented with k corresponding images, and these augmented images are represented in the joint embedding space with texts. Then the augmented image representations are directly treated as auxiliary “context” in the learning process. As the conventional Transformer decoder layer uses the multi-head self-attention (Vaswani et al., 2017) to learn the contextual representation, we extend it to a joint-attention mechanism and propose a novel visual knowledge fusion layer to enable each token to attend to both contexts and retrieval images jointly. In addition, due to the inconsistency in magnitude and distribution between contextual hidden states and retrieved image representations, we apply Layer Normalization (Ba et al., 2016) on retrieved K image representations to alleviate such inconsistency, denoted as LNimg. Assume that the hidden state output for i-th token is hi and the corresponding retrieved images are {zij}Kj=1, the hidden state HL−1i is computed as: Q = HL−2WQ + bQ,K = HL−2WK + bK ,V = HL−2WV + bV , (2) k̇ik = LNimg(zik)W K + bKimg, v̇ik = LNimg(zik)W V + bVimg, (3) ei = QiK T √ d , ai = exp (ei)∑L j=1 exp (eij) + ∑K k=1 exp (eik) , (4) eik = Qik̇ T ik√ d , aik = exp (eik)∑L j=1 exp (eij) + ∑K k=1 exp (eik) , (5) HL−1i = aiV + ∑ k aikv̇ik, (6) where Qi, k̇ik, v̇ik ∈ RE, K,V ∈ R|x|×E, ei, ai ∈ R|x|. The hidden state output from the previous layer HL−1i is linearly projected into contextual queries, keys, and values Q,K,V separately. K is the number of retrieved images for each token, and E is the embedding dimension for both context and image representations. In order to generate image-specific attention keys and values, we adopt image-specific bias bKimg, b V img in linear projections and reuse the contextual projection weights WK ,WV to generate image-specific attention keys and values. 
Moreover, it is vital to mention that the image-specific attention keys and values are distinct for each query token, which is highly different from self-attention where the contextual keys and values are kept the same for each token. A secondary subscript k is used to denote different image representations for the i-th token. 3 EXPERIMENTS 3.1 PRETRAINING SETUP Text Corpus. We use the English corpus of CC-100 (Conneau et al., 2020) as the pre-training text corpus for both VALM and baseline GPT-2∗. CC-100 corpus is one of the largest high-quality Task Example Prompt Object / Pair Answer web-crawl text data. The English monolingual dataset of CC-100 contains about 55 billion tokens, stored in 301 GiBs disk storage. Due to the limitation of computing resources, we only consume 15% of CC-100 English monolingual corpus for pre-training VALM and baseline GPT-2∗. Image Data. We use the LAION Open Image Dataset (Schuhmann et al., 2021) as the image knowledge base for dense retrieval. To the best of our knowledge, the LAION Open Dataset is one of the world’s largest openly available image-text-pair dataset with 400 million samples. Due to the disk space limitation, we randomly select half of LAION images for the dense text-image retrieval, which is 200M images in total. Pre-training Hyperparameters. The proposed model deploys transformer decoder architecture with 124M trainable parameters. Hyperparameter setting and training details are presented in Appendix B.1. Retrieval Module. For the implementation of the dense text-image retrieval module, we use the faiss (Johnson et al., 2021) toolkit to construct the efficient index. The faiss index contains the whole 200M image keys and provides the efficient nearest neighbor search. For efficiency purposes, we quantize all image keys to 32 bytes. faiss index stores image keys in clusters to speed up the search, which requires the additional training process to learn the cluster centroids. We use 10M keys for learning 131k cluster centroids and search 32 clusters to find the nearest neighbors during inference. We load the faiss index to GPU to achieve efficient dense text-image retrieval. 3.2 VISUAL KNOWLEDGE INTENSIVE TASKS The visual information stored in retrieved images can play a useful role in providing relevant visual knowledge to help language models perform better grounded commonsense reasoning. Such helpful visual information can be colors, positions, sizes, spatial relations, etc. The task of object commonsense reasoning requires language models to predict the correct visual property for a given object. To excel these tasks typically require models to capture and utilize intensive visual knowledge without any explicit text demonstrations or external knowledge bases. Due to reporting biases, such descriptive text of object properties rarely appears in text corpora, likely making this type of knowledge absent from language models. Thus, those visual knowledge-intensive tasks are likely challenging for both language models and vision-language models. We first compared VALM and recent baselines on four object commonsense reasoning datasets, MEMORYCOLOR (Norlund et al., 2021), COLORTERMS (Bruni et al., 2012), OBJECTSHAPE (Zhang et al., 2022a) reasoning, and RELATIVESIZE (Bagherinezhad et al., 2016). In addition, we use another physical interaction question answering dataset (PIQA) (Bisk et al., 2020), to evaluate whether such visual commonsense knowledge could be implicitly encoded and utilized in the question answering process. 
In Table 1, we provide examples for different visual commonsense reasoning tasks. MEMORYCOLOR and COLORTERMS Dataset. The memory color of a concrete object is the typical color an object appears in, e.g. the color of banana is mostly memorized as yellow. Norlund et al. (2021) proposed this dataset for evaluating visual knowledge transfer in multi-modal language models. The dataset contains 109 objects paired with their memory color labels, an illustrating picture, and a descriptor. The COLORTERMS dataset also contains a list of common items manually labeled with their commonsense color. Both datasets hold a set of 11 color labels. OBJECTSHAPE Dataset. Zhang et al. (2022a) proposed a visual commonsense dataset with a set of object attributes like shape. The dataset of object shapes contains 140 objects with their shape label. The OBJECTSHAPE dataset consists of 12 shape categories. RELATIVESIZE Dataset. Bagherinezhad et al. (2016) proposed the RELATIVESIZE dataset, which includes a total of 486 object pairs between 41 physical objects. The task of object size reasoning requires the model to predict the size relations between two given objects, e.g., an ant is smaller than an elephant. The size information is again rarely included and described in text, while it is much easier to capture from the images. We convert the size relation reasoning task into a binary question-answering form with "Yes"/"No" answers. PHYSICAL INTERACTION QUESTION ANSWERING. Physical Interaction Question Answering (PIQA) is proposed and designed to investigate the physical commonsense knowledge of existing language models (Bisk et al., 2020). Completing such question answering tasks requires the language model to effectively utilize physical commonsense knowledge, i.e. knowledge of basic properties of the objects (flexibility, curvature, and being porous). Language models are supposed to first achieve the perception of objects and later encode such physical knowledge into the language modeling process. Each data sample in PIQA contains one goal and two solutions. The model is supposed to figure out and predict the more reasonable and appropriate solution between two candidates. Evaluation Setting. We evaluate VALM and all baseline methods in a zero-shot manner without any task-specific tuning. Specifically, VALM takes the input consisting of textual prompts and objects during inference and predicts the property label as the last token. The prompts used in evaluating object color, shape, and size reasoning performance are listed in Appendix Table 11. We use the top-1 accuracy as the evaluation metric and compute the average accuracy of all listed prompts to increase evaluation robustness. For PIQA, we follow Shwartz et al. (2020) to use the cross-entropy loss as the scorer for each potential solution score(sij) = CE([gi, sij ]), j ∈ [0, 1]. Then the solution with lower scores is selected as the prediction. The classification accuracy is used as the evaluation metric. Baselines. We consider both pretrained language-only and vision-language models as baselines. In particular, three strong language models are considered for comparison with VALM, including 1) GPT-2∗ (Radford et al., 2019); 2) BERT Devlin et al. (2019); and 3) CaptionBERT (Zhang et al., 2022a), a pre-trained auto-encoding language model on Oscar’s (Li et al., 2020) caption-based text data. Here, GPT-2∗ is re-implemented and trained from scratch using the identical training data, hyper-parameter settings, and model size as the proposed VALM. 
Additionally, we also compare VALM with prominent vision-language models, including 1) OSCAR (Li et al., 2020), a pre-trained vision-language model with learned representations that capture channel-invariant factors (i.e. object tags) at the semantic level; 2) VisualBERT (Li et al., 2019), a vision-language model with learned joint contextualized representations across vision and language; 3) CLIP (Radford et al., 2021), a vision-language system with one image encoder and one text encoder which are mapped into a same cross-modal embedding space. We directly use OSCAR and VisualBERT as auto-encoding language models for zero-shot evaluations. For CLIP, we first retrieve the corresponding image using the concatenated query prompt and the given object. Then, the dot-product similarity of the retrieved image vector and the candidate-aware text vector (including the query prompt, the given object, and one candidate label) is used to rank. Finally, the top-ranked candidate label is regarded as the prediction for evaluation. Results. The main results on four object commonsense reasoning datasets are summarized in Table 2. The two variants of VALM (K = 4, 8) significantly outperform all considered language models and vision-language models on object color and shape reasoning datasets, with an improvement of +14.50%, +13.56%, and +11.68% on MEMORYCOLOR, COLORTERMS, and OBJECTSHAPE respectively. Moreover, the proposed VALM with K = 4 achieves an encouraging result with +17.80% accuracy gain over the best baseline, VisualBERT on RELATIVESIZE. The substantial improvements on these datasets demonstrate that VALM takes full advantage of visual knowledge (object visual property) to complete the corresponding visual commonsense reasoning. Surprisingly, the zero-shot evaluation results of all auto-encoding language models and vision-language models are below 40% accuracy on object color and shape reasoning datasets. Although pretrained with aligned text-image pairs, those vision-language models cannot effectively utilize relevant visual knowledge from their jointly contextualized vision-language representations. Among language models, the auto-regressive PLMs significantly outperform auto-encoding PLMs, suggesting that auto-regressive PLMs are likely better at zero-shot reasoners. We also observe that retrieving more images for each token results in a performance drop for object size and shape reasoning. We attribute the degradation to the increased noise brought by augmenting with more images which causes model confusion when differentiating relevant visual information from irrelevant one. PIQA is a more challenging task that requires the model to reason useful implicit object properties and utilize these commonsense in the question answering process. The results on PIQA are presented in Table 3. As is shown, VALM outperforms all baseline language models with +2.11% accuracy improvement. The two variants of VALM achieve almost identical performance because the selection for the correct solution is based on the language modeling perplexity, indicating that the two variants demonstrate similar language modeling capability. 3.3 NATURAL LANGUAGE UNDERSTANDING AND LANGUAGE MODELING TASKS The casual language modeling pre-training task enables PLMs to naturally perform natural language understanding (NLU) and long-text modeling. Therefore, the zero-shot natural language understanding and language modeling performance are widely adopted to evaluate the capability of PLMs (Radford et al., 2019). 
Here, we evaluate VALM and the most relevant language model baseline, GPT-2∗, on four NLU datasets: SST-2 (Socher et al., 2013), MPQA (Wiebe et al., 2005), DBPedia (Auer et al., 2007), and AGNews (Zhang et al., 2015). Prediction accuracy is used as the evaluation metric. In addition, following Radford et al. (2019), the Wikitext-103 (Merity et al., 2017) and Lambada (Paperno et al., 2016) corpora are used to study language modeling performance in a zero-shot manner. We report perplexity on both corpora and additionally report last-word prediction accuracy on the Lambada corpus. The results on natural language understanding are summarized in Table 4. VALM achieves decent improvements on all four NLU tasks, indicating that the cross-modality knowledge learned in our model is likely helpful for typical natural language understanding. Thus, our visually-augmented language modeling framework can be further explored to enhance the natural language understanding ability of PLMs. Table 5 presents the results on the language modeling tasks. Again, VALM slightly improves perplexity on both datasets, by 0.68 on Wikitext-103 and 0.08 on Lambada. A similar trend is observed for last-word prediction accuracy on Lambada. Different from the previous visual knowledge-intensive commonsense reasoning tasks (subsection 3.2), we find that VALM models with different numbers of retrieved images (K = 8 vs. K = 4) perform similarly on the intrinsic language modeling task, suggesting that VALM can effectively ignore irrelevant visual information when a task is unlikely to benefit from visual knowledge. In other words, visual commonsense reasoning tasks require fine-grained fusion of text and images, i.e., locating the text object in the image set, extracting relevant visual information, and verbalizing the reasoning output. In contrast, a certain portion of text from general language modeling corpora is probably not visually related, so a coarse-grained fusion is sufficient (e.g., deciding whether the image set is useful at all), making the language modeling evaluation less affected by retrieval noise from the augmented images. 3.4 ABLATION STUDIES So far, we have empirically verified the effectiveness and superiority of VALM in utilizing visual knowledge for both visual knowledge-intensive tasks and traditional language understanding and modeling. To figure out how the visual information takes effect in our model, we focus on two questions here: 1) Is the model capable of using the retrieved image representations as "auxiliary" contexts? What is the effect of disabling such retrieved image representations during inference? To evaluate this, we design an ablation model that sets K = 0 and disables image retrieval and fusion during inference. 2) Does the model learn to leverage the visual knowledge in the retrieved images? What is the effect of directly augmenting randomly-retrieved image representations during inference? Here, an ablation model that retrieves random images as augmentations during inference is used for probing. The results of the two ablation models, Randomly-Retrieval and Disable-Retrieval at the inference stage, are listed in the first two rows of Table 6. Both changes to the image retrieval result in noticeable performance degradation on all evaluation tasks.
In particular, we find that disabling image retrieval and augmenting no images during inference also makes a notable difference to the language modeling perplexity on the two corpora, even though language modeling is more related to the pure text corpus than to the augmented images. This suggests that VALM effectively captures rich semantics from both pretraining sources, i.e., the text corpus as well as the augmented images. In other words, the improved zero-shot task transferability of VALM relies on visual information from the augmented images, which complements the linguistic knowledge learned via text-based self-supervised learning. The results of the Randomly-Retrieval ablation further illustrate that the capability of integrating visual knowledge cannot be achieved by augmenting language models with unrelated images; only context-relevant images make a true difference. VALM proposes a novel visual knowledge fusion layer with a joint context-image attention mechanism as the key component for fusing visual knowledge into the language model. The separate linear projection layers are important components that map contexts into different embedding spaces for attention keys and values. The proposed joint self-attention mechanism therefore admits three variants for generating image keys and values: establishing image-specific linear projections, reusing the contextual linear projections, and using only image-specific linear biases for the augmented images. We conduct an ablation study to evaluate the effect of these three alternatives for the image linear projections; here, the image-specific bias refers to b^K_img and b^V_img in Equation 3 for the augmented images. The results in Table 6 demonstrate that adopting an image-specific projection bias outperforms directly sharing the contextual projection bias, while introducing additional image-specific linear projection weights does not lead to further performance improvement. Thus, for implementation convenience and parameter efficiency, we add only additional linear biases for the augmented images and reuse the contextual linear weights when generating visual attention keys and values; the proposed VALM is shown in the last row of Table 6, which introduces only the image-specific bias and reuses the contextual weights in the attention key and value projection layers. 4 RELATED WORK Pre-trained Language Models. Pre-trained language models (PLMs) revolutionized NLP research. Enabled by the attention mechanism (Bahdanau et al., 2015) and the Transformer architecture (Vaswani et al., 2017), the state-of-the-art PLMs, including BERT (Devlin et al., 2019), GPT (Radford et al., 2018; 2019), RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020), T5 (Raffel et al., 2020), and OPT (Zhang et al., 2022b), have become the dominant approach in NLP tasks via the paradigm of pre-training on large-scale text corpora and fine-tuning on downstream tasks. With the exponential scaling of model size, a surprising fact emerged: PLMs like GPT-3 (Brown et al., 2020) can work as few-shot or zero-shot learners. Vision-Language Models. Vision-language tasks lie at the intersection of the vision and language modalities, e.g., visual question answering (Agrawal et al., 2015) and image captioning (Chen et al., 2015). ViLBERT (Lu et al., 2019) first proposed to generate image region features via object detection and then learn joint multi-modal representations via an interacting two-stream model.
OSCAR (Li et al., 2020) proposed to introduce object tags detected in images as anchor points to reduce the high demand for image-text alignments. Another significant pathway for VLMs is to construct a unified embedding space for texts and images and use textual prompts to extract task-specific labels during inference; the representative models are CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). Visually-Grounded Language Learning. Visually-grounded language learning is an emerging research topic in vision-language learning; the proposed VALM falls into this area together with prior works such as Vokenization (Tan & Bansal, 2020), VidLanKD (Tang et al., 2021), and iACE (Lu et al., 2022). Visual information and knowledge can be memorized by PLMs via a fusion layer or concatenated inputs. However, extracting and utilizing visual information efficiently and effectively is still difficult for uni-modal language models. Vokenization concatenated tokens and token-related images as "vokens", transferring sentence-level caption text to token-level "vokens" with a Vokenizer model. 5 CONCLUSION In this paper, we propose a multi-modal framework, VALM, that enables auto-regressive language modeling to effectively utilize visual knowledge. Specifically, an effective text-to-image retrieval module is designed to construct latent text-image alignments for visually-grounded language modeling. Empowered by pre-training, VALM achieves improved zero-shot task transfer on downstream tasks. Experiments on various visual knowledge-intensive tasks demonstrate the effectiveness of our model over recent vision-language models. VALM also achieves decent improvements over language models on multiple representative natural language understanding tasks. For future work, we plan to adapt the model architecture to encoder-only and encoder-decoder Transformer backbones. We are also interested in more input modalities for VALM. ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for the helpful comments. We appreciate Zewen Chi and Hangbo Bao for the fruitful discussions, and Yaru Hao for helpful suggestions on evaluation benchmarks. A ADDITIONAL RESULTS A.1 TIME-COST EFFECTS OF RETRIEVAL AND IMAGESET SIZE Introducing efficient image retrieval on GPU brings a linear increase in inference time cost (about 2.1 times that of the text-only GPT-2∗ baseline), as shown in Table 7. This cost becomes negligible for larger language models because the forward cost grows many times with model size while the retrieval cost does not. The retrieval cost can be further reduced by searching fewer clusters or decreasing the number of encoding bytes for the approximate image keys, with a minor performance trade-off. Moreover, efficient nearest neighbor search is an active research area (Guo et al., 2020), and we could try other efficient search tools to accelerate retrieval. As the introduced retrieval time cost is proportional to the size of the image set used for dense retrieval, we also provide details on the relationship between retrieval time cost and image set size in Table 7. As Table 7 shows, there is no significant performance decrease when the image set is reduced from the original 200M down to 10M. Since the 10M set is still large and sufficient for providing enough visual knowledge, we will consider deploying a 10M-image set to train VALM for potential real-time industry applications.
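For concreteness, a quantized, cluster-based image index like the one described in Section 3.1 (keys quantized to 32 bytes, ~131k centroids, 32 clusters probed per query) could be built and queried with faiss roughly as follows. The use of `IndexIVFPQ`, the key dimensionality, and the stand-in data are illustrative assumptions, not the released implementation.

```python
import faiss
import numpy as np

d = 768            # dimensionality of the CLIP image keys (assumed)
nlist = 131072     # ~131k cluster centroids, as reported
m, nbits = 32, 8   # 32 sub-quantizers x 8 bits = 32-byte codes per key

quantizer = faiss.IndexFlatIP(d)  # coarse quantizer, dot-product metric
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits,
                         faiss.METRIC_INNER_PRODUCT)

# Stand-in data; the paper trains centroids on 10M CLIP image keys
# and adds all 200M keys to the index.
keys = np.random.rand(1_000_000, d).astype("float32")
index.train(keys)
index.add(keys)
# index = faiss.index_cpu_to_all_gpus(index)  # the paper loads the index onto GPU

index.nprobe = 32  # probe 32 clusters per query, as reported
query = np.random.rand(1, d).astype("float32")
scores, ids = index.search(query, 8)  # top-K = 8 nearest image keys
```

Searching fewer clusters (smaller `nprobe`) or using shorter codes (smaller `m`) trades a little accuracy for the lower retrieval cost discussed above.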
A.2 COMPARISONS WITH ADDITIONAL STRONG BASELINES We compare VALM with Vokenization (Tan & Bansal, 2020) on the four visual-knowledge-intensive tasks; the results are shown in Table 8. In addition, we evaluate large language models on the visual-knowledge-intensive tasks for stronger and fairer comparisons. We evaluate OPT (1.3B parameters) (Zhang et al., 2022b) on these tasks, with results also presented in Table 8. VALM (124M parameters) significantly outperforms OPT-1.3B on all four datasets, which further demonstrates both the difficulty of these visual-knowledge-intensive tasks and the effectiveness of our method. A.3 SCALING EFFECT OF VALM We train a 355M-parameter model (GPT-2 Medium size) of VALM (K = 8) to evaluate the effect of scaling up model parameters. The results are presented in Table 9: model performance is significantly improved on all four visual knowledge-intensive datasets. We will seek more computation resources to train larger VALM models. A.4 ABLATION STUDY OF K We further conduct an ablation study that sets the number of augmented images to K = 1 for VALM, which is very similar to CLIP (Radford et al., 2021) inference. The results are presented in Table 10. VALM (K = 1) significantly outperforms CLIP on all visual-knowledge-intensive tasks, validating the effectiveness of our method. A.5 CASE STUDIES We provide a case study on the object color reasoning task for VALM. To reason about the correct commonsense color of the objects sky and parsley, VALM takes as input the combination of the prompt and the object, "the color of [object] is". We present the top-4 retrieved images for each textual query in Figure 2. A.6 COLORIZATION EFFECT We conduct another ablation case study to evaluate the effect of image color changes on the object color reasoning task. Specifically, VALM predicts the color label of an apple as red based on the commonsense in both the context and the retrieved images. The original prediction probability distribution is shown as the blue bars in Figure 3(b). We then replace the retrieved images with K unusual images of green apples from the OBJECTCOLORIZATION dataset (Anwar et al., 2020), shown in Figure 3(a). The predicted probability distribution over the 11 color types given the colorized objects is shown as the orange bars in Figure 3(b). We observe a clear probability increase for the color green and a decrease for red, which is consistent with the colorization. This study demonstrates that VALM is capable of extracting useful visual knowledge from retrieved object images and inferring correct semantics based on it: given retrieved object images in different colors, VALM extracts the corresponding color knowledge and adopts it in its semantic inference. B EXPERIMENTAL DETAILS B.1 PRE-TRAINING HYPERPARAMETERS AND TRAINING DETAILS The implementation of the models and all experiments is based on the fairseq (Ott et al., 2019) toolkit. The proposed model deploys a Transformer decoder architecture with 124M trainable parameters, in which n_layer = 12, n_head = 12, d_embed = 768. We use the Adam (Kingma & Ba, 2015) optimizer (β1 = 0.9, β2 = 0.98) and train all models with lr = 0.0005, t_warmup = 4000, dropout = 0.1, bsz = 128, len = 512. The layer normalization over the retrieved image keys is initialized with ϵ = 1e-5.
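For reference, the optimizer and warmup hyperparameters above could be wired up as follows in PyTorch. The model is a stand-in module and the post-warmup decay policy is an assumption (the actual implementation uses fairseq's trainer), so treat this as a sketch, not the released training code.

```python
import torch
import torch.nn as nn

# Stand-in module; the real model is a 124M-parameter Transformer decoder
# (n_layer = 12, n_head = 12, d_embed = 768) built in fairseq.
model = nn.Linear(768, 768)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.98))

def warmup(step: int, t_warmup: int = 4000) -> float:
    # Linear warmup to the peak LR over the first 4000 steps; the decay
    # policy after warmup is an assumption, not stated in the paper.
    return min(1.0, (step + 1) / t_warmup)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup)
# Training additionally uses dropout = 0.1, bsz = 128, len = 512, 500k steps.
```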
VALM reuses the lower-cased byte-pair encoding (BPE) (Sennrich et al., 2016) vocabulary of the CLIP text encoder, with a vocabulary size of 49,152. The proposed VALM and the re-implemented GPT-2∗ are trained for 500k steps on 16 Nvidia Tesla V100-SXM2-32GB GPUs. The encoded 200M-image knowledge base takes up 274 GiB of disk storage, and the trained faiss approximate retrieval index takes another 14 GiB. B.2 PROBE TEMPLATES We present all zero-shot query prompts and labels for the 4 object commonsense reasoning datasets and 4 natural language understanding benchmarks in Table 11.
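To illustrate how such probe templates drive the zero-shot property reasoning of Section 3.2, below is a minimal sketch that scores each candidate label as the next token after a prompt. The Hugging-Face-style model interface and the example prompt string are illustrative assumptions; the actual templates are those in Table 11.

```python
import torch

@torch.no_grad()
def predict_label(lm, tokenizer, prompt: str, labels):
    """Zero-shot probing: pick the label whose (first) token has the highest
    next-token probability after the prompt; single-token labels assumed."""
    label_ids = [tokenizer(" " + lab).input_ids[0] for lab in labels]
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    next_logits = lm(ids).logits[0, -1]  # logits over the next token
    return labels[int(next_logits[label_ids].argmax())]

# Per the evaluation setting, accuracy is computed for each prompt template
# separately and then averaged over templates, e.g.:
# predict_label(lm, tok, "the color of banana is", COLOR_LABELS)
```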
1. What is the main contribution of the paper regarding language models and visual knowledge? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to incorporate external visual knowledge and its limitations in evaluating retrieved images and impact on runtime? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any suggestions or comments provided by the reviewer regarding the paper's presentation, minor issues, and potential applications of the proposed method?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper improves a language model's performance on pure language tasks about visual concepts by augmenting its internal representation with dynamically retrieved images. (motivation) Global context (external knowledge about entities, relations, etc.) has been incorporated into pre-trained language models (PLMs), but so far it has not been visual. This paper incorporates external visual knowledge into their PLM (called VaLM) to prevent VaLM from hallucinating inconsistent statements and give VaLM visual knowledge that is harder to obtain from text. (approach) VaLM is a standard decoder-style PLM, used like any standard PLM and trained on the CC-100 dataset (text only). The difference is that the second-to-last self-attention layer is replaced with a Visual Knowledge Fusion Layer (VKFL). First, the VKFL uses CLIP to retrieve K (4 or 8) relevant images from a database of 200M natural images (from LAION), where relevance is according to the text input VaLM has seen so far. Second, the VKFL fuses these relevant images with VaLM's hidden state at the previous layer to produce the output hidden state - the tokens are the same, they just have additional visual knowledge. (experiments) Results show that: when asked to complete text prompts about the typical color, shape, and relative size of objects, VaLM outperforms both PLMs and pretrained vision-language models by a large margin; VaLM outperforms baselines at physical commonsense QA (PIQA); VaLM is at parity with and sometimes better than baselines at traditional language understanding and language modeling tasks. Ablations show disabling image retrieval or retrieving random (not relevant) images hurts performance. Ablations also justify design decisions in the attention mechanism. VaLM improves language modeling performance by adding global image context internally. Strengths And Weaknesses Strengths Incorporating visual background knowledge in a language-only model makes a lot of sense and seems novel. Furthermore, VaLM is a simple and effective realization of the idea in practice. The writing is clear and straightforward, without frills. Weaknesses There are a couple of areas where the model should have been evaluated: There is limited analysis of the images retrieved by the retrieval module. Using the figure 1 example, the idea is that it will say the sky is blue because the retrieved images tend to be blue. Are the retrieved images consistent with its answer? That is, given that it says the sky is blue, does it actually retrieve images where the sky is blue? If the retrieved images are manually substituted with alternative images where the sky is green, then does VaLM say the sky is green? It would be good to have a systematic evaluation of the retrieved images, but it would also help to simply provide examples. A related concern is about the images recalled for each piece of a sentence. As I understand it, the visual knowledge fusion layer recalls a different set of K images for each sentence part (e.g. different images for "The color", "The color of", "The color of sky", etc.). How do the recalled images vary over the course of the sentence? Do they stay the same when non-semantic words like "of" are used? Do they change appropriately when semantic words like "sky" are used? There is no evaluation of how this impacts the model's runtime. How long does a forward pass take when generating every word requires a kNN lookup?
There are also some points where the presentation could be clearer: The motivation in 3.2 ("object properties rarely appear in text corpora") seems like it should also belong in the introduction. To me this is a key reason to expect this approach to be helpful. The text doesn't mention what the "Majority" row in Table 4 means. Finally, here are some minor suggestions and comments: The attention mechanism over K retrieved images is novel, but a fairly straightforward extension of the attention mechanism once one has already decided to retrieve relevant images. The major contribution is in retrieving relevant images. Can VaLM be used as a vision-language model? Essentially, what if it was given vision and language tasks (e.g., VQA) and the image retrieval module was replaced with a module that simply returned the image(s) associated with the VL task example? Would it perform well at the VL task? In general, does it treat the retrieved images as global / abstract context, or does it also consider them as specific context local to those images? Clarity, Quality, Novelty And Reproducibility Novelty and Significance: The idea is novel, timely and very relevant to recent progress in transformers. It provides a crisp solution to a clear problem. There are also many potential directions to both build on this work and apply it to improve current state-of-the-art solutions. Quality: The experiments provide strong support that VaLM works well, though further investigation of how the retrieval module is working would be good to add to this paper. Clarity: The presentation is very simple and clear. Reproducibility: The information in the paper is enough to implement the model. It provides hyperparameter details as well as the prompts used for evaluation. A code release is also promised.
ICLR
Title Visually-Augmented Language Modeling Abstract Human language is grounded on multimodal knowledge including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a novel latent text-image alignment method via an image retrieval module to fetch corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning about object commonsense, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM. 1 INTRODUCTION Large-scale pre-trained language models (PLMs) have achieved great success in advancing the state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). PLM self-supervised training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g., GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of the pre-training corpus. With billions of parameters, these PLMs exhibit remarkable ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), text chunks (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. The incorporation of various knowledge resources into PLMs mitigates the drawbacks of local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks. Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, which refers to inconsistent or false statements generated by PLMs with respect to world knowledge (Logan et al., 2019). For instance, a PLM may predict the color of the sky as red only due to the statistical contextual correlations between the tokens "color" and "red" in the pre-training corpus, neglecting the commonsense facts. In this paper, we propose a novel framework to enable language model pre-training to take full advantage of both local text context and corresponding visual knowledge. Recent work on joint vision-language model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and image, e.g.
supervised image captioning data, which limits cross-modality fusion during fine-tuning/inference over text without accompanying images. As a consequence, in our experiments (Section 3), those prominent VLMs are found to achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image alignment mechanism via an image retrieval module that gathers related images for each token as visual augmentation. To achieve better language-vision grounding, we propose a visual knowledge fusion layer to enable joint attention across the visually-augmented context, including both textual tokens and retrieved images. On this basis, we build a Visually-augmented Language Model, VALM, with flexible on-the-fly visual knowledge enhancement. We evaluate the effectiveness of the proposed VALM on various commonsense reasoning and language-only benchmarks. Experimental results demonstrate that our model consistently outperforms the unimodal and multimodal baselines in terms of object commonsense reasoning. Remarkably, our method substantially improves accuracy by +14.50%, +17.80%, and +11.68% on the MEMORYCOLOR, RELATIVESIZE, and OBJECTSHAPE datasets, respectively. Additional experiments on natural language understanding tasks also validate that the proposed visually-augmented language modeling framework can help improve the fundamental natural language understanding capability of PLMs. Our contributions are summarized as follows: • We propose a novel visually-augmented causal language model, VALM, that enables the language model to utilize visual knowledge flexibly and effectively. Through the proposed visual-knowledge-fused language modeling, VALM is capable of accomplishing tasks with a high demand for cross-modality knowledge, such as visual commonsense reasoning. • We design a framework to construct flexible on-the-fly text-image alignments and fuse the augmented images into the context of language modeling. We implement an image retrieval module that queries a large-scale cached image database with token-level representations and retrieves the nearest neighbors as the augmentation. With the proposed visual knowledge fusion layer, VALM can effectively take full advantage of both the language information in the local text context and the visual information in the retrieved images. • Experimental results demonstrate that VALM effectively alleviates the hallucination problem of PLMs by introducing visual knowledge into language model pre-training. VALM achieves significant performance improvements in inferring commonsense object properties. 2 METHODS We propose a novel multi-modal pre-trained language model augmented with retrieved images, named VALM. The architecture of VALM is presented in Figure 1. VALM augments each token in the pre-training text corpus with k retrieved related images, using an image retrieval module to retrieve the corresponding images for each token. The image retrieval module deploys a pre-trained CLIP model, which is capable of unifying the textual query and image candidates into a joint embedding space. VALM constructs a cached large-scale image knowledge base using the image encoder of CLIP and uses the contextual representation of each token as a textual query to search for its nearest neighbors in the image knowledge base. With the unified text-image embedding space provided by CLIP, the retrieved nearest neighbors are taken as the augmented images of each token, constructing the text-image alignments.
We then propose a visual knowledge fusion layer to enable the learned hidden states to attend to both texts and augmented images. 2.1 VALM: VISUALLY-AUGMENTED LANGUAGE MODELING Given an input text sequence {x_i}_{i=1}^N, the embedding layer first encodes the input into the embedding space and outputs the initial hidden states H^0 to the successive Transformer decoder layers. The proposed VALM then encodes H^0 into visual-knowledge-fused contextual representations at different levels, H = {H^l}_{l=1}^L, via L − 1 Transformer decoder layers and one special visual knowledge fusion layer. Each Transformer decoder layer is identical to Vaswani et al. (2017) and outputs the contextual representations at a different semantic level given the representations from the previous layer, H^l = Layer_l(H^{l−1}), l ∈ [1, L]. The visual knowledge fusion layer is proposed as a variant of the Transformer decoder layer that incorporates visual knowledge into contextual learning via joint attention over both text contexts and augmented images; it is injected as the second-to-last layer of VALM. The visual knowledge is stored in the corresponding augmented image representations, obtained from the image retrieval module as {z_{ij}}_{j=1}^K = f_rt(x_i). The visual knowledge fusion layer then takes as input both the contextual representations from the previous layer and the augmented image sets, and outputs visual-knowledge-fused contextual representations, H^{L−1} = VisualLayer({H^{L−2}_i, {z_{ij}}_{j=1}^K}_{i=1}^N). Finally, the output contextual representations are passed into the output projection layer, and a softmax function computes the token probability, P(x_i | x_1, ..., x_{i−1}) = softmax(W H^L + b). We conduct generative unsupervised pre-training (Radford et al., 2019) for VALM on a large-scale text corpus. The training objective of VALM is the standard left-to-right language modeling objective, which maximizes the likelihood of the next token given the left context:

max Σ_{x∈D} Σ_{i=1}^{|x|} log P(x_i | x_1, ..., x_{i−1}),   (1)

where x represents a sentence randomly sampled from the large-scale pre-training text corpus D.
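The overall computation above could be sketched as follows; the layer modules and the retrieval function are illustrative placeholders (the released implementation is built in fairseq), so this shows only where the fusion layer sits in the stack.

```python
import torch.nn as nn

class VALM(nn.Module):
    """Hypothetical skeleton: L-1 standard decoder layers plus one visual
    knowledge fusion layer injected as the second-to-last layer."""
    def __init__(self, embed, decoder_layers, fusion_layer, out_proj):
        super().__init__()
        self.embed = embed                           # token embedding -> H^0
        self.layers = nn.ModuleList(decoder_layers)  # the L-1 standard layers
        self.fusion = fusion_layer                   # visual knowledge fusion layer
        self.out_proj = out_proj                     # W H^L + b

    def forward(self, tokens, retrieve):
        h = self.embed(tokens)          # H^0
        for layer in self.layers[:-1]:  # layers 1 .. L-2
            h = layer(h)
        z = retrieve(tokens)            # {z_ij}: (seq, K, d) augmented image reps
        h = self.fusion(h, z)           # H^{L-1}: joint text-image attention
        h = self.layers[-1](h)          # final standard layer -> H^L
        return self.out_proj(h)         # logits; softmax gives P(x_i | x_<i)
```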
2.2 IMAGE RETRIEVAL The visual knowledge corresponding to a specific token is stored in its correlated images. To provide this visual knowledge, VALM deploys an image retrieval module, denoted f_rt(·), to retrieve the augmented images. Achieving multi-modality text-image retrieval requires a discriminator that assesses the correlation of every image in an extremely large-scale open image knowledge base with a specific text representation. CLIP (Radford et al., 2021) proposed a simple yet effective method to connect images and texts in a unified multi-modal embedding space, and we directly deploy the pre-trained CLIP model to encode images and texts, enabling nearest-neighbor text-image retrieval. Specifically, the pre-trained CLIP model we use in the image retrieval module consists of a ResNet-50x16 (He et al., 2016) image encoder and a Transformer (Vaswani et al., 2017) text encoder. We use the CLIP model only as the backbone of the retrieval module, and the CLIP parameters are not updated during the pre-training of VALM. Image Knowledge Base Creation. The image knowledge base of the retrieval module is a cache of image keys, i.e., high-level visual representations of images. Given an image z ∈ D_img, such a visual representation is obtained by feeding z to the pre-trained CLIP image encoder. The whole image knowledge base Z is then constructed by taking the output hidden states f_{θ_I}(z) as image keys: Z = ∪_{z∈D_img} {f_{θ_I}(z)}, where θ_I denotes the image encoder parameters. Textual Query. We take the contextual representation of each token as the query for the nearest-neighbor search. For each sentence x ∈ D, the contextual representation of the i-th token is computed as f_{θ_T}(x_{<i}), where θ_T denotes the text encoder parameters. As the input sequence length of VALM generally exceeds the 75-token input limit of the CLIP text encoder, the long context x_{<i} is cut into a context chunk y_i that fits the CLIP text encoder:

y_i = x_{[t, i−1]} if i − t < 75, and y_i = x_{[i−75, i−1]} if i − t ≥ 75,

where t is the index of the closest stop character before the i-th token. The textual query for the i-th token is then computed as its context-chunk representation f_{θ_T}(y_i). kNN Text-Image Retrieval. The retrieval module uses this contextual representation to search the cached image knowledge base Z and retrieves the k nearest-neighbor image keys w.r.t. dot-product distance. As the pre-trained CLIP model has learned a joint embedding space for the text and image domains, the retrieved images {z_{ij}}_{j=1}^K are regarded as the top-k relevant images to the query. 2.3 VISUAL KNOWLEDGE FUSION With the help of the image retrieval module, each token in the pre-training corpus is augmented with k corresponding images, represented in the same joint embedding space as the texts. These augmented image representations are directly treated as auxiliary "context" in the learning process. As the conventional Transformer decoder layer uses multi-head self-attention (Vaswani et al., 2017) to learn contextual representations, we extend it to a joint-attention mechanism and propose a novel visual knowledge fusion layer that lets each token attend to both contexts and retrieved images jointly. In addition, due to the inconsistency in magnitude and distribution between the contextual hidden states and the retrieved image representations, we apply Layer Normalization (Ba et al., 2016), denoted LN_img, to the K retrieved image representations to alleviate this inconsistency. Assume that the hidden-state output for the i-th token is h_i and the corresponding retrieved images are {z_{ij}}_{j=1}^K; the hidden state H^{L−1}_i is computed as:

Q = H^{L−2} W^Q + b^Q,   K = H^{L−2} W^K + b^K,   V = H^{L−2} W^V + b^V,   (2)
k̇_{ik} = LN_img(z_{ik}) W^K + b^K_img,   v̇_{ik} = LN_img(z_{ik}) W^V + b^V_img,   (3)
e_i = Q_i K^T / √d,   a_i = exp(e_i) / ( Σ_{j=1}^{|x|} exp(e_{ij}) + Σ_{k=1}^{K} exp(e_{ik}) ),   (4)
e_{ik} = Q_i k̇_{ik}^T / √d,   a_{ik} = exp(e_{ik}) / ( Σ_{j=1}^{|x|} exp(e_{ij}) + Σ_{k=1}^{K} exp(e_{ik}) ),   (5)
H^{L−1}_i = a_i V + Σ_{k=1}^{K} a_{ik} v̇_{ik},   (6)

where Q_i, k̇_{ik}, v̇_{ik} ∈ R^E, K, V ∈ R^{|x|×E}, and e_i, a_i ∈ R^{|x|}. The hidden-state output from the previous layer, H^{L−2}, is linearly projected into the contextual queries, keys, and values Q, K, V separately. K (in {z_{ij}}_{j=1}^K) is the number of retrieved images for each token, and E is the embedding dimension of both context and image representations. To generate the image-specific attention keys and values, we adopt the image-specific biases b^K_img, b^V_img in the linear projections and reuse the contextual projection weights W^K, W^V.
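The joint attention of Equations (2)-(6) could be implemented roughly as follows (single attention head, batch dimension omitted for clarity); this is a minimal sketch under the paper's description, not the released code.

```python
import math
import torch
import torch.nn as nn

class VisualKnowledgeFusion(nn.Module):
    """Joint attention over |x| context positions and K retrieved images
    (Eqs. 2-6): image keys/values reuse W^K/W^V with image-specific biases."""
    def __init__(self, d: int):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.b_k_img = nn.Parameter(torch.zeros(d))  # b^K_img in Eq. 3
        self.b_v_img = nn.Parameter(torch.zeros(d))  # b^V_img in Eq. 3
        self.ln_img = nn.LayerNorm(d, eps=1e-5)
        self.d = d

    def forward(self, h, z):
        # h: (seq, d) hidden states H^{L-2}; z: (seq, K, d) retrieved image reps.
        q, k, v = self.q(h), self.k(h), self.v(h)                # Eq. 2
        zn = self.ln_img(z)
        k_img = zn @ self.k.weight.T + self.b_k_img              # Eq. 3 (shared W^K)
        v_img = zn @ self.v.weight.T + self.b_v_img              # Eq. 3 (shared W^V)
        e_ctx = q @ k.T / math.sqrt(self.d)                      # Eq. 4 scores (seq, seq)
        e_img = torch.einsum("id,ikd->ik", q, k_img) / math.sqrt(self.d)  # Eq. 5 scores
        # Jointly normalize over context and image scores (causal mask omitted).
        a = torch.softmax(torch.cat([e_ctx, e_img], dim=-1), dim=-1)
        a_ctx, a_img = a[:, : h.size(0)], a[:, h.size(0):]
        return a_ctx @ v + torch.einsum("ik,ikd->id", a_img, v_img)  # Eq. 6
```

Note that, unlike standard self-attention, the image keys and values differ per query position i, and the softmax normalizes jointly over the |x| context scores and the K image scores; a causal mask over the context (omitted above) is needed in practice.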
Moreover, it is worth noting that the image-specific attention keys and values are distinct for each query token, which differs from self-attention, where the contextual keys and values are shared across query tokens. A secondary subscript k is used to denote the different image representations for the i-th token. 3 EXPERIMENTS 3.1 PRETRAINING SETUP Text Corpus. We use the English corpus of CC-100 (Conneau et al., 2020) as the pre-training text corpus for both VALM and the baseline GPT-2∗. CC-100 is one of the largest high-quality web-crawled text datasets. The English monolingual dataset of CC-100 contains about 55 billion tokens, stored in 301 GiB of disk storage. Due to the limitation of computing resources, we consume only 15% of the CC-100 English monolingual corpus for pre-training VALM and the baseline GPT-2∗. Image Data. We use the LAION Open Image Dataset (Schuhmann et al., 2021) as the image knowledge base for dense retrieval. To the best of our knowledge, the LAION Open Dataset is one of the world's largest openly available image-text-pair datasets, with 400 million samples. Due to the disk space limitation, we randomly select half of the LAION images for dense text-image retrieval, 200M images in total. Pre-training Hyperparameters. The proposed model deploys a Transformer decoder architecture with 124M trainable parameters. The hyperparameter settings and training details are presented in Appendix B.1. Retrieval Module. For the dense text-image retrieval module, we use the faiss (Johnson et al., 2021) toolkit to construct an efficient index. The faiss index contains all 200M image keys and provides efficient nearest neighbor search. For efficiency, we quantize all image keys to 32 bytes. The faiss index stores image keys in clusters to speed up the search, which requires an additional training process to learn the cluster centroids. We use 10M keys to learn 131k cluster centroids and search 32 clusters to find the nearest neighbors during inference. We load the faiss index onto the GPU to achieve efficient dense text-image retrieval. 3.2 VISUAL KNOWLEDGE INTENSIVE TASKS The visual information stored in retrieved images can provide relevant visual knowledge to help language models perform better grounded commonsense reasoning. Such helpful visual information can be colors, positions, sizes, spatial relations, etc. The task of object commonsense reasoning requires language models to predict the correct visual property of a given object. Excelling at these tasks typically requires models to capture and utilize intensive visual knowledge without any explicit text demonstrations or external knowledge bases. Due to reporting bias, such descriptive text of object properties rarely appears in text corpora, likely making this type of knowledge absent from language models. Thus, these visual knowledge-intensive tasks are likely challenging for both language models and vision-language models. We first compare VALM and recent baselines on four object commonsense reasoning datasets: MEMORYCOLOR (Norlund et al., 2021), COLORTERMS (Bruni et al., 2012), OBJECTSHAPE (Zhang et al., 2022a), and RELATIVESIZE (Bagherinezhad et al., 2016). In addition, we use a physical interaction question answering dataset, PIQA (Bisk et al., 2020), to evaluate whether such visual commonsense knowledge can be implicitly encoded and utilized in the question answering process.
In Table 1, we provide examples for different visual commonsense reasoning tasks. MEMORYCOLOR and COLORTERMS Dataset. The memory color of a concrete object is the typical color an object appears in, e.g. the color of banana is mostly memorized as yellow. Norlund et al. (2021) proposed this dataset for evaluating visual knowledge transfer in multi-modal language models. The dataset contains 109 objects paired with their memory color labels, an illustrating picture, and a descriptor. The COLORTERMS dataset also contains a list of common items manually labeled with their commonsense color. Both datasets hold a set of 11 color labels. OBJECTSHAPE Dataset. Zhang et al. (2022a) proposed a visual commonsense dataset with a set of object attributes like shape. The dataset of object shapes contains 140 objects with their shape label. The OBJECTSHAPE dataset consists of 12 shape categories. RELATIVESIZE Dataset. Bagherinezhad et al. (2016) proposed the RELATIVESIZE dataset, which includes a total of 486 object pairs between 41 physical objects. The task of object size reasoning requires the model to predict the size relations between two given objects, e.g., an ant is smaller than an elephant. The size information is again rarely included and described in text, while it is much easier to capture from the images. We convert the size relation reasoning task into a binary question-answering form with "Yes"/"No" answers. PHYSICAL INTERACTION QUESTION ANSWERING. Physical Interaction Question Answering (PIQA) is proposed and designed to investigate the physical commonsense knowledge of existing language models (Bisk et al., 2020). Completing such question answering tasks requires the language model to effectively utilize physical commonsense knowledge, i.e. knowledge of basic properties of the objects (flexibility, curvature, and being porous). Language models are supposed to first achieve the perception of objects and later encode such physical knowledge into the language modeling process. Each data sample in PIQA contains one goal and two solutions. The model is supposed to figure out and predict the more reasonable and appropriate solution between two candidates. Evaluation Setting. We evaluate VALM and all baseline methods in a zero-shot manner without any task-specific tuning. Specifically, VALM takes the input consisting of textual prompts and objects during inference and predicts the property label as the last token. The prompts used in evaluating object color, shape, and size reasoning performance are listed in Appendix Table 11. We use the top-1 accuracy as the evaluation metric and compute the average accuracy of all listed prompts to increase evaluation robustness. For PIQA, we follow Shwartz et al. (2020) to use the cross-entropy loss as the scorer for each potential solution score(sij) = CE([gi, sij ]), j ∈ [0, 1]. Then the solution with lower scores is selected as the prediction. The classification accuracy is used as the evaluation metric. Baselines. We consider both pretrained language-only and vision-language models as baselines. In particular, three strong language models are considered for comparison with VALM, including 1) GPT-2∗ (Radford et al., 2019); 2) BERT Devlin et al. (2019); and 3) CaptionBERT (Zhang et al., 2022a), a pre-trained auto-encoding language model on Oscar’s (Li et al., 2020) caption-based text data. Here, GPT-2∗ is re-implemented and trained from scratch using the identical training data, hyper-parameter settings, and model size as the proposed VALM. 
Additionally, we also compare VALM with prominent vision-language models, including 1) OSCAR (Li et al., 2020), a pre-trained vision-language model with learned representations that capture channel-invariant factors (i.e. object tags) at the semantic level; 2) VisualBERT (Li et al., 2019), a vision-language model with learned joint contextualized representations across vision and language; 3) CLIP (Radford et al., 2021), a vision-language system with one image encoder and one text encoder which are mapped into a same cross-modal embedding space. We directly use OSCAR and VisualBERT as auto-encoding language models for zero-shot evaluations. For CLIP, we first retrieve the corresponding image using the concatenated query prompt and the given object. Then, the dot-product similarity of the retrieved image vector and the candidate-aware text vector (including the query prompt, the given object, and one candidate label) is used to rank. Finally, the top-ranked candidate label is regarded as the prediction for evaluation. Results. The main results on four object commonsense reasoning datasets are summarized in Table 2. The two variants of VALM (K = 4, 8) significantly outperform all considered language models and vision-language models on object color and shape reasoning datasets, with an improvement of +14.50%, +13.56%, and +11.68% on MEMORYCOLOR, COLORTERMS, and OBJECTSHAPE respectively. Moreover, the proposed VALM with K = 4 achieves an encouraging result with +17.80% accuracy gain over the best baseline, VisualBERT on RELATIVESIZE. The substantial improvements on these datasets demonstrate that VALM takes full advantage of visual knowledge (object visual property) to complete the corresponding visual commonsense reasoning. Surprisingly, the zero-shot evaluation results of all auto-encoding language models and vision-language models are below 40% accuracy on object color and shape reasoning datasets. Although pretrained with aligned text-image pairs, those vision-language models cannot effectively utilize relevant visual knowledge from their jointly contextualized vision-language representations. Among language models, the auto-regressive PLMs significantly outperform auto-encoding PLMs, suggesting that auto-regressive PLMs are likely better at zero-shot reasoners. We also observe that retrieving more images for each token results in a performance drop for object size and shape reasoning. We attribute the degradation to the increased noise brought by augmenting with more images which causes model confusion when differentiating relevant visual information from irrelevant one. PIQA is a more challenging task that requires the model to reason useful implicit object properties and utilize these commonsense in the question answering process. The results on PIQA are presented in Table 3. As is shown, VALM outperforms all baseline language models with +2.11% accuracy improvement. The two variants of VALM achieve almost identical performance because the selection for the correct solution is based on the language modeling perplexity, indicating that the two variants demonstrate similar language modeling capability. 3.3 NATURAL LANGUAGE UNDERSTANDING AND LANGUAGE MODELING TASKS The casual language modeling pre-training task enables PLMs to naturally perform natural language understanding (NLU) and long-text modeling. Therefore, the zero-shot natural language understanding and language modeling performance are widely adopted to evaluate the capability of PLMs (Radford et al., 2019). 
Here, we evaluate VALM and the most relevant language model baseline GPT-2∗ on four NLU datasets, SST-2 (Socher et al., 2013), MPQA (Wiebe et al., 2005), DBPeida (Auer et al., 2007), and AGNews (Zhang et al., 2015). The prediction accuracy is used as the evaluation metric. In addition, following Radford et al. (2019), Wikitext-103 (Merity et al., 2017) and Lambada corpus (Paperno et al., 2016) are considered to study the language modeling performance in a zero-shot manner. We report perplexity for two corpora and also report last-word prediction accuracy for Lambada corpus. The results on natural language understanding are summarized in Table 4. It is easy to see that VALM achieves decent improvements on all four NLU tasks, indicating that the cross-modality knowledge learned in our model is likely helpful for typical natural language understanding. Thus, our visually-augmented language modeling framework can be further explored to enhance the natural language understanding ability of PLMs. Table 5 illustrates the results of language modeling tasks. Again, VALM slightly improves the perplexity on both datasets, +0.68 on Wikitext-103 and +0.08 on Lambda. A similar trend is observed for the final word prediction accuracy on Lambada. Different from previous visual knowledge intensive commonsense reasoning tasks (subsection 3.2), we find that VALM models with different numbers of retrieved images (K = 8 vs K = 4) perform similarly on the intrinsic language modeling task, suggesting that VALM can effectively ignore irrelevant visual information when the task is unlikely to benefit from visual knowledge. In other words, visual commonsense reasoning tasks require more fine-grained fusions of text and image, i.e. locating the text object in the image set, extracting relevant vision information, and verbalizing reasoning output. In contrast, a certain portion of text from general language modeling corpora s is probably not visually related. Thus, only a coarse-grained fusion is sufficient here (e.g. deciding if the image set is useful), making the language modeling evaluation less affected by the retrieval noise from augmented images. 3.4 ABLATION STUDIES So far, we empirically verify the effectiveness and superiority of VALM in utilizing visual knowledge for both visual knowledge-intensive tasks and traditional language understanding and modeling. To figure out how the visual information takes effect in our model, we focus on two questions here: 1) Is the model capable of using the retrieved image representations as "auxiliary" contexts? What is the effect of disabling such retrieved image representations during inference? To evaluate this, we design an ablation model which set K = 0 and disables image retrieval and fusion during inference. 2) Does the model learn to leverage visual knowledge in the retrieved images? What is the effect of directly augmenting randomly-retrieved image representations during inference? Thus, an ablation model which retrieves random images as augmentations during inference is used for probing. The results of the two ablation models, Randomly-Retrieval and Disable-Retrieval during the inference stage, are listed in the first two rows of Table 6. As we can see, both changes to the image retrieval result in noticeable performance degradation on all evaluation tasks. 
In particular, we find that disabling the image retrieval and augmenting no image during inference also makes a huge difference to the language modeling perplexity on two corpora, which is more related to pure text corpus rather than augmented images. Therefore, it suggests that VALM is able to effectively capture rich semantics from both pretraining sources, i.e. text corpus as well as augmented images. In other words, the improved zero-shot task transferability of VALM relies on visual information from augmented images, which complements the linguistic knowledge learned via text-based self-supervised learning. The results of the Randomly-Retrieval ablation model further illustrate that achieving the capability of integrating visual knowledge cannot be realized by only augmenting unrelated images to language models, while only context-relevant images can make a true difference. VALM proposes a novel visual knowledge fusion layer with a joint context-image attention mechanism as a key component to fuse visual knowledge into the language model. The separate linear projection layers are regarded as important components to map contexts into different embedding spaces for attention keys and values. Therefore, the proposed joint self-attention mechanism naturally holds three variations to generate image keys and values: establish image-specific linear projections, reuse contextual linear projections, and only use specific linear bias for augmented images. We conduct the ablation study to evaluate the effect of these three alternatives on image linear projections. The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance improvement. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency. img in Equation 3 for augmented images. The proposed model VALM is shown in the last row which introduces only image-specific bias and reuses contextual weight in attention key and value projection layers. 4 RELATED WORK Pre-trained Language Models. Pre-trained language models (PLMs) revolutionized NLP research. Enabled by attention mechanism (Bahdanau et al., 2015) and Transformer architecture (Vaswani et al., 2017), the state-of-the-art PLMs, including BERT (Liu et al., 2019), GPT (Radford et al., 2018; 2019), RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020), T5 (Raffel et al., 2020), and OPT (Zhang et al., 2022b), have become the dominant approach in NLP tasks via the paradigm of pre-training on large-scale text corpora and fine-tuning on downstream tasks. With the exponential scaling up of model size, a surprising fact emerged that the PLMs like GPT-3 (Brown et al., 2020) can work as few-shot or zero-shot learners. Vision-Language Models. Vision-language tasks are at the intersection area of both modalities of vision and language, like visual-question answering (Agrawal et al., 2015), and image captioning (Chen et al., 2015). ViL-BERT (Lu et al., 2019) firstly proposed to generate image region features via object detection and then learn joint multi-modal representations via an interacted two-stream model. 
OSCAR (Li et al., 2020) proposed to introduce object tags detected in images as anchor points to solve the issue of high demand for image-text alignments. Another significant pathway for VLMs is to construct a unified embedding space for texts and images and use textual prompts to extract task-specific labels during inference, of which the representative models are CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). Visually-Grounded Language Learning. Visually-grounded language learning is an emerging research topic in vision-language learning, in which the proposed VALM can be categorized in this area with other prior works like Vokenization (Tan & Bansal, 2020), VidLanKD (Tang et al., 2021), and iACE (Lu et al., 2022). Visual information and knowledge can be memorized by the PLMs via fusion layer or concatenated inputs. However, extracting and utilizing the visual information efficiently and effectively is still difficult for uni-modal language models. Vokenization concatenated tokens and token-related images as “vokens", transferring sentence-level caption text to token-level “voken" with a Vokenizer model. 5 CONCLUSION In this paper, we propose a multi-modal framework VALM to enable auto-regressive language modeling to effectively utilize visual knowledge. Specifically, an effective text-to-image retrieval module is designed to construct latent text-image alignments for visually-grounded language modeling. Empowered by pre-training, VALM achieves improved zero-shot task transfer on downstream tasks. Experiments on various visual knowledge-intensive tasks demonstrate the effectiveness of our model over recent vision-language models. VALM also achieves decent improvements over language models on multiple representative natural language understanding tasks. For future work, we plan to adapt the model architecture to encoder-only and encoder-decoder Transformer backbones. We are also interested in more input modalities for VALM. ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for the helpful comments. We appreciate Zewen Chi and Hangbo Bao for the fruitful discussions, and Yaru Hao for helpful suggestions on evaluation benchmarks. A ADDITIONAL RESULTS A.1 TIME-COST EFFECTS OF RETRIEVAL AND IMAGESET SIZE Introducing efficient image retrieval on GPU brings a linear increase in inference time cost (about 2.1 times of text-only GPT-2∗ baseline), shown in Table 7. This cost is negligible with larger-size language models because the model forward cost will increase many times while the retrieval cost will not change with the model size. The retrieval cost can be further improved by searching fewer clusters or decreasing the number of encoding bytes for approximate image keys, with a minor trade-off on the performance. Moreover, efficient nearest neighbor search is an active research area (Guo et al., 2020) and we could try other efficient search tools to accelerate the retrieval. As the introduced retrieval time cost is proportional to the size of imageset for dense retrieval, we provide more details on the relationship between retrieval time cost and imageset size, presented in Table 7. Concluded from Table 7, there is no significant performance decrease with the smaller imageset size from the original 200M down to 10M. As the 10M set is still large and sufficient for providing enough visual knowledge, we will consider deploying a 10M size imageset to train VALM for potential real-time industry applications. 
A.2 COMPARISONS WITH ADDITIONAL STRONG BASELINES We compare VALM with Vokenization (Tan & Bansal, 2020) on four visual-knowledge-intensive tasks, and the results are shown in Table 8. In addition, we evaluate the performance of large language models on the visual–knowledge-intensive tasks for stronger and more fair comparisons. We evaluate the OPT (1.3B parameters) (Zhang et al., 2022b) model on these visual–knowledgeintensive tasks and the results are presented in Table 8. VALM(124M parameters) significantly outperforms the OPT-1.3B on four datasets, which further demonstrates the challenge of solving those visual-knowledge-intensive tasks and the effectiveness of our method. A.3 SCALING EFFECT OF VALM We train the 355M model (GPT-2 Medium Size) of VALM (k=8) to evaluate the effects of scaling up model parameters. The results are presented in Table 9 and the model performance is significantly improved on four visual knowledge-intensive datasets. We will seek more computation resources to train large size VALM models. A.4 ABLATION STUDY OF K We further conduct another ablation study by setting the number of augmented images K = 1 for VALM, which is very similar to the CLIP (Radford et al., 2021) inference. The results are presented in Table 10. VALM (k=1) significantly outperforms CLIP in all visual-knowledge-intensive tasks, validating the effectiveness of our method. A.5 CASE STUDIES We provide a case study in the object color reasoning task for VALM. In order to reason the correct commonsense color of objects sky and parsley, VALM takes the input combination of the prompt and the object as “the color of [object] is”. Then we present the retrieval results of top-4 corresponding images to the textual query in Figure 2. A.6 COLORIZATION EFFECT We conduct another interesting ablation case study to evaluate the effect of image color changes in the object color reasoning task. Specifically, VALM predicts the color label of an apple as red based on the commonsense in both contexts and retrieved images. The original prediction probability distribution is presented in Blue Bars of Figure 3(b). Then we replace the retrieved images with K unusual images of green apples in OBJECTCOLORIZATION dataset (Anwar et al., 2020), shown in Figure 3(a). The predicted probability distribution for 11 color types given replaced colorization objects is presented in Orange Bars of Figure 3(b). We could observe a clear probability increase in the color type of green and a decrease in that of red, which is confronted with the colorization process. This ablation study demonstrates VALM is capable of extracting useful visual knowledge from retrieved object images and inferring correct semantics based on that. Given retrieved object images in different colors, VALM could extract the correct color knowledge and adopt it in its semantic inference stage. B EXPERIMENTAL DETAILS B.1 PRE-TRAINING HYPERPARAMETERS AND TRAINING DETAILS The implementation of models and all experiments are based on the fairseq (Ott et al., 2019) toolkit. The proposed model deploys transformer decoder architecture with 124M trainable parameters, in which nlayer = 12, nhead = 12, dembed = 768. We deploy Adam (Kingma & Ba, 2015) (β1 = 0.9, β2 = 0.98) optimizer and train all models with lr = 0.0005, twarmup = 4000, dropout = 0.1, bsz = 128, len = 512. The layer normalization over the retrieved image keys is initialized with ϵ of 0.00001. 
VALM reuses the lower-cased byte pair encoding (BPE) (Sennrich et al., 2016) vocabulary of the CLIP text encoder, with a vocabulary size of 49,152. The proposed VALM and the re-implemented GPT-2∗ are trained for 500k steps on 16 Nvidia Tesla V100-SXM2-32GB GPUs. The encoded 200M-image knowledge base takes up 274 GiB of disk storage, and the trained faiss approximate retrieval index takes another 14 GiB.
B.2 PROBE TEMPLATES
We present all zero-shot query prompts and labels for the 4 object commonsense reasoning datasets and 4 natural language understanding benchmarks in Table 11.
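As an illustration of how such templates drive zero-shot prediction, here is a minimal sketch. The HuggingFace-style `model`/`tokenizer` interface, the single-token color assumption, and the exact template are assumptions for illustration; the actual prompts are listed in Table 11:

```python
import torch

COLORS = ["red", "orange", "yellow", "green", "blue", "purple",
          "pink", "brown", "black", "white", "grey"]  # the 11 color labels

@torch.no_grad()
def predict_color(model, tokenizer, obj):
    """Score each candidate label as the next token after the probe
    template and return the argmax (greedy zero-shot classification)."""
    prompt = f"the color of {obj} is"            # one template from Table 11
    ids = tokenizer.encode(prompt, return_tensors="pt")
    logits = model(ids).logits[0, -1]            # next-token distribution
    # Assumes each color maps to a single leading-space token.
    label_ids = [tokenizer.encode(" " + c)[0] for c in COLORS]
    return COLORS[torch.argmax(logits[label_ids]).item()]
```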
1. What is the main contribution of the paper, and how does it address the problem of injecting visual information into language models?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and computational efficiency?
3. Are there any concerns or questions regarding the experimental design, such as the choice of dataset, image quality, and the size of the database?
4. Are there any unclear illustrations or explanations in the paper that need further clarification?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
To augment language models with relevant visual information, a visually-augmented language model (VALM) is proposed in this paper. The core idea is to retrieve relevant images with a CLIP model and then fuse them into the second-to-last Transformer layer of the PLM.
Strengths And Weaknesses
Strengths:
- The research problem of how to inject visual information into language modeling for visually-demanding tasks is interesting.
- The method is intuitive and novel to a certain degree.
- On many datasets, the proposed model outperforms the compared methods/baselines.
Weaknesses:
- Indexing images, retrieving images, and fusing image features into the PLM all take extra computation and time compared to the original PLM, especially retrieval, I think. An analysis of the additional training and inference time brought by the proposed method should be added.
- Lack of experimental comparison against recent visually-augmented language model works, for example, Vokenization and iACE.
- The quality of the database images matters for retrieval and downstream task performance, but this ablation/analysis is missing.
- I mainly have two questions: (1) Does the dataset have to be an image-text dataset, such as ImageNet? Theoretically, it can also be an image-only dataset since CLIP is already trained. (2) Does the size of the database matter? What if we only take 100M, 10M, or even 1M images from LAION?
- Some unclear illustrations: (1) In Sec 2.3, is z_ij the image feature output from the same CLIP image encoder used in retrieval? (2) In the third-to-last sentence of Sec 3.4, shouldn't it be "adopting image-specific bias outperforms directly sharing the bias"?
Clarity, Quality, Novelty And Reproducibility
Clarity: Good, but some details are missing. Quality: Good. Novelty: Good. Reproducibility: Since it requires a retrieval system over large-scale image datasets, reproducing it might take more effort.
ICLR
Title
Visually-Augmented Language Modeling
Abstract
Human language is grounded in multimodal knowledge including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a novel latent text-image alignment method via an image retrieval module to fetch corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both text context and visual knowledge in images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning about object commonsense including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
1 INTRODUCTION
Large-scale pre-trained language models (PLMs) have achieved great success in advancing the state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). PLM self-supervised training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g. GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of the pre-training corpus. With billions of parameters, these tremendous PLMs exhibit incredible ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), text chunks (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. The incorporation of various knowledge resources into PLMs mitigates the drawbacks of local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks. Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, which refers to inconsistent or false statements generated by PLMs with respect to world knowledge (Logan et al., 2019). For instance, a PLM may predict the color of the sky as red only due to the statistical contextual correlations between the tokens “color” and “red” in the pre-training corpus, neglecting the commonsense facts. In this paper, we propose a novel framework to enable language model pre-training to take full advantage of both local text context and corresponding visual knowledge. Recent work on joint vision-language model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and image, e.g.
supervised image captioning data, which limits cross-modality fusion during fine-tuning/inference over text without accompanying images. As a consequence, later in our experiments (section 3), those prominent VLMs are found to achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image alignment mechanism via an image retrieval module that gathers related images for each token as visual augmentation. To achieve better language-vision grounding, we propose a visual knowledge fusion layer to enable joint attention across the visually-augmented context, including both textual tokens and retrieved images. Based on this, we build up a Visually-augmented Language Model, VALM, with flexible on-the-fly visual knowledge enhancement. We evaluate the effectiveness of the proposed VALM on various commonsense reasoning and language-only benchmarks. Experimental results demonstrate that our model consistently outperforms the unimodal and multimodal baselines in terms of object commonsense reasoning. Remarkably, our method substantially improves accuracy by +14.50%, +17.80%, and +11.68% on the MEMORYCOLOR, RELATIVESIZE and OBJECTSHAPE datasets, respectively. Additional experiments on natural language understanding tasks also validate that the proposed visually-augmented language modeling framework can help improve the fundamental natural language understanding capability of PLMs. Our contributions are summarized as follows:
• We propose a novel visually-augmented causal language model, VALM, to enable the language model to utilize visual knowledge flexibly and effectively. Through the proposed visual knowledge fused language modeling, VALM is capable of accomplishing tasks with a high demand for cross-modality knowledge, such as visual commonsense reasoning.
• We design a framework to construct flexible on-the-fly text-image alignments and fuse the augmented images into the context of language modeling. We implement an image retrieval module that queries a token-level representation against a large-scale cached image database and retrieves its nearest neighbors as the augmentation. With the proposed visual knowledge fusion layer, VALM can effectively take full advantage of both language information from the local text context and visual information from the retrieved images.
• Experimental results demonstrate that VALM effectively alleviates the hallucination problem of PLMs by introducing visual knowledge into language model pre-training. VALM achieves significant performance improvements in inferring commonsense object properties.
2 METHODS
We propose a novel multi-modal pre-trained language model augmented with retrieved images, named VALM. The architecture of VALM is presented in Figure 1. VALM augments each token in the pre-training text corpus with k retrieved related images. VALM uses an image retrieval module to retrieve corresponding images for each token. The image retrieval module deploys a pre-trained CLIP model, which is capable of unifying the textual query and image candidates into a joint embedding space. VALM constructs a cached large-scale image knowledge base using the image encoder of CLIP, and uses the contextual representation of each token as a textual query to search for its nearest neighbors in the image knowledge base. With the help of the unified text and image embedding space provided by CLIP, the image nearest neighbors are taken as the augmented images of each token to construct text-image alignments.
We then propose a visual-knowledge fusion layer to enable the learned hidden states to attend to both texts and augmented images.
2.1 VALM: VISUALLY-AUGMENTED LANGUAGE MODELING
Given an input text sequence $\{x_i\}_{i=1}^N$, the embedding layer first encodes it into the embedding space and outputs the initial hidden states $H^0$ to the successive Transformer decoder layers. The proposed VALM model then encodes $H^0$ into visual-knowledge-fused contextual representations at different levels, $H = \{H^l\}_{l=1}^L$, via $L-1$ Transformer decoder layers and one special visual knowledge fusion layer. Each Transformer decoder layer is identical to Vaswani et al. (2017) and outputs the contextual representations at different semantic levels given the representation from the previous layer, $H^l = \mathrm{Layer}^l(H^{l-1}),\ l \in [1, L]$. The visual knowledge fusion layer is proposed as a variant of the Transformer decoder layer to incorporate visual knowledge into contextual learning via joint attention on both text contexts and augmented images. The visual knowledge fusion layer is injected as the second-to-last layer of VALM. The visual knowledge is stored in the corresponding augmented image representations, obtained from the image retrieval module, $\{z_{ij}\}_{j=1}^K = f_{\text{rt}}(x_i)$. The visual knowledge fusion layer then takes as input both the contextual representations of the previous layer and the augmented image sets, and outputs visual-knowledge-fused contextual representations, $H^{L-1} = \mathrm{VisualLayer}(\{H^{L-2}_i, \{z_{ij}\}_{j=1}^K\}_{i=1}^N)$. Finally, the output contextual representations are passed into the output projection layer, and a softmax function computes the token probability, $P(x_i \mid x_1, \cdots, x_{i-1}) = \mathrm{softmax}(W H^L + b)$. We conduct generative unsupervised pre-training (Radford et al., 2019) for VALM on a large-scale text corpus. The training objective of VALM is the standard left-to-right language modeling objective, which maximizes the likelihood of the next token given the left context:
$$\max \sum_{x \in D} \sum_{i=1}^{|x|} \log P(x_i \mid x_1, \cdots, x_{i-1}), \quad (1)$$
where $x$ represents a sentence randomly sampled from the large-scale pre-training text corpus $D$.
2.2 IMAGE RETRIEVAL
The visual knowledge corresponding to a specific token is stored in its correlated images. Therefore, to prepare the fused visual knowledge, VALM deploys an image retrieval module, denoted as $f_{\text{rt}}(\cdot)$, to retrieve augmented images. To achieve multi-modal text-image retrieval, it is of great importance to build a discriminator that assesses the correlation of every image in an extremely large-scale open image knowledge base to a specific text representation. CLIP (Radford et al., 2021) proposed a simple yet effective method to connect images and texts in a unified multi-modal embedding space. We directly deploy the pre-trained CLIP model to encode the images and texts to enable nearest-neighbor text-image retrieval. Specifically, the pre-trained CLIP model we use in the image retrieval module includes a ResNet-50x16 (He et al., 2016) model as the image encoder and a Transformer (Vaswani et al., 2017) model as the text encoder. Here, we only use the CLIP model as the backbone of our image retrieval module, and the CLIP parameters are not updated during the pre-training process of VALM.
Image Knowledge Base Creation. The image knowledge base of the retrieval module is the cache of a set of image keys, which are the high-level visual representations of images.
Given an image $z \in D_{\text{img}}$, such a visual representation can be obtained by forwarding image $z$ through the pre-trained CLIP image encoder. The whole image knowledge base $Z$ is then constructed by taking the output hidden states $f_{\theta_I}(z)$ as image keys: $Z = \bigcup_{z \in D_{\text{img}}} \{f_{\theta_I}(z)\}$, where $\theta_I$ represents the image encoder parameters.
Textual Query. We take the contextual representation of each token as the query in the nearest neighbor search. For each sentence $x \in D$, the contextual representation of the $i$-th token is computed as $f_{\theta_T}(x_{<i})$, where $\theta_T$ represents the text encoder parameters. As the input sequence length of VALM generally exceeds the 75-token input limit of the CLIP text encoder, the long context $x_{<i}$ is cut into a context chunk $y_i$ to fit in the CLIP text encoder:
$$y_i = \begin{cases} x_{[t,\, i-1]}, & i - t < 75, \\ x_{[i-75,\, i-1]}, & i - t \geq 75, \end{cases}$$
where $t$ is the index of the closest stop character before the $i$-th token. The textual query for the $i$-th token is then computed as its context-chunk representation, $f_{\theta_T}(y_i)$.
kNN Text-Image Retrieval. The retrieval module uses the contextual representation to search the cached image knowledge base $Z$ and retrieves the $k$ nearest neighbor image keys w.r.t. dot-product distance. As the pre-trained CLIP model has learned a joint embedding space for the text and image domains, the retrieved images $\{z_{ij}\}_{j=1}^K$ are regarded as the top-$k$ relevant images to the query.
2.3 VISUAL KNOWLEDGE FUSION
With the help of the image retrieval module, each token in the pre-training corpus is augmented with $k$ corresponding images, and these augmented images are represented in the same joint embedding space as the texts. The augmented image representations are then directly treated as auxiliary “context” in the learning process. As the conventional Transformer decoder layer uses multi-head self-attention (Vaswani et al., 2017) to learn contextual representations, we extend it to a joint-attention mechanism and propose a novel visual knowledge fusion layer to enable each token to attend to both contexts and retrieved images jointly. In addition, due to the inconsistency in magnitude and distribution between the contextual hidden states and the retrieved image representations, we apply Layer Normalization (Ba et al., 2016) on the $K$ retrieved image representations to alleviate this inconsistency, denoted as $\mathrm{LN}_{\text{img}}$. Assume that the hidden state output for the $i$-th token is $h_i$ and the corresponding retrieved images are $\{z_{ij}\}_{j=1}^K$; the hidden state $H^{L-1}_i$ is computed as:
$$Q = H^{L-2}W^Q + b^Q,\quad K = H^{L-2}W^K + b^K,\quad V = H^{L-2}W^V + b^V, \quad (2)$$
$$\dot{k}_{ik} = \mathrm{LN}_{\text{img}}(z_{ik})W^K + b^K_{\text{img}},\quad \dot{v}_{ik} = \mathrm{LN}_{\text{img}}(z_{ik})W^V + b^V_{\text{img}}, \quad (3)$$
$$e_i = \frac{Q_i K^T}{\sqrt{d}},\quad a_i = \frac{\exp(e_i)}{\sum_{j=1}^{L}\exp(e_{ij}) + \sum_{k=1}^{K}\exp(e_{ik})}, \quad (4)$$
$$e_{ik} = \frac{Q_i \dot{k}_{ik}^T}{\sqrt{d}},\quad a_{ik} = \frac{\exp(e_{ik})}{\sum_{j=1}^{L}\exp(e_{ij}) + \sum_{k=1}^{K}\exp(e_{ik})}, \quad (5)$$
$$H^{L-1}_i = a_i V + \sum_{k} a_{ik}\dot{v}_{ik}, \quad (6)$$
where $Q_i, \dot{k}_{ik}, \dot{v}_{ik} \in \mathbb{R}^{E}$, $K, V \in \mathbb{R}^{|x| \times E}$, and $e_i, a_i \in \mathbb{R}^{|x|}$. The hidden state output from the previous layer, $H^{L-2}$, is linearly projected into contextual queries, keys, and values $Q, K, V$ separately. $K$ is the number of retrieved images for each token, and $E$ is the embedding dimension for both context and image representations. In order to generate image-specific attention keys and values, we adopt image-specific biases $b^K_{\text{img}}, b^V_{\text{img}}$ in the linear projections and reuse the contextual projection weights $W^K, W^V$.
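A minimal single-head sketch of the joint attention in Eqs. (2)–(6), written in PyTorch. The parameter tensors are assumed to be given, the causal mask is omitted for brevity, and this is an illustration rather than the authors' implementation:

```python
import math
import torch

def fused_attention(h_prev, z_imgs, Wq, bq, Wk, bk, Wv, bv,
                    bk_img, bv_img, ln_img):
    """Joint context-image attention (Eqs. 2-6), single head.
    h_prev: (N, E)    hidden states H^{L-2} for N tokens
    z_imgs: (N, K, E) K retrieved image keys per token (CLIP space)
    ln_img: LayerNorm over image representations (LN_img in Eq. 3)
    """
    N, E = h_prev.shape
    Q = h_prev @ Wq + bq                      # (N, E)
    K = h_prev @ Wk + bk                      # (N, E)
    V = h_prev @ Wv + bv                      # (N, E)
    z = ln_img(z_imgs)
    # Image keys/values reuse contextual weights with image-specific biases.
    k_img = z @ Wk + bk_img                   # (N, K, E)
    v_img = z @ Wv + bv_img                   # (N, K, E)
    e_ctx = Q @ K.T / math.sqrt(E)            # (N, N) token-token scores
    e_img = torch.einsum("ne,nke->nk", Q, k_img) / math.sqrt(E)  # (N, K)
    # One softmax jointly normalizes over contexts and images (Eqs. 4-5).
    a = torch.softmax(torch.cat([e_ctx, e_img], dim=-1), dim=-1)
    a_ctx, a_img = a[:, :N], a[:, N:]
    return a_ctx @ V + torch.einsum("nk,nke->ne", a_img, v_img)  # Eq. 6
```

Note how the image keys and values are per-query-token (one set of K images per token), which is what distinguishes this from ordinary self-attention.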
Moreover, it is vital to mention that the image-specific attention keys and values are distinct for each query token, which is highly different from self-attention, where the contextual keys and values are kept the same for every token. A secondary subscript $k$ is used to denote the different image representations for the $i$-th token.
3 EXPERIMENTS
3.1 PRETRAINING SETUP
Text Corpus. We use the English corpus of CC-100 (Conneau et al., 2020) as the pre-training text corpus for both VALM and the baseline GPT-2∗. The CC-100 corpus is one of the largest high-quality web-crawl text datasets. The English monolingual dataset of CC-100 contains about 55 billion tokens, stored in 301 GiB of disk storage. Due to the limitation of computing resources, we only consume 15% of the CC-100 English monolingual corpus for pre-training VALM and the baseline GPT-2∗.
Image Data. We use the LAION Open Image Dataset (Schuhmann et al., 2021) as the image knowledge base for dense retrieval. To the best of our knowledge, the LAION Open Dataset is one of the world’s largest openly available image-text-pair datasets, with 400 million samples. Due to the disk space limitation, we randomly select half of the LAION images for the dense text-image retrieval, 200M images in total.
Pre-training Hyperparameters. The proposed model deploys a Transformer decoder architecture with 124M trainable parameters. The hyperparameter settings and training details are presented in Appendix B.1.
Retrieval Module. For the implementation of the dense text-image retrieval module, we use the faiss (Johnson et al., 2021) toolkit to construct an efficient index. The faiss index contains the whole set of 200M image keys and provides efficient nearest neighbor search. For efficiency, we quantize all image keys to 32 bytes. The faiss index stores image keys in clusters to speed up the search, which requires an additional training process to learn the cluster centroids. We use 10M keys for learning 131k cluster centroids and search 32 clusters to find the nearest neighbors during inference. We load the faiss index onto the GPU to achieve efficient dense text-image retrieval (a minimal retrieval sketch is shown below).
3.2 VISUAL KNOWLEDGE INTENSIVE TASKS
The visual information stored in the retrieved images can play a useful role in providing relevant visual knowledge to help language models perform better grounded commonsense reasoning. Such helpful visual information can be colors, positions, sizes, spatial relations, etc. The task of object commonsense reasoning requires language models to predict the correct visual property of a given object. Excelling at these tasks typically requires models to capture and utilize intensive visual knowledge without any explicit text demonstrations or external knowledge bases. Due to reporting biases, such descriptive text about object properties rarely appears in text corpora, likely making this type of knowledge absent from language models. Thus, these visual-knowledge-intensive tasks are likely challenging for both language models and vision-language models. We first compare VALM and recent baselines on four object commonsense reasoning datasets: MEMORYCOLOR (Norlund et al., 2021), COLORTERMS (Bruni et al., 2012), OBJECTSHAPE (Zhang et al., 2022a), and RELATIVESIZE (Bagherinezhad et al., 2016). In addition, we use a physical interaction question answering dataset (PIQA) (Bisk et al., 2020) to evaluate whether such visual commonsense knowledge can be implicitly encoded and utilized in the question answering process.
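Returning to the retrieval module of Section 3.1, a minimal sketch of the kNN text-image search. The OpenAI `clip` package interface is an assumption here, and `index` stands for the trained faiss index described above:

```python
import faiss
import torch
import clip  # OpenAI CLIP package; assumed available

def retrieve_images(context_text, index, clip_model, k=8, device="cuda"):
    """kNN text-image retrieval: encode the context chunk with the CLIP
    text encoder, then search the cached image-key index (Section 3.1)."""
    tokens = clip.tokenize([context_text], truncate=True).to(device)
    with torch.no_grad():
        q = clip_model.encode_text(tokens).float().cpu().numpy()
    # Inner-product search matches the paper's dot-product kNN criterion.
    scores, ids = index.search(q, k)
    return ids[0]  # indices of the k nearest image keys
```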
In Table 1, we provide examples for the different visual commonsense reasoning tasks (its columns list the task, an example prompt, the object or pair, and the answer).
MEMORYCOLOR and COLORTERMS Datasets. The memory color of a concrete object is the typical color the object appears in; e.g., the color of a banana is mostly memorized as yellow. Norlund et al. (2021) proposed this dataset for evaluating visual knowledge transfer in multi-modal language models. The dataset contains 109 objects paired with their memory color labels, an illustrating picture, and a descriptor. The COLORTERMS dataset also contains a list of common items manually labeled with their commonsense colors. Both datasets use a set of 11 color labels.
OBJECTSHAPE Dataset. Zhang et al. (2022a) proposed a visual commonsense dataset with a set of object attributes like shape. The object shape dataset contains 140 objects with their shape labels and consists of 12 shape categories.
RELATIVESIZE Dataset. Bagherinezhad et al. (2016) proposed the RELATIVESIZE dataset, which includes a total of 486 object pairs between 41 physical objects. The task of object size reasoning requires the model to predict the size relation between two given objects, e.g., an ant is smaller than an elephant. Size information is again rarely included and described in text, while it is much easier to capture from images. We convert the size relation reasoning task into a binary question-answering form with "Yes"/"No" answers.
PHYSICAL INTERACTION QUESTION ANSWERING. Physical Interaction Question Answering (PIQA) was proposed to investigate the physical commonsense knowledge of existing language models (Bisk et al., 2020). Completing such question answering tasks requires the language model to effectively utilize physical commonsense knowledge, i.e. knowledge of basic object properties (flexibility, curvature, and porosity). Language models are supposed to first perceive the objects and then encode such physical knowledge into the language modeling process. Each data sample in PIQA contains one goal and two solutions, and the model is supposed to predict the more reasonable and appropriate of the two candidate solutions.
Evaluation Setting. We evaluate VALM and all baseline methods in a zero-shot manner without any task-specific tuning. Specifically, VALM takes an input consisting of a textual prompt and an object during inference and predicts the property label as the last token. The prompts used in evaluating object color, shape, and size reasoning are listed in Appendix Table 11. We use top-1 accuracy as the evaluation metric and compute the average accuracy over all listed prompts to increase evaluation robustness. For PIQA, we follow Shwartz et al. (2020) and use the cross-entropy loss as the scorer for each candidate solution, $\mathrm{score}(s_{ij}) = \mathrm{CE}([g_i, s_{ij}]),\ j \in \{0, 1\}$. The solution with the lower score is then selected as the prediction, and classification accuracy is used as the evaluation metric.
Baselines. We consider both pre-trained language-only and vision-language models as baselines. In particular, three strong language models are considered for comparison with VALM: 1) GPT-2∗ (Radford et al., 2019); 2) BERT (Devlin et al., 2019); and 3) CaptionBERT (Zhang et al., 2022a), an auto-encoding language model pre-trained on Oscar’s (Li et al., 2020) caption-based text data. Here, GPT-2∗ is re-implemented and trained from scratch using the identical training data, hyper-parameter settings, and model size as the proposed VALM.
Additionally, we compare VALM with prominent vision-language models: 1) OSCAR (Li et al., 2020), a pre-trained vision-language model with learned representations that capture channel-invariant factors (i.e. object tags) at the semantic level; 2) VisualBERT (Li et al., 2019), a vision-language model with learned joint contextualized representations across vision and language; 3) CLIP (Radford et al., 2021), a vision-language system with an image encoder and a text encoder that map into the same cross-modal embedding space. We directly use OSCAR and VisualBERT as auto-encoding language models for the zero-shot evaluations. For CLIP, we first retrieve the corresponding image using the concatenation of the query prompt and the given object. Then, the dot-product similarity between the retrieved image vector and each candidate-aware text vector (comprising the query prompt, the given object, and one candidate label) is used for ranking, and the top-ranked candidate label is taken as the prediction.
Results. The main results on the four object commonsense reasoning datasets are summarized in Table 2. The two variants of VALM (K = 4, 8) significantly outperform all considered language models and vision-language models on the object color and shape reasoning datasets, with improvements of +14.50%, +13.56%, and +11.68% on MEMORYCOLOR, COLORTERMS, and OBJECTSHAPE, respectively. Moreover, VALM with K = 4 achieves an encouraging result with a +17.80% accuracy gain over the best baseline, VisualBERT, on RELATIVESIZE. The substantial improvements on these datasets demonstrate that VALM takes full advantage of visual knowledge (object visual properties) to complete the corresponding visual commonsense reasoning. Surprisingly, the zero-shot accuracy of all auto-encoding language models and vision-language models is below 40% on the object color and shape reasoning datasets. Although pre-trained with aligned text-image pairs, those vision-language models cannot effectively utilize relevant visual knowledge from their jointly contextualized vision-language representations. Among language models, the auto-regressive PLMs significantly outperform the auto-encoding PLMs, suggesting that auto-regressive PLMs are likely better zero-shot reasoners. We also observe that retrieving more images per token results in a performance drop for object size and shape reasoning. We attribute this degradation to the increased noise brought by augmenting with more images, which causes model confusion when differentiating relevant visual information from irrelevant information. PIQA is a more challenging task that requires the model to reason about useful implicit object properties and utilize this commonsense knowledge in the question answering process. The results on PIQA are presented in Table 3. As shown, VALM outperforms all baseline language models with a +2.11% accuracy improvement. The two variants of VALM achieve almost identical performance because the selection of the correct solution is based on language modeling perplexity, indicating that the two variants have similar language modeling capability.
3.3 NATURAL LANGUAGE UNDERSTANDING AND LANGUAGE MODELING TASKS
The causal language modeling pre-training task enables PLMs to naturally perform natural language understanding (NLU) and long-text modeling. Therefore, zero-shot natural language understanding and language modeling performance are widely adopted to evaluate the capability of PLMs (Radford et al., 2019).
Here, we evaluate VALM and the most relevant language model baseline, GPT-2∗, on four NLU datasets: SST-2 (Socher et al., 2013), MPQA (Wiebe et al., 2005), DBpedia (Auer et al., 2007), and AGNews (Zhang et al., 2015). Prediction accuracy is used as the evaluation metric. In addition, following Radford et al. (2019), the Wikitext-103 (Merity et al., 2017) and Lambada (Paperno et al., 2016) corpora are used to study language modeling performance in a zero-shot manner. We report perplexity on both corpora and also report last-word prediction accuracy on Lambada. The results on natural language understanding are summarized in Table 4. It is easy to see that VALM achieves decent improvements on all four NLU tasks, indicating that the cross-modality knowledge learned by our model is likely helpful for typical natural language understanding. Thus, our visually-augmented language modeling framework can be further explored to enhance the natural language understanding ability of PLMs. Table 5 illustrates the results on the language modeling tasks. Again, VALM slightly improves the perplexity on both datasets, +0.68 on Wikitext-103 and +0.08 on Lambada; a similar trend is observed for last-word prediction accuracy on Lambada. Different from the visual-knowledge-intensive commonsense reasoning tasks (subsection 3.2), we find that VALM models with different numbers of retrieved images (K = 8 vs. K = 4) perform similarly on the intrinsic language modeling task, suggesting that VALM can effectively ignore irrelevant visual information when the task is unlikely to benefit from visual knowledge. In other words, visual commonsense reasoning tasks require a fine-grained fusion of text and image, i.e. locating the text object in the image set, extracting the relevant visual information, and verbalizing the reasoning output. In contrast, a certain portion of text from general language modeling corpora is probably not visually related, so only a coarse-grained fusion is needed (e.g. deciding whether the image set is useful at all), making the language modeling evaluation less affected by the retrieval noise from the augmented images.
3.4 ABLATION STUDIES
So far, we have empirically verified the effectiveness and superiority of VALM in utilizing visual knowledge for both visual-knowledge-intensive tasks and traditional language understanding and modeling. To figure out how the visual information takes effect in our model, we focus on two questions here: 1) Is the model capable of using the retrieved image representations as “auxiliary” contexts? What is the effect of disabling such retrieved image representations during inference? To evaluate this, we design an ablation model which sets K = 0 and disables image retrieval and fusion during inference. 2) Does the model learn to leverage the visual knowledge in the retrieved images? What is the effect of augmenting with randomly-retrieved image representations during inference? Thus, an ablation model which retrieves random images as augmentations during inference is used for probing. The results of the two ablation models, Randomly-Retrieval and Disable-Retrieval during the inference stage, are listed in the first two rows of Table 6. As we can see, both changes to the image retrieval result in noticeable performance degradation on all evaluation tasks.
In particular, we find that disabling image retrieval and augmenting with no images during inference also makes a large difference to the language modeling perplexity on the two corpora, even though that evaluation depends more on the pure text corpus than on the augmented images. This suggests that VALM effectively captures rich semantics from both pre-training sources, i.e. the text corpus as well as the augmented images. In other words, the improved zero-shot task transferability of VALM relies on visual information from the augmented images, which complements the linguistic knowledge learned via text-based self-supervised learning. The results of the Randomly-Retrieval ablation model further illustrate that the capability to integrate visual knowledge cannot be achieved by merely augmenting language models with unrelated images; only context-relevant images make a true difference. VALM proposes a novel visual knowledge fusion layer with a joint context-image attention mechanism as the key component for fusing visual knowledge into the language model. The separate linear projection layers are regarded as important components that map contexts into different embedding spaces for attention keys and values. Therefore, the proposed joint self-attention mechanism naturally admits three variants for generating image keys and values: establishing image-specific linear projections, reusing the contextual linear projections, or only using an image-specific linear bias for the augmented images. We conduct an ablation study to evaluate the effect of these three alternatives. The results in Table 6 demonstrate that adopting the image-specific projection bias outperforms directly sharing the contextual projection bias, while introducing additional image-specific linear projection weights does not lead to further performance improvement. Thus, for implementation convenience and parameter efficiency, we take the strategy of only adding the image-specific linear biases $b^K_{\text{img}}, b^V_{\text{img}}$ from Equation 3 for the augmented images, reusing the contextual linear weights when generating the visual attention keys and values. The proposed VALM is shown in the last row of Table 6, which introduces only the image-specific bias and reuses the contextual weights in the attention key and value projection layers.
4 RELATED WORK
Pre-trained Language Models. Pre-trained language models (PLMs) revolutionized NLP research. Enabled by the attention mechanism (Bahdanau et al., 2015) and the Transformer architecture (Vaswani et al., 2017), the state-of-the-art PLMs, including BERT (Devlin et al., 2019), GPT (Radford et al., 2018; 2019), RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020), T5 (Raffel et al., 2020), and OPT (Zhang et al., 2022b), have become the dominant approach in NLP tasks via the paradigm of pre-training on large-scale text corpora and fine-tuning on downstream tasks. With the exponential scaling up of model size, a surprising fact emerged: PLMs like GPT-3 (Brown et al., 2020) can work as few-shot or zero-shot learners.
Vision-Language Models. Vision-language tasks lie at the intersection of the vision and language modalities, e.g. visual question answering (Agrawal et al., 2015) and image captioning (Chen et al., 2015). ViLBERT (Lu et al., 2019) first proposed to generate image region features via object detection and then learn joint multi-modal representations via an interacting two-stream model.
1. What is the main contribution of the paper regarding visual-augmented language modeling?
2. What are the strengths and weaknesses of the proposed architecture and evaluation methodology?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the comparison with other models and the training dataset used in the study?
5. Would reducing the number of images in the auxiliary image database affect performance, and how would it impact real-world applications?
6. How would scaling up the model size influence the results, specifically using larger models like 350M?
Summary Of The Paper
This work proposes a novel architecture for visually-augmented language modeling. Before each next-word prediction, the architecture queries the most relevant images w.r.t. the currently received part of the sentence using a pretrained CLIP model and a large image database. The authors trained this model on a large text corpus and showed that it outperforms other models on multiple vision-language benchmarks, including MemoryColor, ObjectShape, RelativeSize, and ColorTerms, by noticeable margins. The authors further present ablation studies showing that the image-retrieval module is helpful.
Strengths And Weaknesses
Strengths:
- This work proposes a new architecture for augmenting language modeling with auxiliary visual cues.
- The authors tested multiple evaluation datasets for their proposed and other candidate models. The improvement on these datasets compared to the other models tested in this work is significant.
- The authors also show ablation studies indicating that the proposed image retrieval module helps performance.
Weaknesses:
- The biggest worry I have is about the evaluation in this work, which is critical to resolve before I can fully back the acceptance of the paper. The pure language model tested in this work is a GPT-2 retrained by the authors on the same text corpus, and the other visually-augmented models are all pretrained models from other papers. None of these comparisons can be perfectly fair on the training datasets. Moreover, it seems that the model size is not really controlled either, as this model uses a pretrained CLIP model during its training and is augmented by a large image database. If the training dataset can never be perfectly controlled, why not try models trained on much larger text corpora, such as the pretrained OPT models?
- Besides this, the numbers on these benchmarks also seem to be lower than the numbers I can find in other papers. For example, in the paper “Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?” (about the MemoryColor dataset), Table 4 shows numbers much higher than those reported in this paper. Can the authors explain this difference?
- Another minor issue about the evaluation: what would be the current upper limit on these evaluation datasets from pure language models? Have people tested the largest pretrained models on these datasets? Are these problems really hard for the models to resolve?
- To help correctly evaluate the innovation of this work, can the authors also comment on how different the fusion layer proposed here is from the fusion layer in the Google Flamingo paper? This is more of a question than an issue.
- It would be great to see how reducing the number of images in the auxiliary image database influences performance. I would imagine that having these images during real-time inference makes inference very slow, for which I cannot find any time estimate in the paper. So this could be an issue for real-world applications. To be clear, I don’t think this issue needs to be addressed in this work right now, but I want to get a sense of how annoying it is.
- Finally, it would be good to know how scaling influences the results. Will larger models, like 350M, improve performance?
Clarity, Quality, Novelty And Reproducibility
The writing is clear. There is some novelty, but more explanations are needed to separate it from other publications.
The paper has enough details to reproduce the results, though I think reproduction would require substantial computation resources and time. So it would be great to have the pretrained models also released by the authors.
ICLR
Title
Partitioned Learned Bloom Filters
Abstract
Bloom filters are space-efficient probabilistic data structures that are used to test whether an element is a member of a set, and may return false positives. Recently, variations referred to as learned Bloom filters were developed that can provide improved performance in terms of the rate of false positives, by using a learned model for the represented set. However, previous methods for learned Bloom filters do not take full advantage of the learned model. Here we show how to frame the problem of optimal model utilization as an optimization problem, and using our framework derive algorithms that can achieve near-optimal performance in many cases. Experimental results from both simulated and real-world datasets show significant performance improvements from our optimization approach over both the original learned Bloom filter constructions and previously proposed heuristic improvements.
1 INTRODUCTION
Bloom filters are space-efficient probabilistic data structures that are used to test whether an element is a member of a set [Bloom (1970)]. A Bloom filter compresses a given set S into an array of bits. A Bloom filter may allow false positives, but will not give false negative matches, which makes it suitable for numerous memory-constrained applications in networks, databases, and other systems areas. Indeed, there are many thousands of papers describing applications of Bloom filters [Dayan et al. (2018), Dillinger & Manolios (2004), Broder & Mitzenmacher (2003)]. There exists a trade-off between the false positive rate and the size of a Bloom filter (a smaller false positive rate leads to a larger Bloom filter). For a given false positive rate, there are known theoretical lower bounds on the space used [Pagh et al. (2005)] by the Bloom filter. However, these lower bounds assume the Bloom filter could store any possible set. If the data set or the membership queries have specific structure, it may be possible to beat the lower bounds in practice [Mitzenmacher (2002), Bruck et al. (2006), Mitzenmacher et al. (2020)]. In particular, [Kraska et al. (2018)] and [Mitzenmacher (2018)] propose using machine learning models to reduce the space further, by using a learned model to provide a suitable pre-filter for the membership queries. This allows one to beat the space lower bounds by leveraging the context-specific information present in the learned model. Rae et al. (2019) propose a neural Bloom filter that learns to write to memory using a distributed write scheme and achieves compression gains over the classical Bloom filter. The key idea of learned Bloom filters is that in many practical settings, given a query input, the likelihood that the input is in the set S can be deduced from some observable features which can be captured by a machine learning model. For example, a Bloom filter that represents a set of malicious URLs can benefit from a learned model that can distinguish malicious URLs from benign URLs. This model can be trained on URL features such as the length of the hostname, counts of special characters, etc. This approach is described in [Kraska et al. (2018)], which studies how standard index structures can be improved using machine learning models; we refer to their framework as the original learned Bloom filter. Given an input x and its features, the model outputs a score s(x) which is supposed to correlate with the likelihood of the input being in the set.
Thus, the elements of the set, or keys, should have higher score values compared to non-keys. This model is used as a pre-filter, so when the score s(x) of an input x is above a pre-determined threshold t, the input is directly classified as being in the set. For inputs where s(x) < t, a smaller backup Bloom filter built from only the keys with a score below the threshold (which are known) is used. This maintains the property that there are no false negatives. The design essentially uses the model to immediately answer for inputs with high scores, whereas the rest of the inputs are handled by the backup Bloom filter, as shown in Fig.1(A). The threshold value t partitions the space of scores into two regions, with inputs processed differently depending on which region their scores fall in. With a sufficiently accurate model, the size of the backup Bloom filter can be reduced significantly over the size of a standard Bloom filter while maintaining overall accuracy. [Kraska et al. (2018)] showed that, in some applications, even after taking the size of the model into account, the learned Bloom filter can be smaller than the standard Bloom filter for the same false positive rate. The original learned Bloom filter compares the model score against a single threshold, but the framework has several drawbacks.
Choosing the right threshold: The choice of threshold value for the learned Bloom filter is critical, but the original design uses heuristics to determine the threshold value.
Using more partitions: Comparing the score value only against a single threshold value wastes information provided by the learning model. For instance, two elements $x_1, x_2$ with $s(x_1) \gg s(x_2) > t$ are treated the same way, but the odds of $x_1$ being a key are much higher than for $x_2$. Intuitively, we should be able to do better by partitioning the score space into more than two regions.
Optimal Bloom filters for each region: Elements with scores above the threshold are directly accepted as keys. A more general design would provide backup Bloom filters in both regions and choose the Bloom filter false positive rate of each region so as to optimize the space/false-positive trade-off as desired. The original setup can be interpreted as using a Bloom filter of size 0 and false positive rate of 1 above the threshold. This may not be the optimal choice; moreover, as we show, using different Bloom filters for each region (as shown in Fig.1(C)) allows further gains when we increase the number of partitions.
Follow-up work by [Mitzenmacher (2018)] and [Dai & Shrivastava (2019)] improves on the original design but only addresses a subset of these drawbacks. In particular, [Mitzenmacher (2018)] proposes using Bloom filters for both regions and provides a method to find the optimal false positive rates for each Bloom filter, but considers only two regions and does not consider how to find the optimal threshold value. [Dai & Shrivastava (2019)] propose using multiple thresholds to divide the space of scores into multiple regions, with a different backup Bloom filter for each score region. The false positive rates for each of the backup Bloom filters and the threshold values are chosen using heuristics. Empirically, we found that these heuristics might perform worse than [Mitzenmacher (2018)] in some scenarios.
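For concreteness, a minimal sketch of the original learned Bloom filter's query path described above (the `model` and `backup_bf` objects are hypothetical stand-ins):

```python
def learned_bf_query(x, model, threshold, backup_bf):
    """Original learned Bloom filter (Kraska et al.): accept high-score
    inputs directly; route the rest to a backup filter built from the keys
    with score <= threshold. No false negatives by construction."""
    if model.score(x) > threshold:
        return True            # model answers directly (may false-positive)
    return backup_bf.query(x)  # backup filter covers the low-score keys
```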
A general design that resolves all the drawbacks would, given a target false positive rate and the learned model, partition the score space into multiple regions with separate backup Bloom filters for each region, and find the optimal threshold values and false positive rates, with the goal of minimizing the memory usage while achieving the desired false positive rate, as shown in Fig.1(C). In this work, we show how to frame this problem as an optimization problem, and show that our resulting solution significantly outperforms the heuristics used in previous works. Additionally, we show that our maximum space saving (the space saved by using our approach instead of a Bloom filter) is linearly proportional to the KL divergence of the key and non-key score distributions determined by the partitions. We present a dynamic programming algorithm to find the optimal parameters (up to the discretization used for the dynamic programming) and demonstrate performance improvements over a synthetic dataset and two real-world datasets: URLs and EMBER. We also show that the performance of the learned Bloom filter improves with an increasing number of partitions, and that in practice a small number of regions (≈ 4–6) suffices to get very good performance. We refer to our approach as a partitioned learned Bloom filter (PLBF). Experimental results from both simulated and real-world datasets show significant performance improvements. We show that to achieve a false positive rate of 0.001, [Mitzenmacher (2018)] uses 8.8x, 3.3x, and 1.2x the amount of space, and [Dai & Shrivastava (2019)] uses 6x, 2.5x, and 1.1x the amount of space, compared to PLBF for synthetic, URLs, and EMBER, respectively.
2 BACKGROUND
2.1 STANDARD BLOOM FILTERS AND RELATED VARIANTS
A standard Bloom filter, as described in Bloom’s original paper [Bloom (1970)], is for a set $S = \{x_1, x_2, \ldots, x_n\}$ of $n$ keys. It consists of an array of $m$ bits and uses $k$ independent hash functions $\{h_1, h_2, \ldots, h_k\}$, with the range of each $h_i$ being integer values between $0$ and $m-1$. We assume the hash functions are fully random. Initially all $m$ bits are 0. For every key $x \in S$, the array bits $h_i(x)$ are set to 1 for all $i \in \{1, 2, \ldots, k\}$. A membership query for $y$ returns that $y \in S$ if $h_i(y) = 1$ for all $i \in \{1, 2, \ldots, k\}$, and $y \notin S$ otherwise. This ensures that the Bloom filter has no false negatives, but non-keys $y$ might result in false positives. This false positive rate depends on the space $m$ used by the Bloom filter. Asymptotically (for large $m, n$ with $m/n$ held constant), the false positive rate is given by
$$\left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^{k}. \quad (1)$$
See [Broder & Mitzenmacher (2003); Bose et al. (2008)] for further details. [Bloom (1970)] proved a space lower bound of $|S| \times \log_2(\frac{1}{F})$ for a Bloom filter with false positive rate $F$. The standard construction uses space that is asymptotically $\log_2 e\ (\approx 1.44)$ times more than the lower bound. Other constructions exist, such as Cuckoo filters [Fan et al. (2014)], Morton filters [Breslow & Jayasena (2018)], XOR filters [Graf & Lemire (2020)], and Vacuum filters [Wang et al. (2019)]. These variants achieve slightly better space performance compared to standard Bloom filters but are still a constant factor larger than the lower bound. [Pagh et al. (2005)] presents a Bloom filter design that achieves this space lower bound, but it appears too complicated to use in practice. (A minimal sketch of the standard construction follows below.)
2.2 LEARNED BLOOM FILTER
Learned Bloom filters make use of learned models to beat the theoretical space bounds.
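The standard construction referenced at the end of Section 2.1 can be sketched as follows; the double-hashing scheme via SHA-256 is an implementation choice for the sketch, not from the paper:

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for clarity

    def _hashes(self, item):
        # Double hashing: h_i(x) = (h1(x) + i*h2(x)) mod m approximates
        # k independent hash functions.
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = 1

    def query(self, item):
        # No false negatives; false positives occur at the rate of Eq. (1).
        return all(self.bits[idx] for idx in self._hashes(item))
```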
Given a learned model that can distinguish between keys and non-keys, learned Bloom filters use it as a pre-filter before using backup Bloom filters. The backup Bloom filters can be any variant, including the standard, Cuckoo, XOR filters, etc. If the size of the model is sufficiently small, learned models can be used to enhance the performance of any Bloom filter variant. We now provide the framework for learned Bloom filters. We are given a set of keys $S = \{x_1, x_2, \ldots, x_n\}$ from a universe $U$ for which to build a Bloom filter. We are also given a sample $Q$ of the non-keys which is representative of the set $U - S$. Features that can help in determining whether an element is a member of $S$ are determined. The learned model is then trained on the features of the set $S \cup Q$ for a binary classification task and produces a score $s(x) \in [0, 1]$. This score $s(x)$ can be viewed (intuitively, not formally) as the confidence of the model that the element $x$ is in the set $S$. So, a key in $S$ would ideally have a higher score value than the non-keys. An assumption in this framework is that the training sample distribution needs to match, or be close to, the test distribution of non-keys; the importance of this assumption has been discussed at length in [Mitzenmacher (2018)]. For many applications, past workloads or historical data can be used to get an appropriate non-key sample. As discussed above, [Kraska et al. (2018)] set a threshold t so that inputs satisfying s(x) > t are classified as keys. A backup Bloom filter is built for just the keys in S satisfying s(x) ≤ t. This design is represented in Fig.1(A). [Mitzenmacher (2018)] proposes using another Bloom filter before the learned model along with a backup Bloom filter. As the learned model is used between two Bloom filters, as shown in Fig.1(B), this is referred to as the ‘sandwiching’ approach. They also provide an analysis of the optimal false positive rates for a given amount of memory for the two Bloom filters (given the false negative rate and false positive rate for the learned model, and the corresponding threshold). Interestingly, the sandwiching approach and analysis can be seen as a special case of our approach and analysis, as we describe later in Appendix D.1. [Dai & Shrivastava (2019)] use multiple thresholds to partition the score space into multiple regions and use a backup Bloom filter for each score region. They propose heuristics for how to divide up the score range and choose the false positive rate per region.
3 PARTITIONED LEARNED BLOOM FILTER (PLBF)
3.1 DESIGN
As discussed before, the general design segments the score space into multiple regions using multiple thresholds, as shown in Fig.1(C), and uses separate backup Bloom filters for each region. We can choose different target false positive rates for each region. The parameters associated with each region are its threshold boundaries and its false positive rate. Setting good values for these parameters is crucial for performance. Our aim is to analyze the performance of the learned Bloom filter with respect to these parameters, and to find methods for determining optimal or near-optimal parameters. The following notation will be important for our analysis. Let $G(t)$ be the fraction of keys with scores falling below $t$. We note that since the key set is finite, $G(t)$ goes through discrete jumps. But it is helpful (particularly in our pictures) to think of $G(t)$ as being a continuous function, corresponding to a cumulative probability distribution, with a corresponding “density” function $g(t)$.
3 PARTITIONED LEARNED BLOOM FILTER (PLBF)

3.1 DESIGN

As discussed before, the general design segments the score space into multiple regions using multiple thresholds, as shown in Fig.1(C), and uses separate backup Bloom filters for each region. We can choose different target false positive rates for each region². The parameters associated with each region are its threshold boundaries and its false positive rate. Setting good values for these parameters is crucial for performance. Our aim is to analyze the performance of the learned Bloom filter with respect to these parameters, and to find methods to determine optimal or near-optimal parameters.

² The different false positive rates per region can be achieved in multiple ways: either by choosing a separate Bloom filter per region, or by having a common Bloom filter with a varying number of hash functions per region.

The following notation will be important for our analysis. Let G(t) be the fraction of keys with scores falling below t. We note that since the key set is finite, G(t) goes through discrete jumps. But it is helpful (particularly in our pictures) to think of G(t) as being a continuous function, corresponding to a cumulative probability distribution, with a corresponding "density" function g(t).

For non-keys, we assume that queries involving non-keys come from some distribution D, and we define H(t) to be the probability that a non-key query from D has a score less than or equal to t. Note that the non-key query distribution might be different from the non-key distribution; if non-key queries are chosen uniformly at random, the two coincide. We assume that H(t) is known in the theoretical analysis below. In practice, we expect a good approximation of H(t) will be used, determined by taking samples from D or a suitably good approximation, which may be based on, for example, historical data (discussed in detail in [Mitzenmacher (2018)]). Here H(t) can be viewed as a cumulative distribution function, and again in our pictures we think of it as having a density h(t). Also, note that if queries for non-keys are simply chosen uniformly at random, then H(t) is just the fraction of non-keys with scores below t. While our analysis holds generally, the example of H(t) being the fraction of non-keys with scores below t may be easier to keep in mind.

Visualization of the original learned Bloom filter in terms of these distributions is shown in Fig.1(D). As we describe further below, for our partitioned learned Bloom filter, we use multiple thresholds and a separate backup Bloom filter for each region, as shown in Fig.1(E). In what follows, we formulate the problem of choosing thresholds and backup Bloom filter false positive rates (or, equivalently, sizes) as an optimization problem in Section 3.2. In Section 3.3.1, we find the optimal solution of a relaxed problem, which helps us gain some insight into the general problem. We then propose an approximate solution for the general problem in Section 3.3.3.

We find in our formulation that the resulting parameters correspond to quite natural quantities in terms of G and H. Specifically, the optimal false positive rate of a region is proportional to the ratio of the fraction of keys to the fraction of non-keys in that region. If we think of these region-based fractions for keys and non-keys as probability distributions, the maximum space saving obtained is proportional to the KL divergence between these distributions. Hence we can optimize the thresholds by choosing them to maximize this divergence. We show that we can find thresholds to maximize this divergence, approximately, through dynamic programming. We also show that, naturally, this KL divergence increases with the number of regions, and so does the performance. In our experiments, we find a small number (≈ 4-6) of partitions suffices to get good performance.
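Since G and H enter the analysis only through the fractions of keys and non-key queries falling in each region, in practice one can work with discretized histograms of model scores. A minimal sketch, where the score arrays are assumed to come from the trained model:

```python
import numpy as np

def score_histograms(key_scores, nonkey_scores, N=1000):
    """Discretize [0, 1] into N segments and estimate per-segment densities:
    g[r] = fraction of keys in segment r (empirical stand-in for G's density),
    h[r] = fraction of non-key queries in segment r (stand-in for H's density)."""
    edges = np.linspace(0.0, 1.0, N + 1)
    g, _ = np.histogram(key_scores, bins=edges)
    h, _ = np.histogram(nonkey_scores, bins=edges)
    return g / g.sum(), h / h.sum()
```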
3.2 GENERAL OPTIMIZATION FORMULATION

To formulate the overall problem as an optimization problem, we consider the variant which minimizes the space used by the Bloom filters in the PLBF in order to achieve an overall target false positive rate (F). We could have similarly framed it as minimizing the false positive rate given a fixed amount of space. Here we are assuming the learned model is given. We assume normalized score values in [0, 1] for convenience. We have region boundaries given by ti values 0 = t0 ≤ t1 ≤ ... ≤ tk−1 ≤ tk = 1, with score values in [ti−1, ti] falling into the ith region. We assume the target number of regions k is given. We denote the false positive rate for the Bloom filter in the ith region by fi. We let G and H be defined as above. As stated previously, Fig.1(E) corresponds to this setting, and the following optimization problem finds the optimal thresholds ti and the false positive rates fi:

$$\min_{t_i, f_i} \quad \left(\sum_{i=1}^{k} |S| \times (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right)\right) + \text{Size of Learned Model} \quad (2)$$

subject to the constraints

$$\sum_{i=1}^{k} \left(H(t_i) - H(t_{i-1})\right) \times f_i \leq F \quad (3)$$

$$f_i \leq 1, \quad i = 1 \ldots k \quad (4)$$

$$(t_i - t_{i-1}) \geq 0, \quad i = 1 \ldots k; \quad t_0 = 0; \quad t_k = 1 \quad (5)$$

The minimized term (Eq.2) represents the total size of the learned Bloom filter; the size of the backup Bloom filters is obtained by summing the individual backup Bloom filter sizes. The constant c in the equation depends on which variant of the Bloom filter is used as the backup³; as it happens, its value will not affect the optimization. The first constraint (Eq.3) ensures that the overall false positive rate stays below the target F; the overall false positive rate is obtained by summing the appropriately weighted rates of each region. The next constraint (Eq.4) encodes that the false positive rate for each region is at most 1. The last set of constraints (Eq.5) ensures the threshold values are increasing and cover the interval [0, 1].

³ The sizes of Bloom filter variants are proportional to |S| × log2(1/f), where S is the set the filter represents and f is the false positive rate it achieves. See e.g. [Mitzenmacher (2018)] for related discussion. The constant c depends on which type of Bloom filter is used as a backup; for example, c = log2(e) for the standard Bloom filter.

3.3 SOLVING THE OPTIMIZATION PROBLEM

3.3.1 SOLVING A RELAXED PROBLEM

If we remove the false positive rate constraints (Eq.4, fi ≤ 1), we obtain the relaxed problem shown in Eq.6. This relaxation is useful because it allows us to use the Karush-Kuhn-Tucker (KKT) conditions to obtain the optimal fi values in terms of the ti values, which we use to design algorithms for finding near-optimal solutions. Throughout this section, we assume the relaxed problem yields a solution for the original problem; we return to this issue in subsection 3.3.3.

$$\begin{aligned} \min_{t_i, f_i} \quad & \left(\sum_{i=1}^{k} |S| \times (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right)\right) + \text{Size of Learned Model} \\ \text{s.t.} \quad & \sum_{i=1}^{k} (H(t_i) - H(t_{i-1})) \times f_i \leq F; \quad (t_i - t_{i-1}) \geq 0,\ i = 1 \ldots k; \quad t_0 = 0;\ t_k = 1 \end{aligned} \quad (6)$$

The optimal fi values obtained by using the KKT conditions yield Eq.7 (as derived in Appendix.A), giving the exact solution in terms of the ti's:

$$f_i = F\,\frac{G(t_i) - G(t_{i-1})}{H(t_i) - H(t_{i-1})} \quad (7)$$

The numerator G(ti) − G(ti−1) is the fraction of keys in the ith region, and the denominator H(ti) − H(ti−1) is the probability of a non-key query being in the ith region. In intuitive terms, the false positive rate for a region is proportional to the ratio of the key density (fraction of keys) to the non-key density (fraction of non-key queries).
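Eq.7 reads directly as code over the per-region key and non-key fractions. A sketch of the relaxed solution (before the fi ≤ 1 caps of Section 3.3.3 are enforced):

```python
def relaxed_fprs(g_region, h_region, F):
    """Optimal per-region false positive rates from Eq.7, ignoring the f_i <= 1 caps.

    g_region[i] = G(t_i) - G(t_{i-1}), fraction of keys in region i.
    h_region[i] = H(t_i) - H(t_{i-1}), fraction of non-key queries in region i.
    """
    return [F * g / h for g, h in zip(g_region, h_region)]

# Example: the weighted sum of the returned rates meets the target F (Eq.3 is tight),
# since sum_i h_i * (F * g_i / h_i) = F * sum_i g_i = F.
g = [0.05, 0.15, 0.30, 0.50]
h = [0.60, 0.25, 0.10, 0.05]
f = relaxed_fprs(g, h, F=0.01)
assert abs(sum(fi * hi for fi, hi in zip(f, h)) - 0.01) < 1e-12
```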
Since we have found the optimal fi in terms of the ti, we can replace the fi in the original problem to obtain a problem only in terms of the ti. In what follows, we use ĝ(t) to represent the discrete distribution given by the k values G(ti) − G(ti−1) for i = 1, ..., k, and similarly we use ĥ(t) for the distribution corresponding to the H(ti) − H(ti−1) values. Eq.8 shows the rearrangement of the minimization term (excluding the model size) after substitution:

$$\begin{aligned} \text{Min. Term} &= \sum_{i=1}^{k} |S| \times (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{H(t_i) - H(t_{i-1})}{(G(t_i) - G(t_{i-1})) \times F}\right) \\ &= \left(\sum_{i=1}^{k} |S| \times (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{F}\right)\right) - c \times |S| \times D_{KL}\left(\hat{g}(t), \hat{h}(t)\right) \end{aligned} \quad (8)$$

where D_KL is the standard KL divergence for the distributions given by ĝ(t) and ĥ(t). Eq.8 represents the space occupied by the backup Bloom filters; the total space includes this and the space occupied by the learned model:

$$c \times \left(|S| \times \log_2\left(\frac{1}{F}\right) - |S| \times D_{KL}\left(\hat{g}(t), \hat{h}(t)\right)\right) + \text{Size of Learned Model} \quad (9)$$

The space occupied by the Bloom filter without the learned model is c × |S| × log2(1/F). Thus, the space saved by PLBF in comparison to the normal Bloom filter is:

$$c \times |S| \times D_{KL}\left(\hat{g}(t), \hat{h}(t)\right) - \text{Size of Learned Model} \quad (10)$$

The space saved by PLBF is therefore linearly proportional to the KL divergence of the key and non-key distributions of the regions, given by ĝ(t) and ĥ(t). This derivation suggests that the KL divergence might also be used as a loss function to improve the model quality. We have tested this empirically, but thus far have not seen significant improvements over the MSE loss we use in our experiments; this remains an interesting issue for future work.

3.3.2 FINDING THE OPTIMAL THRESHOLDS FOR THE RELAXED PROBLEM

We have shown that, given a set of thresholds, we can find the optimal false positive rates for the relaxed problem. Here we turn to the question of finding optimal thresholds. We assume again that we are given k, the number of regions desired. (We consider the importance of choosing k further in our experimental section.) Given our results above, the optimal thresholds correspond to the points that maximize the KL divergence between ĝ(t) and ĥ(t). This KL divergence is the sum of the terms gi log2(gi/hi), one term per region. (Here gi = G(ti) − G(ti−1) and hi = H(ti) − H(ti−1).) Note that each term depends only on the proportion of keys and non-keys in that region and is otherwise independent of the other regions. This property allows a recursive definition of the KL divergence that is suitable for dynamic programming.

We divide the score space [0, 1] into N consecutive small segments for a chosen value of N; this provides a discretization of the score space, with larger N more closely approximating the real interval. Given k, we can find a set of k approximately optimal thresholds using dynamic programming, where the solution is approximate due to our discretization of the score space. Let DP_KL(n, j) denote the maximum divergence one can get when dividing the first n segments into j regions. Our approximately optimal divergence corresponds to DP_KL(N, k). The idea behind the algorithm is that we can recursively define DP_KL(n, j), maximizing over the start i of the last region, as represented in Eq.11. Here g′ and h′ represent the fraction of keys and the fraction of non-key queries, respectively, in each of the N segments:

$$DP_{KL}(n, j) = \max_{1 \leq i \leq n} \left( DP_{KL}(i-1,\ j-1) + \left(\sum_{r=i}^{n} g'(r)\right) \times \log_2\left(\frac{\sum_{r=i}^{n} g'(r)}{\sum_{r=i}^{n} h'(r)}\right) \right) \quad (11)$$

The time complexity of computing DP_KL(N, k) is O(N²k). One can increase the value of N to get more precision in the discretization when finding thresholds, at the cost of higher computation time.
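A minimal sketch of the dynamic program of Eq.11 over the discretized densities (as produced by, e.g., the score_histograms sketch above). Prefix sums make each region term O(1), giving the O(N²k) total; the function and variable names are ours.

```python
import math

def optimal_thresholds(g, h, k):
    """DP of Eq.11: split N score segments into k regions maximizing
    the divergence sum over regions of g_i * log2(g_i / h_i).

    g[r], h[r] are the key / non-key query fractions of segment r
    (each summing to 1). Returns the start segment index of each region.
    """
    N = len(g)
    G = [0.0] * (N + 1)
    H = [0.0] * (N + 1)
    for r in range(N):  # prefix sums so each region term is O(1)
        G[r + 1] = G[r] + g[r]
        H[r + 1] = H[r] + h[r]

    def term(i, n):  # divergence contribution of a region of segments i .. n-1
        gs, hs = G[n] - G[i], H[n] - H[i]
        if gs <= 0.0:
            return 0.0
        return gs * math.log2(gs / hs) if hs > 0.0 else float("inf")

    NEG = float("-inf")
    dp = [[NEG] * (N + 1) for _ in range(k + 1)]
    back = [[0] * (N + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for n in range(j, N + 1):
            for i in range(j - 1, n):  # region j covers segments i .. n-1
                if dp[j - 1][i] == NEG:
                    continue
                val = dp[j - 1][i] + term(i, n)
                if val > dp[j][n]:
                    dp[j][n], back[j][n] = val, i
    starts, n = [], N
    for j in range(k, 0, -1):  # walk back pointers to recover region starts
        n = back[j][n]
        starts.append(n)
    return starts[::-1]
```

Feeding the per-region sums of g and h into relaxed_fprs above (or into the capped variant of Appendix B) then completes the construction.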
3.3.3 THE RELAXED PROBLEM AND THE GENERAL PROBLEM

We can find a near-optimal solution to the relaxed problem by first obtaining the threshold values that maximize the divergence, and then getting the optimal fi values using Eq.7. In many cases, the optimal relaxed solution will also be the optimal general solution, specifically if F × (G(ti) − G(ti−1))/(H(ti) − H(ti−1)) < 1 for all i. Hence, if we are aiming for a sufficiently low false positive rate F, solving the relaxed problem suffices.

To solve the general problem, we need to deal with regions where fi ≥ 1, but we can use the relaxed problem as a subroutine. First, given a fixed set of ti values for the general problem, we have an algorithm (Alg.1, discussed in Appendix.B) to find the optimal fi's. Briefly summarized, we solve the relaxed problem; for regions with fi > 1, the algorithm sets fi = 1, then re-solves the relaxed problem with these additional constraints, and repeats until no region with fi > 1 remains. The problem is that we do not have the optimal set of ti values to begin with; as such, we use the optimal ti values for the relaxed solution as described in Section 3.3.2. This yields a solution to the general problem (pseudocode in Alg.2), but we emphasize that it is not optimal in general, since we did not start with the optimal ti. We still expect it to perform very well in most cases.

In practice, we observe that keys are more concentrated at higher scores and non-key queries are more concentrated at lower scores. Given this property, if a region with fi = 1 (no backup Bloom filter used) exists in the optimal solution of the general problem, it will most probably be the rightmost region. In particular, if (G(ti) − G(ti−1))/(H(ti) − H(ti−1)) is increasing as ti−1, ti increase – that is, the ratio of the fraction of keys to the fraction of non-key queries over regions is increasing – then indeed without loss of generality the last (kth) region will be the only one with fk = 1. (We say only one region because any two consecutive regions with fi = 1 can be merged and an extra region can be made in the remaining space, which is strictly better, as adding an extra region always helps, as shown in Appendix.D.2.) It is reasonable to believe that in practice this ratio will be increasing or nearly so. Hence, if we make the assumption that in the optimal solution all the regions except the last satisfy the fi < 1 constraint, then if we identify the optimal last region's boundary, we can remove the fi ≤ 1 constraints for i ≠ k and apply the DP algorithm to find near-optimal ti's. To identify the optimal last region's boundary, we simply try all possible boundaries for the kth region (details discussed in Appendix.C). As this involves assumptions on the behavior of G and H, we emphasize again that it will not guarantee finding the optimal solution. But when the conditions are met, it will lead to a near-optimal solution (only near-optimal due to the discretization of the dynamic program).

4 EVALUATION

We compare PLBF against the theoretically optimal Bloom filter [Bloom (1970)]⁴, the sandwiching approach [Mitzenmacher (2018)], and AdaBF [Dai & Shrivastava (2019)]. Comparisons against standard Bloom filters⁵ appear in Appendix.E.1. We excluded the original learned Bloom filter [Kraska et al. (2018)] as the sandwiching approach was strictly better. We include the size of the learned model with the size of the learned Bloom filter. To ensure a fair comparison, we used the optimal Bloom filter as the backup Bloom filter for all learned variants. We use 3 different datasets:

URLs: As in previous papers [Kraska et al. (2018), Dai & Shrivastava (2019)], we used the URL dataset, which contains 103,520 (23%) malicious and 346,646 (77%) benign URLs. We used 17 features from these URLs, such as host name length, use of shortening, counts of special characters, etc.

⁴ For the space of a theoretically optimal Bloom filter, we take the standard Bloom filter of the same false positive rate and divide its space used by log2 e, as obtaining near-optimality in practice is difficult. This uses the fact that the standard Bloom filter is asymptotically log2 e times larger than optimal, as discussed in Sec.2.1.
⁵ PLBF performs better against standard Bloom filters, as discussed in Appendix.D.3. The results in Section 4.1 are thus conservative estimates of the gains possible in practice using a PLBF.
EMBER: Bloom filters are widely used to match file signatures against a virus signature database. EMBER (Endgame Malware Benchmark for Research) [Anderson & Roth (2018)] is an open-source collection of 1.1M sha256 file hashes that were scanned by VirusTotal in 2017. Out of the 1.1 million files, 400K are malicious, 400K are benign, and we ignore the remaining 300K unlabeled files. The features of the files are already included in the dataset.

Synthetic: An appealing scenario for our method is when the key density increases and the non-key density decreases monotonically with respect to the score value. We simulate this by generating the key and non-key score distributions using Zipfian distributions, as in Fig.2(A). Since we work directly on the score distribution, the size of the learned model for this synthetic dataset is zero.
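One minimal way to generate such a synthetic pair of score distributions; the exact Zipf parameters used in the paper are not specified, so the skew and sampling scheme here are illustrative.

```python
import numpy as np

def zipf_scores(n, N=1000, skew=1.5, keys=True, rng=None):
    """Sample n model scores over N discrete score segments with Zipf-like mass.

    Non-keys concentrate near score 0; keys are mirrored to concentrate near 1.
    """
    rng = rng or np.random.default_rng(0)
    ranks = np.arange(1, N + 1, dtype=float)
    p = ranks ** (-skew)
    p /= p.sum()                       # Zipfian mass, heaviest at rank 1
    seg = rng.choice(N, size=n, p=p)   # heavy mass at low segment indexes
    scores = (seg + rng.random(n)) / N
    return 1.0 - scores if keys else scores

key_scores = zipf_scores(100_000, keys=True)
nonkey_scores = zipf_scores(100_000, keys=False)
```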
4.1 OVERALL PERFORMANCE

Here, we compare the performance of PLBF against the other baselines by fixing the target F and measuring the space used by each method. We use PLBF Alg.3 with the DP algorithm discretization (N) set to 1000. We train the model on the entire key set and 40% of the non-key set. The thresholds and backup Bloom filters are then tuned using this model with the aim of achieving the fixed target F. The rest of the non-keys are used to evaluate the actual false positive rate. While any model can be used, we choose the random forest classifier from sklearn [Pedregosa et al.] for its good accuracy. The F1 scores of the learned models used for synthetic, URLs, and EMBER were 0.99, 0.97, and 0.85, respectively. We consider the size of the model to be the pickle file size on disk (a standard way of serializing objects in Python). We use five regions (k = 5) for both PLBF and AdaBF, as this is usually enough to achieve good performance, as discussed in Section 4.2; using a higher k would only improve our performance.

The results of the experiment are shown in Fig.2(A-C), along with the distribution of the scores of keys and non-keys for each dataset. As we can see from the figure, PLBF has a better Pareto curve than the other baselines for all the datasets. On the synthetic and URLs datasets we observe significantly better performance; in contrast, for the EMBER dataset our performance is only slightly better, indicating that the model here is not as helpful. The difference between the space used by PLBF and the optimal Bloom filter first increases with decreasing false positive rate but converges to a constant value for all datasets, as given by Eq.10. For the same amount of space used (400Kb, 500Kb, and 3000Kb for synthetic, URLs, and EMBER, respectively), PLBF achieves 22x, 26x, and 3x smaller false positive rates than the sandwiching approach, and 8.5x, 9x, and 1.9x smaller false positive rates than AdaBF, for synthetic, URLs, and EMBER, respectively. To achieve a false positive rate of 0.001, the sandwiching approach uses 8.8x, 3.3x, and 1.2x the amount of space, and AdaBF uses 6x, 2.5x, and 1.1x the amount of space, compared to PLBF for the synthetic, URLs, and EMBER datasets, respectively.

4.2 PERFORMANCE AND THE NUMBER OF REGIONS

The maximum space savings obtained by using PLBF is linearly proportional to the KL divergence of the distributions (Eq.10), and this KL divergence strictly increases with the number of regions (Appendix.D.2). Fig.2(D-F) shows the space saved w.r.t. the optimal Bloom filter as we increase the number of regions k for a target false positive rate of 0.001. The red line in the figure shows the savings when using 25 regions; using more regions provides no noticeable improvement on this data. Our results suggest that using 4-6 regions should be sufficient to obtain good performance. Additional experiments in Appendix.E show PLBF performance against standard Bloom filters and PLBF performance w.r.t. model quality.

5 CONCLUSION

Our analysis of the partitioned learned Bloom filter provides a formal framework for improving on learned Bloom filter performance that yields substantially better performance than previous heuristics. As Bloom filters are used across thousands of applications, we hope the PLBF may find many uses where the data set is amenable to a learned model.

Acknowledgments

This research was supported in part by NSF grants CCF-1563710 and CCF-1535795, and by a gift to the Center for Research on Computation and Society at Harvard University.

A SOLVING THE RELAXED PROBLEM USING KKT CONDITIONS

As mentioned in the main text, if we relax the constraint fi ≤ 1, we can obtain the optimal fi values using the stationary KKT conditions. Here we show this derivation. The appropriate Lagrangian is given in Eq.12. In this case, the KKT conditions tell us that the optimal solution is a stationary point of the Lagrangian. Therefore, we find where the derivative of the Lagrangian with respect to fi is zero.

$$\mathcal{L}(t_i, f_i, \lambda, \nu_i) = \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right) + \lambda \times \left(\left(\sum_{i=1}^{k} (H(t_i) - H(t_{i-1})) \times f_i\right) - F\right) + \sum_{i=1}^{k} \nu_i \times (t_{i-1} - t_i) \quad (12)$$

$$\frac{\partial \mathcal{L}(t_i, f_i, \lambda, \nu_i)}{\partial f_i} = 0 \quad (13)$$

$$\frac{\partial\, (G(t_i) - G(t_{i-1}))\, c \log_2\left(\frac{1}{f_i}\right)}{\partial f_i} = -\lambda\, \frac{\partial\, (H(t_i) - H(t_{i-1})) \times f_i}{\partial f_i} \quad (14)$$

$$f_i = \frac{c\,(G(t_i) - G(t_{i-1}))}{\ln(2)\,\lambda\,(H(t_i) - H(t_{i-1}))} \quad (15)$$

$$\lambda = \frac{c \sum_{i=1}^{k} (G(t_i) - G(t_{i-1}))}{F \ln(2)} = \frac{c}{F \ln 2} \quad (16)$$

$$f_i = \frac{(G(t_i) - G(t_{i-1})) \times F}{H(t_i) - H(t_{i-1})} \quad (17)$$

Eq.15 expresses fi in terms of λ. Summing the constraint over all i with the fi of Eq.15 substituted, and using the fact that the G(ti) − G(ti−1) sum to one, we get Eq.16. Thus the optimal fi values turn out to be as given in Eq.17, matching Eq.7.
Algorithm 1 Finding optimal false positive rates given thresholds
Input: G′ – array containing the key density of each region
Input: H′ – array containing the non-key density of each region
Input: F – target overall false positive rate
Input: k – number of regions
Output: f – array of false positive rates of each region
1: procedure OPTIMALFPR(G′, H′, F, k)
2:   Gsum ← 0                          ▷ sum of key density of regions with f[i] = 1
3:   Hsum ← 0                          ▷ sum of non-key density of regions with f[i] = 1
4:   for i in 1, 2, ..., k do
5:     f[i] ← G′[i] · F / H′[i]        ▷ assign the relaxed-problem solution (Eq.7)
6:   while some f[i] > 1 do
7:     for i in 1, 2, ..., k do
8:       if f[i] > 1 then f[i] ← 1     ▷ cap the region's false positive rate at one
9:     Gsum ← 0
10:    Hsum ← 0
11:    for i in 1, 2, ..., k do
12:      if f[i] = 1 then Gsum ← Gsum + G′[i]; Hsum ← Hsum + H′[i]   ▷ key/non-key density in regions with no Bloom filter (f[i] = 1)
13:    for i in 1, 2, ..., k do
14:      if f[i] < 1 then f[i] ← G′[i] · (F − Hsum) / (H′[i] · (1 − Gsum))   ▷ rescale the remaining regions so the overall false positive rate meets the target F
15:  return f

Algorithm 2 Using the relaxed solution for the general problem
Input: Gdis – array containing the discretized key density of each segment
Input: Hdis – array containing the discretized non-key density of each segment
Input: F – target overall false positive rate
Input: k – number of regions
Output: t – array of threshold boundaries of each region
Output: f – array of false positive rates of each region
Uses: ThresMaxDivDP – DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Uses: CalcDensity – returns the region densities given the thresholds of the regions
Uses: OptimalFPR – returns the optimal false positive rates of the regions given the thresholds
Uses: SpaceUsed – returns the space used by the backup Bloom filters given the thresholds and false positive rate per region
1: procedure SOLVE(Gdis, Hdis, F, k)
2:   t ← ThresMaxDivDP(Gdis, Hdis, k)       ▷ optimal thresholds for the relaxed problem
3:   G′, H′ ← CalcDensity(Gdis, Hdis, t)
4:   f ← OptimalFPR(G′, H′, F, k)           ▷ optimal false positive rates of the general problem for the given thresholds
5:   return t, f

B OPTIMAL FALSE POSITIVE RATES FOR GIVEN THRESHOLDS

We provide the pseudocode for the algorithm that finds the optimal false positive rates when the threshold values are given. The corresponding optimization problem is shown in Eq.18. As the boundaries of the regions are already defined, one only needs to find the optimal false positive rate for the backup Bloom filter of each region.

$$\begin{aligned} \min_{f_i} \quad & \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right) \\ \text{s.t.} \quad & \sum_{i=1}^{k} (H(t_i) - H(t_{i-1})) \times f_i = F; \quad f_i \leq 1,\ i = 1 \ldots k \end{aligned} \quad (18)$$

Alg.1 gives the pseudocode. We first assign false positive rates based on the relaxed problem, but may find that fi ≥ 1 for some regions. For such regions, we set fi = 1, re-solve the relaxed problem with these additional constraints (that is, excluding these regions), and use the result as a solution for the general problem. Some regions might again have a false positive rate above one, so we repeat the process. The algorithm stops when there is no new region with a false positive rate greater than one. This algorithm finds the optimal false positive rates for the regions when the thresholds are fixed.
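A sketch of Alg.1 in Python, as a direct transcription of the pseudocode above; the region densities are assumed normalized.

```python
def optimal_fpr(g_region, h_region, F):
    """Alg.1: optimal per-region FPRs for fixed thresholds, enforcing f_i <= 1."""
    k = len(g_region)
    f = [F * g_region[i] / h_region[i] for i in range(k)]  # relaxed solution (Eq.7)
    while any(fi > 1 for fi in f):
        f = [min(fi, 1.0) for fi in f]  # cap offending regions at one
        g_sum = sum(g for g, fi in zip(g_region, f) if fi == 1.0)
        h_sum = sum(h for h, fi in zip(h_region, f) if fi == 1.0)
        # Re-solve the relaxed problem on the remaining regions with the leftover budget.
        f = [fi if fi == 1.0 else g * (F - h_sum) / (h * (1.0 - g_sum))
             for g, h, fi in zip(g_region, h_region, f)]
    return f
```

Each pass caps at least one new region, so the loop runs at most k times.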
Algorithm 3 Solving the general problem
Input: Gdis – array containing the discretized key density of each segment
Input: Hdis – array containing the discretized non-key density of each segment
Input: F – target overall false positive rate
Input: k – number of regions
Output: t – array of threshold boundaries of each region
Output: f – array of false positive rates of each region
Uses: ThresMaxDivDP, CalcDensity, OptimalFPR, SpaceUsed (as in Alg.2)
1: procedure SOLVE(Gdis, Hdis, F, k)
2:   MinSpaceUsed ← ∞                  ▷ minimum space used so far
3:   index ← −1                        ▷ index corresponding to the minimum space used
4:   Glast ← 0                         ▷ key density of the current last region
5:   Hlast ← 0                         ▷ non-key density of the current last region
6:   for i in k − 1, k, ..., N − 1 do  ▷ iterate over possibilities for the last region
7:     Glast ← Σ_{j=i}^{N} Gdis[j]     ▷ key density of the last region
8:     Hlast ← Σ_{j=i}^{N} Hdis[j]
9:     t ← ThresMaxDivDP(Gdis[1..(i−1)], Hdis[1..(i−1)], k − 1)   ▷ optimal thresholds for the rest of the array
10:    t.append(i)
11:    G′, H′ ← CalcDensity(Gdis, Hdis, t)
12:    f ← OptimalFPR(G′, H′, F, k)    ▷ optimal false positive rates for this configuration
13:    if SpaceUsed(Gdis, Hdis, t, f) < MinSpaceUsed then
14:      MinSpaceUsed ← SpaceUsed(Gdis, Hdis, t, f); index ← i    ▷ remember the best configuration so far
15:  t ← ThresMaxDivDP(Gdis[1..(index−1)], Hdis[1..(index−1)], k − 1)
16:  t.append(index)
17:  G′, H′ ← CalcDensity(Gdis, Hdis, t)
18:  f ← OptimalFPR(G′, H′, F, k)
19:  return t, f

(Note: the comparison on line 13 keeps the configuration with the smaller space; the extracted pseudocode had the inequality reversed.)

C ALGORITHM FOR FINDING THRESHOLDS

We provide the pseudocode for the algorithm that finds the solution for the general problem; Alg.3 finds the thresholds and false positive rates. As described in the main text, this algorithm provides the optimal parameter values if (G(ti) − G(ti−1))/(H(ti) − H(ti−1)) is monotonically increasing. The idea is that only the false positive rate of the rightmost region can be one. The algorithm receives discretized key and non-key densities. It first iterates over all possibilities for the rightmost region. For each iteration, it finds the thresholds that maximize the KL divergence over the rest of the array, for which a dynamic programming algorithm exists. After calculating these thresholds, it finds the optimal false positive rate for each region using Alg.1. After calculating the thresholds and false positive rates, the algorithm calculates the total space used by the backup Bloom filters in the PLBF. It then remembers the index for which the space used was minimal. The ti's and fi's corresponding to this index are then used to build the backup Bloom filters. The worst-case time complexity is then O(N³k).

D ADDITIONAL CONSIDERATIONS

D.1 SANDWICHING: A SPECIAL CASE

We show here that the sandwiching approach can actually be interpreted as a special case of our method. In the sandwiching approach, the learned model is sandwiched between two Bloom filters, as shown in Fig.3(A). The input first goes through a Bloom filter and the negatives are discarded. The positives are passed through the learned model, where, based on their score s(x), they are either directly accepted when s(x) > t or passed through another backup Bloom filter when s(x) ≤ t. In our setting, we note that the pre-filter in the sandwiching approach can be merged with the backup filters to yield backup filters with a modified false positive rate. Fig.3(B) shows what an equivalent design with modified false positive rates would look like. (Here equivalence means we obtain the same false positive rate with the same bit budget; we do not consider compute time.) Thus, we see that the sandwiching approach can be viewed as a special case of the PLBF with two regions.

However, this also tells us we can make the PLBF more efficient by using sandwiching. Specifically, if we find when constructing a PLBF with k regions that fi < 1 for all i, we may assign f0 = max_{1≤i≤k} fi. We may then use an initial Bloom filter with false positive rate f0, and change the target false positive rates for all other intervals to fi/f0, while keeping the same bit budget. This approach will be somewhat more efficient computationally, as we avoid computing the learned model for some fraction of non-key elements.
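A sketch of the conversion just described, assuming a PLBF solution with all fi < 1; the function name is ours.

```python
def to_sandwich(f_regions):
    """Convert per-region FPRs into an initial filter rate f0 plus adjusted
    per-region rates f_i / f0; end-to-end rates (and the bit budget) are preserved."""
    f0 = max(f_regions)
    assert f0 < 1.0, "conversion assumes every region uses a backup filter"
    return f0, [fi / f0 for fi in f_regions]

f0, adjusted = to_sandwich([0.002, 0.01, 0.05, 0.4])
# The initial filter rejects most non-keys before the model is ever evaluated;
# each element's end-to-end rate f0 * (f_i / f0) = f_i is unchanged.
```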
D.2 PERFORMANCE AGAINST NUMBER OF REGIONS k

Earlier, we saw that the maximum space saved by using PLBF instead of a normal Bloom filter is linearly proportional to D_KL(ĝ(t), ĥ(t)). If we split any region into two regions, the overall divergence does not decrease, because the sum of the divergences of the two split regions is always at least the original divergence, as shown in Eq.19, an application of Jensen's inequality:

$$(g_1 + g_2) \times \log\left(\frac{g_1 + g_2}{h_1 + h_2}\right) \leq \left(g_1 \times \log\frac{g_1}{h_1}\right) + \left(g_2 \times \log\frac{g_2}{h_2}\right) \quad (19)$$

Increasing the number of regions therefore never decreases, and generally improves, the maximum performance. We would hope that in practice a small number of regions k suffices. This seems to be the case in our experience; we detail one such experiment in our evaluation (Section 4.2).

D.3 PERFORMANCE USING VARIOUS BLOOM FILTER VARIANTS

We consider how the space saved by the PLBF varies with the type of backup Bloom filter being used. The PLBF can use any Bloom filter variant as the backup Bloom filter. When we compare our performance with a Bloom filter variant, we use that same Bloom filter variant as the backup Bloom filter for a fair comparison. First, the absolute space one can save by using a PLBF instead of a Bloom filter variant is given in Eq.10. This quantity increases with increasing c.⁶

⁶ The sizes of standard Bloom filter variants are proportional to |S| × log2(1/f), where S is the set the filter represents and f is the false positive rate it achieves. See e.g. Mitzenmacher (2018) for related discussion. The constant c depends on which type of Bloom filter is used as a backup; for example, c = log2(e) for the standard Bloom filter and c = 1.0 for the optimal Bloom filter.

The relative space one saves by using PLBF instead of the given Bloom filter variant is the ratio of the space saved by PLBF (Eq.10) to the space used by the given Bloom filter variant (c × |S| × log2(1/F)), as shown in Eq.20:

$$\frac{c \times |S| \times D_{KL}\left(\hat{g}(t), \hat{h}(t)\right) - \text{Size of Learned Model}}{c \times |S| \times \log_2(1/F)} \quad (20)$$

Cancelling the common terms, we obtain Eq.21:

$$\frac{D_{KL}\left(\hat{g}(t), \hat{h}(t)\right)}{\log_2(1/F)} - \frac{\text{Size of Learned Model}}{c \times |S| \times \log_2(1/F)} \quad (21)$$

The relative space saved, like the absolute space saved, also increases with increasing c. Thus, both the relative and absolute space saved by the PLBF are higher for a standard Bloom filter (c = 1.44) than for an optimal Bloom filter (c = 1.00), and hence our experiments in Section 4.1 are conservative estimates of the gains possible in practice using PLBF.

E ADDITIONAL EXPERIMENTS

E.1 PERFORMANCE W.R.T. STANDARD BLOOM FILTERS

Earlier, we evaluated our performance using optimal Bloom filters; here we present results using standard Bloom filters. As shown in Appendix.D.3, PLBF performs better w.r.t. standard Bloom filters than w.r.t. optimal Bloom filters. As one can see from Fig.4, PLBF performs better than the standard Bloom filter.

E.2 PERFORMANCE AND MODEL QUALITY

Here we provide an experiment showing how the performance of the various methods varies with the quality of the model. As discussed earlier, a good model has a high skew of the distributions g and h towards extreme values. We therefore vary the skew parameter of the Zipfian distribution to simulate the model quality. We measure the quality of the model using the standard F1 score. Fig.5(B) shows the space used by the various methods to achieve a fixed false positive rate of 0.001 as we vary the F1 score of the model.
The figure shows that as the model quality in terms of the F1 score increases, the space required by all the methods decreases (except for the optimal Bloom filter, which does not use a model). The space used by all the methods goes to zero as the F1 score goes to 1, since for the synthetic dataset there is no space cost for the model. The data point corresponding to an F1 score of 0.99 was used to plot Fig.2(A).

E.3 DISCRETIZATION EFFECT ON DYNAMIC PROGRAMMING RUNTIME AND PLBF SIZE

All the runtime experiments in this subsection and the next were measured on a 2.8GHz quad-core Intel Core i7 CPU with 16GB of memory. We use the bloom-filter Python package [bloom filter] for our backup Bloom filters. The dynamic programming algorithms are implemented in Python. Here we provide an experiment showing how the dynamic programming (DP) algorithm runtime (pseudocode in Alg.3) and the PLBF size vary with the level of discretization (N). In the tables below, we report the DP algorithm runtime and the space taken by the PLBF to achieve an approximate empirical false positive rate of 0.001 for various N. As discussed in Sec.3.3.2, with an increasing value of N one gets closer to the optimal parameters, at the cost of higher computation time. This trend is demonstrated in the table below for the URLs and EMBER datasets. We note that if runtime is an issue, the increase in size from using a smaller N is relatively small.

E.4 CONSTRUCTION TIME FOR VARIOUS BASELINES

Here we look at the construction time breakdown for the PLBF and various alternatives, with the goal of seeing the cost, in terms of construction time, of using the more highly tuned PLBF. The construction time of all the learned Bloom filters includes the model training time and the parameter estimation time, which are not required for the standard Bloom filter construction process. Since we use the same model for all learned baselines, the model construction time is the same for all of them. In Fig.6, we plot the construction time breakdown for the various baselines to achieve an approximate empirical false positive rate of 0.001. Recall that the AdaBF and sandwiching approaches use heuristics to estimate their parameters, and unsurprisingly they are therefore somewhat faster than PLBF. However, for N = 100 the parameter estimation time is smaller than the key insertion time and the model training time. The parameter estimation time for PLBF varies with the level of discretization used for the DP algorithm. The PLBF with N = 1000 takes the longest to execute, while the standard Bloom filter is the fastest baseline. As shown in Table 1 above, using N = 1000 gives only a slight improvement in size. We therefore believe that if construction time is an issue, as in situations where one might want to re-learn and change the filter as the data changes, one can choose parameters for PLBF construction that still yield significant benefits over previous approaches.
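A minimal way to reproduce the runtime trend of this subsection, timing the optimal_thresholds sketch from Section 3.3.2 over increasing N on synthetic densities; absolute times will of course differ by machine.

```python
import time
import numpy as np

for N in (100, 250, 500, 1000):
    g = np.random.default_rng(0).random(N); g /= g.sum()
    h = np.random.default_rng(1).random(N); h /= h.sum()
    start = time.perf_counter()
    optimal_thresholds(list(g), list(h), k=5)  # DP sketch from Section 3.3.2
    print(f"N={N:5d}  runtime={time.perf_counter() - start:.3f}s  (expect O(N^2 k) growth)")
```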
Review
A clear exposition of the problem and proposed solution. The paper's key strength is the formulation of the partitioned Bloom filter as an optimization problem that generalizes previously proposed architectures and prescribes an interpretable solution for the choice of the optimal partition thresholds in terms of the properties of the given learned model (specifically, its threshold-dependent false-negative and true-negative curves). Furthermore, the experimental section clearly illustrates a significant advantage of the proposed method over state-of-the-art alternatives. The theoretical treatment, however, may be considered a first attempt at combining the essential properties of a learned model and Bloom-filter design. The paper could be improved by considering and motivating the use of a particular lower bound for the space cost of a Bloom filter, and by incorporating into the optimization formulation the relation between the size of the learned model and the quality of its false positive and false negative curves. The latter, especially, makes a significant practical difference, since the model may be part of the design; assuming away the size-quality tradeoff results in a sub-optimal scheme and renders the overall proposed solution still a heuristic.

Update (Nov 30th): In light of the authors' responses and the other reviews, I increase my score for this paper to 7: Good paper, accept.
A general design that resolves all the drawbacks would, given a target false positive rate and the learned model, partition the score space into multiple regions with separate backup Bloom filters for each region, and find the optimal threshold values and false positive rates, under the goal of minimizing the memory usage while achieving the desired false positive rate as shown in Fig.1(C). In this work, we show how to frame this problem as an optimization problem, and show that our resulting solution significantly outperforms the heuristics used in previous works. Additionally, we show that our maximum space saving1 is linearly proportional to the KL divergence of the key and non-key score distributions determined by the partitions. We present a dynamic programming algorithm to find the optimal parameters (up to the discretization used for the dynamic programming) and demonstrate performance improvements over a synthetic dataset and two real world datasets: URLs and EMBER. We also show that the performance of the learned Bloom filter improves with increasing number of partitions and that in practice a small number of regions (≈ 4− 6) suffices to get a very good performance. We refer to our approach as a partitioned learned Bloom filter (PLBF). Experimental results from both simulated and real-world datasets show significant performance improvements. We show that to achieve a false positive rate of 0.001, [Mitzenmacher (2018)] uses 1space saved by using our approach instead of a Bloom filter 8.8x, 3.3x and 1.2x the amount of space and [Dai & Shrivastava (2019)] uses 6x, 2.5x and 1.1x the amount of space compared to PLBF for synthetic, URLs and EMBER respectively. 2 BACKGROUND 2.1 STANDARD BLOOM FILTERS AND RELATED VARIANTS A standard Bloom filter, as described in Bloom’s original paper [Bloom (1970)], is for a set S = {x1, x2, ..., xn} of n keys. It consists of an array of m bits and uses k independent hash functions {h1, h2, ...hk} with the range of each hi being integer values between 0 and m− 1. We assume the hash functions are fully random. Initially all m bits are 0. For every key x ∈ S, array bits hi(x) are set to 1 for all i ∈ {1, 2, ...k}. A membership query for y returns that y ∈ S if hi(y) = 1 for all i ∈ {1, 2, ...k} and y 6∈ S otherwise. This ensures that the Bloom filter has no false negatives but non-keys y might result in a false positive. This false positive rate depends on the space m used by the Bloom Filter. Asymptotically (for large m,n with m/n held constant), the false positive rate is given by( 1− ( 1− 1 m )kn)k . (1) See [Broder & Mitzenmacher (2003); Bose et al. (2008)] for further details. [Bloom (1970)] proved a space lower bound of |S| × log2( 1F ) for a Bloom filter with false positive rate F . The standard construction uses space that is asymptotically log2 e(≈ 1.44) times more than the lower bound. Other constructions exist, such as Cuckoo filters[Fan et al. (2014)], Morton filters[Breslow & Jayasena (2018)], XOR filters[Graf & Lemire (2020)] and Vacuum filters[Wang et al. (2019)]. These variants achieve slightly better space performance compared to standard Bloom filters but still are a constant factor larger than the lower bound. [Pagh et al. (2005)] presents a Bloom filter design that achieves this space lower bound, but it appears too complicated to use in practice. 2.2 LEARNED BLOOM FILTER Learned Bloom filters make use of learned models to beat the theoretical space bounds. 
Given a learned model that can distinguish between keys and non-keys, learned Bloom filters use it as a pre-filter before using backup Bloom filters. The backup Bloom filters can be any variant including the standard, cuckoo, XOR filters, etc. If the size of the model is sufficiently small, learned models can be used to enhance the performance of any Bloom filter variant. We provide the framework for learned Bloom filters. We are given a set of keys S = {x1, x2, .., xn} from a universe U for which to build a Bloom filter. We are also given a sample of the non-keys Q which is representative of the set U − S. Features that can help in determining if an element is a member of S are determined. The learned model is then trained on features of set S ∪Q for a binary classification task and produces a score s(x) ∈ [0, 1]. This score s(x) can be viewed (intuitively, not formally) as the confidence of the model that the element x is in the set S. So, a key in S would ideally have a higher score value than the non-keys. An assumption in this framework is that the training sample distribution needs to match or be close to the test distribution of non-keys; the importance of this assumptions has been discussed at length in [Mitzenmacher (2018)]. For many applications, past workloads or historical data can be used to get an appropriate non-key sample. As discussed above, [Kraska et al. (2018)] set a threshold t and inputs satisfying s(x) > t are classified as a key. A backup Bloom filter is built for just the keys in S satisfying s(x) ≤ t. This design is represented in Fig.1(A). [Mitzenmacher (2018)] proposes using another Bloom filter before the learned model along with a backup Bloom Filter. As the learned model is used between two Bloom filters as shown in Fig.1(B), this is referred to as the ’sandwiching’ approach. They also provide the analysis of the optimal false positive rates for a given amount of memory for the two Bloom filters (given the false negative rate and false positive rate for the learned model, and the corresponding threshold). Interestingly, the sandwiching approach and analysis can be seen as a special case of our approach and analysis, as we describe later in Appendix.D.1. [Dai & Shrivastava (2019)] use multiple thresholds to partition the score space into multiple regions and use a backup Bloom filter for each score region. They propose heuristics for how to divide up the score range and choose false positive rate per region. 3 PARTITIONED LEARNED BLOOM FILTER (PLBF) 3.1 DESIGN As discussed before, the general design segments the score space into multiple regions using multiple thresholds, as shown in Fig.1(C), and uses separate backup Bloom filters for each region. We can choose different target false positive rates for each region2. The parameters associated with each region are its threshold boundaries and its false positive rate. Setting good values for these parameters is crucial for performance. Our aim is to analyze the performance of the learned Bloom filter with respect to these parameters, and find methods to determine optimal or near-optimal parameters. The following notation will be important for our analysis. Let G(t) be the fraction of keys with scores falling below t. We note that since the key set is finite, G(t) goes through discrete jumps. But it is helpful (particularly in our pictures) to think of G(t) as being a continuous function, corresponding to a cumulative probability distribution, with a corresponding “density” function g(t). 
For non keys, we assume that queries involving non-keys come from some distribution D, and we define H(t) to be probability that a non-key query from D has a score less than or equal to t. Note that non key query distribution might be different from non key distribution. If non key queries are chosen uniformly at random, non key query distribution would be the same as non key distribution. We assume that H(t) is known in the theoretical analysis below. In practice, we expect a good approximation of H(t) will be used, determined by taking samples from D or a suitably good approximation, which may be based on, for example, historical data (discussed in detail in [Mitzenmacher (2018)]). Here H(t) can be viewed as a cumulative distribution function, and again in our pictures we think of it as having a density h(t). Also, note that if queries for non-keys are simply chosen uniformly at random, then H(t) is just the fraction of non-keys with scores below t. While our analysis holds generally, the example of H(t) being the fraction of non-keys with scores below t may be easier to keep in mind. Visualization of the original learned Bloom filter in terms of these distributions is shown in Fig.1(D). As we describe further below, for our partitioned learned Bloom filter, we use multiple thresholds and a separate backup Bloom filter for each region, as show in Fig.1(E). In what follows, we formulate the problem of choosing thresholds and backup Bloom filter false positive rates (or equivalently, sizes) as an optimization problem in section 3.2. In section 3.3.1, we find the optimal solution of a relaxed problem which helps us gain some insight into the general problem. We then propose an approximate solution for the general problem in section 3.3.3. 2The different false positive rates per region can be achieved in multiple ways. Either by choosing a separate Bloom filter per region or by having a common Bloom filter with varying number of hash functions per region. We find in our formulation that the resulting parameters correspond to quite natural quantities in terms of G and H . Specifically, the optimal false positive rate of a region is proportional to the ratio of the fraction of keys to the fraction of non-keys in that region. If we think of these region-based fractions for keys and non-keys as probability distributions, the maximum space saving obtained is proportional to the KL divergence between these distributions. Hence we can optimize the thresholds by choosing them to maximize this divergence. We show that we can find thresholds to maximize this divergence, approximately, through dynamic programming. We also show that, naturally, this KL divergence increases with more number of regions and so does the performance. In our experiments, we find a small number(≈ 4− 6) of partitions suffices to get good performance. 3.2 GENERAL OPTIMIZATION FORMULATION To formulate the overall problem as an optimization problem, we consider the variant which minimizes the space used by the Bloom filters in PLBF in order to achieve an overall a target false positive rate (F ). We could have similarly framed it as minimizing the false positive rate given a fixed amount of space. Here we are assuming the learned model is given. We assume normalized score values in [0, 1] for convenience. We have region boundaries given by ti values 0 = t0 ≤ t1 ≤ ....tk−1 ≤ tk = 1, with score values between [ti−1, ti] falling into the ith region. We assume the target number of regions k is given. 
We denote the false positive rate for the Bloom filter in the ith region by fi. We let G and H be defined as above. As state previously, Fig.1(E) corresponds to this setting, and the following optimization problem finds the optimal thresholds ti and the false positive rates fi: min ti,fi (∑k i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 fi )) + Size of Learned Model (2) constraints ∑k i=1 (H (ti)−H(ti−1))× fi ≤ F (3) fi ≤ 1 , i = 1...k (4) (ti − ti−1) ≥ 0 , i = 1...k ; t0 = 0; tk = 1 (5) The minimized term (Eq.2) represents the total size of the learned Bloom filter, the size of backup Bloom filters is obtained by summing the individual backup Bloom filter sizes. The constant c in the equation depends on which variant of the Bloom filter is used as the backup3; as it happens, its value will not affect the optimization. The first constraint (Eq.3) ensures that the overall false positive rate stays below the target F . The overall false positive rate is obtained by summing the appropriately weighted rates of each region. The next constraint (Eq.4) encodes the constraint that false positive rate for each region is at most 1. The last set of constraints (Eq.5) ensure threshold values are increasing and cover the interval [0, 1]. 3.3 SOLVING THE OPTIMIZATION PROBLEM 3.3.1 SOLVING A RELAXED PROBLEM If we remove the false positive rate constraints (Eq.4, giving fi ≤ 1), we obtain a relaxed problem shown in Eq.6. This relaxation is useful because it allows us to use the Karush-Kuhn-Tucker (KKT) conditions to obtain optimal fi values in terms of the ti values, which we used to design algorithms for finding near-optimal solutions. Throughout this section, we assume the the relaxed problem yields a solution for the original problem; we return to this issue in subsection 3.3.3. min ti=1...k−1,fi=1...k (∑k i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 fi )) + Size of Learned Model constraints ∑k i=1 (H(ti)−H(ti−1))× fi ≤ F ; (ti − ti−1) ≥ 0 , i = 1...k; t0 = 0; tk = 1 (6) 3The sizes of Bloom filter variants are proportional to |S| × log2(1/f), where S is the set it represents, and f is the false positive rate it achieves. See e.g. [Mitzenmacher (2018)] for related discussion. The constant c depends on which type of Bloom filter is used as a backup. For example, c = log2(e) for standard Bloom filter. The optimal fi values obtained by using the KKT conditions yield Eq.7 (as derived in Appendix.A), giving the exact solution in terms of ti’s. fi = F G(ti)−G(ti−1) H(ti)−H(ti−1) (7) The numerator G(ti) − G(ti−1) is the fraction of keys in the ith region and the denominator H(ti) −H(ti−1) is the probability of a non-key query being in the ith region. In intuitive terms, the false positive rate for a region is proportional to the ratio of the key density (fraction of keys) to non-key density (fraction of non-key queries). Since we have found the optimal fi in terms of the ti, we can replace the fi in the original problem to obtain a problem only in terms of the ti. In what follows, we use ˆg(t) to represent the discrete distribution given by the k values of G(ti)−G(ti−1) for i = 1, . . . , k, and similarly we use ˆh(t) for the distribution corresponding to the H(ti)−H(ti−1) values. Eq.8 shows the rearrangement of the minimization term(excluding model size) after substitution. Min. 
Term = k∑ i=1 |S| × (G(ti)−G(ti−1))× c log2 ( H(ti)−H(ti−1) (G(ti)−G(ti−1))× F ) = k∑ i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 F ) − c× |S| ×DKL ( ˆg(t), ˆh(t) ) (8) where DKL is the standard KL divergence for the distributions given by ˆg(t) and ˆh(t). Eq.8 represents the space occupied by the backup Bloom filters; the total space includes this and the space occupied by the learned model. c× ( |S| × log2 ( 1 F ) − |S| ×DKL ( ˆg(t), ˆh(t) )) + Size Of Learned Model (9) The space occupied by the Bloom filter without the learned model is c× |S| × log2(1/F ). Thus, the space saved by PLBF in comparison to the normal Bloom filter is: c× ( |S| ×DKL ( ˆg(t), ˆh(t) )) − Size Of Learned Model (10) The space saved by PLBF is therefore linearly proportional to the KL divergence of key and non-key distributions of the regions given by ˆg(t) and ˆh(t) of the regions. This derivation suggests that the KL divergence might also be used as a loss function to improve the model quality. We have tested this empirically, but thus far have not seen significant improvements over the MSE loss we use in our experiments; this remains an interesting issue for future work. 3.3.2 FINDING THE OPTIMAL THRESHOLDS FOR RELAXED PROBLEM We have shown that, given a set of thresholds, we can find the optimal false positive rates for the relaxed problem. Here we turn to the question of finding optimal thresholds. We assume again that we are given k, the number of regions desired. (We consider the importance of choosing k further in our experimental section.) Given our results above, the optimal thresholds correspond to the points that maximize the KL divergence between ( ˆg(t), ˆh(t)). The KL divergence of ( ˆg(t), ˆh(t)) is the sum of the terms gi log2 gi hi , one term per region. (Here gi = G(ti)−G(ti−1) and hi = H(ti)−H(ti−1).) Note that each term depends only on the proportion of keys and non-keys in that region and is otherwise independent of the other regions. This property allows a recursive definition of KL divergence that is suitable for dynamic programming. We divide the score space [0, 1] into N consecutive small segments for a chosen value of N ; this provides us a discretization of the score space, with larger N more closely approximating the real interval. Given k, we can find a set of k approximately optimal thresholds using dynamic programming, where the solution is approximate due to our discretization of the score space. Let DPKL(n, j) denote the maximum divergence one can get when you divide the first n segments into j regions. Our approximately optimal divergence corresponds to DPKL(N, k). The idea behind the algorithm is that the we can recursively define DPKL(n, j) as represented in Eq.11. Here g′, h′ represent the fraction of keys and the fraction of non-key queries, respectively, in these N segments. DPKL (n, j) = max ( DPKL(n− i, j − 1) + ( n∑ r=i g′(r)× log2 (∑n r=i g ′(r)∑n r=i h ′(r) ))) (11) The time complexity of computing DPKL(N, k) is O(N2k). One can increase the value of N to get more precision in the discretization when finding thresholds, at the cost of higher computation time. 3.3.3 THE RELAXED PROBLEM AND THE GENERAL PROBLEM We can find a near-optimal solution to the relaxed problem by first, obtaining the threshold values that maximize the divergence and then, getting the optimal fi values using Eq.7. In many cases, the optimal relaxed solution will also be the optimal general solution, specifically if F × (G(ti−1) − G(ti))/(H(ti−1)−H(ti)) < 1 for all i. 
3.3.3 THE RELAXED PROBLEM AND THE GENERAL PROBLEM

We can find a near-optimal solution to the relaxed problem by first obtaining the threshold values that maximize the divergence and then getting the optimal f_i values using Eq.7. In many cases, the optimal relaxed solution will also be the optimal general solution, specifically if F × (G(t_i) − G(t_{i−1}))/(H(t_i) − H(t_{i−1})) < 1 for all i. Hence, if we are aiming for a sufficiently low false positive rate F, solving the relaxed problem suffices.

To solve the general problem, we need to deal with regions where f_i ≥ 1, but we can use the relaxed problem as a subroutine. First, given a fixed set of t_i values for the general problem, we have an algorithm (Alg.1, as discussed in Appendix.B) to find the optimal f_i's. Briefly summarized, we solve the relaxed problem; for regions with f_i > 1, the algorithm sets f_i = 1, then re-solves the relaxed problem with these additional constraints, and does this iteratively until no region with f_i > 1 remains. The problem is that we do not have the optimal set of t_i values to begin with; as such, we use the optimal t_i values for the relaxed solution as described in Section 3.3.2. This yields a solution to the general problem (pseudo-code in Alg.2), but we emphasize that it is not optimal in general, since we did not start with the optimal t_i. We still expect that it will perform very well in most cases.

In practice, we observe that keys are more concentrated on higher scores, and non-key queries are more concentrated on lower scores. Given this property, if a region with f_i = 1 (no backup Bloom filter used) exists in the optimal solution of the general problem, it will most probably be the rightmost region. In particular, if (G(t_i) − G(t_{i−1}))/(H(t_i) − H(t_{i−1})) is increasing as t_{i−1}, t_i increase (that is, the ratio of the fraction of keys to the fraction of non-key queries over regions is increasing), then indeed without loss of generality the last (kth) region will be the only one with f_k = 1. (We say only one region because any two consecutive regions with f_i = 1 can be merged and an extra region can be made in the remaining space, which is strictly better, as adding an extra region always helps, as shown in Appendix.D.2.) It is reasonable to believe that in practice this ratio will be increasing or nearly so. Hence, if we make the assumption that in the optimal solution all the regions except the last satisfy the f_i < 1 constraint, then if we identify the optimal last region's boundary, we can remove the f_i ≤ 1 constraints for i ≠ k and apply the DP algorithm to find near-optimal t_i's. To identify the optimal last region's boundary, we simply try all possible boundaries for the kth region (details discussed in Appendix.C; a sketch follows below). As this involves assumptions on the behavior of G and H, we emphasize again that this will not guarantee finding the optimal solution. But when the conditions are met it will lead to a near-optimal solution (only near-optimal due to the discretization of the dynamic program).
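The enumeration is short to express in code. Below is a brief sketch of ours; it assumes thres_max_div_dp from the snippet above and optimal_fprs, the iterative rate-capping of Alg.1 (a sketch of which appears in Appendix B below):

import math

def solve_general(g, h, F, k, optimal_fprs, c=1.0):
    """Try every left boundary for the k-th region (the Appendix C idea)."""
    N = len(g)
    best = (float("inf"), None, None)
    for i in range(k - 1, N):  # segments i+1..N (1-based) form the last region
        _, bounds = thres_max_div_dp(g[:i], h[:i], k - 1)
        bounds.append(N)
        gr = [sum(g[a:b]) for a, b in zip(bounds, bounds[1:])]  # region densities
        hr = [sum(h[a:b]) for a, b in zip(bounds, bounds[1:])]
        fprs = optimal_fprs(gr, hr, F)
        # backup-filter bits per key: regions with f_i = 1 need no filter
        used = sum(gi * c * math.log2(1 / fi) for gi, fi in zip(gr, fprs) if fi < 1)
        if used < best[0]:
            best = (used, bounds, fprs)
    return best[1], best[2]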
4 EVALUATION

We compare PLBF against the theoretically optimal Bloom filter [Bloom (1970)]⁴, the sandwiching approach [Mitzenmacher (2018)], and AdaBF [Dai & Shrivastava (2019)]. Comparisons against standard Bloom filters⁵ appear in Appendix.E.1. We excluded the original learned Bloom filter [Kraska et al. (2018)] as the sandwiching approach was strictly better. We include the size of the learned model with the size of the learned Bloom filter. To ensure a fair comparison, we used the optimal Bloom filter as the backup Bloom filter for all learned variants. We use 3 different datasets:

URLs: As in previous papers [Kraska et al. (2018), Dai & Shrivastava (2019)], we used the URL data set, which contains 103,520 (23%) malicious and 346,646 (77%) benign URLs. We used 17 features from these URLs, such as host name length, use of shortening, counts of special characters, etc.

⁴For the space of a theoretically optimal Bloom filter, we take the standard Bloom filter of the same false positive rate and divide the space it uses by log₂ e, as obtaining near-optimality in practice is difficult. This uses the fact that the standard Bloom filter asymptotically uses log₂ e times more space than the optimal, as discussed in Sec.2.1.

⁵PLBF performs better against standard Bloom filters, as discussed in Appendix.D.3, so the results in Section 4.1 are conservative estimates of the gains possible in practice using a PLBF.

EMBER: Bloom filters are widely used to match file signatures with the virus signature database. Ember (Endgame Malware Benchmark for Research) [Anderson & Roth (2018)] is an open source collection of 1.1M sha256 file hashes that were scanned by VirusTotal in 2017. Out of the 1.1 million files, 400K are malicious, 400K are benign, and we ignore the remaining 300K unlabeled files. The features of the files are already included in the dataset.

Synthetic: An appealing scenario for our method is when the key density increases and the non-key density decreases monotonically with respect to the score value. We simulate this by generating the key and non-key score distributions using Zipfian distributions as in Fig.2(A). Since we directly work on the score distribution, the size of the learned model for this synthetic dataset is zero.

4.1 OVERALL PERFORMANCE

Here, we compare the performance of PLBF against the other baselines by fixing the target F and measuring the space used by each method. We use PLBF Alg.3 with the DP algorithm discretization (N) set to 1000. We train the model on the entire key set and 40% of the non-key set. The thresholds and backup Bloom filters are then tuned using this model with the aim of achieving the fixed target F. The rest of the non-keys are used to evaluate the actual false positive rate. While any model can be used, we choose the random forest classifier from sklearn [Pedregosa et al.] for its good accuracy. The F1 scores of the learned models used for synthetic, URLs, and EMBER were 0.99, 0.97, and 0.85, respectively. We consider the size of the model to be the pickle file size on disk (a standard way of serializing objects in Python). We use five regions (k = 5) for both PLBF and AdaBF, as this is usually enough to achieve good performance, as discussed in Section 4.2. Using a higher k would only improve our performance.

The results of the experiment are shown in Fig.2(A-C) along with the distribution of the scores of keys and non-keys for each dataset. As we can see from the figure, PLBF has a better Pareto curve than the other baselines for all the datasets. On the synthetic and URLs datasets we observe significantly better performance. In contrast, for the EMBER dataset our performance is only slightly better, indicating that the model here is not as helpful. The difference between the space used by PLBF and the optimal Bloom filter first increases with decreasing false positive rate but converges to a constant value for all datasets, as given in Eq.10. For the same amount of space used (400Kb, 500Kb, and 3000Kb for synthetic, URLs, and EMBER, respectively), PLBF achieves 22x, 26x, and 3x smaller false positive rates than the sandwiching approach, and 8.5x, 9x, and 1.9x smaller false positive rates than AdaBF for synthetic, URLs, and EMBER, respectively.
To achieve a false positive rate of 0.001, the sandwiching approach uses 8.8x, 3.3x, and 1.2x the amount of space, and AdaBF uses 6x, 2.5x, and 1.1x the amount of space, compared to PLBF for the synthetic, URLs, and EMBER datasets, respectively.

4.2 PERFORMANCE AND THE NUMBER OF REGIONS

The maximum space savings obtained by using PLBF is linearly proportional to the KL divergence of the distributions (Eq.10), and this KL divergence strictly increases with the number of regions (Appendix.D.2). Fig.2(D-F) shows the space saved w.r.t. the optimal Bloom filter as we increase the number of regions k for a target false positive rate of 0.001. The red line in the figure shows the savings when using 25 regions; using more regions provides no noticeable improvement on this data. Our results suggest that using 4-6 regions should be sufficient to obtain reasonable performance. We have additional experiments in Appendix.E that show PLBF performance against standard Bloom filters and PLBF performance w.r.t. model quality.

5 CONCLUSION

Our analysis of the partitioned learned Bloom filter provides a formal framework for improving on learned Bloom filter performance that provides substantially better performance than previous heuristics. As Bloom filters are used across thousands of applications, we hope the PLBF may find many uses where the data set is amenable to a learned model.

Acknowledgments This research was supported in part by NSF grants CCF-1563710 and CCF-1535795, and by a gift to the Center for Research on Computation and Society at Harvard University.

A SOLVING THE RELAXED PROBLEM USING KKT CONDITIONS

As mentioned in the main text, if we relax the constraint of f_i ≤ 1, using the stationary KKT conditions we can obtain the optimal f_i values. Here we show this derivation. The appropriate Lagrangian is given in Eq.12. In this case, the KKT conditions tell us that the optimal solution is a stationary point of the Lagrangian. Therefore, we find where the derivative of the Lagrangian with respect to f_i is zero.

$$L(t_i, f_i, \lambda, \nu_i) = \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right) + \lambda \times \left( \left( \sum_{i=1}^{k} (H(t_i) - H(t_{i-1})) \times f_i \right) - F \right) + \sum_{i=1}^{k} \nu_i \times (t_{i-1} - t_i) \quad (12)$$

$$\frac{\partial L(t_i, f_i, \lambda, \nu_i)}{\partial f_i} = 0 \quad (13)$$

$$\frac{\partial}{\partial f_i} \left[ (G(t_i) - G(t_{i-1})) \, c \log_2\left(\frac{1}{f_i}\right) \right] = -\lambda \, \frac{\partial}{\partial f_i} \left[ (H(t_i) - H(t_{i-1})) \times f_i \right] \quad (14)$$

$$f_i = \frac{c \times (G(t_i) - G(t_{i-1}))}{\ln(2) \times \lambda \times (H(t_i) - H(t_{i-1}))} \quad (15)$$

$$\lambda = \frac{c \times \sum_{i=1}^{k} (G(t_i) - G(t_{i-1}))}{\ln(2) \times F} = \frac{c}{F \ln 2} \quad (16)$$

$$f_i = F \times \frac{G(t_i) - G(t_{i-1})}{H(t_i) - H(t_{i-1})} \quad (17)$$

Eq.15 expresses f_i in terms of λ. Summing Eq.15 over all i and using the constraint relating the f_i to F (Eq.3 with equality) gives Eq.16; here we use the fact that the G(t_i) − G(t_{i−1}) terms sum to 1. Substituting λ back into Eq.15, the optimal f_i values turn out to be as given in Eq.17.

Algorithm 1 Finding optimal false positive rates given thresholds
Input: G′ - the array containing the key density of each region
Input: H′ - the array containing the non-key density of each region
Input: F - target overall false positive rate
Input: k - number of regions
Output: f - the array of false positive rates of each region
1: procedure OPTIMALFPR(G′, H′, F, k)
2:   Gsum ← 0   ▷ sum of key density of regions with f[i] = 1
3:   Hsum ← 0   ▷ sum of non-key density of regions with f[i] = 1
4:   for i in 1, 2, ..., k do
5:     f[i] ← (G′[i] · F) / H′[i]   ▷ assign the relaxed-problem solution (Eq.7)
6:   while some f[i] > 1 do
7:     for i in 1, 2, ..., k do
8:       if f[i] > 1 then f[i] ← 1   ▷ cap the false positive rate of the region at one
9:     Gsum ← 0
10:    Hsum ← 0
11:    for i in 1, 2, ..., k do
12:      if f[i] = 1 then Gsum ← Gsum + G′[i]; Hsum ← Hsum + H′[i]   ▷ key/non-key density in regions with no Bloom filter (f[i] = 1)
13:    for i in 1, 2, ..., k do
14:      if f[i] < 1 then f[i] ← (G′[i] · (F − Hsum)) / (H′[i] · (1 − Gsum))   ▷ modify the f[i] of the remaining regions so the overall false positive rate is F
15:  return f
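In Python, Alg.1 reduces to a short loop. A sketch of our own (0-based indexing, and assuming the target F is achievable):

import numpy as np

def optimal_fprs(g_regions, h_regions, F):
    """Alg.1: optimal per-region false positive rates for fixed thresholds."""
    g = np.asarray(g_regions, dtype=float)
    h = np.asarray(h_regions, dtype=float)
    f = g * F / h                      # relaxed solution, Eq.7
    while np.any(f > 1):
        f[f > 1] = 1.0                 # cap offending regions at one
        capped = f >= 1.0
        G_sum, H_sum = g[capped].sum(), h[capped].sum()
        # re-solve the relaxed problem over the uncapped regions (line 14)
        f[~capped] = g[~capped] * (F - H_sum) / (h[~capped] * (1.0 - G_sum))
    return f

# Example: the last region is key-dense enough that its relaxed rate exceeds 1.
print(optimal_fprs([0.1, 0.3, 0.6], [0.7, 0.25, 0.05], F=0.1))
# -> approximately [0.018, 0.15, 1.0]; the weighted rates still sum to F = 0.1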
Algorithm 2 Using the relaxed solution for the general problem
Input: Gdis - the array containing the discretized key density of each segment
Input: Hdis - the array containing the discretized non-key density of each segment
Input: F - target overall false positive rate
Input: k - number of regions
Output: t - the array of threshold boundaries of each region
Output: f - the array of false positive rates of each region
Algorithm ThresMaxDivDP - DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Algorithm CalcDensity - returns the region densities given the thresholds of the regions
Algorithm OptimalFPR - returns the optimal false positive rates of the regions given the thresholds
1: procedure SOLVE(Gdis, Hdis, F, k)
2:   t ← ThresMaxDivDP(Gdis, Hdis, k)   ▷ get the optimal thresholds for the relaxed problem
3:   G′, H′ ← CalcDensity(Gdis, Hdis, t)
4:   f ← OptimalFPR(G′, H′, F, k)   ▷ obtain the optimal false positive rates of the general problem for the given thresholds
5:   return t, f

B OPTIMAL FALSE POSITIVE RATE FOR GIVEN THRESHOLDS

We provide the pseudocode for the algorithm that finds the optimal false positive rates when the threshold values are provided. The corresponding optimization problem is given in Eq.18. As the boundaries of the regions are already defined, one only needs to find the optimal false positive rate for the backup Bloom filter of each region.

$$\min_{f_1 \ldots f_k} \; \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \times c \log_2\left(\frac{1}{f_i}\right)$$
$$\text{subject to} \quad \sum_{i=1}^{k} (H(t_i) - H(t_{i-1})) \times f_i = F; \qquad f_i \le 1, \; i = 1 \ldots k \quad (18)$$

Alg.1 gives the pseudocode. We first assign false positive rates based on the relaxed problem, but we may find that f_i ≥ 1 for some regions. For such regions, we can set f_i = 1, re-solve the relaxed problem with these additional constraints (that is, excluding these regions), and use the result as a solution for the general problem. Some regions might again have a false positive rate above one, so we can repeat the process. The algorithm stops when there is no new region with a false positive rate greater than one. This algorithm finds the optimal false positive rates for the regions when the thresholds are fixed.
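Combined with the dynamic program of Section 3.3.2, this yields Alg.2; in code, the composition is short (our illustration, reusing the sketches above):

def solve_relaxed_then_cap(g_dis, h_dis, F, k):
    """Alg.2: DP thresholds for the relaxed problem, then Alg.1 rates."""
    _, bounds = thres_max_div_dp(g_dis, h_dis, k)               # ThresMaxDivDP
    gr = [sum(g_dis[a:b]) for a, b in zip(bounds, bounds[1:])]  # CalcDensity
    hr = [sum(h_dis[a:b]) for a, b in zip(bounds, bounds[1:])]
    return bounds, optimal_fprs(gr, hr, F)                      # OptimalFPR

# Toy run: N = 10 segments with key mass concentrated at high scores, k = 3.
g_dis = [0.01] * 5 + [0.19] * 5
h_dis = [0.19] * 5 + [0.01] * 5
print(solve_relaxed_then_cap(g_dis, h_dis, F=0.01, k=3))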
Algorithm 3 Solving the general problem
Input: Gdis - the array containing the discretized key density of each segment
Input: Hdis - the array containing the discretized non-key density of each segment
Input: F - target overall false positive rate
Input: k - number of regions
Output: t - the array of threshold boundaries of each region
Output: f - the array of false positive rates of each region
Algorithm ThresMaxDivDP - DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Algorithm CalcDensity - returns the region densities given the thresholds of the regions
Algorithm OptimalFPR - returns the optimal false positive rates of the regions given the thresholds
Algorithm SpaceUsed - returns the space used by the backup Bloom filters given the thresholds and the false positive rate per region
1: procedure SOLVE(Gdis, Hdis, F, k)
2:   MinSpaceUsed ← ∞   ▷ stores the minimum space used so far
3:   index ← −1   ▷ stores the index corresponding to the minimum space used
4:   Glast ← 0   ▷ key density of the current last region
5:   Hlast ← 0   ▷ non-key density of the current last region
6:   for i in k − 1, k, ..., N − 1 do   ▷ iterate over the possibilities for the last region
7:     Glast ← Σ_{j=i..N} Gdis[j]   ▷ calculate the key density of the last region
8:     Hlast ← Σ_{j=i..N} Hdis[j]
9:     t ← ThresMaxDivDP(G[1..(i − 1)], H[1..(i − 1)], k − 1)   ▷ find the optimal thresholds for the rest of the array
10:    t.append(i)
11:    G′, H′ ← CalcDensity(Gdis, Hdis, t)
12:    f ← OptimalFPR(G′, H′, F, k)   ▷ find the optimal false positive rates for the current configuration
13:    if SpaceUsed(Gdis, Hdis, t, f) < MinSpaceUsed then
14:      MinSpaceUsed ← SpaceUsed(Gdis, Hdis, t, f); index ← i   ▷ remember the best configuration so far
15:  Glast ← Σ_{j=index..N} Gdis[j]
16:  Hlast ← Σ_{j=index..N} Hdis[j]
17:  t ← ThresMaxDivDP(G[1..(index − 1)], H[1..(index − 1)], k − 1)
18:  t.append(index)
19:  G′, H′ ← CalcDensity(Gdis, Hdis, t)
20:  f ← OptimalFPR(G′, H′, F, k)
21:  return t, f

C ALGORITHM FOR FINDING THRESHOLDS

We provide the pseudocode for the algorithm that finds the solution to the general problem; Alg.3 finds the thresholds and false positive rates. As we have described in the main text, this algorithm provides the optimal parameter values if (G(t_i) − G(t_{i−1}))/(H(t_i) − H(t_{i−1})) is monotonically increasing; the idea is that then only the false positive rate of the rightmost region can be one. The algorithm receives the discretized key and non-key densities. It first iterates over all the possibilities for the rightmost region. For each iteration, it finds the thresholds that maximize the KL divergence for the rest of the array, for which a dynamic programming algorithm exists. After calculating these thresholds, it finds the optimal false positive rate for each region using Alg.1. After calculating the thresholds and false positive rates, the algorithm calculates the total space used by the backup Bloom filters in PLBF. It then remembers the index for which the space used was minimal. The t_i's and f_i's corresponding to this index are then used to build the backup Bloom filters. The worst-case time complexity is then O(N³k).

D ADDITIONAL CONSIDERATIONS

D.1 SANDWICHING: A SPECIAL CASE

We show here that the sandwiching approach can actually be interpreted as a special case of our method. In the sandwiching approach, the learned model is sandwiched between two Bloom filters, as shown in Fig.3(A). The input first goes through a Bloom filter and the negatives are discarded. The positives are passed through the learned model where, based on their score s(x), they are either directly accepted when s(x) > t or passed through another backup Bloom filter when s(x) ≤ t. In our setting, we note that the pre-filter in the sandwiching approach can be merged with the backup filters to yield backup filters with a modified false positive rate. Fig.3(B) shows what an equivalent design with modified false positive rates would look like. (Here equivalence means we obtain the same false positive rate with the same bit budget; we do not consider compute time.) Thus, we see that the sandwiching approach can be viewed as a special case of the PLBF with two regions.

However, this also tells us we can make the PLBF more efficient by using sandwiching. Specifically, if we find when constructing a PLBF with k regions that f_i < 1 for all i, we may assign f_0 = max_{1≤i≤k} f_i. We may then use an initial Bloom filter with false positive rate f_0, and change the target false positive rates for all other intervals to f_i/f_0, while keeping the same bit budget. This approach will be somewhat more efficient computationally, as we avoid computing the learned model for some fraction of non-key elements.
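This conversion is simple arithmetic; the sketch below (ours) shows the bookkeeping and why the bit budget is unchanged:

def to_sandwich(fprs):
    """Rewrite a PLBF with all f_i < 1 into sandwiched form (this appendix).

    Returns (f0, adjusted): f0 is the pre-filter's false positive rate and
    adjusted[i] the new target rate for region i's backup filter. The bit
    budget is preserved: log2(1/f0) + log2(1/(f_i/f0)) = log2(1/f_i).
    """
    f0 = max(fprs)
    return f0, [fi / f0 for fi in fprs]

print(to_sandwich([0.001, 0.04, 0.5]))
# -> (0.5, [0.002, 0.08, 1.0]); the max-rate region now needs no backup filter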
D.2 PERFORMANCE AGAINST NUMBER OF REGIONS k

Earlier, we saw that the maximum space saved by using PLBF instead of a normal Bloom filter is linearly proportional to D_KL(ĝ(t), ĥ(t)). If we split any region into two regions, the overall divergence would increase, because the sum of the divergences of the two split regions is always at least the original divergence, as shown in Eq.19. Eq.19 is an application of Jensen's inequality (the log-sum inequality).

$$(g_1 + g_2) \times \log_2 \frac{g_1 + g_2}{h_1 + h_2} \le \left( g_1 \times \log_2 \frac{g_1}{h_1} \right) + \left( g_2 \times \log_2 \frac{g_2}{h_2} \right) \quad (19)$$

Increasing the number of regions therefore always improves the maximum performance. We would hope that in practice a small number of regions k would suffice. This seems to be the case in our experience; we detail one such experiment in our evaluation (Section 4.2).

D.3 PERFORMANCE USING VARIOUS BLOOM FILTER VARIANTS

We consider how the space saved by the PLBF varies with the type of backup Bloom filter being used. The PLBF can use any Bloom filter variant as the backup Bloom filter. When we compare our performance with a Bloom filter variant, we use that same Bloom filter variant as the backup Bloom filter for a fair comparison. First, the absolute space one can save by using a PLBF instead of a Bloom filter variant is given in Eq.10. This quantity increases with increasing c⁶.

⁶The sizes of standard Bloom filter variants are proportional to |S| × log₂(1/f), where S is the set it represents and f is the false positive rate it achieves. See e.g. Mitzenmacher (2018) for related discussion. The constant c depends on which type of Bloom filter is used as a backup. For example, c = log₂(e) for a standard Bloom filter and c = 1.0 for the optimal Bloom filter.

The relative space one saves by using PLBF instead of the given Bloom filter variant is shown in Eq.20. This quantity is the ratio of the space saved by PLBF (as shown in Eq.10) to the space used by the given Bloom filter variant (c × |S| × log₂(1/F)).

$$\frac{c \times |S| \times D_{KL}\left(\hat{g}(t), \hat{h}(t)\right) - \text{Size Of Learned Model}}{c \times |S| \times \log_2(1/F)} \quad (20)$$

Cancelling the common terms, we obtain Eq.21.

$$\frac{D_{KL}\left(\hat{g}(t), \hat{h}(t)\right)}{\log_2(1/F)} - \frac{\text{Size Of Learned Model}}{c \times |S| \times \log_2(1/F)} \quad (21)$$

The relative space saved, like the absolute space saved, also increases with increasing c. Thus, both the relative and absolute space saved by the PLBF are higher for a standard Bloom filter (c = log₂ e ≈ 1.44) than for an optimal Bloom filter (c = 1.00), and hence our experiments in Section 4.1 are conservative estimates of the gains possible in practice using PLBF.
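As a quick numeric illustration of Eq.10 and Eq.21 (all numbers below are hypothetical, chosen only to show the effect of c):

import math

def plbf_savings(c, n_keys, dkl, model_bits, F):
    """Absolute (Eq.10) and relative (Eq.21) space saved by PLBF."""
    absolute = c * n_keys * dkl - model_bits
    relative = absolute / (c * n_keys * math.log2(1.0 / F))
    return absolute, relative

# 1M keys, D_KL = 2 bits, a 500 Kb model, target F = 0.001
for c, name in [(math.log2(math.e), "standard"), (1.0, "optimal")]:
    a, r = plbf_savings(c, 10**6, 2.0, 5 * 10**5, 0.001)
    print(f"{name:8s} backup: saves {a / 8e6:.2f} MB ({100 * r:.0f}% of the filter)")
# -> standard saves about 0.30 MB (17%), optimal about 0.19 MB (15%)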
E ADDITIONAL EXPERIMENTS

E.1 PERFORMANCE W.R.T. STANDARD BLOOM FILTERS

Earlier, we evaluated our performance using optimal Bloom filters; here we present results using standard Bloom filters. As shown in Appendix.D.3, PLBF performs better w.r.t. standard Bloom filters than optimal Bloom filters. As one can see from Fig.4, PLBF performs better than the standard Bloom filter.

E.2 PERFORMANCE AND MODEL QUALITY

Here we provide an experiment to see how the performance of the various methods varies with the quality of the model. As discussed earlier, a good model will have a high skew of the distributions g and h towards extreme values. We therefore vary the skew parameter of the Zipfian distribution to simulate the model quality. We measure the quality of the model using the standard F1 score. Fig.5(B) shows the space used by the various methods to achieve a fixed false positive rate of 0.001 as we vary the F1 score of the model. The figure shows that as the model quality in terms of the F1 score increases, the space required by all the methods decreases (except for the optimal Bloom filter, which does not use a model). The space used by all the methods goes to zero as the F1 score goes to 1, as for the synthetic dataset there is no space cost for the model. The data point corresponding to an F1 score of 0.99 was used to plot Fig.2(A).

E.3 DISCRETIZATION EFFECT ON DYNAMIC PROGRAMMING RUNTIME AND PLBF SIZE

All the runtime experiments in this subsection and the next were measured using a quad-core Intel Core i7 CPU @ 2.80GHz with 16GB of memory. We use the bloom-filter Python package [bloom filter] for our backup Bloom filters. The dynamic programming algorithms are implemented in Python. Here we provide an experiment to see how the dynamic programming (DP) algorithm runtime (pseudocode in Alg.3) and the PLBF size vary with the level of discretization (N). In the tables below, we report the DP algorithm runtime and the space taken by the PLBF to achieve an approximate empirical false positive rate of 0.001 for various N. As discussed in Sec.3.3.2, with an increasing value of N one gets closer to the optimal parameters, at the cost of higher computation time. This trend is demonstrated in the table below for the URLs and EMBER datasets. We note that if runtime is an issue, the increase in size from using a smaller N is relatively small.

E.4 CONSTRUCTION TIME FOR VARIOUS BASELINES

Here we look at the construction time breakdown for the PLBF and various alternatives, with the goal of seeing the cost, in terms of construction time, of using the more highly tuned PLBF. The construction time of all the learned Bloom filters includes the model training time and the parameter estimation time, which are not required for the standard Bloom filter construction process. Since we use the same model for all learned baselines, the model construction time is the same for all of them. In Fig.6, we plot the construction time breakdown for the various baselines needed to achieve an approximate empirical false positive rate of 0.001. Recall that the AdaBF and sandwiching approaches use heuristics to estimate their parameters, and unsurprisingly they therefore seem somewhat faster than PLBF. However, for N = 100 we see that the parameter estimation time is smaller than the key insertion time and the model training time. The parameter estimation time for PLBF varies with the level of discretization we use for the DP algorithm. The PLBF with N = 1000 takes the longest to execute, while the standard Bloom filter is the fastest baseline. As shown in Table 1 above, using N = 1000 gives only a slight improvement in size. We therefore believe that if construction time is an issue, as in situations where one might want to re-learn and change the filter as the data changes, one can choose parameters for PLBF construction that still yield significant benefits over previous approaches.
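For completeness, wiring the chosen thresholds and per-region rates to actual backup filters with the bloom-filter package used above might look like the following sketch (ours, not the authors' code; the score function s and the key set are placeholders, and we assume every score falls at or below the final threshold):

from bloom_filter import BloomFilter

def build_backups(keys, s, thresholds, fprs):
    """Build one backup Bloom filter per region with f_i < 1."""
    region_of = lambda x: next(j for j, t in enumerate(thresholds[1:]) if s(x) <= t)
    members = [[] for _ in fprs]
    for x in keys:
        members[region_of(x)].append(x)
    backups = {}
    for i, (keys_i, f) in enumerate(zip(members, fprs)):
        if f < 1:  # regions with f_i = 1 directly accept and need no filter
            bf = BloomFilter(max_elements=max(len(keys_i), 1), error_rate=f)
            for x in keys_i:
                bf.add(x)
            backups[i] = bf
    return backups

def contains(x, s, thresholds, backups):
    i = next(j for j, t in enumerate(thresholds[1:]) if s(x) <= t)
    return True if i not in backups else x in backups[i]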
1. What is the focus of the paper regarding the sandwiched Bloom filter model?
2. What are the strengths of the proposed approach, particularly in terms of its ability to maintain multiple score partitions?
3. What are the weaknesses of the paper, especially regarding its clarity and experimental results?
4. Do you have any concerns about the optimization objective and trading off the learned model power for larger regions and backup Bloom filters?
5. How does the reviewer assess the impact of using different models for each filter, and how does this relate to the size of the pickle file?
6. Are there any questions regarding the variance caused by observing particular distributions G and H, and how does this affect the confidence intervals for each curve?
7. Would you like to see an analysis of how the behavior of all considered models changes as the sample size changes?
8. Should the authors cite "Meta-learning Neural Bloom Filters" (Rae et al., 2019) in their work?
Review
Review

The paper proposes a generalization of the sandwiched Bloom filter model that maintains a set of score partitions instead of just two, and an algorithm for optimizing the parameters of the partition under the target false-positive rate. The authors evaluate the partitioned Bloom filter on three datasets and demonstrate that it delivers better false positive rates for a given size compared to the baselines. I find the paper quite innovative and the experimental results impressive.

My main concern is regarding the paper's clarity. How can a region's FPR f_i be greater than 1? Perhaps this is something very obvious, but I couldn't get this immediately, and perhaps some other readers might struggle here too.

The learned model size appears in the optimization objective, but is considered given. I wonder what would change if we allowed trading off the learned model power for a larger number of regions and larger backup Bloom filters in each region? Does it even make sense to ask such a question? Would the results change if a different model were used? I think it is possible that, for a given variant of a learned Bloom filter, a different model may result in different values of the optimal parameters and different performance; thus these should ideally be optimized independently for each of the baselines. I also think that the size of the pickle file is arguably not the best estimate of the learned model size if indeed a different model is used for each of the filters; e.g., a neural network might admit a decent compression rate if a lower-precision number format is used, etc. Thus, it is important to separate the impact made by a learned model from an algorithmic improvement.

Is there any variance caused by observing particular distributions G and H? Is it small enough to ignore, or might the confidence intervals for each of the curves actually overlap? I would also be interested in understanding the behaviour of all considered models as the sample size changes. I also feel the authors could cite "Meta-learning Neural Bloom Filters" (Rae et al., 2019), as it considers a relevant (although also different) setting.

Nevertheless, I think the kind of analysis presented in the paper is very useful for the community and for further development of learned data structures. I recommend acceptance, and I will gladly raise my rather conservative score if the authors can clarify the points mentioned above.
ICLR
We denote the false positive rate for the Bloom filter in the ith region by fi. We let G and H be defined as above. As state previously, Fig.1(E) corresponds to this setting, and the following optimization problem finds the optimal thresholds ti and the false positive rates fi: min ti,fi (∑k i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 fi )) + Size of Learned Model (2) constraints ∑k i=1 (H (ti)−H(ti−1))× fi ≤ F (3) fi ≤ 1 , i = 1...k (4) (ti − ti−1) ≥ 0 , i = 1...k ; t0 = 0; tk = 1 (5) The minimized term (Eq.2) represents the total size of the learned Bloom filter, the size of backup Bloom filters is obtained by summing the individual backup Bloom filter sizes. The constant c in the equation depends on which variant of the Bloom filter is used as the backup3; as it happens, its value will not affect the optimization. The first constraint (Eq.3) ensures that the overall false positive rate stays below the target F . The overall false positive rate is obtained by summing the appropriately weighted rates of each region. The next constraint (Eq.4) encodes the constraint that false positive rate for each region is at most 1. The last set of constraints (Eq.5) ensure threshold values are increasing and cover the interval [0, 1]. 3.3 SOLVING THE OPTIMIZATION PROBLEM 3.3.1 SOLVING A RELAXED PROBLEM If we remove the false positive rate constraints (Eq.4, giving fi ≤ 1), we obtain a relaxed problem shown in Eq.6. This relaxation is useful because it allows us to use the Karush-Kuhn-Tucker (KKT) conditions to obtain optimal fi values in terms of the ti values, which we used to design algorithms for finding near-optimal solutions. Throughout this section, we assume the the relaxed problem yields a solution for the original problem; we return to this issue in subsection 3.3.3. min ti=1...k−1,fi=1...k (∑k i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 fi )) + Size of Learned Model constraints ∑k i=1 (H(ti)−H(ti−1))× fi ≤ F ; (ti − ti−1) ≥ 0 , i = 1...k; t0 = 0; tk = 1 (6) 3The sizes of Bloom filter variants are proportional to |S| × log2(1/f), where S is the set it represents, and f is the false positive rate it achieves. See e.g. [Mitzenmacher (2018)] for related discussion. The constant c depends on which type of Bloom filter is used as a backup. For example, c = log2(e) for standard Bloom filter. The optimal fi values obtained by using the KKT conditions yield Eq.7 (as derived in Appendix.A), giving the exact solution in terms of ti’s. fi = F G(ti)−G(ti−1) H(ti)−H(ti−1) (7) The numerator G(ti) − G(ti−1) is the fraction of keys in the ith region and the denominator H(ti) −H(ti−1) is the probability of a non-key query being in the ith region. In intuitive terms, the false positive rate for a region is proportional to the ratio of the key density (fraction of keys) to non-key density (fraction of non-key queries). Since we have found the optimal fi in terms of the ti, we can replace the fi in the original problem to obtain a problem only in terms of the ti. In what follows, we use ˆg(t) to represent the discrete distribution given by the k values of G(ti)−G(ti−1) for i = 1, . . . , k, and similarly we use ˆh(t) for the distribution corresponding to the H(ti)−H(ti−1) values. Eq.8 shows the rearrangement of the minimization term(excluding model size) after substitution. Min. 
Term = k∑ i=1 |S| × (G(ti)−G(ti−1))× c log2 ( H(ti)−H(ti−1) (G(ti)−G(ti−1))× F ) = k∑ i=1 |S| × (G(ti)−G(ti−1))× c log2 ( 1 F ) − c× |S| ×DKL ( ˆg(t), ˆh(t) ) (8) where DKL is the standard KL divergence for the distributions given by ˆg(t) and ˆh(t). Eq.8 represents the space occupied by the backup Bloom filters; the total space includes this and the space occupied by the learned model. c× ( |S| × log2 ( 1 F ) − |S| ×DKL ( ˆg(t), ˆh(t) )) + Size Of Learned Model (9) The space occupied by the Bloom filter without the learned model is c× |S| × log2(1/F ). Thus, the space saved by PLBF in comparison to the normal Bloom filter is: c× ( |S| ×DKL ( ˆg(t), ˆh(t) )) − Size Of Learned Model (10) The space saved by PLBF is therefore linearly proportional to the KL divergence of key and non-key distributions of the regions given by ˆg(t) and ˆh(t) of the regions. This derivation suggests that the KL divergence might also be used as a loss function to improve the model quality. We have tested this empirically, but thus far have not seen significant improvements over the MSE loss we use in our experiments; this remains an interesting issue for future work. 3.3.2 FINDING THE OPTIMAL THRESHOLDS FOR RELAXED PROBLEM We have shown that, given a set of thresholds, we can find the optimal false positive rates for the relaxed problem. Here we turn to the question of finding optimal thresholds. We assume again that we are given k, the number of regions desired. (We consider the importance of choosing k further in our experimental section.) Given our results above, the optimal thresholds correspond to the points that maximize the KL divergence between ( ˆg(t), ˆh(t)). The KL divergence of ( ˆg(t), ˆh(t)) is the sum of the terms gi log2 gi hi , one term per region. (Here gi = G(ti)−G(ti−1) and hi = H(ti)−H(ti−1).) Note that each term depends only on the proportion of keys and non-keys in that region and is otherwise independent of the other regions. This property allows a recursive definition of KL divergence that is suitable for dynamic programming. We divide the score space [0, 1] into N consecutive small segments for a chosen value of N ; this provides us a discretization of the score space, with larger N more closely approximating the real interval. Given k, we can find a set of k approximately optimal thresholds using dynamic programming, where the solution is approximate due to our discretization of the score space. Let DPKL(n, j) denote the maximum divergence one can get when you divide the first n segments into j regions. Our approximately optimal divergence corresponds to DPKL(N, k). The idea behind the algorithm is that the we can recursively define DPKL(n, j) as represented in Eq.11. Here g′, h′ represent the fraction of keys and the fraction of non-key queries, respectively, in these N segments. DPKL (n, j) = max ( DPKL(n− i, j − 1) + ( n∑ r=i g′(r)× log2 (∑n r=i g ′(r)∑n r=i h ′(r) ))) (11) The time complexity of computing DPKL(N, k) is O(N2k). One can increase the value of N to get more precision in the discretization when finding thresholds, at the cost of higher computation time. 3.3.3 THE RELAXED PROBLEM AND THE GENERAL PROBLEM We can find a near-optimal solution to the relaxed problem by first, obtaining the threshold values that maximize the divergence and then, getting the optimal fi values using Eq.7. In many cases, the optimal relaxed solution will also be the optimal general solution, specifically if F × (G(ti−1) − G(ti))/(H(ti−1)−H(ti)) < 1 for all i. 
Hence, if we are aiming for a sufficiently low false positive rate F , solving the relaxed problem suffices. To solve the general problem, we need to deal with regions where fi ≥ 1, but we can use the relaxed problem as a subroutine. First, given a fixed set of ti values for the general problem, we have an algorithm (Alg.1, as discussed in Appendix.B) to find the optimal fi’s. Briefly summarized, we solve the relaxed problem, and for regions with fi > 1, the algorithm sets fi = 1, and then re-solves the relaxed problem with these additional constraints, and does this iteratively until no region with fi > 1 remains. The problem is that we do not have the optimal set of ti values to begin; as such, we use the optimal ti values for the relaxed solution as described in Section 3.3.2. This yields a solution to the general problem (psuedo-code in Alg.2), but we emphasize that it is not optimal in general, since we did not start with the optimal ti. We expect still that it will perform very well in most cases. In practice, we observe that keys are more concentrated on higher scores, and non-key queries are more concentrated on lower scores. Given this property, if a region with fi = 1 (no backup Bloom filter used) exists in the optimal solution of the general problem, it will most probably be the rightmost region. In particular, if (G(ti−1)−G(ti))/(H(ti−1)−H(ti)) is increasing as ti−1, ti increase – that is, the ratio of the fraction of keys to the fraction of non-key queries over regions is increasing – then indeed without loss of generality the last (kth) region will be the only one with fk = 1. (We say only one region because any two consecutive regions with fi = 1 can be merged and an extra region can be made in the remaining space which is strictly better, as adding an extra region always helps as shown in Appendix.D.2.) It is reasonable to believe that in practice this ratio will be increasing or nearly so. Hence if we make the assumption that in the optimal solution all the regions except the last satisfy the fi < 1 constraint, then if we identify the optimal last region’s boundary, we can remove the fi ≤ 1 constraints for i 6= k and apply the DP algorithm to find near optimal ti’s. To identify the optimal last region’s boundary, we simply try all possible boundaries for the kth region (details discussed in Appendix.C). As it involves assumptions on the behavior of G and H , we emphasize again that this will not guarantee finding the optimal solution. But when the conditions are met it will lead to a near-optimal solution (only near-optimal due to the discretization of the dynamic program). 4 EVALUATION We compare PLBF against the theoretically optimal Bloom filter [Bloom (1970)]4, the sandwiching approach [Mitzenmacher (2018)], and AdaBF [Dai & Shrivastava (2019)]. Comparisons against standard Bloom filters5 appear in Appendix.E.1. We excluded the original learned Bloom filter [Kraska et al. (2018)] as the sandwiching approach was strictly better. We include the size of the learned model with the size of the learned Bloom filter. To ensure a fair comparison, we used the optimal Bloom filter as the backup bloom filter for all learned variants. We use 3 different datasets: URLs: As in previous papers [Kraska et al. (2018), Dai & Shrivastava (2019)], we used the URL data set, which contains 103520 (23%) malicious and 346646 (77%) are benign URLs. We used 17 features from these URL’s such as host name length, use of shortening, counts of special characters,etc. 
4For the space of a theoretically optimal Bloom filter, we take the standard Bloom filter of same false positive rate and divide it’s space used by log2 e, as obtaining near-optimality in practice is difficult. This uses the fact that the standard Bloom filter is asymptotically log2 e times suboptimal than the optimal as discussed in Sec.2.1. 5PLBF performs better against standard Bloom filters, as discussed in Appendix.D.3. Section 4.1 are conservative estimates of gains possible in practice using a PLBF. EMBER: Bloom filters are widely used to match file signatures with the virus signature database. Ember (Endgame Malware Benchmark for Research) [Anderson & Roth (2018)] is an open source collection of 1.1M sha256 file hashes that were scanned by VirusTotal in 2017. Out of the 1.1 million files, 400K are malicious, 400K are benign, and we ignore the remaining 300K unlabeled files. The features of the files are already included in the dataset. Synthetic: An appealing scenario for our method is when the key density increases and non-key density decreases monotonically with respect to the score value. We simulate this by generating the key and non-key score distribution using Zipfian distributions as in Fig.2(A). Since we directly work on the score distribution, the size of the learned model for this synthetic dataset is zero. 4.1 OVERALL PERFORMANCE Here, we compare the performance of PLBF against other baselines by fixing the target F and measuring the space used by each methods. We use PLBF Alg.3 with DP algorithm discretization(N ) set to 1000. We train the model on the entire key set and 40% of the non-key set. The thresholds and backup Bloom filters are then tuned using this model with the aim of achieving the fixed target F . The rest of the non-keys are used to evaluate the actual false positive rate. While any model can be used, we choose the random forest classifier from sklearn [Pedregosa et al.] for its good accuracy. The F1 scores of the learned models used for synthetic, URLs and EMBER were 0.99, 0.97, and 0.85, respectively. We consider the size of the model to be the pickle file size on the disk (a standard way of serializing objects in Python). We use five regions (k = 5) for both PLBF and AdaBF as this is usually enough to achieve good performance as discussed in 4.2. Using higher k would only improve our performance. The results of the experiment are shown in the Fig.2(A-C) along with the distribution of the scores of keys and non-keys for each dataset. As we can see from the figure, PLBF has a better Pareto curve than the other baselines for all the datasets. On the synthetic dataset and URLs dataset we observe a significantly better performance. In contrast, for the EMBER dataset our performance is only slightly better indicating that the model here is not as helpful. The difference between space used by PLBF and optimal Bloom filter first increases with decreasing false positive rate but converges to a constant value for all datasets, as given in Eq.10. For the same amount of space used(400Kb,500Kb,3000Kb space for synthetic,URLs,EMBER, respectively), PLBF achieves 22x, 26x, and 3x smaller false positive rates than the sandwiching approach, and 8.5x, 9x, and 1.9x smaller false positive rates than AdaBF for synthetic, URLs, and EMBER, respectively. 
The results of the experiment are shown in Fig.2(A-C) along with the distribution of the scores of keys and non-keys for each dataset. As we can see from the figure, PLBF has a better Pareto curve than the other baselines for all the datasets. On the synthetic and URLs datasets we observe significantly better performance. In contrast, for the EMBER dataset our performance is only slightly better, indicating that the model here is not as helpful. The difference between the space used by PLBF and the optimal Bloom filter first increases with decreasing false positive rate but converges to a constant value for all datasets, as given in Eq.10. For the same amount of space used (400Kb, 500Kb, and 3000Kb for synthetic, URLs, and EMBER, respectively), PLBF achieves 22x, 26x, and 3x smaller false positive rates than the sandwiching approach, and 8.5x, 9x, and 1.9x smaller false positive rates than AdaBF for synthetic, URLs, and EMBER, respectively. To achieve a false positive rate of 0.001, the sandwiching approach uses 8.8x, 3.3x, and 1.2x the amount of space and AdaBF uses 6x, 2.5x, and 1.1x the amount of space compared to PLBF for the synthetic, URLs, and EMBER datasets, respectively. 4.2 PERFORMANCE AND THE NUMBER OF REGIONS The maximum space savings obtained by using PLBF is linearly proportional to the KL divergence of the distributions (Eq.10), and this KL divergence strictly increases with the number of regions (Appendix.D.2). Fig.2(D-F) shows the space saved w.r.t. the optimal Bloom filter as we increase the number of regions k for a target false positive rate of 0.001. The red line in the figure shows the savings when using 25 regions; using more regions provides no noticeable improvement on this data. Our results suggest using 4-6 regions should be sufficient to obtain reasonable performance. We have additional experiments in Appendix.E that show PLBF's performance against standard Bloom filters and its performance w.r.t. model quality. 5 CONCLUSION Our analysis of the partitioned learned Bloom filter provides a formal framework for improving learned Bloom filter performance, yielding substantially better performance than previous heuristics. As Bloom filters are used across thousands of applications, we hope the PLBF may find many uses where the data set is amenable to a learned model. Acknowledgments This research was supported in part by NSF grants CCF-1563710 and CCF-1535795, and by a gift to the Center for Research on Computation and Society at Harvard University. A SOLVING THE RELAXED PROBLEM USING KKT CONDITIONS As mentioned in the main text, if we relax the constraint fi ≤ 1, we can obtain the optimal fi values using the stationary KKT conditions. Here we show this derivation. The appropriate Lagrangian is given in Eq.12. In this case, the KKT conditions tell us that the optimal solution is a stationary point of the Lagrangian. Therefore, we find where the derivative of the Lagrangian with respect to fi is zero.

$$\mathcal{L}(t_i, f_i, \lambda, \nu_i) = \sum_{i=1}^{k} (G(t_i)-G(t_{i-1}))\, c \log_2\!\left(\frac{1}{f_i}\right) + \lambda \left( \sum_{i=1}^{k} (H(t_i)-H(t_{i-1}))\, f_i - F \right) + \sum_{i=1}^{k} \nu_i\, (t_{i-1} - t_i) \tag{12}$$

$$\frac{\partial \mathcal{L}(t_i, f_i, \lambda, \nu_i)}{\partial f_i} = 0 \tag{13}$$

$$\frac{\partial\, (G(t_i)-G(t_{i-1}))\, c \log_2(1/f_i)}{\partial f_i} = -\lambda\, \frac{\partial\, (H(t_i)-H(t_{i-1}))\, f_i}{\partial f_i} \tag{14}$$

$$f_i = \frac{c\, (G(t_i)-G(t_{i-1}))}{\ln(2)\, \lambda\, (H(t_i)-H(t_{i-1}))} \tag{15}$$

$$\lambda = \frac{c}{F \ln(2)} \sum_{i=1}^{k} (G(t_i)-G(t_{i-1})) = \frac{c}{F \ln 2} \tag{16}$$

$$f_i = \frac{(G(t_i)-G(t_{i-1})) \cdot F}{H(t_i)-H(t_{i-1})} \tag{17}$$

Eq.15 expresses fi in terms of λ. Multiplying Eq.15 by H(ti) − H(ti−1), summing over all i, and using the constraint that these terms sum to F gives Eq.16. Thus the optimal fi values turn out to be as given in Eq.17.
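As a quick sanity check of Eq.17, consider a hypothetical two-region instance (not taken from our datasets). Take key masses $G(t_1)-G(t_0) = 0.2$ and $G(t_2)-G(t_1) = 0.8$, non-key masses $H(t_1)-H(t_0) = 0.9$ and $H(t_2)-H(t_1) = 0.1$, and budget $F = 0.05$. Eq.17 gives $f_1 = 0.05 \cdot 0.2 / 0.9 \approx 0.011$ and $f_2 = 0.05 \cdot 0.8 / 0.1 = 0.4$, and indeed $0.9 \cdot 0.011 + 0.1 \cdot 0.4 \approx 0.05 = F$. Raising the budget to $F = 0.2$ instead gives $f_2 = 1.6 > 1$, which is exactly the case Alg.1 below handles: it caps $f_2 = 1$ and re-solves for the remaining regions, yielding $f_1 = 0.2 \cdot (0.2 - 0.1) / (0.9 \cdot (1 - 0.8)) \approx 0.111$.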
Algorithm 1 Finding optimal fprs given thresholds
Input G′ – the array containing the key density of each region
Input H′ – the array containing the non-key density of each region
Input F – target overall false positive rate
Input k – number of regions
Output f – the array of false positive rates of the regions
1: procedure OPTIMALFPR(G′, H′, F, k)
2:     Gsum ← 0 ▷ sum of key density over regions with f[i] = 1
3:     Hsum ← 0 ▷ sum of non-key density over regions with f[i] = 1
4:     for i in 1, 2, ..., k do
5:         f[i] ← G′[i] · F / H′[i] ▷ assign the relaxed-problem solution (Eq.17)
6:     while some f[i] > 1 do
7:         for i in 1, 2, ..., k do
8:             if f[i] > 1 then f[i] ← 1 ▷ cap the false positive rate of the region at one
9:         Gsum ← 0
10:        Hsum ← 0
11:        for i in 1, 2, ..., k do
12:            if f[i] = 1 then Gsum ← Gsum + G′[i]; Hsum ← Hsum + H′[i] ▷ key/non-key density in regions with no Bloom filter (f[i] = 1)
13:        for i in 1, 2, ..., k do
14:            if f[i] < 1 then f[i] ← G′[i] · (F − Hsum) / (H′[i] · (1 − Gsum)) ▷ rescale the remaining f[i] so the overall false positive rate is F
15:    return the array f

Algorithm 2 Using the relaxed solution for the general problem
Input Gdis – the array containing the discretized key density of each region
Input Hdis – the array containing the discretized non-key density of each region
Input F – target overall false positive rate
Input k – number of regions
Output t – the array of threshold boundaries of the regions
Output f – the array of false positive rates of the regions
Algorithm ThresMaxDivDP – DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Algorithm CalcDensity – returns the region densities given the thresholds of the regions
Algorithm OptimalFPR – returns the optimal false positive rates of the regions given the thresholds
Algorithm SpaceUsed – returns the space used by the backup Bloom filters given the thresholds and per-region false positive rates
1: procedure SOLVE(Gdis, Hdis, F, k)
2:     t ← ThresMaxDivDP(Gdis, Hdis, k) ▷ get the optimal thresholds for the relaxed problem
3:     G′, H′ ← CalcDensity(Gdis, Hdis, t)
4:     f ← OptimalFPR(G′, H′, F, k) ▷ obtain the optimal false positive rates of the general problem for the given thresholds
5:     return t, f

B OPTIMAL FALSE POSITIVE RATE FOR GIVEN THRESHOLDS We provide the pseudocode for the algorithm that finds the optimal false positive rates when the threshold values are provided. The corresponding optimization problem is given in Eq.18. As the boundaries of the regions are already defined, one only needs to find the optimal false positive rate for the backup Bloom filter of each region.

$$\begin{aligned} \min_{f_1, \dots, f_k} \quad & \sum_{i=1}^{k} (G(t_i)-G(t_{i-1}))\, c \log_2\!\left(\frac{1}{f_i}\right) \\ \text{subject to} \quad & \sum_{i=1}^{k} (H(t_i)-H(t_{i-1}))\, f_i = F, \\ & f_i \leq 1, \quad i = 1, \dots, k \end{aligned} \tag{18}$$

Alg.1 gives the pseudocode. We first assign false positive rates based on the relaxed problem, but may find that fi > 1 for some regions. For such regions, we can set fi = 1, re-solve the relaxed problem with these additional constraints (that is, excluding these regions), and use the result as a solution for the general problem. Some regions might again have a false positive rate above one, so we repeat the process. The algorithm stops when there is no new region with a false positive rate greater than one. This algorithm finds the optimal false positive rates for the regions when the thresholds are fixed.
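A minimal Python sketch of Alg.1 follows, assuming g and h hold the per-region key and non-key masses (the function and variable names are ours, not from the paper's code):

```python
import numpy as np

def optimal_fpr(g, h, F):
    """Sketch of Alg.1: optimal per-region FPRs for fixed thresholds."""
    g, h = np.asarray(g, dtype=float), np.asarray(h, dtype=float)
    f = g * F / h                          # relaxed-problem solution (Eq.17)
    while np.any(f > 1):
        f[f > 1] = 1.0                     # cap offending regions at one
        capped = f >= 1.0
        g_sum, h_sum = g[capped].sum(), h[capped].sum()
        # re-solve the relaxed problem on the remaining regions (Alg.1, line 14)
        f[~capped] = g[~capped] * (F - h_sum) / (h[~capped] * (1.0 - g_sum))
    return f

# Two-region example from above: the second region gets no backup filter.
print(optimal_fpr([0.2, 0.8], [0.9, 0.1], F=0.2))   # -> [0.111..., 1.0]
```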
Algorithm 3 Solving the general problem
Input Gdis – the array containing the discretized key density of each region
Input Hdis – the array containing the discretized non-key density of each region
Input F – target overall false positive rate
Input k – number of regions
Output t – the array of threshold boundaries of the regions
Output f – the array of false positive rates of the regions
Algorithm ThresMaxDivDP – DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Algorithm CalcDensity – returns the region densities given the thresholds of the regions
Algorithm OptimalFPR – returns the optimal false positive rates of the regions given the thresholds
Algorithm SpaceUsed – returns the space used by the backup Bloom filters given the thresholds and per-region false positive rates
1: procedure SOLVE(Gdis, Hdis, F, k)
2:     MinSpaceUsed ← ∞ ▷ minimum space used so far
3:     index ← −1 ▷ index corresponding to the minimum space used
4:     Glast ← 0 ▷ key density of the current last region
5:     Hlast ← 0 ▷ non-key density of the current last region
6:     for i in k − 1, k, ..., N − 1 do ▷ iterate over the possibilities for the last region
7:         Glast ← Σj=i..N Gdis[j] ▷ calculate the key density of the last region
8:         Hlast ← Σj=i..N Hdis[j]
9:         t ← ThresMaxDivDP(Gdis[1..(i−1)], Hdis[1..(i−1)], k − 1) ▷ find the optimal thresholds for the rest of the array
10:        t.append(i)
11:        G′, H′ ← CalcDensity(Gdis, Hdis, t)
12:        f ← OptimalFPR(G′, H′, F, k) ▷ find the optimal false positive rates for the current configuration
13:        if SpaceUsed(Gdis, Hdis, t, f) < MinSpaceUsed then
14:            MinSpaceUsed ← SpaceUsed(Gdis, Hdis, t, f); index ← i ▷ remember the best configuration so far
15:    Glast ← Σj=index..N Gdis[j]
16:    Hlast ← Σj=index..N Hdis[j]
17:    t ← ThresMaxDivDP(Gdis[1..(index−1)], Hdis[1..(index−1)], k − 1)
18:    t.append(index)
19:    G′, H′ ← CalcDensity(Gdis, Hdis, t)
20:    f ← OptimalFPR(G′, H′, F, k)
21:    return t, f

C ALGORITHM FOR FINDING THRESHOLDS We provide the pseudocode for the algorithm that solves the general problem; Alg.3 finds the thresholds and false positive rates. As described in the main text, this algorithm provides the optimal parameter values if (G(ti) − G(ti−1))/(H(ti) − H(ti−1)) is monotonically increasing. The idea is that only the false positive rate of the rightmost region can be one. The algorithm receives discretized key and non-key densities. It first iterates over all the possibilities for the rightmost region. For each iteration, it finds the thresholds that maximize the KL divergence for the rest of the array, for which a dynamic programming algorithm exists. After calculating these thresholds, it finds the optimal false positive rate for each region using Alg.1. After calculating the thresholds and false positive rates, the algorithm computes the total space used by the backup Bloom filters in PLBF. It then remembers the index for which the space used was minimal. The ti's and fi's corresponding to this index are then used to build the backup Bloom filters. The worst-case time complexity is O(N³k). D ADDITIONAL CONSIDERATIONS D.1 SANDWICHING: A SPECIAL CASE We show here that the sandwiching approach can actually be interpreted as a special case of our method. In the sandwiching approach, the learned model is sandwiched between two Bloom filters, as shown in Fig.3(A). The input first goes through a Bloom filter and the negatives are discarded. The positives are passed through the learned model, where based on their score s(x) they are either directly accepted when s(x) > t or passed through another backup Bloom filter when s(x) ≤ t. In our setting, we note that the pre-filter in the sandwiching approach can be merged with the backup filters to yield backup filters with a modified false positive rate. Fig.3(B) shows what an equivalent design with modified false positive rates would look like. (Here equivalence means we obtain the same false positive rate with the same bit budget; we do not consider compute time.) Thus, we see that the sandwiching approach can be viewed as a special case of the PLBF with two regions. However, this also tells us we can make the PLBF more efficient by using sandwiching. Specifically, if we find when constructing a PLBF with k regions that fi < 1 for all i, we may assign f0 = max1≤i≤k fi. We may then use an initial Bloom filter with false positive rate f0, and change the target false positive rates for all other intervals to fi/f0, while keeping the same bit budget. This approach will be somewhat more efficient computationally, as we avoid computing the learned model for some fraction of non-key elements.
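A minimal sketch of this conversion (our own helper, not code from the paper):

```python
def to_sandwich(f):
    """Convert per-region FPRs (all < 1) into an equivalent sandwiched design:
    an initial pre-filter with FPR f0 plus rescaled backup-filter FPRs fi/f0."""
    f0 = max(f)                        # the pre-filter discards easy negatives first
    return f0, [fi / f0 for fi in f]

print(to_sandwich([0.01, 0.1, 0.4]))   # -> (0.4, [0.025, 0.25, 1.0])
```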
D.2 PERFORMANCE AGAINST NUMBER OF REGIONS k Earlier, we saw that the maximum space saved by using PLBF instead of a normal Bloom filter is linearly proportional to $D_{KL}(\hat{g}, \hat{h})$. If we split any region into two regions, the overall divergence increases, because the sum of the divergences of the two split regions is always at least the original divergence, as shown in Eq.19. Eq.19 is an application of Jensen's inequality.

$$(g_1 + g_2) \log\frac{g_1+g_2}{h_1+h_2} \;\leq\; g_1 \log\frac{g_1}{h_1} + g_2 \log\frac{g_2}{h_2} \tag{19}$$

Increasing the number of regions therefore always improves the maximum performance. We would hope that in practice a small number of regions k would suffice. This seems to be the case in our experience; we detail one such experiment in our evaluation (Section 4.2). D.3 PERFORMANCE USING VARIOUS BLOOM FILTER VARIANTS We consider how the space saved by the PLBF varies with the type of backup Bloom filter being used. The PLBF can use any Bloom filter variant as the backup Bloom filter. When we compare our performance with a Bloom filter variant, we use that same Bloom filter variant as the backup Bloom filter for a fair comparison. First, the absolute space one can save by using a PLBF instead of a Bloom filter variant is given in Eq.10. This quantity increases with increasing c.6 6The sizes of standard Bloom filter variants are proportional to |S| × log2(1/f), where S is the set it represents and f is the false positive rate it achieves. See e.g. Mitzenmacher (2018) for related discussion. The constant c depends on which type of Bloom filter is used as a backup. For example, c = log2(e) for the standard Bloom filter and c = 1.0 for the optimal Bloom filter. The relative space one saves by using PLBF instead of the given Bloom filter variant is shown in Eq.20. This quantity is the space saved by PLBF (as shown in Eq.10) divided by the space used by the given Bloom filter variant (c × |S| × log2(1/F)):

$$\frac{c\,|S|\,D_{KL}(\hat{g}, \hat{h}) - \text{Size Of Learned Model}}{c\,|S|\,\log_2(1/F)} \tag{20}$$

Cancelling the common terms, we obtain Eq.21:

$$\frac{D_{KL}(\hat{g}, \hat{h})}{\log_2(1/F)} - \frac{\text{Size Of Learned Model}}{c\,|S|\,\log_2(1/F)} \tag{21}$$

The relative space saved, like the absolute space saved, also increases with increasing c. Thus, both the relative and absolute space saved by the PLBF are higher for a standard Bloom filter (c = 1.44) than for an optimal Bloom filter (c = 1.00), and hence our experiments in Section 4.1 are conservative estimates of the gains possible in practice using PLBF.
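Eq.21 is easy to evaluate directly; a small helper (ours, with illustrative numbers rather than measured values) shows how the savings shrink as the model size grows:

```python
import math

def relative_space_saved(dkl, F, c=1.0, model_bits=0.0, n_keys=1):
    """Eq.21: fraction of a c*|S|*log2(1/F)-bit Bloom filter saved by a PLBF."""
    per_key = math.log2(1.0 / F)
    return dkl / per_key - model_bits / (c * n_keys * per_key)

# Illustrative numbers: DKL = 2 bits, F = 0.001, 1M keys, a 1Mbit model.
print(relative_space_saved(dkl=2.0, F=0.001, c=1.0,
                           model_bits=1_000_000, n_keys=1_000_000))  # ~0.10
```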
E ADDITIONAL EXPERIMENTS E.1 PERFORMANCE W.R.T. STANDARD BLOOM FILTERS Earlier, we evaluated our performance using optimal Bloom filters; here we present results using standard Bloom filters. As shown in Appendix.D.3, PLBF performs better w.r.t. standard Bloom filters than optimal Bloom filters. As one can see from Fig.4, PLBF performs better than the standard Bloom filter. E.2 PERFORMANCE AND MODEL QUALITY Here we provide an experiment to see how the performance of the various methods varies with the quality of the model. As discussed earlier, a good model will have a high skew of the distributions g and h towards extreme values. We therefore vary the skew parameter of the Zipfian distribution to simulate model quality. We measure the quality of the model using the standard F1 score. Fig.5(B) shows the space used by the various methods to achieve a fixed false positive rate of 0.001 as we vary the F1 score of the model. The figure shows that as the model quality in terms of the F1 score increases, the space required by all the methods decreases (except for the optimal Bloom filter, which does not use a model). The space used by all the methods goes to zero as the F1 score goes to 1, as for the synthetic dataset there is no space cost for the model. The data point corresponding to an F1 score of 0.99 was used to plot Fig.2(A). E.3 DISCRETIZATION EFFECT ON DYNAMIC PROGRAMMING RUNTIME, PLBF SIZE All the runtime experiments in this subsection and the next were measured on a quad-core Intel Core i7 CPU @ 2.80GHz with 16GB of memory. We use the bloom-filter Python package [bloom filter] for our backup Bloom filters. The dynamic programming algorithms are implemented in Python. Here we provide an experiment to see how the dynamic programming (DP) algorithm runtime (pseudocode in Alg.3) and the PLBF size vary with the level of discretization (N). In the tables below, we report the DP algorithm runtime and the space taken by the PLBF to achieve an approximate empirical false positive rate of 0.001 for various N. As discussed in Sec. 3.3.2, with increasing N one gets closer to the optimal parameters, at the cost of higher computation time. This trend is demonstrated in the table below for the URLs and EMBER datasets. We note that if runtime is an issue, the increase in size from using a smaller N is relatively small. E.4 CONSTRUCTION TIME FOR VARIOUS BASELINES Here we look at the construction time breakdown for the PLBF and various alternatives, with the goal of seeing the cost, in terms of construction time, of using the more highly tuned PLBF. The construction time of all the learned Bloom filters includes the model training time and parameter estimation time, which are not required for the standard Bloom filter construction process. Since we use the same model for all learned baselines, the model construction time is the same for all of them. In Fig.6, we plot the construction time breakdown for the various baselines to achieve an approximate empirical false positive rate of 0.001. Recall that the AdaBF and sandwiching approaches use heuristics to estimate their parameters, and unsurprisingly they therefore seem somewhat faster than PLBF. However, for N = 100 the parameter estimation time is smaller than the key insertion time and the model training time. The parameter estimation time for PLBF varies with the level of discretization we use for the DP algorithm. The PLBF with N = 1000 takes the longest to execute, while the standard Bloom filter is the fastest baseline. As shown in Table 1 above, using N = 1000 gives only a slight improvement in size. We therefore believe that if construction time is an issue, as in situations where one might want to re-learn and change the filter as data changes, one can choose parameters for PLBF construction that still yield significant benefits over previous approaches.
1. What is the main contribution of the paper regarding the fine-tuning of partitioned learned Bloom filters? 2. What are the strengths of the proposed approach, particularly in terms of its ability to reduce space consumption and false positive rates? 3. What are the weaknesses or limitations of the proposed method, especially when considering insertions and maintenance? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any questions or concerns regarding the practicality and efficiency of the proposed technique in real-world scenarios?
Review
Review This work proposed a technique to fine-tune the partitioned learned Bloom filter to reduce its space consumption given a false positive rate threshold. The idea is to formulate the problem as a two-part optimization problem: how to best partition the scores from the model into a given number of regions, and how to choose false positive rates for the regions to minimize the overall space consumption of the Bloom filters. A relaxed version of the latter problem is addressed by using KKT conditions to obtain the optimal false positive rates. The former problem is addressed by discretizing the region boundaries and dynamic programming. Overall, I like the idea of fine-tuning the partitioned learned Bloom filter. It seems to me that it would also be possible to fine-tune the number of partitions, e.g., a simple way is to use a binary search. The evaluation result is impressive compared with the baselines in terms of space consumption vs. false positive rate. And the writing of the paper is clear and easy to follow. Having said that, I have some concerns with this line of work. First, IMHO, the real challenge of putting these learned Bloom filters to work is how to maintain them under insertions. While it is OK that this is not the focus of this work, it seems to me that the proposed optimal learned Bloom filter can be brittle under insertions. It would be great to understand how the proposed technique degrades compared with the baselines. In addition, partitioned Bloom filters seem more prone to resizing upon insertions than a single Bloom filter. Second, it is possible that the proposed technique needs to be reoptimized upon insertions. In this case, it will be important to understand the overhead of constructing the Bloom filters proposed in this technique compared with the baselines. However, the overhead of constructing the various variants of Bloom filters is missing from the evaluation. It might be possible to reduce the overhead by using a coarser-grained discretization for the DP. The performance, however, can degrade with a coarser-grained discretization.
ICLR
Title SEARNN: Training RNNs with global-local losses Abstract We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the “learning to search” (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task. 1 INTRODUCTION Recurrent neural networks (RNNs) have been quite successful in structured prediction applications such as machine translation (Sutskever et al., 2014), parsing (Ballesteros et al., 2016) or caption generation (Vinyals et al., 2015). These models use the same repeated cell (or unit) to output a sequence of tokens one by one. As each prediction takes into account all previous predictions, this cell learns to output the next token conditioned on the previous ones. The standard training loss for RNNs is derived from maximum likelihood estimation (MLE): we consider that the cell outputs a probability distribution at each step in the sequence, and we seek to maximize the probability of the ground truth. Unfortunately, this training loss is not a particularly close surrogate to the various test errors we want to minimize. A striking example of discrepancy is that the MLE loss is close to 0/1: it makes no distinction between candidates that are close or far away from the ground truth (with respect to the structured test error), thus failing to exploit valuable information. Another example of train/test discrepancy is called exposure or exploration bias (Ranzato et al., 2016): in traditional MLE training the cell learns the conditional probability of the next token, based on the previous ground truth tokens – this is often referred to as teacher forcing. However, at test time the model does not have access to the ground truth, and thus feeds its own previous predictions to its next cell for prediction instead. Improving RNN training thus appears as a relevant endeavor, which has received much attention recently. In particular, ideas coming from reinforcement learning (RL), such as the REINFORCE and ACTOR-CRITIC algorithms (Ranzato et al., 2016; Bahdanau et al., 2017), have been adapted to derive training losses that are more closely related to the test error that we actually want to minimize. In order to address the issues of MLE training, we propose instead to use ideas from the structured prediction field, in particular from the “learning to search” (L2S) approach introduced by Daumé et al. (2009) and later refined by Ross & Bagnell (2014) and Chang et al. (2015) among others. Contributions. In Section 2, we review the limitations of MLE training for RNNs in detail.
We also clarify some related claims made in the recent literature. In Section 3, we make explicit the strong links between RNNs and the L2S approach. In Section 4, we present SEARNN, a novel training algorithm for RNNs, using ideas from L2S to derive a global-local loss that is much closer to the test error than MLE. We demonstrate that this novel approach leads to significant improvements on two difficult structured prediction tasks, including a spelling correction problem recently introduced in Bahdanau et al. (2017). As this algorithm is quite costly, we investigate scaling solutions in Section 5. We explore a subsampling strategy that allows us to considerably reduce training times, while maintaining improved performance compared to MLE. We apply this new algorithm to machine translation and report significant improvements in Section 6. Finally, we contrast our novel approach to the related L2S and RL-inspired methods in Section 7. 2 TRADITIONAL RNN TRAINING AND ITS LIMITATIONS RNNs are a large family of neural network models aimed at representing sequential data. To do so, they produce a sequence of states $(h_1, \dots, h_T)$ by recursively applying the same transformation (or cell) f on the sequential data: $h_t = f(h_{t-1}, y_{t-1}, x)$, with $h_0$ an initial state and x an optional input. Many possible design choices fit this framework. We focus on a subset typically used for structured prediction, where we want to model the joint probability of a target sequence $(y_1, \dots, y_{T_x}) \in \mathcal{A}^{T_x}$ given an input x (e.g. the decoder RNN in the encoder-decoder architecture (Sutskever et al., 2014; Cho et al., 2014)). Here $\mathcal{A}$ is the alphabet of output tokens and $T_x$ is the length of the output sequence associated with input x (though $T_x$ may take different values, in the following we drop the dependency on x and use T for simplicity). To achieve this modeling, we feed $h_t$ through a projection layer (i.e. a linear classifier) to obtain a vector of scores $s_t$ over all possible tokens $a \in \mathcal{A}$, and normalize these with a softmax layer (an exponential normalizer) to obtain a distribution $o_t$ over tokens:

$$h_t = f(h_{t-1}, y_{t-1}, x)\,; \quad s_t = \mathrm{proj}(h_t)\,; \quad o_t = \mathrm{softmax}(s_t) \quad \forall\, 1 \leq t \leq T. \tag{1}$$

The vector $o_t$ is interpreted as the predictive conditional distribution for the t-th token given by the RNN model, i.e. $p(a\,|\,y_1, \dots, y_{t-1}, x) := o_t(a)$ for $a \in \mathcal{A}$. Multiplying the values $o_t(y_t)$ together thus yields the joint probability of the sequence y defined by the RNN (thanks to the chain rule):

$$p(y_1, \dots, y_T\,|\,x) = p(y_1|x)\, p(y_2|y_1, x) \cdots p(y_T|y_1, \dots, y_{T-1}, x) := \prod_{t=1}^{T} o_t(y_t). \tag{2}$$

As pointed out by Goodfellow et al. (2016), the underlying structure of these RNNs as graphical models is thus a complete graph, and there is no conditional independence assumption to simplify the difficult prediction task of computing $\arg\max_{y \in \mathcal{Y}} p(y|x)$. In practice, one typically uses either beam search to approximate this decoding, or a sequence of greedy predictions $\hat{y}_t := \arg\max_{a \in \mathcal{A}} p(a\,|\,\hat{y}_1, \dots, \hat{y}_{t-1}, x)$. If we use the “teacher forcing” regimen, where the inputs to the RNN cell are the ground truth tokens (as opposed to its own greedy predictions), we obtain the probability of each ground truth sequence according to the RNN model. We can then use MLE to derive a loss to train the RNN. One should note here that despite the fact that the individual output probabilities are at the token level, the MLE loss involves the joint probability (computed via the chain rule) and is thus at the sequence level.
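To make Eqs. (1)-(2) concrete, here is a minimal numpy sketch of teacher-forced sequence scoring; the toy linear cell and projection below are stand-ins we made up for f and proj, not an actual architecture from the paper.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def sequence_log_prob(cell, proj, y, h0, x=None):
    """Chain-rule log-probability of Eq. (2) under teacher forcing."""
    h, log_p, prev = h0, 0.0, None
    for token in y:
        h = cell(h, prev, x)         # h_t = f(h_{t-1}, y_{t-1}, x), Eq. (1)
        o = softmax(proj(h))         # o_t, a distribution over the alphabet A
        log_p += np.log(o[token])    # accumulate log o_t(y_t)
        prev = token
    return log_p                     # MLE training maximizes this quantity

# Toy usage: alphabet of 5 tokens, state of size 8, random linear cell.
A, d = 5, 8
rng = np.random.default_rng(0)
W, U, P = rng.normal(size=(d, d)), rng.normal(size=(d, A)), rng.normal(size=(A, d))
emb = np.eye(A)
cell = lambda h, prev, x: np.tanh(W @ h + (U @ emb[prev] if prev is not None else 0))
proj = lambda h: P @ h
print(sequence_log_prob(cell, proj, [1, 3, 0], np.zeros(d)))
```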
The limitations of MLE training. While this maximum likelihood style of training has been very successful in various applications, it suffers from several known issues, especially for structured prediction problems. The first one is called exposure or exploration bias (Ranzato et al., 2016). During training (with teacher forcing), the model learns the probabilities of the next tokens conditioned on the ground truth. But at test time, the model does not have access to the ground truth and its output probabilities are conditioned on its own previous predictions instead. Therefore, if the predictions differ from the ground truth, the model has to continue based on an exploration path it has not seen during training, which means that it is less likely to make accurate predictions. This phenomenon, which is typical of sequential prediction tasks (Kääriäinen, 2006; Daumé et al., 2009), can lead to a compounding of errors, where mistakes in prediction accumulate and prevent good performance. The second major issue is the discrepancy between the training loss and the various test errors associated with the tasks for which RNNs are used (e.g. edit distance, F1 score...). Of course, a single surrogate is not likely to be a good approximation for all these errors. One salient illustration of this fact is that MLE ignores the information contained in structured losses. As it only focuses on maximizing the probability of the ground truth, it does not distinguish between a prediction that is very close to the ground truth and one that is very far away. Thus, most of the information given by a structured loss is not leveraged when using this approach. Local vs. sequence-level. Some recent papers (Ranzato et al., 2016; Wiseman & Rush, 2016) also point out that since RNNs output next-token predictions, their loss is local instead of sequence-level, contrary to the error we typically want to minimize. This claim seems to contradict the standard RNN analysis, which postulates that the underlying graphical model is the complete graph: that is, the RNN outputs the probability of the next tokens conditioned on all the previous predictions. Thanks to the chain rule, one recovers the probability of the whole sequence. Thus the maximum likelihood training loss is indeed a sequence-level loss, even though we can decompose it into a product of local losses at each cell. However, if we assume that the RNN outputs are only conditioned on the last few predictions (instead of all previous ones), then we can indeed consider the MLE loss as local. In this setting, the underlying graphical model obeys Markovian constraints (as in maximum entropy Markov models (MEMMs)) rather than being the complete graph; this corresponds to the assumption that the information from the previous inputs is imperfectly carried through the network to the cell, preventing the model from accurately representing long-term dependencies. Given all these limitations, exploring novel ways of training RNNs appears to be a worthy endeavor, and this field has attracted a lot of interest in the past few years. While many papers try to adapt ideas coming from the reinforcement learning literature, we instead focus in this paper on the links we can draw with structured prediction, and in particular with the L2S approach.
3 LINKS BETWEEN RNNS AND LEARNING TO SEARCH The L2S approach to structured prediction was first introduced by Daumé et al. (2009). The main idea behind it is a learning reduction (Beygelzimer et al., 2016): transforming a complex learning problem (structured prediction) into a simpler one that we know how to solve (multiclass classification). To achieve this, Daumé et al. (2009) propose in their SEARN algorithm to train a shared local classifier to predict each token sequentially (conditioned on all inputs and all past decisions), thus searching greedily step by step in the big combinatorial space of structured outputs. The idea that tokens can be predicted one at a time, conditioned on their predecessors, is central to this approach. The training procedure is iterative: at the beginning of each round, one uses the current model (or policy1) to build an intermediate dataset to train the shared classifier on. The specificity of this new dataset is that each new sample is accompanied by a cost vector containing one entry per token in the output vocabulary A. To obtain these cost vectors, one starts by applying a roll-in policy to predict all the tokens up to T, thus building one trajectory (or exploration path) in the search space per sample in the initial dataset. Then, at each time step t, one picks arbitrarily each possible token (diverging from the roll-in trajectory) and then continues predicting to finish the modified trajectory using a roll-out policy. One then computes the cost of all the obtained sequences, and ends up with T vectors (one per time step) of size |A| (the number of possible tokens) for every sample. Figure 1 describes the same process for our SEARNN algorithm (although in this case the shared classifier is an RNN). One then extracts features from the “context” at each time step t (which encompasses the full input and the previous tokens predicted up to t during the roll-in).2 Combining the cost vectors with these features yields the new intermediary dataset. The original problem is thus reduced to multi-class cost-sensitive classification. Once the shared classifier has been fully trained on this new dataset, the policy is updated for the next round. The algorithm is described more formally in Algorithm 2 (see Appendix A). Theoretical guarantees for various policy updating rules are provided by e.g. Daumé et al. (2009) and Chang et al. (2015). 1Note that the vocabulary used in this literature is slightly different from that of RNNs: tokens are rather referenced as actions, predictions as decisions and models as policies. 2This is often referred to as “search state” in the L2S literature, but we prefer calling it context to avoid confusion with the RNN hidden state. Roll-in and roll-out policies. The policies used to create the intermediate datasets fulfill different roles. The roll-in policy controls what part of the search space the algorithm explores, while the roll-out policy determines how the cost of each token is computed. The main possibilities for both roll-in and roll-out are explored by Chang et al. (2015). The reference policy tries to pick the optimal token based on the ground truth. During the roll-in, it corresponds to picking the ground truth. For the roll-out phase, while it is easy to compute an optimal policy in some cases (e.g. for the Hamming loss, where simply copying the ground truth is also optimal), it is often too expensive (e.g. for the BLEU score). One then uses a heuristic (in our experiments the reference policy is to copy the ground truth for both roll-in and roll-out unless indicated otherwise).
The learned policy simply uses the current model instead, and the mixed policy stochastically combines both. According to Chang et al. (2015), the best combination when the reference policy is poor is to use a learned roll-in and a mixed roll-out. Links to RNNs. One can identify the following interesting similarities between a greedy approach to RNNs and L2S. Both models handle sequence labeling problems by outputting tokens recursively, conditioned on past decisions. Further, the RNN “cell” is shared at each time step and can thus also be seen as a shared local classifier that is used to make structured predictions, as in the L2S framework. In addition, there is a clear equivalent to the choice of roll-in policy in RNNs. Indeed, teacher forcing (conditioning the outputs on the ground truth) can be seen as the roll-in reference policy for the RNN. Instead, if one conditions the outputs on the previous predictions of the model, then we obtain a roll-in learned policy. Despite these connections, many differences remain. Amongst them is the fact that no roll-outs are involved in standard RNN training. We thus consider next whether ideas coming from L2S could mitigate the limitations of MLE training for RNNs. In particular, one key property of L2S worth porting over to RNN training is that the former fully leverages structured loss information, contrary to MLE, as previously noted. 4 IMPROVING RNN TRAINING WITH L2S Since we are interested in leveraging structured loss information, we can try to obtain it in the same fashion as L2S. The main tool that L2S uses in order to construct a cost-sensitive dataset is the roll-out policy. In many classical structured prediction use cases, one does not need to follow through with a policy because the “cost-to-go” that the roll-out yields is either free or easily computable from the ground truth. We are however also interested in cases where this information is unavailable, and roll-outs are needed to approximate it (e.g. for machine translation). This leads to several questions. How can we integrate roll-outs into an RNN model? How do we use this additional information, i.e. what loss do we use to train the model on? How do we make it computationally tractable? The SEARNN Algorithm. The basic idea of the SEARNN algorithm is quite simple: we borrow from L2S the idea of using a global loss for each local cell of the RNN. As in L2S, we first compute a roll-in trajectory, following a specific roll-in policy. Then, at each step t of this trajectory, we compute the costs ct(a) associated with each possible token a. To do so we pick a at this step and then follow a roll-out policy to finish the output sequence ŷa. We then compare ŷa with the ground truth using the test error itself, rather than a surrogate. By repeating this for the T steps we obtain T cost vectors. We use this information to derive one cost-sensitive training loss for each cell, which allows us to compute an update for the parameters of the model. The full process for one cell is illustrated in Figure 1. Our losses are global-local, in the sense that they appear at the local level but all contain sequence-level information. Our final loss is the sum over the T local losses. We provide the pseudo-code for SEARNN in Algorithm 1.

Algorithm 1 SEARNN algorithm (for a simple encoder-decoder network)
1: Initialize the weights ω of the RNN network.
2: for i in 1 to N do
3:     Sample B ground truth input/output structured pairs {(x1, y1), ..., (xB, yB)}
       # Perform the roll-in/roll-outs to get the costs. This step can be heavily parallelized.
4:     for b in 1 to B do
5:         Compute input features φ(xb)
           # Roll-in.
6:         Run the RNN until the t-th cell with φ(xb) as initial state by following the roll-in policy (see Appendix A.2 for details in the case of the reference roll-in policy)
7:         Store the sequence of hidden states in order to perform several roll-outs
8:         for t in 1 to T do
               # Roll-outs for all actions in order to collect the cost vector at the t-th cell.
9:             for a in 1 to A do
10:                Pick a decoding method (e.g. greedy or beam search)
11:                Run the RNN from the t-th cell to the end by first enforcing action a at cell t, and then following the decoding method.
12:                Collect the cost cbt(a) by comparing the obtained output sequence ŷbt(a) to yb
13:            end for
14:        end for
15:    end for
16:    Derive a loss for each cell from the collected costs
17:    Update the parameters of the network ω by doing a single gradient step
18: end for
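The core roll-in/roll-out loop of Algorithm 1 can be sketched in a few lines of Python. Here `predict` stands in for the RNN policy and `cost` for the structured test error (both placeholder names of ours), with a learned roll-in, a learned roll-out, and greedy decoding assumed:

```python
def collect_costs(predict, T, A, y_true, cost):
    """Collect one cost vector of size A per time step (Algorithm 1, lines 4-14).

    predict(prefix) -> next token under the current (greedy) policy
    cost(seq, y_true) -> structured error of a completed sequence
    """
    roll_in = []
    for _ in range(T):                     # roll-in trajectory under the policy
        roll_in.append(predict(roll_in))
    costs = []
    for t in range(T):
        c_t = []
        for a in range(A):                 # diverge with every possible token a at step t
            seq = roll_in[:t] + [a]
            while len(seq) < T:            # roll-out to complete the sequence
                seq.append(predict(seq))
            c_t.append(cost(seq, y_true))
        costs.append(c_t)
    return costs

# Toy usage: Hamming cost and a degenerate policy that always predicts token 0.
hamming = lambda s, y: sum(a != b for a, b in zip(s, y))
print(collect_costs(lambda prefix: 0, T=3, A=2, y_true=[0, 1, 0], cost=hamming))
```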
Choosing a multi-class classifier. SEARNN appears quite similar to L2S, but there are a few key differences that merit more explanation. As the RNN cell can serve as a multi-class classifier, in SEARNN we could pick the cell as a (shallow) shared classifier, whose inputs are features extracted from the full context by the previous cells of the RNN. Instead, we pick the RNN itself, thus getting a (deep) shared classifier that also learns the features directly from the context. The difference between the two options is more thoroughly detailed in Appendix B. Arbitrarily picking a token a during the roll-out phase can then be done by emulating the teacher forcing technique: if predicted tokens are fed back to the model (say, if the roll-out policy requires it), we use a for the next cell (instead of the prediction the cell would have output). We also use a in the output sequence before computing the cost. Choosing a cost-sensitive loss. We now also explain our choice for the training loss function derived from the cost vectors. One popular possibility from L2S is to go the full reduction route down to binary classification. However, this technique involves creating multiple new datasets (which is hard to implement as part of a neural network), as well as training $|\mathcal{A}|^2$ binary classifiers. Instead, we simply work with the multi-class classifier encoded by the RNN cell, with the training losses defined next. We now introduce two of the more successful losses we used (although we experimented with many others, which are detailed in Appendix C.1). In the following, each loss is defined at the cell level. The global loss is the sum of all T losses. st(a) refers to the score output by cell t for token a. Log-loss (LL). A central idea in L2S is to learn the target tokens the model should aim for. This is more meaningful than blindly imposing the ground truth as target, in particular when the model has deviated from the ground truth trajectory. Goldberg & Nivre (2012) refer to this technique as using dynamic oracles. In the context of RNN training, we call this approach target learning. Our first loss is thus a simple log-loss with the minimal cost token as target:

$$\mathcal{L}_t(s_t; c_t) = -\log\left( e^{s_t(a^\star)} \Big/ \sum_{i=1}^{A} e^{s_t(i)} \right) \quad \text{where } a^\star = \arg\min_{a \in \mathcal{A}} c_t(a). \tag{3}$$

It is structurally similar to MLE. The only difference is that instead of maximizing the probability of the ground truth action, we maximize the probability of the best performing action with respect to the cost vector. This similarity is a significant advantage from an optimization perspective: as RNNs have mostly been trained using MLE, this allows us to leverage decades of previous work. Note that when the reference policy is to simply copy the ground truth (which is sometimes optimal, e.g. when the test error is the Hamming loss), a★ is always the ground truth token. LL with reference roll-in and roll-out is in this case equivalent to MLE.
Kullback-Leibler divergence (KL). The log-loss approach appears to be relatively wasteful with the structured information we have access to, since we are only using the minimal cost value. To exploit this information more meaningfully, we consider the following approach: we convert each cost vector into a probability distribution (e.g. through a softmax operator) and then minimize a divergence between the current model distribution PM and the “target distribution” PC derived from the costs. As the MLE objective itself can be expressed as the KL divergence between Dgt (a Dirac distribution with full mass on the ground truth) and PM, we also choose to minimize the KL divergence between PC and PM. Since the costs are considered fixed with respect to the parameters of the model, our loss is equivalent to the cross-entropy between PC and PM:

$$\mathcal{L}_t(s_t; c_t) = -\sum_{a=1}^{A} P_C(a) \log P_M(a) \quad \text{where } P_C(a) = \frac{e^{-\alpha c_t(a)}}{\sum_{i=1}^{A} e^{-\alpha c_t(i)}} \text{ and } P_M(a) = \frac{e^{s_t(a)}}{\sum_{i=1}^{A} e^{s_t(i)}}. \tag{4}$$

α is a scaling parameter that controls how peaky the target distributions are. It can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Compared to LL, KL leverages the structured loss information more directly and thus mitigates the 0/1 nature of MLE better.
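A minimal numpy sketch of both losses at a single cell (our own illustrative code; the re-centering of scores and costs is added purely for numerical stability and does not change the values):

```python
import numpy as np

def searnn_losses(s_t, c_t, alpha=1.0):
    """LL (Eq. 3) and KL (Eq. 4) losses at one cell.
    s_t: scores over the A tokens; c_t: roll-out costs over the A tokens."""
    log_pm = s_t - (s_t.max() + np.log(np.sum(np.exp(s_t - s_t.max()))))  # log P_M
    ll = -log_pm[np.argmin(c_t)]              # log-loss on the minimum-cost token
    e = np.exp(-alpha * (c_t - c_t.min()))    # shifting costs leaves P_C unchanged
    p_c = e / e.sum()                         # target distribution P_C
    kl = -np.sum(p_c * log_pm)                # cross-entropy(P_C, P_M)
    return ll, kl

print(searnn_losses(np.array([2.0, 0.0, -1.0]), np.array([0.0, 1.0, 3.0]), alpha=2.0))
```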
Optimization. Another difference between SEARN and RNNs is that RNNs are typically trained using stochastic gradient descent, whereas SEARN is a batch method. In order to facilitate training, we decide to adapt the optimization process of LOLS, an online variant of SEARN introduced by Chang et al. (2015). At each round, we select a random mini-batch of samples, and then take a single gradient step on the parameters with the associated loss (contrary to SEARN, where the reduced classifier is fully trained at each round). Note that we do not need the test error to be differentiable, as our costs ct(a) are fixed when we minimize our training loss. This corresponds to defining a different loss at each round, which is the way it is done in L2S. In this case our gradient is unbiased. However, if instead we consider that we define a single loss for the whole procedure, then the costs depend on the parameters of the model and we effectively compute an approximation of the gradient. Whether it is possible not to fix the costs and to backpropagate through the roll-in and roll-out remains an open problem. Expected benefits. SEARNN can improve performance because of a few key properties. First, our losses leverage the test error, leading to potentially much better surrogates than MLE. Second, all of our training losses (even plain LL) leverage the structured information that is contained in the computed costs. This is much more satisfactory than MLE, which does not exploit this information and ignores nuances between good and bad candidate predictions. Indeed, our hypothesis is that the more complex the error is, the more SEARNN can improve performance. Third, the exploration bias we find in teacher forcing can be mitigated by using a “learned” roll-in policy, which may be the best roll-in policy for L2S applications according to Chang et al. (2015). Fourth, the loss at each cell is global, in the sense that the computed costs contain information about full sequences. This may help with the classical vanishing gradients problem that is prevalent in RNN training and motivated the introduction of specialized cells such as LSTMs (Hochreiter & Schmidhuber, 1997) or GRUs (Cho et al., 2014). Experiments. In order to validate these theoretical benefits, we ran SEARNN on two datasets and compared its performance against that of MLE. For a fair comparison, we use the same optimization routine for all methods. We pick the one that performs best for the MLE baseline. Note that in all the experiments of the paper, we use greedy decoding, both for our cost computation and for evaluation. Furthermore, whenever we use a mixed roll-out we always use 0.5 as our mixin parameter, following Chang et al. (2015). The first dataset is the optical character recognition (OCR) dataset introduced in Taskar et al. (2003). The task is to output English words given an input sequence of handwritten characters. We use an encoder-decoder model with GRU cells (Cho et al., 2014) of size 128. For all runs, we use SGD with a constant step-size of 0.5 and a batch size of 64. The cost used in the SEARNN algorithm is the Hamming error. We report the total Hamming error, normalized by the total number of characters, on the test set. The second dataset is the Spelling dataset introduced in Bahdanau et al. (2017). The task is to recover correct text from a corrupted version. This dataset is synthetically generated from a text corpus (One Billion Word dataset): for each character, we decide with some fixed probability whether or not to replace it with a random one. The total number of tokens A is 43 (alphabet size plus a few special characters) and the maximum sequence length T is 10 (sentences from the corpus are clipped). We provide results for two sub-datasets generated with the following replacement probabilities: 0.3 and 0.5. For this task, we follow Bahdanau et al. (2017) and use the edit distance as our cost. It is defined as the edit distance between the predicted sequence and the ground truth sequence, divided by the ground truth length. We reuse the attention-based encoder-decoder model with GRU cells of size 100 described in (Bahdanau et al., 2017). For all runs, we use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001 and a batch size of 128. Results are given in Table 1, including ACTOR-CRITIC (Bahdanau et al., 2017) runs on our data splits as an additional baseline. Key takeaways. First, SEARNN outperforms MLE by a significant margin on the two different tasks and datasets, which confirms our intuition that taking structured information into account enables better performance. Second, we observed that the best performing losses were those structurally close to MLE – LL and KL – whereas others (detailed in Appendix C.1) did not improve results. This might be explained by the fact that RNN architectures and optimization techniques have been evolving for decades with MLE training in mind. Third, the best roll-in/roll-out strategy appears to be combining a learned roll-in and a mixed roll-out, which is consistent with the claims from Chang et al. (2015).
Fourth, although we expect SEARNN to make stronger improvements over MLE on hard tasks (where a simplistic roll-out policy – akin to MLE – is suboptimal), we do get improvements even when outputting the ground truth (regardless of the current trajectory) is the optimal policy. 5 SCALING UP SEARNN While SEARNN does provide significant improvements on the two tasks we have tested it on, it comes with a rather heavy price, since a large number of roll-outs (i.e. forward passes) have to be run in order to compute the costs. This number, |A| × T, is proportional both to the length of the sequences and to the number of possible tokens. SEARNN is therefore not directly applicable to tasks with large output sequences or vocabulary sizes (such as machine translation), where computing so many forward passes becomes a computational bottleneck. Even though forward passes can be parallelized more heavily than backward ones (because they do not require maintaining activations in memory), their asymptotic cost remains in O(dT), where d is the number of parameters of the model. There are a number of ways to mitigate this issue. In this paper, we focus on subsampling both the cells and the tokens when computing the costs. That is, instead of computing a cost vector for each cell, we only compute them for a subsample of all cells. Similarly, we also compute these costs only for a small portion of all possible tokens. The speedups we can expect from this strategy are large, since the total number of roll-outs is proportional to both of the quantities we are decreasing. Sampling strategies. First, we need to decide how we select the steps and tokens that we sample. We have chosen to sample steps uniformly when we do not take all of them. On the other hand, we have explored several different possibilities for token sampling. The first is the uniform sampling strategy. The 3 alternative samplings we tried use the current state of our model: stochastic current policy sampling (where we use the current state of the stochastic policy to pick at random), a biased version of current policy sampling where we boost the scores of the low-probability tokens, and finally a top-k strategy where we take the top k tokens according to the current policy. Note that the latter strategy (top-k) can be seen as a simplified variant of targeted sampling (Goodman et al., 2016), another smarter strategy introduced to help L2S methods scale. Finally, in all strategies we always sample the ground truth action to make sure that our performance is at least as good as MLE. Adapting our losses to sampling. Our losses require computing the costs of all possible tokens at a given step. One could still use LL by simply making the assumption that the token with minimum cost is always sampled. However, this is a rather strong assumption, and it means pushing down the scores of tokens that were not even sampled and hence could not compete with the others. To alleviate this issue, we replace the full softmax by a layer applied only on the tokens that were sampled (Jean et al., 2015). While the target can still only be among the sampled tokens, the unsampled tokens are left alone by the gradient update, at least for the first-order dependency. This trick is even more necessary for KL, which otherwise requires a “default” score for unsampled tokens, adding a difficult-to-tune hyperparameter. We refer to these new losses as sLL and sKL.
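As an illustration, the top-k strategy with the ground truth always included can be sketched as follows (our own helper, not code from the paper):

```python
import numpy as np

def sample_tokens_topk(scores, y_t, n):
    """Pick the ground-truth token y_t plus the n-1 highest-scoring other
    tokens under the current policy; roll-outs are then run only for these."""
    order = np.argsort(-scores)                     # tokens by decreasing score
    picked = [y_t] + [int(a) for a in order if a != y_t][: n - 1]
    return picked

print(sample_tokens_topk(np.array([0.1, 2.0, -1.0, 0.5]), y_t=2, n=3))  # -> [2, 1, 3]
```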
The main goal of these experiments is to assess whether or not combining subsampling with the SEARNN algorithm is a viable strategy. To do so, we ran the method on the same two datasets that we used in the previous section. We decided to focus only on subsampling tokens, as the vocabulary size is usually the blocking factor rather than the sequence length. Thus we sampled all cells. We evaluate different sampling strategies and training losses. For all experiments, we use the learned policy for roll-in and the mixed one for roll-out, and we sample 5 tokens per cell. Finally, we use the same optimization techniques as in the previous experiment. Key takeaways. Results are given in Table 2. The analysis of this experiment yields interesting observations. First, and perhaps most importantly, subsampling appears to be a viable strategy to obtain a large part of the improvements of SEARNN while keeping computational costs under control. Indeed, we recover all of the improvements of the full method while only sampling a fraction of all possible tokens. Second, it appears that the best strategy for token sampling depends on the chosen loss. In the case of sLL, the top-k strategy performs best, whereas sKL favors the biased current policy. Third, it also seems that the best performing loss is task-dependent. Finally, this sampling technique yields a 5× running time speedup, therefore validating our scaling approach. 6 NEURAL MACHINE TRANSLATION Having introduced a cheaper alternative SEARNN method, we can now apply it to a large-scale structured prediction task and thus investigate whether our algorithm also improves upon MLE in more challenging real-life settings. We choose neural machine translation as our task, and the German-English translation track of the IWSLT 2014 campaign (Cettolo et al., 2014) as our dataset, as it was used in several related papers and thus allows for easier comparisons. We reuse the pre-processing of Ranzato et al. (2016), obtaining training, validation and test datasets of roughly 153k, 7k and 7k sentence pairs respectively, with vocabularies of size 22822 words for English and 32009 words for German. For fair comparison to related methods, we use similar architectures. To compare with BSO and ACTOR-CRITIC, we use an encoder-decoder model with GRU cells of size 256, with a bidirectional encoder and single-layer RNNs. For the specific case of MIXER, we replace the recurrent encoder with a convolutional encoder as in Ranzato et al. (2016). We use Adam as our optimizer, with an initial learning rate of $10^{-3}$ gradually decreasing to $10^{-5}$, and a batch size of 64. We select the best models on the validation set and report results both without and with dropout (0.3). Regarding the specific settings of SEARNN, we use a reference roll-in and a mixed roll-out. Additionally, we sample 25 tokens at each cell, following a mixed sampling strategy (detailed in Appendix C.2). We use the best performing loss on the validation set, i.e. the KL loss with scaling parameter 200. The traditional evaluation metric for such tasks is the BLEU score (Papineni et al., 2002). As we cannot use this corpus-wide metric to compute our sentence-level intermediate costs, we adopt the alternative smoothed BLEU score of Bahdanau et al. (2017) as our cost. We use a custom reference policy (detailed in Appendix C.2). We report the corpus-wide BLEU score on the test set in Table 3. Key takeaways.
First, the significant improvements SEARNN obtains over MLE on this task (2 BLEU points without dropout) show that the algorithm can be profitably applied to large-scale, challenging structured prediction tasks at a reasonable computational cost. Second, our performance is on par with or better than that of related methods with comparable baselines. Our performance using a convolutional encoder is similar to that of MIXER. Compared to BSO (Wiseman & Rush, 2016), our baseline, absolute performance, and improvements are all stronger. While SEARNN presents similar improvements to ACTOR-CRITIC, the absolute performance is slightly worse. This can be explained in part by the fact that SEARNN requires half as many parameters during training. Finally, the learned roll-in policy performed poorly for this specific task, so we used a reference roll-in instead. While this observation seems to go against the L2S analysis from Chang et al. (2015), it is consistent with another experiment we ran: we tried applying scheduled sampling (Bengio et al., 2015) – which uses a schedule of mixed roll-ins – on this dataset, but did not succeed in obtaining any improvements, despite using a careful schedule as proposed by its authors in private communications. One potential factor is that our reference policy is not good enough to yield valuable signal when starting from a poor roll-in. Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in. 7 DISCUSSION We now contrast SEARNN to several related algorithms, including traditional L2S approaches (which are not adapted to RNN training), and RNN training methods inspired by L2S and RL. Traditional L2S approaches. Although SEARNN is heavily inspired by SEARN, it is actually closer to LOLS (Chang et al., 2015), another L2S algorithm. Like LOLS, SEARNN is a meta-algorithm where roll-in/roll-out strategies are customizable (we explored most combinations in our experiments). Our findings are in agreement with those of Chang et al. (2015): we advocate using the same combination, that is, a learned roll-in and a mixed roll-out. The one exception to this rule of thumb is when the associated reduced problem is too hard (as seems to be the case for machine translation), in which case we recommend switching to a reference roll-in. Moreover, as noted in Section 4, SEARNN adapts the optimization process of LOLS (the one difference being that our method is stochastic rather than online): each intermediate dataset is only used for a single gradient step. This means the policy interpolation is of a different nature than in SEARN, where each intermediate dataset is fully optimized for and the resulting policy is mixed with the previous one. However, despite the similarities we have just underlined, SEARNN presents significant differences from these traditional L2S algorithms. First off, and most importantly, SEARNN is a full integration of the L2S ideas into RNN training, whereas previous methods cannot be used for this purpose directly. Second, in order to achieve this adaptation we had to modify several design choices, including: • the intermediate dataset construction, which significantly differs from traditional L2S;3 • the careful choice of a classifier (those used in the L2S literature do not fit RNNs well); • the design of tailored surrogate loss functions that leverage cost information while being easy to optimize in RNNs. L2S-inspired approaches.
Several other papers have tried using L2S-like ideas for better RNN training, starting with Bengio et al. (2015), which introduces “scheduled sampling” to avoid the exposure bias problem. The idea is to start with teacher forcing and to gradually use more and more model predictions instead of ground truth tokens during training. This is akin to a mixed roll-in – an idea which also appears in Daumé et al. (2009). Wiseman & Rush (2016, BSO) adapt one of the early variants of the L2S framework, the “Learning A Search Optimization” approach of Daumé & Marcu (2005, LASO), to train RNNs. However, LASO is quite different from the more modern SEARN family of algorithms that we focus on: it does not include either local classifiers or roll-outs, and has much weaker theoretical guarantees. Additionally, BSO’s training loss is defined by violations in the beam-search procedure, yielding a very different algorithm from SEARNN. Furthermore, BSO requires being able to compute a meaningful loss on partial sequences, and thus does not handle general structured losses, unlike SEARNN. Finally, its ad hoc surrogate objective provides very sparse sequence-level training signal, as mentioned by its authors, thus requiring warm-start. Ballesteros et al. (2016) use a loss that is similar to LL for parsing, a specific task where the cost-to-go is essentially free. This property is also a requirement for Sun et al. (2017), in which new gradient procedures are introduced to incorporate neural classifiers in the AGGREVATE (Ross & Bagnell, 2014) variant of L2S.4 In contrast, SEARNN can be used on tasks without a free cost-to-go oracle.
4 Sun et al. (2017)’s algorithm simply replaces the classifier in AGGREVATE with a neural network. As it is trained on an ever-growing dataset, a natural gradient update is required to make the algorithm tractable.
RL-inspired approaches. In structured prediction tasks, we have access to ground truth trajectories, i.e. a lot more information than in traditional RL. One major direction of research has been to adapt RL techniques to leverage this additional information. The main idea is to try to optimize the expectation of the test error directly (under the stochastic policy parameterized by the RNN):
$L(\theta) = -\sum_{i=1}^{N} \mathbb{E}_{(y_1^i, \ldots, y_T^i) \sim \pi(\theta)}\, r(y_1^i, \ldots, y_T^i)$ . (5)
Since we are taking an expectation over all possible structured outputs, the only term that depends on the parameters is the probability term (the tokens in the error term are fixed). This allows this loss function to support non-differentiable test errors, which is a key advantage. Of course, actually computing the expectation over an exponential number of possibilities is computationally intractable. To circumvent this issue, Shen et al. (2016) subsample trajectories according to the learned policy, while Ranzato et al. (2016); Rennie et al. (2016) use the REINFORCE algorithm, which essentially approximates the expectation with a single trajectory sample. Bahdanau et al. (2017) adapt the ACTOR-CRITIC algorithm, where a second critic network is trained to approximate the expectation.
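For illustration, here is a minimal PyTorch sketch of the single-sample REINFORCE surrogate for Eq. (5); the function name and interface are our own, and the trajectory is assumed to have already been sampled sequentially from the decoder (token t fed back into cell t+1):

```python
import torch
import torch.nn.functional as F

def reinforce_loss(step_scores, sampled_tokens, reward, baseline=0.0):
    # step_scores: (T, A) scores produced while sampling the trajectory.
    # sampled_tokens: (T,) the sampled trajectory y_1..y_T.
    # reward: scalar test reward r(y_1..y_T); may be non-differentiable.
    log_probs = F.log_softmax(step_scores, dim=-1)
    seq_log_prob = log_probs[torch.arange(len(sampled_tokens)),
                             sampled_tokens].sum()      # log p(y | x)
    # One scalar weight for the whole sequence: every token is pushed up
    # or down together, with no per-token credit assignment.
    return -(reward - baseline) * seq_log_prob
```

The single scalar weight on the summed log-probability is precisely the lack of per-token credit assignment discussed next.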
While all these approaches report significant improvements on various tasks, one trait they share is that they only work when initialized from a good pre-trained model. This phenomenon is often explained by the sparsity of the information contained in “sequence-level” losses. Indeed, in the case of REINFORCE, no distinction is made between the tokens that form a sequence: depending on whether the sampled trajectory is above a global baseline, all tokens are pushed up or down by the gradient update. This means good tokens are sometimes penalized and bad tokens rewarded. In contrast, SEARNN uses “global-local” losses, with a local loss attached to each step, which contains global information since the costs are computed on full sequences. To do so, we have to “sample” more trajectories through our roll-in/roll-outs. As a result, SEARNN does not require warm-starting to achieve good experimental performance. This distinction is quite relevant, because warm-starting means initializing in a specific region of parameter space which may be hard to escape. Exploration is less constrained when starting from scratch, leading to potentially larger gains over MLE. RL-based methods often involve optimizing additional models (baselines for REINFORCE and the critic for ACTOR-CRITIC), introducing more complexity (e.g. target networks). SEARNN requires no such auxiliary model. Finally, while maximizing the expected reward allows the RL approaches to use gradient descent even when the test error is not differentiable, it introduces another discrepancy between training and testing. Indeed, at test time, one does not decode by sampling from the stochastic policy. Instead, one selects the “best” sequence (according to a search algorithm, e.g. greedy or beam search). SEARNN avoids this adverse effect by computing costs using deterministic roll-outs – the same decoding technique as the one used at test time – so that its loss is even closer to the test loss. The associated price is that we approximate the gradient by fixing the costs, although they do depend on the parameters. RAML (Norouzi et al., 2016) is another RL-inspired approach. Though quite different from the previous papers we have cited, it is also related to SEARNN. Here, in order to mitigate the 0/1 aspect of MLE training, the authors introduce noise in the target outputs at each iteration. The amount of random noise is determined according to the associated reward (target outputs with a lot of noise obtain lower rewards and are thus less sampled). This idea is linked to the label smoothing technique (Szegedy et al., 2016), where the target distribution at each step is the addition of a Dirac (the usual MLE target) and a uniform distribution. In this sense, when using the KL loss SEARNN can be viewed as doing learned label smoothing, where we compute the target distribution from the intermediate costs rather than arbitrarily adding the uniform distribution. Conclusion and future work. We have described SEARNN, a novel algorithm that uses core ideas from the learning to search framework in order to alleviate the known limitations of MLE training for RNNs. By leveraging structured cost information obtained through strategic exploration, we define global-local losses. These losses provide global feedback related to the structured task at hand, distributed locally within the cells of the RNN. This alternative procedure allows us to train RNNs from scratch and to outperform MLE on three challenging structured prediction tasks.
Finally, we have proposed efficient scaling techniques that allow us to apply SEARNN on structured tasks for which the output vocabulary is very large, such as neural machine translation. The L2S literature provides several promising directions for further research. Adapting “bandit” L2S alternatives (Chang et al., 2015) would allow us to apply SEARNN to tasks where only a single trajectory may be observed at any given point (so trying every possible token is not possible). Focused costing (Goodman et al., 2016) – a mixed roll-out policy where a fixed number of learned steps are taken before resorting to the reference policy – could help us lift the quadratic dependency of SEARNN on the sequence length. Finally, targeted sampling (Goodman et al., 2016) – a smart sampling strategy that prioritizes cells where the model is uncertain of what to do – could enable more efficient exploration for large-scale tasks.
ACKNOWLEDGMENTS
We would like to thank Dzmitry Bahdanau for helping us with both the spelling and the machine translation experiments, as well as Hal Daumé for constructive feedback on both Learning to Search and an earlier version of the paper. This research was partially supported by the NSERC Discovery Grant RGPIN-2017-06936, by the ERC grant Activia (no. 307574), by a Google Research Award and by Samsung Research, Samsung Electronics.
A ALGORITHMS
A.1 SEARN (ADAPTED FROM DAUMÉ ET AL. (2009), FIGURE 1)
Algorithm 2 SEARN algorithm
1: Initialize a policy h with the reference policy π.
2: for i in 1 to N do  # Start of round i.
3:   Initialize the set of cost-sensitive examples S ← ∅.  # Create the intermediate dataset for round i.
4:   for (x, y) in the ground truth input/output structured pairs do  # Perform the roll-in (actually only run once).
5:     Compute predictions under the current policy h given x: (ŷ_1, ..., ŷ_{T_x}).
6:     for t in 1 to T_x do
7:       Compute input features φ(s_t) for context s_t = (x, ŷ_1, ..., ŷ_t).
8:       Initialize a cost vector c_t = ⟨⟩.  # Perform the roll-outs for each action to fill the cost vector.
9:       for each possible token a ∈ A do
10:        Get a full sequence ŷ_t(a) by applying an expert policy, starting from (x, ŷ_{1..t}, a).
11:        Collect the cost c_t(a) by comparing ŷ_t(a) and y.
12:      end for
13:      Add the cost-sensitive example (φ, c) to S.
14:    end for
15:  end for
16:  Learn a classifier h′ on S.
17:  Interpolate h ← βh′ + (1 − β)h.
18: end for
19: Return h.
A.2 SEARNN: REFERENCE ROLL-IN WITH AN RNN
As mentioned in Section 3, teacher forcing can be seen as the roll-in reference policy of the RNN. In this section, we detail this analogy further. Let us consider the case where we perform the roll-in up until the t-th cell. In order to be able to perform roll-outs from that t-th cell, a hidden state is needed. If we used a reference policy roll-in, this state is obtained by running the RNN until the t-th cell using the teacher forcing strategy, i.e. by conditioning the outputs on the ground truth. Finally, SEARNN also needs to know what the predictions for the full sequence were in order to compute the costs. When the reference roll-in is used, we obtain the predictions up until the t-th cell by simply copying the ground truth. Hence, we discard the outputs of the RNN that are before the t-th cell.
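For concreteness, the roll-out step of Algorithm 2 (lines 9-12) can be sketched in a few lines of Python; all names here are generic placeholders rather than the authors' implementation:

```python
def collect_cost_vector(roll_in_context, vocab, roll_out, cost):
    """One roll-out per candidate token, as in lines 9-12 of Algorithm 2.

    roll_in_context: the context s_t produced by the roll-in up to step t.
    roll_out(context, a): forces token a at step t, completes the sequence
        with the roll-out policy, and returns the full prediction.
    cost(y_hat): compares a completed sequence to the ground truth y.
    """
    return {a: cost(roll_out(roll_in_context, a)) for a in vocab}
```

This makes the computational burden explicit: one full completion per token in the vocabulary, for every annotated cell.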
B DESIGN DECISIONS
Choosing a classifier: to backpropagate or not to backpropagate? In standard L2S, the classifier and the feature extractor are clearly delineated. The latter is a fixed hand-crafted transformation applied on the input and the partial sequence that has already been predicted. One then has to pick a classifier, and its convergence properties carry over to the initial problem. In SEARNN, we choose the RNN itself as our classifier. The fixed feature extractor is reduced to the bare minimum (e.g. one-hot encoding) and the classifier performs feature learning afterwards. In this setting, the intermediate dataset is the initial state and all previous decisions (x, y_{1:t−1}) combined with the cost vector.5
5 In the encoder-decoder architecture, the decoder RNN does not receive x directly, but rather φ(x), the features extracted from the input by the encoder RNN. In this case, our SEARNN classifier includes both the encoder and the decoder RNNs.
An alternative way to look at RNNs is to consider the RNN cell as a shared classifier in its own right, and the beginning of the RNN (including the previous cells) as a feature extractor. One could then pick the RNN cell (instead of the full RNN) as the SEARNN classifier, in which case the intermediate dataset would be (h_{t−1}, y_{t−1})6 (the state at the previous step, combined with the previous decision) plus the cost vector.
6 One could also add ψ(x), features learned from the input through e.g. an attention mechanism.
While this last perspective – seeing the RNN cell as the shared classifier instead of the full RNN – is perhaps more intuitive, it actually fits the L2S framework less well. Indeed, there is no clear delineation between classifier and feature extractor, as these functions are carried out by different instances of the same RNN cell (and as such share weights). This means that the feature extraction in this case is learned instead of being fixed. This choice of classifier has a direct consequence on the optimization routine. In case we pick the RNN itself, each loss gradient has to be fully backpropagated through the network. On the other hand, if the classifier is the cell itself, then one should not backpropagate the gradient updates.
Reference policy. The reference policy defined by Daumé et al. (2009) picks the action which “minimizes the (corresponding) cost, assuming all future decisions are made optimally”, i.e. $\arg\min_{y_t} \min_{y_{t+1:T}} l(y_{1:T}, y)$. For the roll-in phase, this policy corresponds to always picking the ground truth, since it leads to predicting the full ground truth sequence and hence the best possible loss. For the roll-out phase, computing this policy explicitly is easy in a few select cases. However, in the general case it is not tractable. One then has to turn to heuristics, whose performance can be relatively poor. While Chang et al. (2015) tell us that overcoming a bad reference policy can be done through a careful choice of roll-in/roll-out policies, the fact remains that the better the reference policy is, the better performance will be. Choosing this heuristic well is then quite important. The most basic heuristic is to simply use the ground truth. Of course, one can readily see that it is not always optimal. For example, when the model skips a token and outputs the next one, a, instead, it may be more beneficial to also skip a in the roll-out phase rather than to repeat it. Although we mostly chose this basic heuristic in this paper, using tailored alternatives can yield better results for tasks where it is suboptimal, such as machine translation (see Appendix C.2).
C ADDITIONAL EXPERIMENTAL DETAILS
C.1 LOSSES
We now describe other losses we tried that did not perform as well as (or at least not better than) the ones presented in the main text. The first two follow the target learning principle, as LL does. Log-loss with cost-augmented softmax (LLCAS).
LLCAS is another attempt to leverage the structured information we have access to more meaningfully, through a slight modification of LL. We add information about the full costs in the exponential, following e.g. Pletscher et al. (2010); Gimpel & Smith (2010); Hazan & Urtasun (2010):
$L_t(s_t; c_t) = -\log\left( e^{s_t(a^\star) + \alpha c_t(a^\star)} / \sum_{i=1}^{A} e^{s_t(i) + \alpha c_t(i)} \right)$ where $a^\star = \arg\min_{a \in A} c_t(a)$ . (6)
α is a scaling parameter that ensures that the scores of the model and the costs are not too dissimilar, and can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Although it leverages the structured loss information more directly and thus should in principle mitigate the 0/1 nature of MLE better, we did not observe any significant improvements over LL, even after tuning the scaling parameter α.
Structured hinge loss (SHL). LLCAS can be seen as a smooth version of the (cost-sensitive) structured hinge loss used for structured SVMs (Tsochantaridis et al., 2005), which we also consider:
$L_t(s_t; c_t) = \max_{a \in A} \left( s_t(a) + c_t(a) \right) - s_t(a^\star)$ where $a^\star = \arg\min_{a \in A} c_t(a)$ . (7)
While this loss did enable the RNNs to learn, the overall performance was actually slightly worse than that of MLE. This may be due to the fact that RNNs have a harder time optimizing the resulting objective, compared to others more similar to the traditional MLE objective (which they have been tuned to train well on).
Consistent loss. This last loss is inspired by traditional structured prediction. Following Lee et al. (2004), we define:
$L_t(c_t) = \sum_{a \in A} c_t(a) \ln\left(1 + \exp(\tilde{s}_t(a))\right)$ where $\tilde{s}_t(a) = s_t(a) - \frac{1}{A} \sum_{a \in A} s_t(a)$ . (8)
Unfortunately, we encountered optimization issues and could not get significant improvements over the MLE baseline.
KL and label smoothing. We have seen that when the loss function is the Hamming loss, the reference policy is to simply output the ground truth. In this case, LL with a reference roll-in and roll-out is equivalent to MLE. Interestingly, in the same setup KL is also equivalent to an existing method: the label smoothing technique. Indeed, the vector of costs can be written as a vector with equal coordinates minus a one-hot vector with all its mass on the ground truth token. After transformation through a softmax operator, this yields the same target distribution as in label smoothing.
C.2 NMT
Custom sampling. For this experiment, we decided to sample 15 tokens per cell according to the top-k policy (as the vocabulary size is quite big, sampling tokens with low probability is not very attractive), as well as 10 neighboring ground truth labels around the cell. The rationale for these neighboring tokens is that skipping or repeating words is quite a common mistake in NMT. Custom reference policy. The very basic reference policy we have been using for the other experiments of the paper is too poor a heuristic for BLEU to perform well. Instead, we try adding every suffix in the ground truth sequence to the current predictions and we pick the one with the highest BLEU-1 score (using this strategy with BLEU-4 leads to unfortunate events when the best suffix to add is always the entire sequence, leading to uninformative costs). Reference roll-in. As mentioned in Section 6, we had to switch from a learned to a reference roll-in.
In addition to the existing problems of a weak reference policy (which affects a learned roll-in much more than a reference one) and the introduction of a harder optimization problem, there is another potential source of explanation: this may illustrate a gap in the standard reduction theory from the L2S framework. Indeed, the standard reduction analysis (Daumé et al., 2009; Chang et al., 2015) guarantees that the level of performance of the classifier on the reduced problem translates to overall performance on the initial problem. However, this does not take into account the fact that the reduced problem may be harder or easier, depending on the choice of roll-in/roll-out combination. In this case, it appears that using a learned roll-in may have led to a harder reduced problem and thus ultimately worse overall performance.
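Returning to the custom NMT reference policy of Appendix C.2 above, the following is a rough sketch of the suffix-based heuristic; `bleu1` stands for any smoothed BLEU-1 scorer and is an assumed callable, not part of the original text:

```python
def reference_action(prefix, ground_truth, bleu1):
    # Try appending every ground-truth suffix to the current predictions
    # and return the first token of the best-scoring completion.
    best_score, best_token = float("-inf"), None
    for start in range(len(ground_truth)):
        suffix = ground_truth[start:]
        score = bleu1(prefix + suffix, ground_truth)
        if score > best_score:
            best_score, best_token = score, suffix[0]
    return best_token
```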
1. What is the main contribution of the paper regarding RNNs?
2. What are the strengths of the proposed approach, particularly in resolving issues with local ground truth choices?
3. How does the reviewer assess the efficacy of the technique based on the experiments conducted?
4. What is the significance of scaling SEARNN to the IWSLT'14 de-en machine translation dataset?
5. Are there any minor issues or typos in the review that do not affect its overall positive assessment of the paper?
Review
This paper extends the concept of global rather than local optimization from the learning to search (L2S) literature to RNNs, specifically in the formulation and implementation of SEARNN. Their work takes steps to consider and resolve issues that arise from restricting optimization to only local ground truth choices, which traditionally results in label / transition bias from the teacher-forced model. The underlying issue (MLE training of RNNs) is well founded and referenced, their introduction and extension of the L2S techniques that may help resolve the issue are promising, and their experiments, both small and large, show the efficacy of their technique. I am also glad to see the exploration of scaling SEARNN to the IWSLT'14 de-en machine translation dataset. As noted by the authors, it is a dataset that has been tackled by related papers and, importantly, a well scaled dataset. For SEARNN and related techniques to see widespread adoption, the scaling analysis this paper provides is a fundamental component. This reviewer, whilst not having read all of the appendix in detail, also appreciates the additional insights provided by it, such as including losses that were attempted but did not result in appreciable gains. Overall I believe this is a paper that tackles an important topic area and provides a novel and persuasive potential solution to many of the issues it highlights. (extremely minor typo: "One popular possibility from L2S is go the full reduction route down to binary classification")
ICLR
Title
SEARNN: Training RNNs with global-local losses
Abstract
We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the “learning to search” (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.
1 INTRODUCTION
Recurrent neural networks (RNNs) have been quite successful in structured prediction applications such as machine translation (Sutskever et al., 2014), parsing (Ballesteros et al., 2016) or caption generation (Vinyals et al., 2015). These models use the same repeated cell (or unit) to output a sequence of tokens one by one. As each prediction takes into account all previous predictions, this cell learns to output the next token conditioned on the previous ones. The standard training loss for RNNs is derived from maximum likelihood estimation (MLE): we consider that the cell outputs a probability distribution at each step in the sequence, and we seek to maximize the probability of the ground truth. Unfortunately, this training loss is not a particularly close surrogate to the various test errors we want to minimize. A striking example of this discrepancy is that the MLE loss is close to 0/1: it makes no distinction between candidates that are close or far away from the ground truth (with respect to the structured test error), thus failing to exploit valuable information. Another example of train/test discrepancy is called exposure or exploration bias (Ranzato et al., 2016): in traditional MLE training the cell learns the conditional probability of the next token, based on the previous ground truth tokens – this is often referred to as teacher forcing. However, at test time the model does not have access to the ground truth, and thus feeds its own previous predictions to its next cell for prediction instead. Improving RNN training thus appears to be a relevant endeavor, which has received much attention recently. In particular, ideas coming from reinforcement learning (RL), such as the REINFORCE and ACTOR-CRITIC algorithms (Ranzato et al., 2016; Bahdanau et al., 2017), have been adapted to derive training losses that are more closely related to the test error that we actually want to minimize. In order to address the issues of MLE training, we propose instead to use ideas from the structured prediction field, in particular from the “learning to search” (L2S) approach introduced by Daumé et al. (2009) and later refined by Ross & Bagnell (2014) and Chang et al. (2015) among others. Contributions. In Section 2, we review the limitations of MLE training for RNNs in detail.
We also clarify some related claims made in the recent literature. In Section 3, we make explicit the strong links between RNNs and the L2S approach. In Section 4, we present SEARNN, a novel training algorithm for RNNs, using ideas from L2S to derive a global-local loss that is much closer to the test error than MLE. We demonstrate that this novel approach leads to significant improvements on two difficult structured prediction tasks, including a spelling correction problem recently introduced in Bahdanau et al. (2017). As this algorithm is quite costly, we investigate scaling solutions in Section 5. We explore a subsampling strategy that allows us to considerably reduce training times, while maintaining improved performance compared to MLE. We apply this new algorithm to machine translation and report significant improvements in Section 6. Finally, we contrast our novel approach to the related L2S and RL-inspired methods in Section 7.
2 TRADITIONAL RNN TRAINING AND ITS LIMITATIONS
RNNs are a large family of neural network models aimed at representing sequential data. To do so, they produce a sequence of states (h_1, ..., h_T) by recursively applying the same transformation (or cell) f on the sequential data: h_t = f(h_{t−1}, y_{t−1}, x), with h_0 an initial state and x an optional input. Many possible design choices fit this framework. We focus on a subset typically used for structured prediction, where we want to model the joint probability of a target sequence $(y_1, \ldots, y_{T_x}) \in A^{T_x}$ given an input x (e.g. the decoder RNN in the encoder-decoder architecture (Sutskever et al., 2014; Cho et al., 2014)). Here A is the alphabet of output tokens and T_x is the length of the output sequence associated with input x (though T_x may take different values, in the following we drop the dependency on x and use T for simplicity). To achieve this modeling, we feed h_t through a projection layer (i.e. a linear classifier) to obtain a vector of scores s_t over all possible tokens a ∈ A, and normalize these with a softmax layer (an exponential normalizer) to obtain a distribution o_t over tokens:
$h_t = f(h_{t-1}, y_{t-1}, x)$ ; $s_t = \mathrm{proj}(h_t)$ ; $o_t = \mathrm{softmax}(s_t)$ for all $1 \le t \le T$ . (1)
The vector o_t is interpreted as the predictive conditional distribution for the t-th token given by the RNN model, i.e. $p(a \mid y_1, \ldots, y_{t-1}, x) := o_t(a)$ for $a \in A$. Multiplying the values $o_t(y_t)$ together thus yields the joint probability of the sequence y defined by the RNN (thanks to the chain rule):
$p(y_1, \ldots, y_T \mid x) = p(y_1 \mid x)\, p(y_2 \mid y_1, x) \cdots p(y_T \mid y_1, \ldots, y_{T-1}, x) := \prod_{t=1}^{T} o_t(y_t)$ . (2)
As pointed out by Goodfellow et al. (2016), the underlying structure of these RNNs as graphical models is thus a complete graph, and there is no conditional independence assumption to simplify the difficult prediction task of computing $\arg\max_{y \in \mathcal{Y}} p(y \mid x)$. In practice, one typically uses either beam search to approximate this decoding, or a sequence of greedy predictions $\hat{y}_t := \arg\max_{a \in A} p(a \mid \hat{y}_1, \ldots, \hat{y}_{t-1}, x)$. If we use the “teacher forcing” regimen, where the inputs to the RNN cell are the ground truth tokens (as opposed to its own greedy predictions), we obtain the probability of each ground truth sequence according to the RNN model. We can then use MLE to derive a loss to train the RNN. One should note here that despite the fact that the individual output probabilities are at the token level, the MLE loss involves the joint probability (computed via the chain rule) and is thus at the sequence level.
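For concreteness, Eq. (1) corresponds to the following PyTorch sketch; the class name and layer sizes are illustrative, and the input x (e.g. encoder features) is assumed to be folded into the initial state h_0:

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One cell step of Eq. (1): h_t, s_t, o_t from (h_{t-1}, y_{t-1})."""
    def __init__(self, vocab_size, hidden_size, embed_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.cell = nn.GRUCell(embed_size, hidden_size)
        self.proj = nn.Linear(hidden_size, vocab_size)

    def forward(self, h_prev, y_prev):
        h_t = self.cell(self.embed(y_prev), h_prev)  # h_t = f(h_{t-1}, y_{t-1}, x)
        s_t = self.proj(h_t)                          # scores over all tokens
        o_t = torch.softmax(s_t, dim=-1)              # predictive distribution
        return h_t, s_t, o_t
```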
The limitations of MLE training. While this maximum likelihood style of training has been very successful in various applications, it suffers from several known issues, especially for structured prediction problems. The first one is called exposure or exploration bias (Ranzato et al., 2016). During training (with teacher forcing), the model learns the probabilities of the next tokens conditioned on the ground truth. But at test time, the model does not have access to the ground truth, and its output probabilities are conditioned on its own previous predictions instead. Therefore, if the predictions differ from the ground truth, the model has to continue based on an exploration path it has not seen during training, which means that it is less likely to make accurate predictions. This phenomenon, which is typical of sequential prediction tasks (Kääriäinen, 2006; Daumé et al., 2009), can lead to a compounding of errors, where mistakes in prediction accumulate and prevent good performance. The second major issue is the discrepancy between the training loss and the various test errors associated with the tasks for which RNNs are used (e.g. edit distance, F1 score, ...). Of course, a single surrogate is not likely to be a good approximation for all these errors. One salient illustration of that fact is that MLE ignores the information contained in structured losses. As it only focuses on maximizing the probability of the ground truth, it does not distinguish between a prediction that is very close to the ground truth and one that is very far away. Thus, most of the information given by a structured loss is not leveraged when using this approach. Local vs. sequence-level. Some recent papers (Ranzato et al., 2016; Wiseman & Rush, 2016) also point out the fact that since RNNs output next-token predictions, their loss is local instead of sequence-level, contrary to the error we typically want to minimize. This claim seems to contradict the standard RNN analysis, which postulates that the underlying graphical model is the complete graph: that is, the RNN outputs the probability of the next tokens conditioned on all the previous predictions. Thanks to the chain rule, one recovers the probability of the whole sequence. Thus the maximum likelihood training loss is indeed a sequence-level loss, even though we can decompose it into a product of local losses at each cell. However, if we assume that the RNN outputs are only conditioned on the last few predictions (instead of all previous ones), then we can indeed consider the MLE loss as local. In this setting, the underlying graphical model obeys Markovian constraints (as in maximum entropy Markov models (MEMMs)) rather than being the complete graph; this corresponds to the assumption that the information from the previous inputs is imperfectly carried through the network to the cell, preventing the model from accurately representing long-term dependencies. Given all these limitations, exploring novel ways of training RNNs appears to be a worthy endeavor, and this field has attracted a lot of interest in the past few years. While many papers try to adapt ideas coming from the reinforcement learning literature, we instead focus in this paper on the links we can draw with structured prediction, and in particular with the L2S approach.
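A teacher-forced MLE training step then decomposes exactly as in Eq. (2); this sketch reuses the hypothetical DecoderStep above and assumes ground_truth is a 1-D LongTensor starting with a BOS token:

```python
import torch.nn.functional as F

def mle_loss(decoder, h0, ground_truth):
    # -log p(y | x), accumulated one conditional at a time (chain rule).
    h, loss = h0, 0.0
    for t in range(1, len(ground_truth)):
        # Condition on the ground truth y_{t-1}, not the model's prediction.
        h, s_t, _ = decoder(h, ground_truth[t - 1].view(1))
        loss = loss + F.cross_entropy(s_t, ground_truth[t].view(1))
    return loss
```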
3 LINKS BETWEEN RNNS AND LEARNING TO SEARCH
The L2S approach to structured prediction was first introduced by Daumé et al. (2009). The main idea behind it is a learning reduction (Beygelzimer et al., 2016): transforming a complex learning problem (structured prediction) into a simpler one that we know how to solve (multiclass classification). To achieve this, Daumé et al. (2009) propose in their SEARN algorithm to train a shared local classifier to predict each token sequentially (conditioned on all inputs and all past decisions), thus searching greedily step by step in the big combinatorial space of structured outputs. The idea that tokens can be predicted one at a time, conditioned on their predecessors, is central to this approach. The training procedure is iterative: at the beginning of each round, one uses the current model (or policy1) to build an intermediate dataset to train the shared classifier on. The specificity of this new dataset is that each new sample is accompanied by a cost vector containing one entry per token in the output vocabulary A. To obtain these cost vectors, one starts by applying a roll-in policy to predict all the tokens up to T, thus building one trajectory (or exploration path) in the search space per sample in the initial dataset. Then, at each time step t, one picks arbitrarily each possible token (diverging from the roll-in trajectory) and then continues predicting to finish the modified trajectory using a roll-out policy. One then computes the cost of all the obtained sequences, and ends up with T vectors (one per time step) of size |A| (the number of possible tokens) for every sample. Figure 1 describes the same process for our SEARNN algorithm (although in this case the shared classifier is an RNN). One then extracts features from the “context” at each time step t (which encompasses the full input and the previous tokens predicted up to t during the roll-in).2 Combining the cost vectors with these features yields the new intermediary dataset. The original problem is thus reduced to multi-class cost-sensitive classification. Once the shared classifier has been fully trained on this new dataset, the policy is updated for the next round. The algorithm is described more formally in Algorithm 2 (see Appendix A). Theoretical guarantees for various policy updating rules are provided by e.g. Daumé et al. (2009) and Chang et al. (2015).
1 Note that the vocabulary used in this literature is slightly different from that of RNNs: tokens are rather referenced as actions, predictions as decisions and models as policies.
2 This is often referred to as “search state” in the L2S literature, but we prefer calling it context to avoid confusion with the RNN hidden state.
Roll-in and roll-out policies. The policies used to create the intermediate datasets fulfill different roles. The roll-in policy controls what part of the search space the algorithm explores, while the roll-out policy determines how the cost of each token is computed. The main possibilities for both roll-in and roll-out are explored by Chang et al. (2015). The reference policy tries to pick the optimal token based on the ground truth. During the roll-in, it corresponds to picking the ground truth. For the roll-out phase, while it is easy to compute an optimal policy in some cases (e.g. for the Hamming loss, where simply copying the ground truth is also optimal), it is often too expensive (e.g. for the BLEU score). One then uses a heuristic (in our experiments the reference policy is to copy the ground truth for both roll-in and roll-out unless indicated otherwise). The learned policy simply uses the current model instead, and the mixed policy stochastically combines both. According to Chang et al. (2015), the best combination when the reference policy is poor is to use a learned roll-in and a mixed roll-out.
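The three policy families can be wrapped up in a few lines. This is only a sketch under the assumption that a policy is any callable mapping a context to a token, and that the mixed policy interpolates per decision with probability beta:

```python
import random

def mixed_policy(learned, reference, beta=0.5):
    """Follow the reference policy with probability beta, else the model."""
    def policy(context):
        return reference(context) if random.random() < beta else learned(context)
    return policy
```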
Links to RNNs. One can identify the following interesting similarities between a greedy approach to RNNs and L2S. Both models handle sequence labeling problems by outputting tokens recursively, conditioned on past decisions. Further, the RNN “cell” is shared at each time step and can thus also be seen as a shared local classifier that is used to make structured predictions, as in the L2S framework. In addition, there is a clear equivalent to the choice of roll-in policy in RNNs. Indeed, teacher forcing (conditioning the outputs on the ground truth) can be seen as the roll-in reference policy for the RNN. Instead, if one conditions the outputs on the previous predictions of the model, then we obtain a roll-in learned policy. Despite these connections, many differences remain. Amongst them, the fact that no roll-outs are involved in standard RNN training. We thus consider next whether ideas coming from L2S could mitigate the limitations of MLE training for RNNs. In particular, one key property of L2S worth porting over to RNN training is that the former fully leverages structured loss information, contrarily to MLE as previously noted.
4 IMPROVING RNN TRAINING WITH L2S
Since we are interested in leveraging structured loss information, we can try to obtain it in the same fashion as L2S. The main tool that L2S uses in order to construct a cost-sensitive dataset is the roll-out policy. In many classical structured prediction use cases, one does not need to follow through with a policy because the “cost-to-go” that the roll-out yields is either free or easily computable from the ground truth. We are however also interested in cases where this information is unavailable, and roll-outs are needed to approximate it (e.g. for machine translation). This leads to several questions. How can we integrate roll-outs in an RNN model? How do we use this additional information, i.e. what loss do we use to train the model on? How do we make it computationally tractable?
The SEARNN Algorithm. The basic idea of the SEARNN algorithm is quite simple: we borrow from L2S the idea of using a global loss for each local cell of the RNN. As in L2S, we first compute a roll-in trajectory, following a specific roll-in policy. Then, at each step t of this trajectory, we compute the costs c_t(a) associated with each possible token a. To do so we pick a at this step and then follow a roll-out policy to finish the output sequence ŷ_a. We then compare ŷ_a with the ground truth using the test error itself, rather than a surrogate. By repeating this for the T steps we obtain T cost vectors. We use this information to derive one cost-sensitive training loss for each cell, which allows us to compute an update for the parameters of the model. The full process for one cell is illustrated in Figure 1. Our losses are global-local, in the sense that they appear at the local level but all contain sequence-level information. Our final loss is the sum over the T local losses. We provide the pseudo-code for SEARNN in Algorithm 1.
Algorithm 1 SEARNN algorithm (for a simple encoder-decoder network)
1: Initialize the weights ω of the RNN network.
2: for i in 1 to N do
3:   Sample B ground truth input/output structured pairs {(x_1, y_1), ..., (x_B, y_B)}
     # Perform the roll-in/roll-outs to get the costs. This step can be heavily parallelized.
4:   for b in 1 to B do
5:     Compute input features φ(x_b)
       # Roll-in.
6:     Run the RNN until the t-th cell with φ(x_b) as initial state by following the roll-in policy (see Appendix A.2 for details in the case of the reference roll-in policy)
7:     Store the sequence of hidden states in order to perform several roll-outs
8:     for t in 1 to T do
       # Roll-outs for all actions in order to collect the cost vector at the t-th cell.
9:       for a in 1 to A do
10:        Pick a decoding method (e.g. greedy or beam search)
11:        Run the RNN from the t-th cell to the end by first enforcing action a at cell t, and then following the decoding method.
12:        Collect the cost c_t^b(a) by comparing the obtained output sequence ŷ_t^b(a) to y_b
13:      end for
14:    end for
15:  end for
16:  Derive a loss for each cell from the collected costs
17:  Update the parameters of the network ω by doing a single gradient step
18: end for
Choosing a multi-class classifier. SEARNN appears quite similar to L2S, but there are a few key differences that merit more explanation. As the RNN cell can serve as a multi-class classifier, in SEARNN we could pick the cell as a (shallow) shared classifier, whose inputs are features extracted from the full context by the previous cells of the RNN. Instead, we pick the RNN itself, thus getting a (deep) shared classifier that also learns the features directly from the context. The difference between the two options is more thoroughly detailed in Appendix B. Arbitrarily picking a token a during the roll-out phase can then be done by emulating the teacher forcing technique: if predicted tokens are fed back to the model (say, if the roll-out policy requires it), we use a for the next cell (instead of the prediction the cell would have output). We also use a in the output sequence before computing the cost.
Choosing a cost-sensitive loss. We now also explain our choice for the training loss function derived from the cost vectors. One popular possibility from L2S is to go the full reduction route down to binary classification. However, this technique involves creating multiple new datasets (which is hard to implement as part of a neural network), as well as training $|A|^2$ binary classifiers. Instead, we simply work with the multi-class classifier encoded by the RNN cell, with training losses defined next. We now introduce two of the more successful losses we used (although we experimented with many others, which are detailed in Appendix C.1). In the following, each loss is defined at the cell level. The global loss is the sum of all T losses. $s_t(a)$ refers to the score output by cell t for token a.
Log-loss (LL). A central idea in L2S is to learn the target tokens the model should aim for. This is more meaningful than blindly imposing the ground truth as target, in particular when the model has deviated from the ground truth trajectory. Goldberg & Nivre (2012) refer to this technique as using dynamic oracles. In the context of RNN training, we call this approach target learning. Our first loss is thus a simple log-loss with the minimal cost token as target:
$L_t(s_t; c_t) = -\log\left( e^{s_t(a^\star)} / \sum_{i=1}^{A} e^{s_t(i)} \right)$ where $a^\star = \arg\min_{a \in A} c_t(a)$ . (3)
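In code, Eq. (3) is simply a cross-entropy against the minimum-cost token; a PyTorch sketch with illustrative names:

```python
import torch.nn.functional as F

def ll_loss(scores_t, costs_t):
    # Eq. (3): log-loss with the minimum-cost token a* as the target.
    target = costs_t.argmin()
    return F.cross_entropy(scores_t.unsqueeze(0), target.unsqueeze(0))
```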
It is structurally similar to MLE. The only difference is that instead of maximizing the probability of the ground truth action, we maximize the probability of the best performing action with respect to the cost vector. This similarity is a significant advantage from an optimization perspective: as RNNs have mostly been trained using MLE, this allows us to leverage decades of previous work. Note that when the reference policy is to simply copy the ground truth (which is sometimes optimal, e.g. when the test error is the Hamming loss), $a^\star$ is always the ground truth token. LL with reference roll-in and roll-out is in this case equivalent to MLE.
Kullback-Leibler divergence (KL). The log-loss approach appears to be relatively wasteful with the structured information we have access to, since we are only using the minimal cost value. To exploit this information more meaningfully, we consider the following approach: we convert each cost vector into a probability distribution (e.g. through a softmax operator) and then minimize a divergence between the current model distribution $P_M$ and the “target distribution” $P_C$ derived from the costs. As the MLE objective itself can be expressed as the KL divergence between $D_{gt}$ (a Dirac distribution with full mass on the ground truth) and $P_M$, we also choose to minimize the KL divergence between $P_C$ and $P_M$. Since the costs are considered fixed with respect to the parameters of the model, our loss is equivalent to the cross-entropy between $P_C$ and $P_M$:
$L_t(s_t; c_t) = -\sum_{a=1}^{A} P_C(a) \log P_M(a)$ where $P_C(a) = e^{-\alpha c_t(a)} / \sum_{i=1}^{A} e^{-\alpha c_t(i)}$ and $P_M(a) = e^{s_t(a)} / \sum_{i=1}^{A} e^{s_t(i)}$ . (4)
α is a scaling parameter that controls how peaky the target distributions are. It can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Compared to LL, KL leverages the structured loss information more directly and thus mitigates the 0/1 nature of MLE better.
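Eq. (4) likewise reduces to a cross-entropy between the two softmax distributions; a PyTorch sketch:

```python
import torch.nn.functional as F

def kl_loss(scores_t, costs_t, alpha=1.0):
    # Eq. (4): cross-entropy between P_C (softmax of negated, scaled costs,
    # treated as a constant target) and the model distribution P_M.
    target_dist = F.softmax(-alpha * costs_t, dim=-1).detach()
    log_model = F.log_softmax(scores_t, dim=-1)
    return -(target_dist * log_model).sum()
```

Detaching the target distribution mirrors the choice of treating the costs as fixed with respect to the model parameters.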
Optimization. Another difference between SEARN and RNNs is that RNNs are typically trained using stochastic gradient descent, whereas SEARN is a batch method. In order to facilitate training, we decide to adapt the optimization process of LOLS, an online variant of SEARN introduced by Chang et al. (2015). At each round, we select a random mini-batch of samples, and then take a single gradient step on the parameters with the associated loss (contrary to SEARN, where the reduced classifier is fully trained at each round). Note that we do not need the test error to be differentiable, as our costs $c_t(a)$ are fixed when we minimize our training loss. This corresponds to defining a different loss at each round, which is the way it is done in L2S. In this case our gradient is unbiased. However, if instead we consider that we define a single loss for the whole procedure, then the costs depend on the parameters of the model and we effectively compute an approximation of the gradient. Whether it is possible not to fix the costs and to backpropagate through the roll-in and roll-out remains an open problem.
Expected benefits. SEARNN can improve performance because of a few key properties. First, our losses leverage the test error, leading to potentially much better surrogates than MLE. Second, all of our training losses (even plain LL) leverage the structured information that is contained in the computed costs. This is much more satisfactory than MLE, which does not exploit this information and ignores nuances between good and bad candidate predictions. Indeed, our hypothesis is that the more complex the error is, the more SEARNN can improve performance. Third, the exploration bias we find in teacher forcing can be mitigated by using a “learned” roll-in policy, which may be the best roll-in policy for L2S applications according to Chang et al. (2015). Fourth, the loss at each cell is global, in the sense that the computed costs contain information about full sequences. This may help with the classical vanishing gradients problem that is prevalent in RNN training and motivated the introduction of specialized cells such as LSTMs (Hochreiter & Schmidhuber, 1997) or GRUs (Cho et al., 2014).
Experiments. In order to validate these theoretical benefits, we ran SEARNN on two datasets and compared its performance against that of MLE. For a fair comparison, we use the same optimization routine for all methods. We pick the one that performs best for the MLE baseline. Note that in all the experiments of the paper, we use greedy decoding, both for our cost computation and for evaluation. Furthermore, whenever we use a mixed roll-out we always use 0.5 as our mixin parameter, following Chang et al. (2015). The first dataset is the optical character recognition (OCR) dataset introduced in Taskar et al. (2003). The task is to output English words given an input sequence of handwritten characters. We use an encoder-decoder model with GRU cells (Cho et al., 2014) of size 128. For all runs, we use SGD with constant step-size 0.5 and a batch size of 64. The cost used in the SEARNN algorithm is the Hamming error. We report the total Hamming error, normalized by the total number of characters, on the test set. The second dataset is the Spelling dataset introduced in Bahdanau et al. (2017). The task is to recover correct text from a corrupted version. This dataset is synthetically generated from a text corpus (One Billion Word dataset): for each character, we decide with some fixed probability whether or not to replace it with a random one. The total number of tokens A is 43 (alphabet size plus a few special characters) and the maximum sequence length T is 10 (sentences from the corpus are clipped). We provide results for two sub-datasets generated with the following replacement probabilities: 0.3 and 0.5. For this task, we follow Bahdanau et al. (2017) and use the edit distance as our cost. It is defined as the edit distance between the predicted sequence and the ground truth sequence, divided by the ground truth length. We reuse the attention-based encoder-decoder model with GRU cells of size 100 described in Bahdanau et al. (2017). For all runs, we use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001 and a batch size of 128. Results are given in Table 1, including ACTOR-CRITIC (Bahdanau et al., 2017) runs on our data splits as an additional baseline. Key takeaways. First, SEARNN outperforms MLE by a significant margin on the two different tasks and datasets, which confirms our intuition that taking structured information into account enables better performance. Second, we observed that the best performing losses were those structurally close to MLE – LL and KL – whereas others (detailed in Appendix C.1) did not improve results. This might be explained by the fact that RNN architectures and optimization techniques have been evolving for decades with MLE training in mind. Third, the best roll-in/roll-out strategy appears to be combining a learned roll-in and a mixed roll-out, which is consistent with the claims from Chang et al. (2015).
Fourth, although we expect SEARNN to make stronger improvements over MLE on hard tasks (where a simplistic roll-out policy – akin to MLE – is suboptimal), we do get improvements even when outputting the ground truth (regardless of the current trajectory) is the optimal policy.
5 SCALING UP SEARNN
While SEARNN does provide significant improvements on the two tasks we have tested it on, it comes with a rather heavy price, since a large number of roll-outs (i.e. forward passes) have to be run in order to compute the costs. This number, $|A| \cdot T$, is proportional both to the length of the sequences and to the number of possible tokens. SEARNN is therefore not directly applicable to tasks with long output sequences or a large vocabulary size (such as machine translation), where computing so many forward passes becomes a computational bottleneck. Even though forward passes can be parallelized more heavily than backward ones (because they do not require maintaining activations in memory), their asymptotic cost remains in $O(dT)$, where d is the number of parameters of the model. There are a number of ways to mitigate this issue. In this paper, we focus on subsampling both the cells and the tokens when computing the costs. That is, instead of computing a cost vector for each cell, we only compute them for a subsample of all cells. Similarly, we also compute these costs only for a small portion of all possible tokens. The speedups we can expect from this strategy are large, since the total number of roll-outs is proportional to both the quantities we are decreasing.
Sampling strategies. First, we need to decide how we select the steps and tokens that we sample. We have chosen to sample steps uniformly when we do not take all of them. On the other hand, we have explored several different possibilities for token sampling. The first is indeed the uniform sampling strategy. The three alternative strategies we tried use the current state of our model: stochastic current policy sampling (where we use the current state of the stochastic policy to pick at random), a biased version of current policy sampling where we boost the scores of the low-probability tokens, and finally a top-k strategy where we take the top k tokens according to the current policy. Note that the latter strategy (top-k) can be seen as a simplified variant of targeted sampling (Goodman et al., 2016), another smarter strategy introduced to help L2S methods scale. Finally, in all strategies we always sample the ground truth action to make sure that our performance is at least as good as MLE.
Adapting our losses to sampling. Our losses require computing the costs of all possible tokens at a given step. One could still use LL by simply making the assumption that the token with minimum cost is always sampled. However, this is a rather strong assumption, and it means pushing down the scores of tokens that were not even sampled and hence could not compete with the others. To alleviate this issue, we replace the full softmax by a layer applied only on the tokens that were sampled (Jean et al., 2015). While the target can still only be among the sampled tokens, the unsampled tokens are left alone by the gradient update, at least for the first-order dependency. This trick is even more needed for KL, which otherwise requires a “default” score for unsampled tokens, adding a difficult-to-tune hyperparameter. We refer to these new losses as sLL and sKL.
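As an illustration of the sampled losses, here is a minimal PyTorch sketch of sLL under the top-k strategy; `sampled_costs` stands for a roll-out routine that returns costs only for the requested tokens and is an assumed callable, not part of the original text:

```python
import torch
import torch.nn.functional as F

def sampled_ll_loss(scores_t, sampled_costs, gt_token, k=5):
    # sLL sketch: restrict the softmax to the sampled tokens (top-k under
    # the current policy, plus the ground truth). Unsampled tokens receive
    # no direct gradient, matching the sampled-softmax trick above.
    sampled = torch.unique(torch.cat([scores_t.topk(k).indices,
                                      gt_token.view(1)]))
    sub_scores = scores_t[sampled]            # softmax over sampled tokens only
    target = sampled_costs(sampled).argmin()  # index of the best sampled token
    return F.cross_entropy(sub_scores.unsqueeze(0), target.unsqueeze(0))
```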
The main goal of these experiments is to assess whether or not combining subsampling with the SEARNN algorithm is a viable strategy. To do so we ran the method on the same two datasets that we used in the previous section. We decided to only focus on subsampling tokens as the vocabulary size is usually the blocking factor rather than the sequence length. Thus we sampled all cells. We evaluate different sampling strategies and training losses. For all experiments, we use the learned policy for roll-in and the mixed one for roll-out and we sample 5 tokens per cell. Finally, we use the same optimization techniques than in the previous experiment. Key takeaways. Results are given in Table 2. The analysis of this experiment yields interesting observations. First, and perhaps most importantly, subsampling appears to be a viable strategy to obtain a large part of the improvements of SEARNN while keeping computational costs under control. Indeed, we recover all of the improvements of the full method while only sampling a fraction of all possible tokens. Second, it appears that the best strategy for token sampling depends on the chosen loss. In the case of sLL, the top-k strategy performs best, whereas sKL favors the biased current policy. Third, it also seems like the best performing loss is task-dependent. Finally, this sampling technique yields a 5× running time speedup, therefore validating our scaling approach. 6 NEURAL MACHINE TRANSLATION. Having introduced a cheaper alternative SEARNN method enables us to apply it to a large-scale structured prediction task and to thus investigate whether our algorithm also improves upon MLE in more challenging real-life settings. We choose neural machine translation as out task, and the German-English translation track of the IWSLT 2014 campaign (Cettolo et al., 2014) as our dataset, as it was used in several related papers and thus allows for easier comparisons. We reuse the pre-processing of Ranzato et al. (2016), obtaining training, validation and test datasets of roughly 153k, 7k and 7k sentence pairs respectively with vocabularies of size 22822 words for English and 32009 words for German. For fair comparison to related methods, we use similar architectures. To compare with BSO and ACTOR-CRITIC, we use an encoder-decoder model with GRU cells of size 256, with a bidirectional encoder and single-layer RNNs. For the specific case of MIXER, we replace the recurrent encoder with a convolutional encoder as in Ranzato et al. (2016) . We use Adam as our optimizer, with an initial learning rate of 10−3 gradually decreasing to 10−5, and a batch size of 64. We select the best models on the validation set and report results both without and with dropout (0.3). Regarding the specific settings of SEARNN, we use a reference roll-in and a mixed roll-out. Additionally, we sample 25 tokens at each cell, following a mixed sampling strategy (detailed in Appendix C.2). We use the best performing loss on the validation set, i.e. the KL loss with scaling parameter 200. The traditional evaluation metric for such tasks is the BLEU score (Papineni et al., 2002). As we cannot use this corpus-wide metric to compute our sentence-level intermediate costs, we adopt the alternative smoothed BLEU score of Bahdanau et al. (2017) as our cost. We use a custom reference policy (detailed in Appendix C.2). We report the corpus-wide BLEU score on the test set in Table 3. Key takeaways. 
First, the significant improvements SEARNN obtains over MLE on this task (2 BLEU points without dropout) show that the algorithm can be profitably applied to large-scale, challenging structured prediction tasks at a reasonable computational cost. Second, our performance is on par or better than those of related methods with comparable baselines. Our performance using a convolutional encoder is similar to that of MIXER. Compared to BSO (Wiseman & Rush, 2016), our baseline, absolute performance and improvements are all stronger. While SEARNN presents similar improvements to ACTOR-CRITIC, the absolute performance is slightly worse. This can be explained in part by the fact that SEARNN requires twice less parameters during training. Finally, the learned roll-in policy performed poorly for this specific task, so we used instead a reference roll-in. While this observation seems to go against the L2S analysis from Chang et al. (2015), it is consistent with another experiment we ran: we tried applying scheduled sampling (Bengio et al., 2015) – which uses a schedule of mixed roll-ins – on this dataset, but did not succeed to obtain any improvements, despite using a careful schedule as proposed by their authors in private communications. One potential factor is that our reference policy is not good enough to yield valuable signal when starting from a poor roll-in. Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in. 7 DISCUSSION We now contrast SEARNN to several related algorithms, including traditional L2S approaches (which are not adapted to RNN training), and RNN training methods inspired by L2S and RL. Traditional L2S approaches. Although SEARNN is heavily inspired by SEARN, it is actually closer to LOLS (Chang et al., 2015), another L2S algorithm. As LOLS, SEARNN is a meta-algorithm where roll-in/roll-out strategies are customizable (we explored most combinations in our experiments). Our findings are in agreement with those of Chang et al. (2015): we advocate using the same combination, that is, a learned roll-in and a mixed roll-out. The one exception to this rule of thumb is when the associated reduced problem is too hard (as seems to be the case for machine translation), in which case we recommend switching to a reference roll-in. Moreover, as noted in Section 4, SEARNN adapts the optimization process of LOLS (the one difference being that our method is stochastic rather than online): each intermediate dataset is only used for a single gradient step. This means the policy interpolation is of a different nature than in SEARN where intermediate datasets are optimized for fully and the resulting policy is mixed with the previous one. However, despite the similarities we have just underlined, SEARNN presents significant differences from these traditional L2S algorithms. First off, and most importantly, SEARNN is a full integration of the L2S ideas to RNN training, whereas previous methods cannot be used for this purpose directly. Second, in order to achieve this adaptation we had to modify several design choices, including: • the intermediate dataset construction, which significantly differs from traditional L2S;3 • the careful choice of a classifier (those used in the L2S literature do not fit RNNs well); • the design of tailored surrogate loss functions that leverage cost information while being easy to optimize in RNNs. L2S-inspired approaches. 
Several other papers have tried using L2S-like ideas for better RNN training, starting with Bengio et al. (2015), which introduces “scheduled sampling” to avoid the exposure bias problem. The idea is to start with teacher forcing and to gradually use more and more model predictions instead of ground truth tokens during training. This is akin to a mixed roll-in – an idea which also appears in (Daumé et al., 2009).

Wiseman & Rush (2016, BSO) adapt one of the early variants of the L2S framework, the “Learning as Search Optimization” approach of Daumé & Marcu (2005, LASO), to train RNNs. However, LASO is quite different from the more modern SEARN family of algorithms that we focus on: it includes neither local classifiers nor roll-outs, and has much weaker theoretical guarantees. Additionally, BSO’s training loss is defined by violations in the beam-search procedure, yielding a very different algorithm from SEARNN. Furthermore, BSO requires being able to compute a meaningful loss on partial sequences, and thus does not handle general structured losses, unlike SEARNN. Finally, its ad hoc surrogate objective provides a very sparse sequence-level training signal, as mentioned by its authors, thus requiring warm-start.

Ballesteros et al. (2016) use a loss that is similar to LL for parsing, a specific task where costs-to-go are essentially free. This property is also a requirement for Sun et al. (2017), in which new gradient procedures are introduced to incorporate neural classifiers in the AGGREVATE (Ross & Bagnell, 2014) variant of L2S (see footnote 4). In contrast, SEARNN can be used on tasks without a free cost-to-go oracle.

Footnote 3: The feature extraction is fully integrated in the model and thus learnable, instead of being hand-crafted. Moreover, arbitrarily picking a token a during the roll-out phase to compute the associated costs requires feeding it back to the RNN (as opposed to simply adding the decision to the context before extracting features).
Footnote 4: Sun et al. (2017)’s algorithm simply replaces the classifier in AGGREVATE with a neural network. As it is trained on an ever-growing dataset, a natural gradient update is required to make the algorithm tractable.

RL-inspired approaches. In structured prediction tasks, we have access to ground truth trajectories, i.e. a lot more information than in traditional RL. One major direction of research has been to adapt RL techniques to leverage this additional information. The main idea is to try to optimize the expectation of the test error directly (under the stochastic policy parameterized by the RNN):

$L(\theta) = -\sum_{i=1}^{N} \mathbb{E}_{(y_1^i, \dots, y_T^i) \sim \pi(\theta)} \big[ r(y_1^i, \dots, y_T^i) \big]. \quad (5)$

Since we are taking an expectation over all possible structured outputs, the only term that depends on the parameters is the probability term (the tokens in the error term are fixed). This allows this loss function to support non-differentiable test errors, which is a key advantage. Of course, actually computing the expectation over an exponential number of possibilities is computationally intractable. To circumvent this issue, Shen et al. (2016) subsample trajectories according to the learned policy, while Ranzato et al. (2016); Rennie et al. (2016) use the REINFORCE algorithm, which essentially approximates the expectation with a single trajectory sample. Bahdanau et al. (2017) adapt the ACTOR-CRITIC algorithm, where a second critic network is trained to approximate the expectation.
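For concreteness, here is the standard single-sample REINFORCE estimate of the gradient of (5) (a textbook derivation, not taken from the paper), with $b$ a learned baseline:

$\nabla_\theta L(\theta) = -\sum_{i=1}^{N} \mathbb{E}_{\hat{y} \sim \pi(\theta)} \big[ r(\hat{y}) \, \nabla_\theta \log \pi_\theta(\hat{y}) \big] \approx -\sum_{i=1}^{N} \big( r(\hat{y}^i) - b \big) \, \nabla_\theta \log \pi_\theta(\hat{y}^i), \qquad \hat{y}^i \sim \pi(\theta).$

Since $\log \pi_\theta(\hat{y}) = \sum_{t=1}^{T} \log o_t(\hat{y}_t)$, every token in the sampled trajectory is reweighted by the same scalar $r(\hat{y}^i) - b$, which is precisely the sparse, sequence-level signal discussed next.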
While all these approaches report significant improvements on various tasks, one trait they share is that they only work when initialized from a good pre-trained model. This phenomenon is often explained by the sparsity of the information contained in “sequence-level” losses. Indeed, in the case of REINFORCE, no distinction is made between the tokens that form a sequence: depending on whether the sampled trajectory is above a global baseline, all tokens are pushed up or down by the gradient update. This means good tokens are sometimes penalized and bad tokens rewarded.

In contrast, SEARNN uses “global-local” losses, with a local loss attached to each step, which contains global information since the costs are computed on full sequences. To do so, we have to “sample” more trajectories through our roll-in/roll-outs. As a result, SEARNN does not require warm-starting to achieve good experimental performance. This distinction is quite relevant, because warm-starting means initializing in a specific region of parameter space which may be hard to escape. Exploration is less constrained when starting from scratch, leading to potentially larger gains over MLE. RL-based methods often involve optimizing additional models (baselines for REINFORCE and the critic for ACTOR-CRITIC), introducing more complexity (e.g. target networks). SEARNN does not.

Finally, while maximizing the expected reward allows the RL approaches to use gradient descent even when the test error is not differentiable, it introduces another discrepancy between training and testing. Indeed, at test time, one does not decode by sampling from the stochastic policy. Instead, one selects the “best” sequence (according to a search algorithm, e.g. greedy or beam search). SEARNN avoids this adverse effect by computing costs using deterministic roll-outs – the same decoding technique as the one used at test time – so that its loss is even closer to the test loss. The associated price is that we approximate the gradient by fixing the costs, although they do depend on the parameters.

RAML (Norouzi et al., 2016) is another RL-inspired approach. Though quite different from the previous papers we have cited, it is also related to SEARNN. Here, in order to mitigate the 0/1 aspect of MLE training, the authors introduce noise in the target outputs at each iteration. The amount of random noise is determined according to the associated reward (target outputs with a lot of noise obtain lower rewards and are thus less often sampled). This idea is linked to the label smoothing technique (Szegedy et al., 2016), where the target distribution at each step is the combination of a Dirac (the usual MLE target) and a uniform distribution. In this sense, when using the KL loss SEARNN can be viewed as doing learned label smoothing, where we compute the target distribution from the intermediate costs rather than arbitrarily adding the uniform distribution.

Conclusion and future work. We have described SEARNN, a novel algorithm that uses core ideas from the learning to search framework in order to alleviate the known limitations of MLE training for RNNs. By leveraging structured cost information obtained through strategic exploration, we define global-local losses. These losses provide a global feedback related to the structured task at hand, distributed locally within the cells of the RNN. This alternative procedure allows us to train RNNs from scratch and to outperform MLE on three challenging structured prediction tasks.
Finally, we have proposed efficient scaling techniques that allow us to apply SEARNN to structured tasks for which the output vocabulary is very large, such as neural machine translation.

The L2S literature provides several promising directions for further research. Adapting “bandit” L2S alternatives (Chang et al., 2015) would allow us to apply SEARNN to tasks where only a single trajectory may be observed at any given point (so trying every possible token is not possible). Focused costing (Goodman et al., 2016) – a mixed roll-out policy where a fixed number of learned steps are taken before resorting to the reference policy – could help us lift the quadratic dependency of SEARNN on the sequence length. Finally, targeted sampling (Goodman et al., 2016) – a smart sampling strategy that prioritizes cells where the model is uncertain of what to do – could enable more efficient exploration for large-scale tasks.

ACKNOWLEDGMENTS

We would like to thank Dzmitry Bahdanau for helping us with both the spelling and the machine translation experiments, as well as Hal Daumé for constructive feedback on both Learning to Search and an earlier version of the paper. This research was partially supported by the NSERC Discovery Grant RGPIN-2017-06936, by the ERC grant Activia (no. 307574), by a Google Research Award and by Samsung Research, Samsung Electronics.

A ALGORITHMS

A.1 SEARN (ADAPTED FROM DAUMÉ ET AL. (2009), FIGURE 1)

Algorithm 2 SEARN algorithm
1: Initialize a policy h with the reference policy π.
2: for i in 1 to N do   # Start of round i.
3:   Initialize the set of cost-sensitive examples S ← ∅.
     # Create the intermediate dataset for round i.
4:   for (x, y) in the ground truth input/output structured pairs do
     # Perform the roll-in (actually only run once).
5:     Compute predictions under the current policy, (ŷ_1, ..., ŷ_{T_x}) ∼ h, x.
6:     for t in 1 to T_x do
7:       Compute input features φ(s_t) for context s_t = (x, ŷ_1, ..., ŷ_t).
8:       Initialize a cost vector c_t = ⟨⟩.
       # Perform the roll-outs for each action to fill the cost vector.
9:       for each possible token a ∈ A do
10:        Get a full sequence ŷ_t(a) by applying an expert policy, starting from (x, ŷ_{1..t}, a).
11:        Collect the cost c_t(a) by comparing ŷ_t(a) and y.
12:      end for
13:      Add the cost-sensitive example (φ, c) to S.
14:    end for
15:  end for
16:  Learn a classifier h′ on S.
17:  Interpolate h ← βh′ + (1 − β)h.
18: end for
19: Return h.

A.2 SEARNN: REFERENCE ROLL-IN WITH AN RNN

As mentioned in Section 3, teacher forcing can be seen as the roll-in reference policy of the RNN. In this section, we detail this analogy further. Let us consider the case where we perform the roll-in up until the tth cell. In order to be able to perform roll-outs from that tth cell, a hidden state is needed. If we use a reference roll-in, this state is obtained by running the RNN until the tth cell using the teacher forcing strategy, i.e. by conditioning the outputs on the ground truth. Finally, SEARNN also needs to know what the predictions for the full sequence were in order to compute the costs. When the reference roll-in is used, we obtain the predictions up until the tth cell by simply copying the ground truth. Hence, we discard the outputs of the RNN that are before the tth cell.
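A minimal sketch of this reference roll-in (hypothetical helper names on our part; step_fn is assumed to wrap the embedding lookup and the RNN cell) caches the hidden states so that a roll-out can later be started from any cell:

def reference_roll_in(step_fn, h0, y_gt):
    # Teacher forcing: condition each step on the ground truth token.
    states, h = [h0], h0
    for token in y_gt:
        h = step_fn(token, h)
        states.append(h)
    # Under a reference roll-in, the predictions up to cell t are the ground
    # truth itself, so the RNN outputs before cell t are discarded.
    return states  # states[t] is the hidden state after consuming t ground truth tokens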
B DESIGN DECISIONS

Choosing a classifier: to backpropagate or not to backpropagate? In standard L2S, the classifier and the feature extractor are clearly delineated. The latter is a fixed hand-crafted transformation applied on the input and the partial sequence that has already been predicted. One then has to pick a classifier, whose convergence properties carry over to the initial problem.

In SEARNN, we choose the RNN itself as our classifier. The fixed feature extractor is reduced to the bare minimum (e.g. one-hot encoding) and the classifier performs feature learning afterwards. In this setting, the intermediate dataset is the initial state and all previous decisions (x, y_{1:t−1}), combined with the cost vector (see footnote 5).

Footnote 5: In the encoder-decoder architecture, the decoder RNN does not receive x directly, but rather φ(x), the features extracted from the input by the encoder RNN. In this case, our SEARNN classifier includes both the encoder and the decoder RNNs.

An alternative way to look at RNNs is to consider the RNN cell as a shared classifier in its own right, and the beginning of the RNN (including the previous cells) as a feature extractor. One could then pick the RNN cell (instead of the full RNN) as the SEARNN classifier, in which case the intermediate dataset would be (h_{t−1}, y_{t−1}) (the state at the previous step, combined with the previous decision) plus the cost vector (see footnote 6). While this last perspective – seeing the RNN cell as the shared classifier instead of the full RNN – is perhaps more intuitive, it actually fits the L2S framework less well. Indeed, there is no clear delineation between classifier and feature extractor, as these functions are carried out by different instances of the same RNN cell (and as such share weights). This means that the feature extraction in this case is learned instead of being fixed.

Footnote 6: One could also add ψ(x), features learned from the input through e.g. an attention mechanism.

This choice of classifier has a direct consequence on the optimization routine. In case we pick the RNN itself, then each loss gradient has to be fully backpropagated through the network. On the other hand, if the classifier is the cell itself, then one should not backpropagate the gradient updates.
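The following sketch makes this distinction concrete in PyTorch-like code (toy sizes and names are ours, purely illustrative): stopping the gradient at the previous hidden state turns the cell into a local classifier over fixed features.

import torch
import torch.nn as nn

cell, proj = nn.GRUCell(32, 64), nn.Linear(64, 10)   # hypothetical toy sizes
x_t = torch.randn(1, 32)                             # embedded previous token
h_prev = torch.randn(1, 64, requires_grad=True)      # context from earlier cells
target = torch.tensor([3])                           # e.g. the minimal-cost token

# Choice 1: the full RNN is the classifier -> backpropagate through h_prev.
loss_full = nn.functional.cross_entropy(proj(cell(x_t, h_prev)), target)

# Choice 2: the cell alone is the classifier -> treat the context as a fixed
# feature and stop the gradient at h_prev.
loss_local = nn.functional.cross_entropy(proj(cell(x_t, h_prev.detach())), target)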
Reference policy. The reference policy defined by Daumé et al. (2009) picks the action which “minimizes the (corresponding) cost, assuming all future decisions are made optimally”, i.e. $\arg\min_{y_t} \min_{y_{t+1:T}} l(y_{1:T}, y)$. For the roll-in phase, this policy corresponds to always picking the ground truth, since it leads to predicting the full ground truth sequence and hence the best possible loss. For the roll-out phase, computing this policy explicitly is easy in a few select cases. However, in the general case it is not tractable. One then has to turn to heuristics, whose performance can be relatively poor. While Chang et al. (2015) tell us that overcoming a bad reference policy can be done through a careful choice of roll-in/roll-out policies, the fact remains that the better the reference policy is, the better the performance will be. Choosing this heuristic well is therefore quite important. The most basic heuristic is to simply use the ground truth. Of course, one can readily see that it is not always optimal. For example, when the model skips a token and outputs the next one, a, instead, it may be more beneficial to also skip a in the roll-out phase rather than to repeat it. Although we mostly chose this basic heuristic in this paper, using tailored alternatives can yield better results for tasks where it is suboptimal, such as machine translation (see Appendix C.2).

C ADDITIONAL EXPERIMENTAL DETAILS

C.1 LOSSES

We now describe other losses we tried but that did not perform as well as (or at least not better than) the ones presented in the main text. The first two follow the target learning principle, as LL does.

Log-loss with cost-augmented softmax (LLCAS). LLCAS is another attempt to leverage the structured information we have access to more meaningfully, through a slight modification of LL. We add information about the full costs in the exponential, following e.g. Pletscher et al. (2010); Gimpel & Smith (2010); Hazan & Urtasun (2010):

$L_t(s_t; c_t) = -\log\left( e^{s_t(a^\star) + \alpha c_t(a^\star)} \Big/ \sum_{i=1}^{A} e^{s_t(i) + \alpha c_t(i)} \right)$ where $a^\star = \arg\min_{a \in A} c_t(a)$. (6)

α is a scaling parameter that ensures that the scores of the model and the costs are not too dissimilar, and can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Although it leverages the structured loss information more directly and thus should in principle mitigate the 0/1 nature of MLE better, we did not observe any significant improvements over LL, even after tuning the scaling parameter α.

Structured hinge loss (SHL). LLCAS can be seen as a smooth version of the (cost-sensitive) structured hinge loss used for structured SVMs (Tsochantaridis et al., 2005), which we also consider:

$L_t(s_t; c_t) = \max_{a \in A} \big( s_t(a) + c_t(a) \big) - s_t(a^\star)$ where $a^\star = \arg\min_{a \in A} c_t(a)$. (7)

While this loss did enable the RNNs to learn, the overall performance was actually slightly worse than that of MLE. This may be due to the fact that RNNs have a harder time optimizing the resulting objective, compared to others more similar to the traditional MLE objective (which they have been tuned to train well on).

Consistent loss. This last loss is inspired by traditional structured prediction. Following Lee et al. (2004), we define:

$L_t(c_t) = \sum_{a \in A} c_t(a) \ln\big(1 + \exp(\tilde{s}_t(a))\big)$ where $\tilde{s}_t(a) = s_t(a) - \frac{1}{A} \sum_{a \in A} s_t(a)$. (8)

Unfortunately, we encountered optimization issues and could not get significant improvements over the MLE baseline.

KL and label smoothing. We have seen that when the loss function is the Hamming loss, the reference policy is to simply output the ground truth. In this case, LL with a reference roll-in and roll-out is equivalent to MLE. Interestingly, in the same setup KL is also equivalent to an existing method: the label smoothing technique. Indeed, the vector of costs can be written as a vector with equal coordinates minus a one-hot vector with all its mass on the ground truth token. After transformation through a softmax operator, this yields the same target distribution as in label smoothing.

C.2 NMT

Custom sampling. For this experiment, we decided to sample 15 tokens per cell according to the top-k policy (as the vocabulary size is quite big, sampling tokens with low probability is not very attractive), as well as 10 neighboring ground truth labels around the cell. The rationale for these neighboring tokens is that skipping or repeating words is quite a common mistake in NMT.

Custom reference policy. The very basic reference policy we have been using for the other experiments of the paper is too poor a heuristic for BLEU to perform well. Instead, we try adding every suffix of the ground truth sequence to the current predictions, and we pick the one with the highest BLEU-1 score (using this strategy with BLEU-4 leads to unfortunate events where the best suffix to add is always the entire sequence, leading to uninformative costs).
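One natural reading of this custom reference policy (our interpretation; bleu1 is a hypothetical smoothed BLEU-1 scorer) is the following sketch, which returns the first token of the best-scoring completion:

def reference_action(prefix, ground_truth, bleu1):
    # Try appending every suffix of the (non-empty) ground truth to the current
    # predictions and keep the first token of the best-scoring completion.
    best_score, best_token = float('-inf'), None
    for k in range(len(ground_truth)):
        suffix = ground_truth[k:]
        score = bleu1(prefix + suffix, ground_truth)
        if score > best_score:
            best_score, best_token = score, suffix[0]
    return best_token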
Reference roll-in. As mentioned in Section 6, we had to switch from a learned to a reference roll-in. In addition to the existing problems of a weak reference policy (which affects a learned roll-in much more than a reference one) and the introduction of a harder optimization problem, there is another potential source of explanation: this may illustrate a gap in the standard reduction theory from the L2S framework. Indeed, the standard reduction analysis (Daumé et al., 2009; Chang et al., 2015) guarantees that the level of performance of the classifier on the reduced problem translates to overall performance on the initial problem. However, this does not take into account the fact that the reduced problem may be harder or easier, depending on the choice of roll-in/roll-out combination. In this case, it appears that using a learned roll-in may have led to a harder reduced problem and thus ultimately worse overall performance.
1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths of the paper, particularly in its literature review and adaptation of SEARN to RNNs?
3. What are the weaknesses of the paper, such as the lack of direct comparisons and experimental details?
4. Do you have any concerns regarding the novelty of the introduced losses and sampling methods?
5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Review
This paper proposes an adaptation of the SEARN algorithm to RNNs for generating text. In order to do so, the authors discuss various issues on how to scale the approach to large output vocabularies by sampling which actions the algorithm explores.

Pros:
- Good literature review. But the future work on bandits is already happening: paper accepted at ACL 2017: Bandit Structured Prediction for Neural Sequence-to-Sequence Learning. Julia Kreutzer, Artem Sokolov, Stefan Riezler.

Cons:
- The key argument of the paper is that SEARNN is a better IL-inspired algorithm than the previously proposed ones. However, there is no direct comparison, either theoretical or empirical, against them. In the examples on spelling using the dataset of Bahdanau et al. 2017, no comparison is made against their actor-critic method. Furthermore, given its simplicity, I would expect a comparison against scheduled sampling.
- A lot of important experimental details are in the appendices, and they differ among experiments. For example, while mixed roll-ins are used in most experiments, reference roll-ins are used in MT, which is odd since it is a bad option theoretically. Also, no details are given on how the mixing in the roll-outs was tuned. Finally, in the NMT comparison, while it is stated that a similar architecture is used in order to compare fairly against previous work, this ultimately is not the case, as acknowledged at least for MIXER. I would have expected the same encoder-decoder architecture to have been used for all the methods considered.
- The two losses introduced are not really new. The log-loss is just MLE, only assuming that instead of a fixed expert that always returns the same target, we have a dynamic one. Note that the notion of a dynamic expert is present in the SEARN paper too; Goldberg and Nivre just adapted it to transition-based dependency parsing. Similarly, since the KL loss is the same as XENT, why give it a new name?
- The top-k sampling method is essentially the same as the targeted exploration of Goodman et al. (2016), which the authors cite. Thus it is not a novel contribution.
- Not sure I see the difference between the stochastic nature of SEARNN and the online one of LOLS mentioned in Section 7. They both could be mini-batched similarly. Also, not sure I see why SEARNN can be used on any task, in comparison to other methods. They all seem to be equally capable.

Minor comments:
- Figure 1: what is the difference between "cost-sensitive loss" and just "loss"?
- Local vs sequence-level losses: the point in Ranzato et al. and Wiseman & Rush is that the losses they optimize (BLEU/ROUGE) do not decompose over the predictions of the RNNs.
- Can't see why SEARNN can help with the vanishing gradient problem. They seem rather orthogonal.
ICLR
Title SEARNN: Training RNNs with global-local losses

Abstract We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the “learning to search” (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.

1 INTRODUCTION

Recurrent neural networks (RNNs) have been quite successful in structured prediction applications such as machine translation (Sutskever et al., 2014), parsing (Ballesteros et al., 2016) or caption generation (Vinyals et al., 2015). These models use the same repeated cell (or unit) to output a sequence of tokens one by one. As each prediction takes into account all previous predictions, this cell learns to output the next token conditioned on the previous ones. The standard training loss for RNNs is derived from maximum likelihood estimation (MLE): we consider that the cell outputs a probability distribution at each step in the sequence, and we seek to maximize the probability of the ground truth.

Unfortunately, this training loss is not a particularly close surrogate to the various test errors we want to minimize. A striking example of discrepancy is that the MLE loss is close to 0/1: it makes no distinction between candidates that are close to or far away from the ground truth (with respect to the structured test error), thus failing to exploit valuable information. Another example of train/test discrepancy is called exposure or exploration bias (Ranzato et al., 2016): in traditional MLE training the cell learns the conditional probability of the next token, based on the previous ground truth tokens – this is often referred to as teacher forcing. However, at test time the model does not have access to the ground truth, and thus instead feeds its own previous predictions to its next cell for prediction.

Improving RNN training thus appears to be a relevant endeavor, which has received much attention recently. In particular, ideas coming from reinforcement learning (RL), such as the REINFORCE and ACTOR-CRITIC algorithms (Ranzato et al., 2016; Bahdanau et al., 2017), have been adapted to derive training losses that are more closely related to the test error that we actually want to minimize.

∗ Equal contribution.

In order to address the issues of MLE training, we propose instead to use ideas from the structured prediction field, in particular from the “learning to search” (L2S) approach introduced by Daumé et al. (2009) and later refined by Ross & Bagnell (2014) and Chang et al. (2015), among others.

Contributions. In Section 2, we review the limitations of MLE training for RNNs in detail.
We also clarify some related claims made in the recent literature. In Section 3, we make explicit the strong links between RNNs and the L2S approach. In Section 4, we present SEARNN, a novel training algorithm for RNNs, using ideas from L2S to derive a global-local loss that is much closer to the test error than MLE. We demonstrate that this novel approach leads to significant improvements on two difficult structured prediction tasks, including a spelling correction problem recently introduced in Bahdanau et al. (2017). As this algorithm is quite costly, we investigate scaling solutions in Section 5. We explore a subsampling strategy that allows us to considerably reduce training times, while maintaining improved performance compared to MLE. We apply this new algorithm to machine translation and report significant improvements in Section 6. Finally, we contrast our novel approach to the related L2S and RL-inspired methods in Section 7.

2 TRADITIONAL RNN TRAINING AND ITS LIMITATIONS

RNNs are a large family of neural network models aimed at representing sequential data. To do so, they produce a sequence of states $(h_1, \dots, h_T)$ by recursively applying the same transformation (or cell) $f$ on the sequential data: $h_t = f(h_{t-1}, y_{t-1}, x)$, with $h_0$ an initial state and $x$ an optional input.

Many possible design choices fit this framework. We focus on a subset typically used for structured prediction, where we want to model the joint probability of a target sequence $(y_1, \dots, y_{T_x}) \in A^{T_x}$ given an input $x$ (e.g. the decoder RNN in the encoder-decoder architecture (Sutskever et al., 2014; Cho et al., 2014)). Here $A$ is the alphabet of output tokens and $T_x$ is the length of the output sequence associated with input $x$ (though $T_x$ may take different values, in the following we drop the dependency on $x$ and use $T$ for simplicity). To achieve this modeling, we feed $h_t$ through a projection layer (i.e. a linear classifier) to obtain a vector of scores $s_t$ over all possible tokens $a \in A$, and normalize these with a softmax layer (an exponential normalizer) to obtain a distribution $o_t$ over tokens:

$h_t = f(h_{t-1}, y_{t-1}, x)\,; \quad s_t = \mathrm{proj}(h_t)\,; \quad o_t = \mathrm{softmax}(s_t) \qquad \forall\, 1 \le t \le T. \quad (1)$

The vector $o_t$ is interpreted as the predictive conditional distribution for the $t$th token given by the RNN model, i.e. $p(a \mid y_1, \dots, y_{t-1}, x) := o_t(a)$ for $a \in A$. Multiplying the values $o_t(y_t)$ together thus yields the joint probability of the sequence $y$ defined by the RNN (thanks to the chain rule):

$p(y_1, \dots, y_T \mid x) = p(y_1 \mid x)\, p(y_2 \mid y_1, x) \cdots p(y_T \mid y_1, \dots, y_{T-1}, x) := \prod_{t=1}^{T} o_t(y_t). \quad (2)$

As pointed out by Goodfellow et al. (2016), the underlying structure of these RNNs as graphical models is thus a complete graph, and there is no conditional independence assumption to simplify the difficult prediction task of computing $\arg\max_{y \in \mathcal{Y}} p(y \mid x)$. In practice, one typically uses either beam search to approximate this decoding, or a sequence of greedy predictions $\hat{y}_t := \arg\max_{a \in A} p(a \mid \hat{y}_1, \dots, \hat{y}_{t-1}, x)$.

If we use the “teacher forcing” regimen, where the inputs to the RNN cell are the ground truth tokens (as opposed to its own greedy predictions), we obtain the probability of each ground truth sequence according to the RNN model. We can then use MLE to derive a loss to train the RNN. One should note here that despite the fact that the individual output probabilities are at the token level, the MLE loss involves the joint probability (computed via the chain rule) and is thus at the sequence level.
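A minimal PyTorch-style sketch of this decoder step (toy sizes and names are ours, purely illustrative) makes the teacher forcing vs. greedy decoding distinction explicit:

import torch
import torch.nn as nn

V, E, H = 43, 32, 100                  # toy vocabulary/embedding/state sizes
embed, cell, proj = nn.Embedding(V, E), nn.GRUCell(E, H), nn.Linear(H, V)

def decode(targets, h0, bos, teacher_forcing=True):
    # targets: 1-D LongTensor of ground truth tokens; bos: (1,) start token.
    h, y_prev, log_probs = h0, bos, []
    for t in range(len(targets)):
        h = cell(embed(y_prev), h)                 # h_t = f(h_{t-1}, y_{t-1}, x)
        o_t = torch.log_softmax(proj(h), dim=-1)   # log o_t = log softmax(proj(h_t))
        log_probs.append(o_t)
        # teacher forcing feeds the ground truth; greedy feeds the model's own prediction
        y_prev = targets[t:t + 1] if teacher_forcing else o_t.argmax(dim=-1)
    return log_probs  # MLE loss: -sum over t of log_probs[t][0, targets[t]]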
The limitations of MLE training. While this maximum likelihood style of training has been very successful in various applications, it suffers from several known issues, especially for structured prediction problems. The first one is called exposure or exploration bias (Ranzato et al., 2016). During training (with teacher forcing), the model learns the probabilities of the next tokens conditioned on the ground truth. But at test time, the model does not have access to the ground truth, and output probabilities are conditioned on its own previous predictions instead. Therefore, if the predictions differ from the ground truth, the model has to continue based on an exploration path it has not seen during training, which means that it is less likely to make accurate predictions. This phenomenon, which is typical of sequential prediction tasks (Kääriäinen, 2006; Daumé et al., 2009), can lead to a compounding of errors, where mistakes in prediction accumulate and prevent good performance.

The second major issue is the discrepancy between the training loss and the various test errors associated with the tasks for which RNNs are used (e.g. edit distance, F1 score...). Of course, a single surrogate is not likely to be a good approximation for all of these errors. One salient illustration of that fact is that MLE ignores the information contained in structured losses. As it only focuses on maximizing the probability of the ground truth, it does not distinguish between a prediction that is very close to the ground truth and one that is very far away. Thus, most of the information given by a structured loss is not leveraged when using this approach.

Local vs. sequence-level. Some recent papers (Ranzato et al., 2016; Wiseman & Rush, 2016) also point out the fact that since RNNs output next-token predictions, their loss is local instead of sequence-level, contrary to the error we typically want to minimize. This claim seems to contradict the standard RNN analysis, which postulates that the underlying graphical model is the complete graph: that is, the RNN outputs the probability of the next tokens conditioned on all the previous predictions. Thanks to the chain rule, one recovers the probability of the whole sequence. Thus the maximum likelihood training loss is indeed a sequence-level loss, even though we can decompose it into a product of local losses at each cell. However, if we assume that the RNN outputs are only conditioned on the last few predictions (instead of all previous ones), then we can indeed consider the MLE loss as local. In this setting, the underlying graphical model obeys Markovian constraints (as in maximum entropy Markov models (MEMMs)) rather than being the complete graph; this corresponds to the assumption that the information from the previous inputs is imperfectly carried through the network to the cell, preventing the model from accurately representing long-term dependencies.

Given all these limitations, exploring novel ways of training RNNs appears to be a worthy endeavor, and this field has attracted a lot of interest in the past few years. While many papers try to adapt ideas coming from the reinforcement learning literature, we instead focus in this paper on the links we can draw with structured prediction, and in particular with the L2S approach.

3 LINKS BETWEEN RNNS AND LEARNING TO SEARCH

The L2S approach to structured prediction was first introduced by Daumé et al. (2009).
The main idea behind it is a learning reduction (Beygelzimer et al., 2016): transforming a complex learning problem (structured prediction) into a simpler one that we know how to solve (multiclass classification). To achieve this, Daumé et al. (2009) propose in their SEARN algorithm to train a shared local classifier to predict each token sequentially (conditioned on all inputs and all past decisions), thus searching greedily step by step in the big combinatorial space of structured outputs. The idea that tokens can be predicted one at a time, conditioned on their predecessors, is central to this approach.

The training procedure is iterative: at the beginning of each round, one uses the current model (or policy; see footnote 1) to build an intermediate dataset to train the shared classifier on. The specificity of this new dataset is that each new sample is accompanied by a cost vector containing one entry per token in the output vocabulary A. To obtain these cost vectors, one starts by applying a roll-in policy to predict all the tokens up to T, thus building one trajectory (or exploration path) in the search space per sample in the initial dataset. Then, at each time step t, one arbitrarily picks each possible token (diverging from the roll-in trajectory) and then continues predicting to finish the modified trajectory using a roll-out policy. One then computes the cost of all the obtained sequences, and ends up with T vectors (one per time step) of size |A| (the number of possible tokens) for every sample. Figure 1 describes the same process for our SEARNN algorithm (although in this case the shared classifier is an RNN).

One then extracts features from the “context” at each time step t, which encompasses the full input and the previous tokens predicted up to t during the roll-in (see footnote 2). Combining the cost vectors with these features yields the new intermediary dataset. The original problem is thus reduced to multi-class cost-sensitive classification. Once the shared classifier has been fully trained on this new dataset, the policy is updated for the next round. The algorithm is described more formally in Algorithm 2 (see Appendix A). Theoretical guarantees for various policy updating rules are provided by e.g. Daumé et al. (2009) and Chang et al. (2015).

Footnote 1: Note that the vocabulary used in this literature is slightly different from that of RNNs: tokens are rather referred to as actions, predictions as decisions and models as policies.
Footnote 2: This is often referred to as the “search state” in the L2S literature, but we prefer calling it context to avoid confusion with the RNN hidden state.

Roll-in and roll-out policies. The policies used to create the intermediate datasets fulfill different roles. The roll-in policy controls what part of the search space the algorithm explores, while the roll-out policy determines how the cost of each token is computed. The main possibilities for both roll-in and roll-out are explored by Chang et al. (2015). The reference policy tries to pick the optimal token based on the ground truth. During the roll-in, it corresponds to picking the ground truth. For the roll-out phase, while it is easy to compute an optimal policy in some cases (e.g. for the Hamming loss, where simply copying the ground truth is also optimal), it is often too expensive (e.g. for the BLEU score). One then uses a heuristic (in our experiments the reference policy is to copy the ground truth for both roll-in and roll-out unless indicated otherwise). The learned policy simply uses the current model instead, and the mixed policy stochastically combines both. According to Chang et al. (2015), the best combination when the reference policy is poor is to use a learned roll-in and a mixed roll-out.
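A minimal sketch of the mixed policy (illustrative names of ours; choosing between reference and learned per step is one common instantiation, an assumption on our part):

import random

def mixed_policy(reference_token, learned_token, beta=0.5):
    # Follow the reference policy with probability beta and the learned one
    # otherwise (the paper fixes the mixing parameter to 0.5 for roll-outs).
    return reference_token if random.random() < beta else learned_token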
Links to RNNs. One can identify the following interesting similarities between a greedy approach to RNNs and L2S. Both models handle sequence labeling problems by outputting tokens recursively, conditioned on past decisions. Further, the RNN “cell” is shared at each time step and can thus also be seen as a shared local classifier that is used to make structured predictions, as in the L2S framework. In addition, there is a clear equivalent to the choice of roll-in policy in RNNs. Indeed, teacher forcing (conditioning the outputs on the ground truth) can be seen as the roll-in reference policy for the RNN. Instead, if one conditions the outputs on the previous predictions of the model, then we obtain a roll-in learned policy.

Despite these connections, many differences remain. Amongst them is the fact that no roll-outs are involved in standard RNN training. We thus consider next whether ideas coming from L2S could mitigate the limitations of MLE training for RNNs. In particular, one key property of L2S worth porting over to RNN training is that the former fully leverages structured loss information, contrary to MLE, as previously noted.

4 IMPROVING RNN TRAINING WITH L2S

Since we are interested in leveraging structured loss information, we can try to obtain it in the same fashion as L2S. The main tool that L2S uses in order to construct a cost-sensitive dataset is the roll-out policy. In many classical structured prediction use cases, one does not need to follow through with a policy because the “cost-to-go” that the roll-out yields is either free or easily computable from the ground truth. We are however also interested in cases where this information is unavailable, and roll-outs are needed to approximate it (e.g. for machine translation). This leads to several questions. How can we integrate roll-outs in an RNN model? How do we use this additional information, i.e. what loss do we use to train the model on? How do we make it computationally tractable?

The SEARNN Algorithm. The basic idea of the SEARNN algorithm is quite simple: we borrow from L2S the idea of using a global loss for each local cell of the RNN. As in L2S, we first compute a roll-in trajectory, following a specific roll-in policy. Then, at each step t of this trajectory, we compute the costs $c_t(a)$ associated with each possible token $a$. To do so we pick $a$ at this step and then follow a roll-out policy to finish the output sequence $\hat{y}_a$. We then compare $\hat{y}_a$ with the ground truth using the test error itself, rather than a surrogate. By repeating this for the T steps we obtain T cost vectors. We use this information to derive one cost-sensitive training loss for each cell, which allows us to compute an update for the parameters of the model. The full process for one cell is illustrated in Figure 1. Our losses are global-local, in the sense that they appear at the local level but all contain sequence-level information. Our final loss is the sum over the T local losses. We provide the pseudo-code for SEARNN in Algorithm 1.

Algorithm 1 SEARNN algorithm (for a simple encoder-decoder network)
1: Initialize the weights ω of the RNN network.
2: for i in 1 to N do
3:   Sample B ground truth input/output structured pairs {(x_1, y_1), ..., (x_B, y_B)}
     # Perform the roll-in/roll-outs to get the costs. This step can be heavily parallelized.
4:   for b in 1 to B do
5:     Compute input features φ(x_b)
       # Roll-in.
6:     Run the RNN until the tth cell with φ(x_b) as initial state by following the roll-in policy (see Appendix A.2 for details in the case of the reference roll-in policy)
7:     Store the sequence of hidden states in order to perform several roll-outs
8:     for t in 1 to T do
       # Roll-outs for all actions in order to collect the cost vector at the tth cell.
9:       for a in 1 to A do
10:        Pick a decoding method (e.g. greedy or beam search)
11:        Run the RNN from the tth cell to the end by first enforcing action a at cell t, and then following the decoding method.
12:        Collect the cost $c_t^b(a)$ by comparing the obtained output sequence $\hat{y}_t^b(a)$ to $y_b$
13:      end for
14:    end for
15:  end for
16:  Derive a loss for each cell from the collected costs
17:  Update the parameters of the network ω by doing a single gradient step
18: end for

Choosing a multi-class classifier. SEARNN appears quite similar to L2S, but there are a few key differences that merit more explanation. As the RNN cell can serve as a multi-class classifier, in SEARNN we could pick the cell as a (shallow) shared classifier, whose inputs are features extracted from the full context by the previous cells of the RNN. Instead, we pick the RNN itself, thus getting a (deep) shared classifier that also learns the features directly from the context. The difference between the two options is more thoroughly detailed in Appendix B. Arbitrarily picking a token a during the roll-out phase can then be done by emulating the teacher forcing technique: if predicted tokens are fed back to the model (say, if the roll-out policy requires it), we use a for the next cell (instead of the prediction the cell would have output). We also use a in the output sequence before computing the cost.

Choosing a cost-sensitive loss. We now also explain our choice for the training loss function derived from the cost vectors. One popular possibility from L2S is to go the full reduction route down to binary classification. However, this technique involves creating multiple new datasets (which is hard to implement as part of a neural network), as well as training $|A|^2$ binary classifiers. Instead, we simply work with the multi-class classifier encoded by the RNN cell, with training losses defined next.

We now introduce two of the more successful losses we used (although we experimented with many others, which are detailed in Appendix C.1). In the following, each loss is defined at the cell level. The global loss is the sum of all T losses. $s_t(a)$ refers to the score output by cell t for token a.

Log-loss (LL). A central idea in L2S is to learn the target tokens the model should aim for. This is more meaningful than blindly imposing the ground truth as target, in particular when the model has deviated from the ground truth trajectory. Goldberg & Nivre (2012) refer to this technique as using dynamic oracles. In the context of RNN training, we call this approach target learning. Our first loss is thus a simple log-loss with the minimal cost token as target:

$L_t(s_t; c_t) = -\log\left( e^{s_t(a^\star)} \Big/ \sum_{i=1}^{A} e^{s_t(i)} \right)$ where $a^\star = \arg\min_{a \in A} c_t(a)$. (3)

It is structurally similar to MLE. The only difference is that instead of maximizing the probability of the ground truth action, we maximize the probability of the best performing action with respect to the cost vector. This similarity is a significant advantage from an optimization perspective: as RNNs have mostly been trained using MLE, this allows us to leverage decades of previous work. Note that when the reference policy is to simply copy the ground truth (which is sometimes optimal, e.g. when the test error is the Hamming loss), $a^\star$ is always the ground truth token. LL with reference roll-in and roll-out is in this case equivalent to MLE.
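A one-function sketch of (3) in PyTorch-like code (an illustrative helper of ours; scores and costs are the per-cell vectors $s_t$ and $c_t$):

import torch

def log_loss(scores, costs):
    # Eq. (3): cross-entropy with the minimal-cost token as the target.
    target = costs.argmin()
    return torch.nn.functional.cross_entropy(scores.unsqueeze(0), target.unsqueeze(0))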
Kullback-Leibler divergence (KL). The log-loss approach appears to be relatively wasteful with the structured information we have access to, since we are only using the minimal cost value. To exploit this information more meaningfully, we consider the following approach: we convert each cost vector into a probability distribution (e.g. through a softmax operator) and then minimize a divergence between the current model distribution $P_M$ and the “target distribution” $P_C$ derived from the costs. As the MLE objective itself can be expressed as the KL divergence between $D_{gt}$ (a Dirac distribution with full mass on the ground truth) and $P_M$, we also choose to minimize the KL divergence between $P_C$ and $P_M$. Since the costs are considered fixed with respect to the parameters of the model, our loss is equivalent to the cross-entropy between $P_C$ and $P_M$:

$L_t(s_t; c_t) = -\sum_{a=1}^{A} P_C(a) \log P_M(a)$, where $P_C(a) = e^{-\alpha c_t(a)} \Big/ \sum_{i=1}^{A} e^{-\alpha c_t(i)}$ and $P_M(a) = e^{s_t(a)} \Big/ \sum_{i=1}^{A} e^{s_t(i)}$. (4)

α is a scaling parameter that controls how peaky the target distributions are. It can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Compared to LL, KL leverages the structured loss information more directly and thus mitigates the 0/1 nature of MLE better.
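The corresponding sketch for (4) (again an illustrative helper of ours; detaching the cost-derived target reflects that the costs are treated as fixed):

import torch

def kl_loss(scores, costs, alpha=1.0):
    # Eq. (4): cross-entropy between P_C = softmax(-alpha * c_t) and
    # P_M = softmax(s_t); the target distribution carries no gradient.
    p_c = torch.softmax(-alpha * costs, dim=-1).detach()
    log_p_m = torch.log_softmax(scores, dim=-1)
    return -(p_c * log_p_m).sum()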
Optimization. Another difference between SEARN and RNNs is that RNNs are typically trained using stochastic gradient descent, whereas SEARN is a batch method. In order to facilitate training, we decide to adapt the optimization process of LOLS, an online variant of SEARN introduced by Chang et al. (2015). At each round, we select a random mini-batch of samples, and then take a single gradient step on the parameters with the associated loss (contrary to SEARN, where the reduced classifier is fully trained at each round).

Note that we do not need the test error to be differentiable, as our costs $c_t(a)$ are fixed when we minimize our training loss. This corresponds to defining a different loss at each round, which is the way it is done in L2S. In this case our gradient is unbiased. However, if instead we consider that we define a single loss for the whole procedure, then the costs depend on the parameters of the model and we effectively compute an approximation of the gradient. Whether it is possible not to fix the costs and to backpropagate through the roll-in and roll-out remains an open problem.

Expected benefits. SEARNN can improve performance because of a few key properties. First, our losses leverage the test error, leading to potentially much better surrogates than MLE. Second, all of our training losses (even plain LL) leverage the structured information that is contained in the computed costs. This is much more satisfactory than MLE, which does not exploit this information and ignores nuances between good and bad candidate predictions. Indeed, our hypothesis is that the more complex the error is, the more SEARNN can improve performance. Third, the exploration bias we find in teacher forcing can be mitigated by using a “learned” roll-in policy, which may be the best roll-in policy for L2S applications according to Chang et al. (2015). Fourth, the loss at each cell is global, in the sense that the computed costs contain information about full sequences. This may help with the classical vanishing gradients problem that is prevalent in RNN training and motivated the introduction of specialized cells such as LSTMs (Hochreiter & Schmidhuber, 1997) or GRUs (Cho et al., 2014).

Experiments. In order to validate these theoretical benefits, we ran SEARNN on two datasets and compared its performance against that of MLE. For a fair comparison, we use the same optimization routine for all methods. We pick the one that performs best for the MLE baseline. Note that in all the experiments of the paper, we use greedy decoding, both for our cost computation and for evaluation. Furthermore, whenever we use a mixed roll-out we always use 0.5 as our mixing parameter, following Chang et al. (2015).

The first dataset is the optical character recognition (OCR) dataset introduced in Taskar et al. (2003). The task is to output English words given an input sequence of handwritten characters. We use an encoder-decoder model with GRU cells (Cho et al., 2014) of size 128. For all runs, we use SGD with constant step-size 0.5 and a batch size of 64. The cost used in the SEARNN algorithm is the Hamming error. We report the total Hamming error, normalized by the total number of characters, on the test set.

The second dataset is the Spelling dataset introduced in Bahdanau et al. (2017). The task is to recover correct text from a corrupted version. This dataset is synthetically generated from a text corpus (One Billion Word dataset): for each character, we decide with some fixed probability whether or not to replace it with a random one. The total number of tokens A is 43 (alphabet size plus a few special characters) and the maximum sequence length T is 10 (sentences from the corpus are clipped). We provide results for two sub-datasets generated with the following replacement probabilities: 0.3 and 0.5. For this task, we follow Bahdanau et al. (2017) and use the edit distance as our cost. It is defined as the edit distance between the predicted sequence and the ground truth sequence, divided by the ground truth length. We reuse the attention-based encoder-decoder model with GRU cells of size 100 described in (Bahdanau et al., 2017). For all runs, we use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001 and a batch size of 128. Results are given in Table 1, including ACTOR-CRITIC (Bahdanau et al., 2017) runs on our data splits as an additional baseline.

Key takeaways. First, SEARNN outperforms MLE by a significant margin on the two different tasks and datasets, which confirms our intuition that taking structured information into account enables better performance. Second, we observed that the best performing losses were those structurally close to MLE – LL and KL – whereas others (detailed in Appendix C.1) did not improve results. This might be explained by the fact that RNN architectures and optimization techniques have been evolving for decades with MLE training in mind. Third, the best roll-in/roll-out strategy appears to be combining a learned roll-in and a mixed roll-out, which is consistent with the claims from Chang et al. (2015).
Fourth, although we expect SEARNN to make stronger improvements over MLE on hard tasks (where a simplistic roll-out policy – akin to MLE – is suboptimal), we do get improvements even when outputting the ground truth (regardless of the current trajectory) is the optimal policy.

5 SCALING UP SEARNN

While SEARNN does provide significant improvements on the two tasks we have tested it on, it comes with a rather heavy price, since a large number of roll-outs (i.e. forward passes) have to be run in order to compute the costs. This number, $|A|T$, is proportional both to the length of the sequences and to the number of possible tokens. SEARNN is therefore not directly applicable to tasks with large output sequences or vocabulary sizes (such as machine translation), where computing so many forward passes becomes a computational bottleneck. Even though forward passes can be parallelized more heavily than backward ones (because they do not require maintaining activations in memory), their asymptotic cost remains in $O(dT)$, where $d$ is the number of parameters of the model.

There are a number of ways to mitigate this issue. In this paper, we focus on subsampling both the cells and the tokens when computing the costs. That is, instead of computing a cost vector for each cell, we only compute them for a subsample of all cells. Similarly, we also compute these costs only for a small portion of all possible tokens. The speedups we can expect from this strategy are large, since the total number of roll-outs is proportional to both the quantities we are decreasing.

Sampling strategies. First, we need to decide how we select the steps and tokens that we sample. We have chosen to sample steps uniformly when we do not take all of them. On the other hand, we have explored several different possibilities for token sampling. The first is indeed the uniform sampling strategy. The three alternative samplings we tried use the current state of our model: stochastic current policy sampling (where we use the current state of the stochastic policy to pick at random), a biased version of current policy sampling where we boost the scores of the low-probability tokens, and finally a top-k strategy where we take the top k tokens according to the current policy. Note that the latter strategy (top-k) can be seen as a simplified variant of targeted sampling (Goodman et al., 2016), another smarter strategy introduced to help L2S methods scale. Finally, in all strategies we always sample the ground truth action to make sure that our performance is at least as good as MLE.

Adapting our losses to sampling. Our losses require computing the costs of all possible tokens at a given step. One could still use LL by simply making the assumption that the token with minimum cost is always sampled. However, this is a rather strong assumption, and it means pushing down the scores of tokens that were not even sampled and hence could not compete with the others. To alleviate this issue, we replace the full softmax by a layer applied only on the tokens that were sampled (Jean et al., 2015). While the target can still only be in the sampled tokens, the unsampled tokens are left alone by the gradient update, at least for the first-order dependency. This trick is even more necessary for KL, which otherwise requires a “default” score for unsampled tokens, adding a difficult-to-tune hyperparameter. We refer to these new losses as sLL and sKL.
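A sketch of sLL under this sampled softmax (an illustrative helper of ours; sampled_idx indexes the sampled tokens, ground truth included, and the costs are assumed to have been gathered for them):

import torch

def sampled_log_loss(scores, costs, sampled_idx):
    # Restrict the softmax to the sampled tokens, so that unsampled tokens are
    # left alone by the first-order gradient update.
    s = scores[sampled_idx]
    target = costs[sampled_idx].argmin()
    return torch.nn.functional.cross_entropy(s.unsqueeze(0), target.unsqueeze(0))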
The main goal of these experiments is to assess whether or not combining subsampling with the SEARNN algorithm is a viable strategy. To do so we ran the method on the same two datasets that we used in the previous section. We decided to only focus on subsampling tokens as the vocabulary size is usually the blocking factor rather than the sequence length. Thus we sampled all cells. We evaluate different sampling strategies and training losses. For all experiments, we use the learned policy for roll-in and the mixed one for roll-out and we sample 5 tokens per cell. Finally, we use the same optimization techniques than in the previous experiment. Key takeaways. Results are given in Table 2. The analysis of this experiment yields interesting observations. First, and perhaps most importantly, subsampling appears to be a viable strategy to obtain a large part of the improvements of SEARNN while keeping computational costs under control. Indeed, we recover all of the improvements of the full method while only sampling a fraction of all possible tokens. Second, it appears that the best strategy for token sampling depends on the chosen loss. In the case of sLL, the top-k strategy performs best, whereas sKL favors the biased current policy. Third, it also seems like the best performing loss is task-dependent. Finally, this sampling technique yields a 5× running time speedup, therefore validating our scaling approach. 6 NEURAL MACHINE TRANSLATION. Having introduced a cheaper alternative SEARNN method enables us to apply it to a large-scale structured prediction task and to thus investigate whether our algorithm also improves upon MLE in more challenging real-life settings. We choose neural machine translation as out task, and the German-English translation track of the IWSLT 2014 campaign (Cettolo et al., 2014) as our dataset, as it was used in several related papers and thus allows for easier comparisons. We reuse the pre-processing of Ranzato et al. (2016), obtaining training, validation and test datasets of roughly 153k, 7k and 7k sentence pairs respectively with vocabularies of size 22822 words for English and 32009 words for German. For fair comparison to related methods, we use similar architectures. To compare with BSO and ACTOR-CRITIC, we use an encoder-decoder model with GRU cells of size 256, with a bidirectional encoder and single-layer RNNs. For the specific case of MIXER, we replace the recurrent encoder with a convolutional encoder as in Ranzato et al. (2016) . We use Adam as our optimizer, with an initial learning rate of 10−3 gradually decreasing to 10−5, and a batch size of 64. We select the best models on the validation set and report results both without and with dropout (0.3). Regarding the specific settings of SEARNN, we use a reference roll-in and a mixed roll-out. Additionally, we sample 25 tokens at each cell, following a mixed sampling strategy (detailed in Appendix C.2). We use the best performing loss on the validation set, i.e. the KL loss with scaling parameter 200. The traditional evaluation metric for such tasks is the BLEU score (Papineni et al., 2002). As we cannot use this corpus-wide metric to compute our sentence-level intermediate costs, we adopt the alternative smoothed BLEU score of Bahdanau et al. (2017) as our cost. We use a custom reference policy (detailed in Appendix C.2). We report the corpus-wide BLEU score on the test set in Table 3. Key takeaways. 
First, the significant improvements SEARNN obtains over MLE on this task (2 BLEU points without dropout) show that the algorithm can be profitably applied to large-scale, challenging structured prediction tasks at a reasonable computational cost. Second, our performance is on par or better than those of related methods with comparable baselines. Our performance using a convolutional encoder is similar to that of MIXER. Compared to BSO (Wiseman & Rush, 2016), our baseline, absolute performance and improvements are all stronger. While SEARNN presents similar improvements to ACTOR-CRITIC, the absolute performance is slightly worse. This can be explained in part by the fact that SEARNN requires twice less parameters during training. Finally, the learned roll-in policy performed poorly for this specific task, so we used instead a reference roll-in. While this observation seems to go against the L2S analysis from Chang et al. (2015), it is consistent with another experiment we ran: we tried applying scheduled sampling (Bengio et al., 2015) – which uses a schedule of mixed roll-ins – on this dataset, but did not succeed to obtain any improvements, despite using a careful schedule as proposed by their authors in private communications. One potential factor is that our reference policy is not good enough to yield valuable signal when starting from a poor roll-in. Another possibility is that the underlying optimization problem becomes harder when using a learned rather than a reference roll-in. 7 DISCUSSION We now contrast SEARNN to several related algorithms, including traditional L2S approaches (which are not adapted to RNN training), and RNN training methods inspired by L2S and RL. Traditional L2S approaches. Although SEARNN is heavily inspired by SEARN, it is actually closer to LOLS (Chang et al., 2015), another L2S algorithm. As LOLS, SEARNN is a meta-algorithm where roll-in/roll-out strategies are customizable (we explored most combinations in our experiments). Our findings are in agreement with those of Chang et al. (2015): we advocate using the same combination, that is, a learned roll-in and a mixed roll-out. The one exception to this rule of thumb is when the associated reduced problem is too hard (as seems to be the case for machine translation), in which case we recommend switching to a reference roll-in. Moreover, as noted in Section 4, SEARNN adapts the optimization process of LOLS (the one difference being that our method is stochastic rather than online): each intermediate dataset is only used for a single gradient step. This means the policy interpolation is of a different nature than in SEARN where intermediate datasets are optimized for fully and the resulting policy is mixed with the previous one. However, despite the similarities we have just underlined, SEARNN presents significant differences from these traditional L2S algorithms. First off, and most importantly, SEARNN is a full integration of the L2S ideas to RNN training, whereas previous methods cannot be used for this purpose directly. Second, in order to achieve this adaptation we had to modify several design choices, including: • the intermediate dataset construction, which significantly differs from traditional L2S;3 • the careful choice of a classifier (those used in the L2S literature do not fit RNNs well); • the design of tailored surrogate loss functions that leverage cost information while being easy to optimize in RNNs. L2S-inspired approaches. 
Several other papers have tried using L2S-like ideas for better RNN training, starting with Bengio et al. (2015), which introduces "scheduled sampling" to avoid the exposure bias problem. The idea is to start with teacher forcing and to gradually use more and more model predictions instead of ground truth tokens during training. This is akin to a mixed roll-in – an idea which also appears in (Daumé et al., 2009).

Wiseman & Rush (2016, BSO) adapt one of the early variants of the L2S framework, the "Learning as Search Optimization" approach of Daumé & Marcu (2005, LASO), to train RNNs. However, LASO is quite different from the more modern SEARN family of algorithms that we focus on: it does not include either local classifiers or roll-outs, and has much weaker theoretical guarantees. Additionally, BSO's training loss is defined by violations in the beam-search procedure, yielding a very different algorithm from SEARNN. Furthermore, BSO requires being able to compute a meaningful loss on partial sequences, and thus does not handle general structured losses, unlike SEARNN. Finally, its ad hoc surrogate objective provides a very sparse sequence-level training signal, as mentioned by its authors, thus requiring warm-starting.

Ballesteros et al. (2016) use a loss that is similar to LL for parsing, a specific task where costs-to-go are essentially free. This property is also a requirement for Sun et al. (2017), in which new gradient procedures are introduced to incorporate neural classifiers in the AGGREVATE (Ross & Bagnell, 2014) variant of L2S.4 In contrast, SEARNN can be used on tasks without a free cost-to-go oracle.

RL-inspired approaches. In structured prediction tasks, we have access to ground truth trajectories, i.e. a lot more information than in traditional RL. One major direction of research has been to adapt RL techniques to leverage this additional information. The main idea is to try to optimize the expectation of the test error directly (under the stochastic policy parameterized by the RNN):

$$\mathcal{L}(\theta) = -\sum_{i=1}^{N} \mathbb{E}_{(y_1^i,\ldots,y_T^i)\sim\pi(\theta)}\, r(y_1^i,\ldots,y_T^i). \qquad (5)$$

Since we are taking an expectation over all possible structured outputs, the only term that depends on the parameters is the probability term (the tokens in the error term are fixed). This allows this loss function to support non-differentiable test errors, which is a key advantage. Of course, actually computing the expectation over an exponential number of possibilities is computationally intractable. To circumvent this issue, Shen et al. (2016) subsample trajectories according to the learned policy, while Ranzato et al. (2016); Rennie et al. (2016) use the REINFORCE algorithm, which essentially approximates the expectation with a single trajectory sample. Bahdanau et al. (2017) adapt the ACTOR-CRITIC algorithm, where a second critic network is trained to approximate the expectation.

3 The feature extraction is fully integrated in the model and thus learnable, instead of being hand-crafted. Moreover, arbitrarily picking a token a during the roll-out phase to compute the associated costs requires feeding it back to the RNN (as opposed to simply adding the decision to the context before extracting features).
4 Sun et al. (2017)'s algorithm simply replaces the classifier in AGGREVATE with a neural network. As it is trained on an ever-growing dataset, a natural gradient update is required to make the algorithm tractable.
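To make the single-trajectory REINFORCE approximation of Eq. (5) concrete, here is a minimal PyTorch-style sketch. The decoding interface (init_state, step) and reward_fn are illustrative assumptions, not an actual API.

```python
import torch

def reinforce_loss(model, x, reward_fn, baseline=0.0, max_len=20):
    """Single-trajectory REINFORCE approximation of Eq. (5) (a sketch under
    assumed interfaces)."""
    log_probs, tokens = [], []
    state = model.init_state(x)                     # hypothetical encoder interface
    for _ in range(max_len):
        logits, state = model.step(state, tokens)   # hypothetical next-token scores
        dist = torch.distributions.Categorical(logits=logits)
        y_t = dist.sample()                         # sample from the stochastic policy
        log_probs.append(dist.log_prob(y_t))
        tokens.append(y_t)
    r = reward_fn(tokens)                           # sequence-level reward
    # All tokens share the same (reward - baseline) weight: this is the sparse,
    # sequence-level signal discussed in the text.
    return -(r - baseline) * torch.stack(log_probs).sum()
```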
While all these approaches report significant improvements on various tasks, one trait they share is that they only work when initialized from a good pre-trained model. This phenomenon is often explained by the sparsity of the information contained in "sequence-level" losses. Indeed, in the case of REINFORCE, no distinction is made between the tokens that form a sequence: depending on whether the sampled trajectory is above a global baseline, all tokens are pushed up or down by the gradient update. This means good tokens are sometimes penalized and bad tokens rewarded.

In contrast, SEARNN uses "global-local" losses, with a local loss attached to each step, which contains global information since the costs are computed on full sequences. To do so, we have to "sample" more trajectories through our roll-in/roll-outs. As a result, SEARNN does not require warm-starting to achieve good experimental performance. This distinction is quite relevant, because warm-starting means initializing in a specific region of parameter space which may be hard to escape. Exploration is less constrained when starting from scratch, leading to potentially larger gains over MLE. RL-based methods often involve optimizing additional models (baselines for REINFORCE and the critic for ACTOR-CRITIC), introducing more complexity (e.g. target networks). SEARNN does not.

Finally, while maximizing the expected reward allows the RL approaches to use gradient descent even when the test error is not differentiable, it introduces another discrepancy between training and testing. Indeed, at test time, one does not decode by sampling from the stochastic policy. Instead, one selects the "best" sequence (according to a search algorithm, e.g. greedy or beam search). SEARNN avoids this adverse effect by computing costs using deterministic roll-outs – the same decoding technique as the one used at test time – so that its loss is even closer to the test loss. The associated price is that we approximate the gradient by fixing the costs, although they do depend on the parameters.

RAML (Norouzi et al., 2016) is another RL-inspired approach. Though quite different from the previous papers we have cited, it is also related to SEARNN. Here, in order to mitigate the 0/1 aspect of MLE training, the authors introduce noise in the target outputs at each iteration. The amount of random noise is determined according to the associated reward (target outputs with a lot of noise obtain lower rewards and are thus sampled less often). This idea is linked to the label smoothing technique (Szegedy et al., 2016), where the target distribution at each step is a combination of a Dirac (the usual MLE target) and a uniform distribution. In this sense, when using the KL loss, SEARNN can be viewed as doing learned label smoothing, where we compute the target distribution from the intermediate costs rather than arbitrarily adding the uniform distribution.

Conclusion and future work. We have described SEARNN, a novel algorithm that uses core ideas from the learning to search framework in order to alleviate the known limitations of MLE training for RNNs. By leveraging structured cost information obtained through strategic exploration, we define global-local losses. These losses provide a global feedback related to the structured task at hand, distributed locally within the cells of the RNN. This alternative procedure allows us to train RNNs from scratch and to outperform MLE on three challenging structured prediction tasks.
Finally, we have proposed efficient scaling techniques that allow us to apply SEARNN on structured tasks for which the output vocabulary is very large, such as neural machine translation.

The L2S literature provides several promising directions for further research. Adapting "bandit" L2S alternatives (Chang et al., 2015) would allow us to apply SEARNN to tasks where only a single trajectory may be observed at any given point (so trying every possible token is not possible). Focused costing (Goodman et al., 2016) – a mixed roll-out policy where a fixed number of learned steps are taken before resorting to the reference policy – could help us lift the quadratic dependency of SEARNN on the sequence length. Finally, targeted sampling (Goodman et al., 2016) – a smart sampling strategy that prioritizes cells where the model is uncertain of what to do – could enable more efficient exploration for large-scale tasks.

ACKNOWLEDGMENTS

We would like to thank Dzmitry Bahdanau for helping us with both the spelling and the machine translation experiments, as well as Hal Daumé for constructive feedback on both Learning to Search and an earlier version of the paper. This research was partially supported by the NSERC Discovery Grant RGPIN-2017-06936, by the ERC grant Activia (no. 307574), by a Google Research Award and by Samsung Research, Samsung Electronics.

A ALGORITHMS

A.1 SEARN (ADAPTED FROM DAUMÉ ET AL. (2009), FIGURE 1)

Algorithm 2 SEARN algorithm
1:  Initialize a policy h with the reference policy π.
2:  for i in 1 to N do                              # Start of round i.
3:    Initialize the set of cost-sensitive examples S ← ∅.   # Create the intermediate dataset for round i.
4:    for (x, y) in the ground truth input/output structured pairs do   # Perform the roll-in (actually only run once).
5:      Compute predictions under the current policy, (ŷ1, ..., ŷTx) ∼ h, x.
6:      for t in 1 to Tx do
7:        Compute input features φ(st) for context st = (x, ŷ1, ..., ŷt).
8:        Initialize a cost vector ct = ⟨⟩.          # Perform the roll-outs for each action to fill the cost vector.
9:        for each possible token a ∈ A do
10:         Get a full sequence ŷt(a) by applying an expert policy, starting from (x, ŷ1..t, a).
11:         Collect the cost ct(a) by comparing ŷt(a) and y.
12:       end for
13:       Add the cost-sensitive example (φ, c) to S.
14:     end for
15:   end for
16:   Learn a classifier h′ on S.
17:   Interpolate h ← βh′ + (1 − β)h.
18: end for
19: Return h.

A.2 SEARNN: REFERENCE ROLL-IN WITH AN RNN

As mentioned in Section 3, teacher forcing can be seen as the roll-in reference policy of the RNN. In this section, we detail this analogy further. Let us consider the case where we perform the roll-in up until the t-th cell. In order to be able to perform roll-outs from that t-th cell, a hidden state is needed. If we use a reference policy roll-in, this state is obtained by running the RNN until the t-th cell using the teacher forcing strategy, i.e. by conditioning the outputs on the ground truth. Finally, SEARNN also needs to know what the predictions for the full sequence were in order to compute the costs. When the reference roll-in is used, we obtain the predictions up until the t-th cell by simply copying the ground truth. Hence, we discard the outputs of the RNN that are before the t-th cell.

B DESIGN DECISIONS

Choosing a classifier: to backpropagate or not to backpropagate? In standard L2S, the classifier and the feature extractor are clearly delineated. The latter is a fixed hand-crafted transformation applied on the input and the partial sequence that has already been predicted.
One then has to pick a classifier, and its convergence properties carry over to the initial problem. In SEARNN, we choose the RNN itself as our classifier. The fixed feature extractor is reduced to the bare minimum (e.g. one-hot encoding) and the classifier performs feature learning afterwards. In this setting, the intermediate dataset is the initial state and all previous decisions (x, y1:t−1) combined with the cost vector.5

5 In the encoder-decoder architecture, the decoder RNN does not receive x directly, but rather φ(x), the features extracted from the input by the encoder RNN. In this case, our SEARNN classifier includes both the encoder and the decoder RNNs.

An alternative way to look at RNNs is to consider the RNN cell as a shared classifier in its own right, and the beginning of the RNN (including the previous cells) as a feature extractor. One could then pick the RNN cell (instead of the full RNN) as the SEARNN classifier, in which case the intermediate dataset would be (ht−1, yt−1)6 (the state at the previous step, combined with the previous decision) plus the cost vector. While this last perspective – seeing the RNN cell as the shared classifier instead of the full RNN – is perhaps more intuitive, it actually fits the L2S framework less well. Indeed, there is no clear delineation between classifier and feature extractor, as these functions are carried out by different instances of the same RNN cell (and as such share weights). This means that the feature extraction in this case is learned instead of being fixed.

This choice of classifier has a direct consequence on the optimization routine. If we pick the RNN itself, then each loss gradient has to be fully backpropagated through the network. On the other hand, if the classifier is the cell itself, then one should not backpropagate the gradient updates.

Reference policy. The reference policy defined by Daumé et al. (2009) picks the action which "minimizes the (corresponding) cost, assuming all future decisions are made optimally", i.e. $\arg\min_{y_t} \min_{y_{t+1:T}} l(y_{1:T}, y)$. For the roll-in phase, this policy corresponds to always picking the ground truth, since it leads to predicting the full ground truth sequence and hence the best possible loss. For the roll-out phase, computing this policy explicitly is easy in a few select cases. However, in the general case it is not tractable. One then has to turn to heuristics, whose performance can be relatively poor. While Chang et al. (2015) tell us that overcoming a bad reference policy can be done through a careful choice of roll-in/roll-out policies, the fact remains that the better the reference policy is, the better the performance will be. Choosing this heuristic well is thus quite important. The most basic heuristic is to simply use the ground truth. Of course, one can readily see that it is not always optimal. For example, when the model skips a token and instead outputs the next one, a, it may be more beneficial to also skip a in the roll-out phase rather than to repeat it. Although we mostly chose this basic heuristic in this paper, using tailored alternatives can yield better results for tasks where it is suboptimal, such as machine translation (see Appendix C.2).

C ADDITIONAL EXPERIMENTAL DETAILS

C.1 LOSSES

We now describe other losses we tried that did not perform as well as (or at least not better than) the ones presented in the main text. The first two follow the target learning principle, as LL does.

Log-loss with cost-augmented softmax (LLCAS).
LLCAS is another attempt to leverage the structured information we have access to more meaningfully, through a slight modification of LL. We add information about the full costs in the exponential, following e.g. Pletscher et al. (2010); Gimpel & Smith (2010); Hazan & Urtasun (2010):

$$L_t(s_t; c_t) = -\log\left( e^{s_t(a^\star)+\alpha c_t(a^\star)} \Big/ \sum_{i=1}^{A} e^{s_t(i)+\alpha c_t(i)} \right) \quad \text{where} \quad a^\star = \arg\min_{a \in \mathcal{A}} c_t(a). \qquad (6)$$

α is a scaling parameter that ensures that the scores of the model and the costs are not too dissimilar, and can be chosen using a validation set. The associated gradient update discriminates between tokens based on their costs. Although it leverages the structured loss information more directly and thus should in principle mitigate the 0/1 nature of MLE better, we did not observe any significant improvements over LL, even after tuning the scaling parameter α.

6 One could also add ψ(x), features learned from the input through e.g. an attention mechanism.

Structured hinge loss (SHL). LLCAS can be seen as a smooth version of the (cost-sensitive) structured hinge loss used for structured SVMs (Tsochantaridis et al., 2005), which we also consider:

$$L_t(s_t; c_t) = \max_{a \in \mathcal{A}} \left( s_t(a) + c_t(a) \right) - s_t(a^\star) \quad \text{where} \quad a^\star = \arg\min_{a \in \mathcal{A}} c_t(a). \qquad (7)$$

While this loss did enable the RNNs to learn, the overall performance was actually slightly worse than that of MLE. This may be due to the fact that RNNs have a harder time optimizing the resulting objective, compared to others more similar to the traditional MLE objective (which they have been tuned to train well on).

Consistent loss. This last loss is inspired by traditional structured prediction. Following Lee et al. (2004), we define:

$$L_t(c_t) = \sum_{a \in \mathcal{A}} c_t(a) \ln\left(1 + \exp(\tilde{s}_t(a))\right) \quad \text{where} \quad \tilde{s}_t(a) = s_t(a) - \frac{1}{A} \sum_{a \in \mathcal{A}} s_t(a). \qquad (8)$$

Unfortunately, we encountered optimization issues and could not get significant improvements over the MLE baseline.

KL and label smoothing. We have seen that when the loss function is the Hamming loss, the reference policy is to simply output the ground truth. In this case, LL with a reference roll-in and roll-out is equivalent to MLE. Interestingly, in the same setup KL is also equivalent to an existing method: the label smoothing technique. Indeed, the vector of costs can be written as a vector with equal coordinates minus a one-hot vector with all its mass on the ground truth token. After transformation through a softmax operator, this yields the same target distribution as in label smoothing.

C.2 NMT

Custom sampling. For this experiment, we decided to sample 15 tokens per cell according to the top-k policy (as the vocabulary size is quite big, sampling tokens with low probability is not very attractive), as well as 10 neighboring ground truth labels around the cell (a sketch of this step is given at the end of this subsection). The rationale for these neighboring tokens is that skipping or repeating words is quite a common mistake in NMT.

Custom reference policy. The very basic reference policy we have been using for the other experiments of the paper is too poor a heuristic for BLEU to perform well. Instead, we try adding every suffix in the ground truth sequence to the current predictions and we pick the one with the highest BLEU-1 score (using this strategy with BLEU-4 leads to degenerate behavior where the best suffix to add is always the entire sequence, yielding uninformative costs).
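As flagged under Custom sampling above, here is a minimal Python sketch of the mixed token-sampling step; the per-cell score interface, token-id encoding, and the exact window around position t are our assumptions, not the authors' code.

```python
import torch

def mixed_sampling(scores, ground_truth, t, k_top=15, k_neigh=10):
    """Mixed token sampling for cell t (a sketch; numbers follow Appendix C.2:
    15 top-k tokens plus 10 neighboring ground-truth labels).

    scores:       (A,) model scores over the vocabulary at cell t
    ground_truth: list of reference token ids for the full sequence
    """
    top = torch.topk(scores, k=k_top).indices.tolist()   # high-probability tokens
    # Neighboring ground-truth labels around position t: these target the
    # common NMT mistakes of skipping or repeating words.
    lo = max(0, t - k_neigh // 2)
    neighbors = ground_truth[lo : lo + k_neigh]
    # Deduplicate while preserving order.
    seen, sampled = set(), []
    for tok in top + neighbors:
        if tok not in seen:
            seen.add(tok)
            sampled.append(tok)
    return sampled
```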
Reference roll-in. As mentioned in Section 6, we had to switch from a learned to a reference roll-in. In addition to the existing problems of a weak reference policy (which affects a learned roll-in much more than a reference one) and the introduction of a harder optimization problem, there is another potential source of explanation: this may illustrate a gap in the standard reduction theory from the L2S framework. Indeed, the standard reduction analysis (Daumé et al., 2009; Chang et al., 2015) guarantees that the level of performance of the classifier on the reduced problem translates to overall performance on the initial problem. However, this does not take into account the fact that the reduced problem may be harder or easier, depending on the choice of roll-in/roll-out combination. In this case, it appears that using a learned roll-in may have led to a harder reduced problem and thus ultimately worse overall performance.
1. What is the main contribution of the paper in the field of RNN training?
2. How does the proposed method, SeaRnn, overcome the limitations of local optimization in RNN training?
3. Can you explain how SeaRnn improves the results obtained by MLE training in three different problems?
4. How does the paper demonstrate the effectiveness and scalability of SeaRnn in large-vocabulary machine translation?
5. What is the significance of the paper's contribution to the field of RNN training, particularly in addressing the challenges of error propagation and MLE training?
Review
The paper proposes a new RNN training method based on the SEARN learning-to-search (L2S) algorithm, named SeaRnn. It proposes a way of overcoming the limitation of local optimization through the exploitation of structured losses by L2S. It can consider different classifiers and loss functions, and a sampling strategy for making the optimization problem scalable is proposed. SeaRnn improves the results obtained by MLE training in three different problems, including large-vocabulary machine translation. In summary, a very nice paper.

Quality: SeaRnn is a well-rooted and successful application of the L2S strategy to RNN training that combines global optimization and scalable complexity at the same time.

Clarity: The paper is well structured and written, with a nice and well-founded literature review.

Originality: The paper presents a new algorithm for training RNNs based on the L2S methodology, and it has been proven competitive in both toy and real-world problems.

Significance: Although the application of L2S to RNN training is not new, the contribution to overcoming the limitations due to error propagation and MLE training of RNNs is substantial.
ICLR
Title Neuro-Symbolic Ontology-Mediated Query Answering

Abstract Recently, low-dimensional vector space representations of Knowledge Graphs (KGs) have been applied to find answers to logical queries over incomplete KGs. However, the current methods focus only on inductive reasoning, i.e. answering such queries by predicting facts based on patterns learned from the data, and lack the ability of deductive reasoning, the task of computing logical entailments using expert domain knowledge. To address this shortcoming, we investigate how existing embedding models for query answering over incomplete KGs can be adapted to incorporate domain knowledge in the form of ontologies. We propose two novel datasets, based on the LUBM and NELL KGs, as well as various training strategies to integrate domain knowledge into prominent representatives of embedding models for query answering. Our strategies involve (1) different ontology-driven data augmentation techniques and (2) adaptation of the loss function using query-rewriting methods. The achieved improvements in the settings that require both inductive and deductive reasoning are from 20% to 50% in HITS@3.

1 INTRODUCTION

Answering complex logical queries over Knowledge Graphs (KGs) has recently received a lot of attention due to the relevance of this task in various applications such as natural language question answering, web search and data analytics. For example, the query Who works for Amazon and has a degree from MIT? over the KG in Figure 1 can be formulated as q(X) ← degreeFrom(X, mit) ∧ worksFor(X, amazon). Answering such a query is very challenging when KGs are incomplete, which is often the case due to their (semi-)automatic construction, and obtaining complete answers typically requires further domain knowledge. For instance, mary is a missing but desired answer of q. Due to the data distribution in the KG, link prediction models might only be able to derive managerAt(mary, amazon). Therefore, in this case, the further domain knowledge in ontology O of Figure 1 that managerAt implies worksFor would be required to derive worksFor(mary, amazon) and retrieve mary as an answer for q.

Recently, Knowledge Graph Embedding (KGE) techniques (Nickel et al., 2016; Wang et al., 2017) that are able to predict missing facts have been proposed for answering logical queries over incomplete KGs. The existing methods can be broadly divided into two categories: query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021). The former compute continuous query embedding representations and use them for answering queries, while the latter compute answers to a query by identifying the most likely answers to all its atoms using neural link predictors (Nickel et al., 2016) and then aggregating those answers using t-norms. While promising, such existing embedding-based methods do not account for ontologies, regarded as a KG schema that enriches the KG by describing dependencies between types and/or relations. Exploiting ontologies when querying KGs is beneficial, e.g., for simplifying query formulation and obtaining more complete answers. The task of answering logical queries in the presence of ontologies is referred to as Ontology-Mediated Query Answering (OMQA) (Bienvenu & Ortiz, 2015). On the one hand, the use of ontologies requires deductive reasoning, i.e., inferring new facts by applying ontology rules to existing facts, but ignoring missing true facts.
On the other hand, embedding methods are essentially tailored towards inductive reasoning, i.e. learning from examples: given a number of queries and their answers, they are used to predict answers to other similar queries, but they typically cannot perform ontology reasoning. Since large portions of expert knowledge can be conveniently encoded using ontologies, the benefits of coupling ontology reasoning and embedding methods for KG completion are evident, and have been acknowledged (e.g. see Bianchi et al., 2020; Zhang et al., 2020; Gutiérrez-Basulto & Schockaert, 2018; Kulmanov et al., 2019). However, to the best of our knowledge, such coupling has not been studied for OMQA. A natural attempt is to interchangeably complete the KG using ontology reasoning and embedding methods, and then perform query answering on top of the result. This naive procedure comes with a big scalability challenge: in practice, we need to restrict ourselves to computing merely small subsets of likely fact predictions required for answering a given query; thus more sophisticated proposals are required.

[Figure 1: An exemplary KG in which solid edges illustrate existing facts in the KG, while dashed edges indicate missing facts. The rules in O state that (1) managers at companies also work there; (2) the inverse of relation degreeFrom is hasAlumnus; (3) assistant professors are professors; (4) teachers at organizations also work there; (5) the range of the relation teachesAt is University.]

To this end, we investigate three open questions: (1) How to adapt existing OMQA techniques to the setting of KGE? (2) How do different data augmentation strategies impact the accuracy of existing embedding models for the OMQA task? and (3) Does the enforcement of ontology axioms in the embedding space via the loss function help to improve inductive and deductive reasoning performance? We answer these questions by making the following contributions:
• We formally define the task of Embedding-Based OMQA (E-OMQA) and empirically show that existing off-the-shelf KGE models applied naively perform poorly on this task.
• We propose novel ontology-driven strategies for sampling training queries, as well as a loss function modification to enforce the ontology within the embedding space, and demonstrate the effectiveness of these proposals on popular representatives of query-based and atom-based models.
• Since no previous benchmarks exist for E-OMQA, we design two datasets using LUBM and NELL, which are well-known benchmarks for OMQA and embedding models, respectively.
• Extensive evaluation demonstrates improvements (20% to 50% in HITS@3) in the accuracy of E-OMQA by our methods compared to the baselines, and allows us to obtain and analyze answers to the above questions.

2 PRELIMINARIES

Knowledge Graphs and Ontologies. We assume a signature Σ = ⟨E, C, R⟩ consisting of countable pairwise disjoint sets E, C, and R of constants (entities), concepts (types), and roles (binary relations), respectively. A knowledge graph G (a.k.a. ABox) is a set of triples, such as (mit, type, University) and (bob, worksFor, mit), formalized using Σ. These triples can also be represented as type(mit, University) and worksFor(bob, mit). An ontology O (a.k.a. TBox), e.g. O in Figure 1, is a set of axioms in Description Logics (Baader et al., 2009) over Σ.
We focus on DL-LiteR (Artale et al., 2009), which has the following syntax: A ⊑ A′, A ⊑ ∃p, ∃p ⊑ A, ∃p− ⊑ A, p ⊑ s, p− ⊑ s, where A, A′ ∈ C, p, s ∈ R, and p− denotes the inverse relation of p. The deductive closure O∞(G) contains all (possibly infinitely many) new facts derived from G using axioms from O (e.g., type(bob, Professor) follows from (3) and type(bob, AProfessor)).

Ontology-Mediated Query Answering. A query atom is an expression of the form p(T1, T2), where p ∈ R, and each Ti ∈ V ∪ E is called a term, with V being a set of variables disjoint from E, C, and R. A monadic conjunctive query (CQ) q(X) is a First-Order (FO) formula of the form $q(X) \leftarrow \exists \vec{Y}.\, p_1(\vec{T}_1) \wedge \cdots \wedge p_n(\vec{T}_n)$, where each $p_i(\vec{T}_i)$ is a query atom, and $vars(q) = X \cup \vec{Y}$ denotes the set of variables appearing in q, with $X \notin \vec{Y}$ being the answer variable. A monadic Existential Positive FO (EPFO) query is a union of monadic CQs (Dalvi & Suciu, 2007). For a query q(X) and a KG G, a constant a from G is an answer to q(X) if there exists a mapping π : vars(q) → E that maps the body (the right-hand side) of q to a subgraph of G. We denote by q[G] the answers of q on G. Ontology-Mediated Query Answering (OMQA) concerns answering queries by accounting for both the KG and the accompanying ontology. Given a KG G and an ontology O, an entity a from G is a certain answer of q(X) over (G, O) if a is an answer to q(X) over O∞(G). We use q[G, O] to denote the set of certain answers of q over (G, O). Let q and q′ be two monadic queries over (G, O); then q is contained in q′ w.r.t. O if q[G, O] ⊆ q′[G, O]; we call q a specialization of q′ (written q′ ⇝s q), and q′ a generalization of q (written q ⇝g q′). Query generalizations and specializations can be obtained by exploiting ontology axioms; this process (and its result) is referred to as query rewriting.

Example 1. Consider G in Figure 1 and q(X) ← type(X, Professor) ∧ degreeFrom(X, mit). Since mat ∈ q[G], it is a certain answer. Moreover, according to O, AProfessor is a sub-type of Professor and degreeFrom is the inverse of hasAlumnus, thus bob is also a certain answer. Query q′(X) ← type(X, AProfessor) ∧ degreeFrom(X, mit) is a specialization of q, as mat ∉ q′[G, O].

Embedding-Based Query Answering. Recent works on KGEs for answering logical queries can be divided into query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021) models. A neural QA model maps entities and relations into a d-dimensional embedding space. It then computes a score of each entity c for being an answer to a given query q via a scoring function φq(c) : R^d → [0, 1], where c denotes the embedding vector of c.1 Using these scoring functions, the final embedding QA function EG takes as input a query and returns answers to that query. We describe below how this is done for Query2Box and Continuous Query Decomposition (CQD).

In Query2Box, entities and queries are embedded as points and boxes, respectively, in a d-dimensional vector space. A d-dimensional embedding is a function ϕ that maps c ∈ E ∪ C to c ∈ R^d and a query q to q = (cen_q, off_q) ∈ R^d × R^d_{≥0}, which is used to define a query box as box_q = {v ∈ R^d | cen_q − off_q ⪯ v ⪯ cen_q + off_q}, where ⪯ is the element-wise inequality, cen_q is the center of the box, and off_q is the positive offset of the box, modeling its size. The score for an entity c being an answer to q is computed based on the distance from c to box_q.
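For intuition, below is a minimal PyTorch-style sketch of this distance-based scoring; it uses the simplified L1 distance to the box center adopted in Section 3.2 rather than the full Query2Box distance, and the helper name and margin value are our illustrative assumptions.

```python
import torch

def box_score(entity, cen_q, off_q, gamma=1.0):
    """Score an entity against a query box (a sketch, not the Query2Box code)."""
    # Containment check: element-wise inequality cen - off <= v <= cen + off.
    inside = bool(torch.all((cen_q - off_q <= entity) & (entity <= cen_q + off_q)))
    dist = torch.norm(cen_q - entity, p=1)   # L1 distance to the box center
    score = torch.sigmoid(-(dist - gamma))   # map distance into (0, 1)
    return score, inside

d = 4                                         # toy embedding dimension
entity = torch.zeros(d)
cen_q, off_q = torch.zeros(d), torch.ones(d)  # a box centered at the origin
score, inside = box_score(entity, cen_q, off_q)
```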
A prominent representative of the second category, Continuous Query Decomposition (Arakelyan et al., 2021), reduces the task of answering a complex query to that of answering each of its sub-queries. It relies on neural link predictors for answering atomic sub-queries, and aggregates the resulting scores via t-norms.

1 Bold small letters denote vector representations.

3 EMBEDDING-BASED ONTOLOGY-MEDIATED QUERY ANSWERING

Inductive and deductive reasoning complement each other, thus combining both yields more complete answers to queries. To target such a combination, we define an embedding-based QA function that can additionally apply ontology rules to answer queries.

Definition 1 (E-OMQA). Let G be a KG, let O be an ontology, and let Gi be an ideal completion of G. An embedding QA function EG is reliable if for any query q and entity a we have that a ∈ EG(q) iff a ∈ q[Gi]. Moreover, EG is ontology-aware iff a ∈ q[Gi, O]. The problem of embedding-based OMQA is to obtain an embedding QA function that is both reliable and ontology-aware.

Note that q[Gi, O] subsumes both q[Gi], the answers requiring inductive reasoning, and q[G, O], the answers computed via deductive reasoning. We proceed to present several methods for E-OMQA.

Query Rewriting over Pre-trained Models. In the traditional OMQA setting, each query q can be evaluated by first rewriting q into a set of FO-queries qO, and then evaluating each query in qO over G alone. In our case, this amounts to constructing an embedding QA function EG aware of G alone, and using it to compute the answers to all queries in qO rather than only to the query q.

Example 2. For G, O in Figure 1 and queries q(X) ← degreeFrom(X, mit) ∧ worksFor(X, amazon) and q′(X) ← degreeFrom(X, mit) ∧ managerAt(X, amazon), qO contains q and q′ among others, and to approximate q[Gi, O] we take the EG-based answers of all queries in qO.

Ontology-Aware Models. An alternative to query rewriting is to develop an embedding QA function that accounts for the axioms in O. To the best of our knowledge, there are no KGE models that directly address the problem of E-OMQA. Therefore, we suggest the following two options: (1) train existing embedding models for logical QA on the data derived from O∞(G) instead of G; (2) develop an ontology-aware embedding model that will be trained on G, but will have special terms in the training objective structurally enforcing O. While the proposed approaches can be realized on top of any embedding model for logical QA, in this work we verify their effectiveness on two prominent recent embedding models: Query2Box and CQD. Regarding (1), in Section 3.1 we present several methods for effective ontology-driven training. As for (2), building on Query2Box, in Section 3.2 we develop an ontology-aware embedding model. Finally, we use the query-rewriting method over embeddings as a baseline in Section 4.

3.1 ONTOLOGY-DRIVEN DATA SAMPLING

Let QG be the set of all possible EPFO monadic queries that can be formed using the signature Σ. To answer any arbitrary such query, existing embedding models are trained on a set of sampled queries of certain shapes and their answers over the KG G. For instance, queries in (Ren et al., 2020) have multiple atoms, while in (Arakelyan et al., 2021) they are atomic (e.g., q(X) ← worksFor(X, mit)). Usually, the set of training queries does not take the schema into account.
For example, in (Ren et al., 2020), the queries are randomly selected from QG and used for training the model along with their answers over G as positive examples and randomly generated non-answers as negative examples. However, if an ontology is present along with the KG, this procedure is not guaranteed to capture the ontology axioms, and using all possible queries from QG may be infeasible in practice. In the following, we discuss various options for sampling queries to train ontology-aware KGE models.

Certain Answer and Query Rewriting-Based Sampling. The first natural approach for query sampling is to select queries along with their certain answers instead of the standard answers. For ontology languages such as those in the DL-Lite family (Artale et al., 2009), computing certain answers can be done efficiently. An example of this training case is to randomly sample the query q(Y) ← ∃X.hasAlumnus(mit, X) ∧ worksFor(X, Y) and, given (G, O) in Figure 1, use it during training along with all its certain answers: mit, yale. To account for the ontology, we can randomly sample queries over the KG and also add all of their generalizations and specializations obtained using the rules in Table 1. To rewrite a query we select an atom and apply an ontology axiom; e.g., the first rule (R1) applies a concept inclusion axiom, while (R6) applies a role inclusion. The specializations of a given query q (denoted Spec(q)) incorporate specific information regarding the answers of q, while the generalizations of q (i.e. Gen(q)) incorporate additional related entities.

Example 3. Consider q1(X) ← ∃Y.type(X, University) and q2(X) ← ∃Y, Z.teachesAt(Z, X). Using R2 in Table 1 and (5) in Figure 1 we get q1 ⇝s q2. Similarly, in Example 2, q′ ⇝g q using R6 and (1).

In general, there are exponentially many rewritings; thus, to keep the training size reasonable, we fix a rewriting depth, up to which the respective training queries are generated, via a dedicated parameter.

Strategic Ontology-Based Sampling. While adding generalizations and specializations of randomly selected queries should capture some parts of the ontological background knowledge, many relevant queries might still be missed. For example, if O also contains ∃worksFor− ⊑ Organization, queries such as q(X) ← ∃Y.managerAt(Y, X) ∧ type(X, Organization) are likely to be disregarded during training. Therefore, another training approach that we propose is to leverage the ontology to strategically generate the training queries. For that, we first formalize the set of target queries by means of a query template graph, i.e., a directed acyclic graph (DAG) (N, E), where N is a set of nodes and E ⊆ N × N is a set of directed edges. Such a DAG captures the shape of each query. The set of target queries is then obtained by applying a labeling function that assigns symbols in Σ to nodes and edges.

Definition 2 (Query Shape). A query shape S is a tuple (N, E, n) such that (N, E) is a DAG and n ∈ N is the distinguished node of S (i.e., the node corresponding to the answer variable). For a given set of relations and constants in Σ, a labeling function f : N ∪ E → Σ ∪ V maps each node to either a variable or an entity and each edge to a relation symbol in Σ.

Our goal is to exploit the ontology to label query shapes so as to create semantically meaningful queries. Towards that, let ⊑∗ be the reflexive and transitive closure of ⊑. Then, for a given relation p:
- inv(p) = {p′ | p ⊑ p′− ∈ O},
- dom(p) = {A | ∃p′ ⊑ A′ ∈ O s.t. p ⊑∗ p′, A ⊑∗ A′ or A′ ⊑∗ A},
- range(p) = {A | ∃p′− ⊑ A′ ∈ O s.t. p ⊑∗ p′, A ⊑∗ A′ or A′ ⊑∗ A},
- follows(p) = {p′ | range(p) ∩ dom(p′) ≠ ∅},
- interr(p) = {p′ | range(p) ∩ range(p′) ≠ ∅, or p1 ∈ inv(p), p2 ∈ inv(p′) and dom(p1) ∩ dom(p2) ≠ ∅},
- interd(p) = {p′ | dom(p) ∩ dom(p′) ≠ ∅, or p1 ∈ inv(p), p2 ∈ inv(p′) and range(p1) ∩ range(p2) ≠ ∅}.

Intuitively, for a given relation p, the set inv(p) contains all inverse relations of p, dom(p) contains all domain types for p, range(p) all range types for p, follows(p) stores all relations p′ which can follow p, and interr(p), interd(p) contain, respectively, all relations p′ which can intersect p on range and domain positions. Then, for each shape we label nodes and edges to create queries that are valid w.r.t. O, as illustrated in Figure 2. Note that this query sampling process uses only the ontology and is thus data-independent. However, if the ontology does not capture additional data patterns, we can proceed in a bottom-up fashion: we randomly take some labeled query shapes which produce answers, and construct their generalizations as before.

3.2 AN ONTOLOGY-AWARE TRAINING OBJECTIVE

In this section, we present our novel training objective function employed by Query2Box. Recall that when embedding a query, the Query2Box model defines a box in an embedding space, s.t. the answer entities of the given query are mapped to points located inside the box. Note that for every ontological axiom, both its left- and right-hand sides can be turned into queries. We observe that when embedding those queries as boxes, ontological axioms can be naturally injected into the model if the inclusion of the boxes corresponding to the respective queries is ensured in the vector space.

Example 4. In Figure 3, the entities and relations are embedded into the vector space as points and projection operators, respectively. The embedding of q(Y) ← ∃X.hasAlumnus(mit, X) ∧ worksFor(X, Y) is represented by the larger grey box, obtained by applying the projection hasAlumnus to the embedding of entity mit followed by the projection on worksFor. To enforce teachesAt ⊑ worksFor, we ensure that the box corresponding to q′(Y) ← ∃X.hasAlumnus(mit, X) ∧ teachesAt(X, Y) is contained in the box corresponding to q.

The goal is to learn the embedding of queries such that the distance between the box corresponding to the query and its answers is minimized, while the distance to this box from negative samples is maximized. Similarly to Ren et al. (2020), we define the distance between q ∈ R^d × R^d_{≥0} and v ∈ R^d as $d(q, v) = \|\mathrm{cen}_q - v\|_1$, namely the L1 distance from the entity v to the center of the box. Using the sigmoid function we transform the distance into the (0, 1) interval, that is, $p(v \mid q) = \sigma(-(d(q, v) - \gamma))$, where γ > 0 is a margin; this denotes the probability of v ∈ q[O, Gi].

For a query q, let Gen(q) = {q1, ..., qn} be the set of all generalizations of q based on O. Given a train query q and its certain answer v ∈ q[G, O], we aim at maximizing $\prod_{i=1}^{n} p(v \mid q_i)^{\beta_i}$, where βi ≥ 0 is a weighting parameter for all i = 1, ..., n. This is achieved by minimizing the negative log-likelihood:2 $-\log\left(\prod_{i=1}^{n} p(v \mid q_i)^{\beta_i}\right) = -\sum_{i=1}^{n} \beta_i \log p(v \mid q_i)$. By exploiting the fact that σ(x) = 1 − σ(−x), for any v′ ∉ q[G, O] we have that $p(v' \mid q) = 1 - \sigma(-(d(q, v') - \gamma)) = \sigma(d(q, v') - \gamma)$.

Our goal is to ensure that if q′ is a generalization of a given train query q w.r.t. O, then the box of q′ contains the box of q. In other words, if a is an answer to the query q, then not only should the distance between a and q be minimized, but also the distance between a and q′, as well as between a and all other generalizations of q. The following training objective reflects our goal:

$$L = -\sum_{i=1}^{n} \beta_i \log \sigma\big(\gamma - d(v, q_i)\big) - \sum_{j=1}^{k} \frac{1}{k} \log \sigma\big(d(v'_j, q) - \gamma\big),$$

where each v′_j ∉ q[G, O], for j = 1, ..., k, is a random entity obtained via negative sampling. In our experiments, we use βi = |Gen(q)|−1 = 1/n.
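The following is a minimal PyTorch-style sketch of this objective; the function signature, the encoding of boxes as (center, offset) pairs, and the inclusion of q among the supplied boxes are our assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ontology_aware_loss(v, gen_boxes, neg_entities, q_box, gamma=24.0):
    """Sketch of the training objective above (shapes and names are ours).

    v:            (d,) embedding of a certain answer of q
    gen_boxes:    list of n (cen, off) pairs for q's generalizations
    neg_entities: (k, d) embeddings of negative samples v'_j
    q_box:        (cen, off) pair for the query q itself
    """
    n = len(gen_boxes)
    beta = 1.0 / n                                   # beta_i = |Gen(q)|^{-1}
    # Pull the answer toward the box of every generalization of q.
    # Note: d(q, v) uses only the box center, as in the paper's definition.
    pos = sum(
        beta * F.logsigmoid(gamma - torch.norm(cen - v, p=1))
        for cen, off in gen_boxes
    )
    # Push negative samples away from the box of q.
    cen_q, off_q = q_box
    neg = torch.stack(
        [F.logsigmoid(torch.norm(cen_q - vj, p=1) - gamma) for vj in neg_entities]
    ).mean()
    return -(pos + neg)
```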
Example 5. Consider q(Y) ← ∃X.hasAlumnus(mit, X) ∧ type(X, AProfessor) ∧ teachesAt(X, Y). We have Gen(q) = {q1, q2, q3}, where q1 is obtained from q by substituting teachesAt with worksFor, while q2 is q with type(X, Professor) instead of type(X, AProfessor). In q3 the first, second and third atoms are, respectively, the same as in q, q1 and q2. It holds that q[G, O] = {yale}; hence our training objective is to minimize the distance between yale (the embedding of yale) and q, as well as the distances between yale and the boxes of q1, q2 and q3 (denoted by q1, q2 and q3).

Note that, conceptually, our training data sampling techniques and the loss function modifications are flexible in terms of the Description Logic in which the ontology is encoded. The only restriction is the existence of efficient query rewriting algorithms for this DL. In this work, we focused on DL-LiteR, since the majority of available ontologies belong to this language.

4 EVALUATION

In this section, we evaluate the proposed training strategies on two recent embedding models for QA: the Query2Box model (Q2B, Ren et al., 2020) and Continuous Query Decomposition (CQD, Arakelyan et al., 2021). We also measure the effectiveness of the newly introduced training objective function of the Q2B model (called O2B). All models are evaluated in different settings to measure their ability to perform inductive reasoning, deductive reasoning, and their combination.3

4.1 EXPERIMENTAL SETUP

We have configured both the Q2B and O2B systems as follows: we set the size of the embedding dimension to 400, and trained the models for 15 × 10^4 steps using Adam (Kingma & Ba, 2015) with an initial learning rate of 10^-4 and a batch size of 512. We evaluated the models periodically and reported the test results of the models which have the best performance on the validation dataset. For CQD we have used the following configuration: we used ComplEx-N3 (Lacroix et al., 2018) as the underlying neural link predictor, where the embedding size was set to 1000, and the regularisation weight was selected based on the validation set by searching in {10^-3, 5 × 10^-3, ..., 10^-1}.

2 The log is strictly monotonically increasing; thus, it will not change the maximization. It only changes the product to a summation. During training we consider a minimization, which motivates the negative sign.
3 Code and data are available at https://tinyurl.com/66hbhppc.

Query and Answers Sampling. We use the same types of queries (corresponding to directed acyclic graphs with entities as the source nodes, also known as anchors) as Ren et al. (2020) (see Figure 5 in the Appendix). We consider each input KG G to be the ideal completion (i.e. Gi) and then partition it into Gvalid for validation and Gtrain for training by discarding 10% of edges at each step; this yields Gtrain ⊊ Gvalid ⊊ G. We then create several training sets of queries according to our ontology-aware data sampling strategies from Section 3.1.
More specifically, these include:
- plain: the training queries are randomly sampled based on the signature of Gtrain, together with their plain answers, i.e. over Gtrain.
- gen: queries from plain augmented with their ontology-based generalizations4; all answers are certain, i.e. over O∞(Gtrain) (a sketch of the generalization step is given at the end of this subsection).
- spec: queries from gen augmented with their ontology-based specializations; all answers are certain answers as well.
- onto: queries constructed relying on the ontology axioms as introduced in Section 3.1, for which we randomly choose a percentage of valid entities as anchors; all answers are certain.

4 This setting is similar to random sampling over O∞(Gtrain) but, unlike the deductive closure, our procedure is guaranteed to terminate. We used a rewriting depth of up to 10.

Following Ren & Leskovec (2020), the training query shapes are the first five ones in Figure 5 (1p–3i); non-compliant specializations and generalizations are discarded. The Q2B and O2B models are trained on all five query shapes, while CQD is trained only on 1p queries (Arakelyan et al., 2021).

Evaluation Procedure. For each trained model we measure its performance using the standard metric HITS at K for K=3 (HITS@3), which indicates the frequency with which the correct answer is ranked among the top-3 results (the higher, the better). We use this metric for measuring the reliability and ontology-awareness of the resulting models (as in Definition 1):

Inductive case (I). Evaluating the inductive reasoning ability (accounts for the standard test case): is the model able to predict missing answers to queries over the ideal completion Gi?
Deductive case (D). Evaluating the deductive reasoning ability: is the model able to predict answers that can be inferred from the known triples in Gtrain using ontology axioms?
Inductive + Deductive case (I+D). The combination of I and D: is the model able to predict missing answers that are inferred from the ideal completion Gi using axioms from O?

For I, we randomly generate validation and test queries over Gvalid and the input G, in such a way that for each validation query q we have q[Gtrain] ⊊ q[Gvalid], and for each test query q we have q[Gvalid] ⊊ q[G]. For D, we randomly generate evaluation queries over O∞(Gtrain) s.t. they are not trivially answered over Gtrain. Moreover, each validation query is unseen during training, and each test query is unseen during training and validation. For I+D, we proceed as for I, but use O∞(Gvalid) and O∞(G) to sample validation and test queries and their answers. In each test case all shapes in Figure 5 are sampled, and we measure accuracy based on so-called hard answers, which cannot be trivially retrieved from Gtrain and require prediction of missing edges and/or the application of some ontology axioms. A hard answer for the case I is an answer in q[G] \ q[Gvalid], for D it is an answer in q[O∞(Gtrain)] \ q[Gtrain], while for I+D it is an answer in q[O∞(G)] \ q[O∞(Gvalid)].

Models and Datasets. We consider Q2B, O2B and CQD trained in each described setting, i.e., Mx, where M ∈ {Q2B, O2B, CQD} and x ∈ {plain, gen, spec, onto}. Additionally, we consider the use of the query-rewriting method on top of each model pre-trained using the plain strategy, denoted by Mrewplain; i.e., Q2Brewplain and CQDrewplain, together with Q2Bplain and CQDplain, are used as baselines.
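As flagged in the gen setting above, the following is a minimal Python sketch of one ontology-based generalization step; the atom encoding and the role_inclusions map are our illustrative assumptions, and the actual rewriting follows Table 1 and also covers concept axioms.

```python
def generalize_once(query, role_inclusions):
    """One generalization step for the `gen` setting (a sketch).

    query:           list of (relation, subject, object) atoms
    role_inclusions: maps each relation p to the relations s with p ⊑ s in O
    """
    rewritings = []
    for i, (rel, s, o) in enumerate(query):
        for sup in role_inclusions.get(rel, []):
            # Replace one atom p(s, o) by its super-relation s(s, o), as in R6.
            rewritings.append(query[:i] + [(sup, s, o)] + query[i + 1:])
    return rewritings

# Example with the ontology of Figure 1: managerAt ⊑ worksFor, teachesAt ⊑ worksFor.
O = {"managerAt": ["worksFor"], "teachesAt": ["worksFor"]}
q = [("degreeFrom", "X", "mit"), ("managerAt", "X", "amazon")]
print(generalize_once(q, O))
# -> [[('degreeFrom', 'X', 'mit'), ('worksFor', 'X', 'amazon')]]
```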
We evaluate the proposed training methods, as well as the novel training objective, on two datasets: NELL (Carlson et al., 2010), a general-purpose real-world KG, and LUBM (Guo et al., 2005), a domain-specific synthetic dataset describing the university domain. We selected these datasets as they are among the few large KGs that have ontologies (see the Appendix for statistics).

4.2 EVALUATION RESULTS

For the standard test case I, the baseline Q2Bplain performs best on LUBM, while CQDplain outperforms the other models and configurations on NELL. This is not surprising: ontologies are not effective when coping with missing edges and facts in a KG beyond those that they can deductively infer. In fact, if, statistically, the patterns reflected by ontologies do not hold in the data, ontology-aware training strategies might worsen the prediction quality. The second observation is that query rewriting over embedding models only slightly improves the prediction accuracy; Q2Brewplain, O2Brewplain and CQDrewplain result in at most a 10% enhancement on test cases D and I+D over Q2Bplain, O2Bplain and CQDplain, respectively. These limited improvements are likely due to the incompleteness of the rewriting procedure caused by the restriction on the queries supported by the models. The results on I and query rewriting over embeddings are in Appendix E.1. Next, we discuss our main observations for the other, more interesting test cases D and I+D. The results are reported in Table 2 and visually illustrated in Figure 4. Overall, the effectiveness of the proposed solutions is evident: for I+D on LUBM the improvements are of almost 50% for Query2Box and 54% for CQD, while for NELL of almost 20% for Query2Box and 25% for CQD.

Performance of Training Strategies. The results on certain answer prediction (D and I+D) show that none of the baselines is able to capture the domain knowledge expressed in the ontology, and thus they cannot be used directly for OMQA. Our ontology-aware model, O2Bplain, outperforms the other models trained on plain, but incorporation of certain answers and generalizations via our training strategies leads to better results. The proposed training methods from Section 3.1 significantly improved the accuracy for test cases D and I+D. For all models, generating training queries by taking the ontology into account yields improvements. This observation holds already when augmenting the set of random queries with their generalizations, though the addition of specializations does not seem to have a major impact. We observed that randomly selecting training queries, as usually done in the literature, does not result in the most accurate models. On LUBM, for all models, the advantage of the ontology-driven query sampling (i.e. the onto setting) is significant compared to all other settings. Remarkably, for LUBM, CQDonto trained on less data than CQDgen or CQDspe results in higher accuracy. This shows that random sampling is not adequate for OMQA. For NELL, to keep the size of the training set reasonable, we chose a much lower number of anchors, obtaining a significantly lower number of atomic queries (details in the Appendix); however, since Q2B and O2B use information from other query shapes, the onto setting still outperforms all others, unlike for CQD, which relies only on atomic queries.

Evaluation of the Ontology-Aware Training Objective. The model O2Bonto has far better accuracy on cases D and I+D than the Q2B baseline.
This shows that the enforcement of ontology axioms in the embedding space, together with strategic ontology-driven training, provides a significant improvement, especially for LUBM, which has a more expressive ontology. Furthermore, the improvement of O2Bplain over Q2Bplain shows that we are able to partially incorporate the domain knowledge into the embedding model without explicitly training on certain answers.

5 RELATED WORK

The task of answering queries that involve multiple atoms using embedding techniques has recently received a lot of attention. The existing proposals can be divided into query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021; Sun et al., 2020) and atom-based (Arakelyan et al., 2021). Friedman & den Broeck (2020) and Borgwardt et al. (2019) study the relation between the problem of conjunctive QA in the embedding space and over probabilistic databases. Our work is different from the above proposals in that, along with the data, we also rely on ontologies to answer queries. Integration of ontologies into KG embeddings has been studied by e.g. Krompaß et al. (2015); Minervini et al. (2017); Hao et al. (2019); Guo et al. (2016); Rocktäschel et al. (2015); Demeester et al. (2016); Kazemi & Poole (2018); Fatemi et al. (2019); Abboud et al. (2020), but these works do not capture all supported axioms and focus on link prediction rather than QA. The capability of embeddings to model hierarchical data has been explored by Patel et al. (2020); Idahl et al. (2019); Gutiérrez-Basulto & Schockaert (2018). In particular, Idahl et al. (2019) aim at interpreting embeddings by finding concept spaces in node embeddings and linking them to a simple external type hierarchy; this is different from our method for OMQA over embeddings. In Gutiérrez-Basulto & Schockaert (2018), conceptual space representations of known concepts are learned by associating a Gaussian distribution with each concept over a learned vector space. Constructing models for EL ontologies in the embedding space (Kulmanov et al., 2019) is another relevant direction. While Gutiérrez-Basulto & Schockaert (2018); Kulmanov et al. (2019) are related to our work, they do not touch upon the problem of OMQA. The OMQA problem has been actively studied (see e.g. Schneider & Simkus (2020) for an overview), but available methods focus only on purely logic-based deductive reasoning, without aiming at simultaneously handling missing links.

6 CONCLUSION

We have presented methods for Ontology-Mediated Query Answering that operate in the embedding space to enable simultaneous inductive and deductive reasoning over incomplete data. To the best of our knowledge, this is the first work on embedding-based OMQA. We have empirically demonstrated that embedding-based methods for QA applied naively or combined with query rewriting techniques are not effective. In our work, we have proposed solutions for making the existing models ontology-aware via ontology-driven training sampling strategies and loss function modifications. The improvements in the accuracy on prominent query-based and atom-based models range from 20% to 50% compared to the baselines. We believe that this work opens interesting perspectives for combining OMQA methods, with roots in knowledge representation, and embedding techniques from the machine learning area.

Reproducibility Statement. Code, data, and instructions for reproducing all experiments are available at https://tinyurl.com/66hbhppc.
The hyperparameters are presented in Appendix F.

A DESCRIPTION LOGICS ONTOLOGIES

The syntax of DL-LiteR ontologies and its translation into rule-based syntax are given in Table 3. The semantics of DL ontologies is defined using FO interpretations $(\Delta^I, \cdot^I)$ consisting of a non-empty domain $\Delta^I$ and an interpretation function $\cdot^I$, which maps each entity e to an element $e^I \in \Delta^I$, each concept name A to a subset $A^I \subseteq \Delta^I$, and each role name r to a binary relation $r^I \subseteq \Delta^I \times \Delta^I$. The interpretation function $\cdot^I$ is extended to complex concepts as follows: $(\exists p)^I = \{d \in \Delta^I \mid \exists d', (d, d') \in p^I\}$, $(p^-)^I = \{(d', d) \mid (d, d') \in p^I\}$. An interpretation I satisfies a concept inclusion C ⊑ D iff $C^I \subseteq D^I$, and I satisfies a role inclusion p ⊑ s iff $p^I \subseteq s^I$. Finally, I is a model of an ontology O if it satisfies all concept and role inclusions in O. The notion of modelhood is applied also to a KG G as follows: an interpretation I satisfies a fact A(c) (i.e., type(c, A)), resp. p(c, c′), if $c \in A^I$, resp. $(c, c') \in p^I$. Given a KG G and an ontology O, an interpretation I is a model of G w.r.t. O if I satisfies each fact in G and each axiom in O. In the OMQA setting, to answer a given query, we need to evaluate it over each such model; in the case of DL-LiteR ontologies, for computing answers to ontology-mediated queries we can rely on the deductive closure O∞(G), since the model constructed from O∞(G) can be homomorphically mapped to every other model.

B TRACTABILITY OF REWRITING-BASED QUERY GENERATION

For an arbitrary DL-LiteR ontology O and an arbitrary existential positive FO query q, let Spe(q, O) = {q′ | q ⇝s∗ q′} be the set of all specializations of q w.r.t. O, modulo variable renamings, obtained by exhaustively applying the ⇝s rules from Table 1. Similarly, let Gen(q, O) = {q′ | q ⇝g∗ q′} be the set of all generalizations of q w.r.t. O, modulo variable renamings, obtained by exhaustively applying the ⇝g rules. The following proposition states that our training strategies based on query rewriting are tractable.

Proposition 1. Let O be an arbitrary DL-LiteR ontology, and let q be an existential positive FO query. Then, Spe(q, O) and Gen(q, O) are finite and can be computed in time that is polynomial in the size of O.

Proof (Sketch). The rewriting rules we propose simulate the standard rewriting for DL-LiteR. Thus, it follows from Lemma 34 in Calvanese et al. (2007) that Spe(q, O) is finite. Moreover, based on Lemma 42 in Calvanese et al. (2007), it follows that there exists a procedure to compute Spe(q, O) in time that is polynomial in the size of O. Since the generalization procedure is similar, only applying the axioms in the other direction, we also conclude that Gen(q, O) is finite and polynomially bounded by O.

C QUERY2BOX GEOMETRIC OPERATIONS

We now describe the geometric operators employed in the Query2Box model.

Projection. Let S ⊆ E ∪ C be a set of entities, and r ∈ R a relation. Intuitively, the projection operator performs graph traversal; e.g., given an embedding of entity e, the projection operator for the relation r provides the box corresponding to the set {e′ ∈ E ∪ C | r(e, e′) ∈ G}. Given the embedding r = (cen_r, off_r) ∈ R^d × R^d_{≥0} for the relation r, we model the projection of a box v = (cen_v, off_v) by applying the element-wise summation v + r = (cen_v + cen_r, off_v + off_r). This relational translation (Bordes et al., 2013) operation corresponds to the translation and enlargement of the box v; a minimal sketch is given below.
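The following PyTorch-style sketch illustrates the projection operation just described; treating an entity as a zero-size box is our assumption for the example.

```python
import torch

def project(box_cen, box_off, rel_cen, rel_off):
    """Query2Box projection: translate the box center and enlarge its offset
    by element-wise summation (a minimal sketch, not the released code)."""
    return box_cen + rel_cen, box_off + rel_off

d = 4
e_cen, e_off = torch.randn(d), torch.zeros(d)   # an entity as a box of size 0
r_cen, r_off = torch.randn(d), torch.rand(d)    # relation embedding (cen, off)
q_cen, q_off = project(e_cen, e_off, r_cen, r_off)
```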
Recall that each set of entities is represented by a box in the Query2Box model. The intersection $\mathbf{w} = (\mathrm{cen}_w, \mathrm{off}_w)$ of a set of boxes $\{(\mathrm{cen}_{v_1}, \mathrm{off}_{v_1}), \ldots, (\mathrm{cen}_{v_n}, \mathrm{off}_{v_n})\}$ corresponding to the set $\{S_1, \ldots, S_n\}$ is modeled by applying the following operations: $\mathrm{cen}_w = \sum_{i=1}^{n} \Phi\big(\mathrm{NN}(\mathrm{cen}_{v_1}), \ldots, \mathrm{NN}(\mathrm{cen}_{v_n})\big)_i \odot \mathrm{cen}_{v_i}$ and $\mathrm{off}_w = \min(\mathrm{off}_{v_1}, \ldots, \mathrm{off}_{v_n}) \odot \sigma\big(\Psi(\mathrm{off}_{v_1}, \ldots, \mathrm{off}_{v_n})\big)$, where $\odot$ and $\min$ denote the element-wise multiplication and minimum, respectively. $\mathrm{NN} : \mathbb{R}^d \to \mathbb{R}^d$ is a 2-layer feed-forward neural network having the same dimensionality for the hidden layers as for the input layer. $\Phi$ and $\sigma$ stand for the softmax and sigmoid functions, resp., applied in a dimension-wise manner. $\Psi$ is a permutation-invariant function composed of a 2-layer feed-forward network followed by an element-wise mean operation and a linear transformation. The center $\mathrm{cen}_w$ is calculated as the weighted mean of the box centers $\mathrm{cen}_{v_1}, \ldots, \mathrm{cen}_{v_n}$. This geometric intersection provides a smaller box that lies inside a given set of boxes; for more details we refer to Ren et al. (2020), and a short code sketch of this operator follows below. D DATA AND QUERY STATISTICS Following the procedure in the literature, each input KG is completed w.r.t. inverse edges. For the considered datasets, in Table 4 we present the number of ontology axioms of various types as well as the number of (materialized) triples, entities and relations. In our experiments, we have considered both complex and simple ontologies. Indeed, LUBM has a rich ontology including domain and range axioms as well as concept and role inclusions, while the NELL KG is accompanied by a simpler ontology containing only (inverse) role inclusions. The size of each training/testing set, as well as the number of queries per shape for each of the considered settings, is presented in Table 5, while each query shape is illustrated in Figure 5. Note that for NELL, the plain data is exactly the one from Ren et al. (2021). We observe that the numbers of 1p queries obtained for the gen and spe settings are identical. This is probably because the set of 1p queries in plain covers all edges in the train KG. This explains the high accuracy of CQD_gen and CQD_spe on test case D. Moreover, the NELL ontology does not contain interesting axioms that can be leveraged by the ontology-driven query sampling technique, thus to obtain onto we had to rely on the patterns from the data alone. Since there are too many queries to choose from, due to the large number of relations, we had to select a smaller number of valid entities as anchors, namely 20–30%. This explains the small number of 1p queries. For the LUBM dataset, we have created the training and testing sets from scratch, and the 1p queries in plain do not contain the entire training KG. The onto set of queries leverages the proposed ontology-driven technique, given that the ontology covers all relations and concepts in the KG and describes how they interact, i.e. the ontology axioms support all the constructed queries, and we chose 50% of valid entities as anchors. E EXTENDED DISCUSSION OF EXPERIMENTAL RESULTS In this section, we present more insights into our results on the inductive case I and the performance of the query-rewriting baselines. E.1 RESULTS ON INDUCTIVE TEST CASE In Figure 6, we present the average HITS@3 for the inductive test case I. As previously discussed, we see no improvement of ontology-injection methods upon answering queries over incomplete KGs without taking certain answers into account.
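The following is a minimal PyTorch sketch of the intersection operator described in Appendix C, assuming the n input boxes are given as stacked center and offset tensors; the layer shapes follow the text above, while details such as initialization are assumptions.

```python
import torch
import torch.nn as nn

class BoxIntersection(nn.Module):
    def __init__(self, d):
        super().__init__()
        # NN: 2-layer feed-forward net with hidden size equal to the input size d.
        self.center_net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        # Ψ: 2-layer feed-forward net, element-wise mean, then a linear map.
        self.offset_net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.offset_scale = nn.Linear(d, d)

    def forward(self, centers, offsets):
        # centers, offsets: (n, d) tensors for the n boxes being intersected.
        att = torch.softmax(self.center_net(centers), dim=0)    # Φ over the n boxes, per dimension
        cen_w = (att * centers).sum(dim=0)                      # weighted mean of box centers
        psi = self.offset_scale(self.offset_net(offsets).mean(dim=0))
        off_w = offsets.min(dim=0).values * torch.sigmoid(psi)  # shrink the smallest offset
        return cen_w, off_w

cen, off = BoxIntersection(8)(torch.randn(3, 8), torch.rand(3, 8))
print(cen.shape, off.shape)  # torch.Size([8]) torch.Size([8])
```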
Indeed, Q2B_plain outperforms all other models on LUBM, while CQD_plain performs best on NELL for this test case. This behaviour is expected, since ontologies cannot handle missing edges and facts in a KG that are not inferred from the data using ontological reasoning. E.2 QUERY REWRITING OVER PRE-TRAINED EMBEDDING MODELS In order to evaluate the target procedure that performs query rewriting over pre-trained embeddings for QA, for each hard answer a we take the best (i.e., minimum) ranking among all rankings generated by all queries in the rewriting of each test query. In other words, we take the minimal distance between the embedding of a and all rewritings of q (a code sketch of this evaluation follows below). Note that, for measuring the performance, we use the pre-trained models Q2B_plain, CQD_plain and O2B_plain, obtained after 450K training steps. Due to the reliance of the respective models on particular query shapes, the complete rewriting of each query is not guaranteed. In Table 6, we present the results for this method compared to the plain setting. Minor improvements of only at most 10% are observed. We have also used our pre-trained O2B_plain model as a possible way to cope with this issue, and indeed it outperforms all other baselines. In fact, on NELL, O2B_plain becomes relatively competitive even compared to the other ontology-aware models that have been trained using more advanced ontology-driven training strategies. However, for the richer ontology that comes with the LUBM KG, the improvements are still not sufficient. E.3 DATA AUGMENTATION In Figure 7, we present the performance of each model and the number of training queries needed in each training setting. In general, increasing the number of training queries naturally leads to better performance, with the exception of the spe setting, whose training data contains all queries from gen but whose performance is comparable or slightly lower. The onto setting boosts the performance for almost all models. In particular, on LUBM, which has a richer ontology, the increase in performance is much higher compared to the setting in which query generalizations and certain answers are included. It is worth noting that the number of 1p queries is smaller for onto than for gen, but CQD_onto performs much better than CQD_gen, which demonstrates the effectiveness of our proposed ontology-driven training strategy. F HYPERPARAMETERS FOR Q2B, O2B AND CQD For Q2B we have used the code5 from Ren & Leskovec (2020). Our extension of this code with the implementation of the novel training objective is available online6. The systems Q2B and O2B have been configured as follows: we set the size of the embedding dimension to 400 and trained the models for $15 \times 10^4$ steps using Adam (Kingma & Ba, 2015) with an initial learning rate of $10^{-4}$ and a batch size of 512. The rest of the parameters were set in the same way as in Ren et al. (2020). We evaluated the models periodically and report the test results of the models which have the best performance on the validation dataset. For CQD, we used the code shared by Arakelyan et al. (2021)7, using ComplEx-N3 (Lacroix et al., 2018) as the base model, where the embedding size was set to 1000 and the regularisation weight was selected based on the validation set by searching in $\{10^{-3}, 5 \times 10^{-3}, \ldots, 10^{-1}\}$. For LUBM, the regularization weight was set to 0.1 in the gen, spe and onto settings, and to 0.01 in the plain setting.
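A minimal sketch of the rewriting-based evaluation just described, assuming `score(q, e)` is a stand-in for a pre-trained model's scoring function and queries/rewritings are opaque objects; none of these names come from the released code.

```python
def best_rank(answer, rewritings, entities, score):
    """Rank all entities under every rewriting and keep the best (minimum) rank."""
    best = None
    for q in rewritings:
        ranked = sorted(entities, key=lambda e: -score(q, e))  # highest score first
        rank = ranked.index(answer) + 1
        best = rank if best is None else min(best, rank)
    return best

def hits_at_k(test_cases, entities, score, k=3):
    # test_cases: list of (rewritings, hard_answers) pairs, one per test query.
    hits = [best_rank(a, rewritings, entities, score) <= k
            for rewritings, answers in test_cases for a in answers]
    return sum(hits) / len(hits)

# Toy usage with a dummy scoring function (an assumption for illustration):
entities = ["mat", "bob", "mary"]
score = lambda q, e: {"mat": 0.9, "bob": 0.5, "mary": 0.1}[e]
print(hits_at_k([(["q", "q_rewritten"], ["mat"])], entities, score))  # 1.0
```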
For NELL, the regularization weight was set to 0.005 in the plain setting, to 0.001 in the gen and spe settings, and to 0.05 in the onto setting.
5 https://github.com/snap-stanford/KGReasoning
6 https://tinyurl.com/66hbhppc
7 Available at https://github.com/pminervini/KGReasoning/
1. What is the focus of the paper regarding formal query answering over an incomplete knowledge graph?
2. What are the strengths of the paper, particularly in providing new datasets and proposing extensions to enhance existing methods?
3. What are the weaknesses of the paper, especially regarding the design of the extensions and the lack of evident implications from the experimental results?
4. Do you have any concerns about the effectiveness of the proposed extensions in improving the baseline methods?
5. How do the provided datasets contribute to the problem of formal query answering over an incomplete knowledge graph with a background DL-Lite_R ontology?
Summary Of The Paper Review
Summary Of The Paper The paper targets the problem of formal query answering over an incomplete knowledge graph with a background DL-Lite_R ontology, where the formal queries are restricted to certain shapes. The paper extends existing methods for embedding-based query answering over knowledge graphs without a background ontology, namely Query2Box (Ren et al. 2020) and CQD (Arakelyan et al. 2021), with data sampling and a revised training objective. To cope with the background ontology, the basic idea for data sampling is to sample queries rewritten from the given query to train the embedding model, whereas the main revision to the training objective is to consider all queries generalized from the given query. The paper also provides two new datasets (LUBM and NELL) for the target problem and presents an evaluation of Query2Box, CQD and their extensions mentioned above. Review Strengths: (1) The paper provides two new datasets for the problem of formal query answering over an incomplete knowledge graph with a background DL-Lite_R ontology. (2) The paper also proposes two main extensions that enhance a method for embedding-based query answering to cope with a background ontology. Weaknesses: (1) The usage of generalizations of a query will enlarge the set of certain answers to the query. In fact, all queries rewritten from a query based on a background ontology are specializations (from the right-hand side of the subsumption to the left-hand side), and thus the exact set of certain answers can be preserved. It is very unclear why generalizations are used in both data sampling and the revised training objective. In other words, the design of the extensions is questionable. (2) There are no evident implications from the main experimental results (see Figure 4 and Table 2); in particular, there is a lack of evidence that the extensions are effective in improving the baseline methods Query2Box (Q2B) and CQD. If I do not misunderstand, the method O2B is Q2B enhanced with the proposed extensions. However, O2B is inferior to Q2B in two (LUBM gen and LUBM spec) out of eight cases; i.e., the extensions are not effective in these cases. It is said in the paper that the proposed solution (O2B?) improves Q2B by almost 50% and CQD by almost 54% on LUBM I+D, and that it improves Q2B by almost 20% and CQD by almost 25% on NELL. However, these improvements cannot be verified from Figure 4 or Table 2. In contrast, CQD_gen and CQD_spec outperform O2B_any by a large margin according to Figure 4 and Table 2.
ICLR
Title Neuro-Symbolic Ontology-Mediated Query Answering Abstract Recently, low-dimensional vector space representations of Knowledge Graphs (KGs) have been applied to find answers to logical queries over incomplete KGs. However, the current methods only focus on inductive reasoning, i.e. answering such queries by predicting facts based on patterns learned from the data, and lack the ability of deductive reasoning, the task of computing logical entailments using expert domain knowledge. To address this shortcoming, we investigate how existing embedding models for query answering over incomplete KGs can be adapted to incorporate domain knowledge in the form of ontologies. We propose two novel datasets, based on the LUBM and NELL KGs, as well as various training strategies to integrate domain knowledge into prominent representatives of embedding models for query answering. Our strategies involve (1) different ontology-driven data augmentation techniques and (2) adaptation of the loss function using query-rewriting methods. The achieved improvements in the settings that require both inductive and deductive reasoning are from 20% to 50% in HITS@3. 1 INTRODUCTION Answering complex logical queries over Knowledge Graphs (KGs) has recently received a lot of attention due to the relevance of this task in various applications such as natural language question answering, web search or data analytics. For example, the query Who works for Amazon and has a degree from MIT? over the KG in Figure 1 can be formulated as q(X) ← degreeFrom(X, mit) ∧ worksFor(X, amazon). Answering such a query is very challenging when KGs are incomplete, which is often the case due to their (semi-)automatic construction, and obtaining complete answers typically requires further domain knowledge. For instance, mary is a missing but desired answer of q. Due to the data distribution in the KG, link prediction models might only be able to derive managerAt(mary, amazon). Therefore, in this case, further domain knowledge that managerAt implies worksFor in the ontology O of Figure 1 would be required to derive worksFor(mary, amazon) and retrieve mary as an answer for q. Recently, Knowledge Graph Embedding (KGE) techniques (Nickel et al., 2016; Wang et al., 2017) that are able to predict missing facts have been proposed for answering logical queries over incomplete KGs. The existing methods can be broadly divided into two categories: query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021). The former compute continuous query embedding representations and use them for answering queries, while the latter compute answers to a query by identifying the most likely answers to all its atoms using neural link predictors (Nickel et al., 2016), and then aggregating those answers using t-norms. While promising, such existing embedding-based methods do not account for ontologies, regarded as a KG schema that enriches the KG by describing dependencies between types and/or relations. Exploiting ontologies when querying KGs is beneficial, e.g., for simplifying query formulation and obtaining more complete answers. The task of answering logical queries in the presence of ontologies is referred to as Ontology-Mediated Query Answering (OMQA) (Bienvenu & Ortiz, 2015). On the one hand, the use of ontologies requires deductive reasoning, i.e., inferring new facts by applying ontology rules to existing facts, but ignoring missing true facts.
On the other hand, embedding methods are essentially tailored towards inductive reasoning, i.e. learning from examples: given a number of queries and their answers, they are used to predict answers to other similar queries, but they typically cannot perform ontology reasoning.
[Figure 1: An exemplary KG in which solid edges illustrate existing facts in the KG, while dashed edges indicate missing facts. The rules in O state that (1) managers at companies also work there; (2) the inverse of the relation degreeFrom is hasAlumnus; (3) assistant professors are professors; (4) teachers at organizations also work there; (5) the range of the relation teachesAt is University.]
Since large portions of expert knowledge can be conveniently encoded using ontologies, the benefits of coupling ontology reasoning and embedding methods for KG completion are evident, and have been acknowledged (e.g. see Bianchi et al., 2020; Zhang et al., 2020; Gutiérrez-Basulto & Schockaert, 2018; Kulmanov et al., 2019). However, to the best of our knowledge, such coupling has not been studied for OMQA. A natural attempt is to interchangeably complete the KG using ontology reasoning and embedding methods, and then perform query answering on top of the result. This naive procedure comes with a major scalability challenge: in practice, we need to restrict ourselves to computing merely small subsets of likely fact predictions required for answering a given query; thus more sophisticated proposals are required. To this end, we investigate three open questions: (1) How can existing OMQA techniques be adapted to the setting of KGE? (2) How do different data augmentation strategies impact the accuracy of existing embedding models on the OMQA task? and (3) Does the enforcement of ontology axioms in the embedding space via the loss function help to improve inductive and deductive reasoning performance? We answer these questions by making the following contributions:
• We formally define the task of Embedding-Based OMQA (E-OMQA) and empirically show that existing off-the-shelf KGE models applied naively perform poorly on this task.
• We propose novel ontology-driven strategies for sampling training queries as well as a loss function modification to enforce the ontology within the embedding space, and demonstrate the effectiveness of these proposals on popular representatives of query-based and atom-based models.
• Since no previous benchmarks exist for E-OMQA, we design two datasets using LUBM and NELL, which are well-known benchmarks for OMQA and embedding models, respectively.
• Extensive evaluation demonstrates improvements (20% to 50% in HITS@3) in the accuracy of E-OMQA by our methods compared to the baselines, and allows us to obtain and analyze answers to the above questions.
2 PRELIMINARIES Knowledge Graphs and Ontologies. We assume a signature Σ = 〈E, C, R〉 consisting of countable, pairwise disjoint sets E, C and R of constants (entities), concepts (types), and roles (binary relations), respectively. A knowledge graph G (a.k.a. ABox) is a set of triples, such as (mit, type, University) and (bob, worksFor, mit), formalized using Σ. These triples can also be represented as type(mit, University) and worksFor(bob, mit). An ontology O (a.k.a. TBox), e.g. O in Figure 1, is a set of axioms in Description Logics (Baader et al., 2009) over Σ.
We focus on DL-Lite_R (Artale et al., 2009), which has the following syntax: $A \sqsubseteq A'$, $A \sqsubseteq \exists p$, $\exists p \sqsubseteq A$, $\exists p^- \sqsubseteq A$, $p \sqsubseteq s$, $p^- \sqsubseteq s$, where $A, A' \in C$, $p, s \in R$, and $p^-$ denotes the inverse relation of $p$. The deductive closure $O^{\infty}(G)$ contains all (possibly infinitely many) new facts derived from G using axioms from O (e.g., type(bob, Professor) follows from (3) and type(bob, AProfessor)). Ontology-Mediated Query Answering. A query atom is an expression of the form $p(T_1, T_2)$, where $p \in R$, and each $T_i \in V \cup E$ is called a term, with V being a set of variables disjoint from E, C, and R. A monadic conjunctive query (CQ) $q(X)$ is a First-Order (FO) formula of the form $q(X) \leftarrow \exists \vec{Y}.\, p_1(\vec{T}_1) \wedge \cdots \wedge p_n(\vec{T}_n)$, where each $p_i(\vec{T}_i)$ is a query atom, and $vars(q) = X \cup \vec{Y}$ denotes the set of variables appearing in $q$, with $X \notin \vec{Y}$ being the answer variable. A monadic Existential Positive FO (EPFO) query is a union of monadic CQs (Dalvi & Suciu, 2007). For a query $q(X)$ and a KG G, a constant $a$ from G is an answer to $q(X)$ if there exists a mapping $\pi : vars(q) \mapsto E$ that maps the body (the right-hand side) of $q$ to a sub-graph of G. We denote by $q[G]$ the answers of $q$ on G (a toy evaluation sketch is given at the end of this section). Ontology-Mediated Query Answering (OMQA) concerns answering queries by accounting for both the KG and the accompanying ontology. Given a KG G and an ontology O, an entity $a$ from G is a certain answer of $q(X)$ over (G, O) if $a$ is an answer to $q(X)$ over $O^{\infty}(G)$. We use $q[G, O]$ to denote the set of certain answers of $q$ over (G, O). Let $q$ and $q'$ be two monadic queries over (G, O); then $q$ is contained in $q'$ w.r.t. O if $q[G, O] \subseteq q'[G, O]$; we call $q$ a specialization of $q'$ (written $q' \rightsquigarrow_s q$), and $q'$ a generalization of $q$ (written $q \rightsquigarrow_g q'$). Query generalizations and specializations can be obtained by exploiting ontology axioms; such a process (and its result) is referred to as query rewriting. Example 1. Consider G in Figure 1 and $q(X) \leftarrow type(X, Professor) \wedge degreeFrom(X, mit)$. Since $mat \in q[G]$, it is a certain answer. Moreover, according to O, AProfessor is a sub-type of Professor and degreeFrom is the inverse of hasAlumnus, thus bob is also a certain answer. The query $q'(X) \leftarrow type(X, AProfessor) \wedge degreeFrom(X, mit)$ is a specialization of $q$, as $mat \notin q'[G, O]$. Embedding-Based Query Answering. Recent works on KGEs for answering logical queries can be divided into two categories: query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021) models. A neural QA model maps entities and relations into a d-dimensional embedding space. It then computes a score of each entity $c$ for being an answer to a given query $q$ via a scoring function $\phi_q(\mathbf{c}) : \mathbb{R}^d \mapsto [0, 1]$, where $\mathbf{c}$ denotes the embedding vector of $c$ (bold small letters denote vector representations). Using these scoring functions, the final embedding QA function $E_G$ takes as input a query and returns answers to that query. We describe below how this is done for Query2Box and Continuous Query Decomposition (CQD). In Query2Box, entities and queries are embedded as points and boxes, respectively, in a d-dimensional vector space. A d-dimensional embedding is a function $\varphi$ that maps $c \in E \cup C$ to $\mathbf{c} \in \mathbb{R}^d$ and a query $q$ to $\mathbf{q} = (\mathrm{cen}_q, \mathrm{off}_q) \in \mathbb{R}^d \times \mathbb{R}^d_{\geq 0}$, which is used to define a query box as $\mathrm{box}_q = \{\mathbf{v} \in \mathbb{R}^d \mid \mathrm{cen}_q - \mathrm{off}_q \preceq \mathbf{v} \preceq \mathrm{cen}_q + \mathrm{off}_q\}$, where $\preceq$ is the element-wise inequality, $\mathrm{cen}_q$ is the center of the box, and $\mathrm{off}_q$ is the positive offset of the box, modeling its size. The score for an entity $c$ being an answer to $q$ is computed based on the distance from $\mathbf{c}$ to $\mathrm{box}_q$.
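To make q[G] concrete, here is a toy evaluation of a monadic CQ over a set of triples, assuming queries are lists of (predicate, term, term) atoms and variables are strings starting with an uppercase letter; this encoding is an assumption for illustration, not the paper's implementation.

```python
from itertools import product

def answers(query, answer_var, triples):
    """Enumerate all variable bindings and keep those mapping the query body into the KG."""
    entities = {t for (_, a, b) in triples for t in (a, b)}
    variables = sorted({t for (_, a, b) in query for t in (a, b) if t[0].isupper()})
    results = set()
    for binding in product(entities, repeat=len(variables)):
        env = dict(zip(variables, binding))
        subst = lambda t: env.get(t, t)
        if all((p, subst(a), subst(b)) in triples for (p, a, b) in query):
            results.add(env[answer_var])
    return results

G = {("degreeFrom", "mat", "mit"), ("worksFor", "mat", "amazon")}
q = [("degreeFrom", "X", "mit"), ("worksFor", "X", "amazon")]
print(answers(q, "X", G))  # {'mat'}
```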
A prominent representative of the second category, Continuous Query Decomposition (Arakelyan et al., 2021), reduces the task of answering a complex query to that of answering each of its sub-queries. It relies on neural link predictors for answering atomic sub-queries, and aggregates the resulting scores via t-norms. 3 EMBEDDING-BASED ONTOLOGY-MEDIATED QUERY ANSWERING Inductive and deductive reasoning complement each other; thus, combining both yields more complete answers to queries. To target such a combination, we define an embedding-based QA function that can additionally apply ontology rules to answer queries. Definition 1 (E-OMQA). Let G be a KG, let O be an ontology, and let $G_i$ be an ideal completion of G. An embedding QA function $E_G$ is reliable if for any query $q$ and entity $a$ we have that $a \in E_G(q)$ iff $a \in q[G_i]$. Moreover, $E_G$ is ontology-aware iff $a \in q[G_i, O]$. The problem of embedding-based OMQA is to obtain an embedding QA function that is both reliable and ontology-aware. Note that $q[G_i, O]$ subsumes both $q[G_i]$, the answers requiring inductive reasoning, and $q[G, O]$, the answers computed via deductive reasoning. We proceed to present several methods for E-OMQA. Query Rewriting over Pre-trained Models. In the traditional OMQA setting, each query $q$ can be evaluated by first rewriting $q$ into a set of FO queries $q_O$, and then evaluating each query in $q_O$ over G alone. In our case, this amounts to constructing an embedding QA function $E_G$ aware of G alone, and using it to compute the answers to all queries in $q_O$ rather than only to the query $q$. Example 2. For G, O in Figure 1 and the queries $q(X) \leftarrow degreeFrom(X, mit) \wedge worksFor(X, amazon)$ and $q'(X) \leftarrow degreeFrom(X, mit) \wedge managerAt(X, amazon)$, $q_O$ contains $q$ and $q'$ among others, and to approximate $q[G_i, O]$ we take the $E_G$-based answers of all queries in $q_O$. Ontology-Aware Models. An alternative to query rewriting is to develop an embedding QA function that accounts for the axioms in O. To the best of our knowledge, there are no KGE models that directly address the problem of E-OMQA. Therefore, we suggest the following two options: (1) train existing embedding models for logical QA on the data derived from $O^{\infty}(G)$ instead of G (a sketch of such materialization is given at the end of this section); (2) develop an ontology-aware embedding model that will be trained on G, but will have special terms in the training objective structurally enforcing O. While the proposed approaches can be realized on top of any embedding model for logical QA, in this work we verify their effectiveness on two prominent recent embedding models: Query2Box and CQD. Regarding (1), in Section 3.1 we present several methods for effective ontology-driven training. As for (2), building on Query2Box, in Section 3.2 we develop an ontology-aware embedding model. Finally, we use the query-rewriting method over embeddings as a baseline in Section 4. 3.1 ONTOLOGY-DRIVEN DATA SAMPLING Let $Q_G$ be the set of all possible EPFO monadic queries that can be formed using the signature Σ. To answer any such query, existing embedding models are trained on a set of sampled queries of certain shapes and their answers over the KG G. For instance, queries in Ren et al. (2020) have multiple atoms, while in Arakelyan et al. (2021) they are atomic (e.g., $q(X) \leftarrow worksFor(X, mit)$). Usually, the set of training queries does not take the schema into account.
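As a minimal illustration of option (1), the following sketch materializes the deductive closure for the three axiom types appearing in Figure 1's ontology (role inclusion, inverse role, range); the tuple-based axiom encoding is an assumption made for illustration.

```python
def closure(triples, role_incl=(), inverses=(), ranges=()):
    """Apply the axioms to a fixpoint: role_incl holds pairs (r, s) for r ⊑ s,
    inverses holds pairs (r, s) meaning s is the inverse of r, and ranges holds
    pairs (r, A) for ∃r⁻ ⊑ A (the range of r is A)."""
    facts = set(triples)
    while True:
        new = set()
        for (p, a, b) in facts:
            new |= {(s, a, b) for (r, s) in role_incl if r == p}
            new |= {(s, b, a) for (r, s) in inverses if r == p}
            new |= {("type", b, A) for (r, A) in ranges if r == p}
        if new <= facts:
            return facts
        facts |= new

G = {("teachesAt", "bob", "mit")}
print(sorted(closure(G,
                     role_incl=[("teachesAt", "worksFor")],
                     inverses=[("degreeFrom", "hasAlumnus")],
                     ranges=[("teachesAt", "University")])))
```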
For example, in Ren et al. (2020), the queries are randomly selected from $Q_G$ and used for training the model along with their answers over G as positive examples and randomly generated non-answers as negative examples. However, if the ontology is present along with the KG, this procedure is not guaranteed to capture the ontology axioms, and using all possible queries from $Q_G$ may be infeasible in practice. In the following, we discuss various options for sampling queries to train ontology-aware KGE models. Certain Answer and Query Rewriting-Based Sampling. The first natural approach for query sampling is to select queries along with their certain answers instead of the standard answers. For ontology languages such as those in the DL-Lite family (Artale et al., 2009), computing certain answers can be done efficiently. An example of this training case is to randomly sample the query $q(Y) \leftarrow \exists X.\, hasAlumnus(mit, X) \wedge worksFor(X, Y)$ and, given (G, O) in Figure 1, use it along with all its certain answers (mit, yale) during training. To account for the ontology, we can randomly sample queries over the KG and also add all of their generalizations and specializations obtained using the rules in Table 1. To rewrite a query, we select an atom and apply an ontology axiom; e.g., the first rule (R1) applies a concept inclusion axiom, while (R6) applies a role inclusion. The specializations of a given query $q$ (denoted $Spec(q)$) incorporate specific information regarding the answers of $q$, while the generalizations of $q$ (i.e. $Gen(q)$) incorporate additional related entities. Example 3. Consider $q_1(X) \leftarrow type(X, University)$ and $q_2(X) \leftarrow \exists Z.\, teachesAt(Z, X)$. Using R2 in Table 1 and (5) in Figure 1 we get $q_1 \rightsquigarrow_s q_2$. Similarly, in Example 2, $q' \rightsquigarrow_g q$ using R6 and (1). In general, there are exponentially many rewritings; thus, to keep the training size reasonable, we fix a rewriting depth, up to which the respective training queries are generated, via a dedicated parameter. Strategic Ontology-Based Sampling. While adding generalizations and specializations of randomly selected queries should capture some parts of the ontological background knowledge, many relevant queries might still be missed. For example, if O also contains $\exists worksFor^- \sqsubseteq Organization$, queries such as $q(X) \leftarrow \exists Y.\, managerAt(Y, X) \wedge type(X, Organization)$ are likely to be disregarded during training. Therefore, another training approach that we propose is to leverage the ontology to strategically generate the training queries. For that, we first formalize the set of target queries by means of a query template graph, i.e., a directed acyclic graph (DAG) (N, E), where N is a set of nodes and E ⊆ N × N is a set of directed edges. Such a DAG captures the shape of each query. The set of target queries is then obtained by applying a labeling function that assigns symbols in Σ to nodes and edges. Definition 2 (Query Shape). A query shape S is a tuple (N, E, n) such that (N, E) is a DAG and n ∈ N is the distinguished node of S (i.e., the node corresponding to the answer variable). For a given set of relations and constants in Σ, a labeling function $f : N \cup E \mapsto \Sigma \cup V$ maps each node to either a variable or an entity and each edge to a relation symbol in Σ. Our goal is to exploit the ontology to label query shapes so as to create semantically meaningful queries. Towards that, let $\sqsubseteq^*$ be the reflexive and transitive closure of $\sqsubseteq$. Then, for a given relation $p$:
- $inv(p) = \{p' \mid p \sqsubseteq p'^- \in O\}$,
- $dom(p) = \{A \mid \exists p' \sqsubseteq A' \in O \text{ s.t. } p \sqsubseteq^* p', A \sqsubseteq^* A' \text{ or } A' \sqsubseteq^* A\}$,
- $range(p) = \{A \mid \exists p'^- \sqsubseteq A' \in O \text{ s.t. } p \sqsubseteq^* p', A \sqsubseteq^* A' \text{ or } A' \sqsubseteq^* A\}$,
- $follows(p) = \{p' \mid range(p) \cap dom(p') \neq \emptyset\}$,
- $inter_r(p) = \{p' \mid range(p) \cap range(p') \neq \emptyset$, or there are $p_1 \in inv(p)$, $p_2 \in inv(p')$ with $dom(p_1) \cap dom(p_2) \neq \emptyset\}$,
- $inter_d(p) = \{p' \mid dom(p) \cap dom(p') \neq \emptyset$, or there are $p_1 \in inv(p)$, $p_2 \in inv(p')$ with $range(p_1) \cap range(p_2) \neq \emptyset\}$.
Intuitively, for a given relation $p$, the set $inv(p)$ contains all inverse relations of $p$, $dom(p)$ contains all domain types for $p$, $range(p)$ all range types for $p$, $follows(p)$ stores all relations $p'$ which can follow $p$, and $inter_r(p)$, $inter_d(p)$ contain, resp., all relations $p'$ which can intersect $p$ on range and domain positions. Then, for each shape, we label nodes and edges to create queries that are valid w.r.t. O, as illustrated in Figure 2. Note that this query sampling process uses only the ontology, and is thus data independent. However, if the ontology does not capture additional data patterns, we can proceed in a bottom-up fashion. We randomly take some labeled query shapes which produce answers, and construct their generalizations as before. 3.2 AN ONTOLOGY-AWARE TRAINING OBJECTIVE In this section, we present our novel training objective function employed by Query2Box. Recall that when embedding a query, the Query2Box model defines a box in the embedding space, s.t. the answer entities of the given query are mapped to points located inside the box. Note that for every ontological axiom, both its left- and right-hand sides can be turned into queries. We observe that when embedding those queries as boxes, ontological axioms can be naturally injected into the model if the inclusion of the boxes corresponding to the respective queries is ensured in the vector space. Example 4. In Figure 3, the entities and relations are embedded into the vector space as points and projection operators, resp. The embedding of $q(Y) \leftarrow \exists X.\, hasAlumnus(mit, X) \wedge worksFor(X, Y)$ is represented by the larger grey box, obtained by applying the projection hasAlumnus to the embedding of entity mit, followed by the projection on worksFor. To enforce $teachesAt \sqsubseteq worksFor$, we ensure that the box corresponding to $q'(Y) \leftarrow \exists X.\, hasAlumnus(mit, X) \wedge teachesAt(X, Y)$ is contained in the box corresponding to $q$. The goal is to learn the embedding of queries such that the distance between the box corresponding to the query and its answers is minimized, while the distance from this box to negative samples is maximized. Similarly to Ren et al. (2020), we define the distance between $\mathbf{q} \in \mathbb{R}^d \times \mathbb{R}^d_{\geq 0}$ and $\mathbf{v} \in \mathbb{R}^d$ as $d(\mathbf{q}, \mathbf{v}) = \|\mathrm{cen}_q - \mathbf{v}\|_1$, namely the L1 distance from the entity $\mathbf{v}$ to the center of the box. Using the sigmoid function we transform the distance into the (0, 1) interval, that is, $p(\mathbf{v} \mid \mathbf{q}) = \sigma(-(d(\mathbf{q}, \mathbf{v}) - \gamma))$, where $\gamma > 0$ is a margin; this denotes the probability of $v \in q[O, G_i]$. For a query $q$, let $Gen(q) = \{q_1, \ldots, q_n\}$ be the set of all generalizations of $q$ based on O. Given a training query $q$ and its certain answer $v \in q[G, O]$, we aim at maximizing $\prod_{i=1}^{n} p(\mathbf{v} \mid \mathbf{q}_i)^{\beta_i}$, where $\beta_i \geq 0$ is a weighting parameter for all $i = 1, \ldots, n$. This is achieved by minimizing the negative log-likelihood:2 $-\log\big(\prod_{i=1}^{n} p(\mathbf{v} \mid \mathbf{q}_i)^{\beta_i}\big) = -\sum_{i=1}^{n} \beta_i \log p(\mathbf{v} \mid \mathbf{q}_i)$. By exploiting the fact that $\sigma(x) = 1 - \sigma(-x)$, for any $v'_j \notin q[G, O]$ we have that $1 - p(\mathbf{v}' \mid \mathbf{q}) = \sigma(d(\mathbf{q}, \mathbf{v}') - \gamma)$. Our goal is to ensure that if $q'$ is a generalization of a given training query $q$ w.r.t. O, then the box of $q'$ contains the box of $q$.
In other words, if $a$ is an answer to the query $q$, then not only should the distance between $a$ and $q$ be minimized, but also the distance between $a$ and $q'$, as well as between $a$ and all other generalizations of $q$. The following training objective reflects our goal: $\mathcal{L} = -\sum_{i=1}^{n} \beta_i \log \sigma\big(\gamma - d(\mathbf{v}, \mathbf{q}_i)\big) - \sum_{j=1}^{k} \frac{1}{k} \log \sigma\big(d(\mathbf{v}'_j, \mathbf{q}) - \gamma\big)$, where $v'_j \notin q[G, O]$ is a random entity for all $j = 1, \ldots, k$, obtained via negative sampling (a code sketch of this objective is given at the end of this section). In our experiments, we use $\beta_i = |Gen(q)|^{-1} = 1/n$. Example 5. Consider $q(Y) \leftarrow \exists X.\, hasAlumnus(mit, X) \wedge type(X, AProfessor) \wedge teachesAt(X, Y)$. We have $Gen(q) = \{q_1, q_2, q_3\}$, where $q_1$ is obtained from $q$ by substituting teachesAt with worksFor, while $q_2$ is $q$ with type(X, Professor) instead of type(X, AProfessor). In $q_3$ the first, second and third atoms are, resp., the same as in $q$, $q_1$ and $q_2$. It holds that $q[G, O] = \{yale\}$, hence our training objective is to minimize the distance between yale (the embedding of yale) and $\mathbf{q}$, as well as the distances between yale and the boxes of $q_1$, $q_2$ and $q_3$ (denoted by $\mathbf{q}_1$, $\mathbf{q}_2$ and $\mathbf{q}_3$). Note that, conceptually, our training data sampling techniques and the loss function modifications are flexible in terms of the Description Logic in which the ontology is encoded. The only restriction is the existence of efficient query rewriting algorithms for this DL. In this work, we focused on DL-Lite_R, since the majority of available ontologies belong to this language. 4 EVALUATION In this section, we evaluate the proposed training strategies on two recent embedding models for QA: the Query2Box model (Q2B; Ren et al., 2020) and Continuous Query Decomposition (CQD; Arakelyan et al., 2021). We also measure the effectiveness of the newly introduced training objective for the Q2B model (called O2B). All models are evaluated in different settings to measure their ability to perform inductive reasoning, deductive reasoning, and their combination.3 4.1 EXPERIMENTAL SETUP We have configured both the Q2B and O2B systems as follows: we set the size of the embedding dimension to 400, and trained the models for $15 \times 10^4$ steps using Adam (Kingma & Ba, 2015) with an initial learning rate of $10^{-4}$ and a batch size of 512. We evaluated the models periodically and report the test results of the models which have the best performance on the validation dataset. For CQD we used the following configuration: ComplEx-N3 (Lacroix et al., 2018) as the underlying neural link predictor, with the embedding size set to 1000 and the regularisation weight selected based on the validation set by searching in $\{10^{-3}, 5 \times 10^{-3}, \ldots, 10^{-1}\}$.
2 The log is strictly monotonically increasing; thus, it will not change the maximization, it only changes the product into a summation. During training we consider a minimization, which motivates the negative sign.
3 Code and data are available at https://tinyurl.com/66hbhppc.
Query and Answers Sampling. We use the same types of queries (corresponding to directed acyclic graphs with entities as the source nodes, also known as anchors) as Ren et al. (2020) (see Figure 5 in the Appendix). We consider each input KG G to be the ideal completion (i.e. $G_i$) and then partition it into $G_{valid}$ for validation and $G_{train}$ for training by discarding 10% of edges at each step; this yields $G_{train} \subsetneq G_{valid} \subsetneq G$. We then create several training sets of queries according to our ontology-aware data sampling strategies from Section 3.1.
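A minimal PyTorch sketch of this objective, assuming queries are already embedded as boxes and using the simplified distance $d(\mathbf{q}, \mathbf{v}) = \|\mathrm{cen}_q - \mathbf{v}\|_1$ from above, with $\beta_i = 1/n$ as in the experiments; tensor shapes and the margin value are assumptions for illustration.

```python
import torch

def box_distance(centers, v):
    # L1 distance from entity embedding(s) v to box center(s), broadcasting as needed.
    return (centers - v).abs().sum(dim=-1)

def onto_loss(answer, gen_centers, neg_entities, q_center, gamma=24.0):
    # answer: (d,) certain-answer embedding; gen_centers: (n, d) centers of q and
    # its generalizations; neg_entities: (k, d) negative samples; q_center: (d,).
    n = gen_centers.shape[0]
    pos = torch.nn.functional.logsigmoid(gamma - box_distance(gen_centers, answer))
    neg = torch.nn.functional.logsigmoid(box_distance(q_center, neg_entities) - gamma)
    return -pos.sum() / n - neg.mean()

d = 8
loss = onto_loss(torch.randn(d), torch.randn(3, d), torch.randn(5, d), torch.randn(d))
print(loss.item())
```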
More specifically, these include:
plain: the training queries are randomly sampled based on the signature of $G_{train}$, with their plain answers, i.e. computed over $G_{train}$.
gen: queries from plain augmented with their ontology-based generalizations4; all answers are certain, i.e. computed over $O^{\infty}(G_{train})$.
spec: queries from gen augmented with their ontology-based specializations; all answers are certain answers as well.
onto: queries constructed relying on the ontology axioms as introduced in Section 3.1, for which we randomly choose a percentage of valid entities as anchors; all answers are certain.
4 This setting is similar to random sampling over $O^{\infty}(G_{train})$, but unlike the deductive closure, our procedure is guaranteed to terminate. We used a rewriting depth of up to 10.
Following Ren & Leskovec (2020), the training query shapes are the first five ones in Figure 5 (1p–3i); non-compliant specializations and generalizations are discarded. Q2B and O2B are trained on all five query shapes, while CQD is trained only on 1p queries (Arakelyan et al., 2021). Evaluation Procedure. For each trained model we measure its performance using the standard metric HITS at K for K = 3 (HITS@3), which indicates the frequency with which the correct answer is ranked among the top-3 results (the higher, the better). We use this metric for measuring the reliability and ontology-awareness of the resulting models (as in Definition 1):
Inductive case (I). Evaluating the inductive reasoning ability (the standard test case): is the model able to predict missing answers to queries over the ideal completion $G_i$?
Deductive case (D). Evaluating the deductive reasoning ability: is the model able to predict answers that can be inferred from the known triples in $G_{train}$ using ontology axioms?
Inductive + Deductive case (I+D). The combination of I and D: is the model able to predict missing answers that are inferred from the ideal completion $G_i$ using axioms from O?
For I, we randomly generate validation and test queries over $G_{valid}$ and the input G, in such a way that for each validation query $q$ we have $q[G_{train}] \subsetneq q[G_{valid}]$, and for each test query $q$ we have $q[G_{valid}] \subsetneq q[G]$. For D, we randomly generate evaluation queries over $O^{\infty}(G_{train})$ s.t. they are not trivially answered over $G_{train}$. Moreover, each validation query is unseen during training, and each test query is unseen during training and validation. For I+D, we proceed as for I, but use $O^{\infty}(G_{valid})$ and $O^{\infty}(G)$ to sample validation and test queries and their answers. In each test case all shapes in Figure 5 are sampled, and we measure accuracy based on so-called hard answers, which cannot be trivially retrieved from $G_{train}$ and require the prediction of missing edges and/or the application of some ontology axioms. A hard answer for case I is an answer in $q[G] \setminus q[G_{valid}]$, for D it is an answer in $q[O^{\infty}(G_{train})] \setminus q[G_{train}]$, while for I+D it is an answer in $q[O^{\infty}(G)] \setminus q[O^{\infty}(G_{valid})]$. Models and Datasets. We consider Q2B, O2B and CQD trained in each described setting, i.e., $M_x$, where $M \in \{Q2B, O2B, CQD\}$ and $x \in \{plain, gen, spec, onto\}$. Additionally, we consider the use of the query-rewriting method on top of each model pre-trained using the plain strategy, denoted by $M^{rew}_{plain}$; i.e., $Q2B_{plain}$, $Q2B^{rew}_{plain}$ and $CQD_{plain}$, $CQD^{rew}_{plain}$, respectively, are used as baselines.
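The nested splits $G_{train} \subsetneq G_{valid} \subsetneq G$ described in the previous paragraph can be produced, for instance, as follows; the shuffle-based scheme and fixed seed are assumptions for illustration rather than the paper's exact procedure.

```python
import random

def split_kg(triples, frac=0.1, seed=0):
    """Drop `frac` of the edges twice, yielding G_train ⊊ G_valid ⊊ G."""
    g = sorted(triples)
    random.Random(seed).shuffle(g)
    g_valid = g[: int(len(g) * (1 - frac))]
    g_train = g_valid[: int(len(g_valid) * (1 - frac))]
    return set(g_train), set(g_valid), set(g)

g_train, g_valid, g = split_kg({("worksFor", "bob", "mit"), ("teachesAt", "bob", "mit"),
                                ("degreeFrom", "mat", "mit"), ("type", "mit", "University")})
print(len(g_train), len(g_valid), len(g))  # 2 3 4
```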
We evaluate the proposed training methods, as well as the novel training objective, on two datasets: NELL (Carlson et al., 2010), a general-purpose real-world KG, and LUBM (Guo et al., 2005), a domain-specific synthetic dataset describing the university domain. We selected these datasets as they are among the few large KGs that have ontologies (see the Appendix for statistics). 4.2 EVALUATION RESULTS For the standard test case I, the baseline Q2B_plain performs best on LUBM, while CQD_plain outperforms the other models and configurations on NELL. This is not surprising: ontologies are not effective when coping with missing edges and facts in a KG beyond those that they can deductively infer. In fact, if, statistically, the patterns reflected by ontologies do not hold in the data, ontology-aware training strategies might worsen the prediction quality. The second observation is that query rewriting over embedding models only slightly improves the prediction accuracy: $Q2B^{rew}_{plain}$, $O2B^{rew}_{plain}$ and $CQD^{rew}_{plain}$ result in at most a 10% enhancement on test cases D and I+D over Q2B_plain, O2B_plain and CQD_plain, respectively. These limited improvements are likely due to the incompleteness of the rewriting procedure caused by the restriction on the queries supported by the models. The results on I and query rewriting over embeddings are in Appendix E.1. Next, we discuss our main observations for the other, more interesting test cases D and I+D. The results are reported in Table 2 and visually illustrated in Figure 4. Overall, the effectiveness of the proposed solutions is evident: for I+D on LUBM the improvements are of almost 50% for Query2Box and 54% for CQD, while for NELL they are of almost 20% for Query2Box and 25% for CQD. Performance of Training Strategies. The results on certain answer prediction (D and I+D) show that none of the baselines is able to capture the domain knowledge expressed in the ontology, and thus they cannot be used directly for OMQA. Our ontology-aware model O2B_plain outperforms the other models trained on plain, but the incorporation of certain answers and generalizations via our training strategies leads to better results. The proposed training methods from Section 3.1 significantly improved the accuracy for test cases D and I+D. For all models, generating training queries by taking the ontology into account yields improvements. This observation holds already when augmenting the set of random queries by choosing their generalizations, though the addition of specializations does not seem to have a major impact. We observed that randomly selecting training queries, as usually done in the literature, does not result in the most accurate models. On LUBM, for all models, the advantage of the ontology-driven query sampling (i.e. the onto setting) is significant compared to all other settings. Remarkably, for LUBM, CQD_onto, trained on less data than CQD_gen or CQD_spe, results in higher accuracy. This shows that random sampling is not adequate for OMQA. For NELL, to keep the size of the training set reasonable, we chose a much lower number of anchors, obtaining a significantly lower number of atomic queries (details in the Appendix); however, since Q2B and O2B use information from other query shapes, the onto setting still outperforms all others, unlike for CQD, which relies only on atomic queries. Evaluation of the Ontology-Aware Training Objective. The model O2B_onto has far better accuracy on cases D and I+D than the Q2B baseline.
This shows that the enforcement of ontology axioms in the embedding space together with strategic ontology-driven training provides a significant improvement, especially for LUBM, which has a more expressive ontology. Furthermore, the improvement of O2B_plain over Q2B_plain shows that we are able to partially incorporate the domain knowledge into the embedding model without explicitly training on certain answers. 5 RELATED WORK The task of answering queries that involve multiple atoms using embedding techniques has recently received a lot of attention. The existing proposals can be divided into query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021; Sun et al., 2020) and atom-based (Arakelyan et al., 2021). Friedman & den Broeck (2020) and Borgwardt et al. (2019) study the relation between the problem of conjunctive QA in the embedding space and over probabilistic databases. Our work differs from the above proposals in that, along with the data, we also rely on ontologies to answer queries. Integration of ontologies into KG embeddings has been studied by, e.g., Krompaß et al. (2015); Minervini et al. (2017); Hao et al. (2019); Guo et al. (2016); Rocktäschel et al. (2015); Demeester et al. (2016); Kazemi & Poole (2018); Fatemi et al. (2019); Abboud et al. (2020), but these works do not capture all supported axioms and focus on link prediction rather than QA. The capability of embeddings to model hierarchical data has been explored by Patel et al. (2020); Idahl et al. (2019); Gutiérrez-Basulto & Schockaert (2018). In particular, Idahl et al. (2019) aim at interpreting embeddings by finding concept spaces in node embeddings and linking them to a simple external type hierarchy; this is different from our method for OMQA over embeddings. In Gutiérrez-Basulto & Schockaert (2018), conceptual space representations of known concepts are learned by associating a Gaussian distribution with each concept over a learned vector space. Constructing models for EL ontologies in the embedding space (Kulmanov et al., 2019) is another relevant direction. While Gutiérrez-Basulto & Schockaert (2018); Kulmanov et al. (2019) are related to our work, they do not touch upon the problem of OMQA. The OMQA problem has been actively studied (see e.g. Schneider & Simkus (2020) for an overview), but available methods only focus on purely logic-based deductive reasoning, without aiming at simultaneously handling missing links. 6 CONCLUSION We have presented methods for Ontology-Mediated Query Answering that operate in the embedding space to enable simultaneous inductive and deductive reasoning over incomplete data. To the best of our knowledge, this is the first work on embedding-based OMQA. We have empirically demonstrated that embedding-based methods for QA applied naively or combined with query rewriting techniques are not effective. In our work, we have proposed solutions for making the existing models ontology-aware via ontology-driven training sampling strategies and loss function modifications. The improvements in the accuracy on prominent query-based and atom-based models range from 20% to 50% compared to the baselines. We believe that this work opens interesting perspectives for combining OMQA methods, with roots in knowledge representation, and embedding techniques from the machine learning area. Reproducibility Statement. Code, data, and instructions for reproducing all experiments are available at https://tinyurl.com/66hbhppc.
The hyperparameters are presented in Appendix F. A DESCRIPTION LOGICS ONTOLOGIES The syntax of DL-Lite_R ontologies and its translation into rule-based syntax are given in Table 3. The semantics of DL ontologies is defined using FO interpretations $(\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ consisting of a non-empty domain $\Delta^{\mathcal{I}}$ and an interpretation function $\cdot^{\mathcal{I}}$, which maps each entity $e$ to an element $e^{\mathcal{I}} \in \Delta^{\mathcal{I}}$, each concept name $A$ to a subset $A^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}}$, and each role name $r$ to a binary relation $r^{\mathcal{I}} \subseteq \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$. The interpretation function $\cdot^{\mathcal{I}}$ is extended to complex concepts as follows: $(\exists p)^{\mathcal{I}} = \{d \in \Delta^{\mathcal{I}} \mid \exists d' . (d, d') \in p^{\mathcal{I}}\}$ and $(p^{-})^{\mathcal{I}} = \{(d', d) \mid (d, d') \in p^{\mathcal{I}}\}$. An interpretation $\mathcal{I}$ satisfies a concept inclusion $C \sqsubseteq D$ iff $C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$, and $\mathcal{I}$ satisfies a role inclusion $p \sqsubseteq s$ iff $p^{\mathcal{I}} \subseteq s^{\mathcal{I}}$. Finally, $\mathcal{I}$ is a model of an ontology $O$ if it satisfies all concept and role inclusions in $O$. The notion of modelhood is applied also to a KG $G$ as follows: an interpretation $\mathcal{I}$ satisfies a fact $A(c)$ (i.e., type(c, A)), resp. $p(c, c')$, if $c \in A^{\mathcal{I}}$, resp. $(c, c') \in p^{\mathcal{I}}$. Given a KG $G$ and an ontology $O$, an interpretation $\mathcal{I}$ is a model of $G$ w.r.t. $O$ if $\mathcal{I}$ satisfies each fact in $G$ and each axiom in $O$. In the OMQA setting, to answer a given query, we need to evaluate it over each such model; in the case of DL-Lite_R ontologies, for computing answers to ontology-mediated queries we can rely on the deductive closure $O^{\infty}(G)$, since the model constructed from $O^{\infty}(G)$ can be homomorphically mapped to every other model. B TRACTABILITY OF REWRITING-BASED QUERY GENERATION For an arbitrary DL-Lite_R ontology $O$ and an arbitrary existential positive FO query $q$, let $Spe(q, O) = \{q' \mid q \rightsquigarrow_{s}^{*} q'\}$ be the set of all specializations of $q$ w.r.t. $O$, modulo variable renamings, obtained by exhaustively applying the $\rightsquigarrow_{s}$ rules from Table 1. Similarly, let $Gen(q, O) = \{q' \mid q \rightsquigarrow_{g}^{*} q'\}$ be the set of all generalizations of $q$ w.r.t. $O$, modulo variable renamings, obtained by exhaustively applying the $\rightsquigarrow_{g}$ rules. The following proposition states that our training strategies based on query rewriting are tractable. Proposition 1. Let $O$ be an arbitrary DL-Lite_R ontology, and let $q$ be an existential positive FO query. Then, $Spe(q, O)$ and $Gen(q, O)$ are finite and can be computed in time that is polynomial in the size of $O$. Proof (Sketch). The rewriting rules we propose simulate the standard rewriting for DL-Lite_R. Thus, it follows from Lemma 34 in Calvanese et al. (2007) that $Spe(q, O)$ is finite. Moreover, based on Lemma 42 in Calvanese et al. (2007), it follows that there exists a procedure to compute $Spe(q, O)$ in time that is polynomial in the size of $O$. Since the generalization procedure is similar, only applying the axioms in the other direction, we also conclude that $Gen(q, O)$ is finite and polynomially bounded by $O$. C QUERY2BOX GEOMETRIC OPERATIONS We now describe the geometric operators employed in the Query2Box model. Projection. Let $S \subseteq E \cup C$ be a set of entities, and $r \in R$ a relation. Intuitively, the projection operator performs graph traversal, e.g. given an embedding of entity $e$, the projection operator for the relation $r$ provides the box corresponding to the set $\{e' \in E \cup C \mid r(e, e') \in G\}$. Given the embedding $\mathbf{r} = (\mathrm{cen}_r, \mathrm{off}_r) \in \mathbb{R}^d \times \mathbb{R}^d_{\geq 0}$ for the relation $r$, we model the projection of a box $\mathbf{v} = (\mathrm{cen}_v, \mathrm{off}_v)$ by applying the element-wise summation $\mathbf{v} + \mathbf{r} = (\mathrm{cen}_v + \mathrm{cen}_r, \mathrm{off}_v + \mathrm{off}_r)$. This relational translation operation (Bordes et al., 2013) corresponds to the translation and enlargement of the box $\mathbf{v}$. Intersection. Given a set of entity sets $\{S_1, \ldots, S_n\}$, the intersection operator computes the intersection of these sets.
Recall that each set of entities is represented by a box in the Query2Box model. The intersection $\mathbf{w} = (\mathrm{cen}_w, \mathrm{off}_w)$ of a set of boxes $\{(\mathrm{cen}_{v_1}, \mathrm{off}_{v_1}), \ldots, (\mathrm{cen}_{v_n}, \mathrm{off}_{v_n})\}$ corresponding to the set $\{S_1, \ldots, S_n\}$ is modeled by applying the following operations: $\mathrm{cen}_w = \sum_{i=1}^{n} \Phi\big(\mathrm{NN}(\mathrm{cen}_{v_1}), \ldots, \mathrm{NN}(\mathrm{cen}_{v_n})\big)_i \odot \mathrm{cen}_{v_i}$ and $\mathrm{off}_w = \min(\mathrm{off}_{v_1}, \ldots, \mathrm{off}_{v_n}) \odot \sigma\big(\Psi(\mathrm{off}_{v_1}, \ldots, \mathrm{off}_{v_n})\big)$, where $\odot$ and $\min$ denote the element-wise multiplication and minimum, respectively. $\mathrm{NN} : \mathbb{R}^d \to \mathbb{R}^d$ is a 2-layer feed-forward neural network having the same dimensionality for the hidden layers as for the input layer. $\Phi$ and $\sigma$ stand for the softmax and sigmoid functions, resp., applied in a dimension-wise manner. $\Psi$ is a permutation-invariant function composed of a 2-layer feed-forward network followed by an element-wise mean operation and a linear transformation. The center $\mathrm{cen}_w$ is calculated as the weighted mean of the box centers $\mathrm{cen}_{v_1}, \ldots, \mathrm{cen}_{v_n}$. This geometric intersection provides a smaller box that lies inside a given set of boxes; for more details we refer to Ren et al. (2020). D DATA AND QUERY STATISTICS Following the procedure in the literature, each input KG is completed w.r.t. inverse edges. For the considered datasets, in Table 4 we present the number of ontology axioms of various types as well as the number of (materialized) triples, entities and relations. In our experiments, we have considered both complex and simple ontologies. Indeed, LUBM has a rich ontology including domain and range axioms as well as concept and role inclusions, while the NELL KG is accompanied by a simpler ontology containing only (inverse) role inclusions. The size of each training/testing set, as well as the number of queries per shape for each of the considered settings, is presented in Table 5, while each query shape is illustrated in Figure 5. Note that for NELL, the plain data is exactly the one from Ren et al. (2021). We observe that the numbers of 1p queries obtained for the gen and spe settings are identical. This is probably because the set of 1p queries in plain covers all edges in the train KG. This explains the high accuracy of CQD_gen and CQD_spe on test case D. Moreover, the NELL ontology does not contain interesting axioms that can be leveraged by the ontology-driven query sampling technique, thus to obtain onto we had to rely on the patterns from the data alone. Since there are too many queries to choose from, due to the large number of relations, we had to select a smaller number of valid entities as anchors, namely 20–30%. This explains the small number of 1p queries. For the LUBM dataset, we have created the training and testing sets from scratch, and the 1p queries in plain do not contain the entire training KG. The onto set of queries leverages the proposed ontology-driven technique, given that the ontology covers all relations and concepts in the KG and describes how they interact, i.e. the ontology axioms support all the constructed queries, and we chose 50% of valid entities as anchors. E EXTENDED DISCUSSION OF EXPERIMENTAL RESULTS In this section, we present more insights into our results on the inductive case I and the performance of the query-rewriting baselines. E.1 RESULTS ON INDUCTIVE TEST CASE In Figure 6, we present the average HITS@3 for the inductive test case I. As previously discussed, we see no improvement of ontology-injection methods upon answering queries over incomplete KGs without taking certain answers into account.
Indeed, Q2B_plain outperforms all other models on LUBM, while CQD_plain performs best on NELL for this test case. This behaviour is expected, since ontologies cannot handle missing edges and facts in a KG that are not inferred from the data using ontological reasoning. E.2 QUERY REWRITING OVER PRE-TRAINED EMBEDDING MODELS In order to evaluate the target procedure that performs query rewriting over pre-trained embeddings for QA, for each hard answer a we take the best (i.e., minimum) ranking among all rankings generated by all queries in the rewriting of each test query. In other words, we take the minimal distance between the embedding of a and all rewritings of q. Note that, for measuring the performance, we use the pre-trained models Q2B_plain, CQD_plain and O2B_plain, obtained after 450K training steps. Due to the reliance of the respective models on particular query shapes, the complete rewriting of each query is not guaranteed. In Table 6, we present the results for this method compared to the plain setting. Minor improvements of only at most 10% are observed. We have also used our pre-trained O2B_plain model as a possible way to cope with this issue, and indeed it outperforms all other baselines. In fact, on NELL, O2B_plain becomes relatively competitive even compared to the other ontology-aware models that have been trained using more advanced ontology-driven training strategies. However, for the richer ontology that comes with the LUBM KG, the improvements are still not sufficient. E.3 DATA AUGMENTATION In Figure 7, we present the performance of each model and the number of training queries needed in each training setting. In general, increasing the number of training queries naturally leads to better performance, with the exception of the spe setting, whose training data contains all queries from gen but whose performance is comparable or slightly lower. The onto setting boosts the performance for almost all models. In particular, on LUBM, which has a richer ontology, the increase in performance is much higher compared to the setting in which query generalizations and certain answers are included. It is worth noting that the number of 1p queries is smaller for onto than for gen, but CQD_onto performs much better than CQD_gen, which demonstrates the effectiveness of our proposed ontology-driven training strategy. F HYPERPARAMETERS FOR Q2B, O2B AND CQD For Q2B we have used the code5 from Ren & Leskovec (2020). Our extension of this code with the implementation of the novel training objective is available online6. The systems Q2B and O2B have been configured as follows: we set the size of the embedding dimension to 400 and trained the models for $15 \times 10^4$ steps using Adam (Kingma & Ba, 2015) with an initial learning rate of $10^{-4}$ and a batch size of 512. The rest of the parameters were set in the same way as in Ren et al. (2020). We evaluated the models periodically and report the test results of the models which have the best performance on the validation dataset. For CQD, we used the code shared by Arakelyan et al. (2021)7, using ComplEx-N3 (Lacroix et al., 2018) as the base model, where the embedding size was set to 1000 and the regularisation weight was selected based on the validation set by searching in $\{10^{-3}, 5 \times 10^{-3}, \ldots, 10^{-1}\}$. For LUBM, the regularization weight was set to 0.1 in the gen, spe and onto settings, and to 0.01 in the plain setting.
For NELL, the regularization weight was set to 0.005 in the plain setting, to 0.001 in the gen and spe settings, and to 0.05 in the onto setting.
5 https://github.com/snap-stanford/KGReasoning
6 https://tinyurl.com/66hbhppc
7 Available at https://github.com/pminervini/KGReasoning/
1. What is the main contribution of the paper in the field of query answering systems?
2. What are the strengths of the proposed approach, particularly in its ability to integrate inductive and deductive reasoning capabilities?
3. What are the weaknesses of the paper, especially regarding the experimental results of the modified O2B method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an ontology-mediated neuro-symbolic query answering system that uses popular embeddings and the knowledge present in the ontology to bring inductive and deductive reasoning capabilities to query answering. The paper proposes strategies that use ontology axioms to improve embedding training for both query-based and atom-based embedding methods. The authors introduce two new benchmarks on which this can be tested and show positive results on queries that span deductive, inductive and inductive+deductive scenarios. Review Strengths: The paper explores how inductive and deductive reasoning capabilities can be integrated in knowledge graph query answering. The paper is well motivated and the approach is described clearly. The paper proposes ways to generate training data that cover different aspects of deductive reasoning. It describes an ontology-aware training objective that can be easily incorporated into existing embedding methods and demonstrates this on two existing methods (Q2B and O2B). The benchmark creation methodology and the two introduced datasets, based on LUBM and NELL, can be useful to the research community in pushing complex query answering research forward. Weakness: Experimental results on the newly introduced modification O2B (obtained by modifying Q2B) don't seem to be consistent across all scenarios. Any insights around this?
ICLR
Title
Neuro-Symbolic Ontology-Mediated Query Answering

Abstract
Recently, low-dimensional vector space representations of Knowledge Graphs (KGs) have been applied to find answers to logical queries over incomplete KGs. However, the current methods only focus on inductive reasoning, i.e. answering such queries by predicting facts based on patterns learned from the data, and lack the ability of deductive reasoning, the task of computing logical entailments using expert domain knowledge. To address this shortcoming, we investigate how existing embedding models for query answering over incomplete KGs can be adapted to incorporate domain knowledge in the form of ontologies. We propose two novel datasets, based on the LUBM and NELL KGs, as well as various training strategies to integrate domain knowledge into prominent representatives of embedding models for query answering. Our strategies involve (1) different ontology-driven data augmentation techniques and (2) adaptation of the loss function using query-rewriting methods. The achieved improvements in the settings that require both inductive and deductive reasoning are from 20% to 50% in HITS@3.

1 INTRODUCTION

Answering complex logical queries over Knowledge Graphs (KGs) has recently received a lot of attention due to the relevance of this task in various applications such as natural question answering, web search or data analytics. For example, the query Who works for Amazon and has a degree from MIT? over the KG in Figure 1 can be formulated as q(X) ← degreeFrom(X, mit) ∧ worksFor(X, amazon). Answering such a query is very challenging when KGs are incomplete, which is often the case due to their (semi-)automatic construction, and obtaining complete answers typically requires further domain knowledge. For instance, mary is a missing but desired answer of q. Due to the data distribution in the KG, link prediction models might only be able to derive managerAt(mary, amazon). Therefore, in this case, the further domain knowledge that managerAt implies worksFor in the ontology O of Figure 1 would be required to derive worksFor(mary, amazon) and retrieve mary as an answer for q.

Recently, Knowledge Graph Embedding (KGE) techniques (Nickel et al., 2016; Wang et al., 2017) that are able to predict missing facts have been proposed for answering logical queries over incomplete KGs. The existing methods can be broadly divided into two categories: query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021). The former compute continuous query embedding representations and use them for answering queries, while the latter compute answers to a query by identifying the most likely answers to all its atoms using neural link predictors (Nickel et al., 2016), and then aggregating those answers using t-norms. While being promising, such existing embedding-based methods do not account for ontologies, regarded as a KG schema that enriches the KG by describing dependencies between types and/or relations. Exploiting ontologies when querying KGs is beneficial, e.g., for simplifying query formulation and obtaining more complete answers. The task of answering logical queries in the presence of ontologies is referred to as Ontology-Mediated Query Answering (OMQA) (Bienvenu & Ortiz, 2015). On the one hand, the use of ontologies requires deductive reasoning, i.e., inferring new facts by applying ontology rules to existing facts, but ignoring missing true facts.
On the other hand, embedding methods are essentially tailored towards inductive reasoning, i.e. learning from examples: given a number of queries and their answers, they are used to predict answers to other similar queries, but they typically cannot perform ontology reasoning.

[Figure 1: An exemplary KG in which solid edges illustrate existing facts in the KG, while dashed edges indicate missing facts. The rules in O state that (1) managers at companies also work there; (2) the inverse of relation degreeFrom is hasAlumnus; (3) assistant professors are professors; (4) teachers at organizations also work there; (5) the range of the relation teachesAt is University.]

Since large portions of expert knowledge can be conveniently encoded using ontologies, the benefits of coupling ontology reasoning and embedding methods for KG completion are evident, and have been acknowledged (e.g. see Bianchi et al., 2020; Zhang et al., 2020; Gutiérrez-Basulto & Schockaert, 2018; Kulmanov et al., 2019). However, to the best of our knowledge, such coupling has not been studied for OMQA. A natural attempt is to interchangeably complete the KG using ontology reasoning and embedding methods, and then perform query answering on top of the result. This naive procedure comes with a big scalability challenge: in practice, we need to restrict ourselves to computing merely small subsets of likely fact predictions required for answering a given query; thus more sophisticated proposals are required. To this end, we investigate three open questions: (1) How can existing OMQA techniques be adapted to the setting of KGE? (2) How do different data augmentation strategies impact the accuracy of existing embedding models on the OMQA task? and (3) Does the enforcement of ontology axioms in the embedding space via the loss function help to improve inductive and deductive reasoning performance? We answer these questions by making the following contributions:

• We formally define the task of Embedding-Based OMQA (E-OMQA) and empirically show that existing off-the-shelf KGE models applied naively perform poorly on this task.
• We propose novel ontology-driven strategies for sampling training queries as well as a loss function modification to enforce the ontology within the embedding space, and demonstrate the effectiveness of these proposals on popular representatives of query-based and atom-based models.
• Since no previous benchmarks exist for E-OMQA, we design two datasets using LUBM and NELL, which are well-known benchmarks for OMQA and embedding models, respectively.
• Extensive evaluation demonstrates improvements (20% to 50% in HITS@3) in the accuracy of E-OMQA by our methods compared to the baselines, and allows us to obtain and analyze answers to the above questions.

2 PRELIMINARIES

Knowledge Graphs and Ontologies. We assume a signature Σ = 〈E, C, R〉 consisting of countable pairwise disjoint sets E, C, and R of constants (entities), concepts (types), and roles (binary relations), respectively. A knowledge graph G (a.k.a. ABox) is a set of triples, such as (mit, type, University) and (bob, worksFor, mit), formalized using Σ. These triples can also be represented as type(mit, University) and worksFor(bob, mit). An ontology O (a.k.a. TBox), e.g. O in Figure 1, is a set of axioms in Description Logics (Baader et al., 2009) over Σ.
We focus on DL-Lite_R (Artale et al., 2009), which has the following syntax:

A ⊑ A′, A ⊑ ∃p, ∃p ⊑ A, ∃p⁻ ⊑ A, p ⊑ s, p⁻ ⊑ s,

where A, A′ ∈ C, p, s ∈ R, and p⁻ denotes the inverse relation of p. The deductive closure O∞(G) contains all (possibly infinitely many) new facts derived from G using axioms from O (e.g., type(bob, Professor) follows from (3) and type(bob, AProfessor)).

Ontology-Mediated Query Answering. A query atom is an expression of the form p(T₁, T₂), where p ∈ R, and each Tᵢ ∈ V ∪ E is called a term, with V a set of variables disjoint with E, C, and R. A monadic conjunctive query (CQ) q(X) is a First-Order (FO) formula of the form q(X) ← ∃Ȳ. p₁(T̄₁) ∧ ··· ∧ pₙ(T̄ₙ), where each pᵢ(T̄ᵢ) is a query atom, and vars(q) = X ∪ Ȳ denotes the set of variables appearing in q, with X ∉ Ȳ being the answer variable. A monadic Existential Positive FO (EPFO) query is a union of monadic CQs (Dalvi & Suciu, 2007). For a query q(X) and a KG G, a constant a from G is an answer to q(X) if there exists a mapping π : vars(q) → E that maps the body (the right-hand side) of q to a sub-graph of G. We denote by q[G] the answers of q on G. Ontology-Mediated Query Answering (OMQA) concerns answering queries by accounting for both the KG and the accompanying ontology. Given a KG G and an ontology O, an entity a from G is a certain answer of q(X) over (G, O) if a is an answer to q(X) over O∞(G). We use q[G, O] to denote the set of certain answers of q over (G, O). Let q and q′ be two monadic queries over (G, O); then q is contained in q′ w.r.t. O if q[G, O] ⊆ q′[G, O]. We call q a specialization of q′ (written as q′ ⇝_s q), and q′ a generalization of q (written as q ⇝_g q′). Query generalizations and specializations can be obtained by exploiting ontology axioms; such a process (and its result) is referred to as query rewriting.

Example 1. Consider G in Figure 1 and q(X) ← type(X, Professor) ∧ degreeFrom(X, mit). Since mat ∈ q[G], it is a certain answer. Moreover, according to O, AProfessor is a sub-type of Professor and degreeFrom is the inverse of hasAlumnus; thus bob is also a certain answer. Query q′(X) ← type(X, AProfessor) ∧ degreeFrom(X, mit) is a specialization of q, as mat ∉ q′[G, O].

Embedding-Based Query Answering. Recent works on KGEs for answering logical queries can be divided into two categories: query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021) and atom-based (Arakelyan et al., 2021) models. A neural QA model maps entities and relations into a d-dimensional embedding space. It then computes a score of each entity c for being an answer to a given query q via a scoring function φ_q(c) : ℝ^d → [0, 1], where c denotes the embedding vector of c.¹ Using these scoring functions, the final embedding QA function E_G takes as input a query and returns answers to that query. We describe below how this is done for Query2Box and Continuous Query Decomposition (CQD).

In Query2Box, entities and queries are embedded as points and boxes, respectively, in a d-dimensional vector space. A d-dimensional embedding is a function φ that maps c ∈ E ∪ C to c ∈ ℝ^d and a query q to q = (cen_q, off_q) ∈ ℝ^d × ℝ^d_{≥0}, which is used to define a query box as box_q = {v ∈ ℝ^d | cen_q − off_q ⪯ v ⪯ cen_q + off_q}, where ⪯ is the element-wise inequality, cen_q is the center of the box, and off_q is the positive offset of the box, modeling its size. The score for an entity c being an answer to q is computed based on the distance from c to box_q.
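For intuition, here is a minimal PyTorch sketch of a point-to-box distance on which such a score can be based. This is our illustration rather than code from the paper: the outside/inside split and the down-weighting factor follow the Query2Box formulation of Ren et al. (2020), while the function names, the value of alpha, and the margin gamma are assumptions.

```python
import torch

def box_distance(v: torch.Tensor, cen_q: torch.Tensor, off_q: torch.Tensor,
                 alpha: float = 0.2) -> torch.Tensor:
    """L1 distance from entity embeddings v of shape (batch, d) to the box (cen_q, off_q).

    dist_outside is zero iff v lies inside the box; dist_inside measures how far
    the point is from the center once inside and is down-weighted by 0 < alpha < 1.
    """
    q_min, q_max = cen_q - off_q, cen_q + off_q
    dist_outside = (torch.relu(v - q_max) + torch.relu(q_min - v)).sum(dim=-1)
    # Project v onto the box, then take the L1 distance from the center.
    proj = torch.maximum(q_min, torch.minimum(v, q_max))
    dist_inside = (cen_q - proj).abs().sum(dim=-1)
    return dist_outside + alpha * dist_inside

def box_score(v, cen_q, off_q, gamma: float = 24.0):
    """Higher score means the entity is more likely an answer to the query."""
    return torch.sigmoid(gamma - box_distance(v, cen_q, off_q))
```

Under this sketch, answers are rewarded for lying anywhere inside the box rather than exactly at its center, which is what makes a box, rather than a point, the natural representation of a query's answer set.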
A prominent representative of the second category, Continuous Query Decomposition (Arakelyan et al., 2021), reduces the task of answering a complex query to that of answering each of its sub-queries. It relies on neural link predictors for answering atomic sub-queries and aggregates the resulting scores via t-norms.

3 EMBEDDING-BASED ONTOLOGY-MEDIATED QUERY ANSWERING

Inductive and deductive reasoning complement each other; thus combining both yields more complete answers to queries. To target such a combination, we define an embedding-based QA function that can additionally apply ontology rules to answer queries.

Definition 1 (E-OMQA). Let G be a KG, let O be an ontology, and let G_i be an ideal completion of G. An embedding QA function E_G is reliable if for any query q and entity a we have that a ∈ E_G(q) iff a ∈ q[G_i]. Moreover, E_G is ontology-aware iff a ∈ q[G_i, O]. The problem of embedding-based OMQA is to obtain an embedding QA function that is both reliable and ontology-aware.

Note that q[G_i, O] subsumes both q[G_i], the answers requiring inductive reasoning, and q[G, O], the answers computed via deductive reasoning. We proceed to present several methods for E-OMQA.

Query Rewriting over Pre-trained Models. In the traditional OMQA setting, each query q can be evaluated by first rewriting q into a set of FO-queries q_O, and then evaluating each query in q_O over G alone. In our case, this amounts to constructing an embedding QA function E_G aware of G alone, and using it to compute the answers to all queries in q_O rather than only to the query q.

Example 2. For G, O in Figure 1 and queries q(X) ← degreeFrom(X, mit) ∧ worksFor(X, amazon) and q′(X) ← degreeFrom(X, mit) ∧ managerAt(X, amazon), q_O contains q and q′ among others, and to approximate q[G_i, O] we take the E_G-based answers of all queries in q_O.

Ontology-Aware Models. An alternative to query rewriting is to develop an embedding QA function that accounts for the axioms in O. To the best of our knowledge, there are no KGE models that directly address the problem of E-OMQA. Therefore, we suggest the following two options: (1) train existing embedding models for logical QA on the data derived from O∞(G) instead of G; (2) develop an ontology-aware embedding model that is trained on G, but has special terms in the training objective structurally enforcing O. While the proposed approaches can be realized on top of any embedding model for logical QA, in this work we verify their effectiveness on two prominent recent embedding models: Query2Box and CQD. Regarding (1), in Section 3.1 we present several methods for effective ontology-driven training. As for (2), building on Query2Box, in Section 3.2 we develop an ontology-aware embedding model. Finally, we use the query-rewriting method over embeddings as a baseline in Section 4.

¹Bold small letters denote vector representations.

3.1 ONTOLOGY-DRIVEN DATA SAMPLING

Let Q_G be the set of all possible EPFO monadic queries that can be formed using the signature Σ. To answer any arbitrary such query, existing embedding models are trained on a set of sampled queries of certain shapes and their answers over the KG G. For instance, queries in (Ren et al., 2020) have multiple atoms, while in (Arakelyan et al., 2021) they are atomic (e.g., q(X) ← worksFor(X, mit)). Usually, the set of training queries does not take the schema into account.
For example, in (Ren et al., 2020), the queries are randomly selected from Q_G and used for training the model along with their answers over G as positive examples and randomly generated non-answers as negative examples. However, if an ontology is present along with the KG, this procedure is not guaranteed to capture the ontology axioms, and using all possible queries from Q_G may be infeasible in practice. In the following, we discuss various options for sampling queries to train ontology-aware KGE models.

Certain Answer and Query Rewriting-Based Sampling. The first natural approach to query sampling is to select queries along with their certain answers instead of the standard answers. For ontology languages such as those in the DL-Lite family (Artale et al., 2009), computing certain answers can be done efficiently. An example of this training case is to randomly sample the query q(Y) ← ∃X. hasAlumnus(mit, X) ∧ worksFor(X, Y) and, given (G, O) in Figure 1, use it during training along with all its certain answers mit, yale. To account for the ontology, we can randomly sample queries over the KG and also add all of their generalizations and specializations obtained using the rules in Table 1. To rewrite a query, we select an atom and apply an ontology axiom; e.g., the first rule (R1) applies a concept inclusion axiom, while (R6) applies a role inclusion. The specializations of a given query q (denoted Spec(q)) incorporate specific information regarding the answers of q, while the generalizations of q (i.e. Gen(q)) incorporate additional related entities.

Example 3. Consider q₁(X) ← ∃Y. type(X, University) and q₂(X) ← ∃Y, Z. teachesAt(Z, X). Using R2 in Table 1 and (5) in Figure 1 we get q₁ ⇝_s q₂. Similarly, in Example 2, q′ ⇝_g q using R6 and (1).

In general, there are exponentially many rewritings; thus, to keep the training size reasonable, we fix a rewriting depth, up to which the respective training queries are generated, via a dedicated parameter.

Strategic Ontology-Based Sampling. While adding generalizations and specializations of randomly selected queries should capture some parts of the ontological background knowledge, many relevant queries might still be missed. For example, if O also contains ∃worksFor⁻ ⊑ Organization, queries such as q(X) ← ∃Y. managerAt(Y, X) ∧ type(X, Organization) are likely to be disregarded during training. Therefore, another training approach that we propose is to leverage the ontology to strategically generate the training queries. For that, we first formalize the set of target queries by means of a query template graph, i.e., a directed acyclic graph (DAG) (N, E), where N is a set of nodes and E ⊆ N × N is a set of directed edges. Such a DAG captures the shape of each query. The set of target queries is then obtained by applying a labeling function that assigns symbols in Σ to nodes and edges.

Definition 2 (Query Shape). A query shape S is a tuple (N, E, n) such that (N, E) is a DAG and n ∈ N is the distinguished node of S (i.e., the node corresponding to the answer variable). For a given set of relations and constants in Σ, a labeling function f : N ∪ E → Σ ∪ V maps each node to either a variable or an entity and each edge to a relation symbol in Σ.

Our goal is to exploit the ontology to label query shapes so as to create semantically meaningful queries. Towards that, let ⊑* be the reflexive and transitive closure of ⊑. Then, for a given relation p:

- inv(p) = {p′ | p ⊑ p′⁻ ∈ O},
- dom(p) = {A | ∃p′ ⊑ A′ ∈ O s.t. p ⊑* p′, A ⊑* A′ or A′ ⊑* A},
- range(p) = {A | ∃p′⁻ ⊑ A′ ∈ O s.t. p ⊑* p′, A ⊑* A′ or A′ ⊑* A},
- follows(p) = {p′ | range(p) ∩ dom(p′) ≠ ∅},
- inter_r(p) = {p′ | range(p) ∩ range(p′) ≠ ∅, or p₁ ∈ inv(p), p₂ ∈ inv(p′) and dom(p₁) ∩ dom(p₂) ≠ ∅},
- inter_d(p) = {p′ | dom(p) ∩ dom(p′) ≠ ∅, or p₁ ∈ inv(p), p₂ ∈ inv(p′) and range(p₁) ∩ range(p₂) ≠ ∅}.

Intuitively, for a given relation p, the set inv(p) contains all inverse relations of p, dom(p) contains all domain types for p, range(p) all range types for p, follows(p) stores all relations p′ which can follow p, and inter_r(p), inter_d(p) contain, respectively, all relations p′ which can intersect p in range and domain positions. Then, for each shape we label nodes and edges to create queries that are valid w.r.t. O, as illustrated in Figure 2. Note that this query sampling process uses only the ontology and is thus data independent. However, if the ontology does not capture additional data patterns, we can proceed in a bottom-up fashion: we randomly take some labeled query shapes which produce answers, and construct their generalizations as before.

3.2 AN ONTOLOGY-AWARE TRAINING OBJECTIVE

In this section, we present our novel training objective function employed by Query2Box. Recall that when embedding a query, the Query2Box model defines a box in an embedding space, s.t. the answer entities of the given query are mapped to points located inside the box. Note that for every ontological axiom, both its left- and right-hand side can be turned into queries. We observe that when embedding those queries as boxes, ontological axioms can be naturally injected into the model if, in the vector space, the inclusion of the boxes corresponding to the respective queries is ensured.

Example 4. In Figure 3, the entities and relations are embedded into the vector space as points and projection operators, respectively. The embedding of q(Y) ← ∃X. hasAlumnus(mit, X) ∧ worksFor(X, Y) is represented by the larger grey box, obtained by applying the projection hasAlumnus to the embedding of entity mit, followed by the projection on worksFor. To enforce teachesAt ⊑ worksFor, we ensure that the box corresponding to q′(Y) ← ∃X. hasAlumnus(mit, X) ∧ teachesAt(X, Y) is contained in the box corresponding to q.

The goal is to learn the embedding of queries such that the distance between the box corresponding to the query and its answers is minimized, while the distance to this box from negative samples is maximized. Similarly to Ren et al. (2020), we define the distance between q ∈ ℝ^d × ℝ^d_{≥0} and v ∈ ℝ^d as d(q, v) = ‖cen_q − v‖₁, namely the L1 distance from the entity v to the center of the box. Using the sigmoid function we transform the distance into the (0, 1) interval, that is, p(v | q) = σ(−(d(q, v) − γ)), where γ > 0 is a margin; this denotes the probability of v ∈ q[O, G_i]. For a query q, let Gen(q) = {q₁, …, qₙ} be the set of all generalizations of q based on O. Given a train query q and its certain answer v ∈ q[G, O], we aim at maximizing ∏_{i=1}^{n} p(v | qᵢ)^{βᵢ}, where βᵢ ≥ 0 is a weighting parameter for all i = 1, …, n. This is achieved by minimizing the negative log-likelihood:²

−log(∏_{i=1}^{n} p(v | qᵢ)^{βᵢ}) = −∑_{i=1}^{n} βᵢ log(p(v | qᵢ)).

By exploiting the fact that σ(x) = 1 − σ(−x), for any v′ⱼ ∉ q[G, O] we have that 1 − p(v′ⱼ | q) = σ(d(q, v′ⱼ) − γ). Our goal is to ensure that if q′ is a generalization of a given train query q w.r.t. O, then the box of q′ contains the box of q.
In other words, if a is an answer to the query q, then the distance should be minimized not only between a and q, but also between a and q′, as well as between a and all other generalizations of q. The following training objective reflects our goal:

L = −∑_{i=1}^{n} βᵢ log σ(γ − d(v, qᵢ)) − ∑_{j=1}^{k} (1/k) log σ(d(v′ⱼ, q) − γ),

where v′ⱼ ∉ q[G, O] is a random entity for all j = 1, …, k, obtained via negative sampling. In our experiments, we use βᵢ = |Gen(q)|⁻¹ = 1/n.

Example 5. Consider q(Y) ← ∃X. hasAlumnus(mit, X) ∧ type(X, AProfessor) ∧ teachesAt(X, Y). We have Gen(q) = {q₁, q₂, q₃}, where q₁ is obtained from q by substituting teachesAt with worksFor, while q₂ is q with type(X, Professor) instead of type(X, AProfessor). In q₃ the first, second and third atoms are, respectively, the same as in q, q₁ and q₂. It holds that q[G, O] = {yale}; hence our training objective is to minimize the distance between yale (the embedding of yale) and q, as well as the distance between yale and the boxes of q₁, q₂ and q₃ (denoted by q₁, q₂ and q₃).

Note that, conceptually, our training data sampling techniques and the loss function modifications are flexible in terms of the Description Logic in which the ontology is encoded. The only restriction is the existence of efficient query rewriting algorithms for this DL. In this work, we focus on DL-Lite_R, since the majority of available ontologies belong to this language.

4 EVALUATION

In this section, we evaluate the proposed training strategies on two recent embedding models for QA: the Query2Box model (Q2B, Ren et al., 2020) and Continuous Query Decomposition (CQD, Arakelyan et al., 2021). We also measure the effectiveness of the newly introduced training objective function of the Q2B model (called O2B). All models are evaluated in different settings to measure their ability to perform inductive reasoning, deductive reasoning, and their combination.³

4.1 EXPERIMENTAL SETUP

We have configured both the Q2B and O2B systems as follows: we set the size of the embedding dimension to 400, and trained the models for 15×10⁴ steps using Adam (Kingma & Ba, 2015) with an initial learning rate of 10⁻⁴ and a batch size of 512. We evaluated the models periodically and reported the test results of the models with the best performance on the validation dataset. For CQD we used the following configuration: ComplEx-N3 (Lacroix et al., 2018) as the underlying neural link predictor, an embedding size of 1000, and a regularisation weight selected on the validation set by searching in {10⁻³, 5×10⁻³, …, 10⁻¹}.

²The log is strictly monotonically increasing; thus, it will not change the maximization. It only changes the product to a summation. During training we consider a minimization, which motivates the negative sign.
³Code and data are available at https://tinyurl.com/66hbhppc.

Query and Answers Sampling. We use the same types of queries (corresponding to directed acyclic graphs with entities as the source nodes, also known as anchors) as Ren et al. (2020) (see Figure 5 in the Appendix). We consider each input KG G to be the ideal completion (i.e. G_i) and then partition it into G_valid for validation and G_train for training by discarding 10% of edges at each step; this yields G_train ⊊ G_valid ⊊ G. We then create several training sets of queries according to our ontology-aware data sampling strategies from Section 3.1.
More specifically, these include:

plain: the training queries are randomly sampled based on the signature of G_train, with their plain answers, i.e. over G_train.
gen: queries from plain augmented with their ontology-based generalizations;⁴ all answers are certain, i.e. over O∞(G_train).
spe: queries from gen augmented with their ontology-based specializations; all answers are certain answers as well.
onto: queries constructed relying on the ontology axioms as introduced in Section 3.1, for which we randomly choose a percentage of valid entities as anchors; all answers are certain.

⁴This setting is similar to random sampling over O∞(G_train), but unlike the deductive closure, our procedure is guaranteed to terminate. We used a rewriting depth of up to 10.

Following Ren & Leskovec (2020), the training query shapes are the first five ones in Figure 5 (1p–3i); non-compliant specializations and generalizations are discarded. Q2B and O2B are trained on all five query shapes, while CQD is trained only on 1p queries (Arakelyan et al., 2021).

Evaluation Procedure. For each trained model we measure its performance using the standard metric HITS at K for K=3 (HITS@3), which indicates how frequently the correct answer is ranked among the top-3 results (the higher, the better). We use this metric for measuring the reliability and ontology-awareness of the resulting models (as in Definition 1):

Inductive case (I). Evaluating the inductive reasoning ability (the standard test case): is the model able to predict missing answers to queries over the ideal completion G_i?
Deductive case (D). Evaluating the deductive reasoning ability: is the model able to predict answers that can be inferred from the known triples in G_train using ontology axioms?
Inductive + Deductive case (I+D). The combination of I and D: is the model able to predict missing answers that are inferred from the ideal completion G_i using axioms from O?

For I, we randomly generate validation and test queries over G_valid and the input G, in such a way that for each validation query q we have q[G_train] ⊊ q[G_valid], and for each test query q we have q[G_valid] ⊊ q[G]. For D, we randomly generate evaluation queries over O∞(G_train) s.t. they are not trivially answered over G_train. Moreover, each validation query is unseen during training, and each test query is unseen during training and validation. For I+D, we proceed as for I, but use O∞(G_valid) and O∞(G) to sample validation and test queries and their answers. In each test case all shapes in Figure 5 are sampled, and we measure accuracy based on so-called hard answers, which cannot be trivially retrieved from G_train and require the prediction of missing edges and/or the application of some ontology axioms. A hard answer for case I is an answer in q[G] \ q[G_valid], for D it is an answer in q[O∞(G_train)] \ q[G_train], while for I+D it is an answer in q[O∞(G)] \ q[O∞(G_valid)].

Models and Datasets. We consider Q2B, O2B and CQD trained in each described setting, i.e., M_x, where M ∈ {Q2B, O2B, CQD} and x ∈ {plain, gen, spe, onto}. Additionally, we consider the use of the query-rewriting method on top of each model pre-trained using the plain strategy, denoted by M^rew_plain; i.e., Q2B_plain, Q2B^rew_plain and CQD_plain, CQD^rew_plain, respectively, are used as baselines.
We evaluate the proposed training methods, as well as the novel training objective, on two datasets: NELL (Carlson et al., 2010), a general-purpose real-world KG, and LUBM (Guo et al., 2005), a domain-specific synthetic dataset describing the university domain. We selected these datasets as they are among the few large KGs that have ontologies (see the Appendix for statistics).

4.2 EVALUATION RESULTS

For the standard test case I, the baseline Q2B_plain performs best on LUBM, while CQD_plain outperforms the other models and configurations on NELL. This is not surprising: ontologies are not effective when coping with missing edges and facts in a KG beyond those that they can deductively infer. In fact, if, statistically, the patterns reflected by ontologies do not hold in the data, ontology-aware training strategies might worsen the prediction quality. The second observation is that query rewriting over embedding models only slightly improves the prediction accuracy: Q2B^rew_plain, O2B^rew_plain and CQD^rew_plain result in at most a 10% enhancement on test cases D and I+D over Q2B_plain, O2B_plain and CQD_plain, respectively. These limited improvements are likely due to the incompleteness of the rewriting procedure, caused by the restriction on the queries supported by the models. The results on I and on query rewriting over embeddings are in Appendix E.1. Next, we discuss our main observations for the other, more interesting test cases D and I+D. The results are reported in Table 2 and visually illustrated in Figure 4. Overall, the effectiveness of the proposed solutions is evident: for I+D on LUBM the improvements are of almost 50% for Query2Box and 54% for CQD, while on NELL they are of almost 20% for Query2Box and 25% for CQD.

Performance of Training Strategies. The results on certain answer prediction (D and I+D) show that none of the baselines is able to capture the domain knowledge expressed in the ontology, and thus they cannot be used directly for OMQA. Our ontology-aware model O2B_plain outperforms the other models trained on plain, but the incorporation of certain answers and generalizations via our training strategies leads to better results. The proposed training methods from Section 3.1 significantly improve the accuracy on test cases D and I+D. For all models, generating training queries by taking the ontology into account yields improvements. This observation holds already when augmenting the set of random queries with their generalizations, though the addition of specializations does not seem to have a major impact. We observed that randomly selecting training queries, as usually done in the literature, does not result in the most accurate models. On LUBM, for all models, the advantage of the ontology-driven query sampling (i.e. the onto setting) is significant compared to all other settings. Remarkably, for LUBM, CQD_onto trained on less data than CQD_gen or CQD_spe results in higher accuracy. This shows that random sampling is not adequate for OMQA. For NELL, to keep the size of the training set reasonable, we chose a much lower number of anchors, obtaining a significantly lower number of atomic queries (details in the Appendix); however, since Q2B and O2B use information from other query shapes, the onto setting still outperforms all others, unlike for CQD, which relies only on atomic queries.

Evaluation of the Ontology-Aware Training Objective. The model O2B_onto has far better accuracy on cases D and I+D than the Q2B baseline.
This shows that the enforcement of ontology axioms in the embedding space, together with strategic ontology-driven training, provides significant improvement, especially for LUBM, which has a more expressive ontology. Furthermore, the improvement of O2B_plain over Q2B_plain shows that we are able to partially incorporate the domain knowledge into the embedding model without explicitly training on certain answers.

5 RELATED WORK

The task of answering queries that involve multiple atoms using embedding techniques has recently received a lot of attention. The existing proposals can be divided into query-based (Ren et al., 2020; Ren & Leskovec, 2020; Liu et al., 2021; Choudhary et al., 2021; Kotnis et al., 2021; Sun et al., 2020) and atom-based (Arakelyan et al., 2021). Friedman & den Broeck (2020) and Borgwardt et al. (2019) study the relation between the problem of conjunctive QA in the embedding space and over probabilistic databases. Our work is different from the above proposals in that, along with the data, we also rely on ontologies to answer queries. Integration of ontologies into KG embeddings has been studied by e.g. Krompaß et al. (2015); Minervini et al. (2017); Hao et al. (2019); Guo et al. (2016); Rocktäschel et al. (2015); Demeester et al. (2016); Kazemi & Poole (2018); Fatemi et al. (2019); Abboud et al. (2020), but these works do not capture all supported axioms and focus on link prediction rather than QA. The capability of embeddings to model hierarchical data has been explored by Patel et al. (2020); Idahl et al. (2019); Gutiérrez-Basulto & Schockaert (2018). In particular, Idahl et al. (2019) aim at interpreting embeddings by finding concept spaces in node embeddings and linking them to a simple external type hierarchy; this is different from our method for OMQA over embeddings. In Gutiérrez-Basulto & Schockaert (2018), conceptual space representations of known concepts are learned by associating a Gaussian distribution with each concept over a learned vector space. Constructing models for EL ontologies in the embedding space (Kulmanov et al., 2019) is another relevant direction. While Gutiérrez-Basulto & Schockaert (2018) and Kulmanov et al. (2019) are related to our work, they do not touch upon the problem of OMQA. The OMQA problem has been actively studied (see e.g. Schneider & Simkus (2020) for an overview), but the available methods only focus on purely logic-based deductive reasoning, without aiming at simultaneously handling missing links.

6 CONCLUSION

We have presented methods for Ontology-Mediated Query Answering that operate in the embedding space to enable simultaneous inductive and deductive reasoning over incomplete data. To the best of our knowledge, this is the first work on embedding-based OMQA. We have empirically demonstrated that embedding-based methods for QA, applied naively or combined with query-rewriting techniques, are not effective. In our work, we have proposed solutions for making the existing models ontology-aware via ontology-driven training sampling strategies and loss function modifications. The improvements in accuracy on prominent query-based and atom-based models range from 20% to 50% compared to the baselines. We believe that this work opens interesting perspectives for combining OMQA methods, with roots in knowledge representation, and embedding techniques from the machine learning area.

Reproducibility Statement. Code, data, and instructions for reproducing all experiments are available at https://tinyurl.com/66hbhppc.
The hyperparameters are presented in Appendix F.

A DESCRIPTION LOGICS ONTOLOGIES

The syntax of DL-Lite_R ontologies and its translation into rule-based syntax are given in Table 3. The semantics of DL ontologies is defined using FO interpretations (Δ^I, ·^I) consisting of a non-empty domain Δ^I and an interpretation function ·^I, which maps each entity e to an element e^I ∈ Δ^I, each concept name A to a subset A^I ⊆ Δ^I, and each role name r to a binary relation r^I ⊆ Δ^I × Δ^I. The interpretation function ·^I is extended to complex concepts as follows: (∃p)^I = {d ∈ Δ^I | ∃d′. (d, d′) ∈ p^I}, (p⁻)^I = {(d′, d) | (d, d′) ∈ p^I}. An interpretation I satisfies a concept inclusion C ⊑ D iff C^I ⊆ D^I, and I satisfies a role inclusion p ⊑ s iff p^I ⊆ s^I. Finally, I is a model of an ontology O if it satisfies all concept and role inclusions in O. The notion of modelhood also applies to a KG G as follows: an interpretation I satisfies a fact A(c) (i.e., type(c, A)), resp. p(c, c′), if c ∈ A^I, resp. (c, c′) ∈ p^I. Given a KG G and an ontology O, an interpretation I is a model of G w.r.t. O if I satisfies each fact in G and each axiom in O. In the OMQA setting, to answer a given query we need to evaluate it over each such model; in the case of DL-Lite_R ontologies, for computing answers to ontology-mediated queries we can rely on the deductive closure O∞(G), since the model constructed from O∞(G) can be homomorphically mapped to every other model.

B TRACTABILITY OF REWRITING-BASED QUERY GENERATION

For an arbitrary DL-Lite_R ontology O and an arbitrary existential positive FO query q, let Spe(q, O) = {q′ | q ⇝_s* q′} be the set of all specializations of q w.r.t. O, modulo variable renamings, obtained by exhaustively applying the ⇝_s rules from Table 1. Similarly, let Gen(q, O) = {q′ | q ⇝_g* q′} be the set of all generalizations of q w.r.t. O, modulo variable renamings, obtained by exhaustively applying the ⇝_g rules. The following proposition states that our training strategies based on query rewriting are tractable.

Proposition 1. Let O be an arbitrary DL-Lite_R ontology, and let q be an existential positive FO query. Then Spe(q, O) and Gen(q, O) are finite and can be computed in time that is polynomial in the size of O.

Proof (Sketch). The rewriting rules we propose simulate the standard rewriting for DL-Lite_R. Thus, it follows from Lemma 34 in Calvanese et al. (2007) that Spe(q, O) is finite. Moreover, based on Lemma 42 in Calvanese et al. (2007), it follows that there exists a procedure to compute Spe(q, O) in time that is polynomial in the size of O. Since the generalization procedure is similar, only applying the axioms in the other direction, we also conclude that Gen(q, O) is finite and polynomially bounded by O.

C QUERY2BOX GEOMETRIC OPERATIONS

We now describe the geometric operators employed in the Query2Box model.

Projection. Let S ⊆ E ∪ C be a set of entities, and r ∈ R a relation. Intuitively, the projection operator performs graph traversal; e.g., given an embedding of entity e, the projection operator for the relation r provides the box corresponding to the set {e′ ∈ E ∪ C | r(e, e′) ∈ G}. Given the embedding r = (cen_r, off_r) ∈ ℝ^d × ℝ^d_{≥0} for the relation r, we model the projection of a box v = (cen_v, off_v) by applying the element-wise summation v + r = (cen_v + cen_r, off_v + off_r). This relational translation (Bordes et al., 2013) operation corresponds to the translation and enlargement of the box v.

Intersection. Given a set of entity sets {S₁, …, Sₙ}, the intersection operator computes the intersection of these sets.
Recall that each set of entities is represented by a box in the Query2Box model. The intersection w = (cen_w, off_w) of a set of boxes {(cen_v₁, off_v₁), …, (cen_vₙ, off_vₙ)} corresponding to the set {S₁, …, Sₙ} is modeled by applying the following operations:

cen_w = ∑_{i=1}^{n} Φ(NN(cen_v₁), …, NN(cen_vₙ))_i ⊙ cen_vᵢ,
off_w = min(off_v₁, …, off_vₙ) ⊙ σ(Ψ(off_v₁, …, off_vₙ)),

where ⊙ and min denote the element-wise multiplication and minimum, respectively. NN : ℝ^d → ℝ^d is a 2-layer feed-forward neural network having the same dimensionality for the hidden layers as for the input layer. Φ and σ stand for the softmax and sigmoid functions, respectively, applied in a dimension-wise manner. Ψ is a permutation-invariant function composed of a 2-layer feed-forward network followed by an element-wise mean operation and a linear transformation. The center cen_w is calculated as the weighted mean of the box centers cen_v₁, …, cen_vₙ. This geometric intersection provides a smaller box that lies inside the given set of boxes; for more details we refer to Ren et al. (2020).

D DATA AND QUERY STATISTICS

Following the procedure in the literature, each input KG is completed w.r.t. inverse edges. For the considered datasets, in Table 4 we present the number of ontology axioms of various types as well as the number of (materialized) triples, entities and relations. In our experiments, we have considered both complex and simple ontologies. Indeed, LUBM has a rich ontology including domain and range axioms as well as concept and role inclusions, while the NELL KG is accompanied by a simpler ontology containing only (inverse) role inclusions. The size of each training/testing set, as well as the number of queries per shape for each of the considered settings, is presented in Table 5, while each query shape is illustrated in Figure 5. Note that for NELL, the plain data is exactly the one from Ren et al. (2021). We observe that the numbers of 1p queries obtained for the gen and spe settings are identical. This is probably because the set of 1p queries in plain covers all edges in the train KG, which explains the high accuracy of CQD_gen and CQD_spe on the test case D. Moreover, the NELL ontology does not contain interesting axioms that can be leveraged by the ontology-driven query sampling technique, so to obtain onto we had to rely on the patterns from the data alone. Since there are too many queries to choose from, due to the large number of relations, we had to select a smaller number of valid entities as anchors, namely 20–30%. This explains the small number of 1p queries. For the LUBM dataset, we have created the training and testing sets from scratch, and the 1p queries in plain do not contain the entire training KG. The onto set of queries leverages the proposed ontology-driven technique, given that the ontology covers all relations and concepts in the KG and describes how they interact, i.e. the ontology axioms support all the constructed queries; here we chose 50% of valid entities as anchors.

E EXTENDED DISCUSSION OF EXPERIMENTAL RESULTS

In this section, we present more insights into our results on the inductive case I and the performance of the query-rewriting baselines.

E.1 RESULTS ON INDUCTIVE TEST CASE

In Figure 6, we present the average HITS@3 for the inductive test case I. As previously discussed, we see no improvement from the ontology-injection methods when answering queries over incomplete KGs without taking certain answers into account.
Indeed, Q2B_plain outperforms all other models on LUBM, while CQD_plain performs best on NELL for this test case. This behaviour is expected, since ontologies cannot handle missing edges and facts in a KG that are not inferred from the data using ontological reasoning.

E.2 QUERY REWRITING OVER PRE-TRAINED EMBEDDING MODELS

In order to evaluate the target procedure that performs query rewriting over pre-trained embeddings for QA, for each hard answer a we take the best (i.e., minimum) ranking among all rankings generated by all queries in the rewriting of each test query. In other words, we take the minimal distance between the embedding of a and all rewritings of q. Note that for measuring the performance we use the pre-trained models Q2B_plain, CQD_plain and O2B_plain, obtained after 450K training steps. Because the respective models support only particular query shapes, the complete rewriting of each query is not guaranteed. In Table 6, we present the results for this method compared to the plain setting; minor improvements of at most 10% are observed. We have also used our pre-trained O2B_plain model as a possible way to cope with this issue, and indeed it outperforms all other baselines. In fact, on NELL, O2B_plain becomes relatively competitive even compared to the other ontology-aware models that have been trained using more advanced ontology-driven training strategies. However, for the richer ontology that comes with the LUBM KG, the improvements are still not sufficient.

E.3 DATA AUGMENTATION

In Figure 7, we present the performance of each model and the number of training queries needed in each training setting. Naturally, increasing the number of training queries generally leads to better performance, with the exception of the setting spe, whose training data contains all queries from gen but whose performance is comparable or slightly lower. The onto setting boosts the performance of almost all models. In particular, on LUBM, which has a richer ontology, the increase in performance is much higher than in the setting where query generalizations and certain answers are included. It is worth noting that the number of 1p queries is smaller for onto than for gen, yet CQD_onto performs much better than CQD_gen, which demonstrates the effectiveness of our proposed ontology-driven training strategy.

F HYPERPARAMETERS FOR Q2B, O2B AND CQD

For Q2B we have used the code⁵ from Ren & Leskovec (2020). Our extension of this code with the implementation of the novel training objective is available online.⁶ The systems Q2B and O2B have been configured as follows: we set the size of the embedding dimension to 400, and trained the models for 15×10⁴ steps using Adam (Kingma & Ba, 2015) with an initial learning rate of 10⁻⁴ and a batch size of 512. The rest of the parameters were set in the same way as in Ren et al. (2020). We evaluated the models periodically and reported the test results of the models with the best performance on the validation dataset. For CQD, we used the code shared by Arakelyan et al. (2021),⁷ with ComplEx-N3 (Lacroix et al., 2018) as the base model, where the embedding size was set to 1000 and the regularisation weight was selected on the validation set by searching in {10⁻³, 5×10⁻³, …, 10⁻¹}. For LUBM, the regularization weight was set to 0.1 in the gen, spe, and onto settings, and to 0.01 in the plain setting.
For NELL, the regularization weight was set to 0.005 in the plain setting, to 0.001 in the gen and spe settings, and to 0.05 in the onto setting.

⁵ https://github.com/snap-stanford/KGReasoning
⁶ https://tinyurl.com/66hbhppc
⁷ Available at https://github.com/pminervini/KGReasoning/
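As a companion to the objective L defined in Section 3.2 of this paper, the following is a minimal PyTorch sketch of the ontology-aware loss over a training query, the box centers of the queries q₁..qₙ in Gen(q), and k negative samples. It is our illustration under stated assumptions, not the authors' released code: the tensor shapes, the margin value, and the use of the simplified distance d(q, v) = ‖cen_q − v‖₁ (as defined in Section 3.2) are taken as given.

```python
import torch
import torch.nn.functional as F

def ontology_aware_loss(v: torch.Tensor,            # certain answer embedding, shape (d,)
                        gen_centers: torch.Tensor,  # box centers of q_1..q_n in Gen(q), shape (n, d)
                        q_center: torch.Tensor,     # box center of the train query q, shape (d,)
                        negatives: torch.Tensor,    # k negative entity embeddings, shape (k, d)
                        gamma: float = 24.0) -> torch.Tensor:
    """L = - sum_i beta_i * log sigma(gamma - d(v, q_i))
           - sum_j (1/k) * log sigma(d(v'_j, q) - gamma),
    with d(q, v) = ||cen_q - v||_1 and beta_i = 1/n, as in Section 3.2.
    The margin gamma is an assumed value, not taken from the paper."""
    # Positive term: pull the certain answer toward q and all its generalizations.
    d_pos = (gen_centers - v).abs().sum(dim=-1)        # (n,)
    pos = F.logsigmoid(gamma - d_pos).mean()           # beta_i = 1/n -> plain mean
    # Negative term: push sampled non-answers away from the box of q.
    d_neg = (negatives - q_center).abs().sum(dim=-1)   # (k,)
    neg = F.logsigmoid(d_neg - gamma).mean()           # 1/k weighting -> plain mean
    return -pos - neg
```

With β_i = 1/n, the positive term reduces to a mean over the generalizations, which is why a plain .mean() implements the weighting used in the experiments.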
1. What is the main contribution of the paper regarding ontology-mediated query answering?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or questions regarding the experimental setup and results?
4. How does the reviewer assess the novelty and limitations of the paper's content?
5. Are there any minor issues or suggestions for improvement in the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors study the problem of ontology-mediated query answering (OMQA) over knowledge graphs (KGs). The problem definition is as follows: given an ontology, an incomplete KG, and a monadic (positive) conjunctive query (CQ) which does not have a match in the given KG, predict/rank/score the most likely answers to the query. The proposed approach largely builds on the existing work of Query2Box on answering monadic CQs over incomplete KGs, while extending this approach to make the embeddings, and hence their predictions, "ontology-aware".

Review
Strengths: The problem is meaningful, essentially importing the idea of OMQA to the domain of embedding-based approaches: one is interested in query answering over incomplete KGs using an embedding-based approach, while additionally making use of a logical background theory. This is commonly referred to as rule injection in the context of KG embedding models. It is widely studied for, e.g., link prediction: performing link prediction while incorporating a set of given rules.

Weaknesses: While the problem formulation is meaningful, the presented study appears rather shallow both theoretically and empirically. After describing the problem setup, the authors state the following alternatives for solving the task:
1. Train existing embedding models for logical QA on the data derived from the deductive closure of G instead of G;
2. Develop an ontology-aware embedding model that will be trained on G, but will have special terms in the training objective structurally enforcing O.
There is an obvious alternative which is completely missed: consider existing embedding models developed for link prediction which possess rule-injection capacity (i.e., can inject a subset of the rules of O) and train these following (Arakelyan et al., 2021) for QA. The resulting model will be ontology-aware by construction, though disjoint from approaches (1) and (2). For example, BoxE can inject all rules from Fig 5 excluding (5). Since by (Arakelyan et al., 2021) it is possible to turn any link prediction model into a QA model, one would obtain an ontology-aware QA model. It is easy to see that this approach would result in much stronger baselines than the ones provided, and so the presented results cannot be conclusive without such an analysis. Hence, the empirical evaluation is limited. Besides, I had a hard time understanding the experimental setup, and the results are not discussed in detail: in many instances it is not clear to me what to make of these results, and there is generally a lack of insights. This work is very incremental on Query2Box and lacks novelty. The authors only modify the loss function and the query sampling strategy. In my understanding, the main challenge in this approach is to incorporate the rules, and there is a large body of work on this. While support for rules is generally limited, this work does not offer anything new in this respect either. In fact, the whole approach uses Query2Box for model representation, which itself has limitations in incorporating rules. It is possible to come up with examples where Query2Box would fail to inject the intended meaning of the rules, and the current paper does not add anything new in this respect, nor does it include a formal analysis discussing these limitations. Another obvious problem with the current approach is that it follows the Query2Box approach and so is based on query sampling.
(Arakelyan et al., 2021) obtain state-of-the-art results using a simple framework requiring only neural link predictors trained for atomic queries, rather than millions of queries as in Query2Box. I could not find any strong arguments to justify a query sampling approach, which comes with many additional problems/challenges.

Some minor issues:
- A knowledge graph is not the same as an ABox; there are semantic differences. (I understand what the authors mean here, but it is misleading.)
- The presented monadic CQs are actually monadic positive CQs.
- In the Preliminaries, X is a variable and Ȳ is a sequence of variables, so their union is not well-defined.
- In Def 1, the term "ideal completion" is undefined.
- In Def 1, ontology-awareness is not well defined (one direction of the iff is missing)?
ICLR
Title
A Closer Look at Few-shot Classification

Abstract
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the miniImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.

1 INTRODUCTION

Deep learning models have achieved state-of-the-art performance on visual recognition tasks such as image classification. The strong performance, however, heavily relies on training a network with abundant labeled instances with diverse visual variations (e.g., thousands of examples for each new class, even with pre-training on a large-scale dataset with base classes). The human annotation cost as well as the scarcity of data in some classes (e.g., rare species) significantly limit the applicability of current vision systems to learning new visual concepts efficiently. In contrast, the human visual system can recognize new classes with extremely few labeled examples. It is thus of great interest to learn to generalize to new classes with a limited amount of labeled examples for each novel class.

The problem of learning to generalize to unseen classes during training, known as few-shot classification, has attracted considerable attention (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Ravi & Larochelle, 2017; Sung et al., 2018; Garcia & Bruna, 2018; Qi et al., 2018). One promising direction for few-shot classification is the meta-learning paradigm, where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization. Examples include model-initialization based methods (Ravi & Larochelle, 2017; Finn et al., 2017), metric learning methods (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018), and hallucination based methods (Antoniou et al., 2018; Hariharan & Girshick, 2017; Wang et al., 2018). Another line of work (Gidaris & Komodakis, 2018; Qi et al., 2018) also demonstrates promising results by directly predicting the weights of the classifiers for novel classes.

Limitations. While many few-shot classification algorithms have reported improved performance over the state-of-the-art, there are two main challenges that prevent us from making a fair comparison and measuring the actual progress. First, the discrepancy in the implementation details among multiple few-shot learning algorithms obscures the relative performance gains.
The performance of baseline approaches can also be significantly under-estimated (e.g., training without data augmentation). Second, while the current evaluation focuses on recognizing novel classes with limited training examples, these novel classes are sampled from the same dataset. The lack of domain shift between the base and novel classes makes the evaluation scenario unrealistic.

Our work. In this paper, we present a detailed empirical study to shed new light on the few-shot classification problem. First, we conduct consistent comparative experiments to compare several representative few-shot classification methods on common ground. Our results show that using a deep backbone shrinks the performance gap between different methods in the setting of limited domain differences between base and novel classes. Second, by replacing the linear classifier with a distance-based classifier as used in Gidaris & Komodakis (2018); Qi et al. (2018), the baseline method is surprisingly competitive with current state-of-the-art meta-learning algorithms. Third, we introduce a practical evaluation setting where there exists a domain shift between base and novel classes (e.g., sampling base classes from generic object categories and novel classes from fine-grained categories). Our results show that sophisticated few-shot learning algorithms do not provide performance improvements over the baseline under this setting. By making the source code and model implementations with a consistent evaluation setting publicly available, we hope to foster future progress in the field.¹

Our contributions.
1. We provide a unified testbed for several different few-shot classification algorithms for a fair comparison. Our empirical evaluation results reveal that the use of a shallow backbone, as commonly used in existing work, leads to favorable results for methods that explicitly reduce intra-class variation. Increasing the model capacity of the feature backbone reduces the performance gap between different methods when domain differences are limited.
2. We show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with state-of-the-art meta-learning methods on both the mini-ImageNet and CUB datasets.
3. We investigate a practical evaluation setting where base and novel classes are sampled from different domains. We show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning to adapt to domain differences in few-shot learning.

2 RELATED WORK

Given abundant training examples for the base classes, few-shot learning algorithms aim to learn to recognize novel classes with a limited number of labeled examples. Much effort has been devoted to overcoming the data-efficiency issue. In the following, we discuss representative few-shot learning algorithms organized into three main categories: initialization based, metric learning based, and hallucination based methods.

Initialization based methods tackle the few-shot learning problem by "learning to fine-tune". One approach aims to learn a good model initialization (i.e., the parameters of a network) so that the classifiers for novel classes can be learned with a limited number of labeled examples and a small number of gradient update steps (Finn et al., 2017; 2018; Nichol & Schulman, 2018; Rusu et al., 2019). Another line of work focuses on learning an optimizer.
Examples include the LSTM-based meta-learner for replacing the stochastic gradient descent optimizer Ravi & Larochelle (2017) and the weight-update mechanism with an external memory Munkhdalai & Yu (2017). While these initialization based methods are capable of achieving rapid adaptation with a limited number of training examples for novel classes, our experiments show that these methods have difficulty in handling domain shifts between base and novel classes. Distance metric learning based methods address the few-shot classification problem by “learning to compare”. The intuition is that if a model can determine the similarity of two images, it can classify an unseen input image with the labeled instances Koch et al. (2015). To learn a sophisticated comparison model, meta-learning based methods make their prediction conditioned on the distance or metric to a few labeled instances during the training process. 1https://github.com/wyharveychen/CloserLookFewShot Examples of distance metrics include cosine similarity Vinyals et al. (2016), Euclidean distance to class-mean representation Snell et al. (2017), CNN-based relation module Sung et al. (2018), ridge regression Bertinetto et al. (2019), and graph neural network Garcia & Bruna (2018). In this paper, we compare the performance of three distance metric learning methods. Our results show that a simple baseline method with a distance-based classifier (without training over a collection of tasks/episodes as in meta-learning) achieves competitive performance with respect to other sophisticated algorithms. Besides meta-learning methods, both Gidaris & Komodakis (2018) and Qi et al. (2018) develop a similar method to our Baseline++ (described later in Section 3.2). The method in Gidaris & Komodakis (2018) learns a weight generator to predict the novel class classifier using an attention-based mechanism (cosine similarity), and Qi et al. (2018) directly use novel class features as their weights. Our Baseline++ can be viewed as a simplified architecture of these methods. Our focus, however, is to show that simply reducing intra-class variation in a baseline method using the base class data leads to competitive performance. Hallucination based methods directly deal with data deficiency by “learning to augment”. This class of methods learns a generator from data in the base classes and uses the learned generator to hallucinate new novel class data for data augmentation. One type of generator aims at transferring appearance variations exhibited in the base classes. These generators either transfer variance in base class data to novel classes Hariharan & Girshick (2017), or use GAN models Antoniou et al. (2018) to transfer the style. Another type of generator does not explicitly specify what to transfer, but directly integrates the generator into a meta-learning algorithm for improving the classification accuracy Wang et al. (2018). Since hallucination based methods are often combined with other few-shot methods (e.g., with metric learning based methods) and complicate the comparison, we do not include them in our comparative study and leave them for future work. Domain adaptation techniques aim to reduce the domain shifts between source and target domains Pan et al. (2010); Ganin & Lempitsky (2015), as well as novel tasks in a different domain Hsu et al. (2018). Similar to domain adaptation, we also investigate the impact of domain difference on few-shot classification algorithms in Section 4.4.
In contrast to most domain adaptation problems where a large amount of data is available in the target domain (either labeled or unlabeled), our problem setting differs because we only have very few examples in the new domain. Very recently, the method in Dong & Xing (2018) addresses the one-shot novel category domain adaptation problem, where in the testing stage both the domain and the category to classify are changed. Similarly, our work highlights the limitations of existing few-shot classification algorithms in handling domain shift. To put these problem settings in context, we provide a detailed comparison of the setting differences in Appendix A1. 3 OVERVIEW OF FEW-SHOT CLASSIFICATION ALGORITHMS In this section, we first outline the details of the baseline model (Section 3.1) and its variant (Section 3.2), followed by describing representative meta-learning algorithms (Section 3.3) studied in our experiments. Given abundant base class labeled data Xb and a small amount of novel class labeled data Xn, the goal of few-shot classification algorithms is to train classifiers for novel classes (unseen during training) with few labeled examples. 3.1 BASELINE Our baseline model follows the standard transfer learning procedure of network pre-training and fine-tuning. Figure 1 illustrates the overall procedure. Training stage. We train a feature extractor fθ (parametrized by the network parameters θ) and the classifier C(·|Wb) (parametrized by the weight matrix Wb ∈ Rd×c) from scratch by minimizing a standard cross-entropy classification loss Lpred using the training examples in the base classes xi ∈ Xb. Here, we denote the dimension of the encoded feature as d and the number of output classes as c. The classifier C(·|Wb) consists of a linear layer Wb⊤fθ(xi) followed by a softmax function σ. Fine-tuning stage. To adapt the model to recognize novel classes in the fine-tuning stage, we fix the pre-trained network parameters θ in our feature extractor fθ and train a new classifier C(·|Wn) (parametrized by the weight matrix Wn) by minimizing Lpred using the few labeled examples (i.e., the support set) in the novel classes Xn. 3.2 BASELINE++ In addition to the baseline model, we also implement a variant of the baseline model, denoted as Baseline++, which explicitly reduces intra-class variation among features during training. The importance of reducing intra-class variations of features has been highlighted in deep metric learning Hu et al. (2015) and few-shot classification methods Gidaris & Komodakis (2018). The training procedure of Baseline++ is the same as the original Baseline model except for the classifier design. As shown in Figure 1, we still have a weight matrix Wb ∈ Rd×c of the classifier in the training stage and a Wn in the fine-tuning stage in Baseline++. The classifier design, however, is different from the linear classifier used in the Baseline. Take the weight matrix Wb as an example. We can write the weight matrix Wb as [w1, w2, ..., wc], where each class has a d-dimensional weight vector. In the training stage, for an input feature fθ(xi) where xi ∈ Xb, we compute its cosine similarity to each weight vector [w1, · · · , wc] and obtain the similarity scores [si,1, si,2, · · · , si,c] for all classes, where si,j = fθ(xi)⊤wj / (‖fθ(xi)‖ ‖wj‖). We can then obtain the prediction probability for each class by normalizing these similarity scores with a softmax function.
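To make the classifier design concrete, below is a minimal PyTorch-style sketch of such a cosine-similarity classifier. The module name and initialization are our own illustration; only the constant scaling factor of 2 applied before the softmax is taken from our implementation details (Section 4.1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Distance-based classifier: logits are scaled cosine similarities
    between the input feature f(x_i) and one learned weight vector per class."""
    def __init__(self, feat_dim, num_classes, scale=2.0):
        super().__init__()
        # W_b = [w_1, ..., w_c], one d-dimensional weight vector per class.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # constant scalar applied before the softmax

    def forward(self, features):
        # s_{i,j} = f(x_i)^T w_j / (||f(x_i)|| ||w_j||), then scaled
        f = F.normalize(features, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * f @ w.t()

# Usage with a feature extractor f_theta:
#   logits = CosineClassifier(512, 64)(f_theta(x))
#   loss = F.cross_entropy(logits, y)  # softmax + cross-entropy
```

Replacing this module with a plain linear layer recovers the Baseline classifier.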
Here, the classifier makes a prediction based on the cosine distance between the input feature and the learned weight vectors representing each class. Consequently, training the model with this distance-based classifier explicitly reduces intra-class variation. Intuitively, the learned weight vectors [w1, · · · , wc] can be interpreted as prototypes (similar to Snell et al. (2017); Vinyals et al. (2016)) for each class, and the classification is based on the distance of the input feature to these learned prototypes. The softmax function prevents the learned weight vectors from collapsing to zero. We clarify that the network design in Baseline++ is not our contribution. The concept of distance-based classification has been extensively studied in Mensink et al. (2012) and recently has been revisited in the few-shot classification setting Gidaris & Komodakis (2018); Qi et al. (2018). 3.3 META-LEARNING ALGORITHMS Here we describe the formulations of the meta-learning methods used in our study. We consider three distance metric learning based methods (MatchingNet Vinyals et al. (2016), ProtoNet Snell et al. (2017), and RelationNet Sung et al. (2018)) and one initialization based method (MAML Finn et al. (2017)). While meta-learning is not clearly defined, Vinyals et al. (2016) considers a few-shot classification method as meta-learning if the prediction is conditioned on a small support set S, because it makes the training procedure explicitly learn to learn from a given small support set. As shown in Figure 2, meta-learning algorithms consist of a meta-training and a meta-testing stage. In the meta-training stage, the algorithm first randomly selects N classes, and samples a small base support set Sb and a base query set Qb from data samples within these classes. The objective is to train a classification model M that minimizes the N-way prediction loss LN-way of the samples in the query set Qb. Here, the classifier M is conditioned on the provided support set Sb. By making predictions conditioned on the given support set, a meta-learning method can learn how to learn from limited labeled data through training from a collection of tasks (episodes). In the meta-testing stage, all novel class data Xn are considered as the support set for novel classes Sn, and the classification model M can be adapted to predict novel classes with the new support set Sn. Different meta-learning methods differ in their strategies to make predictions conditioned on the support set (see Figure 2). For both MatchingNet Vinyals et al. (2016) and ProtoNet Snell et al. (2017), the prediction of the examples in a query set Q is based on comparing the distance between the query feature and the support feature from each class. MatchingNet compares the cosine distance between the query feature and each support feature, and computes the average cosine distance for each class, while ProtoNet compares the Euclidean distance between query features and the class mean of support features. RelationNet Sung et al. (2018) shares a similar idea, but it replaces distance with a learnable relation module. The MAML method Finn et al. (2017) is an initialization based meta-learning algorithm, where each support set is used to adapt the initial model parameters using a few gradient updates. As different support sets induce different gradient updates, the adapted model is conditioned on the support set (see the sketch below).
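To illustrate this conditioning, here is a hedged sketch of one MAML meta-training episode. It is our own simplification, not the authors' released code; it assumes PyTorch's torch.func.functional_call, and the inner-loop hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def maml_episode(model, support_x, support_y, query_x, query_y,
                 inner_lr=0.01, inner_steps=5, first_order=True):
    # Start the inner loop from the shared initial parameters.
    params = dict(model.named_parameters())
    for _ in range(inner_steps):
        logits = functional_call(model, params, (support_x,))
        inner_loss = F.cross_entropy(logits, support_y)
        # first_order=True drops second-order terms (the approximation used
        # in our experiments); create_graph=True gives full second-order MAML.
        grads = torch.autograd.grad(inner_loss, list(params.values()),
                                    create_graph=not first_order)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}
    # Query loss through the adapted parameters; backpropagating this loss
    # updates the initial parameters in the outer loop.
    logits = functional_call(model, params, (query_x,))
    return F.cross_entropy(logits, query_y)
```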
Note that when the query set instances are predicted by the adapted model in the meta-training stage, the loss of the query set is used to update the initial model, not the adapted model. 4 EXPERIMENTAL RESULTS 4.1 EXPERIMENTAL SETUP Datasets and scenarios. We address the few-shot classification problem under three scenarios: 1) generic object recognition, 2) fine-grained image classification, and 3) cross-domain adaptation. For object recognition, we use the mini-ImageNet dataset commonly used in evaluating few-shot classification algorithms. The mini-ImageNet dataset consists of a subset of 100 classes from the ImageNet dataset Deng et al. (2009) and contains 600 images for each class. The dataset was first proposed by Vinyals et al. (2016), but recent works use the follow-up setting provided by Ravi & Larochelle (2017), which is composed of randomly selected 64 base, 16 validation, and 20 novel classes. For fine-grained classification, we use the CUB-200-2011 dataset Wah et al. (2011) (referred to as CUB hereafter). The CUB dataset contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), we randomly split the dataset into 100 base, 50 validation, and 50 novel classes. For the cross-domain scenario (mini-ImageNet→CUB), we use mini-ImageNet as our base classes and the 50 validation and 50 novel classes from CUB. Evaluating the cross-domain scenario allows us to understand the effects of domain shifts on existing few-shot classification approaches. Implementation details. In the training stage for the Baseline and Baseline++ methods, we train for 400 epochs with a batch size of 16. In the meta-training stage for meta-learning methods, we train 60,000 episodes for 1-shot and 40,000 episodes for 5-shot tasks. We use the validation set to select the training episodes with the best accuracy.2 In each episode, we sample N classes to form N-way classification (N is 5 in both meta-training and meta-testing stages unless otherwise mentioned). For each class, we pick k labeled instances as our support set and 16 instances for the query set for a k-shot task. In the fine-tuning or meta-testing stage for all methods, we average the results over 600 experiments. In each experiment, we randomly sample 5 classes from the novel classes, and in each class, we also pick k instances for the support set and 16 for the query set. For Baseline and Baseline++, we use the entire support set to train a new classifier for 100 iterations with a batch size of 4. For meta-learning methods, we obtain the classification model conditioned on the support set as in Section 3.3. All methods are trained from scratch and use the Adam optimizer with an initial learning rate of 10−3. We apply standard data augmentation including random crop, left-right flip, and color jitter in both the training and meta-training stages. Some implementation details have been adjusted individually for each method. For Baseline++, we multiply the cosine similarity by a constant scalar 2 to adjust the original value range [-1,1] to be more appropriate for the subsequent softmax layer. For MatchingNet, we use an FCE classification layer without fine-tuning in all experiments and also multiply the cosine similarity by a constant scalar. For RelationNet, we replace the L2 norm with a softmax layer to expedite training. For MAML, we use a first-order approximation in the gradient for memory efficiency. The approximation has been shown in the original paper and in our appendix to have nearly identical performance to the full version; we choose it for its efficiency.
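To make the episodic protocol above concrete, the following is a minimal sketch of how one N-way k-shot episode can be drawn. The data layout (a dict mapping class id to an image tensor) and helper name are assumptions for illustration, not our released implementation.

```python
import random
import torch

def sample_episode(images_by_class, n_way=5, k_shot=5, n_query=16):
    """Draw one episode: k support and 16 query images for each of N classes."""
    classes = random.sample(list(images_by_class), n_way)
    support, query, support_y, query_y = [], [], [], []
    for label, cls in enumerate(classes):
        imgs = images_by_class[cls]  # tensor of shape [num_images, C, H, W]
        idx = torch.randperm(len(imgs))[:k_shot + n_query]
        support.append(imgs[idx[:k_shot]])
        query.append(imgs[idx[k_shot:]])
        support_y += [label] * k_shot
        query_y += [label] * n_query
    return (torch.cat(support), torch.tensor(support_y),
            torch.cat(query), torch.tensor(query_y))
```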
4.2 EVALUATION USING THE STANDARD SETTING We now conduct experiments on the most common setting in few-shot classification, 1-shot and 5-shot classification, i.e., 1 or 5 labeled instances are available from each novel class. We use a four-layer convolution backbone (Conv-4) with an input size of 84×84 as in Snell et al. (2017) and perform 5-way classification for only novel classes during the fine-tuning or meta-testing stage. To validate the correctness of our implementation, we first compare our results to the reported numbers for the mini-ImageNet dataset in Table 1. Note that we have a ProtoNet#, as we use 5-way classification in the meta-training and meta-testing stages for all meta-learning methods as mentioned in Section 4.1; however, the officially reported results from ProtoNet use 30-way for one shot and 20-way for five shot in the meta-training stage in spite of using 5-way in the meta-testing stage. We report this result for completeness. From Table 1, we can observe that none of our re-implementations of the meta-learning methods falls more than 2% behind the reported performance. These minor differences can be attributed to our modifications of some implementation details to ensure a fair comparison among all methods, such as using the same optimizer for all methods. 2For example, the exact episodes for experiments on mini-ImageNet in the 5-shot setting with a four-layer ConvNet are: ProtoNet: 24,600; MatchingNet: 35,300; RelationNet: 37,100; MAML: 36,700. 3Reported results are from Ravi & Larochelle (2017). Moreover, our implementation of existing work also improves the performance of some of the methods. For example, our results show that the Baseline approach under the 5-shot setting can be improved by a large margin since previous implementations of the Baseline do not include data augmentation in their training stage, which leads to over-fitting. While our Baseline∗ is not as good as reported in 1-shot, our Baseline with augmentation still improves on it, and could be even higher if our reproduced Baseline∗ matched the reported statistics. In either case, the performance of the Baseline method is severely underestimated. We also improve the results of MatchingNet by adjusting the input score to the softmax layer to a more appropriate range as stated in Section 4.1. On the other hand, while ProtoNet# is not as good as ProtoNet, as mentioned in the original paper, a more challenging setting in the meta-training stage leads to better accuracy. We choose to use a consistent 5-way classification setting in subsequent experiments to have a fair comparison with other methods. This issue can be resolved by using a deeper backbone as shown in Section 4.3. After validating our re-implementation, we now report the accuracy in Table 2. Besides additionally reporting results on the CUB dataset, we also compare Baseline++ to other methods. Here, we find that Baseline++ improves the Baseline by a large margin and becomes competitive even when compared with other meta-learning methods. The results demonstrate that reducing intra-class variation is an important factor in the current few-shot classification problem setting. However, note that our current setting only uses a 4-layer backbone, while a deeper backbone can inherently reduce intra-class variation.
Thus, we conduct experiments to investigate the effects of backbone depth in the next section. 4.3 EFFECT OF INCREASING THE NETWORK DEPTH In this section, we change the depth of the feature backbone to reduce intra-class variation for all methods. See Appendix A6 for statistics on how network depth correlates with intra-class variation. Starting from Conv-4, we gradually increase the feature backbone to Conv-6, ResNet-10, 18, and 34, where Conv-6 has two additional convolution blocks without pooling after Conv-4. ResNet-18 and 34 are the same as described in He et al. (2016) with an input size of 224×224, while ResNet-10 is a simplified version of ResNet-18 where only one residual building block is used in each layer. The statistics of this experiment would also be helpful to other works to make a fair comparison under different feature backbones. Results on the CUB dataset show a clearer tendency in Figure 3. As the backbone gets deeper, the gap among different methods drastically reduces. Another observation is how ProtoNet improves rapidly as the backbone gets deeper. While using a consistent 5-way classification as discussed in Section 4.2 degrades the accuracy of ProtoNet with Conv-4, it works well with a deeper backbone. Thus, the two observations above demonstrate that in the CUB dataset, the gap among existing methods would be reduced if their intra-class variation is reduced by a deeper backbone. However, the result on mini-ImageNet in Figure 3 is much more complicated. In the 5-shot setting, both Baseline and Baseline++ achieve good performance with a deeper backbone, but some meta-learning methods become worse relative to them. Thus, other than intra-class variation, we can assume that the dataset is also important in few-shot classification. One difference between CUB and mini-ImageNet is the domain difference between their base and novel classes, since classes in mini-ImageNet have a larger divergence than those in CUB in the WordNet hierarchy Miller (1995). To better understand the effect, below we discuss how domain differences between base and novel classes impact few-shot classification results. 4.4 EFFECT OF DOMAIN DIFFERENCES BETWEEN BASE AND NOVEL CLASSES To further dig into the issue of domain difference, we design scenarios that provide such domain shifts. Besides the fine-grained classification and object recognition scenarios, we propose a new cross-domain scenario: mini-ImageNet→CUB, as mentioned in Section 4.1. We believe that this is a practical scenario since collecting images from a general class may be relatively easy (e.g., due to increased availability) but collecting images from fine-grained classes might be more difficult. We conduct the experiments with a ResNet-18 feature backbone. As shown in Table 3, the Baseline outperforms all meta-learning methods under this scenario. While meta-learning methods learn to learn from the support set during the meta-training stage, they are not able to adapt to novel classes that are too different, since all of the base support sets are within the same dataset. A similar concept is also mentioned in Vinyals et al. (2016). In contrast, the Baseline simply replaces and trains a new classifier based on the few given novel class data, which allows it to quickly adapt to a novel class and is less affected by domain shift between the source and target domains.
Table 3: 5-shot accuracy under the cross-domain scenario with a ResNet-18 backbone. Baseline outperforms all other methods under this scenario.
Method | mini-ImageNet→CUB
Baseline | 65.57±0.70
Baseline++ | 62.04±0.76
MatchingNet | 53.07±0.74
ProtoNet | 62.02±0.70
MAML | 51.34±0.72
RelationNet | 57.71±0.73
The Baseline also performs better than the Baseline++ method, possibly because additionally reducing intra-class variation compromises adaptability. In Figure 4, we can further observe how the Baseline's accuracy becomes relatively higher as the domain difference gets larger. That is, as the domain difference grows larger, the adaptation based on a few novel class instances becomes more important. 4.5 EFFECT OF FURTHER ADAPTATION To further adapt meta-learning methods as in the Baseline method, an intuitive way is to fix the features and train a new softmax classifier. We apply this simple adaptation scheme to MatchingNet and ProtoNet. For MAML, it is not feasible to fix the features as it is an initialization method. In contrast, since it updates the model with the support set for only a few iterations, we can adapt further by updating for as many iterations as is required to train a new classification layer, which is 100 updates as mentioned in Section 4.1. For RelationNet, the features are convolution maps rather than feature vectors, so we are not able to replace the relation module with a softmax classifier. As an alternative, we randomly split the few training examples in each novel class into 3 support and 2 query examples to fine-tune the relation module for 100 epochs. The results of further adaptation are shown in Figure 5; we can observe that the performance of MatchingNet and MAML improves significantly after further adaptation, particularly in the mini-ImageNet→CUB scenario. The results demonstrate that lack of adaptation is the reason they fall behind the Baseline. However, changing the setting in the meta-testing stage can lead to inconsistency with the meta-training stage. The ProtoNet result shows that performance can degrade in scenarios with less domain difference. Thus, we believe that learning how to adapt in the meta-training stage is an important future direction. In summary, as domain differences are likely to exist in many real-world applications, we consider that learning to learn adaptation in the meta-training stage would be an important direction for future meta-learning research in few-shot classification. 5 CONCLUSIONS In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification. Through comparing methods on common ground, our results show that the Baseline++ model is competitive with the state of the art under standard conditions, and the Baseline model achieves competitive performance with recent state-of-the-art meta-learning algorithms on both the CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone. Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where there exists domain shift between the base and novel classes. By making our source code publicly available, we believe that the community can benefit from the consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning. A1 RELATIONSHIP BETWEEN DOMAIN ADAPTATION AND FEW-SHOT CLASSIFICATION As mentioned in Section 2, here we discuss the relationship between domain adaptation and few-shot classification to clarify the different experimental settings. As shown in Table A1, in general, domain adaptation aims at adapting knowledge from a source dataset to the same classes in a target dataset.
On the other hand, the goal of few-shot classification is to learn from base classes to classify novel classes in the same dataset. Several recent works tackle the problem at the intersection of the two fields of study. For example, cross-task domain adaptation Hsu et al. (2018) also discusses novel classes in the target dataset. In contrast, while Motiian et al. (2017) has “few-shot” in the title, their evaluation setting focuses on classifying the same classes in the target dataset. If base and novel classes are both drawn from the same dataset, minor domain shift exists between the base and novel classes, as we demonstrated in Section 4.4. To highlight the impact of domain shift, we further propose the mini-ImageNet→CUB setting. The domain shift in few-shot classification is also discussed in Dong & Xing (2018).
Table A1: Relationship between domain adaptation and few-shot classification. The two fields of study overlap in their development. The notation "*" indicates that minor domain shifts exist between base and novel classes.
Setting | Domain shift | Source to target dataset | Base to novel class
Domain adaptation, Motiian et al. (2017) | V | V | -
Cross-task domain adaptation, Hsu et al. (2018) | V | V | V
Few-shot classification, Ours (CUB, mini-ImageNet) | * | - | V
Cross-domain few-shot, Ours (mini-ImageNet→CUB), Dong & Xing (2018) | V | V | V
A2 TERMINOLOGY DIFFERENCE Different meta-learning works use different terminology. We highlight the differences in Table A2 to clarify the inconsistency.
Table A2: Different terminology used in other works. The notation "-" indicates the term is the same as in this paper.
Our terms | MatchingNet Vinyals et al. | ProtoNet Snell et al. | MAML Finn et al. | Meta-learn LSTM Ravi & Larochelle | Imaginary Wang et al.
meta-training stage | training | training | - | - | -
meta-testing stage | test | test | - | - | -
base class | training set | training set | task | meta-training set | -
novel class | test set | test set | new task | meta-testing set | -
support set | - | - | sample | training dataset | training data
query set | batch | - | test time sample | test dataset | test data
A3 ADDITIONAL RESULTS ON OMNIGLOT AND OMNIGLOT→EMNIST For completeness, here we also show the results under two additional scenarios: 4) character recognition and 5) cross-domain character recognition. For character recognition, we use the Omniglot dataset Lake et al. (2011) commonly used in evaluating few-shot classification algorithms. Omniglot contains 1,623 characters from 50 languages, and we follow the evaluation protocol of Vinyals et al. (2016) to first augment the classes by rotations of 90, 180, and 270 degrees, resulting in 6,492 classes. We then follow Snell et al. (2017) to split these classes into 4,112 base, 688 validation, and 1,692 novel classes. Unlike Snell et al. (2017), our validation classes are only used to monitor the performance during meta-training. For cross-domain character recognition (Omniglot→EMNIST), we follow the setting of Dong & Xing (2018) to use Omniglot without Latin characters and without rotation augmentation as base classes, so there are 1,597 base classes. On the other hand, the EMNIST dataset Cohen et al. (2017) contains the 10 digits and upper- and lower-case English letters, so there are 62 classes in total. We split these classes into 31 validation and 31 novel classes, and invert the white-on-black characters to black-on-white as in Omniglot. We use a Conv-4 backbone with an input size of 28×28 for both settings.
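Note that these rotations create new classes rather than extra samples of existing classes. A minimal sketch of this class-expansion step, under our own helper name and data layout, is as follows.

```python
import torch

def expand_classes_by_rotation(images_by_class):
    """Each 90/180/270-degree rotation of a character class becomes a new
    class (Vinyals et al., 2016), turning 1,623 classes into 6,492."""
    expanded = {}
    for cls, imgs in images_by_class.items():  # imgs: [num_images, C, H, W]
        for k in range(4):  # k quarter-turns: 0, 90, 180, 270 degrees
            # torch.rot90 rotates in the spatial (H, W) plane.
            expanded[(cls, 90 * k)] = torch.rot90(imgs, k, dims=(-2, -1))
    return expanded
```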
As Omniglot characters are black-and-white, center-aligned, and rotation-sensitive, we do not use data augmentation in this experiment. To reduce the risk of over-fitting, we use the validation set to select the epoch or episode with the best accuracy for all methods, including Baseline and Baseline++.4 As shown in Table A3, in both the Omniglot and Omniglot→EMNIST settings, meta-learning methods outperform Baseline and Baseline++ in the 1-shot setting. However, all methods reach comparable performance in the 5-shot classification setting. We attribute this to the lack of data augmentation for the Baseline and Baseline++ methods, as they tend to over-fit to the base classes. When sufficient examples in novel classes are available, the negative impact of over-fitting is reduced.
Table A3: Few-shot classification results for both Omniglot and Omniglot→EMNIST. All experiments are 5-way classification with a Conv-4 backbone and without data augmentation.
Method | Omniglot 1-shot | Omniglot 5-shot | Omniglot→EMNIST 1-shot | Omniglot→EMNIST 5-shot
Baseline | 94.89±0.45 | 99.12±0.13 | 63.94±0.87 | 86.00±0.59
Baseline++ | 95.41±0.39 | 99.38±0.10 | 64.74±0.82 | 87.31±0.58
MatchingNet | 97.78±0.30 | 99.37±0.11 | 72.71±0.79 | 87.60±0.56
ProtoNet | 98.01±0.30 | 99.15±0.12 | 70.43±0.80 | 87.04±0.55
MAML | 98.57±0.19 | 99.53±0.08 | 72.04±0.83 | 88.24±0.56
RelationNet | 97.22±0.33 | 99.30±0.10 | 75.55±0.87 | 88.94±0.54
A4 BASELINE WITH 1-NN CLASSIFIER Some prior work Vinyals et al. (2016) applies a Baseline with a 1-NN classifier in the test stage. We include our results in Table A4. The results show that the 1-NN classifier performs better than the softmax classifier in the 1-shot setting, but the softmax classifier performs better in the 5-shot setting. We note that the numbers here are not directly comparable to the results in Vinyals et al. (2016) because we use the different mini-ImageNet split of Ravi & Larochelle (2017).
Table A4: Baseline with softmax and 1-NN classifiers in the test stage. We use cosine distance for the 1-NN classifier.
Method | 1-shot softmax | 1-shot 1-NN | 5-shot softmax | 5-shot 1-NN
Baseline | 42.11±0.71 | 44.18±0.69 | 62.53±0.69 | 56.68±0.67
Baseline++ | 48.24±0.75 | 49.57±0.73 | 66.43±0.63 | 61.93±0.65
4The exact epoch for Baseline and Baseline++ on Omniglot and Omniglot→EMNIST is 5 epochs.
A5 MAML AND MAML WITH FIRST-ORDER APPROXIMATION As discussed in Section 4.1, we use the first-order approximation of MAML to improve memory efficiency in all of our experiments. To demonstrate that this design choice does not affect the accuracy, we compare the validation accuracy trends of the two versions on 5-shot Omniglot in Figure A1. We observe that while the full version of MAML converges faster, both versions reach similar accuracy in the end. This phenomenon is consistent with the difference between first-order methods (e.g., gradient descent) and second-order methods (e.g., Newton's method) in convex optimization problems. Second-order methods converge faster at the cost of memory, but both converge to a similar objective value. Figure A1: Validation accuracy trends of MAML and MAML with first-order approximation. Both versions converge to the same validation accuracy. The experimental results are on 5-shot Omniglot with a Conv-4 backbone. A6 INTRA-CLASS VARIATION AND BACKBONE DEPTH As mentioned in Section 4.3, here we demonstrate in Figure A2 that intra-class variation decreases as the network gets deeper. We use the Davies-Bouldin index Davies & Bouldin (1979) to measure intra-class variation. The Davies-Bouldin index is a metric to evaluate the tightness of a cluster (or class, in our case).
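For reference, this statistic can be computed directly from extracted features; below is a minimal sketch using scikit-learn, which is our choice of implementation (the paper does not specify one).

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def intra_class_variation(features, labels):
    """Davies-Bouldin index over extracted features; lower values indicate
    tighter classes. features: [num_samples, feat_dim], labels: [num_samples]."""
    return davies_bouldin_score(np.asarray(features), np.asarray(labels))

# e.g., score features produced by extractors with different backbones:
# db_conv4 = intra_class_variation(feats_conv4, labels)
# db_resnet18 = intra_class_variation(feats_resnet18, labels)
```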
Our results show that intra-class variation in both the base and novel class features decreases with deeper backbones.
Figure A2: Intra-class variation decreases as the backbone gets deeper. Here we use the Davies-Bouldin index to represent intra-class variation, which is a metric to evaluate the tightness of a cluster (or class, in our case). The statistics are Davies-Bouldin indices for all base and novel class features (extracted by the feature extractor learned after the training or meta-training stage) on the CUB dataset under different backbones.
A7 DETAILED STATISTICS IN EFFECTS OF INCREASING BACKBONE DEPTH Here we show a high-resolution version of Figure 3 in Figure A3 and detailed statistics in Table A5 for easier comparison.
Figure A3: Few-shot classification accuracy vs. backbone depth. In the CUB dataset, gaps among different methods diminish as the backbone gets deeper. In mini-ImageNet 5-shot, some meta-learning methods are even beaten by Baseline with a deeper backbone.
Table A5: Detailed statistics in Figure 3. We put the exact values here for reference.
CUB 1-shot:
Method | Conv-4 | Conv-6 | ResNet-10 | ResNet-18 | ResNet-34
Baseline | 47.12±0.74 | 55.77±0.86 | 63.34±0.91 | 65.51±0.87 | 67.96±0.89
Baseline++ | 60.53±0.83 | 66.00±0.89 | 69.55±0.89 | 67.02±0.90 | 68.00±0.83
MatchingNet | 61.16±0.89 | 67.16±0.97 | 71.29±0.90 | 72.36±0.90 | 71.44±0.96
ProtoNet | 51.31±0.91 | 66.07±0.97 | 70.13±0.94 | 71.88±0.91 | 72.03±0.91
MAML | 55.92±0.95 | 65.91±0.97 | 71.29±0.95 | 69.96±1.01 | 67.28±1.08
RelationNet | 62.45±0.98 | 63.11±0.94 | 68.65±0.91 | 67.59±1.02 | 66.20±0.99
CUB 5-shot:
Method | Conv-4 | Conv-6 | ResNet-10 | ResNet-18 | ResNet-34
Baseline | 64.16±0.71 | 73.07±0.71 | 81.27±0.57 | 82.85±0.55 | 84.27±0.53
Baseline++ | 79.34±0.61 | 82.02±0.55 | 85.17±0.50 | 83.58±0.54 | 84.50±0.51
MatchingNet | 72.86±0.70 | 77.08±0.66 | 83.59±0.58 | 83.64±0.60 | 83.78±0.56
ProtoNet | 70.77±0.69 | 78.14±0.67 | 84.76±0.52 | 87.42±0.48 | 85.98±0.53
MAML | 72.09±0.76 | 76.31±0.74 | 80.33±0.70 | 82.70±0.65 | 83.47±0.59
RelationNet | 76.11±0.69 | 77.81±0.66 | 81.12±0.63 | 82.75±0.58 | 82.30±0.58
mini-ImageNet 1-shot:
Method | Conv-4 | Conv-6 | ResNet-10 | ResNet-18 | ResNet-34
Baseline | 42.11±0.71 | 45.82±0.74 | 52.37±0.79 | 51.75±0.80 | 49.82±0.73
Baseline++ | 48.24±0.75 | 48.29±0.72 | 53.97±0.79 | 51.87±0.77 | 52.65±0.83
MatchingNet | 48.14±0.78 | 50.47±0.86 | 54.49±0.81 | 52.91±0.88 | 53.20±0.78
ProtoNet | 44.42±0.84 | 50.37±0.83 | 51.98±0.84 | 54.16±0.82 | 53.90±0.83
MAML | 46.47±0.82 | 50.96±0.92 | 54.69±0.89 | 49.61±0.92 | 51.46±0.90
RelationNet | 49.31±0.85 | 51.84±0.88 | 52.19±0.83 | 52.48±0.86 | 51.74±0.83
mini-ImageNet 5-shot:
Method | Conv-4 | Conv-6 | ResNet-10 | ResNet-18 | ResNet-34
Baseline | 62.53±0.69 | 66.42±0.67 | 74.69±0.64 | 74.27±0.63 | 73.45±0.65
Baseline++ | 66.43±0.63 | 68.09±0.69 | 75.90±0.61 | 75.68±0.63 | 76.16±0.63
MatchingNet | 63.48±0.66 | 63.19±0.70 | 68.82±0.65 | 68.88±0.69 | 68.32±0.66
ProtoNet | 64.24±0.72 | 67.33±0.67 | 72.64±0.64 | 73.68±0.65 | 74.65±0.64
MAML | 62.71±0.71 | 66.09±0.71 | 66.62±0.83 | 65.72±0.77 | 65.90±0.79
RelationNet | 66.60±0.69 | 64.55±0.70 | 70.20±0.66 | 69.83±0.68 | 69.61±0.67
A8 MORE-WAY IN META-TESTING STAGE We experiment with a practical setting that handles different testing scenarios. Specifically, we conduct experiments with 5-way meta-training and N-way meta-testing (where N = 5, 10, 20) to examine the effect of testing scenarios that differ from training. As shown in Table A6, we compare the methods Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet.
Note that we are unable to apply the MAML method, as MAML learns the initialization for the classifier and can thus only be updated to classify the same number of classes. Our results show that for classification with a larger N-way in the meta-testing stage, the proposed Baseline++ compares favorably against other methods in both shallow and deep backbone settings. We attribute the results to two reasons. First, to perform well in a larger N-way classification setting, one needs to further reduce intra-class variation to avoid misclassification. Thus, Baseline++ has better performance than Baseline in both backbone settings. Second, as the meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, the performance of these algorithms may drop significantly when increasing the N-way in the meta-testing stage, because the tasks of 10-way or 20-way classification are harder than the 5-way one. One may address this issue by performing a larger N-way classification in the meta-training stage (as suggested in Snell et al. (2017)). However, this may encounter memory constraints. For example, to perform 20-way classification with 5 support images and 15 query images in each class, a batch of 400 images (20 × (5 + 15)) must fit into the GPUs. Without special hardware parallelization, the large batch size may prevent us from training models with deeper backbones such as ResNet.
Table A6: 5-way meta-training and N-way meta-testing experiment. The experimental results are on mini-ImageNet with 5-shot. Baseline++ compares favorably against other methods in both shallow and deep backbone settings.
Method | Conv-4 5-way | Conv-4 10-way | Conv-4 20-way | ResNet-18 5-way | ResNet-18 10-way | ResNet-18 20-way
Baseline | 62.53±0.69 | 46.44±0.41 | 32.27±0.24 | 74.27±0.63 | 55.00±0.46 | 42.03±0.25
Baseline++ | 66.43±0.63 | 52.26±0.40 | 38.03±0.24 | 75.68±0.63 | 63.40±0.44 | 50.85±0.25
MatchingNet | 63.48±0.66 | 47.61±0.44 | 33.97±0.24 | 68.88±0.69 | 52.27±0.46 | 36.78±0.25
ProtoNet | 64.24±0.68 | 48.77±0.45 | 34.58±0.23 | 73.68±0.65 | 59.22±0.44 | 44.96±0.26
RelationNet | 66.60±0.69 | 47.77±0.43 | 33.72±0.22 | 69.83±0.68 | 53.88±0.48 | 39.17±0.25
1. How can we evaluate meta-learning algorithms in a consistent and systematic way? 2. Are there any simple modifications that can improve the baselines of existing meta-learning algorithms? 3. Can we get insights into why certain meta-learning algorithms may not be as effective as claimed, and what can be done to improve them?
Review
Review The paper proposes a systematic and consistent way to evaluate meta-learning algorithms. I believe this is a great direction of research, as the meta-learning community is growing quickly. However, my question is: if a relatively simple modification could improve the baselines, are there similarly simple modifications available for the other meta-learning algorithms being investigated? If the other algorithms are not as good as claimed, can you give any insights into why, and what could be improved?
Title A Closer Look at Few-shot Classification Abstract Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the miniImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms. 1 INTRODUCTION Deep learning models have achieved state-of-the-art performance on visual recognition tasks such as image classification. The strong performance, however, heavily relies on training a network with abundant labeled instances with diverse visual variations (e.g., thousands of examples for each new class even with pre-training on large-scale dataset with base classes). The human annotation cost as well as the scarcity of data in some classes (e.g., rare species) significantly limit the applicability of current vision systems to learn new visual concepts efficiently. In contrast, the human visual systems can recognize new classes with extremely few labeled examples. It is thus of great interest to learn to generalize to new classes with a limited amount of labeled examples for each novel class. The problem of learning to generalize to unseen classes during training, known as few-shot classification, has attracted considerable attention Vinyals et al. (2016); Snell et al. (2017); Finn et al. (2017); Ravi & Larochelle (2017); Sung et al. (2018); Garcia & Bruna (2018); Qi et al. (2018). One promising direction to few-shot classification is the meta-learning paradigm where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization. Examples include model initialization based methods Ravi & Larochelle (2017); Finn et al. (2017), metric learning methods Vinyals et al. (2016); Snell et al. (2017); Sung et al. (2018), and hallucination based methods Antoniou et al. (2018); Hariharan & Girshick (2017); Wang et al. (2018). Another line of work Gidaris & Komodakis (2018); Qi et al. (2018) also demonstrates promising results by directly predicting the weights of the classifiers for novel classes. Limitations. While many few-shot classification algorithms have reported improved performance over the state-of-the-art, there are two main challenges that prevent us from making a fair comparison and measuring the actual progress. First, the discrepancy of the implementation details among multiple few-shot learning algorithms obscures the relative performance gain. 
The performance of baseline approaches can also be significantly under-estimated (e.g., training without data augmentation). Second, while the current evaluation focuses on recognizing novel class with limited training examples, these novel classes are sampled from the same dataset. The lack of domain shift between the base and novel classes makes the evaluation scenarios unrealistic. Our work. In this paper, we present a detailed empirical study to shed new light on the few-shot classification problem. First, we conduct consistent comparative experiments to compare several representative few-shot classification methods on common ground. Our results show that using a deep backbone shrinks the performance gap between different methods in the setting of limited domain differences between base and novel classes. Second, by replacing the linear classifier with a distance-based classifier as used in Gidaris & Komodakis (2018); Qi et al. (2018), the baseline method is surprisingly competitive to current state-of-art meta-learning algorithms. Third, we introduce a practical evaluation setting where there exists domain shift between base and novel classes (e.g., sampling base classes from generic object categories and novel classes from fine-grained categories). Our results show that sophisticated few-shot learning algorithms do not provide performance improvement over the baseline under this setting. Through making the source code and model implementations with a consistent evaluation setting publicly available, we hope to foster future progress in the field.1 Our contributions. 1. We provide a unified testbed for several different few-shot classification algorithms for a fair comparison. Our empirical evaluation results reveal that the use of a shallow backbone commonly used in existing work leads to favorable results for methods that explicitly reduce intra-class variation. Increasing the model capacity of the feature backbone reduces the performance gap between different methods when domain differences are limited. 2. We show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with the state-of-the-art meta-learning methods on both mini-ImageNet and CUB datasets. 3. We investigate a practical evaluation setting where base and novel classes are sampled from different domains. We show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning to adapt to domain differences in few-shot learning. 2 RELATED WORK Given abundant training examples for the base classes, few-shot learning algorithms aim to learn to recognizing novel classes with a limited amount of labeled examples. Much efforts have been devoted to overcome the data efficiency issue. In the following, we discuss representative few-shot learning algorithms organized into three main categories: initialization based, metric learning based, and hallucination based methods. Initialization based methods tackle the few-shot learning problem by “learning to fine-tune”. One approach aims to learn good model initialization (i.e., the parameters of a network) so that the classifiers for novel classes can be learned with a limited number of labeled examples and a small number of gradient update steps Finn et al. (2017; 2018); Nichol & Schulman (2018); Rusu et al. (2019). Another line of work focuses on learning an optimizer. 
Examples include the LSTM-based meta-learner for replacing the stochastic gradient decent optimizer Ravi & Larochelle (2017) and the weight-update mechanism with an external memory Munkhdalai & Yu (2017). While these initialization based methods are capable of achieving rapid adaption with a limited number of training examples for novel classes, our experiments show that these methods have difficulty in handling domain shifts between base and novel classes. Distance metric learning based methods address the few-shot classification problem by “learning to compare”. The intuition is that if a model can determine the similarity of two images, it can classify an unseen input image with the labeled instances Koch et al. (2015). To learn a sophisticated comparison models, meta-learning based methods make their prediction conditioned on distance or 1https://github.com/wyharveychen/CloserLookFewShot metric to few labeled instances during the training process. Examples of distance metrics include cosine similarity Vinyals et al. (2016), Euclidean distance to class-mean representation Snell et al. (2017), CNN-based relation module Sung et al. (2018), ridge regression Bertinetto et al. (2019), and graph neural network Garcia & Bruna (2018). In this paper, we compare the performance of three distance metric learning methods. Our results show that a simple baseline method with a distancebased classifier (without training over a collection of tasks/episodes as in meta-learning) achieves competitive performance with respect to other sophisticated algorithms. Besides meta-learning methods, both Gidaris & Komodakis (2018) and Qi et al. (2018) develop a similar method to our Baseline++ (described later in Section 3.2). The method in Gidaris & Komodakis (2018) learns a weight generator to predict the novel class classifier using an attentionbased mechanism (cosine similarity), and the Qi et al. (2018) directly use novel class features as their weights. Our Baseline++ can be viewed as a simplified architecture of these methods. Our focus, however, is to show that simply reducing intra-class variation in a baseline method using the base class data leads to competitive performance. Hallucination based methods directly deal with data deficiency by “learning to augment”. This class of methods learns a generator from data in the base classes and use the learned generator to hallucinate new novel class data for data augmentation. One type of generator aims at transferring appearance variations exhibited in the base classes. These generators either transfer variance in base class data to novel classes Hariharan & Girshick (2017), or use GAN models Antoniou et al. (2018) to transfer the style. Another type of generators does not explicitly specify what to transfer, but directly integrate the generator into a meta-learning algorithm for improving the classification accuracy Wang et al. (2018). Since hallucination based methods often work with other few-shot methods together (e.g. use hallucination based and metric learning based methods together) and lead to complicated comparison, we do not include these methods in our comparative study and leave it for future work. Domain adaptation techniques aim to reduce the domain shifts between source and target domain Pan et al. (2010); Ganin & Lempitsky (2015), as well as novel tasks in a different domain Hsu et al. (2018). Similar to domain adaptation, we also investigate the impact of domain difference on fewshot classification algorithms in Section 4.5. 
In contrast to most domain adaptation problems where a large amount of data is available in the target domain (either labeled or unlabeled), our problem setting differs because we only have very few examples in the new domain. Very recently, the method in Dong & Xing (2018) addresses the one-shot novel category domain adaptation problem, where in the testing stage both the domain and the category to classify are changed. Similarly, our work highlights the limitations of existing few-shot classification algorithms problem in handling domain shift. To put these problem settings in context, we provided a detailed comparison of setting difference in the appendix A1. 3 OVERVIEW OF FEW-SHOT CLASSIFICATION ALGORITHMS In this section, we first outline the details of the baseline model (Section 3.1) and its variant (Section 3.2), followed by describing representative meta-learning algorithms (Section 3.3) studied in our experiments. Given abundant base class labeled data Xb and a small amount of novel class labeled data Xn, the goal of few-shot classification algorithms is to train classifiers for novel classes (unseen during training) with few labeled examples. 3.1 BASELINE Our baseline model follows the standard transfer learning procedure of network pre-training and fine-tuning. Figure 1 illustrates the overall procedure. Training stage. We train a feature extractor fθ (parametrized by the network parameters θ ) and the classifier C(·|Wb) (parametrized by the weight matrix Wb ∈ Rd×c) from scratch by minimizing a standard cross-entropy classification loss Lpred using the training examples in the base classes xi ∈ Xb. Here, we denote the dimension of the encoded feature as d and the number of output classes as c. The classifier C(.|Wb) consists of a linear layer W>b fθ (xi) followed by a softmax function σ . Fine-tuning stage. To adapt the model to recognize novel classes in the fine-tuning stage, we fix the pre-trained network parameter θ in our feature extractor fθ and train a new classifier C(.|Wn) (parametrized by the weight matrix Wn) by minimizing Lpred using the few labeled of examples (i.e., the support set) in the novel classes Xn. 3.2 BASELINE++ In addition to the baseline model, we also implement a variant of the baseline model, denoted as Baseline++, which explicitly reduces intra-class variation among features during training. The importance of reducing intra-class variations of features has been highlighted in deep metric learning Hu et al. (2015) and few-shot classification methods Gidaris & Komodakis (2018). The training procedure of Baseline++ is the same as the original Baseline model except for the classifier design. As shown in Figure 1, we still have a weight matrix Wb ∈Rd×c of the classifier in the training stage and a Wn in the fine-tuning stage in Baseline++. The classifier design, however, is different from the linear classifier used in the Baseline. Take the weight matrix Wb as an example. We can write the weight matrix Wb as [w1,w2, ...wc], where each class has a d-dimensional weight vector. In the training stage, for an input feature fθ (xi) where xi ∈ Xb, we compute its cosine similarity to each weight vector [w1, · · · ,wc] and obtain the similarity scores [si,1,si,2, · · · ,si,c] for all classes, where si, j = fθ (xi)>w j/‖ fθ (xi)‖‖w j‖. We can then obtain the prediction probability for each class by normalizing these similarity scores with a softmax function. 
Here, the classifier makes a prediction based on the cosine distance between the input feature and the learned weight vectors representing each class. Consequently, training the model with this distance-based classifier explicitly reduce intra-class variations. Intuitively, the learned weight vectors [w1, · · · ,wc] can be interpreted as prototypes (similar to Snell et al. (2017); Vinyals et al. (2016)) for each class and the classification is based on the distance of the input feature to these learned prototypes. The softmax function prevents the learned weight vectors collapsing to zeros. We clarify that the network design in Baseline++ is not our contribution. The concept of distancebased classification has been extensively studied in Mensink et al. (2012) and recently has been revisited in the few-shot classification setting Gidaris & Komodakis (2018); Qi et al. (2018). 3.3 META-LEARNING ALGORITHMS Here we describe the formulations of meta-learning methods used in our study. We consider three distance metric learning based methods (MatchingNet Vinyals et al. (2016), ProtoNet Snell et al. (2017), and RelationNet Sung et al. (2018)) and one initialization based method (MAML Finn et al. (2017)). While meta-learning is not a clearly defined, Vinyals et al. (2016) considers a few-shot classification method as meta-learning if the prediction is conditioned on a small support set S, because it makes the training procedure explicitly learn to learn from a given small support set. As shown in Figure 2, meta-learning algorithms consist of a meta-training and a meta-testing stage. In the meta-training stage, the algorithm first randomly select N classes, and sample small base support set Sb and a base query set Qb from data samples within these classes. The objective is to train a classification model M that minimizes N-way prediction loss LN−way of the samples in the query set Qb. Here, the classifier M is conditioned on provided support set Sb. By making prediction conditioned on the given support set, a meta-learning method can learn how to learn from limited labeled data through training from a collection of tasks (episodes). In the meta-testing stage, all novel class data Xn are considered as the support set for novel classes Sn, and the classification model M can be adapted to predict novel classes with the new support set Sn. Different meta-learning methods differ in their strategies to make prediction conditioned on support set (see Figure 2). For both MatchingNet Vinyals et al. (2016) and ProtoNet Snell et al. (2017), the prediction of the examples in a query set Q is based on comparing the distance between the query feature and the support feature from each class. MatchingNet compares cosine distance between the query feature and each support feature, and computes average cosine distance for each class, while ProtoNet compares the Euclidean distance between query features and the class mean of support features. RelationNet Sung et al. (2018) shares a similar idea, but it replaces distance with a learnable relation module. The MAML method Finn et al. (2017) is an initialization based meta-learning algorithm, where each support set is used to adapt the initial model parameters using few gradient updates. As different support sets have different gradient updates, the adapted model is conditioned on the support set. 
Note that when the query set instances are predicted by the adapted model in the meta-training stage, the loss of the query set is used to update the initial model, not the adapted model. 4 EXPERIMENTAL RESULTS 4.1 EXPERIMENTAL SETUP Datasets and scenarios. We address the few-shot classification problem under three scenarios: 1) generic object recognition, 2) fine-grained image classification, and 3) cross-domain adaptation. For object recognition, we use the mini-ImageNet dataset commonly used in evaluating few-shot classification algorithms. The mini-ImageNet dataset consists of a subset of 100 classes from the ImageNet dataset Deng et al. (2009) and contains 600 images for each class. The dataset was first proposed by Vinyals et al. (2016), but recent works use the follow-up setting provided by Ravi & Larochelle (2017), which is composed of randomly selected 64 base, 16 validation, and 20 novel classes. For fine-grained classification, we use CUB-200-2011 dataset Wah et al. (2011) (referred to as the CUB hereafter). The CUB dataset contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), we randomly split the dataset into 100 base, 50 validation, and 50 novel classes. For the cross-domain scenario (mini-ImageNet →CUB), we use mini-ImageNet as our base class and the 50 validation and 50 novel class from CUB. Evaluating the cross-domain scenario allows us to understand the effects of domain shifts to existing few-shot classification approaches. Implementation details. In the training stage for the Baseline and the Baseline++ methods, we train 400 epochs with a batch size of 16. In the meta-training stage for meta-learning methods, we train 60,000 episodes for 1-shot and 40,000 episodes for 5-shot tasks. We use the validation set to select the training episodes with the best accuracy.2 In each episode, we sample N classes to form N-way classification (N is 5 in both meta-training and meta-testing stages unless otherwise mentioned). For each class, we pick k labeled instances as our support set and 16 instances for the query set for a k-shot task. In the fine-tuning or meta-testing stage for all methods, we average the results over 600 experiments. In each experiment, we randomly sample 5 classes from novel classes, and in each class, we also pick k instances for the support set and 16 for the query set. For Baseline and Baseline++, we use the entire support set to train a new classifier for 100 iterations with a batch size of 4. For meta-learning methods, we obtain the classification model conditioned on the support set as in Section 3.3. All methods are trained from scratch and use the Adam optimizer with initial learning rate 10−3. We apply standard data augmentation including random crop, left-right flip, and color jitter in both the training or meta-training stage. Some implementation details have been adjusted individually for each method. For Baseline++, we multiply the cosine similarity by a constant scalar 2 to adjust original value range [-1,1] to be more appropriate for subsequent softmax layer. For MatchingNet, we use an FCE classification layer without fine-tuning in all experiments and also multiply cosine similarity by a constant scalar. For RelationNet, we replace the L2 norm with a softmax layer to expedite training. For MAML, we use a first-order approximation in the gradient for memory efficiency. 
4.2 EVALUATION USING THE STANDARD SETTING

We now conduct experiments in the most common few-shot classification setting: 1-shot and 5-shot classification, i.e., 1 or 5 labeled instances are available for each novel class. We use a four-layer convolutional backbone (Conv-4) with an input size of 84×84 as in Snell et al. (2017) and perform 5-way classification for only the novel classes during the fine-tuning or meta-testing stage.

To validate the correctness of our implementation, we first compare our results to the numbers reported for the mini-ImageNet dataset in Table 1 (the reported results are from Ravi & Larochelle (2017)). Note that we include a ProtoNet#, as we use 5-way classification in the meta-training and meta-testing stages for all meta-learning methods as mentioned in Section 4.1; however, the officially reported ProtoNet results use 30-way for one shot and 20-way for five shot in the meta-training stage in spite of using 5-way in the meta-testing stage. We report this result for completeness.

From Table 1, we can observe that none of our re-implementations of the meta-learning methods fall more than 2% behind the reported performance. These minor differences can be attributed to our modifications of some implementation details to ensure a fair comparison among all methods, such as using the same optimizer for all methods.

Moreover, our implementation of existing work also improves the performance of some of the methods. For example, our results show that the Baseline approach in the 5-shot setting can be improved by a large margin, since previous implementations of the Baseline did not include data augmentation in the training stage, thereby leading to over-fitting. While our Baseline∗ is not as good as reported in the 1-shot setting, our Baseline with augmentation still improves on it, and could be even higher if our reproduced Baseline∗ matched the reported statistics. In either case, the performance of the Baseline method has been severely underestimated. We also improve the results of MatchingNet by adjusting the input score to the softmax layer to a more appropriate range, as stated in Section 4.1. On the other hand, while ProtoNet# is not as good as ProtoNet, as mentioned in the original paper, a more challenging setting in the meta-training stage leads to better accuracy. We choose a consistent 5-way classification setting in subsequent experiments to allow a fair comparison with the other methods; this issue can be resolved by using a deeper backbone, as shown in Section 4.3.

After validating our re-implementation, we now report the accuracy in Table 2. Besides additionally reporting results on the CUB dataset, we also compare Baseline++ to the other methods. Here, we find that Baseline++ improves the Baseline by a large margin and becomes competitive even when compared with the other meta-learning methods. The results demonstrate that reducing intra-class variation is an important factor in the current few-shot classification problem setting. However, note that our current setting only uses a 4-layer backbone, while a deeper backbone can inherently reduce intra-class variation. Thus, we conduct experiments to investigate the effects of backbone depth in the next section.
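A common convention in the few-shot literature is to report each accuracy as a mean with a 95% confidence interval over the test episodes. Assuming that convention for the ± values in our tables, a minimal aggregation helper (our own sketch, using a normal approximation over the 600 episodes) looks like this:

```python
import numpy as np

def mean_with_ci95(per_episode_acc):
    """Summarize per-episode accuracies as 'mean ± half-width of a 95% CI'."""
    a = np.asarray(per_episode_acc, dtype=float)
    half_width = 1.96 * a.std(ddof=1) / np.sqrt(len(a))  # normal approximation
    return a.mean(), half_width
```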
4.3 EFFECT OF INCREASING THE NETWORK DEPTH

In this section, we change the depth of the feature backbone to reduce intra-class variation for all methods. (See the appendix for statistics on how network depth correlates with intra-class variation.) Starting from Conv-4, we gradually increase the feature backbone to Conv-6, ResNet-10, 18, and 34, where Conv-6 has two additional convolution blocks without pooling after Conv-4. ResNet-18 and 34 are the same as described in He et al. (2016) with an input size of 224×224, while ResNet-10 is a simplified version of ResNet-18 in which only one residual building block is used in each layer (see the sketch at the end of this section). The statistics of this experiment should also help other works make fair comparisons under different feature backbones.

Results on the CUB dataset show a clear tendency in Figure 3: as the backbone gets deeper, the gap among different methods drastically reduces. Another observation is how rapidly ProtoNet improves as the backbone gets deeper. While using consistent 5-way classification as discussed in Section 4.2 degrades the accuracy of ProtoNet with Conv-4, it works well with a deeper backbone. The two observations above demonstrate that, on the CUB dataset, the gap among existing methods would be reduced if their intra-class variation were all reduced by a deeper backbone.

However, the results on mini-ImageNet in Figure 3 are much more complicated. In the 5-shot setting, both Baseline and Baseline++ achieve good performance with a deeper backbone, but some meta-learning methods become worse relative to them. Thus, beyond intra-class variation, we can assume that the dataset itself is also important in few-shot classification. One difference between CUB and mini-ImageNet is the domain difference between their base and novel classes, since classes in mini-ImageNet have a larger divergence than CUB in the WordNet hierarchy Miller (1995). To better understand this effect, below we discuss how domain differences between base and novel classes impact few-shot classification results.
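The ResNet-10 variant mentioned above is not a stock torchvision model, but matching the description (one residual block per layer of ResNet-18) it can be built from torchvision's generic ResNet constructor. The sketch below is our own construction under that reading, not the authors' exact backbone code.

```python
from torchvision.models.resnet import ResNet, BasicBlock

def resnet10(num_classes: int = 1000) -> ResNet:
    """ResNet-18 stacks BasicBlocks as [2, 2, 2, 2]; keeping a single block
    per stage gives the simplified ResNet-10 described above."""
    return ResNet(BasicBlock, [1, 1, 1, 1], num_classes=num_classes)
```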
4.4 EFFECT OF DOMAIN DIFFERENCES BETWEEN BASE AND NOVEL CLASSES

To dig further into the issue of domain differences, we design scenarios that provide such domain shifts. Besides the fine-grained classification and object recognition scenarios, we propose a new cross-domain scenario, mini-ImageNet→CUB, as mentioned in Section 4.1. We believe this is a practical scenario, since collecting images of generic classes may be relatively easy (e.g., due to increased availability) while collecting images of fine-grained classes might be more difficult. We conduct the experiments with a ResNet-18 feature backbone.

As shown in Table 3, the Baseline outperforms all meta-learning methods under this scenario. While meta-learning methods learn to learn from the support set during the meta-training stage, they are not able to adapt to novel classes that are too different, since all of the base support sets lie within the same dataset. A similar concept is also mentioned in Vinyals et al. (2016). In contrast, the Baseline simply replaces and trains a new classifier based on the few given novel class data, which allows it to quickly adapt to a novel class and makes it less affected by domain shift between the source and target domains.

Table 3: 5-shot accuracy under the cross-domain scenario (mini-ImageNet→CUB) with a ResNet-18 backbone. Baseline outperforms all other methods under this scenario.

Method        mini-ImageNet→CUB
Baseline      65.57±0.70
Baseline++    62.04±0.76
MatchingNet   53.07±0.74
ProtoNet      62.02±0.70
MAML          51.34±0.72
RelationNet   57.71±0.73

The Baseline also performs better than the Baseline++ method, possibly because additionally reducing intra-class variation compromises adaptability. In Figure 4, we can further observe how Baseline accuracy becomes relatively higher as the domain difference gets larger. That is, as the domain difference grows, adaptation based on a few novel class instances becomes more important.

4.5 EFFECT OF FURTHER ADAPTATION

To further adapt meta-learning methods as in the Baseline method, an intuitive approach is to fix the features and train a new softmax classifier (see the sketch at the end of this section). We apply this simple adaptation scheme to MatchingNet and ProtoNet. For MAML, it is not feasible to fix the features, as it is an initialization method; instead, since it updates the model with the support set for only a few iterations, we can adapt further by updating for as many iterations as are required to train a new classification layer, which is 100 updates as mentioned in Section 4.1. For RelationNet, the features are convolution maps rather than feature vectors, so we cannot replace them with a softmax classifier. As an alternative, we randomly split the few training data in each novel class into 3 support and 2 query examples to fine-tune the relation module for 100 epochs.

The results of further adaptation are shown in Figure 5; we can observe that the performance of MatchingNet and MAML improves significantly after further adaptation, particularly in the mini-ImageNet→CUB scenario. The results demonstrate that a lack of adaptation is the reason they fall behind the Baseline. However, changing the setting in the meta-testing stage can lead to inconsistency with the meta-training stage. The ProtoNet result shows that performance can degrade in scenarios with less domain difference. Thus, we believe that learning how to adapt in the meta-training stage is an important future direction. In summary, as domain differences are likely to exist in many real-world applications, we consider learning to learn adaptation in the meta-training stage an important direction for future meta-learning research in few-shot classification.
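For the metric-based methods, the further-adaptation scheme amounts to freezing the meta-trained feature extractor and fitting a small softmax classifier on the novel support set. The following is our own minimal sketch of that procedure; the iteration count and batch size follow Section 4.1, while the optimizer choice is ours for illustration.

```python
import torch
import torch.nn as nn

def adapt_new_classifier(encoder, support_x, support_y, n_way,
                         iters=100, batch_size=4, lr=1e-3):
    """Freeze the meta-trained encoder and fit a fresh linear softmax
    classifier on the novel-class support set."""
    encoder.eval()
    with torch.no_grad():  # the feature extractor stays fixed
        feats = encoder(support_x)
    clf = nn.Linear(feats.size(1), n_way)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)  # optimizer choice is ours
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(iters):
        idx = torch.randperm(len(feats))[:batch_size]
        loss = loss_fn(clf(feats[idx]), support_y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return clf
```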
5 CONCLUSIONS

In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification. By comparing methods on common ground, our results show that the Baseline++ model is competitive with the state of the art under standard conditions, and that the Baseline model achieves performance competitive with recent state-of-the-art meta-learning algorithms on both the CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone. Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where domain shift exists between the base and novel classes. By making our source code publicly available, we believe the community can benefit from consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning.

A1 RELATIONSHIP BETWEEN DOMAIN ADAPTATION AND FEW-SHOT CLASSIFICATION

As mentioned in Section 2, here we discuss the relationship between domain adaptation and few-shot classification to clarify the different experimental settings. As shown in Table A1, domain adaptation in general aims at adapting knowledge about a class in the source dataset to the same class in the target dataset. On the other hand, the goal of few-shot classification is to learn from base classes to classify novel classes in the same dataset. Several recent works tackle problems at the intersection of the two fields of study. For example, cross-task domain adaptation Hsu et al. (2018) also discusses novel classes in the target dataset. In contrast, while Motiian et al. (2017) has "few-shot" in the title, their evaluation setting focuses on classifying the same classes in the target dataset. If base and novel classes are both drawn from the same dataset, minor domain shift exists between the base and novel classes, as we demonstrated in Section 4.4. To highlight the impact of domain shift, we further propose the mini-ImageNet→CUB setting. Domain shift in few-shot classification is also discussed in Dong & Xing (2018).

Table A1: Relationship between domain adaptation and few-shot classification. The two fields of study overlap in their development. The notation "*" indicates that minor domain shifts exist between base and novel classes.

Setting                                                              | Domain shift | Source to target dataset | Base to novel class
Domain adaptation: Motiian et al. (2017)                             | V            | V                        | -
Cross-task domain adaptation: Hsu et al. (2018)                      | V            | V                        | V
Few-shot classification: Ours (CUB, mini-ImageNet)                   | *            | -                        | V
Cross-domain few-shot: Ours (mini-ImageNet→CUB), Dong & Xing (2018)  | V            | V                        | V

A2 TERMINOLOGY DIFFERENCE

Different meta-learning works use different terminology. We highlight the differences in Table A2 to clarify the inconsistency.

Table A2: Different terminology used in other works. The notation "-" indicates that the term is the same as in this paper.

Our terms           | MatchingNet Vinyals et al. | ProtoNet Snell et al. | MAML Finn et al. | Meta-learn LSTM Ravi & Larochelle | Imaginary Wang et al.
meta-training stage | training                   | training              | -                | -                                 | -
meta-testing stage  | test                       | test                  | -                | -                                 | -
base class          | training set               | training set          | task             | meta-training set                 | -
novel class         | test set                   | test set              | new task         | meta-testing set                  | -
support set         | -                          | -                     | sample           | training dataset                  | training data
query set           | batch                      | -                     | test time sample | test dataset                      | test data

A3 ADDITIONAL RESULTS ON OMNIGLOT AND OMNIGLOT→EMNIST

For completeness, we also show results under two additional scenarios: 4) character recognition and 5) cross-domain character recognition. For character recognition, we use the Omniglot dataset Lake et al. (2011) commonly used in evaluating few-shot classification algorithms. Omniglot contains 1,623 characters from 50 languages, and we follow the evaluation protocol of Vinyals et al. (2016) to first augment the classes by rotations of 90, 180, and 270 degrees, resulting in 6,492 classes (sketched below). We then follow Snell et al. (2017) to split these classes into 4,112 base, 688 validation, and 1,692 novel classes. Unlike Snell et al. (2017), our validation classes are only used to monitor performance during meta-training. For cross-domain character recognition (Omniglot→EMNIST), we follow the setting of Dong & Xing (2018) and use Omniglot without the Latin characters and without rotation augmentation as base classes, resulting in 1,597 base classes. The EMNIST dataset Cohen et al. (2017) contains the 10 digits and the upper- and lower-case letters of the English alphabet, 62 classes in total. We split these classes into 31 validation and 31 novel classes, and invert the white-on-black characters to black-on-white as in Omniglot. We use a Conv-4 backbone with an input size of 28×28 for both settings.
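Note that in the Omniglot protocol each rotation creates a new class, not merely a new view of an existing one. A minimal sketch of this class-level augmentation follows; the helper is our own illustration and assumes PIL-style images with a rotate method.

```python
def augment_classes_by_rotation(images_by_class):
    """Omniglot protocol: each 90-degree rotation of a character class becomes
    a NEW class, turning 1,623 classes into 4 * 1,623 = 6,492 classes."""
    augmented = {}
    for cls, imgs in images_by_class.items():
        for deg in (0, 90, 180, 270):
            augmented[f"{cls}_rot{deg}"] = [img.rotate(deg) for img in imgs]
    return augmented
```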
As Omniglot characters are black-and-white, center-aligned, and rotation-sensitive, we do not use data augmentation in this experiment. To reduce the risk of over-fitting, we use the validation set to select the epoch or episode with the best accuracy for all methods, including Baseline and Baseline++. (The exact epoch for Baseline and Baseline++ on both Omniglot and Omniglot→EMNIST is 5 epochs.)

As shown in Table A3, in both the Omniglot and Omniglot→EMNIST settings, the meta-learning methods outperform Baseline and Baseline++ in the 1-shot setting. However, all methods reach comparable performance in the 5-shot setting. We attribute this to the lack of data augmentation for the Baseline and Baseline++ methods, which makes them prone to over-fitting the base classes. When sufficient examples in the novel classes are available, the negative impact of over-fitting is reduced.

Table A3: Few-shot classification results for both Omniglot and Omniglot→EMNIST. All experiments use 5-way classification with a Conv-4 backbone and no data augmentation.

              Omniglot                      Omniglot→EMNIST
Method        1-shot        5-shot          1-shot        5-shot
Baseline      94.89 ± 0.45  99.12 ± 0.13    63.94 ± 0.87  86.00 ± 0.59
Baseline++    95.41 ± 0.39  99.38 ± 0.10    64.74 ± 0.82  87.31 ± 0.58
MatchingNet   97.78 ± 0.30  99.37 ± 0.11    72.71 ± 0.79  87.60 ± 0.56
ProtoNet      98.01 ± 0.30  99.15 ± 0.12    70.43 ± 0.80  87.04 ± 0.55
MAML          98.57 ± 0.19  99.53 ± 0.08    72.04 ± 0.83  88.24 ± 0.56
RelationNet   97.22 ± 0.33  99.30 ± 0.10    75.55 ± 0.87  88.94 ± 0.54

A4 BASELINE WITH 1-NN CLASSIFIER

Some prior work (Vinyals et al. (2016)) applies the Baseline with a 1-NN classifier in the test stage. We include our results in Table A4. The results show that the 1-NN classifier performs better than the softmax classifier in the 1-shot setting, but the softmax classifier performs better in the 5-shot setting. We note that the numbers here are not directly comparable to the results in Vinyals et al. (2016) because we use a different mini-ImageNet split, as in Ravi & Larochelle (2017).

Table A4: Baseline with softmax and 1-NN classifiers in the test stage. We use cosine distance in the 1-NN classifier.

            1-shot                    5-shot
Method      softmax      1-NN         softmax      1-NN
Baseline    42.11±0.71   44.18±0.69   62.53±0.69   56.68±0.67
Baseline++  48.24±0.75   49.57±0.73   66.43±0.63   61.93±0.65

A5 MAML AND MAML WITH FIRST-ORDER APPROXIMATION

As discussed in Section 4.1, we use the first-order approximation of MAML to improve memory efficiency in all of our experiments. To demonstrate that this design choice does not affect accuracy, we compare the validation accuracy trends of the two versions on Omniglot in the 5-shot setting in Figure A1. We observe that while the full version of MAML converges faster, both versions reach similar accuracy in the end. This phenomenon is consistent with the difference between first-order (e.g., gradient descent) and second-order (e.g., Newton) methods in convex optimization: second-order methods converge faster at the cost of memory, but both converge to similar objective values.

Figure A1: Validation accuracy trends of MAML and MAML with the first-order approximation. Both versions converge to the same validation accuracy. The experiments are on Omniglot in the 5-shot setting with a Conv-4 backbone.

A6 INTRA-CLASS VARIATION AND BACKBONE DEPTH

As mentioned in Section 4.3, here we demonstrate that intra-class variation decreases as the network depth increases, as shown in Figure A2. We use the Davies-Bouldin index Davies & Bouldin (1979) to measure intra-class variation; it is a metric that evaluates the tightness of a cluster (or, in our case, a class). A sketch of how this index can be computed from extracted features is shown below.
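A lower Davies-Bouldin index means tighter, better-separated classes. The helper below is our own minimal sketch, using scikit-learn's implementation of the index over features extracted by the trained backbone.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def intra_class_variation(features, labels):
    """Davies-Bouldin index over backbone features: lower values mean tighter,
    better-separated classes, i.e., lower intra-class variation."""
    return davies_bouldin_score(np.asarray(features), np.asarray(labels))
```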
Our results show that the intra-class variation of both the base class and the novel class features decreases with deeper backbones.

Figure A2: Intra-class variation decreases as the backbone gets deeper. The two panels show the Davies-Bouldin index (our measure of intra-class variation) of the base class features and the novel class features on the CUB dataset, extracted by the feature extractor learned after the training or meta-training stage, for Baseline, Baseline++, MatchingNet, and ProtoNet under backbones from Conv-4 to ResNet-34.

A7 DETAILED STATISTICS ON THE EFFECTS OF INCREASING BACKBONE DEPTH

Here we show a high-resolution version of Figure 3 in Figure A3 and detailed statistics in Table A5 for easier comparison.

Figure A3: Few-shot classification accuracy vs. backbone depth (1-shot and 5-shot settings on CUB and mini-ImageNet). In the CUB dataset, gaps among different methods diminish as the backbone gets deeper. In mini-ImageNet 5-shot, some meta-learning methods are even beaten by Baseline with a deeper backbone.

Table A5: Detailed statistics for Figure 3. We list the exact values here for reference.

CUB 1-shot
Method        Conv-4       Conv-6       ResNet-10    ResNet-18    ResNet-34
Baseline      47.12±0.74   55.77±0.86   63.34±0.91   65.51±0.87   67.96±0.89
Baseline++    60.53±0.83   66.00±0.89   69.55±0.89   67.02±0.90   68.00±0.83
MatchingNet   61.16±0.89   67.16±0.97   71.29±0.90   72.36±0.90   71.44±0.96
ProtoNet      51.31±0.91   66.07±0.97   70.13±0.94   71.88±0.91   72.03±0.91
MAML          55.92±0.95   65.91±0.97   71.29±0.95   69.96±1.01   67.28±1.08
RelationNet   62.45±0.98   63.11±0.94   68.65±0.91   67.59±1.02   66.20±0.99

CUB 5-shot
Method        Conv-4       Conv-6       ResNet-10    ResNet-18    ResNet-34
Baseline      64.16±0.71   73.07±0.71   81.27±0.57   82.85±0.55   84.27±0.53
Baseline++    79.34±0.61   82.02±0.55   85.17±0.50   83.58±0.54   84.50±0.51
MatchingNet   72.86±0.70   77.08±0.66   83.59±0.58   83.64±0.60   83.78±0.56
ProtoNet      70.77±0.69   78.14±0.67   84.76±0.52   87.42±0.48   85.98±0.53
MAML          72.09±0.76   76.31±0.74   80.33±0.70   82.70±0.65   83.47±0.59
RelationNet   76.11±0.69   77.81±0.66   81.12±0.63   82.75±0.58   82.30±0.58

mini-ImageNet 1-shot
Method        Conv-4       Conv-6       ResNet-10    ResNet-18    ResNet-34
Baseline      42.11±0.71   45.82±0.74   52.37±0.79   51.75±0.80   49.82±0.73
Baseline++    48.24±0.75   48.29±0.72   53.97±0.79   51.87±0.77   52.65±0.83
MatchingNet   48.14±0.78   50.47±0.86   54.49±0.81   52.91±0.88   53.20±0.78
ProtoNet      44.42±0.84   50.37±0.83   51.98±0.84   54.16±0.82   53.90±0.83
MAML          46.47±0.82   50.96±0.92   54.69±0.89   49.61±0.92   51.46±0.90
RelationNet   49.31±0.85   51.84±0.88   52.19±0.83   52.48±0.86   51.74±0.83

mini-ImageNet 5-shot
Method        Conv-4       Conv-6       ResNet-10    ResNet-18    ResNet-34
Baseline      62.53±0.69   66.42±0.67   74.69±0.64   74.27±0.63   73.45±0.65
Baseline++    66.43±0.63   68.09±0.69   75.90±0.61   75.68±0.63   76.16±0.63
MatchingNet   63.48±0.66   63.19±0.70   68.82±0.65   68.88±0.69   68.32±0.66
ProtoNet      64.24±0.72   67.33±0.67   72.64±0.64   73.68±0.65   74.65±0.64
MAML          62.71±0.71   66.09±0.71   66.62±0.83   65.72±0.77   65.90±0.79
RelationNet   66.60±0.69   64.55±0.70   70.20±0.66   69.83±0.68   69.61±0.67

A8 MORE-WAY CLASSIFICATION IN THE META-TESTING STAGE

We experiment with a practical setting that handles different testing scenarios. Specifically, we conduct experiments with 5-way meta-training and N-way meta-testing (where N = 5, 10, 20) to examine the effect of testing scenarios that differ from training. As shown in Table A6, we compare the Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet methods.
Note that we are unable to include MAML, as MAML learns an initialization for the classifier and can therefore only be updated to classify the same number of classes.

Our results show that for classification with a larger N in the meta-testing stage, the proposed Baseline++ compares favorably against the other methods with both shallow and deep backbones. We attribute these results to two causes. First, to perform well in a larger N-way classification setting, one needs to further reduce intra-class variation to avoid misclassification; thus, Baseline++ outperforms Baseline with both backbones. Second, as the meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, their performance may drop significantly when N increases in the meta-testing stage, because 10-way or 20-way classification is harder than 5-way classification. One may address this issue by performing a larger N-way classification in the meta-training stage (as suggested in Snell et al. (2017)). However, this may run into memory constraints: to perform 20-way classification with 5 support and 15 query images per class, a batch of 400 images (20 × (5 + 15)) must fit on the GPUs. Without special hardware parallelization, such a large batch size may prevent training models with deeper backbones such as ResNet.

Table A6: 5-way meta-training and N-way meta-testing experiments on mini-ImageNet in the 5-shot setting. Baseline++ compares favorably against the other methods with both shallow and deep backbones.

              Conv-4                                  ResNet-18
N-way test    5-way        10-way       20-way        5-way        10-way       20-way
Baseline      62.53±0.69   46.44±0.41   32.27±0.24    74.27±0.63   55.00±0.46   42.03±0.25
Baseline++    66.43±0.63   52.26±0.40   38.03±0.24    75.68±0.63   63.40±0.44   50.85±0.25
MatchingNet   63.48±0.66   47.61±0.44   33.97±0.24    68.88±0.69   52.27±0.46   36.78±0.25
ProtoNet      64.24±0.68   48.77±0.45   34.58±0.23    73.68±0.65   59.22±0.44   44.96±0.26
RelationNet   66.60±0.69   47.77±0.43   33.72±0.22    69.83±0.68   53.88±0.48   39.17±0.25
1. What are the strengths and weaknesses of the paper's experimental design?
2. How does the reviewer assess the validity and practicality of the paper's comparison of few-shot learning methods?
3. What are the limitations of the paper regarding the number of novel classes in the training and testing stages?
4. How does the reviewer evaluate the significance of the paper's contribution to the field of few-shot learning?
5. Are there any factual errors or misleading statements in the review that need correction?
Review
This paper gives a nice overview of existing works on few-shot learning. It groups them into intuitive categories and meanwhile distills a common framework (Figure 2) employed by the methods. Moreover, the authors selected four of them, along with two baselines, to experimentally compare their performance under a cleaned experiment protocol. The experiments cover three few-shot learning scenarios, respectively for generic object recognition, fine-grained classification, and cross-domain adaptation. While I do *not* think the third scenario is "more practical", it is certainly nice to have it included in the experiments.

The experiment setup is unfortunately questionable. Since there is a validation set, one should use it to determine the free parameters (e.g., the number of epochs, learning rates, etc.). However, it seems that the same set of free parameters is used for the different methods, making the comparison unfair because this set may favor some methods and yet hurt the others. The results of RelationNet are missing in Table 4.

Another concern is that the same number of novel classes is used in the training and the testing stage. A more practical application of the learned meta model is to use it to handle different testing scenarios. There could be five novel classes in one scenario, 10 novel classes in another, and 100 in a third, etc. The number of labeled examples per class may also vary from one testing scenario to another.

The following statement is misleading: "Very recently, Motiian et al. (2017) addresses the few-shot domain adaptation problem." There are a few variations of domain adaptation (DA). The learner has access to the fully labeled source domain and a small set of labeled target examples in supervised DA; to the source domain, a couple of labeled target examples, and many unlabeled target examples in semi-supervised DA; and to the source domain and many unlabeled target data points in unsupervised DA. These have been studied long before Motiian et al. (2017), for instance in the works of Saenko et al. (2010) and Gong et al. (2013).

[ref] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In European Conference on Computer Vision, 2010 (pp. 213-226). Springer, Berlin, Heidelberg.
[ref] Gong B, Grauman K, Sha F. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In International Conference on Machine Learning, 2013 (pp. 222-230).

Overall, the paper is well written and may serve as a nice survey of existing works on few-shot learning. The unified experiment setup can facilitate future research with fair comparisons, along with the three testing scenarios. However, I have the above concerns about the experiment setup and hence also about the conclusions.
Here, the classifier makes a prediction based on the cosine distance between the input feature and the learned weight vectors representing each class. Consequently, training the model with this distance-based classifier explicitly reduce intra-class variations. Intuitively, the learned weight vectors [w1, · · · ,wc] can be interpreted as prototypes (similar to Snell et al. (2017); Vinyals et al. (2016)) for each class and the classification is based on the distance of the input feature to these learned prototypes. The softmax function prevents the learned weight vectors collapsing to zeros. We clarify that the network design in Baseline++ is not our contribution. The concept of distancebased classification has been extensively studied in Mensink et al. (2012) and recently has been revisited in the few-shot classification setting Gidaris & Komodakis (2018); Qi et al. (2018). 3.3 META-LEARNING ALGORITHMS Here we describe the formulations of meta-learning methods used in our study. We consider three distance metric learning based methods (MatchingNet Vinyals et al. (2016), ProtoNet Snell et al. (2017), and RelationNet Sung et al. (2018)) and one initialization based method (MAML Finn et al. (2017)). While meta-learning is not a clearly defined, Vinyals et al. (2016) considers a few-shot classification method as meta-learning if the prediction is conditioned on a small support set S, because it makes the training procedure explicitly learn to learn from a given small support set. As shown in Figure 2, meta-learning algorithms consist of a meta-training and a meta-testing stage. In the meta-training stage, the algorithm first randomly select N classes, and sample small base support set Sb and a base query set Qb from data samples within these classes. The objective is to train a classification model M that minimizes N-way prediction loss LN−way of the samples in the query set Qb. Here, the classifier M is conditioned on provided support set Sb. By making prediction conditioned on the given support set, a meta-learning method can learn how to learn from limited labeled data through training from a collection of tasks (episodes). In the meta-testing stage, all novel class data Xn are considered as the support set for novel classes Sn, and the classification model M can be adapted to predict novel classes with the new support set Sn. Different meta-learning methods differ in their strategies to make prediction conditioned on support set (see Figure 2). For both MatchingNet Vinyals et al. (2016) and ProtoNet Snell et al. (2017), the prediction of the examples in a query set Q is based on comparing the distance between the query feature and the support feature from each class. MatchingNet compares cosine distance between the query feature and each support feature, and computes average cosine distance for each class, while ProtoNet compares the Euclidean distance between query features and the class mean of support features. RelationNet Sung et al. (2018) shares a similar idea, but it replaces distance with a learnable relation module. The MAML method Finn et al. (2017) is an initialization based meta-learning algorithm, where each support set is used to adapt the initial model parameters using few gradient updates. As different support sets have different gradient updates, the adapted model is conditioned on the support set. 
Note that when the query set instances are predicted by the adapted model in the meta-training stage, the loss of the query set is used to update the initial model, not the adapted model. 4 EXPERIMENTAL RESULTS 4.1 EXPERIMENTAL SETUP Datasets and scenarios. We address the few-shot classification problem under three scenarios: 1) generic object recognition, 2) fine-grained image classification, and 3) cross-domain adaptation. For object recognition, we use the mini-ImageNet dataset commonly used in evaluating few-shot classification algorithms. The mini-ImageNet dataset consists of a subset of 100 classes from the ImageNet dataset Deng et al. (2009) and contains 600 images for each class. The dataset was first proposed by Vinyals et al. (2016), but recent works use the follow-up setting provided by Ravi & Larochelle (2017), which is composed of randomly selected 64 base, 16 validation, and 20 novel classes. For fine-grained classification, we use CUB-200-2011 dataset Wah et al. (2011) (referred to as the CUB hereafter). The CUB dataset contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), we randomly split the dataset into 100 base, 50 validation, and 50 novel classes. For the cross-domain scenario (mini-ImageNet →CUB), we use mini-ImageNet as our base class and the 50 validation and 50 novel class from CUB. Evaluating the cross-domain scenario allows us to understand the effects of domain shifts to existing few-shot classification approaches. Implementation details. In the training stage for the Baseline and the Baseline++ methods, we train 400 epochs with a batch size of 16. In the meta-training stage for meta-learning methods, we train 60,000 episodes for 1-shot and 40,000 episodes for 5-shot tasks. We use the validation set to select the training episodes with the best accuracy.2 In each episode, we sample N classes to form N-way classification (N is 5 in both meta-training and meta-testing stages unless otherwise mentioned). For each class, we pick k labeled instances as our support set and 16 instances for the query set for a k-shot task. In the fine-tuning or meta-testing stage for all methods, we average the results over 600 experiments. In each experiment, we randomly sample 5 classes from novel classes, and in each class, we also pick k instances for the support set and 16 for the query set. For Baseline and Baseline++, we use the entire support set to train a new classifier for 100 iterations with a batch size of 4. For meta-learning methods, we obtain the classification model conditioned on the support set as in Section 3.3. All methods are trained from scratch and use the Adam optimizer with initial learning rate 10−3. We apply standard data augmentation including random crop, left-right flip, and color jitter in both the training or meta-training stage. Some implementation details have been adjusted individually for each method. For Baseline++, we multiply the cosine similarity by a constant scalar 2 to adjust original value range [-1,1] to be more appropriate for subsequent softmax layer. For MatchingNet, we use an FCE classification layer without fine-tuning in all experiments and also multiply cosine similarity by a constant scalar. For RelationNet, we replace the L2 norm with a softmax layer to expedite training. For MAML, we use a first-order approximation in the gradient for memory efficiency. 
The approximation has been shown in the original paper and in our appendix to have nearly identical performance as the full version. We choose the first-order approximation for its efficiency. 4.2 EVALUATION USING THE STANDARD SETTING We now conduct experiments on the most common setting in few-shot classification, 1-shot and 5-shot classification, i.e., 1 or 5 labeled instances are available from each novel class. We use a four-layer convolution backbone (Conv-4) with an input size of 84x84 as in Snell et al. (2017) and perform 5-way classification for only novel classes during the fine-tuning or meta-testing stage. To validate the correctness of our implementation, we first compare our results to the reported numbers for the mini-ImageNet dataset in Table 1. Note that we have a ProtoNet#, as we use 5-way classification in the meta-training and meta-testing stages for all meta-learning methods as mentioned in Section 4.1; however, the official reported results from ProtoNet uses 30-way for one shot and 20-way for five shot in the meta-training stage in spite of using 5-way in the meta-testing stage. We report this result for completeness. From Table 1, we can observe that all of our re-implementation for meta-learning methods do not fall more than 2% behind reported performance. These minor differences can be attributed to our 2For example, the exact episodes for experiments on the mini-ImageNet in the 5-shot setting with a fourlayer ConvNet are: ProtoNet: 24,600; MatchingNet: 35,300; RelationNet: 37,100; MAML: 36,700. 3Reported results are from Ravi & Larochelle (2017) modifications of some implementation details to ensure a fair comparison among all methods, such as using the same optimizer for all methods. Moreover, our implementation of existing work also improves the performance of some of the methods. For example, our results show that the Baseline approach under 5-shot setting can be improved by a large margin since previous implementations of the Baseline do not include data augmentation in their training stage, thereby leads to over-fitting. While our Baseline∗ is not as good as reported in 1-shot, our Baseline with augmentation still improves on it, and could be even higher if our reproduced Baseline∗ matches the reported statistics. In either case, the performance of the Baseline method is severely underestimated. We also improve the results of MatchingNet by adjusting the input score to the softmax layer to a more appropriate range as stated in Section 4.1. On the other hand, while ProtoNet# is not as good as ProtoNet, as mentioned in the original paper a more challenging setting in the meta-training stage leads to better accuracy. We choose to use a consistent 5-way classification setting in subsequent experiments to have a fair comparison to other methods. This issue can be resolved by using a deeper backbone as shown in Section 4.3. After validating our re-implementation, we now report the accuracy in Table 2. Besides additionally reporting results on the CUB dataset, we also compare Baseline++ to other methods. Here, we find that Baseline++ improves the Baseline by a large margin and becomes competitive even when compared with other meta-learning methods. The results demonstrate that reducing intra-class variation is an important factor in the current few-shot classification problem setting. However, note that our current setting only uses a 4-layer backbone, while a deeper backbone can inherently reduce intra-class variation. 
Thus, we conduct experiments to investigate the effects of backbone depth in the next section. 4.3 EFFECT OF INCREASING THE NETWORK DEPTH In this section, we change the depth of the feature backbone to reduce intra-class variation for all methods. See appendix for statistics on how network depth correlates with intra-class variation. Starting from Conv-4, we gradually increase the feature backbone to Conv-6, ResNet-10, 18 and 34, where Conv-6 have two additional convolution blocks without pooling after Conv-4. ResNet-18 and 34 are the same as described in He et al. (2016) with an input size of 224×224, while ResNet-10 is a simplified version of ResNet-18 where only one residual building block is used in each layer. The statistics of this experiment would also be helpful to other works to make a fair comparison under different feature backbones. Results of the CUB dataset shows a clearer tendency in Figure 3. As the backbone gets deeper, the gap among different methods drastically reduces. Another observation is how ProtoNet improves rapidly as the backbone gets deeper. While using a consistent 5-way classification as discussed in Section 4.2 degrades the accuracy of ProtoNet with Conv-4, it works well with a deeper backbone. Thus, the two observations above demonstrate that in the CUB dataset, the gap among existing methods would be reduced if their intra-class variation are all reduced by a deeper backbone. However, the result of mini-ImageNet in Figure 3 is much more complicated. In the 5-shot setting, both Baseline and Baseline++ achieve good performance with a deeper backbone, but some metalearning methods become worse relative to them. Thus, other than intra-class variation, we can assume that the dataset is also important in few-shot classification. One difference between CUB and mini-ImageNet is their domain difference in base and novel classes since classes in mini-ImageNet have a larger divergence than CUB in a word-net hierarchy Miller (1995). To better understand the effect, below we discuss how domain differences between base and novel classes impact few-shot classification results. 4.4 EFFECT OF DOMAIN DIFFERENCES BETWEEN BASE AND NOVEL CLASSES To further dig into the issue of domain difference, we design scenarios that provide such domain shifts. Besides the fine-grained classification and object recognition scenarios, we propose a new cross-domain scenario: mini-ImageNet →CUB as mentioned in Section 4.1. We believe that this is practical scenario since collecting images from a general class may be relatively easy (e.g. due to increased availability) but collecting images from fine-grained classes might be more difficult. We conduct the experiments with a ResNet-18 feature backbone. As shown in Table 3, the Baseline outperforms all meta-learning methods under this scenario. While meta-learning methods learn to learn from the support set during the meta-training stage, they are not able to adapt to novel classes that are too different since all of the base support sets are within the same dataset. A similar concept is also mentioned in Vinyals et al. (2016). In contrast, the Baseline simply replaces and trains a new classifier based on the few given novel class data, which allows it to quickly adapt to a novel mini-ImageNet→CUB Baseline 65.57±0.70 Baseline++ 62.04±0.76 MatchingNet 53.07±0.74 ProtoNet 62.02±0.70 MAML 51.34±0.72 RelationNet 57.71±0.73 Table 3: 5-shot accuracy under the cross-domain scenario with a ResNet-18 backbone. 
Baseline outperforms all other methods under this scenario. class and is less affected by domain shift between the source and target domains. The Baseline also performs better than the Baseline++ method, possibly because additionally reducing intra-class variation compromises adaptability. In Figure 4, we can further observe how Baseline accuracy becomes relatively higher as the domain difference gets larger. That is, as the domain difference grows larger, the adaptation based on a few novel class instances becomes more important. 4.5 EFFECT OF FURTHER ADAPTATION To further adapt meta-learning methods as in the Baseline method, an intuitive way is to fix the features and train a new softmax classifier. We apply this simple adaptation scheme to MatchingNet and ProtoNet. For MAML, it is not feasible to fix the feature as it is an initialization method. In contrast, since it updates the model with the support set for only a few iterations, we can adapt further by updating for as many iterations as is required to train a new classification layer, which is 100 updates as mentioned in Section 4.1. For RelationNet, the features are convolution maps rather than the feature vectors, so we are not able to replace it with a softmax. As an alternative, we randomly split the few training data in novel class into 3 support and 2 query data to finetune the relation module for 100 epochs. The results of further adaptation are shown in Figure 5; we can observe that the performance of MatchingNet and MAML improves significantly after further adaptation, particularly in the miniImageNet →CUB scenario. The results demonstrate that lack of adaptation is the reason they fall behind the Baseline. However, changing the setting in the meta-testing stage can lead to inconsistency with the meta-training stage. The ProtoNet result shows that performance can degrade in sce- narios with less domain difference. Thus, we believe that learning how to adapt in the meta-training stage is important future direction. In summary, as domain differences are likely to exist in many real-world applications, we consider that learning to learn adaptation in the meta-training stage would be an important direction for future meta-learning research in few-shot classification. 5 CONCLUSIONS In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification. Through comparing methods on a common ground, our results show that the Baseline++ model is competitive to state of art under standard conditions, and the Baseline model achieves competitive performance with recent state-of-the-art meta-learning algorithms on both CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone. Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where there exists domain shift between the base and novel classes. By making our source code publicly available, we believe that community can benefit from the consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning. A1 RELATIONSHIP BETWEEN DOMAIN ADAPTATION AND FEW-SHOT CLASSIFICATION As mentioned in Section 2, here we discuss the relationship between domain adaptation and fewshot classification to clarify different experimental settings. As shown in Table A1, in general, domain adaptation aims at adapting source dataset knowledge to the same class in target dataset. 
5 CONCLUSIONS

In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification. By comparing methods on common ground, our results show that the Baseline++ model is competitive with the state of the art under standard conditions, and that the Baseline model achieves performance competitive with recent state-of-the-art meta-learning algorithms on both the CUB and mini-ImageNet benchmarks when using a deeper feature backbone. Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where domain shift exists between the base and novel classes. By making our source code publicly available, we believe the community can benefit from consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning.

A1 RELATIONSHIP BETWEEN DOMAIN ADAPTATION AND FEW-SHOT CLASSIFICATION

As mentioned in Section 2, here we discuss the relationship between domain adaptation and few-shot classification to clarify the different experimental settings. As shown in Table A1, domain adaptation generally aims at adapting source-dataset knowledge to the same classes in a target dataset. On the other hand, the goal of few-shot classification is to learn from base classes to classify novel classes in the same dataset. Several recent works tackle problems at the intersection of the two fields of study. For example, cross-task domain adaptation (Hsu et al., 2018) also considers novel classes in the target dataset. In contrast, while Motiian et al. (2017) has "few-shot" in the title, their evaluation setting focuses on classifying the same classes in the target dataset. If base and novel classes are both drawn from the same dataset, a minor domain shift exists between the base and novel classes, as we demonstrated in Section 4.4. To highlight the impact of domain shift, we further propose the mini-ImageNet→CUB setting. Domain shift in few-shot classification is also discussed in Dong & Xing (2018).

Table A1: Relationship between domain adaptation and few-shot classification. The two fields of study overlap in their development. The notation "*" indicates that minor domain shifts exist between base and novel classes.

Setting (reference)                                                        Domain shift   Source to target dataset   Base to novel class
Domain adaptation (Motiian et al., 2017)                                   V              V                          -
Cross-task domain adaptation (Hsu et al., 2018)                            V              V                          V
Few-shot classification: Ours (CUB, mini-ImageNet)                         *              -                          V
Cross-domain few-shot: Ours (mini-ImageNet→CUB), Dong & Xing (2018)        V              V                          V

A2 TERMINOLOGY DIFFERENCE

Different meta-learning works use different terminology. We highlight the differences in Table A2 to clarify the inconsistency.

Table A2: Different terminology used in other works. The notation "-" indicates the term is the same as in this paper.

Our terms            MatchingNet (Vinyals et al.)   ProtoNet (Snell et al.)   MAML (Finn et al.)   Meta-learn LSTM (Ravi & Larochelle)   Imaginary (Wang et al.)
meta-training stage  training                       training                  -                    -                                     -
meta-testing stage   test                           test                      -                    -                                     -
base class           training set                   training set              task                 meta-training set                     -
novel class          test set                       test set                  new task             meta-testing set                      -
support set          -                              -                         sample               training dataset                      training data
query set            batch                          -                         test time sample     test dataset                          test data

A3 ADDITIONAL RESULTS ON OMNIGLOT AND OMNIGLOT→EMNIST

For completeness, here we also show results under two additional scenarios: 4) character recognition and 5) cross-domain character recognition. For character recognition, we use the Omniglot dataset (Lake et al., 2011), commonly used to evaluate few-shot classification algorithms. Omniglot contains 1,623 characters from 50 languages, and we follow the evaluation protocol of Vinyals et al. (2016) to first augment the classes by rotations of 90, 180, and 270 degrees, resulting in 6,492 classes (see the sketch below).
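For concreteness, a minimal NumPy sketch of this class-level rotation augmentation follows; the data layout (a dict of per-class image arrays) is an assumption for illustration, not the actual data loader.

```python
# Sketch of the class-level rotation augmentation used for Omniglot:
# each 90/180/270-degree rotation of a character class is treated as a
# new class (1,623 x 4 = 6,492 classes). Array shapes are illustrative.
import numpy as np

def augment_classes_by_rotation(images_by_class):
    """images_by_class: dict {class_id: array of shape (n, H, W)}.
    Returns a new dict where each rotation becomes its own class."""
    augmented = {}
    for cid, imgs in images_by_class.items():
        for k in range(4):  # 0, 90, 180, 270 degrees
            augmented[(cid, 90 * k)] = np.rot90(imgs, k=k, axes=(1, 2))
    return augmented

# Usage: two toy classes of 20 images each -> 8 classes after augmentation.
data = {c: np.random.rand(20, 28, 28) for c in range(2)}
print(len(augment_classes_by_rotation(data)))  # 8
```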
We then follow Snell et al. (2017) to split these classes into 4,112 base, 688 validation, and 1,692 novel classes. Unlike Snell et al. (2017), our validation classes are only used to monitor performance during meta-training. For cross-domain character recognition (Omniglot→EMNIST), we follow the setting of Dong & Xing (2018) and use Omniglot without Latin characters and without rotation augmentation as base classes, giving 1,597 base classes. The EMNIST dataset (Cohen et al., 2017) contains the 10 digits and the upper- and lower-case letters of the English alphabet, 62 classes in total. We split these classes into 31 validation and 31 novel classes, and invert the white-on-black characters to black-on-white as in Omniglot. We use a Conv-4 backbone with an input size of 28×28 for both settings. As Omniglot characters are black-and-white, center-aligned, and rotation-sensitive, we do not use training-time data augmentation in this experiment. To reduce the risk of over-fitting, we use the validation set to select the epoch or episode with the best accuracy for all methods, including Baseline and Baseline++.4

As shown in Table A3, in both the Omniglot and Omniglot→EMNIST settings, meta-learning methods outperform Baseline and Baseline++ in the 1-shot setting. However, all methods reach comparable performance in the 5-shot setting. We attribute this to the lack of data augmentation for the Baseline and Baseline++ methods, which causes them to over-fit base classes; when sufficient examples in novel classes are available, the negative impact of over-fitting is reduced.

Table A3: Few-shot classification results for both Omniglot and Omniglot→EMNIST. All experiments use 5-way classification with a Conv-4 backbone and no data augmentation.

                Omniglot                        Omniglot→EMNIST
Method          1-shot         5-shot           1-shot         5-shot
Baseline        94.89 ± 0.45   99.12 ± 0.13     63.94 ± 0.87   86.00 ± 0.59
Baseline++      95.41 ± 0.39   99.38 ± 0.10     64.74 ± 0.82   87.31 ± 0.58
MatchingNet     97.78 ± 0.30   99.37 ± 0.11     72.71 ± 0.79   87.60 ± 0.56
ProtoNet        98.01 ± 0.30   99.15 ± 0.12     70.43 ± 0.80   87.04 ± 0.55
MAML            98.57 ± 0.19   99.53 ± 0.08     72.04 ± 0.83   88.24 ± 0.56
RelationNet     97.22 ± 0.33   99.30 ± 0.10     75.55 ± 0.87   88.94 ± 0.54

A4 BASELINE WITH 1-NN CLASSIFIER

Some prior work (Vinyals et al., 2016) applies a Baseline with a 1-NN classifier in the test stage. We include our results in Table A4. The results show that the 1-NN classifier performs better than the softmax classifier in the 1-shot setting, but the softmax classifier performs better in the 5-shot setting. We note that the numbers here are not directly comparable to the results in Vinyals et al. (2016) because we use a different mini-ImageNet, as in Ravi & Larochelle (2017).

Table A4: Baseline with softmax and 1-NN classifiers in the test stage. We use cosine distance in the 1-NN classifier.

              1-shot                      5-shot
              softmax       1-NN          softmax       1-NN
Baseline      42.11±0.71    44.18±0.69    62.53±0.69    56.68±0.67
Baseline++    48.24±0.75    49.57±0.73    66.43±0.63    61.93±0.65

4 The exact epoch for Baseline and Baseline++ on Omniglot and Omniglot→EMNIST is 5 epochs.

A5 MAML AND MAML WITH FIRST-ORDER APPROXIMATION

As discussed in Section 4.1, we use first-order approximation MAML to improve memory efficiency in all of our experiments. To demonstrate that this design choice does not affect accuracy, we compare the validation accuracy trends of the two versions on Omniglot with 5-shot, as shown in Figure A1. We observe that while the full version of MAML converges faster, both versions reach similar accuracy in the end. This phenomenon is consistent with the difference between first-order methods (e.g., gradient descent) and second-order methods (e.g., Newton) in convex optimization: second-order methods converge faster at the cost of memory, but both converge to similar objective values.

Figure A1: Validation accuracy trends of MAML and MAML with first-order approximation. Both versions converge to the same validation accuracy. The experimental results are on Omniglot with 5-shot and a Conv-4 backbone.

A6 INTRA-CLASS VARIATION AND BACKBONE DEPTH

As mentioned in Section 4.3, here we demonstrate that intra-class variation decreases as the network gets deeper, as shown in Figure A2. We use the Davies-Bouldin index (Davies & Bouldin, 1979) to measure intra-class variation; it is a metric that evaluates the tightness of a cluster (or class, in our case). Our results show that the intra-class variation of both base and novel class features decreases with deeper backbones (see the sketch below for how this statistic can be computed).
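As a concrete illustration, the statistic can be computed with scikit-learn's davies_bouldin_score, with class labels playing the role of cluster assignments; the feature shapes below are illustrative stand-ins for real backbone features.

```python
# Sketch of computing the intra-class variation statistic in Figure A2:
# the Davies-Bouldin index over extracted features, using class labels
# as cluster assignments. Lower values indicate tighter classes.
import numpy as np
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
# Stand-in for features of 50 classes x 30 images from a trained backbone.
features = rng.normal(size=(1500, 512))
labels = np.repeat(np.arange(50), 30)

print(davies_bouldin_score(features, labels))
```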
Figure A2: Intra-class variation decreases as the backbone gets deeper. The two panels plot the Davies-Bouldin index (our measure of intra-class variation) of the base class features and the novel class features, respectively, for Baseline, Baseline++, MatchingNet, and ProtoNet on the CUB dataset, across backbones from Conv-4 to ResNet-34. Features are extracted by the feature extractor learned after the training or meta-training stage.

A7 DETAILED STATISTICS ON THE EFFECTS OF INCREASING BACKBONE DEPTH

Here we show a high-resolution version of Figure 3 in Figure A3 and report the detailed statistics in Table A5 for easier comparison.

Figure A3: Few-shot classification accuracy vs. backbone depth, in four panels (1-shot and 5-shot, on CUB and mini-ImageNet) for Baseline, Baseline++, MatchingNet, ProtoNet, MAML, and RelationNet. In the CUB dataset, gaps among different methods diminish as the backbone gets deeper. In mini-ImageNet 5-shot, some meta-learning methods are even beaten by Baseline with a deeper backbone.

Table A5: Detailed statistics in Figure 3. We report the exact values for reference.

CUB 1-shot        Conv-4        Conv-6        ResNet-10     ResNet-18     ResNet-34
Baseline          47.12±0.74    55.77±0.86    63.34±0.91    65.51±0.87    67.96±0.89
Baseline++        60.53±0.83    66.00±0.89    69.55±0.89    67.02±0.90    68.00±0.83
MatchingNet       61.16±0.89    67.16±0.97    71.29±0.90    72.36±0.90    71.44±0.96
ProtoNet          51.31±0.91    66.07±0.97    70.13±0.94    71.88±0.91    72.03±0.91
MAML              55.92±0.95    65.91±0.97    71.29±0.95    69.96±1.01    67.28±1.08
RelationNet       62.45±0.98    63.11±0.94    68.65±0.91    67.59±1.02    66.20±0.99

CUB 5-shot        Conv-4        Conv-6        ResNet-10     ResNet-18     ResNet-34
Baseline          64.16±0.71    73.07±0.71    81.27±0.57    82.85±0.55    84.27±0.53
Baseline++        79.34±0.61    82.02±0.55    85.17±0.50    83.58±0.54    84.50±0.51
MatchingNet       72.86±0.70    77.08±0.66    83.59±0.58    83.64±0.60    83.78±0.56
ProtoNet          70.77±0.69    78.14±0.67    84.76±0.52    87.42±0.48    85.98±0.53
MAML              72.09±0.76    76.31±0.74    80.33±0.70    82.70±0.65    83.47±0.59
RelationNet       76.11±0.69    77.81±0.66    81.12±0.63    82.75±0.58    82.30±0.58

mini-ImageNet 1-shot  Conv-4        Conv-6        ResNet-10     ResNet-18     ResNet-34
Baseline              42.11±0.71    45.82±0.74    52.37±0.79    51.75±0.80    49.82±0.73
Baseline++            48.24±0.75    48.29±0.72    53.97±0.79    51.87±0.77    52.65±0.83
MatchingNet           48.14±0.78    50.47±0.86    54.49±0.81    52.91±0.88    53.20±0.78
ProtoNet              44.42±0.84    50.37±0.83    51.98±0.84    54.16±0.82    53.90±0.83
MAML                  46.47±0.82    50.96±0.92    54.69±0.89    49.61±0.92    51.46±0.90
RelationNet           49.31±0.85    51.84±0.88    52.19±0.83    52.48±0.86    51.74±0.83

mini-ImageNet 5-shot  Conv-4        Conv-6        ResNet-10     ResNet-18     ResNet-34
Baseline              62.53±0.69    66.42±0.67    74.69±0.64    74.27±0.63    73.45±0.65
Baseline++            66.43±0.63    68.09±0.69    75.90±0.61    75.68±0.63    76.16±0.63
MatchingNet           63.48±0.66    63.19±0.70    68.82±0.65    68.88±0.69    68.32±0.66
ProtoNet              64.24±0.72    67.33±0.67    72.64±0.64    73.68±0.65    74.65±0.64
MAML                  62.71±0.71    66.09±0.71    66.62±0.83    65.72±0.77    65.90±0.79
RelationNet           66.60±0.69    64.55±0.70    70.20±0.66    69.83±0.68    69.61±0.67

A8 MORE-WAY IN THE META-TESTING STAGE

We experiment with a practical setting that handles different testing scenarios. Specifically, we conduct experiments with 5-way meta-training and N-way meta-testing (where N = 5, 10, 20) to examine the effect of testing scenarios that differ from training (an episode-construction sketch follows Table A6 below). As shown in Table A6, we compare the Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet methods.
Note that we are unable to apply MAML here: MAML learns the initialization of the classifier, so the model can only be updated to classify the same number of classes. Our results show that for classification with a larger N in the meta-testing stage, the proposed Baseline++ compares favorably against the other methods with both shallow and deep backbones. We attribute the results to two reasons. First, to perform well in a larger N-way classification setting, one needs to further reduce intra-class variation to avoid misclassification; thus, Baseline++ outperforms Baseline in both backbone settings. Second, as the meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, their performance may drop significantly when N increases in the meta-testing stage, because 10-way or 20-way classification is harder than 5-way. One may address this issue by performing larger N-way classification in the meta-training stage (as suggested in Snell et al. (2017)), but this runs into memory constraints. For example, to perform 20-way classification with 5 support images and 15 query images per class, a batch of 400 images (20 × (5 + 15)) must fit into the GPUs. Without special hardware parallelization, such a large batch size may prevent training models with deeper backbones such as ResNet.

Table A6: 5-way meta-training and N-way meta-testing experiment on mini-ImageNet with 5-shot. Baseline++ compares favorably against the other methods with both shallow and deep backbones.

              Conv-4                                    ResNet-18
N-way test    5-way         10-way        20-way        5-way         10-way        20-way
Baseline      62.53±0.69    46.44±0.41    32.27±0.24    74.27±0.63    55.00±0.46    42.03±0.25
Baseline++    66.43±0.63    52.26±0.40    38.03±0.24    75.68±0.63    63.40±0.44    50.85±0.25
MatchingNet   63.48±0.66    47.61±0.44    33.97±0.24    68.88±0.69    52.27±0.46    36.78±0.25
ProtoNet      64.24±0.68    48.77±0.45    34.58±0.23    73.68±0.65    59.22±0.44    44.96±0.26
RelationNet   66.60±0.69    47.77±0.43    33.72±0.22    69.83±0.68    53.88±0.48    39.17±0.25
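For reference, a minimal NumPy sketch of constructing an N-way K-shot episode from novel-class features, as used in the experiment above, is given below; the data layout and dimensions are illustrative assumptions.

```python
# Sketch of sampling an N-way K-shot episode for meta-testing, as in the
# 5-way training / N-way testing experiment above.
import numpy as np

def sample_episode(features_by_class, n_way, k_shot, n_query, rng):
    """features_by_class: dict {class_id: (n_images, d) array}."""
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, cid in enumerate(classes):
        feats = features_by_class[cid]
        idx = rng.choice(len(feats), size=k_shot + n_query, replace=False)
        support.append((feats[idx[:k_shot]], np.full(k_shot, label)))
        query.append((feats[idx[k_shot:]], np.full(n_query, label)))
    return support, query

# Usage: a 20-way 5-shot episode with 15 query images per class.
rng = np.random.default_rng(0)
novel = {c: rng.normal(size=(40, 512)) for c in range(20)}
support, query = sample_episode(novel, n_way=20, k_shot=5, n_query=15, rng=rng)
```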
1. What are the strengths and weaknesses of the paper's approach to evaluating few-shot learning methods?
2. How does the reviewer feel about the inclusion of domain shift experiments in the paper?
3. Are there any concerns regarding the redundancy of information in the paper, specifically in Sections 2 and 3.3, and Tables 1 and 4?
4. Would the reviewer like to see additional information included in the appendix, such as results on Omniglot and a cross-domain scenario with Omniglot -> MNIST?
5. Does the reviewer have any questions or concerns about the comparison between the Baseline and Baseline++ models in the Matching Nets paper?
6. Is there a discrepancy between the conclusion about network depth experiments and the actual results shown in the plot for the 5-shot mini-ImageNet case? If so, could the author clarify this point?
Review
Review
There are a few things I like about the paper. Firstly, it makes interesting observations about the evaluation of few-shot learning approaches, e.g., the underestimated baselines, and compares multiple methods under the same conditions. In fact, one of the reasons for accepting this paper would be to get a unified and, hopefully, well-written implementation of those methods. Secondly, I like the domain shift experiments, but I have the following question. The description of CUB says that there is an overlap between CUB and ImageNet. Is there an overlap between CUB and mini-ImageNet? If so, might the domain shift experiments be too optimistic, or is the overlap not a big deal?

One thing I don't like is that, in my opinion, the paper includes much redundant information which could go to the appendix so as not to weary the reader, for instance, everything related to Table 1. There is also some overlap between Sections 2 and 3.3, while MAML, for instance, is still not well explained. Also, tables with too many numbers are difficult to read, e.g., Table 4.

---- Other notes -----
Many few-shot learning papers use Omniglot, so I think it would be a valuable addition to the appendix. Moreover, there exists a cross-domain scenario with Omniglot -> MNIST which I would also like to see in the appendix.

In the Matching Nets paper, there is a good baseline classifier based on k-NNs. Do you know how that one compares to the Baseline and Baseline++ models if used with the same architecture for the feature extractor?

The conclusion from the network depth experiments is that "gaps among different methods diminish as the backbone gets deeper". However, in the 5-shot mini-ImageNet case, this is not what the plot shows; quite the opposite, the gap increased. Did I misunderstand something? Could you please comment on that?
ICLR
Title
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds
Abstract
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. OPE is therefore a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimate, when applying OPE to make high-stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality for the KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimal assumptions on the data and on the function class of the Q-function, and works in behavior-agnostic settings where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.

1 INTRODUCTION

Off-policy evaluation (OPE) seeks to estimate the expected reward of a target policy in reinforcement learning (RL) from observational data collected under different policies (e.g., Murphy et al., 2001; Fonteneau et al., 2013; Jiang & Li, 2016; Liu et al., 2018a). OPE plays a central role in applying RL with only observational data and has found important applications in areas such as medicine and self-driving, where interactive "on-policy" data is expensive or even infeasible to collect. A critical challenge in OPE is uncertainty estimation, as reliable confidence bounds are essential for making high-stakes decisions. In this work, we aim to tackle this problem by providing non-asymptotic confidence intervals on the expected value of the target policy. Our method allows us to rigorously quantify the uncertainty of the prediction and hence avoid the dangerous case of being overconfident when making costly and/or irreversible decisions.

However, off-policy evaluation per se has remained a key technical challenge in the literature (e.g., Precup, 2000; Thomas & Brunskill, 2016; Jiang & Li, 2016; Liu et al., 2018a), let alone rigorous confidence estimation for it. This is especially true when 1) the underlying RL problem is long- or infinite-horizon, and 2) the data is collected under arbitrary and unknown algorithms (a.k.a. behavior-agnostic). As a consequence, the collected data can exhibit an arbitrary dependency structure, which makes constructing rigorous non-asymptotic confidence bounds particularly challenging. Traditionally, the only approach to obtaining non-asymptotic confidence bounds in OPE is to combine importance sampling (IS) with concentration inequalities (e.g., Thomas et al., 2015a;b), which, however, tends to degenerate for long/infinite horizon problems (Liu et al., 2018a). Furthermore, neither can this approach be applied to behavior-agnostic settings, nor can it effectively handle the complicated time-dependency structure inside individual trajectories.
Instead, it requires a large number of independently collected trajectories drawn under known policies. In this work, we provide a practical approach for Behavior-agnostic, Off-policy, Infinite-horizon, Non-asymptotic, Confidence intervals based on arbitrarily Dependent data (BONDIC). Our method is motivated by a recently proposed optimization-based (or variational) approach to estimating OPE confidence bounds (Feng et al., 2020), which leverages a tail bound on the kernel Bellman statistics (Feng et al., 2019). Our approach achieves a new bound that is both an order of magnitude tighter and more computationally efficient than that of Feng et al. (2020). Our improvements rest on two pillars: 1) a new primal-dual perspective on non-asymptotic OPE confidence bounds, which connects to a body of recent work on infinite-horizon value estimation (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020a; Mousavi et al., 2020); and 2) a new, tight concentration inequality on the kernel Bellman statistics that applies to behavior-agnostic off-policy data with arbitrary dependency between transition pairs. Empirically, we demonstrate that our method provides reliable and tight bounds on a variety of well-established benchmarks.

Related Work Besides the aforementioned combination of IS and concentration inequalities (e.g., Thomas et al., 2015a), bootstrapping methods have also been widely used in off-policy estimation (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020), but the latter are limited to asymptotic bounds. Alternatively, Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016a) offer a different way to estimate uncertainty in RL, but fail to guarantee frequentist coverage. In addition, distributional RL (Bellemare et al., 2017) seeks to quantify the intrinsic uncertainty inside the Markov decision process, which is orthogonal to the estimation uncertainty that we consider. Our work builds on recent advances in behavior-agnostic infinite-horizon OPE, including Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020), as well as the DICE family (e.g., Nachum et al., 2019a; Zhang et al., 2020a; Yang et al., 2020b). In particular, our method can be viewed as extending the minimax framework for infinite-horizon OPE in the infinite-data regime of Tang et al. (2020a); Uehara et al. (2020); Jiang & Huang (2020) to the non-asymptotic finite-sample regime.

Outline For the rest of the paper, we start with the problem statement in Section 2 and an overview of the two dual approaches to infinite-horizon OPE that are tightly connected to our method in Section 3. We then present our main approach in Section 4 and perform empirical studies in Section 5. Proofs and additional discussion can be found in the Appendix.

2 BACKGROUND, DATA ASSUMPTION, PROBLEM SETTING

Consider an agent acting in an unknown environment. At each time step $t$, the agent observes the current state $s_t$ in a state space $\mathcal{S}$ and takes an action $a_t \sim \pi(\cdot \mid s_t)$ in an action space $\mathcal{A}$ according to a given policy $\pi$; the agent then receives a reward $r_t$, and the state transitions to $s'_t = s_{t+1}$, following an unknown transition/reward distribution $(r_t, s_{t+1}) \sim \mathcal{P}(\cdot \mid s_t, a_t)$. Assume the initial state $s_0$ is drawn from a known initial distribution $D_0$. Let $\gamma \in (0, 1)$ be a discount factor.
In this setting, the expected reward of $\pi$ is defined as
$$J_\pi := \mathbb{E}_\pi\Big[\sum_{t=0}^{T} \gamma^t r_t \;\Big|\; s_0 \sim D_0\Big],$$
the expected total discounted reward when we execute $\pi$ starting from $D_0$ for $T$ steps. In this work, we consider the infinite-horizon case with $T \to +\infty$.

Our goal is to provide an interval estimate of $J_\pi$ in a general and challenging setting with significantly relaxed constraints on the data. In particular, we assume the data is behavior-agnostic and off-policy, meaning that it can be collected from multiple experiments, each of which can execute a mix of arbitrary, unknown policies, or even follow a non-fixed policy. More concretely, suppose that the model $\mathcal{P}$ is unknown and that we have a set of transition pairs $\hat{D}_n = (s_i, a_i, r_i, s'_i)_{i=1}^{n}$ collected from previous experiments in sequential order, such that for each data point $i$, $(r_i, s'_i)$ is drawn from the model $\mathcal{P}(\cdot \mid s_i, a_i)$, while $(s_i, a_i)$ is generated by an arbitrary black box given the previous data points. We formalize both the data assumption and the goal below.

Assumption 2.1 (Data Assumption). Assume the data $\hat{D}_n = (s_i, a_i, r_i, s'_i)_{i=1}^{n}$ is drawn from an arbitrary joint distribution such that for each $i = 1, \ldots, n$, conditional on $\hat{D}_{<i} := (s_j, a_j, r_j, s'_j)_{j<i} \cup (s_i, a_i)$, the subsequent local reward and next state $(r_i, s'_i)$ are drawn from $\mathcal{P}(\cdot \mid s_i, a_i)$.

Goal Given a confidence level $\delta \in (0, 1)$, we want to construct an interval $[\hat{J}^-, \hat{J}^+] \subset \mathbb{R}$ based on the data $\hat{D}_n$ such that $\Pr(J_\pi \in [\hat{J}^-, \hat{J}^+]) \ge 1 - \delta$, where $\Pr(\cdot)$ is w.r.t. the randomness of the data.

The partial ordering on the data points is introduced to accommodate the case where $s_{i+1}$ equals $s'_j$ for some $j \le i$. The data assumption only requires that $(r_i, s'_i)$ be generated from $\mathcal{P}(\cdot \mid s_i, a_i)$ and imposes no constraints on how $(s_i, a_i)$ is generated, which provides great flexibility in the data collection process. In particular, we do not require $(s_i, a_i)_{i=1}^{n}$ to be independent, as is always assumed in recent works (Liu et al., 2018a; Mousavi et al., 2020). A crucial fact is that our data assumption implies a martingale structure on the empirical Bellman residual operator of the Q-function. As we show in Section 4.1, this enables us to derive a key concentration inequality underpinning our non-asymptotic confidence bounds.

Here, we summarize a few notations that simplify the presentation in the rest of the work. First, we append each $(s_i, a_i, r_i, s'_i)$ with an action $a'_i \sim \pi(\cdot \mid s'_i)$ following $s'_i$. This can be done for free as long as $\pi$ is given (see the Remark in Section 3). We write $x_i = (s_i, a_i)$, $x'_i = (s'_i, a'_i)$, and $y_i = (x'_i, r_i) = (s'_i, a'_i, r_i)$. Correspondingly, define $\mathcal{X} = \mathcal{S} \times \mathcal{A}$ to be the state-action space and $\mathcal{Y} = \mathcal{X} \times \mathbb{R}$. Denote $\mathcal{P}_\pi(y \mid x) = \mathcal{P}(s', r \mid x)\,\pi(a' \mid s')$. In this way, the observed data can be written as pairs $\{x_i, y_i\}_{i=1}^{n}$, and Assumption 2.1 is equivalent to saying that $y_i \sim \mathcal{P}_\pi(\cdot \mid x_i)$ given $\hat{D}_{<i}$, similar to a supervised learning setting. We identify the data $\hat{D}_n$ with its empirical measure $\hat{D}_n = \sum_{i=1}^{n} \delta_{x_i, y_i}/n$, where $\delta$ is the Dirac measure.

3 TWO DUAL APPROACHES TO INFINITE-HORIZON OFF-POLICY ESTIMATION

The deficiency of traditional IS methods on long- or infinite-horizon RL problems (a.k.a. the curse of horizon (Liu et al., 2018a)) has motivated a line of work on efficient infinite-horizon value estimation (e.g., Liu et al., 2018a; Feng et al., 2019; Nachum et al., 2019a; Zhang et al., 2020a; Mousavi et al., 2020; Tang et al., 2020a).
The main idea is to transform the value estimation problem into estimating either the Q-function or the visitation distribution (or its related density ratio) of the policy $\pi$. This section introduces and reinterprets these two tightly connected methods, which lays the foundation for our main confidence bounds from a primal and dual perspective.

Given a policy $\pi$, its Q-function is defined as $q_\pi(x) = \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r_t \mid x_0 = x]$, where the expectation is taken when we execute $\pi$ initialized from a fixed state-action pair $(s_0, a_0) = x_0 = x$. Let $D_{\pi,t}$ be the distribution of $(x_t, y_t) = (s_t, a_t, s'_t, a'_t, r_t)$ when executing policy $\pi$ starting from $s_0 \sim D_0$ for $t$ steps. The visitation distribution of $\pi$ is defined as $D_\pi = \sum_{t=0}^{\infty} \gamma^t D_{\pi,t}$. Note that $D_\pi$ integrates to $1/(1-\gamma)$, although we treat it as a probability measure in the notation. The expected reward $J_\pi$ can be expressed using either $q_\pi$ or $D_\pi$ as follows:
$$J_\pi := \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big] = \mathbb{E}_{r \sim D_\pi}[r] = \mathbb{E}_{x \sim D_{\pi,0}}[q_\pi(x)], \qquad (1)$$
where $r \sim D_\pi$ (resp. $x \sim D_{\pi,0}$) denotes sampling from the $r$- (resp. $x$-) marginal distribution of $D_\pi$ (resp. $D_{\pi,0}$). Eq. (1) plays a key role in infinite-horizon value estimation by transforming the estimation of $J_\pi$ into estimating either $q_\pi$ or $D_\pi$.

Value Estimation via the Q-Function Because $D_{\pi,0}(x) = D_0(s)\pi(a \mid s)$ is known, we can estimate $J_\pi$ by $\mathbb{E}_{x \sim D_{\pi,0}}[\hat{q}(x)]$ for any estimate $\hat{q}$ of the true Q-function $q_\pi$; the expectation under $x \sim D_{\pi,0}$ can be estimated to any accuracy with Monte Carlo. To estimate $q_\pi$, we consider the empirical and expected Bellman residual operators:
$$\hat{R}q(x, y) = q(x) - \gamma q(x') - r, \qquad R_\pi q(x) = \mathbb{E}_{y \sim \mathcal{P}_\pi(\cdot \mid x)}\big[\hat{R}q(x, y)\big]. \qquad (2)$$
It is well known that $q_\pi$ is the unique solution of the Bellman equation $R_\pi q = 0$. Since $y_i \sim \mathcal{P}_\pi(\cdot \mid x_i)$ for each data point in $\hat{D}_n$, if $q = q_\pi$, then $\hat{R}q(x_i, y_i)$, $i = 1, \ldots, n$, are all zero-mean random variables. Moreover, for any function $\omega : \mathcal{X} \to \mathbb{R}$, the sum $\sum_i \hat{R}q(x_i, y_i)\,\omega(x_i)$ also has zero mean. This motivates the following functional Bellman loss (Feng et al., 2019; 2020; Xie & Jiang, 2020):
$$L_{\mathcal{W}}(q; \hat{D}_n) := \sup_{\omega \in \mathcal{W}} \Big\{ \frac{1}{n} \sum_{i=1}^{n} \hat{R}q(x_i, y_i)\,\omega(x_i) \Big\}, \qquad (3)$$
where $\mathcal{W}$ is a set of functions $\omega : \mathcal{X} \to \mathbb{R}$. To ensure that the sup is finite, $\mathcal{W}$ is typically taken to be a unit ball of some normed function space $\mathcal{W}_o$, i.e., $\mathcal{W} = \{\omega \in \mathcal{W}_o : \|\omega\|_{\mathcal{W}_o} \le 1\}$. Feng et al. (2019) consider the simple case where $\mathcal{W}$ is the unit ball $\mathcal{K}$ of the reproducing kernel Hilbert space (RKHS) with a positive definite kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, in which case the loss has a simple closed form:
$$L_{\mathcal{K}}(q; \hat{D}_n) = \sqrt{\frac{1}{n^2} \sum_{i,j=1}^{n} \hat{R}q(x_i, y_i)\, k(x_i, x_j)\, \hat{R}q(x_j, y_j)}. \qquad (4)$$
Note that the RHS of Eq. (4) is the square root of the kernel Bellman V-statistic of Feng et al. (2019). Feng et al. (2019) showed that, when the support of the data distribution $\hat{D}_n$ covers the whole space (which may require an infinite data size) and $k$ is an integrally strictly positive definite kernel, $L_{\mathcal{K}}(q; \hat{D}_n) = 0$ iff $q = q_\pi$. Therefore, one can estimate $q_\pi$ by minimizing $L_{\mathcal{K}}(q; \hat{D}_n)$.

Remark The empirical Bellman residual operator $\hat{R}$ can be extended to $\hat{R}q(x, y) = q(x) - r - \gamma \frac{1}{m}\sum_{\ell=1}^{m} q(s', a'_\ell)$, where $\{a'_\ell\}_{\ell=1}^{m}$ are i.i.d. draws from $\pi(\cdot \mid s')$. As $m$ increases, this gives a lower-variance estimate of $R_\pi q$. If $m = +\infty$, we have $\hat{R}q(x, y) = q(x) - r - \gamma \mathbb{E}_{a' \sim \pi(\cdot \mid s')}[q(s', a')]$, which coincides with the operator used in expected SARSA (Sutton & Barto, 1998). In fact, without any modification, all results in this work apply to $\hat{R}q$ for any $m$. (A numerical sketch of Eq. (4) follows.)
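As a concrete illustration, Eq. (4) can be evaluated with a few lines of NumPy; the Gaussian RBF kernel, the toy linear q, and all dimensions below are illustrative assumptions, not the exact setup of Feng et al. (2019).

```python
# Minimal NumPy sketch of the kernel Bellman loss in Eq. (4):
# L_K(q)^2 = (1/n^2) sum_ij Rhat_i k(x_i, x_j) Rhat_j.
import numpy as np

def rbf_kernel(X, bandwidth):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_bellman_loss(q, X, X_next, r, gamma, bandwidth):
    """X, X_next: (n, d) state-action features; r: (n,) rewards."""
    residual = q(X) - gamma * q(X_next) - r   # empirical Bellman residual
    K = rbf_kernel(X, bandwidth)              # positive semi-definite matrix
    return np.sqrt(residual @ K @ residual) / len(r)

# Usage with a toy linear q on random transitions.
rng = np.random.default_rng(0)
X, Xn, r = rng.normal(size=(100, 4)), rng.normal(size=(100, 4)), rng.normal(size=100)
q = lambda Z: Z @ np.ones(4)
print(kernel_bellman_loss(q, X, Xn, r, gamma=0.95, bandwidth=1.0))
```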
Value Estimation via the Visitation Distribution Another way to estimate $J_\pi$ in Eq. (1) is to approximate $D_\pi$ with a weighted empirical measure of the data (Liu et al., 2018a; Nachum et al., 2019a; Mousavi et al., 2020; Zhang et al., 2020a). The key idea is to assign an importance weight $\omega(x_i)$ to each data point $x_i$ in $\hat{D}_n$. We can choose the function $\omega : \mathcal{X} \to \mathbb{R}$ such that $D_\pi$, and hence $J_\pi$, is approximated by the $\omega$-weighted empirical measure of $\hat{D}_n$:
$$J_\pi \approx \hat{J}_\omega := \mathbb{E}_{\hat{D}^\omega_n}[r] = \frac{1}{n}\sum_{i=1}^{n} \omega(x_i)\, r_i, \qquad D_\pi \approx \hat{D}^\omega_n := \frac{1}{n}\sum_{i=1}^{n} \omega(x_i)\,\delta_{x_i, y_i}. \qquad (5)$$
Intuitively, $\omega$ can be viewed as the density ratio between $D_\pi$ and $\hat{D}_n$, although the empirical measure $\hat{D}_n$ may not have a well-defined density. Liu et al. (2018a); Mousavi et al. (2020) proposed to estimate $\omega$ by minimizing a discrepancy measure between $\hat{D}^\omega_n$ and $D_\pi$. To see this, note that $D = D_\pi$ if and only if $\Delta(D, q) = 0$ for every function $q$, where
$$\Delta(D, q) = \mathbb{E}_D[\gamma q(x') - q(x)] - \mathbb{E}_{D_\pi}[\gamma q(x') - q(x)] = \mathbb{E}_D[\gamma q(x') - q(x)] + \mathbb{E}_{D_{\pi,0}}[q(x)], \qquad (6)$$
using the fact that $\mathbb{E}_{D_\pi}[\gamma q(x') - q(x)] = -\mathbb{E}_{D_{\pi,0}}[q(x)]$ (Theorem 1, Liu et al., 2018a). Note that the RHS of Eq. (6) can be computed for any $D$ and $q$ without knowing $D_\pi$. Let $\mathcal{Q}$ be a set of functions $q : \mathcal{X} \to \mathbb{R}$. One can then define the following loss for $\omega$:
$$I_{\mathcal{Q}}(\omega; \hat{D}_n) = \sup_{q \in \mathcal{Q}} \big\{ \Delta(\hat{D}^\omega_n, q) \big\}. \qquad (7)$$
Similar to $L_{\mathcal{W}}(q; \hat{D}_n)$, when $\mathcal{Q}$ is a ball in an RKHS, $I_{\mathcal{Q}}(\omega; \hat{D}_n)$ also has a bilinear closed form analogous to Eq. (4); see Mousavi et al. (2020) and Appendix F. As we show in Section 4, $I_{\mathcal{Q}}(\omega; \hat{D}_n)$ and $L_{\mathcal{W}}(q; \hat{D}_n)$ correspond to the primal and dual views of our confidence bounds, respectively.

4 MAIN APPROACH

Let $\mathcal{Q}$ be a large enough function set that includes the true Q-function, i.e., $q_\pi \in \mathcal{Q}$. Following Feng et al. (2020), a confidence interval $[\hat{J}^-_{\mathcal{Q},\mathcal{W}}, \hat{J}^+_{\mathcal{Q},\mathcal{W}}]$ of $J_\pi$ can be constructed as
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \sup_{q \in \mathcal{Q}} \big\{ \mathbb{E}_{D_{\pi,0}}[q] \;\; \text{s.t.} \;\; L_{\mathcal{W}}(q; \hat{D}_n) \le \varepsilon_n \big\}, \qquad (8)$$
and $\hat{J}^-_{\mathcal{Q},\mathcal{W}}$ is defined analogously by replacing the sup over $q \in \mathcal{Q}$ with an inf. The idea is to seek the extreme $q$ functions with the largest (resp. smallest) expected values in the set $\mathcal{F} := \mathcal{Q} \cap \{q : L_{\mathcal{W}}(q; \hat{D}_n) \le \varepsilon_n\}$. Therefore, Eq. (8) is a $1-\delta$ confidence interval if $q_\pi$ is included in $\mathcal{F}$ with probability at least $1-\delta$, which is ensured when $q_\pi \in \mathcal{Q}$ and
$$\Pr\big(L_{\mathcal{W}}(q_\pi; \hat{D}_n) \le \varepsilon_n\big) \ge 1 - \delta. \qquad (9)$$
Feng et al. (2020) showed that in the RKHS case with $\mathcal{W} = \mathcal{K}$, Eq. (9) can be achieved with
$$\varepsilon_n = \sqrt{2 c_{q_\pi, k} \left( \frac{n-1}{n}\sqrt{\frac{\log(1/\delta)}{n}} + \frac{1}{n} \right)}, \qquad (10)$$
when $n$ is an even number, where $c_{q_\pi,k} = \sup_{x,y} \hat{R}q_\pi(x, y)^2\, k(x, x)$. This was proved using Hoeffding's inequality for U-statistics (Hoeffding, 1963). To solve Eq. (8) efficiently, Feng et al. (2020) took $\mathcal{Q}$ to be a ball in an RKHS with a random feature approximation. Unfortunately, the method described by Eq. (8)-(10) has two major disadvantages:

1) The Bound Needs to Be Tightened (Section 4.1) The rate $\varepsilon_n = O(n^{-1/4})$ in Eq. (10) is sub-optimal. In Section 4.1, we improve it to an $\varepsilon_n = O(n^{-1/2})$ bound under the mild Assumption 2.1, which removes the independence requirement between transition pairs. Our tightened bound is achieved by first noting a martingale structure on the empirical Bellman operator under Assumption 2.1, and then applying an inequality of Pinelis (1992).

2) Dependence on Global Optimization (Section 4.2) The bound in Eq. (8) is guaranteed to be a $1-\delta$ confidence bound only when the maximization in Eq. (8) is solved to global optimality. With a large $n$, this incurs a high computational cost, even when $\mathcal{Q}$ is an RKHS ball. Feng et al. (2020) solved Eq. (8) approximately using a random feature technique, but this approach suffers from a gap between theory and practice.
In Section 4.2, we address this problem by presenting a dual form of Eq. (8), which sidesteps solving the challenging global optimization in Eq. (8). Moreover, the dual form enables us to better analyze the tightness of the confidence interval and issues regarding the choices of Q andW . 4.1 A TIGHTER CONCENTRATION INEQUALITY In this section, we explain our method to improve the bound in Eq. (10) by giving a tighter concentration inequality for the kernel Bellman loss in Eq. (4). We introduce the following semi-expected kernel Bellman loss: L∗K(q; D̂n) = √√√√ 1 n2 n∑ ij=1 Rπq(xi)k(xi, xj)Rπq(xj) , (11) in which we replace the empirical Bellman residual operator R̂q in Eq. (3) with its expected counterpart Rπq, but still take the empirical average over {xi}ni=1 in D̂n. For a more general function set W , we can similarly define L∗W(q; D̂n) by replacing R̂q with Rπq in Eq. (3). Obviously, we have L∗W(q; D̂n) = 0 when q = qπ . Theorem 4.1 below shows that LK(q; D̂n) concentrates around L∗K(q; D̂n) with an O(n −1/2) error under Assumption 2.1. At a first glance, it may seem surprising that the concentration bound is able to hold even without any independence assumption between {xi}. An easy way to make sense of this is by recognizing that the randomness in yi conditional on xi is aggregated through averaging, even when {xi} are deterministic. As Assumption 2.1 does not impose any (weak) independence between {xi}, we cannot establish that LK(q; D̂n) concentrates around its mean ED̂n [LK(q; D̂n)] (which is a full expected kernel bellman loss), without introducing further assumptions. Theorem 4.1. Assume K is the unit ball of RKHS with a positive definite kernel k(·, ·). Let cq,k := supx∈X ,y∈Y(R̂q(x, y)−Rπq(x))2k(x, x) <∞. Under Assumption 2.1, for any δ ∈ (0, 1), with at least probability 1− δ, we have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣ ≤ √ 2cq,k log(2/δ) n . (12) In particular, when q = qπ , we have cqπ,k = supx,y(R̂qπ(x, y)) 2k(x, x), and LK(qπ; D̂n) ≤ √ 2cqπ,k log(2/δ) n . (13) Intuitively, to see why we can expect an O(n−1/2) bound, note that LK(q, D̂n) consists of the square root of the product of two R̂q terms, each of which contributes an O(n−1/2) error w.r.t. Rπq. Technically, the proof is based on a key observation: Assumption 2.1 ensures that Zi := R̂q(xi, yi)− Rπq(xi), i = 1, . . . , n forms a martingale difference sequence w.r.t. {D̂<i : ∀i = 1, . . . , n}, in the sense that E[Zi | D̂<i] = 0, ∀i. See Appendix B for details. The proof also leverages a special property of RKHS and applies a Hoeffding-like inequality on the Hilbert spaces as in Pinelis (1992) (see Appendix B). For other more general function setsW , we establish in Appendix E a similar bound by using Rademacher complexity, although it yields a less tight bound than Eq. (12) when W = K. 4.2 DUAL CONFIDENCE BOUNDS We derive a dual form of Eq. (8) that sidesteps the need for solving the challenging global optimization in Eq. (8). To do so, let us plug the definition of LW(q; D̂n) into Eq. (3) and introduce a Lagrange multiplier: Ĵ+Q,W = sup q∈Q inf h∈W inf λ≥0 EDπ,0 [q]− λ ( 1 n n∑ i=1 h(xi)R̂q(xi, yi)− εn ) (14) = sup q∈Q inf ω∈Wo { EDπ,0 [q]− 1 n n∑ i=1 ω(xi)R̂q(xi) + εn ‖ω‖Wo } , (15) where we use ω(x) = λh(x). Exchanging the order of min/max and some further derivation yields the following main result. Theorem 4.2. I) LetW be the unit ball of a normed function spaceWo. 
We have Ĵ+Q,W ≤ F̂+Q (ω) := ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo , ∀ω ∈ Wo , Ĵ−Q,W ≥ F̂−Q (ω) := ED̂ωn [r]− I−Q(ω; D̂n)− εn ‖ω‖Wo , ∀ω ∈ Wo , (16) where −Q = {−q : q ∈ Q} and hence I−Q(ω; D̂n) = IQ(ω; D̂n) if Q = −Q. Further, we have Ĵ+Q,W = infω∈Wo F̂ + Q (ω) and Ĵ − Q,W = supω∈Wo F̂ − Q (ω) if Q is convex and there exists a q ∈ Q that satisfies the strict feasibility condition that LW(q; D̂n) < εn. II) For D̂n and δ ∈ (0, 1), assume Wo and εn ∈ R satisfy Eq. (9) (e.g., via Theorem 4.1). Then for any function set Q with qπ ∈ Q, and any function ω+, ω− ∈ Wo (the choice of Q, ω+, ω− can depend on D̂n arbitrarily), we have Pr ( Jπ ∈ [ F̂−Q (ω−), F̂ + Q (ω+) ]) ≥ 1− δ . (17) Theorem 4.2 transforms the original bound in Eq. (8), framed in terms of q and LW(q; D̂n), into a form that involves the density-ratio ω and the related loss IQ(ω; D̂n). The bounds in Eq. (16) can be interpreted as assigning an error bar around the ω-based estimator Ĵω = ED̂ωn [r] in Eq. (5), with the error bar of I±Q(ω; D̂n) + εn ‖ω‖Wo . Specifically, the first term I±Q(ω; D̂n) measures the discrepancy between D̂ωn and Dπ as discussed in Eq. (7), whereas the second term captures the randomness in the empirical Bellman residual operator R̂qπ . Compared with Eq. (8), the global maximization on q ∈ Q is now transformed inside the IQ(ω; D̂n) term, which yields a simple closed form solution in the RKHS case (see Appendix F). In practice, we can optimize ω+ and ω− to obtain the tightest possible bound (and hence recover the primal bound) by minimizing/maximizing F̂+Q (ω) and F̂ − Q (ω), but it is not necessary to solve the optimization to global optimality. WhenWo is an RKHS, by the standard finite representer theorem (Scholkopf & Smola, 2018), the optimization on ω reduces to a finite dimensional optimization, which can be sped up with any favourable approximation techniques. We elaborate on this in Appendix D. Length of the Confidence Interval The form in Eq. (16) also makes it much easier to analyze the tightness of the confidence interval. Suppose ω = ω+ = ω− and Q = −Q, the length of the optimal confidence interval is length([Ĵ−Q,W , Ĵ + Q,W ]) = inf ω∈Wo { 2IQ(ω; D̂n) + 2εn ‖ω‖Wo } . Given εn is O(n−1/2), we can make the overall length of the optimal confidence interval also O(n−1/2), as long asWo is rich enough to include a good density ratio estimator ω∗ that satisfies IQ(ω ∗; D̂n) = O(n −1/2) and has a bounded norm ‖ω∗‖Wo . We can expect to achieve IQ(ω∗; D̂n) = O(n−1/2), when (1) Q has an O(n−1/2) sequential Rademacher complexity (Rakhlin et al., 2015) (e.g., a finite ball in RKHS); and (2) D̂n is collected following a Markov chain with strong mixing condition and weakly converges to some limit distribution D∞ whose support is X , and therefore we can define ω∗ as the density ratio between Dπ and D∞. Refer to Appendix C for more discussions. Indeed, our experiments show that the lengths of practically constructed confidence intervals do tend to decay with an O(n−1/2) rate. Choice ofW and Q To ensure the concentration inequality in Theorem 4.1 is valid, the choice of Wo cannot depend on the data D̂n. Therefore, we should use a separate holdout data to construct a data-dependentWo. In contrast, the choice of Q can depend on the data D̂n arbitrarily, since it is a part of the optimization bound Eq. (8) but not in the tail bound Eq. (9). In this light, one can construct the best possible Q by exploiting the data information in the most favourable way. 
For example, we can construct an estimator of q̂ ≈ qπ based on any state-of-the-art method (e.g., Q-learning or model-based methods), and set Q to be a ball centering around q̂ such that qπ − q̂ ∈ Q. This enables post-hoc analysis based on prior information on qπ , as suggested in Feng et al. (2020). Mis-specification of Q and Oracle Upper/Lower Estimates Our result relies on the assumption that qπ ∈ Q. However, as with other statistical estimation problems, there exists no provably way to empirically verify the correctness of model assumptions such as qπ ∈ Q. Because empirical data only reveals the information of the unknown function (in our case qπ) on a finite number data points, but no conclusion can be made on the unseeing data points without imposing certain smoothness assumption. Typically, what we can do is the opposite: reject qπ ∈ Q when the Bellman loss LW(q; D̂n) of all q in Q is larger than the threshold εn. We highlight that, even without verifying qπ ∈ Q, our method can still be viewed as a confidence interval of a best possible (oracle) upper and lower estimation given the data D̂n plus the assumption that qπ ∈ Q, defined as Ĵ+Q,∗ = sup q∈Q { EDπ,0 [q] s.t. R̂q(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n } . (18) In fact, it is impossible to derive empirical upper bounds lower than Ĵ+Q,∗, as there is no way to distinguish q and qπ if R̂q(xi, yi) = R̂qπ(xi, yi) for all i. But our interval [ĴQ,K, Ĵ+Q,K] provides a 1− δ confidence outer bound of [Ĵ−Q,∗, Ĵ+Q,∗] once Eq. (9) holds, regardless if qπ ∈ Q holds or not. Hence, it is of independent interest to further explore the dual form of Eq. (18), which is another starting point for deriving our bound. We have more discussion in Appendix G. Lastly, we argue that it is important to include the Q in the bound. Proposition G.1 in Appendix shows that removing the q ∈ Q constraint in Eq. (18) would lead to an infinite upper bound, unless the {si, s′i}ni=1 from D̂n almost surely covers the whole state space S in the sense that Prs∼D0(s ∈ {si, s′i}ni=1) = 1. 5 EXPERIMENTS We compare our method with a variety of existing algorithms for obtaining asymptotic and nonasymptotic bounds on a number of benchmarks. We find our method can provide confidence interval that correctly covers the true expected reward with probability larger than the specified success probability 1 − δ (and is hence safe) across the multiple examples we tested. In comparison, the non-asymptotic bounds based on IS provide much wider confidence intervals. On the other hand, the asymptotic methods, such as bootstrap, despite giving tighter intervals, often fail to capture the true values with the given probability in practice. Environments and Dataset Construction We test our method on three environments: InvertedPendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator.1 We follow a similar procedure as Feng et al. (2020) to construct the behavior and target policies. more details on environments and data collection procedure are included in Appendix H.1. Algorithm Settings We test the dual bound described in our paper. Throughout the experiment, we always setW = K, the unit ball of the RKHS with positive definite kernel k, and set Q = rQK̃, the ball of radius rQ in the RKHS with another kernel k̃. We take both kernels to be Gaussian RBF kernel and choose rQ and the bandwidths of k and k̃ using the procedure in Appendix H.2. 
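For illustration, the sketch below builds the Gaussian RBF kernel matrices underlying $\mathcal{W}$ and $\mathcal{Q}$; the median-distance bandwidth heuristic used here is a common default and an assumption on our part, serving only as a stand-in for the selection procedure of Appendix H.2.

```python
# Illustrative construction of a Gaussian RBF kernel matrix, as used for
# W = K and Q in the experiments. The median heuristic is a stand-in for
# the bandwidth-selection procedure described in Appendix H.2.
import numpy as np

def median_bandwidth(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.median(d2[d2 > 0]))

def gaussian_kernel(X, Y, bandwidth):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # stand-in state-action features
K = gaussian_kernel(X, X, median_bandwidth(X))  # kernel matrix for the RKHS ball
```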
We use a fast approximation method to optimize ω in F+Q (ω) and F − Q (ω) as shown in Appendix D. Once ω is found, we evaluate the bound in Eq. (16) exactly to ensure that the theoretical guarantee holds. Baseline Algorithms We compare our method with four existing baselines, including the IS-based non-asymptotic bound using empirical Bernstein inequality by Thomas et al. (2015b), the IS-based bootstrap bound of Thomas (2015), the bootstrap bound based on fitted Q evaluation (FQE) by Kostrikov & Nachum (2020), and the bound in Feng et al. (2020) which is equivalent to the primal bound in (8) but with looser concentration inequality (they use a εn = O(n−1/4) threshold). Results Figure 1 shows our method obtains much tighter bounds than Feng et al. (2020), which is because we use a much tighter concentration inequality, even the dual bound that we use can be slightly looser than the primal bound used in Feng et al. (2020). Our method is also more computationally efficient than that of Feng et al. (2020) because the dual bound can be tightened 1 https://github.com/jxx123/simglucose. approximately while the primal bound requires to solve a global optimization problem. Figure 1 (b) shows that we provide increasingly tight bounds as the data size n increases, and the length of the interval decays with an O(n−1/2) rate approximately. Figure 1 (c) shows that when we increase the significance level δ, our bounds become tighter while still capturing the ground truth. Figure 1 (d) shows the percentage of times that the interval fails to capture the true value in a total of 100 random trials (denoted as δ̂) as we vary δ. We can see that δ̂ remains close to zero even when δ is large, suggesting that our bound is very conservative. Part of the reason is that the bound is constructed by considering the worse case and we used a conservative choice of the radius rQ and coefficient cqπ,k in Eq. (13) (See Appendix H.2). In Figure 2 we compare different algorithms on more examples with δ = 0.1. We can again see that our method provides tight and conservative interval that always captures the true value. Although FQE (Bootstrap) yields tighter intervals than our method, it fail to capture the ground truth much more often than the promised δ = 0.1 (e.g., it fails in all the random trials in Figure 2 (a)). We conduct more ablation studies on different hyper-parameter and data collecting procedure. See Appendix H.2 and H.3 for more details. 6 CONCLUSION We develop a dual approach to construct high confidence bounds for off-policy evaluation with an improved rate over Feng et al. (2020). Our method can handle dependent data, and does not require a global optimization to get a valid bound. Empirical results demonstrate that our bounds is tight and valid compared with a range of existing baseline. Future directions include leveraging our bounds for policy optimization and safe exploration. A PROOF OF THE DUAL BOUND IN THEOREM 4.2 Proof. Introducing a Lagrange multiplier, the bound in (8) is equivalent to Ĵ+Q,W = max q∈Q min λ≥0 { EDπ,0 [q] − λ ( max h∈W 1 n n∑ i=1 h(xi)R̂q(xi, yi)− εn )} = max q∈Q min λ≥0 min h∈W { EDπ,0 [q] − λ ( 1 n n∑ i=1 h(xi)R̂q(xi, yi)− εn )} = max q∈Q min ω∈Wo { EDπ,0 [q] − 1 n n∑ i=1 ω(xi)R̂q(xi, yi) + εn ‖ω‖Wo } , where we use ω = λh(x), such that λ is replaced by ‖ω‖Wo . Define M(q, ω; D̂n) = EDπ,0 [q] − 1 n n∑ i=1 ω(xi)R̂q(xi, yi) + εn ‖ω‖Wo = ED̂ωn [r] + ∆(D̂ ω n , q) + εn ‖ω‖Wo . 
Then we have max q∈Q M(q, ω; D̂n) = ED̂ωn [r] + maxq∈Q ∆(D̂ ω n , q) + εn ‖ω‖Wo = ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo = F̂+Q (ω). Therefore, Ĵ+Q,W = max q∈Q min ω∈Wo M(q, ω; D̂n) ≤ min ω∈Wo max q∈Q M(q, ω; D̂n) = min ω∈Wo F̂+Q (ω). The lower bound follows analogously. The strong duality holds when the Slater’s condition is satisfied (Nesterov, 2013), which amounts to saying that the primal problem in (8) is convex and strictly feasible; this requires that Q is convex and there exists at least one solution q ∈ Q that satisfy that constraint strictly, that is, LW(q; D̂n) < εn; note that the objective function Q is linear on q and the constraint function LW(q; D̂n) is always convex on q (since it is the sup a set of linear functions on q following (3)). B PROOF OF CONCENTRATION BOUND IN THEOREM 4.1 Our proof require the following Hoeffding inequality on Hilbert spaces by Pinelis (Theorem 3, 1992); see also Section 2.4 of Rosasco et al. (2010). Lemma B.1. (Theorem 3, Pinelis, 1992) Let H be a Hilbert space and {fi}ni=1 is a Martingale sequence inH that satisfies supi ‖fi‖H ≤ σ almost surely. We have for any > 0, Pr (∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ H ≥ ) ≤ 2 exp ( −n 2 2σ2 ) . Therefore, with probability at least 1− δ, we have ∥∥ 1 n ∑n i=1 fi ∥∥ H ≤ √ 2σ2 log(2/δ) n . Lemma B.2. Let k(x, x′) be a positive definite kernel whose RKHS isHk. Define fi(·) = R̂q(xi, yi)k(xi, ·)−Rπq(xi)k(xi, ·). Assume Assumption 2.1 holds, then {fi}ni=1 is a Martingale difference sequence inHk w.r.t. T<i := (xj , yj)j<i ∪ (xi). That is, E [fi+1(·) | T<i] = 0. In addition,∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ 2 Hk = 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) , and ‖fi‖2Hk ≤ cq,k for ∀i = 1, . . . , n. Proof of Theorem 4.1. Following Lemma B.1 and Lemma B.2, since {fi}ni=1 is a Martingale difference sequence inHk with ‖fi‖Hk ≤ cq,k almost surely, we have with probability at least 1− δ, 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) = ∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ 2 Hk ≤ 2cq,k log(2/δ) n . Using Lemma B.3 below, we have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣ ≤ ∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ Hk ≤ √ 2cq,k log(2/δ) n . This completes the proof. Lemma B.3. Assume k(x, x′) is a positive definite kernel. We have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣2 ≤ 1n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) . Proof. Define ĝ(·) = 1 n n∑ i=1 R̂q(xi, yi)k(xi, ·) g(·) = 1 n n∑ i=1 Rπq(xi)k(xi, ·). Then we have ‖ĝ‖2Hk = 1 n2 n∑ ij=1 R̂q(xi, yi)k(xi, xj)R̂q(xj , yj) = L̂K(q; D̂n), ‖g‖2Hk = 1 n2 n∑ ij=1 Rπq(xi)k(xi, xj)Rπq(xj) = L ∗ K(q; D̂n), ‖ĝ − g‖2Hk = 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) . The result then follows the triangle inequality ∣∣‖ĝ‖Hk − ‖g‖Hk ∣∣ ≤ ‖ĝ − g‖Hk . B.1 CALCULATION OF cqπ,k The practical calculation of the coefficient cqπ,k in the concentration inequality was discussed in Feng et al. (2020), which we include here for completeness. Lemma B.4. (Feng et al. (2020) Lemma 3.1) Assume the reward function and kernel function is bounded with supx |r(x)| ≤ rmax and supx,x′ |k(x, x′)| ≤ Kmax, we have: cqπ,k := sup x∈X ,y∈Y (R̂qπ(x, y)) 2k(x, x) ≤ 4Kmaxr 2 max (1− γ)2 . In practice, we get access to Kmax from the kernel function that we choose (e.g., Kmax = 1 for RBF kernels), and rmax from the knowledge on the environment. C MORE ON THE TIGHTNESS OF THE CONFIDENCE INTERVAL The benefit of having both upper and lower bounds is that we can empirically access the tightness of the bound by checking the length of the interval [F̂−Q (ω−), F̂ + Q (ω+)]. 
However, from the theoretical perspective, it is desirable to know a priori that the length of the interval will decrease with a fast rate as the data size n increases. We now show that this is the case ifWo is chosen to be sufficiently rich so that it includes a ω ∈ Wo such that D̂ωn ≈ Dπ . Theorem C.1. AssumeWo is sufficiently rich to include a “good” ω∗ inWo with D̂ω ∗ n ≈ Dπ in that sup q∈Q ∣∣∣ED̂ω∗n [R̂q(x;x′, r)]− EDπ [R̂q(x;x′, r)]∣∣∣ ≤ cnα , (19) where c and α are two positive coefficients. Then we have max { Ĵ+Q,W − Jπ, Jπ − Ĵ−Q,W } ≤ c nα + εn ‖ω∗‖Wo . Assumption (19) holds if D̂n is collected following a Markov chain with certain strong mixing condition and weakly converges to some limit discussion D̂∞ whose support is X , for which we can define ω∗(x) = Dπ(x)/D∞(x). In this case, if Q is a finite ball in RKHS, then we can achieve (19) with α = 1/2, and yields the overall bound of rate O(n−1/2). For more general function classes, α depends on the martingale Rademacher complexity of function set R̂Q = {Rq(x, y) : q ∈ Q} Rakhlin et al. (2015). In our empirical reults, we observe that the gap of the practically constructed bounds tend to follow the O(n−1/2) rate. Proof. Note that Jπ = EDπ [r] = EDπ [r], and IQ(ω; D̂n) = sup q∈Q { ED̂ωn [γq(x ′)− q(x)]− EDπ [γq(x′)− q(x)] } . Because ω∗ ∈ W , we have Ĵ+W,Q − Jπ ≤ F̂+Q (ω∗)− Jπ = ED̂ωn [r]− EDπ [r] + IQ(ωπ; D̂n) + εn ‖ω ∗‖Wo = sup q∈Q { ED̂ωn [ R̂q(x, y) ] − EDπ [ R̂q(x, y) ]} + εn ‖ω∗‖Wo ≤ c nα + εn ‖ω∗‖Wo . The case of lower bound follows similarly. D OPTIMIZATION ONWo Consider the optimization of ω inWo F̂+Q (ω) := 1 n n∑ i=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Wo √ 2cqπ,k log(2/δ) n (20) AssumeWo is the RKHS of kernel k(x, x̄), that is,Wo = Hk. By the finite representer theorem of RKHS (Smola et al., 2007). the optimization of ω in RKHSHk can be reduced to a finite dimensional optimization problem. Specifically, the optimal solution of (20) can be written into a form of ω(x) = ∑n i=1 k(x, xi)αi with ‖ω‖ 2 Hk = ∑n i,j=1 k(xi, xj)αiαj for some vector α := [αi] n i=1 ∈ Rn. WriteK = [k(xi, xj)]ni,j=1 and r = [ri] n i=1. The optimization of ω reduces to a finite dimensional optimization on α: min α∈Rn 1 n r>Kα+ IQ(Kα; D̂n) + √ αKα √ 2cqπ,k log(2/δ) n , where IQ(Kα; D̂n) = max q∈Q { EDπ,0 [q] + 1 n (T̂q)>Kα } , and T̂q = [γq(x′i) − q(xi)]ni=1. When Q is RKHS, we can calculate IQ(Kα; D̂n) using (22) in section F. This computation can be still expensive when n is large. Fortunately, our confidence bound holds for any ω; better ω only gives tighter bounds, but it is not necessary to find the global optimal ω. Therefore, one can use any approximation algorithm to find ω, which provides a trade-off of tightness and computational cost. We discuss two methods: 1) Approximating α The length of α can be too large when n is large. To address this, we assume αi = g(xi, θ), where g is any parametric function (such as a neural network) with parameter θ which can be much lower dimensional than α. We can then optimize θ with stochastic gradient descent, by approximating all the data averaging 1n ∑n i=1(·) with averages over small mini-batches; this would introduce biases in gradient estimation, but it is not an issue when the goal is only to get a reasonable approximation. 2) Replacing kernel k Assume the kernel k yields a random feature expansion: k(x, x̄) = Eβ∼π[φ(x, β)φ(x̄, β)], where φ(x, β) is a feature map with parameter β and π is a distribution of β. We draw {βi}mi=1 i.i.d. from π, where m is taken to be much smaller than n. 
We replace k with k̂(x, x̄) = 1m ∑m i=1 φ(x, βi)φ(x̄, βi) andHk withHk̂, That is, we consider to solve Ĵ+Q,W = min ω∈Hk̂ F̂+Q (ω) := 1n n∑ i=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Hk̂ √ 2cqπ,k̂ log(2/δ) n . It is known that any function ω in Hk̂ can be represented as ω(x) = 1m ∑m i=1 wiφ(x, βi), for some w = [wi]mi=1 ∈ Rm and satisfies ‖ω‖2Hk̂ = 1 m ∑m i=1 w 2 i . In this way, the problem reduces to optimizing an m-dimensional vector w, which can be solved by standard convex optimization techniques. E CONCENTRATION INEQUALITIES OF GENERAL FUNCTIONAL BELLMAN LOSSES When K is a general function set, one can still obtain a general concentration bound using Radermacher complexity. Define R̂q ◦ W := {h(x, y) = R̂q(x, y)ω(x) : ω ∈ W}. Using the standard derivation in Radermacher complexity theory in conjunction with Martingale theory (Rakhlin et al., 2015), we have sup ω∈W { 1 n n∑ i=1 (R̂q(xi, yi)−Rπq(xi))ω(xi) } ≤ 2Rad(R̂q ◦W) + √ 2cq log(2/δ) n , where Rad(R̂q ◦ K) is the sequential Radermacher complexity as defined in (Rakhlin et al., 2015). A triangle inequality yields | Lk(q; D̂n)− Lk(q; D̂n) | ≤ sup ω∈W { 1 n n∑ i=1 (R̂q(xi, yi)−Rπq(xi))ω(xi) } Therefore, | LW(q; D̂n)− LW(q; D̂n) | ≤ 2Rad(R̂q ◦W) + √ 2cq log(2/δ) n , (21) where cq,W = supω∈W supx,y(R̂q(x, y)−Rπq(x))2ω(x)2. WhenW equals the unit ball K of the RKHS related to kernel k, we have cq,k = cq,W , and hence this bound is strictly worse than the bound in Theorem 4.1. F CLOSED FORM OF IQ(ω; D̂n) WHEN Q IS RKHS Similar to LK(q; D̂n), whenQ is taken to be the unit ball K̃ of the RKHS of a positive definite kernel k̃(x, x̄), (7) can be expressed into a bilinear closed form shown in Mousavi et al. (2020): IQ(ω; D̂n) 2 = A− 2B + C, (22) where A = E(x,x̄)∼Dπ,0×Dπ,0 [k(x, x̄)] B = E(x,x̄)∼D̂ωn×Dπ,0 [ T̂xπk(x, x̄) ] C = E(x,x̄)∼D̂ωn×D̂ωn [ T̂xπT̂ x̄ πk(x, x̄) ] , were T̂πf(x) = γf(x′) − f(x); in T̂xπT̂x̄πk(x, x̄), we apply T̂x̄π and T̂xπ in a sequential order by treating k as a function of x̄ and then of x. G MORE ON THE ORACLE BOUND AND ITS DUAL FORM The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its due form here. Using Lagrangian multiplier, the optimization in (18) can be rewritten into Ĵ+Q,∗ = max q∈Q min ω M(q, ω; D̂n), (23) where M∗(q, ω; D̂n) = EDπ,0 [q]− 1 n n∑ i=1 ω(xi) ( R̂q(xi, yi)− R̂qπ(xi, yi) ) , where ω now serves as the Lagrangian multiplier. By the weak duality, we have J∗Q,+ ≤ F̂+Q,∗(ω) := ED̂ωn [r] + IQ(ω; D̂n)︸ ︷︷ ︸ known +R(ω, qπ)︸ ︷︷ ︸ unknown , ∀ω. and R(ω, qπ) = 1 n n∑ i=1 ω(xi)R̂qπ(xi). The derivation follows similarly for the lower bound. So for any ω ∈ Wo, we have [Ĵ−Q,∗, Ĵ+Q,∗] ⊆ [F̂−Q,∗(ω), F̂ + Q,∗(ω)]. Here the first two terms of F̂+Q,∗(ω) can be empirically estimated (it is the same as the first two terms of (16)), but the third term R(ω, qπ) depends on the unknown qπ and hence need to be further upper bounded. Our method can be viewed as constraining ω inW , which is assumed to be the unit ball ofWo, and applying a worst case bound: F̂+Q,∗(ω) := ED̂ωn [r] + IQ(ω; D̂n) +R(ω, qπ), ∀ω ∈ Wo ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo suph∈W R(h, qπ), ∀ω ∈ Wo ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo LW(qπ, D̂n), ∀ω ∈ Wo w.p.1−δ ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖w‖Wo , ∀ω ∈ Wo = F̂+Q (ω). where the last step applies the high probability bound that Pr(LW(qπ, D̂n) ≤ ε) ≥ 1− δ. Similar derivation on the lower bound counterpart gives Pr ([ F̂−Q,∗(ω), F̂ + Q,∗(ω) ] ⊆ [ F̂−Q (ω), F̂ + Q (ω) ]) ≥ 1− δ. 
Therefore, our confidence bound [F̂−Q (ω), F̂+Q (ω)] is a 1 − δ confidence outer bound of the oracle bound: [Ĵ−Q,∗, Ĵ+Q,∗] ⊆ [F̂−Q,∗(ω), F̂+Q,∗(ω)] ⊆ [F̂−Q (ω), F̂+Q (ω)].

Introducing Q is necessary Our method does not require any independence assumption between the transition pairs; the trade-off is that we have to assume that qπ falls into a function set Q that imposes a certain smoothness assumption. This is necessary because the data only provide information regarding qπ on a finite number of points, and qπ can be arbitrarily non-smooth outside of the data points; hence no reasonable upper/lower bound can be obtained without a smoothness condition that extends the information on the data points to other points in the domain. Proposition G.1. Unless Prs∼Dπ,0(s /∈ {si, s′i}ni=1) = 0, for any u ∈ R there exists a function q : S × A → R such that EDπ,0 [q] = u and R̂q(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n. Proof. Let Qnull be the set of functions that are zero on {si, s′i}ni=1, that is, Qnull = {g : S × A → R : g(s, a) = 0, ∀s ∈ {si, s′i}ni=1, a ∈ A}. Then, for any g ∈ Qnull, we have R̂(qπ + g)(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n, and EDπ,0 [qπ + g] = EDπ,0 [qπ] + EDπ,0 [g] = Jπ + EDπ,0 [g]. Take g(s, a) = zI(s /∈ {si, s′i}ni=1), where z is any real number. Then EDπ,0 [qπ + g] = Jπ + z Prs∼Dπ,0(s /∈ {si, s′i}ni=1). Because Prs∼Dπ,0(s /∈ {si, s′i}ni=1) ≠ 0, we can choose z to make EDπ,0 [qπ + g] take any value.

H ABLATION STUDY AND EXPERIMENTAL DETAILS H.1 EXPERIMENTAL DETAILS Environments and Dataset Construction We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator. For Inverted-Pendulum we discretize the action space to be {−1, −0.3, −0.2, 0, 0.2, 0.3, 1}. The action spaces of CartPole and the medical treatment simulator are both discrete. Policy Construction We follow a similar setup as Feng et al. (2020) to construct behavior and target policies. For all of the environments, we constrain our policy class to be softmax policies and use PPO (Schulman et al., 2017) to train a good policy π; we then use different temperatures of the softmax policy to construct the target and behavior policies (we set the temperature to τ = 0.1 for the target policy and τ = 1 for the behavior policy, so that the target policy is more deterministic than the behavior policy). We consider other choices of behavior policies in Section H.3. For horizon lengths, we fix γ = 0.95 and set the horizon length H = 50 for Inverted-Pendulum, H = 100 for CartPole, and H = 50 for the Diabetes simulator. Algorithm Settings We test the bound in Eq. (16)-(17). Throughout the experiments, we always set W = K, the unit ball of the RKHS with kernel k(·, ·), and set Q = rQK̃, the zero-centered ball of radius rQ in an RKHS with kernel k̃(·, ·). We take both k and k̃ to be Gaussian RBF kernels. The bandwidths of k and k̃ are selected to make sure the functional Bellman loss is not large on a validation set. The radius is selected to be sufficiently large to ensure that q∗ is included in Q: we use the data to fit an approximation q̂ whose functional Bellman loss is smaller than εn, and then set rQ = 10‖q̂‖K̃. We optimize ω using the random feature approximation method described in Appendix D (a code sketch is given below). Once ω+ and ω− are found, we evaluate the bound in Eq. (16) exactly, to ensure the theoretical guarantee holds.
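To make this step concrete, the following is a minimal sketch (our own Python/NumPy, not the paper's released code) of parameterizing ω with random Fourier features and evaluating the upper bound F̂+Q (ω) of Eq. (16). The feature dimension m, the bandwidth, and the callable iq_term (standing in for the closed form (22) of IQ(ω; D̂n)) are illustrative assumptions.

```python
import numpy as np

def make_rff(dim, m, bandwidth, seed=0):
    # Random Fourier features phi(x, beta_i) with beta_i = (W_i, b_i), so that
    # k_hat(x, x') = (1/m) * sum_i phi(x, beta_i) * phi(x', beta_i)
    # approximates a Gaussian RBF kernel with the given bandwidth.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / bandwidth, size=(m, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return lambda X: np.sqrt(2.0) * np.cos(X @ W.T + b)   # shape (n, m)

def upper_bound(w, X, r, phi, iq_term, eps_n):
    # omega(x) = (1/m) * sum_i w_i * phi(x, beta_i), with RKHS norm
    # ||omega||^2 = (1/m) * sum_i w_i^2, as in Appendix D.
    feats = phi(X)                  # (n, m) features of the state-action pairs
    m = feats.shape[1]
    omega = feats @ w / m           # omega(x_i) for every transition pair
    norm = np.sqrt(np.mean(w ** 2))
    # F^+_Q(omega) = (1/n) sum_i r_i omega(x_i) + I_Q(omega; D_n) + ||omega|| * eps_n
    return np.mean(r * omega) + iq_term(omega) + norm * eps_n
```

Because the bound in Eq. (16) is valid for any ω, the vector w here can be tuned with a handful of gradient steps on this objective; only the tightness, not the validity, depends on how well it is optimized.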
H.2 SENSITIVITY TO HYPER-PARAMETERS We investigate the sensitivity of our algorithm to the choice of hyper-parameters. The hyper-parameters mainly depend on how we choose the function classes Q and W. Radius of Q Recall that we choose Q to be a ball in an RKHS with radius rQ, that is, Q = rQK̃ = {rQf : f ∈ K̃}, where K̃ is the unit ball of the RKHS with kernel k̃. Ideally, we want to ensure that rQ ≥ ‖q∗‖K̃ so that q∗ ∈ Q. Since it is hard to analyze the behavior of the algorithm when q∗ is unknown, we consider a synthetic environment where the true q∗ is known. This is done by explicitly specifying a q∗ inside K̃ and then inferring the corresponding deterministic reward function r(x) by inverting the Bellman equation: r(x) := q∗(x) − γEx′∼Pπ(·|x)[q∗(x′)] (see the sketch at the end of this subsection). Here r is a deterministic function, instead of a random variable, with an abuse of notation. In this way, we get access to the true RKHS norm of q∗: ρ∗ = ‖q∗‖K̃. For simplicity, we set both the state space S and the action space A to be R and use a Gaussian policy π(a|s) ∝ exp(f(s, a)/τ), where τ is a positive temperature parameter. We set τ = 0.1 for the target policy and τ = 1 for the behavior policy. Figure 3 shows the results as we set rQ to ρ∗, 10ρ∗, and 100ρ∗, respectively. We can see that the tightness of the bound is affected significantly by the radius when the number n of samples is very small. However, as the number n of samples grows (e.g., n ≥ 2 × 103 in our experiment), the length of the bounds becomes less sensitive to the choice of the radius of Q. Similarity Between Behavior Policy and Target Policy We study the effect of changing the temperature of the behavior policy, testing on the Inverted-Pendulum environment as previously described. Not surprisingly, the closer the behavior policy is to the target policy (with temperature τ = 0.1), the tighter our confidence interval becomes, as observed in Figure 4(a). Bandwidth of RBF kernels We study the results as we change the bandwidths of the kernels k and k̃ for W and Q, respectively. Figure 4(b) shows the length of the confidence interval when we use different bandwidth pairs in the Inverted-Pendulum environment. We can see that we get relatively tight confidence bounds as long as we set the bandwidths in a reasonable region (e.g., the bandwidth of k in [0.1, 0.5] and the bandwidth of k̃ in [0.5, 3]).
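As a concrete illustration of the synthetic construction above, here is a minimal sketch (our own, with toy dimensions and a hypothetical transition sampler sample_next as assumptions) of specifying q∗ as a finite kernel expansion, so that its RKHS norm ρ∗ is known in closed form, and inverting the Bellman equation to obtain the reward; for a general simulator the expectation over x′ is approximated here by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95

def k_tilde(z, x, h=1.0):
    # Gaussian RBF kernel; z may be a batch of centers.
    return np.exp(-np.sum((np.asarray(z) - np.asarray(x)) ** 2, axis=-1) / (2 * h ** 2))

# Specify q* as a finite kernel expansion q*(x) = sum_j c_j * k_tilde(z_j, x),
# whose RKHS norm is available in closed form.
Z = rng.normal(size=(5, 2))                 # expansion centers: toy (s, a) pairs in R^2
c = rng.normal(size=5)                      # expansion coefficients

def q_star(x):
    return float(c @ k_tilde(Z, x))

G = k_tilde(Z[:, None, :], Z[None, :, :])   # 5x5 Gram matrix of the centers
rho_star = float(np.sqrt(c @ G @ c))        # true RKHS norm ||q*||

def reward(x, sample_next, n_mc=200):
    # Invert the Bellman equation: r(x) = q*(x) - gamma * E_{x'~P_pi(.|x)}[q*(x')],
    # with the expectation estimated by Monte Carlo over sampled next pairs x'.
    nxt = np.mean([q_star(sample_next(x)) for _ in range(n_mc)])
    return q_star(x) - gamma * nxt
```

With rho_star in hand, one can set rQ to ρ∗, 10ρ∗, or 100ρ∗ exactly as in the experiment of Figure 3.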
Varying Trajectory Length T in D̂n As we collect D̂n, we can either have a small number of long trajectories or a larger number of short trajectories. In Figure 5(b)-(c), we vary the length T of the trajectories as we collect D̂n, while fixing the total number n of transition pairs. In this way, the number of trajectories in each D̂n is m = n/T. We can see that the trajectory length does not impact the results significantly, especially when the length is reasonably large (e.g., T ≥ 20).

I MORE RELATED WORKS We give a more detailed overview of different approaches for uncertainty estimation in OPE. Finite-Horizon Importance Sampling (IS) Assume the data is collected by rolling out a known behavior policy π0 up to a trajectory length T; then we can estimate the finite-horizon reward by changing Eπ,P[·] to Eπ0,P[·] with importance sampling (e.g., Precup et al., 2000; Precup, 2001; Thomas et al., 2015a;b). Taking trajectory-wise importance sampling as an example, assume we collect a set of independent trajectories τi := {sit, ait, rit}T−1t=0, i = 1, . . . , m, up to a trajectory length T by unrolling a known behavior policy π0. When T is large, we can estimate Jπ by weighted averaging: Ĵ IS = (1/m)∑mi=1 ω(τi)J(τi), where ω(τi) = ∏T−1t=0 π(ait|sit)/π0(ait|sit) and J(τi) = ∑T−1t=0 γt rit. (24) (A short sketch of this estimator is given at the end of this section.) One can construct non-asymptotic confidence bounds based on Ĵ IS using variants of concentration inequalities (Thomas, 2015; Thomas et al., 2015b). Unfortunately, a key problem with this IS estimator is that the importance weight ω(τi) is a product of the density ratios over time, and hence tends to cause an explosion in variance when the trajectory length T is large. Although improvements can be made by using per-step and self-normalized weights (Precup, 2001), or control variates (Jiang & Li, 2016; Thomas & Brunskill, 2016), the curse of horizon remains a key issue for classical IS-based estimators (Liu et al., 2018a). Moreover, due to the time dependency between the transition pairs inside each trajectory, the non-asymptotic concentration bounds can only be applied at the trajectory level and hence decay with the number m of independent trajectories at an O(1/√m) rate, and m can be small in practice. We could in principle apply concentration inequalities for Markov chains (e.g., Paulin, 2015) to the time-dependent transition pairs, but such inequalities require an upper bound on certain mixing coefficients of the Markov chain, which is unknown and hard to construct empirically. Our work addresses these limitations by constructing a non-asymptotic bound that decays with the number n = mT of transition pairs, without requiring known behavior policies or independent trajectories. Infinite-Horizon, Behavior-Agnostic OPE Our work is closely related to the recent advances in infinite-horizon and behavior-agnostic OPE, including, for example, Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020); Liu et al. (2020); Yang et al. (2020b); Xie et al. (2019); Yin & Wang (2020), as well as the DICE family (e.g., Nachum et al., 2019a;b; Zhang et al., 2020a; Wen et al., 2020; Zhang et al., 2020b). These methods are based on estimating either the value function or the stationary visitation distribution, which are shown to form a primal-dual relation (Tang et al., 2020a; Uehara et al., 2020; Jiang & Huang, 2020) that we elaborate on in depth in Section 3. Besides Feng et al.
(2020), which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE (e.g., Liu et al., 2018b; Jiang & Huang, 2020; Duan et al., 2020; Dai et al., 2020; Feng et al., 2020; Tang et al., 2020b; Yin et al., 2020; Lazic et al., 2020). For example, Dai et al. (2020) develop an asymptotic confidence bound (CoinDice) for DICE estimators with an i.i.d. assumption on the off-policy data; Duan et al. (2020) provide data-dependent confidence bounds based on fitted Q-iteration (FQI) using linear function approximation when the off-policy data consists of a set of independent trajectories; Jiang & Huang (2020) provide a minimax method closely related to our method but do not provide an analysis of the data error; Tang et al. (2020b) propose a fixed-point algorithm for constructing deterministic intervals of the true value function when the reward and transition models are deterministic and the true value function has a bounded Lipschitz norm. Model-Based Methods Since the model P is the only unknown variable, we can construct an estimator P̂ of P using maximum likelihood estimation or other methods, and plug it into Eq. (1) to obtain a plug-in estimator Ĵ = Jπ,P̂. This yields the model-based approach to OPE (e.g., Jiang & Li, 2016; Liu et al., 2018b). One can also estimate the uncertainty in Jπ,P̂ by propagating the uncertainty in P̂ (e.g., Asadi et al., 2018; Duan et al., 2020), but it is hard to obtain non-asymptotic and computationally efficient bounds unless P̂ is restricted to simple models such as linear models. In general, estimating the whole model P can be an unnecessarily complicated problem as an intermediate step of the possibly simpler problem of estimating Jπ,P. Bootstrapping, Bayes, Distributional RL As a general approach to uncertainty estimation, bootstrapping has been used for interval estimation in RL in various ways (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020; Hao et al., 2021). Bootstrapping is simple and highly flexible, and can be applied to time-dependent data (as appears in RL) using variants of block bootstrapping methods (e.g., Lahiri, 2013; White & White, 2010). However, bootstrapping typically only provides asymptotic guarantees; although non-asymptotic bounds for the bootstrap exist (e.g., Arlot et al., 2010), they are sophisticated, difficult to use in practice, and require knowledge of the mixing condition for the dependent data. Moreover, bootstrapping is time-consuming, since it requires repeating the whole off-policy evaluation pipeline on a large number of resampled datasets. Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016b; Yang et al., 2020a) offer another general approach to uncertainty estimation in RL, but they require approximate inference algorithms and do not come with non-asymptotic frequentist guarantees. In addition, distributional RL (e.g., Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the epistemic uncertainty that we consider in off-policy evaluation.
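For reference, the following is a minimal sketch of the trajectory-wise IS estimator in Eq. (24); the function signatures (pi and pi0 as action-probability callables) are illustrative assumptions, not an interface from any of the cited works.

```python
import numpy as np

def trajectory_is(trajectories, pi, pi0, gamma=0.95):
    # Trajectory-wise importance sampling estimate of Eq. (24).
    # trajectories: list of trajectories, each a list of (s, a, r) tuples.
    # pi(a, s), pi0(a, s): action probabilities under the target policy and
    # the (known) behavior policy.
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi(a, s) / pi0(a, s)   # cumulative density ratio omega(tau)
            ret += gamma ** t * r            # discounted return J(tau)
        estimates.append(weight * ret)
    return float(np.mean(estimates))         # (1/m) sum_i omega(tau_i) J(tau_i)
```

The running product weight *= pi(a, s) / pi0(a, s) is exactly the term whose variance explodes with the horizon T, which motivates the infinite-horizon approaches discussed above.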
1. What is the main contribution of the paper regarding off-policy evaluation?
2. What are the strengths of the paper, particularly in terms of its improvement over prior works?
3. Do you have any concerns or questions about the method's performance when the functions do not lie in an RKHS?
4. How does the reviewer assess the clarity and presentation of the paper's content?
5. Are there any potential improvements to the bounds provided in the paper under milder conditions on transition pair independence?
Review
This work constructs non-asymptotic confidence intervals for off-policy evaluation. This is achieved by assuming that the reward at any given time only depends on the state-action pair, leveraging that assumed structure to define the difference between the empirical and expected Bellman residual operators as a martingale difference sequence. This, in turn, allows the authors to apply a Hoeffding-like concentration inequality which applies to Hilbert spaces. The authors then provide a derivation of the confidence bounds by considering the divergence between policies. The work improves on the rate of prior work from O(n−1/4) to O(n−1/2) and allows for estimation without the need of global optimality via the dual formulation, both of which are very nice additions to the literature. Experimental evaluation backs up the authors' claims, showing very strong performance with respect to prior art. I found this paper to be very well written and presented, with impressively thorough theoretical results and good empirical validation. A couple of minor questions: (1) Performance of the proposed method when the functions don't lie in an RKHS. It appears that the formulation in Appendix E provides a bound which uses Rademacher complexity and doesn't rely on an RKHS. Can the authors provide intuition around how much worse we would expect this to be in practice? (2) Proposition G.1 makes a case for the necessity of assuming a smoothness condition in the absence of independence between transition pairs. Under a milder condition on transition pair independence, e.g. a mixing condition, are similar bounds to those presented in the current work attainable?
ICLR
Title Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds Abstract Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. Therefore, OPE is a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimation, when applying OPE to make high stakes decisions. This work considers the problem of constructing nonasymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality of KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimum assumptions on the data and the function class of the Q-function, and works for the behavior-agnostic settings where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods. 1 INTRODUCTION Off-policy evaluation (OPE) seeks to estimate the expected reward of a target policy in reinforcement learnings (RL) from observational data collected under different policies (e.g., Murphy et al., 2001; Fonteneau et al., 2013; Jiang & Li, 2016; Liu et al., 2018a). OPE plays a central role in applying reinforcement learning (RL) with only observational data and has found important applications in areas such as medicine, self-driving, where interactive “on-policy” data is expensive or even infeasible to collect. A critical challenge in OPE is the uncertainty estimation, as having reliable confidence bounds is essential for making high-stakes decisions. In this work, we aim to tackle this problem by providing non-asymptotic confidence intervals of the expected value of the target policy. Our method allows us to rigorously quantify the uncertainty of the prediction and hence avoid the dangerous case of being overconfident in making costly and/or irreversible decisions. However, off-policy evaluation per se has remained a key technical challenge in the literature (e.g., Precup, 2000; Thomas & Brunskill, 2016; Jiang & Li, 2016; Liu et al., 2018a), let alone gaining rigorous confidence estimation of it. This is especially true when 1) the underlying RL problem is long or infinite horizon, and 2) the data is collected under arbitrary and unknown algorithms (a.k.a. behavior-agnostic). As a consequence, the collected data can exhibit arbitrary dependency structure, which makes constructing rigorous non-asymptotic confidence bounds particularly challenging. Traditionally, the only approach to provide non-asymptotic confidence bounds in OPE is to combine importance sampling (IS) with concentration inequalities (e.g., Thomas et al., 2015a;b), which, however, tends to degenerate for long/infinite horizon problems (Liu et al., 2018a). Furthermore, ∗Equal contribution. neither can this approach be applied to the behavior-agnostic settings, nor can it effectively handle the complicated time dependency structure inside individual trajectories. 
Instead, it requires a large number of independently collected trajectories drawn under known policies. In this work, we provide a practical approach for Behavior-agnostic, Off-policy, Infinite-horizon, Non-asymptotic, Confidence intervals based on arbitrarily Dependent data (BONDIC). Our method is motivated by a recently proposed optimization-based (or variational) approach to estimating OPE confidence bounds (Feng et al., 2020), which leverages a tail bound of the kernel Bellman statistics (Feng et al., 2019). Our approach achieves a new bound that is an order of magnitude tighter and more computationally efficient than that of Feng et al. (2020). Our improvements rest on two pillars: 1) developing a new primal-dual perspective on non-asymptotic OPE confidence bounds, which is connected to a body of recent works on infinite-horizon value estimation (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020a; Mousavi et al., 2020); and 2) offering a new tight concentration inequality on the kernel Bellman statistics that applies to behavior-agnostic off-policy data with arbitrary dependency between transition pairs. Empirically, we demonstrate that our method can provide reliable and tight bounds on a variety of well-established benchmarks.

Related Work Besides the aforementioned approach based on the combination of IS and concentration inequalities (e.g., Thomas et al., 2015a), bootstrapping methods have also been widely used in off-policy estimation (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020), but the latter are limited to asymptotic bounds. Alternatively, Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016a) offer a different way to estimate the uncertainty in RL, but fail to guarantee frequentist coverage. In addition, distributional RL (Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the estimation of uncertainty that we consider. Our work is built upon the recent advances in behavior-agnostic infinite-horizon OPE, including Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020), as well as the DICE family (e.g., Nachum et al., 2019a; Zhang et al., 2020a; Yang et al., 2020b). In particular, our method can be viewed as extending the minimax framework for infinite-horizon OPE in the infinite-data regime of Tang et al. (2020a); Uehara et al. (2020); Jiang & Huang (2020) to the non-asymptotic finite-sample regime.

Outline For the rest of the paper, we start with the problem statement in Section 2, and an overview of the two dual approaches to infinite-horizon OPE that are tightly connected to our method in Section 3. We then present our main approach in Section 4 and perform empirical studies in Section 5. The proofs and an abundance of additional discussion can be found in the Appendix.

2 BACKGROUND, DATA ASSUMPTION, PROBLEM SETTING Consider an agent acting in an unknown environment. At each time step t, the agent observes the current state st in a state space S and takes an action at ∼ π(· | st) in an action space A according to a given policy π; then the agent receives a reward rt, and the state transits to s′t = st+1, following an unknown transition/reward distribution (rt, st+1) ∼ P(· | st, at). Assume the initial state s0 is drawn from a known initial distribution D0. Let γ ∈ (0, 1) be a discount factor.
In this setting, the expected reward of π is defined as Jπ := Eπ[∑Tt=0 γt rt | s0 ∼ D0], the expected total discounted reward when we execute π starting from D0 for T steps. In this work, we consider the infinite-horizon case with T → +∞. Our goal is to provide an interval estimate of Jπ for a general and challenging setting with significantly relaxed constraints on the data. In particular, we assume the data is behavior-agnostic and off-policy, which means that the data can be collected from multiple experiments, each of which can execute a mix of arbitrary, unknown policies, or even follow a non-fixed policy. More concretely, suppose that the model P is unknown, and we have a set of transition pairs D̂n = (si, ai, ri, s′i)ni=1 collected from previous experiments in a sequential order, such that for each data point i, (ri, s′i) is drawn from the model P(· | si, ai), while (si, ai) is generated by an arbitrary black box given the previous data points. We formalize both the data assumption and the goal as follows. Assumption 2.1 (Data Assumption). Assume the data D̂n = (si, ai, ri, s′i)ni=1 is drawn from an arbitrary joint distribution, such that for each i = 1, . . . , n, conditional on D̂<i := (sj, aj, rj, s′j)j<i ∪ (si, ai), the subsequent local reward and next state (ri, s′i) are drawn from P(· | si, ai). Goal Given a confidence level δ ∈ (0, 1), we want to construct an interval [Ĵ−, Ĵ+] ⊂ R based on the data D̂n, such that Pr(Jπ ∈ [Ĵ−, Ĵ+]) ≥ 1 − δ, where Pr(·) is w.r.t. the randomness of the data. The partial ordering on the data points is introduced to accommodate the case that si+1 equals s′j for some j ≤ i. The data assumption only requires that (ri, s′i) is generated from P(· | si, ai), and imposes no constraints on how (si, ai) is generated. This provides great flexibility in terms of the data collection process. In particular, we do not require (si, ai)ni=1 to be independent, as is always assumed in recent works (Liu et al., 2018a; Mousavi et al., 2020). A crucial fact is that our data assumption implies a martingale structure on the empirical Bellman residual operator of the Q-function. As we will show in Section 4.1, this enables us to derive a key concentration inequality underpinning our non-asymptotic confidence bounds. Here, we summarize a few notations that will simplify the presentation in the rest of the work. First of all, we append each (si, ai, ri, s′i) with an action a′i ∼ π(· | s′i) following s′i. This can be done for free as long as π is given (see the Remark in Section 3). Also, we write xi = (si, ai), x′i = (s′i, a′i), and yi = (x′i, ri) = (s′i, a′i, ri). Correspondingly, define X = S × A to be the state-action space and Y = X × R. Denote Pπ(y | x) = P(s′, r | x)π(a′ | s′). In this way, the observed data can be written as pairs {xi, yi}ni=1, and Assumption 2.1 is equivalent to saying that yi ∼ Pπ(· | xi) given D̂<i, which is similar to a supervised learning setting. We identify the data D̂n with its empirical measure D̂n = ∑ni=1 δxi,yi /n, where δ is the Dirac delta measure.

3 TWO DUAL APPROACHES TO INFINITE-HORIZON OFF-POLICY ESTIMATION The deficiency of the traditional IS methods on long-/infinite-horizon RL problems (a.k.a. the curse of horizon (Liu et al., 2018a)) has motivated a line of work on developing efficient infinite-horizon value estimation methods (e.g., Liu et al., 2018a; Feng et al., 2019; Nachum et al., 2019a; Zhang et al., 2020a; Mousavi et al., 2020; Tang et al., 2020a).
The main idea is to transform the value estimation problem into estimating either the Q-function or the visitation distribution (or its related density ratio) of the policy π. This section introduces and reinterprets these two tightly connected methods, which lays a foundation for the primal and dual perspectives on our main confidence bounds. Given a policy π, its Q-function is defined as qπ(x) = Eπ[∑∞t=0 γt rt | x0 = x], where the expectation is taken when we execute π initialized from a fixed state-action pair (s0, a0) = x0 = x. Let Dπ,t be the distribution of (xt, yt) = (st, at, s′t, a′t, rt) when executing policy π starting from s0 ∼ D0 for t steps. The visitation distribution of π is defined as Dπ = ∑∞t=0 γt Dπ,t. Note that Dπ integrates to 1/(1 − γ), while we treat it as a probability measure in the notation. The expected reward Jπ can be expressed using either qπ or Dπ as follows: Jπ := Eπ[∑∞t=0 γt rt] = Er∼Dπ [r] = Ex∼Dπ,0 [qπ(x)], (1) where r ∼ Dπ (resp. x ∼ Dπ,0) denotes sampling from the r- (resp. x-) marginal distribution of Dπ (resp. Dπ,0). Eq. (1) plays a key role in infinite-horizon value estimation by transforming the estimation of Jπ into estimating either qπ or Dπ. Value Estimation via Q-Function Because Dπ,0(x) = D0(s)π(a|s) is known, we can estimate Jπ by Ex∼Dπ,0 [q̂(x)] with any estimate q̂ of the true Q-function qπ; the expectation under x ∼ Dπ,0 can be estimated to any accuracy with Monte Carlo. To estimate qπ, we consider the empirical and expected Bellman residual operators: R̂q(x, y) = q(x) − γq(x′) − r, Rπq(x) = Ey∼Pπ(·|x)[R̂q(x, y)]. (2) It is well known that qπ is the unique solution of the Bellman equation Rπq = 0. Since yi ∼ Pπ(·|xi) for each data point in D̂n, if q = qπ, then R̂q(xi, yi), i = 1, . . . , n, are all zero-mean random variables. Let ω be any function from X to R; then ∑i R̂q(xi, yi)ω(xi) also has zero mean. This motivates the following functional Bellman loss (Feng et al., 2019; 2020; Xie & Jiang, 2020): LW(q; D̂n) := supω∈W {(1/n)∑ni=1 R̂q(xi, yi)ω(xi)}, (3) where W is a set of functions ω : X → R. To ensure that the sup is finite, W is typically set to be the unit ball of some normed function space Wo, such that W = {ω ∈ Wo : ‖ω‖Wo ≤ 1}. Feng et al. (2019) consider the simple case when W is taken to be the unit ball K of the reproducing kernel Hilbert space (RKHS) with a positive definite kernel k : X × X → R, in which case the loss has a simple closed form: LK(q; D̂n) = √((1/n2)∑nij=1 R̂q(xi, yi)k(xi, xj)R̂q(xj, yj)). (4) Note that the RHS of Eq. (4) is the square root of the kernel Bellman V-statistic in Feng et al. (2019). Feng et al. (2019) showed that, when the support of the data distribution D̂n covers the whole space (which may require an infinite data size) and k is an integrally strictly positive definite kernel, LK(q; D̂n) = 0 iff q = qπ. Therefore, one can estimate qπ by minimizing LK(q; D̂n). Remark The empirical Bellman residual operator R̂ can be extended to R̂q(x, y) = q(x) − r − γ(1/m)∑mℓ=1 q(s′, a′ℓ), where {a′ℓ}mℓ=1 are drawn i.i.d. from π(·|s′). As m increases, this gives a lower-variance estimate of Rπq. If m = +∞, we have R̂q(x, y) = q(x) − r − γEa′∼π(·|s′)[q(s′, a′)], which coincides with the operator used in expected SARSA (Sutton & Barto, 1998). In fact, without any modification, all results in this work apply to R̂q for any m. Value Estimation via Visitation Distribution Another way to estimate Jπ in Eq.
(1) is to approximate Dπ with a weighted empirical measure of the data (Liu et al., 2018a; Nachum et al., 2019a; Mousavi et al., 2020; Zhang et al., 2020a). The key idea is to assign an importance weight ω(xi) to each data point xi in D̂n. We can choose the function ω : X → R properly such that Dπ, and hence Jπ, can be approximated by the ω-weighted empirical measure of D̂n as follows: Jπ ≈ Ĵω := ED̂ωn [r] = (1/n)∑ni=1 ω(xi)ri, Dπ ≈ D̂ωn := (1/n)∑ni=1 ω(xi)δxi,yi. (5) Intuitively, ω can be viewed as the density ratio between Dπ and D̂n, although the empirical measure D̂n may not have a well-defined density. Liu et al. (2018a); Mousavi et al. (2020) proposed to estimate ω by minimizing a discrepancy measure between D̂ωn and Dπ. To see this, note that D = Dπ if and only if ∆(D, q) = 0 for any function q, where ∆(D, q) = ED[γq(x′) − q(x)] − EDπ [γq(x′) − q(x)] = ED[γq(x′) − q(x)] + EDπ,0 [q(x)], (6) using the fact that EDπ [γq(x′) − q(x)] = −EDπ,0 [q(x)] (Theorem 1, Liu et al., 2018a). Also note that the RHS of Eq. (6) can be practically calculated given any D and q without knowing Dπ. Let Q be a set of functions q : X → R. One can define the following loss for ω: IQ(ω; D̂n) = supq∈Q {∆(D̂ωn, q)}. (7) Similar to LW(q; D̂n), when Q is a ball in an RKHS, IQ(ω; D̂n) also has a bilinear closed form analogous to Eq. (4); see Mousavi et al. (2020) and Appendix F. As we show in Section 4, IQ(ω; D̂n) and LW(q; D̂n) are connected to the primal and dual views of our confidence bounds, respectively.

4 MAIN APPROACH Let Q be a large enough function set that includes the true Q-function, that is, qπ ∈ Q. Following Feng et al. (2020), a confidence interval [Ĵ−Q,W, Ĵ+Q,W] of Jπ can be constructed as follows: Ĵ+Q,W = supq∈Q {EDπ,0 [q] s.t. LW(q; D̂n) ≤ εn}, (8) and Ĵ−Q,W is defined in a similar way by replacing the sup over q ∈ Q with an inf. The idea here is to seek the extreme q functions with the largest (resp. smallest) expected values in the set F := Q ∩ {q : LW(q; D̂n) ≤ εn}. Therefore, Eq. (8) yields a 1 − δ confidence interval if qπ is included in F with probability at least 1 − δ, which is ensured when qπ ∈ Q and Pr(LW(qπ; D̂n) ≤ εn) ≥ 1 − δ. (9) Feng et al. (2020) showed that in the RKHS case when W = K, Eq. (9) can be achieved with εn = √(2cqπ,k (((n − 1)/n)√(log(1/δ)/n) + 1/n)) (10) when n is an even number, where cqπ,k = supx,y R̂qπ(x, y)2 k(x, x). This was proved using Hoeffding's inequality for U-statistics (Hoeffding, 1963). To solve Eq. (8) efficiently, Feng et al. (2020) took Q to be a ball in an RKHS with random feature approximation. Unfortunately, the method described by Eq. (8)-(10) has two major disadvantages: 1) Bound Needs to Be Tightened (Section 4.1) The bound εn = O(n−1/4) in Eq. (10) is sub-optimal in rate. In Section 4.1, we improve it to an εn = O(n−1/2) bound under the mild Assumption 2.1, which gets rid of the independence requirement between the transition pairs. Our tightened bound is achieved by first noting a martingale structure on the empirical Bellman operator under Assumption 2.1, and then applying an inequality of Pinelis (1992). 2) Dependence on Global Optimization (Section 4.2) The bound in Eq. (8) is guaranteed to be a 1 − δ confidence bound only when the maximization in Eq. (8) is solved to global optimality. With a large n, this leads to a high computational cost, even when Q is chosen to be an RKHS ball. Feng et al. (2020) solved Eq. (8) approximately using a random feature technique, but this method suffers from a gap between theory and practice.
In Section 4.2, we address this problem by presenting a dual form of Eq. (8), which sidesteps solving the challenging global optimization in Eq. (8). Moreover, the dual form enables us to better analyze the tightness of the confidence interval and issues regarding the choices of Q and W.

4.1 A TIGHTER CONCENTRATION INEQUALITY In this section, we explain our method for improving the bound in Eq. (10) by giving a tighter concentration inequality for the kernel Bellman loss in Eq. (4). We introduce the following semi-expected kernel Bellman loss: L∗K(q; D̂n) = √((1/n2)∑nij=1 Rπq(xi)k(xi, xj)Rπq(xj)), (11) in which we replace the empirical Bellman residual operator R̂q in Eq. (4) with its expected counterpart Rπq, but still take the empirical average over {xi}ni=1 in D̂n. For a more general function set W, we can similarly define L∗W(q; D̂n) by replacing R̂q with Rπq in Eq. (3). Obviously, we have L∗W(q; D̂n) = 0 when q = qπ. Theorem 4.1 below shows that LK(q; D̂n) concentrates around L∗K(q; D̂n) with an O(n−1/2) error under Assumption 2.1. At first glance, it may seem surprising that the concentration bound holds even without any independence assumption between the {xi}. An easy way to make sense of this is to recognize that the randomness in yi conditional on xi is aggregated through averaging, even when {xi} are deterministic. As Assumption 2.1 does not impose any (weak) independence between the {xi}, we cannot establish that LK(q; D̂n) concentrates around its mean ED̂n [LK(q; D̂n)] (which would be a fully expected kernel Bellman loss) without introducing further assumptions. Theorem 4.1. Assume K is the unit ball of the RKHS with a positive definite kernel k(·, ·). Let cq,k := supx∈X,y∈Y (R̂q(x, y) − Rπq(x))2 k(x, x) < ∞. Under Assumption 2.1, for any δ ∈ (0, 1), with probability at least 1 − δ, we have |LK(q; D̂n) − L∗K(q; D̂n)| ≤ √(2cq,k log(2/δ)/n). (12) In particular, when q = qπ, we have cqπ,k = supx,y (R̂qπ(x, y))2 k(x, x), and LK(qπ; D̂n) ≤ √(2cqπ,k log(2/δ)/n). (13) Intuitively, to see why we can expect an O(n−1/2) bound, note that LK(q; D̂n) is the square root of a product of two R̂q terms, each of which contributes an O(n−1/2) error w.r.t. Rπq. Technically, the proof is based on a key observation: Assumption 2.1 ensures that Zi := R̂q(xi, yi) − Rπq(xi), i = 1, . . . , n, forms a martingale difference sequence w.r.t. {D̂<i : i = 1, . . . , n}, in the sense that E[Zi | D̂<i] = 0, ∀i. See Appendix B for details. The proof also leverages a special property of the RKHS and applies a Hoeffding-like inequality on Hilbert spaces due to Pinelis (1992) (see Appendix B). For other, more general function sets W, we establish in Appendix E a similar bound using Rademacher complexity, although it yields a less tight bound than Eq. (12) when W = K.

4.2 DUAL CONFIDENCE BOUNDS We derive a dual form of Eq. (8) that sidesteps the need to solve the challenging global optimization in Eq. (8). To do so, let us plug the definition of LW(q; D̂n) from Eq. (3) into Eq. (8) and introduce a Lagrange multiplier: Ĵ+Q,W = supq∈Q infh∈W infλ≥0 EDπ,0 [q] − λ((1/n)∑ni=1 h(xi)R̂q(xi, yi) − εn) (14) = supq∈Q infω∈Wo {EDπ,0 [q] − (1/n)∑ni=1 ω(xi)R̂q(xi, yi) + εn‖ω‖Wo}, (15) where we use ω(x) = λh(x). Exchanging the order of min/max and some further derivation yields the following main result. Theorem 4.2. I) Let W be the unit ball of a normed function space Wo.
We have Ĵ+Q,W ≤ F̂+Q (ω) := ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo , ∀ω ∈ Wo , Ĵ−Q,W ≥ F̂−Q (ω) := ED̂ωn [r]− I−Q(ω; D̂n)− εn ‖ω‖Wo , ∀ω ∈ Wo , (16) where −Q = {−q : q ∈ Q} and hence I−Q(ω; D̂n) = IQ(ω; D̂n) if Q = −Q. Further, we have Ĵ+Q,W = infω∈Wo F̂ + Q (ω) and Ĵ − Q,W = supω∈Wo F̂ − Q (ω) if Q is convex and there exists a q ∈ Q that satisfies the strict feasibility condition that LW(q; D̂n) < εn. II) For D̂n and δ ∈ (0, 1), assume Wo and εn ∈ R satisfy Eq. (9) (e.g., via Theorem 4.1). Then for any function set Q with qπ ∈ Q, and any function ω+, ω− ∈ Wo (the choice of Q, ω+, ω− can depend on D̂n arbitrarily), we have Pr ( Jπ ∈ [ F̂−Q (ω−), F̂ + Q (ω+) ]) ≥ 1− δ . (17) Theorem 4.2 transforms the original bound in Eq. (8), framed in terms of q and LW(q; D̂n), into a form that involves the density-ratio ω and the related loss IQ(ω; D̂n). The bounds in Eq. (16) can be interpreted as assigning an error bar around the ω-based estimator Ĵω = ED̂ωn [r] in Eq. (5), with the error bar of I±Q(ω; D̂n) + εn ‖ω‖Wo . Specifically, the first term I±Q(ω; D̂n) measures the discrepancy between D̂ωn and Dπ as discussed in Eq. (7), whereas the second term captures the randomness in the empirical Bellman residual operator R̂qπ . Compared with Eq. (8), the global maximization on q ∈ Q is now transformed inside the IQ(ω; D̂n) term, which yields a simple closed form solution in the RKHS case (see Appendix F). In practice, we can optimize ω+ and ω− to obtain the tightest possible bound (and hence recover the primal bound) by minimizing/maximizing F̂+Q (ω) and F̂ − Q (ω), but it is not necessary to solve the optimization to global optimality. WhenWo is an RKHS, by the standard finite representer theorem (Scholkopf & Smola, 2018), the optimization on ω reduces to a finite dimensional optimization, which can be sped up with any favourable approximation techniques. We elaborate on this in Appendix D. Length of the Confidence Interval The form in Eq. (16) also makes it much easier to analyze the tightness of the confidence interval. Suppose ω = ω+ = ω− and Q = −Q, the length of the optimal confidence interval is length([Ĵ−Q,W , Ĵ + Q,W ]) = inf ω∈Wo { 2IQ(ω; D̂n) + 2εn ‖ω‖Wo } . Given εn is O(n−1/2), we can make the overall length of the optimal confidence interval also O(n−1/2), as long asWo is rich enough to include a good density ratio estimator ω∗ that satisfies IQ(ω ∗; D̂n) = O(n −1/2) and has a bounded norm ‖ω∗‖Wo . We can expect to achieve IQ(ω∗; D̂n) = O(n−1/2), when (1) Q has an O(n−1/2) sequential Rademacher complexity (Rakhlin et al., 2015) (e.g., a finite ball in RKHS); and (2) D̂n is collected following a Markov chain with strong mixing condition and weakly converges to some limit distribution D∞ whose support is X , and therefore we can define ω∗ as the density ratio between Dπ and D∞. Refer to Appendix C for more discussions. Indeed, our experiments show that the lengths of practically constructed confidence intervals do tend to decay with an O(n−1/2) rate. Choice ofW and Q To ensure the concentration inequality in Theorem 4.1 is valid, the choice of Wo cannot depend on the data D̂n. Therefore, we should use a separate holdout data to construct a data-dependentWo. In contrast, the choice of Q can depend on the data D̂n arbitrarily, since it is a part of the optimization bound Eq. (8) but not in the tail bound Eq. (9). In this light, one can construct the best possible Q by exploiting the data information in the most favourable way. 
For example, we can construct an estimator q̂ ≈ qπ based on any state-of-the-art method (e.g., Q-learning or model-based methods), and set Q to be a ball centered around q̂ that contains qπ. This enables post-hoc analysis based on prior information on qπ, as suggested in Feng et al. (2020). Mis-specification of Q and Oracle Upper/Lower Estimates Our result relies on the assumption that qπ ∈ Q. However, as with other statistical estimation problems, there exists no provable way to empirically verify the correctness of model assumptions such as qπ ∈ Q: empirical data only reveal information about the unknown function (in our case qπ) on a finite number of data points, and no conclusion can be made about unseen data points without imposing a smoothness assumption. Typically, what we can do is the opposite: reject qπ ∈ Q when the Bellman loss LW(q; D̂n) of every q in Q is larger than the threshold εn. We highlight that, even without verifying qπ ∈ Q, our method can still be viewed as a confidence interval of the best possible (oracle) upper and lower estimates given the data D̂n plus the assumption that qπ ∈ Q, defined as Ĵ+Q,∗ = supq∈Q {EDπ,0 [q] s.t. R̂q(xi, yi) = R̂qπ(xi, yi), ∀i = 1, . . . , n}. (18) In fact, it is impossible to derive empirical upper bounds lower than Ĵ+Q,∗, as there is no way to distinguish q and qπ if R̂q(xi, yi) = R̂qπ(xi, yi) for all i. But our interval [Ĵ−Q,K, Ĵ+Q,K] provides a 1 − δ confidence outer bound of [Ĵ−Q,∗, Ĵ+Q,∗] once Eq. (9) holds, regardless of whether qπ ∈ Q holds. Hence, it is of independent interest to further explore the dual form of Eq. (18), which is another starting point for deriving our bound; we give more discussion in Appendix G. Lastly, we argue that it is important to include Q in the bound. Proposition G.1 in the Appendix shows that removing the q ∈ Q constraint in Eq. (18) would lead to an infinite upper bound, unless the {si, s′i}ni=1 from D̂n almost surely cover the whole state space S, in the sense that Prs∼D0(s ∈ {si, s′i}ni=1) = 1.

5 EXPERIMENTS We compare our method with a variety of existing algorithms for obtaining asymptotic and non-asymptotic bounds on a number of benchmarks. We find that our method provides confidence intervals that correctly cover the true expected reward with probability larger than the specified success probability 1 − δ (and are hence safe) across the multiple examples we tested. In comparison, the non-asymptotic bounds based on IS provide much wider confidence intervals. On the other hand, the asymptotic methods, such as bootstrap, despite giving tighter intervals, often fail to capture the true values with the given probability in practice. Environments and Dataset Construction We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator (https://github.com/jxx123/simglucose). We follow a similar procedure as Feng et al. (2020) to construct the behavior and target policies. More details on the environments and the data collection procedure are included in Appendix H.1. Algorithm Settings We test the dual bound described in our paper. Throughout the experiments, we always set W = K, the unit ball of the RKHS with positive definite kernel k, and set Q = rQK̃, the ball of radius rQ in the RKHS with another kernel k̃. We take both kernels to be Gaussian RBF kernels and choose rQ and the bandwidths of k and k̃ using the procedure in Appendix H.2.
We use a fast approximation method to optimize ω in F̂+Q (ω) and F̂−Q (ω), as shown in Appendix D. Once ω is found, we evaluate the bound in Eq. (16) exactly to ensure that the theoretical guarantee holds. Baseline Algorithms We compare our method with four existing baselines, including the IS-based non-asymptotic bound using the empirical Bernstein inequality by Thomas et al. (2015b), the IS-based bootstrap bound of Thomas (2015), the bootstrap bound based on fitted Q-evaluation (FQE) by Kostrikov & Nachum (2020), and the bound of Feng et al. (2020), which is equivalent to the primal bound in (8) but with a looser concentration inequality (they use an εn = O(n−1/4) threshold). Results Figure 1 shows our method obtains much tighter bounds than Feng et al. (2020), because we use a much tighter concentration inequality, even though the dual bound that we use can be slightly looser than the primal bound used in Feng et al. (2020). Our method is also more computationally efficient than that of Feng et al. (2020), because the dual bound can be tightened approximately while the primal bound requires solving a global optimization problem. Figure 1 (b) shows that we provide increasingly tight bounds as the data size n increases, and the length of the interval decays at approximately an O(n−1/2) rate. Figure 1 (c) shows that when we increase the significance level δ, our bounds become tighter while still capturing the ground truth. Figure 1 (d) shows the percentage of times that the interval fails to capture the true value over 100 random trials (denoted δ̂) as we vary δ. We can see that δ̂ remains close to zero even when δ is large, suggesting that our bound is very conservative. Part of the reason is that the bound is constructed by considering the worst case, and we used a conservative choice of the radius rQ and the coefficient cqπ,k in Eq. (13) (see Appendix H.2). In Figure 2 we compare different algorithms on more examples with δ = 0.1. We can again see that our method provides tight and conservative intervals that always capture the true value. Although FQE (Bootstrap) yields tighter intervals than our method, it fails to capture the ground truth much more often than the promised δ = 0.1 (e.g., it fails in all the random trials in Figure 2 (a)). We conduct more ablation studies on hyper-parameters and the data collection procedure; see Appendix H.2 and H.3 for details.

6 CONCLUSION We develop a dual approach to constructing high-confidence bounds for off-policy evaluation with an improved rate over Feng et al. (2020). Our method can handle dependent data, and does not require global optimization to obtain a valid bound. Empirical results demonstrate that our bounds are tight and valid compared with a range of existing baselines. Future directions include leveraging our bounds for policy optimization and safe exploration.

A PROOF OF THE DUAL BOUND IN THEOREM 4.2 Proof. Introducing a Lagrange multiplier, the bound in (8) is equivalent to Ĵ+Q,W = maxq∈Q minλ≥0 {EDπ,0 [q] − λ(maxh∈W (1/n)∑ni=1 h(xi)R̂q(xi, yi) − εn)} = maxq∈Q minλ≥0 minh∈W {EDπ,0 [q] − λ((1/n)∑ni=1 h(xi)R̂q(xi, yi) − εn)} = maxq∈Q minω∈Wo {EDπ,0 [q] − (1/n)∑ni=1 ω(xi)R̂q(xi, yi) + εn‖ω‖Wo}, where we use ω = λh, so that λ is replaced by ‖ω‖Wo. Define M(q, ω; D̂n) = EDπ,0 [q] − (1/n)∑ni=1 ω(xi)R̂q(xi, yi) + εn‖ω‖Wo = ED̂ωn [r] + ∆(D̂ωn, q) + εn‖ω‖Wo.
Then we have maxq∈Q M(q, ω; D̂n) = ED̂ωn [r] + maxq∈Q ∆(D̂ωn, q) + εn‖ω‖Wo = ED̂ωn [r] + IQ(ω; D̂n) + εn‖ω‖Wo = F̂+Q (ω). Therefore, Ĵ+Q,W = maxq∈Q minω∈Wo M(q, ω; D̂n) ≤ minω∈Wo maxq∈Q M(q, ω; D̂n) = minω∈Wo F̂+Q (ω). The lower bound follows analogously. Strong duality holds when Slater's condition is satisfied (Nesterov, 2013), which amounts to saying that the primal problem in (8) is convex and strictly feasible; this requires that Q is convex and that there exists at least one q ∈ Q that satisfies the constraint strictly, that is, LW(q; D̂n) < εn. Note that the objective function is linear in q, and the constraint function LW(q; D̂n) is always convex in q (since it is the sup of a set of functions linear in q, following (3)).

B PROOF OF CONCENTRATION BOUND IN THEOREM 4.1 Our proof requires the following Hoeffding-type inequality on Hilbert spaces by Pinelis (Theorem 3, 1992); see also Section 2.4 of Rosasco et al. (2010). Lemma B.1. (Theorem 3, Pinelis, 1992) Let H be a Hilbert space and {fi}ni=1 a martingale difference sequence in H that satisfies supi ‖fi‖H ≤ σ almost surely. Then for any ε > 0, Pr(‖(1/n)∑ni=1 fi‖H ≥ ε) ≤ 2 exp(−nε2/(2σ2)). Therefore, with probability at least 1 − δ, we have ‖(1/n)∑ni=1 fi‖H ≤ √(2σ2 log(2/δ)/n). Lemma B.2. Let k(x, x′) be a positive definite kernel whose RKHS is Hk. Define fi(·) = R̂q(xi, yi)k(xi, ·) − Rπq(xi)k(xi, ·). If Assumption 2.1 holds, then {fi}ni=1 is a martingale difference sequence in Hk w.r.t. T<i := (xj, yj)j<i ∪ (xi), that is, E[fi(·) | T<i] = 0. In addition, ‖(1/n)∑ni=1 fi‖2Hk = (1/n2)∑nij=1 (R̂q(xi, yi) − Rπq(xi)) k(xi, xj) (R̂q(xj, yj) − Rπq(xj)), and ‖fi‖2Hk ≤ cq,k for all i = 1, . . . , n. Proof of Theorem 4.1. Following Lemma B.1 and Lemma B.2, since {fi}ni=1 is a martingale difference sequence in Hk with ‖fi‖2Hk ≤ cq,k almost surely, we have with probability at least 1 − δ, (1/n2)∑nij=1 (R̂q(xi, yi) − Rπq(xi)) k(xi, xj) (R̂q(xj, yj) − Rπq(xj)) = ‖(1/n)∑ni=1 fi‖2Hk ≤ 2cq,k log(2/δ)/n. Using Lemma B.3 below, we have |LK(q; D̂n) − L∗K(q; D̂n)| ≤ ‖(1/n)∑ni=1 fi‖Hk ≤ √(2cq,k log(2/δ)/n). This completes the proof. Lemma B.3. Assume k(x, x′) is a positive definite kernel. We have |LK(q; D̂n) − L∗K(q; D̂n)|2 ≤ (1/n2)∑nij=1 (R̂q(xi, yi) − Rπq(xi)) k(xi, xj) (R̂q(xj, yj) − Rπq(xj)). Proof. Define ĝ(·) = (1/n)∑ni=1 R̂q(xi, yi)k(xi, ·) and g(·) = (1/n)∑ni=1 Rπq(xi)k(xi, ·). Then we have ‖ĝ‖2Hk = (1/n2)∑nij=1 R̂q(xi, yi)k(xi, xj)R̂q(xj, yj) = LK(q; D̂n)2, ‖g‖2Hk = (1/n2)∑nij=1 Rπq(xi)k(xi, xj)Rπq(xj) = L∗K(q; D̂n)2, and ‖ĝ − g‖2Hk = (1/n2)∑nij=1 (R̂q(xi, yi) − Rπq(xi)) k(xi, xj) (R̂q(xj, yj) − Rπq(xj)). The result then follows from the triangle inequality |‖ĝ‖Hk − ‖g‖Hk| ≤ ‖ĝ − g‖Hk.

B.1 CALCULATION OF cqπ,k The practical calculation of the coefficient cqπ,k in the concentration inequality was discussed in Feng et al. (2020); we include it here for completeness. Lemma B.4. (Feng et al. (2020), Lemma 3.1) Assume the reward function and the kernel are bounded, with supx |r(x)| ≤ rmax and supx,x′ |k(x, x′)| ≤ Kmax. Then cqπ,k := supx∈X,y∈Y (R̂qπ(x, y))2 k(x, x) ≤ 4Kmax r2max/(1 − γ)2. In practice, we obtain Kmax from the kernel function that we choose (e.g., Kmax = 1 for RBF kernels), and rmax from knowledge of the environment.

C MORE ON THE TIGHTNESS OF THE CONFIDENCE INTERVAL The benefit of having both upper and lower bounds is that we can empirically assess the tightness of the bound by checking the length of the interval [F̂−Q (ω−), F̂+Q (ω+)].
However, from the theoretical perspective, it is desirable to know a priori that the length of the interval will decrease at a fast rate as the data size n increases. We now show that this is the case if Wo is chosen to be sufficiently rich so that it includes an ω ∈ Wo such that D̂ωn ≈ Dπ. Theorem C.1. Assume Wo is sufficiently rich to include a "good" ω∗ in Wo with D̂ω∗n ≈ Dπ, in that supq∈Q |ED̂ω∗n [R̂q(x; x′, r)] − EDπ [R̂q(x; x′, r)]| ≤ c/nα, (19) where c and α are two positive coefficients. Then we have max{Ĵ+Q,W − Jπ, Jπ − Ĵ−Q,W} ≤ c/nα + εn‖ω∗‖Wo. Assumption (19) holds if D̂n is collected following a Markov chain with a certain strong mixing condition that weakly converges to some limit distribution D∞ whose support is X, for which we can define ω∗(x) = Dπ(x)/D∞(x). In this case, if Q is a finite ball in an RKHS, then we can achieve (19) with α = 1/2, which yields an overall bound of rate O(n−1/2). For more general function classes, α depends on the martingale Rademacher complexity of the function set R̂Q = {R̂q(x, y) : q ∈ Q} (Rakhlin et al., 2015). In our empirical results, we observe that the gap of the practically constructed bounds tends to follow the O(n−1/2) rate. Proof. Note that Jπ = EDπ [r], and IQ(ω; D̂n) = supq∈Q {ED̂ωn [γq(x′) − q(x)] − EDπ [γq(x′) − q(x)]}. Because ω∗ ∈ Wo, we have Ĵ+Q,W − Jπ ≤ F̂+Q (ω∗) − Jπ = ED̂ω∗n [r] − EDπ [r] + IQ(ω∗; D̂n) + εn‖ω∗‖Wo = supq∈Q {ED̂ω∗n [R̂q(x, y)] − EDπ [R̂q(x, y)]} + εn‖ω∗‖Wo ≤ c/nα + εn‖ω∗‖Wo. The case of the lower bound follows similarly.

D OPTIMIZATION ON Wo Consider the optimization of ω in Wo: F̂+Q (ω) := (1/n)∑ni=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Wo √(2cqπ,k log(2/δ)/n). (20) Assume Wo is the RKHS of a kernel k(x, x̄), that is, Wo = Hk. By the finite representer theorem of RKHS (Smola et al., 2007), the optimization of ω in the RKHS Hk can be reduced to a finite-dimensional optimization problem. Specifically, the optimal solution of (20) can be written in the form ω(x) = ∑ni=1 k(x, xi)αi with ‖ω‖2Hk = ∑ni,j=1 k(xi, xj)αiαj for some vector α := [αi]ni=1 ∈ Rn. Write K = [k(xi, xj)]ni,j=1 and r = [ri]ni=1. The optimization of ω then reduces to a finite-dimensional optimization over α: minα∈Rn (1/n) r⊤Kα + IQ(Kα; D̂n) + √(α⊤Kα) √(2cqπ,k log(2/δ)/n), where IQ(Kα; D̂n) = maxq∈Q {EDπ,0 [q] + (1/n)(T̂q)⊤Kα} and T̂q = [γq(x′i) − q(xi)]ni=1. When Q is an RKHS ball, we can calculate IQ(Kα; D̂n) using (22) in Section F. This computation can still be expensive when n is large. Fortunately, our confidence bound holds for any ω; a better ω only gives a tighter bound, and it is not necessary to find the globally optimal ω. Therefore, one can use any approximation algorithm to find ω, trading off tightness against computational cost. We discuss two methods: 1) Approximating α. The length of α can be too large when n is large. To address this, we assume αi = g(xi, θ), where g is any parametric function (such as a neural network) with parameter θ, which can be much lower dimensional than α. We can then optimize θ with stochastic gradient descent, approximating the data averages (1/n)∑ni=1(·) with averages over small mini-batches; this introduces bias into the gradient estimates, but that is not an issue when the goal is only to obtain a reasonable approximation. 2) Replacing the kernel k. Assume the kernel k yields a random feature expansion: k(x, x̄) = Eβ∼π[φ(x, β)φ(x̄, β)], where φ(x, β) is a feature map with parameter β and π is a distribution over β. We draw {βi}mi=1 i.i.d. from π, where m is taken to be much smaller than n.
We replace k with k̂(x, x̄) = (1/m)∑mi=1 φ(x, βi)φ(x̄, βi) and Hk with Hk̂. That is, we consider solving Ĵ+Q,W = minω∈Hk̂ F̂+Q (ω) := (1/n)∑ni=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Hk̂ √(2cqπ,k̂ log(2/δ)/n). It is known that any function ω in Hk̂ can be represented as ω(x) = (1/m)∑mi=1 wiφ(x, βi) for some w = [wi]mi=1 ∈ Rm, and satisfies ‖ω‖2Hk̂ = (1/m)∑mi=1 w2i. In this way, the problem reduces to optimizing an m-dimensional vector w, which can be solved by standard convex optimization techniques.

E CONCENTRATION INEQUALITIES OF GENERAL FUNCTIONAL BELLMAN LOSSES When W is a general function set, one can still obtain a concentration bound using Rademacher complexity. Define R̂q ◦ W := {h(x, y) = R̂q(x, y)ω(x) : ω ∈ W}. Using the standard derivation in Rademacher complexity theory in conjunction with martingale theory (Rakhlin et al., 2015), we have, with probability at least 1 − δ, supω∈W {(1/n)∑ni=1 (R̂q(xi, yi) − Rπq(xi))ω(xi)} ≤ 2Rad(R̂q ◦ W) + √(2cq,W log(2/δ)/n), where Rad(R̂q ◦ W) is the sequential Rademacher complexity as defined in Rakhlin et al. (2015). A triangle inequality yields |LW(q; D̂n) − L∗W(q; D̂n)| ≤ supω∈W {(1/n)∑ni=1 (R̂q(xi, yi) − Rπq(xi))ω(xi)}. Therefore, |LW(q; D̂n) − L∗W(q; D̂n)| ≤ 2Rad(R̂q ◦ W) + √(2cq,W log(2/δ)/n), (21) where cq,W = supω∈W supx,y (R̂q(x, y) − Rπq(x))2 ω(x)2. When W equals the unit ball K of the RKHS related to kernel k, we have cq,W = cq,k, and hence this bound is strictly worse than the bound in Theorem 4.1.

F CLOSED FORM OF IQ(ω; D̂n) WHEN Q IS RKHS Similar to LK(q; D̂n), when Q is taken to be the unit ball K̃ of the RKHS of a positive definite kernel k̃(x, x̄), (7) can be expressed in the bilinear closed form shown in Mousavi et al. (2020): IQ(ω; D̂n)2 = A − 2B + C, (22) where A = E(x,x̄)∼Dπ,0×Dπ,0 [k̃(x, x̄)], B = E(x,x̄)∼D̂ωn×Dπ,0 [T̂xπ k̃(x, x̄)], C = E(x,x̄)∼D̂ωn×D̂ωn [T̂xπ T̂x̄π k̃(x, x̄)], where T̂πf(x) = γf(x′) − f(x); in T̂xπ T̂x̄π k̃(x, x̄), we apply T̂x̄π and T̂xπ in a sequential order by treating k̃ as a function of x̄ and then of x.

G MORE ON THE ORACLE BOUND AND ITS DUAL FORM The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its dual form here. Using a Lagrange multiplier, the optimization in (18) can be rewritten as Ĵ+Q,∗ = maxq∈Q minω M∗(q, ω; D̂n), (23) where M∗(q, ω; D̂n) = EDπ,0 [q] − (1/n)∑ni=1 ω(xi)(R̂q(xi, yi) − R̂qπ(xi, yi)), and ω now serves as the Lagrange multiplier. By weak duality, we have Ĵ+Q,∗ ≤ F̂+Q,∗(ω) := ED̂ωn [r] + IQ(ω; D̂n) (known) + R(ω, qπ) (unknown), ∀ω, where R(ω, qπ) = (1/n)∑ni=1 ω(xi)R̂qπ(xi, yi). The derivation follows similarly for the lower bound. So for any ω ∈ Wo, we have [Ĵ−Q,∗, Ĵ+Q,∗] ⊆ [F̂−Q,∗(ω), F̂+Q,∗(ω)]. Here the first two terms of F̂+Q,∗(ω) can be empirically estimated (they are the same as the first two terms of (16)), but the third term R(ω, qπ) depends on the unknown qπ and hence needs to be further upper bounded. Our method can be viewed as constraining ω in W, which is assumed to be the unit ball of Wo, and applying a worst-case bound: F̂+Q,∗(ω) = ED̂ωn [r] + IQ(ω; D̂n) + R(ω, qπ) ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖ω‖Wo suph∈W R(h, qπ) ≤ ED̂ωn [r] + IQ(ω; D̂n) + ‖ω‖Wo LW(qπ; D̂n) ≤ ED̂ωn [r] + IQ(ω; D̂n) + εn‖ω‖Wo = F̂+Q (ω), with probability at least 1 − δ, for all ω ∈ Wo, where the last step applies the high-probability bound Pr(LW(qπ; D̂n) ≤ εn) ≥ 1 − δ. A similar derivation on the lower-bound counterpart gives Pr([F̂−Q,∗(ω), F̂+Q,∗(ω)] ⊆ [F̂−Q (ω), F̂+Q (ω)]) ≥ 1 − δ.
G MORE ON THE ORACLE BOUND AND ITS DUAL FORM

The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its dual form here. Using a Lagrange multiplier, the optimization in (18) can be rewritten as
$$\hat{J}^+_{\mathcal{Q},*} = \max_{q \in \mathcal{Q}} \min_{\omega} M_*(q, \omega; \hat{D}_n), \qquad (23)$$
where
$$M_*(q, \omega; \hat{D}_n) = \mathbb{E}_{D_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\big(\hat{R}q(x_i, y_i) - \hat{R}q_\pi(x_i, y_i)\big),$$
and $\omega$ now serves as the Lagrange multiplier. By weak duality, we have
$$\hat{J}^+_{\mathcal{Q},*} \le \hat{F}^+_{\mathcal{Q},*}(\omega) := \underbrace{\mathbb{E}_{\hat{D}_n^{\omega}}[r] + I_{\mathcal{Q}}(\omega; \hat{D}_n)}_{\text{known}} + \underbrace{R(\omega, q_\pi)}_{\text{unknown}}, \quad \forall \omega, \qquad \text{where} \quad R(\omega, q_\pi) = \frac{1}{n}\sum_{i=1}^n \omega(x_i)\,\hat{R}q_\pi(x_i, y_i).$$
The derivation follows similarly for the lower bound. So for any $\omega \in \mathcal{W}_o$, we have $[\hat{J}^-_{\mathcal{Q},*}, \hat{J}^+_{\mathcal{Q},*}] \subseteq [\hat{F}^-_{\mathcal{Q},*}(\omega), \hat{F}^+_{\mathcal{Q},*}(\omega)]$. Here the first two terms of $\hat{F}^+_{\mathcal{Q},*}(\omega)$ can be estimated empirically (they are the same as the first two terms of (16)), but the third term $R(\omega, q_\pi)$ depends on the unknown $q_\pi$ and hence needs to be further upper bounded. Our method can be viewed as constraining $\omega$ in $\mathcal{W}$, which is assumed to be the unit ball of $\mathcal{W}_o$, and applying a worst-case bound:
$$\hat{F}^+_{\mathcal{Q},*}(\omega) = \mathbb{E}_{\hat{D}_n^{\omega}}[r] + I_{\mathcal{Q}}(\omega; \hat{D}_n) + R(\omega, q_\pi) \le \mathbb{E}_{\hat{D}_n^{\omega}}[r] + I_{\mathcal{Q}}(\omega; \hat{D}_n) + \|\omega\|_{\mathcal{W}_o} \sup_{h \in \mathcal{W}} R(h, q_\pi) \le \mathbb{E}_{\hat{D}_n^{\omega}}[r] + I_{\mathcal{Q}}(\omega; \hat{D}_n) + \|\omega\|_{\mathcal{W}_o} L_{\mathcal{W}}(q_\pi; \hat{D}_n) \overset{\text{w.p. } 1-\delta}{\le} \mathbb{E}_{\hat{D}_n^{\omega}}[r] + I_{\mathcal{Q}}(\omega; \hat{D}_n) + \varepsilon_n \|\omega\|_{\mathcal{W}_o} = \hat{F}^+_{\mathcal{Q}}(\omega),$$
where the last step applies the high-probability bound $\Pr\big(L_{\mathcal{W}}(q_\pi; \hat{D}_n) \le \varepsilon_n\big) \ge 1 - \delta$. A similar derivation on the lower-bound counterpart gives
$$\Pr\Big( \big[\hat{F}^-_{\mathcal{Q},*}(\omega), \hat{F}^+_{\mathcal{Q},*}(\omega)\big] \subseteq \big[\hat{F}^-_{\mathcal{Q}}(\omega), \hat{F}^+_{\mathcal{Q}}(\omega)\big] \Big) \ge 1 - \delta.$$
Therefore, our confidence bound $[\hat{F}^-_{\mathcal{Q}}(\omega), \hat{F}^+_{\mathcal{Q}}(\omega)]$ is a $1-\delta$ confidence outer bound of the oracle bound: with probability at least $1-\delta$, $[\hat{J}^-_{\mathcal{Q},*}, \hat{J}^+_{\mathcal{Q},*}] \subseteq [\hat{F}^-_{\mathcal{Q},*}(\omega), \hat{F}^+_{\mathcal{Q},*}(\omega)] \subseteq [\hat{F}^-_{\mathcal{Q}}(\omega), \hat{F}^+_{\mathcal{Q}}(\omega)]$.

Introducing $\mathcal{Q}$ is necessary. Our method does not require any independence assumption between the transition pairs; the trade-off is that we have to assume that $q_\pi$ falls into a function set $\mathcal{Q}$ that imposes a certain smoothness assumption. This is necessary because the data only provide information about $q_\pi$ on a finite number of points, and $q_\pi$ can be arbitrarily non-smooth outside of the data points; hence no reasonable upper/lower bound can be obtained without a smoothness condition that extends the information on the data points to the rest of the domain.

Proposition G.1. Unless $\Pr_{s \sim D_{\pi,0}}\big(s \notin \{s_i, s'_i\}_{i=1}^n\big) = 0$, for any $u \in \mathbb{R}$ there exists a function $q : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ such that
$$\mathbb{E}_{D_{\pi,0}}[q] = u, \qquad \hat{R}q(x_i, y_i) = \hat{R}q_\pi(x_i, y_i), \quad \forall i = 1, \ldots, n.$$

Proof. Let $\mathcal{Q}_{\mathrm{null}}$ be the set of functions that are zero on $\{s_i, s'_i\}_{i=1}^n$, that is,
$$\mathcal{Q}_{\mathrm{null}} = \big\{g : \mathcal{S} \times \mathcal{A} \to \mathbb{R} \;:\; g(s, a) = 0, \ \forall s \in \{s_i, s'_i\}_{i=1}^n, \ a \in \mathcal{A}\big\}.$$
Then for any $g \in \mathcal{Q}_{\mathrm{null}}$ we have
$$\hat{R}(q_\pi + g)(x_i, y_i) = \hat{R}q_\pi(x_i, y_i), \quad \forall i = 1, \ldots, n, \qquad \text{and} \qquad \mathbb{E}_{D_{\pi,0}}[q_\pi + g] = \mathbb{E}_{D_{\pi,0}}[q_\pi] + \mathbb{E}_{D_{\pi,0}}[g] = J_\pi + \mathbb{E}_{D_{\pi,0}}[g].$$
Take $g(s, a) = z\,\mathbb{I}\big(s \notin \{s_i, s'_i\}_{i=1}^n\big)$, where $z$ is any real number. Then
$$\mathbb{E}_{D_{\pi,0}}[q_\pi + g] = J_\pi + z \Pr_{s \sim D_{\pi,0}}\big(s \notin \{s_i, s'_i\}_{i=1}^n\big).$$
Because $\Pr_{s \sim D_{\pi,0}}\big(s \notin \{s_i, s'_i\}_{i=1}^n\big) \neq 0$, we can choose $z$ to make $\mathbb{E}_{D_{\pi,0}}[q_\pi + g]$ take any value.

H ABLATION STUDY AND EXPERIMENTAL DETAILS

H.1 EXPERIMENTAL DETAILS

Environments and Dataset Construction
We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator. For Inverted-Pendulum we discretize the action space to be $\{-1, -0.3, -0.2, 0, 0.2, 0.3, 1\}$. The action spaces of CartPole and the medical treatment simulator are both discrete.

Policy Construction
We follow a similar setup to Feng et al. (2020) to construct the behavior and target policies. For all of the environments, we constrain our policy class to softmax policies and use PPO (Schulman et al., 2017) to train a good policy $\pi$; we then use different temperatures of the softmax policy to construct the target and behavior policies (we set the temperature $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy, so that the target policy is more deterministic than the behavior policy). We consider other choices of behavior policies in Section H.3. For horizon lengths, we fix $\gamma = 0.95$ and set the horizon length $H = 50$ for Inverted-Pendulum, $H = 100$ for CartPole, and $H = 50$ for the Diabetes simulator.

Algorithm Settings
We test the bound in Eq. (16)-(17). Throughout the experiments, we always set $\mathcal{W} = \mathcal{K}$, the unit ball of the RKHS with kernel $k(\cdot, \cdot)$, and set $\mathcal{Q} = r_{\mathcal{Q}} \tilde{\mathcal{K}}$, the zero-centered ball of radius $r_{\mathcal{Q}}$ in an RKHS with kernel $\tilde{k}(\cdot, \cdot)$. We take both $k$ and $\tilde{k}$ to be Gaussian RBF kernels. The bandwidths of $k$ and $\tilde{k}$ are selected to make sure the functional Bellman loss is not large on a validation set. The radius is selected to be sufficiently large to ensure that $q_\pi$ is included in $\mathcal{Q}$: we use the data to fit an approximation $\hat{q}$ whose functional Bellman loss is smaller than $\varepsilon_n$, and then set $r_{\mathcal{Q}} = 10\,\|\hat{q}\|_{\tilde{\mathcal{K}}}$. We optimize $\omega$ using the random feature approximation method described in Appendix D. Once $\omega_+$ and $\omega_-$ are found, we evaluate the bound in Eq. (16) exactly, to ensure the theoretical guarantee holds.
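The radius heuristic above is easy to implement once $\hat{q}$ is represented in the RKHS of $\tilde{k}$. Below is a minimal standalone sketch; the gram matrix, fitting points, and expansion coefficients `beta` are random placeholders standing in for the output of whatever fitting routine produced $\hat{q}$.

```python
# If q_hat(x) = sum_i beta_i * ktil(x_i, x), then ||q_hat||^2 in the RKHS of ktil
# equals beta' K beta, where K is the gram matrix on the fitting points.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                          # placeholder fitting points
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)                                 # Gaussian gram matrix, bandwidth 1
beta = 0.1 * rng.normal(size=20)                      # placeholder coefficients of q_hat

q_hat_norm = np.sqrt(beta @ K @ beta)                 # ||q_hat||_{H_ktil}
r_Q = 10.0 * q_hat_norm                               # conservative radius r_Q = 10 ||q_hat||
print(r_Q)
```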
H.2 SENSITIVITY TO HYPER-PARAMETERS

We investigate the sensitivity of our algorithm to the choice of hyper-parameters. The hyper-parameters mainly concern how we choose the function classes $\mathcal{Q}$ and $\mathcal{W}$.

Radius of $\mathcal{Q}$
Recall that we choose $\mathcal{Q}$ to be a ball in an RKHS with radius $r_{\mathcal{Q}}$, that is, $\mathcal{Q} = r_{\mathcal{Q}} \tilde{\mathcal{K}} = \{r_{\mathcal{Q}} f : f \in \tilde{\mathcal{K}}\}$, where $\tilde{\mathcal{K}}$ is the unit ball of the RKHS with kernel $\tilde{k}$. Ideally, we want to ensure that $r_{\mathcal{Q}} \ge \|q^*\|_{\tilde{\mathcal{K}}}$ so that $q^* \in \mathcal{Q}$. Since it is hard to analyze the behavior of the algorithm when $q^*$ is unknown, we consider a synthetic environment where the true $q^*$ is known. This is done by explicitly specifying a $q^*$ inside $\tilde{\mathcal{K}}$ and then inferring the corresponding deterministic reward function by inverting the Bellman equation:
$$r(x) := q^*(x) - \gamma\,\mathbb{E}_{x' \sim P_\pi(\cdot \mid x)}\big[q^*(x')\big].$$
Here $r$ is a deterministic function, instead of a random variable, with an abuse of notation (a toy sketch of this construction on a finite state space is given at the end of this subsection). In this way, we get access to the true RKHS norm of $q^*$: $\rho^* = \|q^*\|_{\tilde{\mathcal{K}}}$. For simplicity, we set both the state space $\mathcal{S}$ and the action space $\mathcal{A}$ to be $\mathbb{R}$ and use a Gaussian policy $\pi(a \mid s) \propto \exp(f(s, a)/\tau)$, where $\tau$ is a positive temperature parameter. We set $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy.

Figure 3 shows the results as we set $r_{\mathcal{Q}}$ to $\rho^*$, $10\rho^*$, and $100\rho^*$, respectively. We can see that the tightness of the bound is affected significantly by the radius when the number $n$ of samples is very small. However, as $n$ grows (e.g., $n \ge 2 \times 10^3$ in our experiment), the length of the bounds becomes less sensitive to the chosen norm of $\mathcal{Q}$.

Similarity Between Behavior Policy and Target Policy
We study the effect of changing the temperature of the behavior policy, testing on the Inverted-Pendulum environment as previously described. Not surprisingly, the closer the behavior policy is to the target policy (with temperature $\tau = 0.1$), the tighter our confidence interval becomes, as observed in Figure 4(a).

Bandwidth of the RBF kernels
We study the results as we change the bandwidths of the kernels $k$ and $\tilde{k}$ for $\mathcal{W}$ and $\mathcal{Q}$, respectively. Figure 4(b) shows the length of the confidence interval for different bandwidth pairs in the Inverted-Pendulum environment. We obtain relatively tight confidence bounds as long as the bandwidths are set in a reasonable region (e.g., the bandwidth of $k$ in $[0.1, 0.5]$ and the bandwidth of $\tilde{k}$ in $[0.5, 3]$).
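The following is a toy sketch of the synthetic construction referenced above, using a small finite state-action space so that the transition operator under the target policy is an explicit matrix; the chosen $q^*$ profile is an illustrative assumption.

```python
# Invert the Bellman equation: given a chosen q_star and transition matrix P_pi,
# define the deterministic reward r = q_star - gamma * P_pi @ q_star, so that
# q_star is exactly the Q-function of the resulting synthetic environment.
import numpy as np

rng = np.random.default_rng(0)
n_x, gamma = 10, 0.95
P_pi = rng.random((n_x, n_x))
P_pi /= P_pi.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
q_star = np.sin(np.linspace(0, 3, n_x))          # chosen "true" Q-function

r = q_star - gamma * P_pi @ q_star               # inverted Bellman equation
assert np.allclose(q_star, r + gamma * P_pi @ q_star)  # q_star is the fixed point
print(r)
```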
Varying Trajectory Length $T$ in $\hat{D}_n$
When collecting $\hat{D}_n$, we can either have a small number of long trajectories or a larger number of short trajectories. In Figure 5(b)-(c), we vary the length $T$ of the trajectories used to collect $\hat{D}_n$, while fixing the total number $n$ of transition pairs; the number of trajectories in each $\hat{D}_n$ is then $m = n/T$. We can see that the trajectory length does not impact the results significantly, especially when the length is reasonably large (e.g., $T \ge 20$).

I MORE RELATED WORKS

We give a more detailed overview of different approaches to uncertainty estimation in OPE.

Finite-Horizon Importance Sampling (IS)
Assume the data is collected by rolling out a known behavior policy $\pi_0$ up to a trajectory length $T$; then we can estimate the finite-horizon reward by changing $\mathbb{E}_{\pi,P}[\cdot]$ to $\mathbb{E}_{\pi_0,P}[\cdot]$ with importance sampling (e.g., Precup et al., 2000; Precup, 2001; Thomas et al., 2015a;b). Taking trajectory-wise importance sampling as an example, assume we collect a set of independent trajectories $\tau_i := \{s_t^i, a_t^i, r_t^i\}_{t=0}^{T-1}$, $i = 1, \ldots, m$, up to trajectory length $T$ by unrolling a known behavior policy $\pi_0$. When $T$ is large, we can estimate $J_\pi$ by a weighted average:
$$\hat{J}^{\mathrm{IS}} = \frac{1}{m}\sum_{i=1}^m \omega(\tau_i)\,J(\tau_i), \quad \text{where} \quad \omega(\tau_i) = \prod_{t=0}^{T-1} \frac{\pi(a_t^i \mid s_t^i)}{\pi_0(a_t^i \mid s_t^i)}, \quad J(\tau_i) = \sum_{t=0}^{T-1} \gamma^t r_t^i. \qquad (24)$$
One can construct non-asymptotic confidence bounds based on $\hat{J}^{\mathrm{IS}}$ using variants of concentration inequalities (Thomas, 2015; Thomas et al., 2015b). Unfortunately, a key problem with this IS estimator is that the importance weight $\omega(\tau_i)$ is a product of density ratios over time, and hence tends to cause an explosion in variance when the trajectory length $T$ is large. Although improvements can be made by using per-step and self-normalized weights (Precup, 2001) or control variates (Jiang & Li, 2016; Thomas & Brunskill, 2016), the curse of horizon remains a key issue for classical IS-based estimators (Liu et al., 2018a). Moreover, due to the time dependency between the transition pairs inside each trajectory, the non-asymptotic concentration bounds can only be applied at the trajectory level and hence decay with the number $m$ of independent trajectories at an $O(1/\sqrt{m})$ rate, while $m$ can be small in practice. We could in principle apply concentration inequalities for Markov chains (e.g., Paulin, 2015) to the time-dependent transition pairs, but such inequalities require an upper bound on a certain mixing coefficient of the Markov chain, which is unknown and hard to construct empirically. Our work addresses these limitations by constructing a non-asymptotic bound that decays with the number $n = mT$ of transition pairs, without requiring known behavior policies or independent trajectories.
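For reference, a minimal Python sketch of the trajectory-wise IS estimator in Eq. (24) follows; the policies `pi` and `pi0` are hypothetical callables returning action probabilities, and the toy trajectories are placeholders.

```python
# Trajectory-wise importance sampling: J_hat = (1/m) * sum_i w(tau_i) * J(tau_i),
# with w(tau_i) = prod_t pi(a_t|s_t) / pi0(a_t|s_t) and J(tau_i) the discounted return.
import numpy as np

def is_estimate(trajs, pi, pi0, gamma):
    vals = []
    for traj in trajs:                        # each traj is a list of (s, a, r) tuples
        w, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            w *= pi(a, s) / pi0(a, s)         # product of per-step density ratios
            ret += gamma ** t * r             # discounted return J(tau)
        vals.append(w * ret)
    return float(np.mean(vals))

# Toy usage: two-action policies with constant probabilities.
pi  = lambda a, s: 0.7 if a == 1 else 0.3
pi0 = lambda a, s: 0.5
trajs = [[(0, 1, 1.0), (0, 0, 0.5)], [(0, 1, 0.0), (0, 1, 1.0)]]
print(is_estimate(trajs, pi, pi0, gamma=0.95))
```

Note how the weight `w` is a product over time steps: this is exactly the source of the variance explosion for large $T$ discussed above.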
Infinite-Horizon, Behavior-Agnostic OPE
Our work is closely related to the recent advances in infinite-horizon and behavior-agnostic OPE, including, for example, Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020); Liu et al. (2020); Yang et al. (2020b); Xie et al. (2019); Yin & Wang (2020), as well as the DICE family (e.g., Nachum et al., 2019a;b; Zhang et al., 2020a; Wen et al., 2020; Zhang et al., 2020b). These methods are based on estimating either the value function or the stationary visitation distribution, which are shown to form a primal-dual relation (Tang et al., 2020a; Uehara et al., 2020; Jiang & Huang, 2020) that we elaborate on in depth in Section 3. Besides Feng et al. (2020), which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE (e.g., Liu et al., 2018b; Jiang & Huang, 2020; Duan et al., 2020; Dai et al., 2020; Feng et al., 2020; Tang et al., 2020b; Yin et al., 2020; Lazic et al., 2020). For example, Dai et al. (2020) develop an asymptotic confidence bound (CoinDice) for DICE estimators under an i.i.d. assumption on the off-policy data; Duan et al. (2020) provide data-dependent confidence bounds based on fitted Q-iteration (FQI) with linear function approximation when the off-policy data consists of a set of independent trajectories; Jiang & Huang (2020) provide a minimax method closely related to ours but do not analyze the error introduced by finite data; Tang et al. (2020b) propose a fixed-point algorithm for constructing deterministic intervals of the true value function when the reward and transition models are deterministic and the true value function has a bounded Lipschitz norm.

Model-Based Methods
Since the model $P$ is the only unknown variable, we can construct an estimator $\hat{P}$ of $P$ using maximum likelihood estimation or other methods, and plug it into Eq. (1) to obtain a plug-in estimator $\hat{J} = J_{\pi,\hat{P}}$. This yields the model-based approach to OPE (e.g., Jiang & Li, 2016; Liu et al., 2018b); a toy sketch of this plug-in estimator on a finite MDP appears at the end of this appendix. One can also estimate the uncertainty in $J_{\pi,\hat{P}}$ by propagating the uncertainty in $\hat{P}$ (e.g., Asadi et al., 2018; Duan et al., 2020), but it is hard to obtain non-asymptotic and computationally efficient bounds unless $\hat{P}$ is assumed to be a simple linear model. In general, estimating the whole model $P$ can be an unnecessarily complicated intermediate step for the possibly simpler problem of estimating $J_{\pi,P}$.

Bootstrapping, Bayes, Distributional RL
As a general approach to uncertainty estimation, bootstrapping has been used for interval estimation in RL in various ways (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020; Hao et al., 2021). Bootstrapping is simple and highly flexible, and can be applied to time-dependent data (as appears in RL) using variants of block bootstrapping methods (e.g., Lahiri, 2013; White & White, 2010). However, bootstrapping typically only provides asymptotic guarantees; although non-asymptotic bounds for the bootstrap exist (e.g., Arlot et al., 2010), they are sophisticated, difficult to use in practice, and would require knowing the mixing condition of the dependent data. Moreover, bootstrapping is time-consuming, since it requires repeating the whole off-policy evaluation pipeline on a large number of resampled datasets. Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016b; Yang et al., 2020a) offer another general approach to uncertainty estimation in RL, but they require approximate inference algorithms and do not come with non-asymptotic frequentist guarantees. In addition, distributional RL (e.g., Bellemare et al., 2017) seeks to quantify the intrinsic uncertainty inside the Markov decision process, which is orthogonal to the epistemic uncertainty that we consider in off-policy evaluation.
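The toy sketch of the model-based plug-in estimator referenced above follows; it assumes a small finite MDP (indexed state-actions under the target policy) so the model can be estimated by transition counts, and all sizes and rewards are illustrative placeholders.

```python
# Model-based plug-in OPE on a finite MDP: estimate P_hat from counted transitions,
# solve q = r + gamma * P_hat q, and report J_hat = <d0, q_hat>.
import numpy as np

rng = np.random.default_rng(0)
K, gamma = 5, 0.95
P_true = rng.random((K, K))
P_true /= P_true.sum(1, keepdims=True)     # true transition matrix (unknown in practice)
r = rng.random(K)                          # mean reward per state-action
d0 = np.full(K, 1.0 / K)                   # initial distribution D_{pi,0}

# Maximum-likelihood (count-based) model estimate from sampled transitions:
counts = np.zeros((K, K))
for _ in range(5000):
    x = rng.integers(K)
    counts[x, rng.choice(K, p=P_true[x])] += 1
P_hat = counts / counts.sum(1, keepdims=True)

q_hat = np.linalg.solve(np.eye(K) - gamma * P_hat, r)    # plug-in Q-function
q_true = np.linalg.solve(np.eye(K) - gamma * P_true, r)  # ground truth for comparison
print(d0 @ q_hat, d0 @ q_true)             # plug-in estimate vs. true J_pi
```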
1. What is the focus of the paper regarding off-policy evaluation?
2. What are the strengths of the proposed method, particularly in its ability to improve confidence intervals?
3. What are the weaknesses of the paper, especially regarding its claims and experiments?
4. Do you have any concerns about the method's applicability in typical experimental setups?
5. How does the reviewer assess the clarity and writing style of the paper?
Review
Review
The objective of this paper is to provide a method that produces tighter confidence intervals for off-policy evaluation. The paper claims to develop a new primal-dual perspective on OPE confidence intervals and a tight concentration inequality, and it develops both theoretical and empirical evidence to support these claims. Previous methods (Feng et al. 2020) estimate a high-confidence upper bound on the Bellman residual for q_pi given a set of data and then perform a global optimization procedure to find the largest q function with an empirical error less than that upper bound on the residual. This paper instead proposes to estimate a confidence interval for the expected Bellman error over the empirical data set for any q function. The dual approach from other OPE estimators is then leveraged to create high-confidence bounds on the objective function. This paper's strengths are that the presented method could significantly improve confidence intervals for off-policy evaluation with a moderate-length horizon and when an RKHS can represent the q function. There is both theory and empirical data to gain insight into the effectiveness of the method and show that it is a possible solution. Although the method appears to be effective, I cannot yet recommend it for acceptance due to some unsubstantiated claims and a lack of clarity in the paper's writing. There are also some ways that the experiments should be improved. This paper claims to produce a tight concentration inequality, but this is not proven. The claim may be a matter of wording, and it may be intended to mean that the presented method is only a relative improvement over existing methods. Can the authors clarify the intended scope of this claim? If the claim is that the concentration inequality is tight, then a proof showing it cannot be improved is required. Additionally, it is stated that this work is a "substantial extension of [dual form OPE] to the non-asymptotic region, and therefore is both of theoretical and practical significance." However, it is unclear what problem in previous methods this paper overcomes to make it a substantial extension to the non-asymptotic region. The formulation and the use of the dual form do not appear substantial as currently presented because, as the authors point out, many others have proposed this form. What is the source of this substantial extension? In the definition of c_{q,k}, the supremum over x, y is used, but it is unclear whether this is over the empirical data or over any possible x, y. Can the authors clarify this? Notes about experiments: The results look very promising for the method, and the ablation studies in the appendix help in understanding some of its properties. However, there is significant room for improvement in the experimental design. The main component lacking in the experiments is a demonstration of the limitations of the method. The only thing I can take away from these results is that this method worked on these problems. I do not doubt that this method is more effective than PDIS for moderate-length horizons, but I cannot predict when it will be useful. Horizons of length 50 and 100 were used, but the discount factor was set to 0.95, making the effective horizon only 20. I do not see why this is an effective choice for demonstrating the capabilities of the method. Furthermore, all of these environments are typically simulated with much longer horizons (at least a thousand steps for cart-pole, inverted pendulum, and the diabetes simulator).
It would be helpful to see this method's capabilities in a more typical experimental setup. Another shortcoming of the experiments is that the behavior policy is only a high-temperature version of the evaluation policy. Typically, when off-policy estimation is performed, it is not to reduce the policy's noise but to evaluate a different policy altogether. Since this work makes no claims or assumptions about the policy used to generate the data, it would make sense to demonstrate that the confidence intervals are accurate and reliable when using significantly different behavior policies or multiple behavior policies. Writing notes: Overall, the paper's writing indicates that it was written for experts who already know and understand the paper's concepts. It would be more useful to the ICLR and RL communities if the paper were written for a more general audience. Minor notes: In Section 2, the objective function is called the expected reward, which implies an average-reward setting, but this is not the objective function's formulation; the wording is confusing here. There is a missing reference to the proof of Theorem 4.2 in the appendix. The term IS is used for importance sampling, but the formulation is actually per-decision importance sampling (PDIS); specifying this would add clarity to the paper. Figure 1 (c) is not described. --EDIT-- Updated score to 7 after the authors' response to questions and changes to the paper.
However, from the theoretical perspective, it is desirable to know a priori that the length of the interval will decrease with a fast rate as the data size n increases. We now show that this is the case ifWo is chosen to be sufficiently rich so that it includes a ω ∈ Wo such that D̂ωn ≈ Dπ . Theorem C.1. AssumeWo is sufficiently rich to include a “good” ω∗ inWo with D̂ω ∗ n ≈ Dπ in that sup q∈Q ∣∣∣ED̂ω∗n [R̂q(x;x′, r)]− EDπ [R̂q(x;x′, r)]∣∣∣ ≤ cnα , (19) where c and α are two positive coefficients. Then we have max { Ĵ+Q,W − Jπ, Jπ − Ĵ−Q,W } ≤ c nα + εn ‖ω∗‖Wo . Assumption (19) holds if D̂n is collected following a Markov chain with certain strong mixing condition and weakly converges to some limit discussion D̂∞ whose support is X , for which we can define ω∗(x) = Dπ(x)/D∞(x). In this case, if Q is a finite ball in RKHS, then we can achieve (19) with α = 1/2, and yields the overall bound of rate O(n−1/2). For more general function classes, α depends on the martingale Rademacher complexity of function set R̂Q = {Rq(x, y) : q ∈ Q} Rakhlin et al. (2015). In our empirical reults, we observe that the gap of the practically constructed bounds tend to follow the O(n−1/2) rate. Proof. Note that Jπ = EDπ [r] = EDπ [r], and IQ(ω; D̂n) = sup q∈Q { ED̂ωn [γq(x ′)− q(x)]− EDπ [γq(x′)− q(x)] } . Because ω∗ ∈ W , we have Ĵ+W,Q − Jπ ≤ F̂+Q (ω∗)− Jπ = ED̂ωn [r]− EDπ [r] + IQ(ωπ; D̂n) + εn ‖ω ∗‖Wo = sup q∈Q { ED̂ωn [ R̂q(x, y) ] − EDπ [ R̂q(x, y) ]} + εn ‖ω∗‖Wo ≤ c nα + εn ‖ω∗‖Wo . The case of lower bound follows similarly. D OPTIMIZATION ONWo Consider the optimization of ω inWo F̂+Q (ω) := 1 n n∑ i=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Wo √ 2cqπ,k log(2/δ) n (20) AssumeWo is the RKHS of kernel k(x, x̄), that is,Wo = Hk. By the finite representer theorem of RKHS (Smola et al., 2007). the optimization of ω in RKHSHk can be reduced to a finite dimensional optimization problem. Specifically, the optimal solution of (20) can be written into a form of ω(x) = ∑n i=1 k(x, xi)αi with ‖ω‖ 2 Hk = ∑n i,j=1 k(xi, xj)αiαj for some vector α := [αi] n i=1 ∈ Rn. WriteK = [k(xi, xj)]ni,j=1 and r = [ri] n i=1. The optimization of ω reduces to a finite dimensional optimization on α: min α∈Rn 1 n r>Kα+ IQ(Kα; D̂n) + √ αKα √ 2cqπ,k log(2/δ) n , where IQ(Kα; D̂n) = max q∈Q { EDπ,0 [q] + 1 n (T̂q)>Kα } , and T̂q = [γq(x′i) − q(xi)]ni=1. When Q is RKHS, we can calculate IQ(Kα; D̂n) using (22) in section F. This computation can be still expensive when n is large. Fortunately, our confidence bound holds for any ω; better ω only gives tighter bounds, but it is not necessary to find the global optimal ω. Therefore, one can use any approximation algorithm to find ω, which provides a trade-off of tightness and computational cost. We discuss two methods: 1) Approximating α The length of α can be too large when n is large. To address this, we assume αi = g(xi, θ), where g is any parametric function (such as a neural network) with parameter θ which can be much lower dimensional than α. We can then optimize θ with stochastic gradient descent, by approximating all the data averaging 1n ∑n i=1(·) with averages over small mini-batches; this would introduce biases in gradient estimation, but it is not an issue when the goal is only to get a reasonable approximation. 2) Replacing kernel k Assume the kernel k yields a random feature expansion: k(x, x̄) = Eβ∼π[φ(x, β)φ(x̄, β)], where φ(x, β) is a feature map with parameter β and π is a distribution of β. We draw {βi}mi=1 i.i.d. from π, where m is taken to be much smaller than n. 
We then replace $k$ with $\hat{k}(x,\bar{x}) = \frac{1}{m}\sum_{i=1}^m \phi(x,\beta_i)\phi(\bar{x},\beta_i)$ and $\mathcal{H}_k$ with $\mathcal{H}_{\hat{k}}$. That is, we consider solving
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \min_{\omega\in\mathcal{H}_{\hat{k}}}\; \hat{F}^+_{\mathcal{Q}}(\omega) := \frac{1}{n}\sum_{i=1}^n r_i\,\omega(x_i) + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \|\omega\|_{\mathcal{H}_{\hat{k}}}\sqrt{\frac{2c_{q_\pi,\hat{k}}\log(2/\delta)}{n}}.$$
It is known that any function $\omega$ in $\mathcal{H}_{\hat{k}}$ can be represented as $\omega(x) = \frac{1}{m}\sum_{i=1}^m w_i\phi(x,\beta_i)$ for some $w = [w_i]_{i=1}^m \in \mathbb{R}^m$, and satisfies $\|\omega\|^2_{\mathcal{H}_{\hat{k}}} = \frac{1}{m}\sum_{i=1}^m w_i^2$. In this way, the problem reduces to optimizing an $m$-dimensional vector $w$, which can be solved by standard convex optimization techniques.

E CONCENTRATION INEQUALITIES FOR GENERAL FUNCTIONAL BELLMAN LOSSES

When $\mathcal{W}$ is a general function set, one can still obtain a concentration bound using Rademacher complexity. Define $\hat{R}q\circ\mathcal{W} := \{h(x,y) = \hat{R}q(x,y)\omega(x) : \omega\in\mathcal{W}\}$. Using the standard derivation in Rademacher complexity theory in conjunction with martingale theory (Rakhlin et al., 2015), we have
$$\sup_{\omega\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^n \left(\hat{R}q(x_i,y_i) - R_\pi q(x_i)\right)\omega(x_i)\right\} \le 2\,\mathrm{Rad}(\hat{R}q\circ\mathcal{W}) + \sqrt{\frac{2c_{q,\mathcal{W}}\log(2/\delta)}{n}},$$
where $\mathrm{Rad}(\hat{R}q\circ\mathcal{W})$ is the sequential Rademacher complexity as defined in Rakhlin et al. (2015). A triangle inequality yields
$$\left|L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n) - L^*_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)\right| \le \sup_{\omega\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^n \left(\hat{R}q(x_i,y_i) - R_\pi q(x_i)\right)\omega(x_i)\right\},$$
and therefore
$$\left|L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n) - L^*_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)\right| \le 2\,\mathrm{Rad}(\hat{R}q\circ\mathcal{W}) + \sqrt{\frac{2c_{q,\mathcal{W}}\log(2/\delta)}{n}}, \qquad (21)$$
where $c_{q,\mathcal{W}} = \sup_{\omega\in\mathcal{W}}\sup_{x,y}\left(\hat{R}q(x,y) - R_\pi q(x)\right)^2\omega(x)^2$. When $\mathcal{W}$ equals the unit ball $\mathcal{K}$ of the RKHS of kernel $k$, we have $c_{q,k} = c_{q,\mathcal{W}}$, and hence this bound is strictly worse than the bound in Theorem 4.1.

F CLOSED FORM OF $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ WHEN $\mathcal{Q}$ IS AN RKHS BALL

Similar to $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$, when $\mathcal{Q}$ is taken to be the unit ball $\tilde{\mathcal{K}}$ of the RKHS of a positive definite kernel $\tilde{k}(x,\bar{x})$, (7) admits the bilinear closed form shown in Mousavi et al. (2020):
$$I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)^2 = A - 2B + C, \qquad (22)$$
where
$$A = \mathbb{E}_{(x,\bar{x})\sim\mathcal{D}_{\pi,0}\times\mathcal{D}_{\pi,0}}\left[\tilde{k}(x,\bar{x})\right], \quad B = \mathbb{E}_{(x,\bar{x})\sim\hat{\mathcal{D}}_n^\omega\times\mathcal{D}_{\pi,0}}\left[\hat{T}^x_\pi\tilde{k}(x,\bar{x})\right], \quad C = \mathbb{E}_{(x,\bar{x})\sim\hat{\mathcal{D}}_n^\omega\times\hat{\mathcal{D}}_n^\omega}\left[\hat{T}^x_\pi\hat{T}^{\bar{x}}_\pi\tilde{k}(x,\bar{x})\right],$$
where $\hat{T}_\pi f(x) = \gamma f(x') - f(x)$; in $\hat{T}^x_\pi\hat{T}^{\bar{x}}_\pi\tilde{k}(x,\bar{x})$, we apply $\hat{T}^{\bar{x}}_\pi$ and then $\hat{T}^x_\pi$ sequentially, treating $\tilde{k}$ first as a function of $\bar{x}$ and then as a function of $x$.
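To make the closed form (22) concrete, here is a minimal NumPy sketch (ours; the array layout, the RBF bandwidth, and the helper names are assumptions) that evaluates $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)^2 = A - 2B + C$ from samples:

```python
import numpy as np

def rbf(X, Y, bw):
    # Gram matrix of the Gaussian RBF kernel k(x, y) = exp(-||x-y||^2 / (2 bw^2)).
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * bw**2))

def IQ_squared(X, Xp, w, X0, gamma, bw):
    """Eq. (22) with Q the unit RKHS ball of an RBF kernel.
    X, Xp: (n, d) features of (x_i, x_i'); w: the weights omega(x_i) / n;
    X0: (m, d) i.i.d. draws from D_{pi,0}."""
    A = rbf(X0, X0, bw).mean()
    # T-hat applied to the first argument: gamma * k(x', .) - k(x, .).
    B = w @ (gamma * rbf(Xp, X0, bw) - rbf(X, X0, bw)).mean(axis=1)
    # T-hat applied to both arguments, expanded over the four kernel blocks.
    C = w @ (gamma**2 * rbf(Xp, Xp, bw) - gamma * rbf(X, Xp, bw)
             - gamma * rbf(Xp, X, bw) + rbf(X, X, bw)) @ w
    return A - 2.0 * B + C
```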
G MORE ON THE ORACLE BOUND AND ITS DUAL FORM

The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its dual form here. Using a Lagrange multiplier, the optimization in (18) can be rewritten as
$$\hat{J}^+_{\mathcal{Q},*} = \max_{q\in\mathcal{Q}}\min_{\omega} M_*(q,\omega;\hat{\mathcal{D}}_n), \qquad (23)$$
where
$$M_*(q,\omega;\hat{\mathcal{D}}_n) = \mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\left(\hat{R}q(x_i,y_i) - \hat{R}q_\pi(x_i,y_i)\right),$$
and $\omega$ now serves as the Lagrange multiplier. By weak duality, for any $\omega$,
$$\hat{J}^+_{\mathcal{Q},*} \le \hat{F}^+_{\mathcal{Q},*}(\omega) := \underbrace{\mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)}_{\text{known}} + \underbrace{R(\omega,q_\pi)}_{\text{unknown}}, \qquad R(\omega,q_\pi) = \frac{1}{n}\sum_{i=1}^n \omega(x_i)\hat{R}q_\pi(x_i,y_i).$$
The derivation for the lower bound is similar. So for any $\omega\in\mathcal{W}_o$, we have $[\hat{J}^-_{\mathcal{Q},*}, \hat{J}^+_{\mathcal{Q},*}] \subseteq [\hat{F}^-_{\mathcal{Q},*}(\omega), \hat{F}^+_{\mathcal{Q},*}(\omega)]$. Here the first two terms of $\hat{F}^+_{\mathcal{Q},*}(\omega)$ can be estimated empirically (they coincide with the first two terms of (16)), but the third term $R(\omega,q_\pi)$ depends on the unknown $q_\pi$ and hence needs to be further upper bounded. Our method can be viewed as constraining $\omega$ in $\mathcal{W}$, assumed to be the unit ball of $\mathcal{W}_o$, and applying a worst-case bound: for all $\omega\in\mathcal{W}_o$,
$$\hat{F}^+_{\mathcal{Q},*}(\omega) = \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + R(\omega,q_\pi) \le \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \|\omega\|_{\mathcal{W}_o}\sup_{h\in\mathcal{W}} R(h,q_\pi) = \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \|\omega\|_{\mathcal{W}_o} L_{\mathcal{W}}(q_\pi;\hat{\mathcal{D}}_n) \overset{\text{w.p. } 1-\delta}{\le} \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \varepsilon_n\|\omega\|_{\mathcal{W}_o} = \hat{F}^+_{\mathcal{Q}}(\omega),$$
where the last step applies the high-probability bound $\Pr(L_{\mathcal{W}}(q_\pi;\hat{\mathcal{D}}_n) \le \varepsilon_n) \ge 1-\delta$. A similar derivation for the lower bound gives
$$\Pr\left(\left[\hat{F}^-_{\mathcal{Q},*}(\omega),\, \hat{F}^+_{\mathcal{Q},*}(\omega)\right] \subseteq \left[\hat{F}^-_{\mathcal{Q}}(\omega),\, \hat{F}^+_{\mathcal{Q}}(\omega)\right]\right) \ge 1-\delta.$$
Therefore, our confidence bound $[\hat{F}^-_{\mathcal{Q}}(\omega), \hat{F}^+_{\mathcal{Q}}(\omega)]$ is a $1-\delta$ confidence outer bound of the oracle interval $[\hat{J}^-_{\mathcal{Q},*}, \hat{J}^+_{\mathcal{Q},*}] \subseteq [\hat{F}^-_{\mathcal{Q},*}(\omega), \hat{F}^+_{\mathcal{Q},*}(\omega)]$.

Introducing $\mathcal{Q}$ is necessary. Our method does not require any independence assumption between the transition pairs; the trade-off is that we have to assume that $q_\pi$ falls into a function set $\mathcal{Q}$ that imposes a certain smoothness assumption. This is necessary because the data only provide information about $q_\pi$ on a finite number of points, and $q_\pi$ can be arbitrarily non-smooth outside of the data points; hence no reasonable upper/lower bound can be obtained without a smoothness condition that extends the information on the data points to the other points in the domain.

Proposition G.1. Unless $\Pr_{s\sim\mathcal{D}_{\pi,0}}(s\notin\{s_i,s'_i\}_{i=1}^n) = 0$, for any $u\in\mathbb{R}$ there exists a function $q:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ such that
$$\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] = u, \qquad \hat{R}q(x_i,y_i) = \hat{R}q_\pi(x_i,y_i), \quad \forall i = 1,\dots,n.$$

Proof. Let $\mathcal{Q}_{\mathrm{null}}$ be the set of functions that are zero on $\{s_i,s'_i\}_{i=1}^n$, that is,
$$\mathcal{Q}_{\mathrm{null}} = \left\{g:\mathcal{S}\times\mathcal{A}\to\mathbb{R} \;:\; g(s,a) = 0,\ \forall s\in\{s_i,s'_i\}_{i=1}^n,\ a\in\mathcal{A}\right\}.$$
Then for every $g\in\mathcal{Q}_{\mathrm{null}}$ we have
$$\hat{R}(q_\pi + g)(x_i,y_i) = \hat{R}q_\pi(x_i,y_i), \quad \forall i = 1,\dots,n,$$
and $\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g] = \mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi] + \mathbb{E}_{\mathcal{D}_{\pi,0}}[g] = J_\pi + \mathbb{E}_{\mathcal{D}_{\pi,0}}[g]$. Take $g(s,a) = z\,\mathbb{I}(s\notin\{s_i,s'_i\}_{i=1}^n)$, where $z$ is any real number. Then
$$\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g] = J_\pi + z\,\Pr_{s\sim\mathcal{D}_{\pi,0}}(s\notin\{s_i,s'_i\}_{i=1}^n).$$
Because $\Pr_{s\sim\mathcal{D}_{\pi,0}}(s\notin\{s_i,s'_i\}_{i=1}^n) \ne 0$, we can choose $z$ to make $\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g]$ take an arbitrary value.

H ABLATION STUDY AND EXPERIMENTAL DETAILS

H.1 EXPERIMENTAL DETAILS

Environments and Dataset Construction. We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator. For Inverted-Pendulum we discretize the action space to be $\{-1, -0.3, -0.2, 0, 0.2, 0.3, 1\}$. The action spaces of CartPole and the medical treatment simulator are both discrete.

Policy Construction. We follow a setup similar to Feng et al. (2020) to construct the behavior and target policies. For all of the environments, we constrain our policy class to softmax policies and use PPO (Schulman et al., 2017) to train a good policy $\pi$; we then use different temperatures of the softmax policy to construct the target and behavior policies (we set the temperature $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy, so that the target policy is more deterministic than the behavior policy). We consider other choices of behavior policies in Section H.3. For horizon lengths, we fix $\gamma = 0.95$ and set the horizon length $H = 50$ for Inverted-Pendulum, $H = 100$ for CartPole, and $H = 50$ for the Diabetes simulator.

Algorithm Settings. We test the bound in Eq. (16)-(17). Throughout the experiments, we always set $\mathcal{W} = \mathcal{K}$, the unit ball of the RKHS with kernel $k(\cdot,\cdot)$, and $\mathcal{Q} = r_{\mathcal{Q}}\tilde{\mathcal{K}}$, the zero-centered ball of radius $r_{\mathcal{Q}}$ in an RKHS with kernel $\tilde{k}(\cdot,\cdot)$. We take both $k$ and $\tilde{k}$ to be Gaussian RBF kernels. The bandwidths of $k$ and $\tilde{k}$ are selected to make sure the functional Bellman loss is not large on a validation set. The radius is selected to be sufficiently large to ensure that $q_\pi$ is included in $\mathcal{Q}$: we use the data to fit an approximation $\hat{q}$ whose functional Bellman loss is below $\varepsilon_n$, and then set $r_{\mathcal{Q}} = 10\,\|\hat{q}\|_{\tilde{\mathcal{K}}}$. We optimize $\omega$ using the random feature approximation method described in Appendix D. Once $\omega_+$ and $\omega_-$ are found, we evaluate the bound in Eq. (16) exactly, to ensure that the theoretical guarantee holds.
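As a companion to the settings above, the following sketch (ours; `IQ_squared` refers to the Appendix F helper sketched earlier, and the random Fourier construction is an assumption for an RBF kernel) shows how the random-feature parameterization of Appendix D turns Eq. (16) into an $m$-dimensional problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(d, m, bw):
    # Random Fourier features for an RBF kernel with bandwidth bw:
    # phi(x, beta_i) = sqrt(2) * cos(x @ B[:, i] + b[i]).
    B = rng.normal(scale=1.0 / bw, size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return B, b

def omega_vals_and_norm(X, wvec, B, b):
    # omega(x) = (1/m) sum_i w_i phi(x, beta_i);
    # ||omega||^2 in the random-feature RKHS is (1/m) sum_i w_i^2.
    Phi = np.sqrt(2.0) * np.cos(X @ B + b)          # shape (n, m)
    m = wvec.shape[0]
    return Phi @ wvec / m, np.sqrt((wvec**2).sum() / m)

def upper_bound(r, omega_x, IQ_val, omega_norm, eps_n):
    # F_Q^+(omega) of Eq. (16): weighted reward + I_Q + eps_n * ||omega||.
    return (r * omega_x).mean() + IQ_val + eps_n * omega_norm
```

Any off-the-shelf optimizer over `wvec` then tightens the bound; by Theorem 4.2, a suboptimal `wvec` still yields a valid (just looser) interval.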
H.2 SENSITIVITY TO HYPER-PARAMETERS

We investigate the sensitivity of our algorithm to the choice of hyper-parameters, which mainly concern how we choose the function classes $\mathcal{Q}$ and $\mathcal{W}$.

Radius of $\mathcal{Q}$. Recall that we choose $\mathcal{Q}$ to be a ball in an RKHS with radius $r_{\mathcal{Q}}$, that is, $\mathcal{Q} = r_{\mathcal{Q}}\tilde{\mathcal{K}} = \{r_{\mathcal{Q}}f : f\in\tilde{\mathcal{K}}\}$, where $\tilde{\mathcal{K}}$ is the unit ball of the RKHS with kernel $\tilde{k}$. Ideally, we want to ensure $r_{\mathcal{Q}} \ge \|q_*\|_{\tilde{\mathcal{K}}}$ so that $q_*\in\mathcal{Q}$. Since it is hard to analyze the behavior of the algorithm when $q_*$ is unknown, we consider a synthetic environment where the true $q_*$ is known. This is done by explicitly specifying a $q_*$ inside $\tilde{\mathcal{K}}$ and then inferring the corresponding deterministic reward function $r(x)$ by inverting the Bellman equation:
$$r(x) := q_*(x) - \gamma\,\mathbb{E}_{x'\sim P_\pi(\cdot|x)}[q_*(x')].$$
Here $r$ is a deterministic function, instead of a random variable, with an abuse of notation. In this way, we get access to the true RKHS norm of $q_*$: $\rho_* = \|q_*\|_{\tilde{\mathcal{K}}}$. For simplicity, we set both the state space $\mathcal{S}$ and the action space $\mathcal{A}$ to be $\mathbb{R}$, and use a Gaussian policy $\pi(a|s)\propto\exp(f(s,a)/\tau)$, where $\tau$ is a positive temperature parameter. We set $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy. Figure 3 shows the results as we set $r_{\mathcal{Q}}$ to $\rho_*$, $10\rho_*$, and $100\rho_*$, respectively. The tightness of the bound is affected significantly by the radius when the number $n$ of samples is very small; however, as $n$ grows (e.g., $n \ge 2\times 10^3$ in our experiment), the length of the bounds becomes less sensitive to the chosen norm of $\mathcal{Q}$.

Similarity Between Behavior Policy and Target Policy. We study the effect of changing the temperature of the behavior policy, testing on the Inverted-Pendulum environment as previously described. Not surprisingly, the closer the behavior policy is to the target policy (with temperature $\tau = 0.1$), the tighter our confidence interval becomes, as observed in Figure 4(a).

Bandwidth of RBF Kernels. We study the results as we change the bandwidths of the kernels $k$ and $\tilde{k}$ for $\mathcal{W}$ and $\mathcal{Q}$, respectively. Figure 4(b) shows the length of the confidence interval for different bandwidth pairs in the Inverted-Pendulum environment. We obtain relatively tight confidence bounds as long as the bandwidths are set in a reasonable region (e.g., the bandwidth of $k$ in $[0.1, 0.5]$ and the bandwidth of $\tilde{k}$ in $[0.5, 3]$).

H.3 SENSITIVITY TO THE DATA COLLECTION PROCEDURE

We investigate the sensitivity of our method as we use different behavior policies to collect the dataset $\hat{\mathcal{D}}_n$.

Varying Behavior Policies. We study the effect of using different behavior policies (a sketch of the sampling scheme appears below). We consider the following cases:
1. Data is collected from a single behavior policy of the form $\pi_\alpha = \alpha\pi + (1-\alpha)\pi_0$, where $\pi$ is the target policy and $\pi_0$ is another policy. We construct $\pi$ and $\pi_0$ to be Gaussian policies of the form $\pi(a|s)\propto\exp(f(s,a)/\tau)$ with different temperatures $\tau$: the temperature of the target policy is $\tau = 0.1$ and the temperature of $\pi_0$ is $\tau = 1$.
2. The dataset $\hat{\mathcal{D}}_n$ is the combination of the data collected from multiple behavior policies $\pi_\alpha$ as defined above, with $\alpha\in\{0.0, 0.2, 0.4, 0.6, 0.8\}$.

Figure 5(a) shows the length of the confidence intervals produced by our method as we vary the number $n$ of transition pairs and the mixture rate $\alpha$. The length of the interval decays with the sample size $n$ for all mixture rates $\alpha$, and larger $\alpha$ yields better performance because the behavior policy is closer to the target policy.
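Below is a small sketch (ours; the softmax scores `f` and all names are hypothetical) of the sampling scheme for the mixture behavior policy $\pi_\alpha = \alpha\pi + (1-\alpha)\pi_0$ used above:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(f, tau):
    # pi(a|s) proportional to exp(f(s, a) / tau); smaller tau is more deterministic.
    def pi(s):
        z = f(s) / tau
        z = z - z.max()            # stabilize the exponentials
        p = np.exp(z)
        return p / p.sum()
    return pi

def sample_mixture_action(s, pi, pi0, alpha):
    # pi_alpha = alpha * pi + (1 - alpha) * pi0: flip an alpha-coin, then sample.
    probs = pi(s) if rng.random() < alpha else pi0(s)
    return rng.choice(len(probs), p=probs)

# Toy scores over 3 discrete actions; tau = 0.1 (target) vs. tau = 1 (pi_0).
f = lambda s: np.array([s, -s, 0.5 * s])
pi, pi0 = softmax_policy(f, 0.1), softmax_policy(f, 1.0)
print(sample_mixture_action(0.7, pi, pi0, alpha=0.4))
```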
Varying Trajectory Length $T$ in $\hat{\mathcal{D}}_n$. As we collect $\hat{\mathcal{D}}_n$, we can either have a small number of long trajectories or a larger number of short trajectories. In Figure 5(b)-(c), we vary the length $T$ of the trajectories used to collect $\hat{\mathcal{D}}_n$ while fixing the total number $n$ of transition pairs, so that the number of trajectories in each $\hat{\mathcal{D}}_n$ is $m = n/T$. The trajectory length does not impact the results significantly, especially when the length is reasonably large (e.g., $T \ge 20$).

I MORE RELATED WORKS

We give a more detailed overview of different approaches to uncertainty estimation in OPE.

Finite-Horizon Importance Sampling (IS). Assume the data is collected by rolling out a known behavior policy $\pi_0$ up to a trajectory length $T$; then we can estimate the finite-horizon reward by changing $\mathbb{E}_{\pi,P}[\cdot]$ to $\mathbb{E}_{\pi_0,P}[\cdot]$ with importance sampling (e.g., Precup et al., 2000; Precup, 2001; Thomas et al., 2015a;b). Taking trajectory-wise importance sampling as an example, assume we collect a set of independent trajectories $\tau_i := \{s^i_t, a^i_t, r^i_t\}_{t=0}^{T-1}$, $i = 1,\dots,m$, up to trajectory length $T$ by unrolling a known behavior policy $\pi_0$. When $T$ is large, we can estimate $J_\pi$ by a weighted average:
$$\hat{J}^{\mathrm{IS}} = \frac{1}{m}\sum_{i=1}^m \omega(\tau_i)\,J(\tau_i), \qquad \omega(\tau_i) = \prod_{t=0}^{T-1}\frac{\pi(a^i_t|s^i_t)}{\pi_0(a^i_t|s^i_t)}, \qquad J(\tau_i) = \sum_{t=0}^{T-1}\gamma^t r^i_t. \qquad (24)$$
One can construct non-asymptotic confidence bounds based on $\hat{J}^{\mathrm{IS}}$ using variants of concentration inequalities (Thomas, 2015; Thomas et al., 2015b). Unfortunately, a key problem with this IS estimator is that the importance weight $\omega(\tau_i)$ is a product of density ratios over time, and hence tends to cause an explosion in variance when the trajectory length $T$ is large. Although improvements can be made by using per-step and self-normalized weights (Precup, 2001) or control variates (Jiang & Li, 2016; Thomas & Brunskill, 2016), the curse of horizon remains a key issue for the classical IS-based estimators (Liu et al., 2018a). Moreover, due to the time dependency between the transition pairs inside each trajectory, the non-asymptotic concentration bounds can only be applied at the trajectory level, and hence decay with the number $m$ of independent trajectories at an $O(1/\sqrt{m})$ rate, even though $m$ can be small in practice. We could in principle apply concentration inequalities for Markov chains (e.g., Paulin, 2015) to the time-dependent transition pairs, but such inequalities require an upper bound on a certain mixing coefficient of the Markov chain, which is unknown and hard to estimate empirically. Our work addresses these limitations by constructing a non-asymptotic bound that decays with the number $n = mT$ of transition pairs, without requiring known behavior policies or independent trajectories.
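For reference, here is a minimal sketch of the trajectory-wise IS estimator in Eq. (24) (ours; the data layout is an assumption), with the weight accumulated in log space for numerical stability:

```python
import numpy as np

def trajectory_is(trajectories, pi, pi0, gamma):
    """Eq. (24). Each trajectory is a list of (s, a, r) tuples; pi(a, s) and
    pi0(a, s) return the action probability under the target/behavior policy."""
    estimates = []
    for traj in trajectories:
        log_w, J = 0.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            log_w += np.log(pi(a, s)) - np.log(pi0(a, s))  # product of ratios
            J += gamma**t * r                              # discounted return
        estimates.append(np.exp(log_w) * J)
    return float(np.mean(estimates))
```

Even in log space, the variance of `exp(log_w)` grows quickly with the trajectory length $T$, which is exactly the curse of horizon discussed above.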
Infinite-Horizon, Behavior-Agnostic OPE. Our work is closely related to the recent advances in infinite-horizon and behavior-agnostic OPE, including, for example, Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020); Liu et al. (2020); Yang et al. (2020b); Xie et al. (2019); Yin & Wang (2020), as well as the DICE family (e.g., Nachum et al., 2019a;b; Zhang et al., 2020a; Wen et al., 2020; Zhang et al., 2020b). These methods are based on estimating either the value function or the stationary visitation distribution, which are shown to form a primal-dual relation (Tang et al., 2020a; Uehara et al., 2020; Jiang & Huang, 2020) that we elaborate on in depth in Section 3. Besides Feng et al. (2020), which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE (e.g., Liu et al., 2018b; Jiang & Huang, 2020; Duan et al., 2020; Dai et al., 2020; Feng et al., 2020; Tang et al., 2020b; Yin et al., 2020; Lazic et al., 2020). For example, Dai et al. (2020) develop an asymptotic confidence bound (CoinDice) for DICE estimators under an i.i.d. assumption on the off-policy data; Duan et al. (2020) provide data-dependent confidence bounds based on fitted Q-iteration (FQI) with linear function approximation when the off-policy data consists of a set of independent trajectories; Jiang & Huang (2020) provide a minimax method closely related to ours but do not analyze the data error; Tang et al. (2020b) propose a fixed-point algorithm for constructing deterministic intervals of the true value function when the reward and transition models are deterministic and the true value function has bounded Lipschitz norm.

Model-Based Methods. Since the model $P$ is the only unknown variable, we can construct an estimator $\hat{P}$ of $P$ using maximum likelihood estimation or other methods, and plug it into Eq. (1) to obtain a plug-in estimator $\hat{J} = J_{\pi,\hat{P}}$. This yields the model-based approach to OPE (e.g., Jiang & Li, 2016; Liu et al., 2018b). One can also estimate the uncertainty in $J_{\pi,\hat{P}}$ by propagating the uncertainty in $\hat{P}$ (e.g., Asadi et al., 2018; Duan et al., 2020), but it is hard to obtain non-asymptotic and computationally efficient bounds unless $\hat{P}$ is restricted to simple linear models. In general, estimating the whole model $P$ can be an unnecessarily complicated intermediate step for the possibly simpler problem of estimating $J_{\pi,P}$.

Bootstrapping, Bayes, Distributional RL. As a general approach to uncertainty estimation, bootstrapping has been used for interval estimation in RL in various ways (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020; Hao et al., 2021). Bootstrapping is simple and highly flexible, and can be applied to time-dependent data (as appears in RL) using variants of block bootstrapping (e.g., Lahiri, 2013; White & White, 2010). However, bootstrapping typically only provides asymptotic guarantees; although non-asymptotic bounds for the bootstrap exist (e.g., Arlot et al., 2010), they are sophisticated, difficult to use in practice, and would require knowing the mixing condition of the dependent data. Moreover, bootstrapping is time-consuming, since it requires repeating the whole off-policy evaluation pipeline on a large number of resampled datasets. Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016b; Yang et al., 2020a) offer another general approach to uncertainty estimation in RL, but they rely on approximate inference algorithms and do not come with non-asymptotic frequentist guarantees. In addition, distributional RL (e.g., Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the epistemic uncertainty that we consider in off-policy evaluation.
1. What is the focus of the paper regarding Markov decision processes?
2. What are the strengths and weaknesses of the proposed optimization-based method for off-policy evaluation?
3. How does the reviewer assess the significance and novelty of the work compared to prior methods?
4. Are there any concerns regarding the theoretical analysis and interpretation of the results?
5. What are the limitations of the approach, particularly in its applicability to practical scenarios?
Review
Review

General overview
The paper studies an off-policy evaluation (OPE) problem for Markov decision processes (MDPs). It suggests an optimization-based method that can construct a non-asymptotic confidence interval, for a given confidence level, for the value function of a policy starting from a fixed initial distribution. The paper builds on the works of Feng et al. (2019, 2020); the main advantages of the current work over the previous methods are that the suggested approach guarantees a faster convergence rate, does not require full independence between transition pairs, and does not need the global optimal solution of the underlying optimization problem in order to construct guaranteed confidence intervals. The authors present some theoretical results about the construction, including a discussion of the special case of using RKHS approaches, and also present numerical experiments on benchmark problems, such as the inverted pendulum, cartpole, and type-1 diabetes.

Strengths of the paper
-- In general, constructing confidence regions for value functions of RL policies is an important problem (however, the paper only addresses a restrictive special case of this problem; see below).
-- The presented method is a clear improvement over a recent OPE confidence interval construction, with fewer conditions and a better rate (for this special case of OPE).
-- The properties of the method are analyzed theoretically; "primal" and "dual" bounds are given.
-- Illustrative numerical experiments are also presented on benchmark RL problems.

Weaknesses of the paper
-- The paper is obscurely written; for example, several objects are not precisely defined. It is not clear from the description on page 2 whether the state and action spaces of the MDP are finite or can be more general (for example, Borel spaces). If the state space is finite, then using RKHS approaches (at least theoretically) seems unnecessary. On the other hand, if the state space can be infinite, then some structural assumptions are needed, for example, about its measurability.
-- It is also not clear how the quantity I_Q(omega, hat{D}_n) should be computed in practice.
-- The precise interpretation of the theoretical results, such as Theorem 4.2, is not obvious, either.
-- A major drawback of this work is that it only considers the OPE problem from a fixed, known initial distribution of the states. This is no more general than solving the problem for only one particular starting state. A much more interesting problem would be to obtain a confidence region for the entire value function, under some structural assumptions on the problem.
-- The claim in the "Experiments" part that "Our method is safe (always captures the true value) and tight [...], while the other methods are either too lose or often fail to capture the ground truth" is dubious, as the goal (see also page 2) is not to "always" capture the true value, but to capture it with a given probability. Also, increasing this probability will make the resulting interval less tight. Mathematically, the Type I and II errors are traded off against each other.

Minor comments
-- Some more explanation and motivation would be needed for the concept of the "functional" Bellman loss.
-- In the sentence below equation (8), "sup" and "inf" should be used instead of "min" and "max" (or some argument should be given that the maximum and minimum can actually be attained).
-- It would be better to cite the 2018 extended 2nd edition of Sutton and Barto's classical RL book, instead of its 1st edition published in 1998.
ICLR
Title
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds

Abstract
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. OPE is therefore a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimate, when applying OPE to make high-stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. (2019) and a new martingale concentration inequality for the KBL that is applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimal assumptions on the data and on the function class of the Q-function, and works in the behavior-agnostic setting where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.

1 INTRODUCTION

Off-policy evaluation (OPE) seeks to estimate the expected reward of a target policy in reinforcement learning (RL) from observational data collected under different policies (e.g., Murphy et al., 2001; Fonteneau et al., 2013; Jiang & Li, 2016; Liu et al., 2018a). OPE plays a central role in applying RL with only observational data and has found important applications in areas such as medicine and self-driving, where interactive "on-policy" data is expensive or even infeasible to collect. A critical challenge in OPE is uncertainty estimation, as having reliable confidence bounds is essential for making high-stakes decisions. In this work, we aim to tackle this problem by providing non-asymptotic confidence intervals for the expected value of the target policy. Our method allows us to rigorously quantify the uncertainty of the prediction and hence avoid the dangerous case of being overconfident in making costly and/or irreversible decisions. However, off-policy evaluation per se has remained a key technical challenge in the literature (e.g., Precup, 2000; Thomas & Brunskill, 2016; Jiang & Li, 2016; Liu et al., 2018a), let alone gaining rigorous confidence estimates for it. This is especially true when 1) the underlying RL problem is long or infinite horizon, and 2) the data is collected under arbitrary and unknown algorithms (a.k.a. behavior-agnostic). As a consequence, the collected data can exhibit an arbitrary dependency structure, which makes constructing rigorous non-asymptotic confidence bounds particularly challenging. Traditionally, the only approach to providing non-asymptotic confidence bounds in OPE is to combine importance sampling (IS) with concentration inequalities (e.g., Thomas et al., 2015a;b), which, however, tends to degenerate for long/infinite-horizon problems (Liu et al., 2018a). Furthermore, this approach can neither be applied in the behavior-agnostic setting nor effectively handle the complicated time-dependency structure inside individual trajectories.
Instead, it requires a large number of independently collected trajectories drawn under known policies. In this work, we provide a practical approach for Behavior-agnostic, Off-policy, Infinite-horizon, Non-asymptotic, Confidence intervals based on arbitrarily Dependent data (BONDIC). Our method is motivated by a recently proposed optimization-based (or variational) approach to estimating OPE confidence bounds (Feng et al., 2020), which leverages a tail bound on kernel Bellman statistics (Feng et al., 2019). Our approach achieves a new bound that is both an order of magnitude tighter and more computationally efficient than that of Feng et al. (2020). Our improvements rest on two pillars: 1) developing a new primal-dual perspective on non-asymptotic OPE confidence bounds, which is connected to a body of recent work on infinite-horizon value estimation (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020a; Mousavi et al., 2020); and 2) offering a new tight concentration inequality on kernel Bellman statistics that applies to behavior-agnostic off-policy data with arbitrary dependency between transition pairs. Empirically, we demonstrate that our method provides reliable and tight bounds on a variety of well-established benchmarks.

Related Work. Besides the aforementioned approach based on the combination of IS and concentration inequalities (e.g., Thomas et al., 2015a), bootstrapping methods have also been widely used in off-policy estimation (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020), but the latter are limited to asymptotic bounds. Alternatively, Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016a) offer a different way to estimate uncertainty in RL, but fail to guarantee frequentist coverage. In addition, distributional RL (Bellemare et al., 2017) seeks to quantify the intrinsic uncertainties inside the Markov decision process, which is orthogonal to the estimation of uncertainty that we consider. Our work is built upon the recent advances in behavior-agnostic infinite-horizon OPE, including Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020), as well as the DICE family (e.g., Nachum et al., 2019a; Zhang et al., 2020a; Yang et al., 2020b). In particular, our method can be viewed as extending the minimax framework of infinite-horizon OPE in the infinite-data regime of Tang et al. (2020a); Uehara et al. (2020); Jiang & Huang (2020) to the non-asymptotic finite-sample regime.

Outline. For the rest of the paper, we start with the problem statement in Section 2 and an overview of the two dual approaches to infinite-horizon OPE that are tightly connected to our method in Section 3. We then present our main approach in Section 4 and perform empirical studies in Section 5. The proofs and an abundance of additional discussion can be found in the Appendix.

2 BACKGROUND, DATA ASSUMPTION, PROBLEM SETTING

Consider an agent acting in an unknown environment. At each time step $t$, the agent observes the current state $s_t$ in a state space $\mathcal{S}$ and takes an action $a_t \sim \pi(\cdot\,|\,s_t)$ in an action space $\mathcal{A}$ according to a given policy $\pi$; the agent then receives a reward $r_t$, and the state transitions to $s'_t = s_{t+1}$, following an unknown transition/reward distribution $(r_t, s_{t+1}) \sim P(\cdot\,|\,s_t, a_t)$. Assume the initial state $s_0$ is drawn from a known initial distribution $D_0$. Let $\gamma\in(0,1)$ be a discount factor.
In this setting, the expected reward of $\pi$ is defined as $J_\pi := \mathbb{E}_\pi\left[\sum_{t=0}^{T}\gamma^t r_t \mid s_0\sim D_0\right]$, the expected total discounted reward when we execute $\pi$ starting from $D_0$ for $T$ steps. In this work, we consider the infinite-horizon case with $T\to+\infty$. Our goal is to provide an interval estimate of $J_\pi$ in a general and challenging setting with significantly relaxed constraints on the data. In particular, we assume the data is behavior-agnostic and off-policy, meaning that it can be collected from multiple experiments, each of which can execute a mix of arbitrary, unknown policies, or even follow a non-fixed policy. More concretely, suppose that the model $P$ is unknown, and we have a set of transition pairs $\hat{\mathcal{D}}_n = (s_i, a_i, r_i, s'_i)_{i=1}^n$ collected from previous experiments in a sequential order, such that for each data point $i$, $(r_i, s'_i)$ is drawn from the model $P(\cdot\,|\,s_i, a_i)$, while $(s_i, a_i)$ is generated by an arbitrary black box given the previous data points. We formalize both the data assumption and the goal below.

Assumption 2.1 (Data Assumption). Assume the data $\hat{\mathcal{D}}_n = (s_i, a_i, r_i, s'_i)_{i=1}^n$ is drawn from an arbitrary joint distribution such that for each $i = 1,\dots,n$, conditional on $\hat{\mathcal{D}}_{<i} := (s_j, a_j, r_j, s'_j)_{j<i} \cup (s_i, a_i)$, the subsequent local reward and next state $(r_i, s'_i)$ are drawn from $P(\cdot\,|\,s_i, a_i)$.

Goal. Given a confidence level $\delta\in(0,1)$, we want to construct an interval $[\hat{J}^-, \hat{J}^+]\subset\mathbb{R}$ based on the data $\hat{\mathcal{D}}_n$, such that $\Pr(J_\pi\in[\hat{J}^-, \hat{J}^+]) \ge 1-\delta$, where $\Pr(\cdot)$ is w.r.t. the randomness of the data.

The partial ordering on the data points is introduced to accommodate the case where $s_{i+1}$ equals $s'_j$ for some $j\le i$. The data assumption only requires that $(r_i, s'_i)$ is generated from $P(\cdot\,|\,s_i, a_i)$ and imposes no constraints on how $(s_i, a_i)$ is generated. This provides great flexibility in terms of the data collection process. In particular, we do not require $(s_i, a_i)_{i=1}^n$ to be independent, as is always assumed in recent works (Liu et al., 2018a; Mousavi et al., 2020). A crucial fact is that our data assumption implies a martingale structure on the empirical Bellman residual operator of the Q-function. As we will show in Section 4.1, this enables us to derive a key concentration inequality underpinning our non-asymptotic confidence bounds.

Here, we summarize a few notations that simplify the presentation in the rest of the work. First of all, we append each $(s_i, a_i, r_i, s'_i)$ with an action $a'_i \sim \pi(\cdot\,|\,s'_i)$ following $s'_i$. This can be done for free as long as $\pi$ is given (see the Remark in Section 3). Also, we write $x_i = (s_i, a_i)$, $x'_i = (s'_i, a'_i)$, and $y_i = (x'_i, r_i) = (s'_i, a'_i, r_i)$. Correspondingly, define $\mathcal{X} = \mathcal{S}\times\mathcal{A}$ to be the state-action space and $\mathcal{Y} = \mathcal{X}\times\mathbb{R}$. Denote $P_\pi(y\,|\,x) = P(s', r\,|\,x)\,\pi(a'\,|\,s')$. In this way, the observed data can be written as pairs $\{x_i, y_i\}_{i=1}^n$, and Assumption 2.1 is equivalent to saying that $y_i\sim P_\pi(\cdot\,|\,x_i)$ given $\hat{\mathcal{D}}_{<i}$, which is similar to a supervised learning setting. We identify the data $\hat{\mathcal{D}}_n$ with its empirical measure $\hat{\mathcal{D}}_n = \frac{1}{n}\sum_{i=1}^n \delta_{x_i,y_i}$, where $\delta$ is the Dirac measure.

3 TWO DUAL APPROACHES TO INFINITE-HORIZON OFF-POLICY ESTIMATION

The deficiency of traditional IS methods on long-/infinite-horizon RL problems (a.k.a. the curse of horizon (Liu et al., 2018a)) has motivated a line of work on efficient infinite-horizon value estimation (e.g., Liu et al., 2018a; Feng et al., 2019; Nachum et al., 2019a; Zhang et al., 2020a; Mousavi et al., 2020; Tang et al., 2020a).
The main idea is to transform the value estimation problem into estimating either the Q-function or the visitation distribution (or its related density ratio) of the policy $\pi$. This section introduces and reinterprets these two tightly connected methods, which lays the foundation for the primal and dual perspectives on our main confidence bounds.

Given a policy $\pi$, its Q-function is defined as $q_\pi(x) = \mathbb{E}_\pi\left[\sum_{t=0}^\infty \gamma^t r_t \mid x_0 = x\right]$, where the expectation is taken when we execute $\pi$ initialized from a fixed state-action pair $(s_0, a_0) = x_0 = x$. Let $\mathcal{D}_{\pi,t}$ be the distribution of $(x_t, y_t) = (s_t, a_t, s'_t, a'_t, r_t)$ when executing policy $\pi$ starting from $s_0\sim D_0$ for $t$ steps. The visitation distribution of $\pi$ is defined as $\mathcal{D}_\pi = \sum_{t=0}^\infty \gamma^t\mathcal{D}_{\pi,t}$. Note that $\mathcal{D}_\pi$ integrates to $1/(1-\gamma)$, although we treat it as a probability measure in the notation. The expected reward $J_\pi$ can be expressed using either $q_\pi$ or $\mathcal{D}_\pi$ as follows:
$$J_\pi := \mathbb{E}_\pi\left[\sum_{t=0}^\infty \gamma^t r_t\right] = \mathbb{E}_{r\sim\mathcal{D}_\pi}[r] = \mathbb{E}_{x\sim\mathcal{D}_{\pi,0}}[q_\pi(x)], \qquad (1)$$
where $r\sim\mathcal{D}_\pi$ (resp. $x\sim\mathcal{D}_{\pi,0}$) denotes sampling from the $r$- (resp. $x$-) marginal distribution of $\mathcal{D}_\pi$ (resp. $\mathcal{D}_{\pi,0}$). Eq. (1) plays a key role in infinite-horizon value estimation by transforming the estimation of $J_\pi$ into estimating either $q_\pi$ or $\mathcal{D}_\pi$.

Value Estimation via the Q-Function. Because $\mathcal{D}_{\pi,0}(x) = D_0(s)\pi(a|s)$ is known, we can estimate $J_\pi$ by $\mathbb{E}_{x\sim\mathcal{D}_{\pi,0}}[\hat{q}(x)]$ for any estimate $\hat{q}$ of the true Q-function $q_\pi$; the expectation under $x\sim\mathcal{D}_{\pi,0}$ can be estimated to any accuracy with Monte Carlo. To estimate $q_\pi$, we consider the empirical and expected Bellman residual operators:
$$\hat{R}q(x,y) = q(x) - \gamma q(x') - r, \qquad R_\pi q(x) = \mathbb{E}_{y\sim P_\pi(\cdot|x)}\left[\hat{R}q(x,y)\right]. \qquad (2)$$
It is well known that $q_\pi$ is the unique solution of the Bellman equation $R_\pi q = 0$. Since $y_i\sim P_\pi(\cdot|x_i)$ for each data point in $\hat{\mathcal{D}}_n$, if $q = q_\pi$, then $\hat{R}q(x_i,y_i)$, $i = 1,\dots,n$, are all zero-mean random variables. Let $\omega$ be any function from $\mathcal{X}$ to $\mathbb{R}$; then $\sum_i \hat{R}q(x_i,y_i)\omega(x_i)$ also has zero mean. This motivates the following functional Bellman loss (Feng et al., 2019; 2020; Xie & Jiang, 2020):
$$L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n) := \sup_{\omega\in\mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^n \hat{R}q(x_i,y_i)\,\omega(x_i)\right\}, \qquad (3)$$
where $\mathcal{W}$ is a set of functions $\omega:\mathcal{X}\to\mathbb{R}$. To ensure that the sup is finite, $\mathcal{W}$ is typically set to be the unit ball of some normed function space $\mathcal{W}_o$, such that $\mathcal{W} = \{\omega\in\mathcal{W}_o : \|\omega\|_{\mathcal{W}_o}\le 1\}$. Feng et al. (2019) consider the simple case where $\mathcal{W}$ is taken to be the unit ball $\mathcal{K}$ of the reproducing kernel Hilbert space (RKHS) with a positive definite kernel $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$, in which case the loss has the simple closed form
$$L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n) = \sqrt{\frac{1}{n^2}\sum_{i,j=1}^n \hat{R}q(x_i,y_i)\,k(x_i,x_j)\,\hat{R}q(x_j,y_j)}. \qquad (4)$$
Note that the RHS of Eq. (4) is the square root of the kernel Bellman V-statistic of Feng et al. (2019). Feng et al. (2019) showed that, when the support of the data distribution $\hat{\mathcal{D}}_n$ covers the whole space (which may require an infinite data size) and $k$ is an integrally strictly positive definite kernel, $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n) = 0$ iff $q = q_\pi$. Therefore, one can estimate $q_\pi$ by minimizing $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$.

Remark. The empirical Bellman residual operator $\hat{R}$ can be extended to $\hat{R}q(x,y) = q(x) - r - \gamma\frac{1}{m}\sum_{\ell=1}^m q(s',a'_\ell)$, where $\{a'_\ell\}_{\ell=1}^m$ are i.i.d. draws from $\pi(\cdot|s')$. As $m$ increases, this gives a lower-variance estimate of $R_\pi q$. If $m = +\infty$, we have $\hat{R}q(x,y) = q(x) - r - \gamma\,\mathbb{E}_{a'\sim\pi(\cdot|s')}[q(s',a')]$, which coincides with the operator used in expected SARSA (Sutton & Barto, 1998). In fact, without any modification, all results in this work apply to $\hat{R}q$ for any $m$.
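A minimal sketch (ours; the vectorized data layout and the Gaussian bandwidth are assumptions) of the closed form in Eq. (4):

```python
import numpy as np

def kernel_bellman_loss(q, X, Xp, r, gamma, bw):
    # Empirical Bellman residuals: Rhat_i = q(x_i) - gamma * q(x_i') - r_i.
    Rhat = q(X) - gamma * q(Xp) - r
    # Gram matrix of a Gaussian RBF kernel on {x_i}.
    d2 = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * bw**2))
    # Eq. (4): sqrt((1/n^2) Rhat^T K Rhat); the quadratic form is
    # nonnegative since K is positive semi-definite (clamped for round-off).
    return float(np.sqrt(max(Rhat @ K @ Rhat, 0.0))) / len(r)
```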
Value Estimation via the Visitation Distribution. Another way to estimate $J_\pi$ in Eq. (1) is to approximate $\mathcal{D}_\pi$ with a weighted empirical measure of the data (Liu et al., 2018a; Nachum et al., 2019a; Mousavi et al., 2020; Zhang et al., 2020a). The key idea is to assign an importance weight $\omega(x_i)$ to each data point $x_i$ in $\hat{\mathcal{D}}_n$. We can choose the function $\omega:\mathcal{X}\to\mathbb{R}$ properly such that $\mathcal{D}_\pi$, and hence $J_\pi$, is approximated by the $\omega$-weighted empirical measure of $\hat{\mathcal{D}}_n$ as follows:
$$J_\pi \approx \hat{J}_\omega := \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] = \frac{1}{n}\sum_{i=1}^n \omega(x_i)\,r_i, \qquad \mathcal{D}_\pi \approx \hat{\mathcal{D}}_n^\omega := \frac{1}{n}\sum_{i=1}^n \omega(x_i)\,\delta_{x_i,y_i}. \qquad (5)$$
Intuitively, $\omega$ can be viewed as the density ratio between $\mathcal{D}_\pi$ and $\hat{\mathcal{D}}_n$, although the empirical measure $\hat{\mathcal{D}}_n$ may not have a well-defined density. Liu et al. (2018a); Mousavi et al. (2020) proposed to estimate $\omega$ by minimizing a discrepancy measure between $\hat{\mathcal{D}}_n^\omega$ and $\mathcal{D}_\pi$. To see this, note that $\mathcal{D} = \mathcal{D}_\pi$ if and only if $\Delta(\mathcal{D}, q) = 0$ for any function $q$, where
$$\Delta(\mathcal{D}, q) = \mathbb{E}_{\mathcal{D}}[\gamma q(x') - q(x)] - \mathbb{E}_{\mathcal{D}_\pi}[\gamma q(x') - q(x)] = \mathbb{E}_{\mathcal{D}}[\gamma q(x') - q(x)] + \mathbb{E}_{\mathcal{D}_{\pi,0}}[q(x)], \qquad (6)$$
using the fact that $\mathbb{E}_{\mathcal{D}_\pi}[\gamma q(x') - q(x)] = -\mathbb{E}_{\mathcal{D}_{\pi,0}}[q(x)]$ (Theorem 1, Liu et al., 2018a). Note also that the RHS of Eq. (6) can be calculated in practice for any $\mathcal{D}$ and $q$ without knowing $\mathcal{D}_\pi$. Let $\mathcal{Q}$ be a set of functions $q:\mathcal{X}\to\mathbb{R}$. One can define the following loss for $\omega$:
$$I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) = \sup_{q\in\mathcal{Q}}\left\{\Delta(\hat{\mathcal{D}}_n^\omega, q)\right\}. \qquad (7)$$
Similar to $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$, when $\mathcal{Q}$ is a ball in an RKHS, $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ also has a bilinear closed form analogous to Eq. (4); see Mousavi et al. (2020) and Appendix F. As we show in Section 4, $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ and $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$ are connected to the primal and dual views of our confidence bounds, respectively.

4 MAIN APPROACH

Let $\mathcal{Q}$ be a sufficiently large function set that includes the true Q-function, that is, $q_\pi\in\mathcal{Q}$. Following Feng et al. (2020), a confidence interval $[\hat{J}^-_{\mathcal{Q},\mathcal{W}}, \hat{J}^+_{\mathcal{Q},\mathcal{W}}]$ of $J_\pi$ can be constructed as
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \sup_{q\in\mathcal{Q}}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] \;\text{ s.t. }\; L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)\le\varepsilon_n\right\}, \qquad (8)$$
and $\hat{J}^-_{\mathcal{Q},\mathcal{W}}$ is defined similarly by replacing the sup over $q\in\mathcal{Q}$ with an inf. The idea is to seek the extreme $q$ functions with the largest (resp. smallest) expected values in the set $\mathcal{F} := \mathcal{Q}\cap\{q : L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)\le\varepsilon_n\}$. Therefore, Eq. (8) gives a $1-\delta$ confidence interval if $q_\pi$ is included in $\mathcal{F}$ with probability at least $1-\delta$, which is ensured when $q_\pi\in\mathcal{Q}$ and
$$\Pr\left(L_{\mathcal{W}}(q_\pi;\hat{\mathcal{D}}_n)\le\varepsilon_n\right) \ge 1-\delta. \qquad (9)$$
Feng et al. (2020) showed that in the RKHS case with $\mathcal{W} = \mathcal{K}$, Eq. (9) can be achieved with
$$\varepsilon_n = \sqrt{2c_{q_\pi,k}\left(\frac{n-1}{n}\sqrt{\frac{\log(1/\delta)}{n}} + \frac{1}{n}\right)} \qquad (10)$$
when $n$ is an even number, where $c_{q_\pi,k} = \sup_{x,y}\hat{R}q_\pi(x,y)^2 k(x,x)$. This was proved using Hoeffding's inequality for U-statistics (Hoeffding, 1963). To solve Eq. (8) efficiently, Feng et al. (2020) took $\mathcal{Q}$ to be a ball in an RKHS with random feature approximation. Unfortunately, the method described by Eq. (8)-(10) has two major disadvantages:

1) The Bound Needs to Be Tightened (Section 4.1). The threshold $\varepsilon_n = O(n^{-1/4})$ in Eq. (10) is sub-optimal in rate. In Section 4.1, we improve it to an $\varepsilon_n = O(n^{-1/2})$ bound under the mild Assumption 2.1, which removes the independence requirement between the transition pairs. Our tightened bound is achieved by first noting a martingale structure on the empirical Bellman operator under Assumption 2.1, and then applying an inequality of Pinelis (1992).

2) Dependence on Global Optimization (Section 4.2). The bound in Eq. (8) is guaranteed to be a $1-\delta$ confidence bound only when the maximization in Eq. (8) is solved to global optimality. With a large $n$, this incurs a high computational cost, even when $\mathcal{Q}$ is chosen to be an RKHS ball. Feng et al. (2020) solved Eq. (8) approximately using a random feature technique, but this leaves a gap between theory and practice.
In Section 4.2, we address this problem by presenting a dual form of Eq. (8), which sidesteps the challenging global optimization in Eq. (8). Moreover, the dual form enables us to better analyze the tightness of the confidence interval and the issues regarding the choices of $\mathcal{Q}$ and $\mathcal{W}$.

4.1 A TIGHTER CONCENTRATION INEQUALITY

In this section, we improve the bound in Eq. (10) by giving a tighter concentration inequality for the kernel Bellman loss in Eq. (4). We introduce the following semi-expected kernel Bellman loss:
$$L^*_{\mathcal{K}}(q;\hat{\mathcal{D}}_n) = \sqrt{\frac{1}{n^2}\sum_{i,j=1}^n R_\pi q(x_i)\,k(x_i,x_j)\,R_\pi q(x_j)}, \qquad (11)$$
in which we replace the empirical Bellman residual operator $\hat{R}q$ in Eq. (4) with its expected counterpart $R_\pi q$, but still take the empirical average over $\{x_i\}_{i=1}^n$ in $\hat{\mathcal{D}}_n$. For a more general function set $\mathcal{W}$, we can similarly define $L^*_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$ by replacing $\hat{R}q$ with $R_\pi q$ in Eq. (3). Obviously, $L^*_{\mathcal{W}}(q;\hat{\mathcal{D}}_n) = 0$ when $q = q_\pi$. Theorem 4.1 below shows that $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$ concentrates around $L^*_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$ with an $O(n^{-1/2})$ error under Assumption 2.1. At first glance, it may seem surprising that the concentration bound holds even without any independence assumption between the $\{x_i\}$. An easy way to make sense of this is to recognize that the randomness in $y_i$ conditional on $x_i$ is aggregated through averaging, even when the $\{x_i\}$ are deterministic. As Assumption 2.1 does not impose any (weak) independence between the $\{x_i\}$, we cannot establish that $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$ concentrates around its mean $\mathbb{E}_{\hat{\mathcal{D}}_n}[L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)]$ (a fully expected kernel Bellman loss) without introducing further assumptions.

Theorem 4.1. Assume $\mathcal{K}$ is the unit ball of the RKHS of a positive definite kernel $k(\cdot,\cdot)$. Let $c_{q,k} := \sup_{x\in\mathcal{X},y\in\mathcal{Y}}(\hat{R}q(x,y) - R_\pi q(x))^2 k(x,x) < \infty$. Under Assumption 2.1, for any $\delta\in(0,1)$, with probability at least $1-\delta$ we have
$$\left|L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n) - L^*_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)\right| \le \sqrt{\frac{2c_{q,k}\log(2/\delta)}{n}}. \qquad (12)$$
In particular, when $q = q_\pi$, we have $c_{q_\pi,k} = \sup_{x,y}(\hat{R}q_\pi(x,y))^2 k(x,x)$, and
$$L_{\mathcal{K}}(q_\pi;\hat{\mathcal{D}}_n) \le \sqrt{\frac{2c_{q_\pi,k}\log(2/\delta)}{n}}. \qquad (13)$$
Intuitively, to see why we can expect an $O(n^{-1/2})$ bound, note that $L_{\mathcal{K}}(q;\hat{\mathcal{D}}_n)$ is the square root of a product of two $\hat{R}q$ terms, each of which contributes an $O(n^{-1/2})$ error w.r.t. $R_\pi q$. Technically, the proof is based on a key observation: Assumption 2.1 ensures that $Z_i := \hat{R}q(x_i,y_i) - R_\pi q(x_i)$, $i = 1,\dots,n$, forms a martingale difference sequence w.r.t. $\{\hat{\mathcal{D}}_{<i} : i = 1,\dots,n\}$, in the sense that $\mathbb{E}[Z_i\,|\,\hat{\mathcal{D}}_{<i}] = 0$ for all $i$. See Appendix B for details. The proof also leverages a special property of the RKHS and applies a Hoeffding-like inequality on Hilbert spaces from Pinelis (1992) (see Appendix B). For more general function sets $\mathcal{W}$, we establish in Appendix E a similar bound using Rademacher complexity, although it yields a less tight bound than Eq. (12) when $\mathcal{W} = \mathcal{K}$.
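The following toy simulation (ours; the synthetic residuals and all constants are assumptions) illustrates the $O(n^{-1/2})$ behavior of Theorem 4.1 in the case $q = q_\pi$, where $R_\pi q \equiv 0$ and hence $L^*_{\mathcal{K}} = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_loss(residuals, X, bw=1.0):
    # Eq. (4) with the given residuals playing the role of Rhat-q.
    d2 = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * bw**2))
    return np.sqrt(max(residuals @ K @ residuals, 0.0)) / len(residuals)

delta, c = 0.1, 1.0  # residuals in [-1, 1] and K_max = 1 give c_{q,k} <= 1
for n in [200, 800, 3200]:
    X = rng.normal(size=(n, 2))              # arbitrary (possibly dependent) x_i
    res = rng.uniform(-1.0, 1.0, size=n)     # zero-mean conditional "noise"
    bound = np.sqrt(2.0 * c * np.log(2.0 / delta) / n)
    print(n, kernel_loss(res, X), "<=", bound)
```

Both the empirical loss and the threshold shrink at the same $n^{-1/2}$ rate, matching Eq. (13).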
4.2 DUAL CONFIDENCE BOUNDS

We derive a dual form of Eq. (8) that sidesteps the need to solve the challenging global optimization in Eq. (8). To do so, we plug the definition of $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$ from Eq. (3) into Eq. (8) and introduce a Lagrange multiplier:
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \sup_{q\in\mathcal{Q}}\inf_{h\in\mathcal{W}}\inf_{\lambda\ge 0}\; \mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \lambda\left(\frac{1}{n}\sum_{i=1}^n h(x_i)\hat{R}q(x_i,y_i) - \varepsilon_n\right) \qquad (14)$$
$$= \sup_{q\in\mathcal{Q}}\inf_{\omega\in\mathcal{W}_o}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\hat{R}q(x_i,y_i) + \varepsilon_n\|\omega\|_{\mathcal{W}_o}\right\}, \qquad (15)$$
where we set $\omega(x) = \lambda h(x)$. Exchanging the order of min/max and some further derivation yields the following main result.

Theorem 4.2. I) Let $\mathcal{W}$ be the unit ball of a normed function space $\mathcal{W}_o$. We have
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} \le \hat{F}^+_{\mathcal{Q}}(\omega) := \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \varepsilon_n\|\omega\|_{\mathcal{W}_o}, \qquad \forall\omega\in\mathcal{W}_o,$$
$$\hat{J}^-_{\mathcal{Q},\mathcal{W}} \ge \hat{F}^-_{\mathcal{Q}}(\omega) := \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] - I_{-\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) - \varepsilon_n\|\omega\|_{\mathcal{W}_o}, \qquad \forall\omega\in\mathcal{W}_o, \qquad (16)$$
where $-\mathcal{Q} = \{-q : q\in\mathcal{Q}\}$, so that $I_{-\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) = I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ if $\mathcal{Q} = -\mathcal{Q}$. Further, we have $\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \inf_{\omega\in\mathcal{W}_o}\hat{F}^+_{\mathcal{Q}}(\omega)$ and $\hat{J}^-_{\mathcal{Q},\mathcal{W}} = \sup_{\omega\in\mathcal{W}_o}\hat{F}^-_{\mathcal{Q}}(\omega)$ if $\mathcal{Q}$ is convex and there exists a $q\in\mathcal{Q}$ satisfying the strict feasibility condition $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n) < \varepsilon_n$.

II) For $\hat{\mathcal{D}}_n$ and $\delta\in(0,1)$, assume $\mathcal{W}_o$ and $\varepsilon_n\in\mathbb{R}$ satisfy Eq. (9) (e.g., via Theorem 4.1). Then for any function set $\mathcal{Q}$ with $q_\pi\in\mathcal{Q}$, and any functions $\omega_+,\omega_-\in\mathcal{W}_o$ (where the choice of $\mathcal{Q}$, $\omega_+$, $\omega_-$ can depend on $\hat{\mathcal{D}}_n$ arbitrarily), we have
$$\Pr\left(J_\pi\in\left[\hat{F}^-_{\mathcal{Q}}(\omega_-),\; \hat{F}^+_{\mathcal{Q}}(\omega_+)\right]\right) \ge 1-\delta. \qquad (17)$$

Theorem 4.2 transforms the original bound in Eq. (8), framed in terms of $q$ and $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$, into a form that involves the density ratio $\omega$ and the related loss $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$. The bounds in Eq. (16) can be interpreted as assigning an error bar around the $\omega$-based estimator $\hat{J}_\omega = \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r]$ of Eq. (5), with an error bar of $I_{\pm\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + \varepsilon_n\|\omega\|_{\mathcal{W}_o}$. Specifically, the first term $I_{\pm\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ measures the discrepancy between $\hat{\mathcal{D}}_n^\omega$ and $\mathcal{D}_\pi$ as discussed in Eq. (7), whereas the second term captures the randomness in the empirical Bellman residual operator $\hat{R}q_\pi$. Compared with Eq. (8), the global maximization over $q\in\mathcal{Q}$ is now folded inside the $I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n)$ term, which admits a simple closed-form solution in the RKHS case (see Appendix F). In practice, we can optimize $\omega_+$ and $\omega_-$ to obtain the tightest possible bound (and hence recover the primal bound) by minimizing $\hat{F}^+_{\mathcal{Q}}(\omega)$ and maximizing $\hat{F}^-_{\mathcal{Q}}(\omega)$, but it is not necessary to solve this optimization to global optimality. When $\mathcal{W}_o$ is an RKHS, by the standard finite representer theorem (Scholkopf & Smola, 2018), the optimization over $\omega$ reduces to a finite-dimensional optimization, which can be sped up with any favorable approximation technique. We elaborate on this in Appendix D.

Length of the Confidence Interval. The form in Eq. (16) also makes it much easier to analyze the tightness of the confidence interval. Suppose $\omega = \omega_+ = \omega_-$ and $\mathcal{Q} = -\mathcal{Q}$; the length of the optimal confidence interval is
$$\mathrm{length}\left([\hat{J}^-_{\mathcal{Q},\mathcal{W}},\, \hat{J}^+_{\mathcal{Q},\mathcal{W}}]\right) = \inf_{\omega\in\mathcal{W}_o}\left\{2I_{\mathcal{Q}}(\omega;\hat{\mathcal{D}}_n) + 2\varepsilon_n\|\omega\|_{\mathcal{W}_o}\right\}.$$
Given that $\varepsilon_n$ is $O(n^{-1/2})$, we can make the overall length of the optimal confidence interval also $O(n^{-1/2})$, as long as $\mathcal{W}_o$ is rich enough to include a good density-ratio estimator $\omega^*$ that satisfies $I_{\mathcal{Q}}(\omega^*;\hat{\mathcal{D}}_n) = O(n^{-1/2})$ and has a bounded norm $\|\omega^*\|_{\mathcal{W}_o}$. We can expect to achieve $I_{\mathcal{Q}}(\omega^*;\hat{\mathcal{D}}_n) = O(n^{-1/2})$ when (1) $\mathcal{Q}$ has an $O(n^{-1/2})$ sequential Rademacher complexity (Rakhlin et al., 2015) (e.g., a finite ball in an RKHS); and (2) $\hat{\mathcal{D}}_n$ is collected following a Markov chain with a strong mixing condition that weakly converges to some limit distribution $\mathcal{D}_\infty$ whose support is $\mathcal{X}$, in which case we can define $\omega^*$ as the density ratio between $\mathcal{D}_\pi$ and $\mathcal{D}_\infty$. See Appendix C for more discussion. Indeed, our experiments show that the lengths of the practically constructed confidence intervals do tend to decay at an $O(n^{-1/2})$ rate.

Choice of $\mathcal{W}$ and $\mathcal{Q}$. To ensure the concentration inequality in Theorem 4.1 is valid, the choice of $\mathcal{W}_o$ cannot depend on the data $\hat{\mathcal{D}}_n$; a data-dependent $\mathcal{W}_o$ should therefore be constructed on separate holdout data. In contrast, the choice of $\mathcal{Q}$ can depend on $\hat{\mathcal{D}}_n$ arbitrarily, since it enters the optimization bound Eq. (8) but not the tail bound Eq. (9). In this light, one can construct the best possible $\mathcal{Q}$ by exploiting the data information in the most favorable way.
For example, we can construct an estimator $\hat{q}\approx q_\pi$ based on any state-of-the-art method (e.g., Q-learning or model-based methods), and set $\mathcal{Q}$ to be a ball centered at $\hat{q}$ such that $q_\pi - \hat{q}\in\mathcal{Q}$. This enables post-hoc analysis based on prior information on $q_\pi$, as suggested in Feng et al. (2020).

Mis-specification of $\mathcal{Q}$ and Oracle Upper/Lower Estimates. Our result relies on the assumption that $q_\pi\in\mathcal{Q}$. However, as with other statistical estimation problems, there is no provable way to empirically verify the correctness of model assumptions such as $q_\pi\in\mathcal{Q}$: the empirical data only reveals information about the unknown function (in our case $q_\pi$) on a finite number of data points, and no conclusion can be drawn about the unseen points without imposing certain smoothness assumptions. Typically, what we can do is the opposite: reject $q_\pi\in\mathcal{Q}$ when the Bellman loss $L_{\mathcal{W}}(q;\hat{\mathcal{D}}_n)$ of every $q$ in $\mathcal{Q}$ exceeds the threshold $\varepsilon_n$. We highlight that, even without verifying $q_\pi\in\mathcal{Q}$, our method can still be viewed as a confidence interval for the best possible (oracle) upper and lower estimates given the data $\hat{\mathcal{D}}_n$ plus the assumption that $q_\pi\in\mathcal{Q}$, defined as
$$\hat{J}^+_{\mathcal{Q},*} = \sup_{q\in\mathcal{Q}}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] \;\text{ s.t. }\; \hat{R}q(x_i,y_i) = \hat{R}q_\pi(x_i,y_i),\ \forall i=1,\dots,n\right\}. \qquad (18)$$
In fact, it is impossible to derive an empirical upper bound lower than $\hat{J}^+_{\mathcal{Q},*}$, as there is no way to distinguish $q$ from $q_\pi$ if $\hat{R}q(x_i,y_i) = \hat{R}q_\pi(x_i,y_i)$ for all $i$. But our interval $[\hat{F}^-_{\mathcal{Q}}(\omega_-), \hat{F}^+_{\mathcal{Q}}(\omega_+)]$ provides a $1-\delta$ confidence outer bound of $[\hat{J}^-_{\mathcal{Q},*}, \hat{J}^+_{\mathcal{Q},*}]$ once Eq. (9) holds, regardless of whether $q_\pi\in\mathcal{Q}$ holds. Hence, it is of independent interest to further explore the dual form of Eq. (18), which is another starting point for deriving our bound; we discuss this further in Appendix G. Lastly, we argue that it is important to include the constraint $q\in\mathcal{Q}$ in the bound: Proposition G.1 in the Appendix shows that removing the $q\in\mathcal{Q}$ constraint in Eq. (18) would lead to an infinite upper bound, unless $\{s_i,s'_i\}_{i=1}^n$ from $\hat{\mathcal{D}}_n$ almost surely covers the whole state space $\mathcal{S}$, in the sense that $\Pr_{s\sim D_0}(s\in\{s_i,s'_i\}_{i=1}^n) = 1$.

5 EXPERIMENTS

We compare our method with a variety of existing algorithms for obtaining asymptotic and non-asymptotic bounds on a number of benchmarks. We find that our method provides confidence intervals that correctly cover the true expected reward with probability larger than the specified success probability $1-\delta$ (and are hence safe) across the multiple examples we tested. In comparison, the non-asymptotic bounds based on IS yield much wider confidence intervals, while the asymptotic methods, such as bootstrap, despite giving tighter intervals, often fail to capture the true values at the given probability in practice.

Environments and Dataset Construction. We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator.¹ We follow a procedure similar to Feng et al. (2020) to construct the behavior and target policies; more details on the environments and the data collection procedure are included in Appendix H.1.

Algorithm Settings. We test the dual bound described in our paper. Throughout the experiments, we always set $\mathcal{W} = \mathcal{K}$, the unit ball of the RKHS with positive definite kernel $k$, and set $\mathcal{Q} = r_{\mathcal{Q}}\tilde{\mathcal{K}}$, the ball of radius $r_{\mathcal{Q}}$ in the RKHS with another kernel $\tilde{k}$. We take both kernels to be Gaussian RBF kernels and choose $r_{\mathcal{Q}}$ and the bandwidths of $k$ and $\tilde{k}$ using the procedure in Appendix H.2.
We use a fast approximation method to optimize $\omega$ in $\hat{F}^+_{\mathcal{Q}}(\omega)$ and $\hat{F}^-_{\mathcal{Q}}(\omega)$, as shown in Appendix D. Once $\omega$ is found, we evaluate the bound in Eq. (16) exactly, to ensure that the theoretical guarantee holds.

Baseline Algorithms. We compare our method with four existing baselines: the IS-based non-asymptotic bound using the empirical Bernstein inequality by Thomas et al. (2015b), the IS-based bootstrap bound of Thomas (2015), the bootstrap bound based on fitted Q-evaluation (FQE) by Kostrikov & Nachum (2020), and the bound of Feng et al. (2020), which is equivalent to the primal bound in (8) but uses a looser concentration inequality (an $\varepsilon_n = O(n^{-1/4})$ threshold).

Results. Figure 1 shows that our method obtains much tighter bounds than Feng et al. (2020), because we use a much tighter concentration inequality, even though the dual bound that we use can be slightly looser than the primal bound used in Feng et al. (2020). Our method is also more computationally efficient than that of Feng et al. (2020), because the dual bound can be tightened approximately while the primal bound requires solving a global optimization problem. Figure 1(b) shows that we provide increasingly tight bounds as the data size $n$ increases, with the length of the interval decaying at approximately an $O(n^{-1/2})$ rate. Figure 1(c) shows that as we increase the significance level $\delta$, our bounds become tighter while still capturing the ground truth. Figure 1(d) shows the percentage of times that the interval fails to capture the true value over 100 random trials (denoted $\hat{\delta}$) as we vary $\delta$. We can see that $\hat{\delta}$ remains close to zero even when $\delta$ is large, suggesting that our bound is quite conservative. Part of the reason is that the bound is constructed by considering the worst case, and we used conservative choices of the radius $r_{\mathcal{Q}}$ and the coefficient $c_{q_\pi,k}$ in Eq. (13) (see Appendix H.2). In Figure 2 we compare different algorithms on more examples with $\delta = 0.1$. We can again see that our method provides tight and conservative intervals that always capture the true value. Although FQE (Bootstrap) yields tighter intervals than our method, it fails to capture the ground truth much more often than the promised $\delta = 0.1$ (e.g., it fails in all the random trials in Figure 2(a)). We conduct further ablation studies on hyper-parameters and the data collection procedure; see Appendix H.2 and H.3 for details.

¹ https://github.com/jxx123/simglucose

6 CONCLUSION

We develop a dual approach to constructing high-confidence bounds for off-policy evaluation with an improved rate over Feng et al. (2020). Our method can handle dependent data and does not require global optimization to obtain a valid bound. Empirical results demonstrate that our bounds are tight and valid compared with a range of existing baselines. Future directions include leveraging our bounds for policy optimization and safe exploration.

A PROOF OF THE DUAL BOUND IN THEOREM 4.2

Proof. Introducing a Lagrange multiplier, the bound in (8) is equivalent to
$$\hat{J}^+_{\mathcal{Q},\mathcal{W}} = \max_{q\in\mathcal{Q}}\min_{\lambda\ge 0}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \lambda\left(\max_{h\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n h(x_i)\hat{R}q(x_i,y_i) - \varepsilon_n\right)\right\}$$
$$= \max_{q\in\mathcal{Q}}\min_{\lambda\ge 0}\min_{h\in\mathcal{W}}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \lambda\left(\frac{1}{n}\sum_{i=1}^n h(x_i)\hat{R}q(x_i,y_i) - \varepsilon_n\right)\right\}$$
$$= \max_{q\in\mathcal{Q}}\min_{\omega\in\mathcal{W}_o}\left\{\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\hat{R}q(x_i,y_i) + \varepsilon_n\|\omega\|_{\mathcal{W}_o}\right\},$$
where we use $\omega = \lambda h(x)$, so that $\lambda$ is replaced by $\|\omega\|_{\mathcal{W}_o}$. Define
$$M(q,\omega;\hat{\mathcal{D}}_n) = \mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\hat{R}q(x_i,y_i) + \varepsilon_n\|\omega\|_{\mathcal{W}_o} = \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + \Delta(\hat{\mathcal{D}}_n^\omega, q) + \varepsilon_n\|\omega\|_{\mathcal{W}_o}.$$
Then we have max q∈Q M(q, ω; D̂n) = ED̂ωn [r] + maxq∈Q ∆(D̂ ω n , q) + εn ‖ω‖Wo = ED̂ωn [r] + IQ(ω; D̂n) + εn ‖ω‖Wo = F̂+Q (ω). Therefore, Ĵ+Q,W = max q∈Q min ω∈Wo M(q, ω; D̂n) ≤ min ω∈Wo max q∈Q M(q, ω; D̂n) = min ω∈Wo F̂+Q (ω). The lower bound follows analogously. The strong duality holds when the Slater’s condition is satisfied (Nesterov, 2013), which amounts to saying that the primal problem in (8) is convex and strictly feasible; this requires that Q is convex and there exists at least one solution q ∈ Q that satisfy that constraint strictly, that is, LW(q; D̂n) < εn; note that the objective function Q is linear on q and the constraint function LW(q; D̂n) is always convex on q (since it is the sup a set of linear functions on q following (3)). B PROOF OF CONCENTRATION BOUND IN THEOREM 4.1 Our proof require the following Hoeffding inequality on Hilbert spaces by Pinelis (Theorem 3, 1992); see also Section 2.4 of Rosasco et al. (2010). Lemma B.1. (Theorem 3, Pinelis, 1992) Let H be a Hilbert space and {fi}ni=1 is a Martingale sequence inH that satisfies supi ‖fi‖H ≤ σ almost surely. We have for any > 0, Pr (∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ H ≥ ) ≤ 2 exp ( −n 2 2σ2 ) . Therefore, with probability at least 1− δ, we have ∥∥ 1 n ∑n i=1 fi ∥∥ H ≤ √ 2σ2 log(2/δ) n . Lemma B.2. Let k(x, x′) be a positive definite kernel whose RKHS isHk. Define fi(·) = R̂q(xi, yi)k(xi, ·)−Rπq(xi)k(xi, ·). Assume Assumption 2.1 holds, then {fi}ni=1 is a Martingale difference sequence inHk w.r.t. T<i := (xj , yj)j<i ∪ (xi). That is, E [fi+1(·) | T<i] = 0. In addition,∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ 2 Hk = 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) , and ‖fi‖2Hk ≤ cq,k for ∀i = 1, . . . , n. Proof of Theorem 4.1. Following Lemma B.1 and Lemma B.2, since {fi}ni=1 is a Martingale difference sequence inHk with ‖fi‖Hk ≤ cq,k almost surely, we have with probability at least 1− δ, 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) = ∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ 2 Hk ≤ 2cq,k log(2/δ) n . Using Lemma B.3 below, we have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣ ≤ ∥∥∥∥∥ 1n n∑ i=1 fi ∥∥∥∥∥ Hk ≤ √ 2cq,k log(2/δ) n . This completes the proof. Lemma B.3. Assume k(x, x′) is a positive definite kernel. We have∣∣∣LK(q; D̂n)− L∗K(q; D̂n)∣∣∣2 ≤ 1n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) . Proof. Define ĝ(·) = 1 n n∑ i=1 R̂q(xi, yi)k(xi, ·) g(·) = 1 n n∑ i=1 Rπq(xi)k(xi, ·). Then we have ‖ĝ‖2Hk = 1 n2 n∑ ij=1 R̂q(xi, yi)k(xi, xj)R̂q(xj , yj) = L̂K(q; D̂n), ‖g‖2Hk = 1 n2 n∑ ij=1 Rπq(xi)k(xi, xj)Rπq(xj) = L ∗ K(q; D̂n), ‖ĝ − g‖2Hk = 1 n2 n∑ ij=1 ( R̂q(xi, yi)−Rπq(xi) ) k(xi, xj) ( R̂q(xj , yj)−Rπq(xj) ) . The result then follows the triangle inequality ∣∣‖ĝ‖Hk − ‖g‖Hk ∣∣ ≤ ‖ĝ − g‖Hk . B.1 CALCULATION OF cqπ,k The practical calculation of the coefficient cqπ,k in the concentration inequality was discussed in Feng et al. (2020), which we include here for completeness. Lemma B.4. (Feng et al. (2020) Lemma 3.1) Assume the reward function and kernel function is bounded with supx |r(x)| ≤ rmax and supx,x′ |k(x, x′)| ≤ Kmax, we have: cqπ,k := sup x∈X ,y∈Y (R̂qπ(x, y)) 2k(x, x) ≤ 4Kmaxr 2 max (1− γ)2 . In practice, we get access to Kmax from the kernel function that we choose (e.g., Kmax = 1 for RBF kernels), and rmax from the knowledge on the environment. C MORE ON THE TIGHTNESS OF THE CONFIDENCE INTERVAL The benefit of having both upper and lower bounds is that we can empirically access the tightness of the bound by checking the length of the interval [F̂−Q (ω−), F̂ + Q (ω+)]. 
However, from the theoretical perspective, it is desirable to know a priori that the length of the interval will decrease with a fast rate as the data size n increases. We now show that this is the case ifWo is chosen to be sufficiently rich so that it includes a ω ∈ Wo such that D̂ωn ≈ Dπ . Theorem C.1. AssumeWo is sufficiently rich to include a “good” ω∗ inWo with D̂ω ∗ n ≈ Dπ in that sup q∈Q ∣∣∣ED̂ω∗n [R̂q(x;x′, r)]− EDπ [R̂q(x;x′, r)]∣∣∣ ≤ cnα , (19) where c and α are two positive coefficients. Then we have max { Ĵ+Q,W − Jπ, Jπ − Ĵ−Q,W } ≤ c nα + εn ‖ω∗‖Wo . Assumption (19) holds if D̂n is collected following a Markov chain with certain strong mixing condition and weakly converges to some limit discussion D̂∞ whose support is X , for which we can define ω∗(x) = Dπ(x)/D∞(x). In this case, if Q is a finite ball in RKHS, then we can achieve (19) with α = 1/2, and yields the overall bound of rate O(n−1/2). For more general function classes, α depends on the martingale Rademacher complexity of function set R̂Q = {Rq(x, y) : q ∈ Q} Rakhlin et al. (2015). In our empirical reults, we observe that the gap of the practically constructed bounds tend to follow the O(n−1/2) rate. Proof. Note that Jπ = EDπ [r] = EDπ [r], and IQ(ω; D̂n) = sup q∈Q { ED̂ωn [γq(x ′)− q(x)]− EDπ [γq(x′)− q(x)] } . Because ω∗ ∈ W , we have Ĵ+W,Q − Jπ ≤ F̂+Q (ω∗)− Jπ = ED̂ωn [r]− EDπ [r] + IQ(ωπ; D̂n) + εn ‖ω ∗‖Wo = sup q∈Q { ED̂ωn [ R̂q(x, y) ] − EDπ [ R̂q(x, y) ]} + εn ‖ω∗‖Wo ≤ c nα + εn ‖ω∗‖Wo . The case of lower bound follows similarly. D OPTIMIZATION ONWo Consider the optimization of ω inWo F̂+Q (ω) := 1 n n∑ i=1 riω(xi) + IQ(ω; D̂n) + ‖ω‖Wo √ 2cqπ,k log(2/δ) n (20) AssumeWo is the RKHS of kernel k(x, x̄), that is,Wo = Hk. By the finite representer theorem of RKHS (Smola et al., 2007). the optimization of ω in RKHSHk can be reduced to a finite dimensional optimization problem. Specifically, the optimal solution of (20) can be written into a form of ω(x) = ∑n i=1 k(x, xi)αi with ‖ω‖ 2 Hk = ∑n i,j=1 k(xi, xj)αiαj for some vector α := [αi] n i=1 ∈ Rn. WriteK = [k(xi, xj)]ni,j=1 and r = [ri] n i=1. The optimization of ω reduces to a finite dimensional optimization on α: min α∈Rn 1 n r>Kα+ IQ(Kα; D̂n) + √ αKα √ 2cqπ,k log(2/δ) n , where IQ(Kα; D̂n) = max q∈Q { EDπ,0 [q] + 1 n (T̂q)>Kα } , and T̂q = [γq(x′i) − q(xi)]ni=1. When Q is RKHS, we can calculate IQ(Kα; D̂n) using (22) in section F. This computation can be still expensive when n is large. Fortunately, our confidence bound holds for any ω; better ω only gives tighter bounds, but it is not necessary to find the global optimal ω. Therefore, one can use any approximation algorithm to find ω, which provides a trade-off of tightness and computational cost. We discuss two methods: 1) Approximating α The length of α can be too large when n is large. To address this, we assume αi = g(xi, θ), where g is any parametric function (such as a neural network) with parameter θ which can be much lower dimensional than α. We can then optimize θ with stochastic gradient descent, by approximating all the data averaging 1n ∑n i=1(·) with averages over small mini-batches; this would introduce biases in gradient estimation, but it is not an issue when the goal is only to get a reasonable approximation. 2) Replacing kernel k Assume the kernel k yields a random feature expansion: k(x, x̄) = Eβ∼π[φ(x, β)φ(x̄, β)], where φ(x, β) is a feature map with parameter β and π is a distribution of β. We draw {βi}mi=1 i.i.d. from π, where m is taken to be much smaller than n. 
E CONCENTRATION INEQUALITIES OF GENERAL FUNCTIONAL BELLMAN LOSSES

When $\mathcal{W}$ is a general function set, one can still obtain a concentration bound using Rademacher complexity. Define
$$\hat{R}q \circ \mathcal{W} := \{h(x, y) = \hat{R}q(x, y)\,\omega(x) : \omega \in \mathcal{W}\}.$$
Using the standard derivation in Rademacher complexity theory in conjunction with martingale theory (Rakhlin et al., 2015), we have
$$\sup_{\omega \in \mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^n\left(\hat{R}q(x_i, y_i) - R_\pi q(x_i)\right)\omega(x_i)\right\} \le 2\,\mathrm{Rad}(\hat{R}q \circ \mathcal{W}) + \sqrt{\frac{2 c_{q,\mathcal{W}}\log(2/\delta)}{n}},$$
where $\mathrm{Rad}(\hat{R}q \circ \mathcal{W})$ is the sequential Rademacher complexity as defined in Rakhlin et al. (2015). A triangle inequality yields
$$\left|L_{\mathcal{W}}(q; \hat{\mathcal{D}}_n) - L_{\mathcal{W}}^*(q; \hat{\mathcal{D}}_n)\right| \le \sup_{\omega \in \mathcal{W}}\left\{\frac{1}{n}\sum_{i=1}^n\left(\hat{R}q(x_i, y_i) - R_\pi q(x_i)\right)\omega(x_i)\right\}.$$
Therefore,
$$\left|L_{\mathcal{W}}(q; \hat{\mathcal{D}}_n) - L_{\mathcal{W}}^*(q; \hat{\mathcal{D}}_n)\right| \le 2\,\mathrm{Rad}(\hat{R}q \circ \mathcal{W}) + \sqrt{\frac{2 c_{q,\mathcal{W}}\log(2/\delta)}{n}}, \qquad (21)$$
where $c_{q,\mathcal{W}} = \sup_{\omega \in \mathcal{W}}\sup_{x,y}\left(\hat{R}q(x, y) - R_\pi q(x)\right)^2\omega(x)^2$. When $\mathcal{W}$ equals the unit ball $\mathcal{K}$ of the RKHS related to kernel $k$, we have $c_{q,k} = c_{q,\mathcal{W}}$, and hence this bound is strictly worse than the bound in Theorem 4.1.

F CLOSED FORM OF $I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n)$ WHEN $\mathcal{Q}$ IS AN RKHS

Similar to $L_K(q; \hat{\mathcal{D}}_n)$, when $\mathcal{Q}$ is taken to be the unit ball $\tilde{\mathcal{K}}$ of the RKHS of a positive definite kernel $\tilde{k}(x, \bar{x})$, (7) can be expressed in the bilinear closed form shown in Mousavi et al. (2020):
$$I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n)^2 = A - 2B + C, \qquad (22)$$
where
$$A = \mathbb{E}_{(x,\bar{x}) \sim \mathcal{D}_{\pi,0} \times \mathcal{D}_{\pi,0}}\left[\tilde{k}(x, \bar{x})\right], \quad B = \mathbb{E}_{(x,\bar{x}) \sim \hat{\mathcal{D}}_n^\omega \times \mathcal{D}_{\pi,0}}\left[\hat{T}_\pi^x \tilde{k}(x, \bar{x})\right], \quad C = \mathbb{E}_{(x,\bar{x}) \sim \hat{\mathcal{D}}_n^\omega \times \hat{\mathcal{D}}_n^\omega}\left[\hat{T}_\pi^x \hat{T}_\pi^{\bar{x}} \tilde{k}(x, \bar{x})\right],$$
where $\hat{T}_\pi f(x) = \gamma f(x') - f(x)$; in $\hat{T}_\pi^x \hat{T}_\pi^{\bar{x}} \tilde{k}(x, \bar{x})$, we apply $\hat{T}_\pi^{\bar{x}}$ and $\hat{T}_\pi^x$ in sequential order, by treating $\tilde{k}$ first as a function of $\bar{x}$ and then as a function of $x$.

G MORE ON THE ORACLE BOUND AND ITS DUAL FORM

The oracle bound (18) provides another starting point for deriving optimization-based confidence bounds. We derive its dual form here. Using a Lagrange multiplier, the optimization in (18) can be rewritten as
$$\hat{J}_{\mathcal{Q},*}^+ = \max_{q \in \mathcal{Q}} \min_{\omega} M_*(q, \omega; \hat{\mathcal{D}}_n), \qquad (23)$$
where
$$M_*(q, \omega; \hat{\mathcal{D}}_n) = \mathbb{E}_{\mathcal{D}_{\pi,0}}[q] - \frac{1}{n}\sum_{i=1}^n \omega(x_i)\left(\hat{R}q(x_i, y_i) - \hat{R}q_\pi(x_i, y_i)\right),$$
and $\omega$ now serves as the Lagrange multiplier. By weak duality, we have for any $\omega$,
$$\hat{J}_{\mathcal{Q},*}^+ \le \hat{F}_{\mathcal{Q},*}^+(\omega) := \underbrace{\mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n)}_{\text{known}} + \underbrace{R(\omega, q_\pi)}_{\text{unknown}}, \qquad R(\omega, q_\pi) = \frac{1}{n}\sum_{i=1}^n \omega(x_i)\hat{R}q_\pi(x_i, y_i).$$
The derivation follows similarly for the lower bound. Hence, for any $\omega \in \mathcal{W}_o$, we have $[\hat{J}_{\mathcal{Q},*}^-, \hat{J}_{\mathcal{Q},*}^+] \subseteq [\hat{F}_{\mathcal{Q},*}^-(\omega), \hat{F}_{\mathcal{Q},*}^+(\omega)]$. Here the first two terms of $\hat{F}_{\mathcal{Q},*}^+(\omega)$ can be empirically estimated (they coincide with the first two terms of (16)), but the third term $R(\omega, q_\pi)$ depends on the unknown $q_\pi$ and hence needs to be further upper bounded. Our method can be viewed as constraining $\omega$ to $\mathcal{W}$, assumed to be the unit ball of $\mathcal{W}_o$, and applying a worst-case bound: for all $\omega \in \mathcal{W}_o$,
$$\hat{F}_{\mathcal{Q},*}^+(\omega) = \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n) + R(\omega, q_\pi)$$
$$\le \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n) + \|\omega\|_{\mathcal{W}_o}\sup_{h \in \mathcal{W}} R(h, q_\pi)$$
$$\le \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n) + \|\omega\|_{\mathcal{W}_o} L_{\mathcal{W}}(q_\pi; \hat{\mathcal{D}}_n)$$
$$\le \mathbb{E}_{\hat{\mathcal{D}}_n^\omega}[r] + I_{\mathcal{Q}}(\omega; \hat{\mathcal{D}}_n) + \varepsilon_n\|\omega\|_{\mathcal{W}_o} = \hat{F}_{\mathcal{Q}}^+(\omega) \quad \text{w.p. } 1-\delta,$$
where the last step applies the high-probability bound $\Pr(L_{\mathcal{W}}(q_\pi; \hat{\mathcal{D}}_n) \le \varepsilon_n) \ge 1 - \delta$. A similar derivation on the lower bound counterpart gives
$$\Pr\left(\left[\hat{F}_{\mathcal{Q},*}^-(\omega), \hat{F}_{\mathcal{Q},*}^+(\omega)\right] \subseteq \left[\hat{F}_{\mathcal{Q}}^-(\omega), \hat{F}_{\mathcal{Q}}^+(\omega)\right]\right) \ge 1 - \delta.$$
Therefore, our confidence bound $[\hat{F}_{\mathcal{Q}}^-(\omega), \hat{F}_{\mathcal{Q}}^+(\omega)]$ is, with probability at least $1 - \delta$, an outer bound of the oracle interval: $[\hat{J}_{\mathcal{Q},*}^-, \hat{J}_{\mathcal{Q},*}^+] \subseteq [\hat{F}_{\mathcal{Q},*}^-(\omega), \hat{F}_{\mathcal{Q},*}^+(\omega)] \subseteq [\hat{F}_{\mathcal{Q}}^-(\omega), \hat{F}_{\mathcal{Q}}^+(\omega)]$.
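Returning to the closed form (22) of Section F, the following numpy sketch (our own illustration, under stated assumptions) estimates $I_{\mathcal{Q}}^2$ from samples, assuming a Gaussian RBF kernel $\tilde{k}$, transition features X (current) and Xn (next), positive weights w approximating $\omega(x_i)$, and samples X0 from $\mathcal{D}_{\pi,0}$.

```python
import numpy as np

def rbf(X, Y, h):
    # Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 h^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

def iq_squared(X, Xn, w, X0, gamma, h):
    # Empirical version of Eq. (22): I_Q(omega; D_n)^2 = A - 2B + C, where the
    # operator T_pi^x k(x, xbar) = gamma * k(x', xbar) - k(x, xbar) is applied
    # to each kernel argument in turn.
    w = w * len(w) / w.sum()                      # self-normalize the weights
    A = rbf(X0, X0, h).mean()
    B = (w[:, None] * (gamma * rbf(Xn, X0, h) - rbf(X, X0, h))).mean()
    C = (w[:, None] * w[None, :] * (gamma**2 * rbf(Xn, Xn, h)
         - gamma * rbf(Xn, X, h)
         - gamma * rbf(X, Xn, h)
         + rbf(X, X, h))).mean()
    return A - 2.0 * B + C
```

The double sums make this $O(n^2)$; as noted above, subsampling or the random-feature approximation of Appendix D can reduce the cost.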
Introducing $\mathcal{Q}$ is necessary. Our method does not require any independence assumption between the transition pairs; the trade-off is that we have to assume that $q_\pi$ falls into a function set $\mathcal{Q}$ that imposes a certain smoothness assumption. This is necessary because the data only provide information about $q_\pi$ on a finite number of points, and $q_\pi$ can be arbitrarily non-smooth outside of the data points; hence no reasonable upper/lower bound can be obtained without a smoothness condition that extends the information from the data points to the other points in the domain.

Proposition G.1. Unless $\Pr_{s \sim \mathcal{D}_{\pi,0}}(s \notin \{s_i, s_i'\}_{i=1}^n) = 0$, for any $u \in \mathbb{R}$ there exists a function $q : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ such that
$$\mathbb{E}_{\mathcal{D}_{\pi,0}}[q] = u, \qquad \hat{R}q(x_i, y_i) = \hat{R}q_\pi(x_i, y_i), \quad \forall i = 1, \ldots, n.$$

Proof. Let $\mathcal{Q}_{\mathrm{null}}$ be the set of functions that are zero on $\{s_i, s_i'\}_{i=1}^n$, that is,
$$\mathcal{Q}_{\mathrm{null}} = \left\{g : \mathcal{S} \times \mathcal{A} \to \mathbb{R} \;:\; g(s, a) = 0, \;\forall s \in \{s_i, s_i'\}_{i=1}^n,\, a \in \mathcal{A}\right\}.$$
Then we have
$$\hat{R}(q_\pi + g)(x_i, y_i) = \hat{R}q_\pi(x_i, y_i), \quad \forall i = 1, \ldots, n,$$
and
$$\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g] = \mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi] + \mathbb{E}_{\mathcal{D}_{\pi,0}}[g] = J_\pi + \mathbb{E}_{\mathcal{D}_{\pi,0}}[g].$$
Take $g(s, a) = z\,\mathbb{I}(s \notin \{s_i, s_i'\}_{i=1}^n)$, where $z$ is any real number. Then
$$\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g] = J_\pi + z\,\Pr_{s \sim \mathcal{D}_{\pi,0}}(s \notin \{s_i, s_i'\}_{i=1}^n).$$
Because $\Pr_{s \sim \mathcal{D}_{\pi,0}}(s \notin \{s_i, s_i'\}_{i=1}^n) \ne 0$, we can choose $z$ so that $\mathbb{E}_{\mathcal{D}_{\pi,0}}[q_\pi + g]$ takes any desired value.

H ABLATION STUDY AND EXPERIMENTAL DETAILS

H.1 EXPERIMENTAL DETAILS

Environments and Dataset Construction. We test our method on three environments: Inverted-Pendulum and CartPole from OpenAI Gym (Brockman et al., 2016), and a Type-1 Diabetes medical treatment simulator. For Inverted-Pendulum we discretize the action space to be $\{-1, -0.3, -0.2, 0, 0.2, 0.3, 1\}$. The action spaces of CartPole and the medical treatment simulator are both discrete.

Policy Construction. We follow a setup similar to Feng et al. (2020) to construct the behavior and target policies. For all environments, we constrain our policy class to softmax policies and use PPO (Schulman et al., 2017) to train a good policy $\pi$; we then use different temperatures of the softmax policy to construct the target and behavior policies (we set the temperature to $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy, so that the target policy is more deterministic than the behavior policy). We consider other choices of behavior policies in Section H.3. For horizon lengths, we fix $\gamma = 0.95$ and set the horizon length to $H = 50$ for Inverted-Pendulum, $H = 100$ for CartPole, and $H = 50$ for the Diabetes simulator.

Algorithm Settings. We test the bound in Eq. (16)-(17). Throughout the experiments, we always set $\mathcal{W} = \mathcal{K}$, the unit ball of the RKHS with kernel $k(\cdot, \cdot)$. We set $\mathcal{Q} = r_{\mathcal{Q}}\tilde{\mathcal{K}}$, the zero-centered ball of radius $r_{\mathcal{Q}}$ in an RKHS with kernel $\tilde{k}(\cdot, \cdot)$. We take both $k$ and $\tilde{k}$ to be Gaussian RBF kernels. The bandwidths of $k$ and $\tilde{k}$ are selected to ensure that the functional Bellman loss is not large on a validation set. The radius is selected to be sufficiently large to ensure that $q^*$ is included in $\mathcal{Q}$: we use the data to fit a $\hat{q}$ whose functional Bellman loss is smaller than $\varepsilon_n$, and then set $r_{\mathcal{Q}} = 10\,\|\hat{q}\|_{\tilde{\mathcal{K}}}$. We optimize $\omega$ using the random feature approximation method described in Appendix D. Once $\omega_+$ and $\omega_-$ are found, we evaluate the bound in Eq. (16) exactly, to ensure that the theoretical guarantee holds.
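As a small illustration of the policy construction above (our own sketch, with hypothetical names), the temperature-scaled softmax policy can be implemented as follows.

```python
import numpy as np

def softmax_policy(f_values, tau):
    # pi_tau(a|s) proportional to exp(f(s, a) / tau); a smaller temperature tau
    # yields a more deterministic policy (tau = 0.1 target, tau = 1 behavior).
    z = f_values / tau
    z = z - z.max()                 # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Example: the same logits induce a sharper target and a softer behavior policy.
logits = np.array([2.0, 1.0, 0.5])
target_probs = softmax_policy(logits, tau=0.1)
behavior_probs = softmax_policy(logits, tau=1.0)
```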
H.2 SENSITIVITY TO HYPER-PARAMETERS

We investigate the sensitivity of our algorithm to the choice of hyper-parameters, which mainly concerns how we choose the function classes $\mathcal{Q}$ and $\mathcal{W}$.

Radius of $\mathcal{Q}$. Recall that we choose $\mathcal{Q}$ to be a ball in an RKHS with radius $r_{\mathcal{Q}}$, that is, $\mathcal{Q} = r_{\mathcal{Q}}\tilde{\mathcal{K}} = \{r_{\mathcal{Q}}f : f \in \tilde{\mathcal{K}}\}$, where $\tilde{\mathcal{K}}$ is the unit ball of the RKHS with kernel $\tilde{k}$. Ideally, we want to ensure $r_{\mathcal{Q}} \ge \|q^*\|_{\tilde{\mathcal{K}}}$ so that $q^* \in \mathcal{Q}$. Since it is hard to analyze the behavior of the algorithm when $q^*$ is unknown, we consider a synthetic environment where the true $q^*$ is known. This is done by explicitly specifying a $q^*$ inside $\tilde{\mathcal{K}}$ and then inferring the corresponding deterministic reward function by inverting the Bellman equation:
$$r(x) := q^*(x) - \gamma\,\mathbb{E}_{x' \sim P_\pi(\cdot|x)}[q^*(x')].$$
Here $r$ is a deterministic function, instead of a random variable, with an abuse of notation. In this way, we get access to the true RKHS norm of $q^*$: $\rho^* = \|q^*\|_{\tilde{\mathcal{K}}}$. For simplicity, we set both the state space $\mathcal{S}$ and the action space $\mathcal{A}$ to be $\mathbb{R}$ and use a Gaussian policy $\pi(a|s) \propto \exp(f(s, a)/\tau)$, where $\tau$ is a positive temperature parameter; we set $\tau = 0.1$ for the target policy and $\tau = 1$ for the behavior policy. Figure 3 shows the results as we set $r_{\mathcal{Q}}$ to $\rho^*$, $10\rho^*$, and $100\rho^*$, respectively. We can see that the tightness of the bound is affected significantly by the radius when the number $n$ of samples is very small. However, as $n$ grows (e.g., $n \ge 2\times 10^3$ in our experiment), the length of the bounds becomes less sensitive to the choice of the predefined radius of $\mathcal{Q}$. A code sketch of this synthetic construction is given at the end of this section.

Similarity Between Behavior Policy and Target Policy. We study the effect of changing the temperature of the behavior policy, testing on the Inverted-Pendulum environment as previously described. Not surprisingly, the closer the behavior policy is to the target policy (with temperature $\tau = 0.1$), the tighter our confidence interval is, as observed in Figure 4(a).

Bandwidth of RBF Kernels. We study the results as we change the bandwidths of the kernels $k$ and $\tilde{k}$ for $\mathcal{W}$ and $\mathcal{Q}$, respectively. Figure 4(b) shows the length of the confidence interval for different bandwidth pairs in the Inverted-Pendulum environment. We obtain relatively tight confidence bounds as long as the bandwidths are set in a reasonable region (e.g., the bandwidth of $k$ in $[0.1, 0.5]$ and the bandwidth of $\tilde{k}$ in $[0.5, 3]$).
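As referenced above, here is a minimal sketch (our illustration, for a finite toy MDP rather than the continuous setting of the study) of inverting the Bellman equation to obtain a deterministic reward from a specified $q^*$.

```python
import numpy as np

def reward_from_q_star(q_star, P_pi, gamma):
    # r(x) = q*(x) - gamma * E_{x' ~ P_pi(.|x)}[q*(x')], vectorized over states.
    # q_star: (n_states,) values; P_pi: (n_states, n_states) row-stochastic matrix.
    return q_star - gamma * P_pi @ q_star

# Sanity check on a random toy MDP: the induced r reproduces q* via Bellman.
rng = np.random.default_rng(0)
P = rng.random((5, 5)); P /= P.sum(axis=1, keepdims=True)
q = rng.normal(size=5)
r = reward_from_q_star(q, P, gamma=0.95)
assert np.allclose(q, r + 0.95 * P @ q)
```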
H.3 SENSITIVITY TO THE DATA COLLECTION PROCEDURE

We investigate the sensitivity of our method as we use different behavior policies to collect the dataset $\hat{\mathcal{D}}_n$.

Varying Behavior Policies. We study the effect of using different behavior policies. We consider the following cases:

1. Data is collected from a single behavior policy of the form $\pi_\alpha = \alpha\pi + (1-\alpha)\pi_0$, where $\pi$ is the target policy and $\pi_0$ is another policy. We construct $\pi$ and $\pi_0$ to be Gaussian policies of the form $\pi(a|s) \propto \exp(f(s, a)/\tau)$ with different temperatures $\tau$: $\tau = 0.1$ for the target policy and $\tau = 1$ for $\pi_0$.

2. The dataset $\hat{\mathcal{D}}_n$ is the combination of the data collected from multiple behavior policies of the form $\pi_\alpha$ defined above, with $\alpha \in \{0.0, 0.2, 0.4, 0.6, 0.8\}$.

Figure 5(a) shows the length of the confidence intervals produced by our method as we vary the number $n$ of transition pairs and the mixture rate $\alpha$. The length of the interval decays with the sample size $n$ for all mixture rates $\alpha$; larger $\alpha$ yields better performance because the behavior policies are closer to the target policy.

Varying Trajectory Length $T$ in $\hat{\mathcal{D}}_n$. As we collect $\hat{\mathcal{D}}_n$, we can either have a small number of long trajectories, or a larger number of short trajectories. In Figure 5(b)-(c), we vary the length $T$ of the trajectories as we collect $\hat{\mathcal{D}}_n$, while fixing the total number $n$ of transition pairs; the number of trajectories in each $\hat{\mathcal{D}}_n$ is then $m = n/T$. We can see that the trajectory length does not impact the results significantly, especially when the length is reasonably large (e.g., $T \ge 20$).

I MORE RELATED WORKS

We give a more detailed overview of different approaches to uncertainty estimation in OPE.

Finite-Horizon Importance Sampling (IS). Assume the data is collected by rolling out a known behavior policy $\pi_0$ up to a trajectory length $T$; then we can estimate the finite-horizon reward by changing $\mathbb{E}_{\pi,P}[\cdot]$ to $\mathbb{E}_{\pi_0,P}[\cdot]$ with importance sampling (e.g., Precup et al., 2000; Precup, 2001; Thomas et al., 2015a;b). Taking trajectory-wise importance sampling as an example, assume we collect a set of independent trajectories $\tau_i := \{s_t^i, a_t^i, r_t^i\}_{t=0}^{T-1}$, $i = 1, \ldots, m$, up to a trajectory length $T$ by unrolling a known behavior policy $\pi_0$. When $T$ is large, we can estimate $J_\pi$ by a weighted average:
$$\hat{J}^{\mathrm{IS}} = \frac{1}{m}\sum_{i=1}^m \omega(\tau_i)\,J(\tau_i), \quad \text{where } \omega(\tau_i) = \prod_{t=0}^{T-1}\frac{\pi(a_t^i|s_t^i)}{\pi_0(a_t^i|s_t^i)}, \quad J(\tau_i) = \sum_{t=0}^{T-1}\gamma^t r_t^i. \qquad (24)$$
One can construct non-asymptotic confidence bounds based on $\hat{J}^{\mathrm{IS}}$ using variants of concentration inequalities (Thomas, 2015; Thomas et al., 2015b). Unfortunately, a key problem with this IS estimator is that the importance weight $\omega(\tau_i)$ is a product of density ratios over time, and hence tends to suffer from exploding variance when the trajectory length $T$ is large. Although improvements can be made by using per-step and self-normalized weights (Precup, 2001), or control variates (Jiang & Li, 2016; Thomas & Brunskill, 2016), the curse of horizon remains a key issue for classical IS-based estimators (Liu et al., 2018a). Moreover, due to the time dependency between the transition pairs inside each trajectory, the non-asymptotic concentration bounds can only be applied at the trajectory level, and hence decay with the number $m$ of independent trajectories at an $O(1/\sqrt{m})$ rate, while $m$ can be small in practice. We could in principle apply concentration inequalities for Markov chains (e.g., Paulin, 2015) to the time-dependent transition pairs, but such inequalities require an upper bound on a certain mixing coefficient of the Markov chain, which is unknown and hard to construct empirically. Our work addresses these limitations by constructing a non-asymptotic bound that decays with the number $n = mT$ of transition pairs, without requiring known behavior policies or independent trajectories. A sketch of the estimator in (24) is given below.
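For reference, a minimal sketch of the trajectory-wise IS estimator (24), assuming each trajectory is a list of (s, a, r) tuples and that pi(a, s) and pi0(a, s) return action probabilities (hypothetical interfaces of our own):

```python
import numpy as np

def is_estimate(trajectories, pi, pi0, gamma):
    # Trajectory-wise importance sampling, Eq. (24):
    # J_IS = (1/m) * sum_i w(tau_i) * J(tau_i).
    vals = []
    for tau in trajectories:
        w, J = 1.0, 0.0
        for t, (s, a, r) in enumerate(tau):
            w *= pi(a, s) / pi0(a, s)    # cumulative density ratio over time
            J += gamma**t * r            # discounted return of the trajectory
        vals.append(w * J)
    return np.mean(vals)
```

The product form of w is exactly where the variance explosion discussed above comes from: each additional time step multiplies in another density ratio as T grows.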
Infinite-Horizon, Behavior-Agnostic OPE. Our work is closely related to recent advances in infinite-horizon, behavior-agnostic OPE, including, for example, Liu et al. (2018a); Feng et al. (2019); Tang et al. (2020a); Mousavi et al. (2020); Liu et al. (2020); Yang et al. (2020b); Xie et al. (2019); Yin & Wang (2020), as well as the DICE family (e.g., Nachum et al., 2019a;b; Zhang et al., 2020a; Wen et al., 2020; Zhang et al., 2020b). These methods are based on estimating either the value function or the stationary visitation distribution, which are shown to form a primal-dual relation (Tang et al., 2020a; Uehara et al., 2020; Jiang & Huang, 2020) that we elaborate on in depth in Section 3. Besides Feng et al. (2020), which directly motivated this work, there has been a recent surge of interest in interval estimation under infinite-horizon OPE (e.g., Liu et al., 2018b; Jiang & Huang, 2020; Duan et al., 2020; Dai et al., 2020; Feng et al., 2020; Tang et al., 2020b; Yin et al., 2020; Lazic et al., 2020). For example, Dai et al. (2020) develop an asymptotic confidence bound (CoinDice) for DICE estimators under an i.i.d. assumption on the off-policy data; Duan et al. (2020) provide data-dependent confidence bounds based on fitted Q-iteration (FQI) with linear function approximation, when the off-policy data consists of a set of independent trajectories; Jiang & Huang (2020) provide a minimax method closely related to ours, but do not provide an analysis of the data error; Tang et al. (2020b) propose a fixed-point algorithm for constructing deterministic intervals of the true value function when the reward and transition models are deterministic and the true value function has a bounded Lipschitz norm.

Model-Based Methods. Since the model $P$ is the only unknown variable, we can construct an estimator $\hat{P}$ of $P$ using maximum likelihood estimation or other methods, and plug it into Eq. (1) to obtain a plug-in estimator $\hat{J} = J_{\pi,\hat{P}}$. This yields the model-based approach to OPE (e.g., Jiang & Li, 2016; Liu et al., 2018b). One can also estimate the uncertainty in $J_{\pi,\hat{P}}$ by propagating the uncertainty in $\hat{P}$ (e.g., Asadi et al., 2018; Duan et al., 2020), but it is hard to obtain non-asymptotic and computationally efficient bounds unless $\hat{P}$ is assumed to be a simple linear model. In general, estimating the whole model $P$ can be an unnecessarily complicated intermediate step for the possibly simpler problem of estimating $J_{\pi,P}$.

Bootstrapping, Bayes, Distributional RL. As a general approach to uncertainty estimation, bootstrapping has been used for interval estimation in RL in various ways (e.g., White & White, 2010; Hanna et al., 2017; Kostrikov & Nachum, 2020; Hao et al., 2021). Bootstrapping is simple and highly flexible, and can be applied to time-dependent data (as arises in RL) using variants of block bootstrapping (e.g., Lahiri, 2013; White & White, 2010). However, bootstrapping typically only provides asymptotic guarantees; although non-asymptotic bounds for the bootstrap exist (e.g., Arlot et al., 2010), they are sophisticated, difficult to use in practice, and would require knowledge of the mixing condition of the dependent data. Moreover, bootstrapping is time-consuming, since it requires repeating the whole off-policy evaluation pipeline on a large number of resampled datasets. Bayesian methods (e.g., Engel et al., 2005; Ghavamzadeh et al., 2016b; Yang et al., 2020a) offer another general approach to uncertainty estimation in RL, but require approximate inference algorithms and do not come with non-asymptotic frequentist guarantees. In addition, distributional RL (e.g., Bellemare et al., 2017) seeks to quantify the intrinsic uncertainty inside the Markov decision process, which is orthogonal to the epistemic uncertainty that we consider in off-policy evaluation.
1. How does the proposed approach improve upon previous methods in terms of sample efficiency and optimization? 2. Can the method provide point estimates of policy values, and if so, how would they be derived? 3. What are the key insights or findings from the ablation study in Appendix H that could be highlighted in the main body of the paper?
Review
Review This paper proposes an approach to construct confidence intervals for off-policy evaluation using finite samples. The paper improves the bound of a previous paper from $O(n^{-1/4})$ to $O(n^{-1/2})$ and avoids solving for a global optimum by introducing the dual. It is also noted that the results are not restricted to independent data. The authors further show the advantage of their method compared to existing baselines in simulations, where their approach demonstrates good coverage and tight bounds. The paper is well written. I have some comments/thoughts as below:
- How would a point estimate of the policy value be derived using such an approach? In many cases, it is also desirable to give a point estimate so that we can compute the MSE, etc.
- It may be worth mentioning the findings (or some intuition) of the ablation study in Appendix H in the main body to be more educational, such as how the overlap between the behavior policy and the target policy influences the results.
Please address and clarify these points above.
ICLR
Title
SHAMANN: Shared Memory Augmented Neural Networks

Abstract
Current state-of-the-art methods for semantic segmentation use deep neural networks to learn the segmentation mask from the input image signal as an image-to-image mapping. While these methods effectively exploit global image context, the learning and computational complexities are high. We propose shared memory augmented neural network actors as a dynamically scalable alternative. Based on a decomposition of the image into a sequence of local patches, we train such actors to sequentially segment each patch. To further increase the robustness and better capture shape priors, an external memory module is shared between different actors, providing an implicit mechanism for image information exchange. Finally, the patch-wise predictions are aggregated to a complete segmentation mask. We demonstrate the benefits of the new paradigm on a challenging lung segmentation problem based on chest X-Ray images, as well as on two synthetic tasks based on the MNIST dataset. On the X-Ray data, our method achieves state-of-the-art accuracy with a significantly reduced model size compared to reference methods. In addition, we reduce the number of failure cases by at least half.

1 INTRODUCTION

In the medical image analysis domain, the automatic parsing of medical images represents a fundamental task that impacts the efficiency of the entire clinical workflow, from diagnosis to therapy planning, intervention, and follow-up investigations. An essential step in this sense is the semantic segmentation of anatomical structures, which supports the radiologist in reading and understanding the image content. Recent approaches are inspired by the vision domain and rely on fully convolutional neural networks, e.g., (Ronneberger et al., 2015; Yang et al., 2017), to achieve state-of-the-art results on various segmentation problems (Menze et al., 2015). Usually, these methods use the entire image to directly predict the complete segmentation mask. While this facilitates the incorporation of valuable global image context, it also increases the complexity of the learning task, requiring the models to capture the complete variability in the shape and structure of different objects. In addition, this strategy does not scale well to (volumetric) high-resolution data due to memory limitations.
In this paper, we propose a new paradigm for semantic medical image segmentation based on a novel neural architecture called shared memory augmented neural network (SHAMANN). Based on a decomposition of the original image into a sequence of image subregions, e.g., local patches, we define different so-called SHAMANN actors, which traverse the sequence differently and segment each image subregion. An external memory module enables each actor to capture relations between different subregions within its sequence and increase the robustness of its predictions. In particular, this external module is shared among all actors and serves as a means to implicitly exchange local image context information in order to better capture global image properties, such as shape priors. Finally, the predictions of all actors are fused to obtain a semantic segmentation mask for the original image. An overview of the proposed framework with two SHAMANN actors is given in Figure 1. The contributions of our work are: (i) a reformulation of the semantic segmentation problem as a sequence learning task; (ii) SHAMANN - a memory-efficient and dynamically scalable alternative to end-to-end fully convolutional segmentation networks, which can also implicitly capture global image properties through a shared external memory module; and (iii) a comprehensive analysis of the method and a comparison against state-of-the-art methods on a large chest X-Ray dataset.

2 RELATED WORK

Segmentation. In the fields of computer vision and medical imaging, segmentation is a fundamental task for understanding the semantic content of an image. State-of-the-art results on different segmentation benchmarks (Cordts et al., 2015; Everingham et al., 2015) have been achieved by using fully convolutional neural networks (He et al., 2017; Shelhamer et al., 2017). However, one limitation of such networks is the use of pooling layers: by down-sampling and increasing the field-of-view, precise localization information is lost. To tackle this issue, two different approaches have been proposed. First, encoder-decoder architectures, e.g., U-NET (Ronneberger et al., 2015), recover the details and spatial dimension using de-convolutions and shortcut connections (Badrinarayanan et al., 2017; Lin et al., 2017; Yang et al., 2017). The alternative is to use dilated convolutions to increase the field-of-view without decreasing the spatial dimension (Chen et al., 2018; Peng et al., 2017; Yu et al., 2017; Yu & Koltun, 2015; Zhao et al., 2017). In this context, graphical models such as conditional random fields (Lafferty et al., 2001) are used to further improve the results. In the medical context, a standard approach for medical segmentation is multi-atlas label propagation (MALP) (Wang et al., 2013; Zikic et al., 2013). In MALP, a collection of atlases, i.e., labeled images, is required. At runtime, one needs to perform expensive non-linear registration operations of each atlas to unseen data to achieve a segmentation. These solutions typically scale poorly and are inefficient. Alternatively, one can address the segmentation problem by using random forests (Glocker et al., 2012), providing stronger unary predictions through joint class and shape regression. Milletari et al. (2017) employed an additional patch-voting scheme to increase the robustness against outliers. Other approaches use linear statistical shape models (SSM) to incorporate prior information (Heimann & Meinzer, 2009).
In marginal space deep learning (Ghesu et al., 2016), SSMs have been coupled with deep learning to enable the segmentation of anatomical structures. While these methods provide good results and are relatively easy to train, they do not exploit global anatomical information. In addition, the inference is time-consuming, especially for 3D images.

Memory networks. Recently, neural networks have been augmented with an external memory module to decouple the memorization capacity from the network parameters, making these methods better suited for capturing long-range dependencies in sequential data. These networks have been used in the context of classification (Vinyals et al., 2016), meta-learning (Santoro et al., 2016; Sprechmann et al., 2018), reinforcement learning (Mnih et al., 2015; Pritzel et al., 2017), graph problems (Graves et al., 2016), and question answering (Graves et al., 2016; Sukhbaatar et al., 2015), to name a few. Closest to our work are generative methods (van den Oord et al., 2016), which model the conditional probability of a pixel based on previous pixels, using LSTMs. In contrast, we propose a sequence learning task for image segmentation and show that the memorization capacity can be improved using a shared external memory. Bahdanau et al. (2014) and Wang et al. (2016) proposed a memory-based strategy for the task of machine translation. They use a bidirectional RNN to encode the input and save the concatenation of the outputs of the two units in a memory. After the sequence is processed and saved in the memory, a decoder reads from the memory and outputs the final predictions. In contrast, our proposed method allows information exchange between SHAMANN actors while processing the input sequence, thereby enabling each actor to access global image context. To the best of our knowledge, this is the first paper to propose a method based on memory networks for the task of image segmentation.

3 PROPOSED METHODS

In this section, we present our main contribution, the shared memory augmented neural networks (SHAMANN) architecture for semantic segmentation. Our observation is that in a bidirectional setup, information from different directions is not being explicitly exchanged. We hypothesize that by sharing an external memory, our networks can better capture global context, leading to a more accurate segmentation.

3.1 PROBLEM FORMULATION

In the following, $x$ and $x^\top$ denote a row and a column vector, respectively, and $A$ a matrix. The following formulations focus on, but are not limited to, 2D images. Formally, let us consider an input image $I : \Omega \to \mathbb{R}^{H_I \times W_I \times C}$, with $\Omega \subset \mathbb{R}^2$ the image domain, and $H_I$, $W_I$, and $C$ denoting the height, width, and number of channels of the image signal. The goal of the segmentation task is to assign a label to every pixel/voxel $\mathbf{x}$ in the input image, considering a predefined set of $K$ object classes $\{y_1, \ldots, y_K\}$. The segmentation result can be represented as a set of segmentation channels $Y : \Omega \to \mathbb{R}^{H_I \times W_I \times K}$, where the value of a pixel $(x, y)$ in a given channel $k$ encodes the probability of observing class $y_k$. A final segmentation mask can then be obtained by applying a softmax function along the class-specific channels. In this work, we propose to reformulate the segmentation problem as a sequential learning task. Let us consider a sequence of $T$ patches $\mathcal{P} = \{P_0, \ldots, P_T\}$ covering the image domain, with $P_t : \Omega \to \mathbb{R}^{H_p \times W_p \times C}$, where $H_p$, $W_p$ are the height and width of the patch. For example, these patches may be extracted using uniform sampling, as sketched below.
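A minimal sketch (our illustration, not the authors' code) of uniform patch extraction with a square patch size and stride:

```python
import numpy as np

def extract_patches(image, patch, stride):
    # Slide a (patch x patch) window over the image with the given stride and
    # return the resulting patch sequence, shape (T, patch, patch, C).
    H, W = image.shape[:2]
    seq = [image[y:y + patch, x:x + patch]
           for y in range(0, H - patch + 1, stride)
           for x in range(0, W - patch + 1, stride)]
    return np.stack(seq)

# Example: a 1024x1024 single-channel image with 160x160 patches and stride 80.
# Note: this plain window yields 11x11 = 121 patches; the 13x13 = 169 patches
# reported in Section 4.1 suggest additional border padding in the pipeline.
img = np.zeros((1024, 1024, 1))
print(extract_patches(img, patch=160, stride=80).shape)
```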
We propose to learn a function $f$ that maps the sequence of input patches to a sequence of patch segmentation masks as $f(P_t)_{t=0}^T = (\Phi_t)_{t=0}^T$, with $f : \mathbb{R}^{T \times H_p \times W_p \times C} \to \mathbb{R}^{T \times H_p \times W_p \times K}$.

3.2 ARCHITECTURE OVERVIEW

In this section, we introduce the components of our model in more detail (see Figure 1). The encoder extracts a rich visual representation from the raw patch intensities. We model it as a function (e.g., a convolutional network) mapping the input to a $d$-dimensional feature space: $E(P_t)_{t=0}^T = (\psi_t)_{t=0}^T$, with $E : \mathbb{R}^{H_p \times W_p \times C} \to \mathbb{R}^d$. The actor module, defined as component 2, processes the sequence of input feature vectors $\Psi = \{\psi_0, \ldots, \psi_T\}$ and captures distal spatial dependencies. Each actor scans the input sequence $\Psi$ differently, producing an output sequence $H^J = \{h_0^J, \ldots, h_T^J\}$, with $h^J \in \mathbb{R}^d$. Here, we use two actors, one scanning the input in the forward direction ($J := F$) and the other in the backward direction ($J := B$). The patch-level fusion step combines the outputs of the actors as $\sigma(H^F \oplus H^B) = H$, with $\sigma : \mathbb{R}^{2 \times d} \to \mathbb{R}^d$ and $\oplus$ the concatenation operator. The mapping $\sigma$ could be a simple function, e.g., an average or a concatenation operation. In our work, we propose to explicitly learn how to combine the different outputs using a neural network with a single fully connected layer. The decoder maps the fused outputs of the actors to patch segmentation masks as $D(h_t)_{t=0}^T = (\Phi_t)_{t=0}^T$, with $D : \mathbb{R}^d \to \mathbb{R}^{H_p \times W_p \times K}$. In the final image-level fusion step (see component 4), all patch segmentation masks $\Phi = \{\Phi_0, \ldots, \Phi_T\}$ are aggregated over the full image domain to generate the final segmentation mask $Y$. For fusion, we propose to use averaging (Iglesias & Sabuncu, 2015).

3.3 IMAGE SEGMENTATION AS A SEQUENTIAL LEARNING TASK

In the following sections, we present three different alternatives for the actor module: the bidirectional long short-term memory units (Bi-LSTM), described in Section 3.3.1; the bidirectional memory-augmented neural networks (Bi-MANN), described in Section 3.3.2; and our proposed SHAMANN framework (see Section 3.3.3).

3.3.1 BIDIRECTIONAL LONG SHORT-TERM MEMORY NETWORKS: BI-LSTM

One of the most common challenges in training a recurrent neural network is the vanishing gradient effect. To address this challenge, LSTM units were proposed by Hochreiter & Schmidhuber (1997). These units have achieved high performance on real-world problems such as image captioning (Vinyals et al., 2015). The core element of the LSTM unit is the memory cell $c_t$, which is an abstract representation of the previously observed input. The computation of the output $h_t$ and cell state $c_t$ can be summarized as:
$$[h_t, c_t] = \mathrm{LSTM}(\psi_t, h_{t-1}, c_{t-1}), \qquad (1)$$
where LSTM stands for the gated processing structure. The output of an LSTM unit is the sequence of output vectors $H = \{h_0, \ldots, h_T\}$. The bidirectional LSTM processes the sequence data in both forward and backward directions with separate LSTM units. Thus, the forward LSTM unit processes the input sequence $\Psi^F = \{\psi_0, \ldots, \psi_T\}$ and produces the output sequence $H^F = \{h_0^F, \ldots, h_T^F\}$, while the backward LSTM unit processes the reverse input sequence $\Psi^B = \{\psi_T, \ldots, \psi_0\}$ and produces the output sequence $H^B = \{h_T^B, \ldots, h_0^B\}$. The final output of the Bi-LSTM is given by $H = \sigma(H^F \oplus H^B)$, where $\oplus$ denotes the concatenation operator.
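As an illustration (a minimal PyTorch sketch under our own assumptions, not the authors' implementation), the two directional actors with the learned fusion layer $\sigma$ could look as follows.

```python
import torch
import torch.nn as nn

class BiLSTMActors(nn.Module):
    """Two actors scan the encoded patch sequence in opposite directions;
    a single fully connected layer fuses their outputs (the sigma above)."""
    def __init__(self, d):
        super().__init__()
        self.fwd = nn.LSTM(d, d, batch_first=True)
        self.bwd = nn.LSTM(d, d, batch_first=True)
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, psi):                  # psi: (batch, T, d) encoded patches
        h_f, _ = self.fwd(psi)               # forward scan
        h_b, _ = self.bwd(psi.flip(1))       # backward scan on reversed sequence
        h_b = h_b.flip(1)                    # re-align with the forward outputs
        return self.fuse(torch.cat([h_f, h_b], dim=-1))   # (batch, T, d)
```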
3.3.2 BIDIRECTIONAL MEMORY-AUGMENTED NEURAL NETWORKS: BI-MANN

One limitation of the Bi-LSTM is that the number of network parameters grows proportionally to the memorization capacity, making it unsuitable for sequences with long-range dependencies. Such dependencies often occur in our formulation of the segmentation task, depending on the image content, the sequence length, and the patch size. One can alleviate this issue and increase the memorization capacity by making use of an external memory. These networks, called memory-augmented neural networks (MANNs), use a controller network, i.e., an LSTM, to access an external, addressable memory $M \in \mathbb{R}^{Q \times N}$, where $N$ is the number of memory cells and $Q$ is the dimension of each cell (Graves et al., 2016). Following these principles, we propose to enhance each actor with an external memory capability. Figure 2 illustrates how a forward actor addresses such a memory module to perform a sequential segmentation task.

At every time iteration $t$, the actor produces write and read heads to interact with a small portion of the memory, constrained by weights associated with previous observations. The write operation uses the write weights $w_t^w \in \mathbb{R}^N$ to remove content from the memory with an erase vector $e_t \in [0, 1]^Q$, and then writes the add vector $a_t \in \mathbb{R}^Q$:
$$M_t[i] \leftarrow (\mathbf{1} - w_t^w[i] \cdot e_t) \circ M_{t-1}[i] + w_t^w[i] \cdot a_t,$$
where $\circ$ and $\cdot$ denote element-wise and scalar multiplication, respectively, and $\mathbf{1} \in \mathbb{R}^Q$ is a vector of ones. Similarly, the output of a read operation using the read weights $w_t^r \in \mathbb{R}^N$ is the weighted sum over the memory locations: $r_t(M) = \sum_{i=1}^N w_t^r[i] \cdot M_t[i]$. We use content lookup to define the read weights, in which a key $k_t^r \in \mathbb{R}^Q$ emitted by the actor is compared to the content of each memory location. The attention score for a read operation at row $i$ is
$$w_t^r[i] = \frac{\exp(F(k_t^r, M_t[i]))}{\sum_{j=1}^N \exp(F(k_t^r, M_t[j]))},$$
where $F$ computes the similarity between two vectors, i.e., cosine similarity, and $[\cdot]$ is the row operator. The content lookup weights $\hat{w}_t^w \in \mathbb{R}^N$ allow the write operation to update content in the memory. In order to also allocate new memory slots, we extend the addressing with a mechanism that returns the most unused location $\tilde{w}_t^w \in \{0, 1\}^N$ (as a one-hot vector). At every iteration, the write operation uses an allocation gate $\alpha$ to either update the content of a location or write to a new, unused location: $w_t^w = \alpha\hat{w}_t^w + (1 - \alpha)\tilde{w}_t^w$. The read and write keys, the erase and add vectors, and the allocation gate are linear mappings of the memory cell of an actor.

We extend MANNs to a bidirectional formulation, where two actors, each with its own external memory module, scan the input sequence in the forward ($J := F$) and backward ($J := B$) directions and produce the output and memory cell at time $t$ as:
$$[g_t^J, c_t^J] = \mathrm{LSTM}(\psi_t \oplus r_{t-1}(M^J), g_{t-1}^J, c_{t-1}^J), \qquad (2)$$
where $g_t^J = W_g^J(h_t^J \oplus r_t(M^J)) + b_g^J$ are linear mappings of the concatenation of the output vectors and the currently read information from the memory module. The final output of Bi-MANN is given by $H = \sigma(\{g_0^F, g_1^F, \ldots, g_T^F\} \oplus \{g_T^B, g_{T-1}^B, \ldots, g_0^B\})$.
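To make the addressing concrete, here is a minimal PyTorch sketch (our own illustration, storing the memory row-wise as an (N, Q) tensor) of the content-based read and the erase/add write described above.

```python
import torch
import torch.nn.functional as F

def memory_read(M, key):
    # Content lookup: softmax over cosine similarities between the read key
    # (Q,) and each of the N memory rows of M (N, Q); returns r_t (Q,).
    w_r = torch.softmax(F.cosine_similarity(M, key.unsqueeze(0), dim=-1), dim=0)
    return w_r @ M

def memory_write(M, w_w, erase, add):
    # M_t[i] <- (1 - w_w[i] * e_t) * M_{t-1}[i] + w_w[i] * a_t, with write
    # weights w_w (N,), erase vector e_t (Q,), and add vector a_t (Q,).
    return M * (1.0 - w_w[:, None] * erase[None, :]) + w_w[:, None] * add[None, :]
```

In the shared variant of the next section, both directional actors would call these operations on the same M, alternating forward then backward.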
3.3.3 SHARED MEMORY-AUGMENTED NEURAL NETWORKS: SHAMANN

While the external memory module addresses the limited memorization capability of standard Bi-LSTM units, the sequence processing by the different actors remains suboptimal, in the sense that there is no active exchange of context information between them. Our hypothesis is that through such an exchange, individual actors can observe more global context, leading to a more robust segmentation. With this in mind, we propose to share the external memory module between the actors. By reading and writing information to the same memory module, the actors can interact in an implicit way. The output and memory cell at time iteration $t$ are defined as follows:
$$[g_t^J, c_t^J] = \mathrm{LSTM}(\psi_t \oplus r_{t-1}(M), g_{t-1}^J, c_{t-1}^J), \qquad (3)$$
where $g_t^J = W_g^J(h_t^J \oplus r_t(M)) + b_g^J$ are linear mappings of the concatenation of the output vectors and the currently read information from the shared memory module. Note that the matrix $M$ in Equation 3 represents the memory module shared by both the forward and backward actors, in contrast to Equation 2, where each actor has its own memory module, i.e., $M^F$ and $M^B$. The two actors write and read alternately from the memory, first the forward actor, then the backward actor. To ensure the correct allocation of free memory, the two actors also share the usage vector. The final output of the SHAMANN framework is given by $H = \sigma(\{g_0^F, g_1^F, \ldots, g_T^F\} \oplus \{g_T^B, g_{T-1}^B, \ldots, g_0^B\})$. Our network is fully differentiable and can be trained end-to-end via back-propagation through time (Werbos, 1990).

4 EXPERIMENTS

In this section, we present the results of the proposed methods on real-world and synthetic applications. We benchmarked our method on a large chest X-Ray dataset and compared it to state-of-the-art methods. Additionally, we conducted two synthetic experiments on MNIST (Lecun et al., 1998) with the goal of analyzing the memorization capacity of the different models and providing insights into the benefits of sharing an external memory module.

4.1 CHEST X-RAY LUNG SEGMENTATION

This is a fundamental preprocessing task towards automated diagnosis of lung diseases, e.g., nodules, tumors, etc. (Wang et al., 2017). To meet high clinical standards, an accurate and robust segmentation of the lungs is required. For this problem, important challenges are the variability in shape and intensities of the lungs, as well as reduced anatomy contrast due to pleural effusion. The chest X-Ray dataset consists of 7083 images of 7083 patients selected from the public database ChestX-Ray8 (Wang et al., 2017), each of size 1024×1024 pixels. Ground truth segmentation masks were provided by clinical experts. We performed a random patient-based split into 5000 training, 583 validation, and 1500 test images. The patch size was set to 160×160 pixels with a stride of 80×80, resulting in a sequence of 169 patches per image. Table 1 shows quantitative results. We compute the dice score from the counts of true positives (TP), false positives (FP), and false negatives (FN) as $\mathrm{Dice} = 2\,TP/(2\,TP + FP + FN)$. The experiment demonstrates that, even though we use sequences of local patches, our algorithm reaches state-of-the-art performance by effectively capturing the global context through the shared memory. In particular, our model requires significantly fewer parameters than the reference methods. In theory, this allows a more memory-efficient application to high-resolution (volumetric) data. Furthermore, in our formulation one can dynamically split the sequence length (both at training and testing time) and maintain global context in the shared memory, achieving an even higher degree of flexibility. We are currently investigating these benefits on large volumetric medical scans.
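For reference, a minimal sketch (our illustration) of the dice score defined above, computed from binary masks:

```python
import numpy as np

def dice_score(pred, target):
    # Dice = 2*TP / (2*TP + FP + FN) for binary masks of equal shape.
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return 2.0 * tp / (2.0 * tp + fp + fn)
```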
An additional important property of our method is its robustness in difficult cases caused by, e.g., large scale variations between children and adults, different image artifacts, and abnormalities such as pleural effusion or large lesions. We manage to reduce the number of cases with large error, i.e., a dice score below 0.9, by at least half. Figure 3 shows qualitative results.

4.2 MNIST IMAGE COMPLETION: MEMORY ANALYSIS

To investigate the benefits of extending neural networks with an external memory, we designed two synthetic tasks based on the MNIST dataset. First, we deleted the bottom half of the input images and trained our models to complete the missing information. The goal of this experiment was to observe the networks' capacity to extrapolate the missing data based only on the first half of the image. In a second experiment, we removed random patches from the input images. Since in this case the location of the missing data is not deterministic, the networks have to adaptively learn a more complex strategy for the memorization and lookup of information to better extrapolate the missing data. For both experiments, we used the original MNIST images as labels. The MNIST data consists of 70000 pictures of handwritten digits (55000 train, 5000 validation, and 10000 test) and their associated labels. We considered patches of size 8×8 with a stride of 4×4, resulting in a sequence of 49 patches per image.

For the quantitative evaluation, we measured dice scores, as well as classification accuracy on the reconstructed digits. To measure the classification accuracy, we trained a deep neural network classifier on the original MNIST dataset and used this network to evaluate the images reconstructed by our methods. The accuracy of this classifier on the original MNIST dataset was 99.23%. On the altered test sets, without applying any completion, the accuracy was 56.14% for the first and 67.8% for the second experiment. Figure 4 shows quantitative results. Using SHAMANN to perform image completion on the altered data, the classification accuracy increased to 95.2% for the first and 96.9% for the second experiment. In both experiments, the networks augmented with memory outperform the Bi-LSTM network and especially the model without memory (called NO MEM). This demonstrates that more effective image completion strategies can be learned with the use of an external memory module, reaching the best performance when the memory is shared. Note that as the capacity of the Bi-LSTM units increases, the difference in reconstruction performance to both Bi-MANN and SHAMANN shrinks. As expected, given a large enough cell size, LSTM units can emulate the high memorization capacity of an external memory. While in the first experiment the methods perform similarly at the largest cell size, in the second experiment the differences between the methods remain considerable, even at the largest cell size. This indicates that for more complex problems the performance of the Bi-LSTM is limited, even for larger cell sizes.

Figures 5a and 5b show qualitative results. While in the first rows, the first three methods fail to correctly extrapolate the missing parts of the digits, the networks using a shared memory module produce accurate shape reconstructions that lead to correct classifications. The last row shows a failure case, where all four methods fail to correctly recognize the digit.
However, considering the high difficulty of reconstructing these two digits, one can argue that the output of the SHAMANN method is reasonable. Figure 5c shows the benefits of sharing the memory module by comparing the predictions of individual actors with and without the information exchange via the shared memory. Table 2 shows the hyperparameters used for the experiments. For training, we used the RMSProp optimizer with a learning rate of $10^{-3}$ and minimized the mean squared error in all experiments.

5 CONCLUSION AND FUTURE WORK

In this paper, we presented a novel memory-efficient segmentation approach based on sequence learning and memory-augmented neural networks. Based on a decomposition of the original image into a sequence of image patches, we trained two SHAMANN actors that traverse the sequence bidirectionally and segment each image subregion. An external memory module enables each actor to capture relations between different subregions within its sequence and increase the robustness of its predictions. In particular, the shared nature of the external module serves as a means to implicitly exchange local image context information between actors to better capture shape priors. Despite the fact that we learn the segmentation module at patch level, our algorithm matches the state-of-the-art performance of image-to-image architectures on a challenging lung segmentation task based on an X-Ray dataset. In addition, we conducted a detailed analysis on two synthetic tasks based on the MNIST dataset, demonstrating the benefits of sharing the external memory among different actors. In our future work, we plan to extend our model to large 3D/4D medical scans and investigate the improved scalability and memory efficiency. We also plan to investigate the benefits of increasing the number of actors with different scanning strategies.
1. What is the focus of the paper regarding semantic segmentation? 2. What are the strengths and weaknesses of the proposed approach in comparison to prior works? 3. How does the reviewer assess the handling of long-range dependencies in the paper's approach? 4. What are the limitations of the experimental analysis, particularly regarding the dataset and sequence length? 5. What additional comparisons and analyses should be included in the paper to further support its contributions?
Review
Review The authors applied the external memory module proposed by Graves et al. (2016) to the image segmentation task. SHAMANN is an extension that allows memory sharing between directions. The authors claimed that one of the contributions is a reformulation of the semantic segmentation problem as a sequence learning task. There are many previous works in this direction:
- "Multi-Dimensional Recurrent Neural Networks", 2007
- "Scene Labeling with LSTM Recurrent Neural Networks", 2015
- "ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation", 2016
- "Robust, Simple Page Segmentation Using Hybrid Convolutional MDLSTM Networks", 2017
and many more. The authors should compare with those LSTM-based image segmentation approaches as well. Their second contribution is a network with a shared external memory module between directions. However, the experiments are not enough to show its benefits. See the details below.

Handling long-range dependencies:
- In Section 3.3.2, the authors mention that "One limitation of Bi-LSTM is that the number of network parameters grows proportionally to the memorization capacity, making it unsuitable for sequences with long-range dependencies.". However, the experiments do not involve long-range sequences: sequence length 169 for the X-ray dataset and 49 for MNIST. A classic LSTM (not bi-directional) is known to handle up to 200 timesteps. Some comparison/analysis of how Bi-LSTM, Bi-MANN, and SHAMANN handle long-range dependencies is needed (ideally on high-resolution real images).

Dataset:
- The authors compared the 3 models only on MNIST. The structure of MNIST is simple, and the resolution of the images is too small to show the benefit of using a (shared) external memory module instead of individual memory cells. It is not surprising that the reported performance difference is small. The authors could have reported such a comparison on the X-ray dataset too, but they did not. I would recommend the authors pick another high-resolution real-image dataset and compare the performance of these 3 models.

Additional comparisons:
- Various patch sizes
- Longer sequence lengths
- Especially the trade-off between patch size and sequence length on high-resolution images (larger patch size with a shorter sequence length, or smaller patch size with a longer sequence length)
- A comparison with a Bi-LSTM with shared weights would also be a good baseline.
ICLR
Title SHAMANN: Shared Memory Augmented Neural Networks Abstract Current state-of-the-art methods for semantic segmentation use deep neural networks to learn the segmentation mask from the input image signal as an imageto-image mapping. While these methods effectively exploit global image context, the learning and computational complexities are high. We propose shared memory augmented neural network actors as a dynamically scalable alternative. Based on a decomposition of the image into a sequence of local patches, we train such actors to sequentially segment each patch. To further increase the robustness and better capture shape priors, an external memory module is shared between different actors, providing an implicit mechanism for image information exchange. Finally, the patch-wise predictions are aggregated to a complete segmentation mask. We demonstrate the benefits of the new paradigm on a challenging lung segmentation problem based on chest X-Ray images, as well as on two synthetic tasks based on the MNIST dataset. On the X-Ray data, our method achieves state-of-the-art accuracy with a significantly reduced model size compared to reference methods. In addition, we reduce the number of failure cases by at least half. N/A Current state-of-the-art methods for semantic segmentation use deep neural networks to learn the segmentation mask from the input image signal as an imageto-image mapping. While these methods effectively exploit global image context, the learning and computational complexities are high. We propose shared memory augmented neural network actors as a dynamically scalable alternative. Based on a decomposition of the image into a sequence of local patches, we train such actors to sequentially segment each patch. To further increase the robustness and better capture shape priors, an external memory module is shared between different actors, providing an implicit mechanism for image information exchange. Finally, the patch-wise predictions are aggregated to a complete segmentation mask. We demonstrate the benefits of the new paradigm on a challenging lung segmentation problem based on chest X-Ray images, as well as on two synthetic tasks based on the MNIST dataset. On the X-Ray data, our method achieves state-of-the-art accuracy with a significantly reduced model size compared to reference methods. In addition, we reduce the number of failure cases by at least half. 1 INTRODUCTION In the medical image analysis domain, the automatic parsing of medical images represents a fundamental task that impacts the efficiency of the entire clinical workflow from diagnosis to therapy planning, intervention and follow-up investigations. An essential step in this sense is the semantic segmentation of anatomical structures which supports the radiologist to read and understand the image content. Recent approaches are inspired from the vision domain and rely on fully convolutional neural networks, e.g., (Ronneberger et al., 2015; Yang et al., 2017), to achieve state-of-the-art results on various segmentation problems (Menze et al., 2015). Usually, these methods use the entire image to directly predict the complete segmentation mask. While this facilitates the incorporation of valuable global image context, it also increases the complexity of the learning task, requiring the models to capture the complete variability in the shape and structure of different objects. In addition, this strategy does not scale well to (volumetric) high resolution data due to memory limitations. 
In this paper, we propose a new paradigm for semantic medical image segmentation based on a novel neural architecture called shared memory augmented neural network (SHAMANN). Based on a decomposition of the original image into a sequence of image subregions, e.g., local patches, we define different so called SHAMANN actors which traverse the sequence differently and segment each image subregion. An external memory module enables each actor to capture relations between different subregions within its sequence and increase the robustness of its predictions. In particular, this external module is shared among all actors and serves as a means to implicitly exchange local image context information in order to better capture global image properties, such as shape priors. Finally, the predictions of all actors are fused to obtain a semantic segmentation mask for the original image. An overview of the proposed framework with two SHAMANN actors is given in Figure 1. The contributions of our work are: (i) a reformulation of the semantic segmentation problem as a sequence learning task (ii) SHAMANN - a memory efficient and dynamically scalabale alternative to end-to-end fully convolutional segmentation networks, that can also implicitly capture global image properties through a shared external memory module; and (iii) a comprehensive analysis of the method and comparison against state-of-the-art methods on a large chest X-Ray dataset. 2 RELATED WORK Segmentation. In the fields of computer vision and medical imaging, segmentation is a fundamental task for understanding the semantic content of an image. State-of-the-art results on different segmentation benchmarks (Cordts et al., 2015; Everingham et al., 2015), have been achieved by using fully convolutional neural networks (He et al., 2017; Shelhamer et al., 2017). However, one limitation of such networks is the use of pooling layers. By down-sampling and increasing the field-of-view, precise localization information is lost. To tackle this issue, two different approaches have been proposed. First, encoder-decoder architectures, e.g., U-NET (Ronneberger et al., 2015), recover the details and spatial dimension using de-convolutions and shortcut connections (Badrinarayanan et al., 2017; Lin et al., 2017; Yang et al., 2017). The alternative is to use dilated convolutions to increase the field-of-view without decreasing the spatial dimension (Chen et al., 2018; Peng et al., 2017; Yu et al., 2017; Yu & Koltun, 2015; Zhao et al., 2017). In this context, graphical models such as conditional random fields (Lafferty et al., 2001) are used to further improve the results. In the medical context, a standard approach for medical segmentation is multi-atlas label propagation (MALP) (Wang et al., 2013; Zikic et al., 2013). In MALP, a collection of atlases, i.e., labeled images, is required. At runtime, one needs to perform expensive non-linear registration operations of each atlas to unseen data to achieve a segmentation. These solutions typically scale poorly and are inefficient. Alternatively, one can address the segmentation problem by using random forests (Glocker et al., 2012), providing stronger unary predictions through joint class and shape regression. Milletari et al. (2017) employed an additional patch-voting scheme to increase the robustness against outliers. Other approaches use linear shape models to incorporate prior information (SSM) (Heimann & Meinzer, 2009). 
In marginal space deep learning (Ghesu et al., 2016), SSMs have been coupled with deep learning to enable the segmentation of anatomical structures. While these methods provide good results and are relatively easy to train, they do not exploit global anatomical information. In addition, the inference is time-consuming, especially for 3D images. Memory networks. Recently, neural networks have been augmented with an external memory module to decouple the memorization capacity from the network parameters, making these methods better suitable for capturing long-range dependencies in sequential data. These networks have been used in the context of classification (Vinyals et al., 2016), meta-learning (Santoro et al., 2016; Sprechmann et al., 2018), reinforcement learning (Mnih et al., 2015; Pritzel et al., 2017), graph problems (Graves et al., 2016) or question answering (Graves et al., 2016; Sukhbaatar et al., 2015), to name a few. Closest to our work are generative methods (van den Oord et al., 2016), which model the conditional probability of a pixel based on previous pixels, using LSTMs. In contrast, we propose a sequence learning task for image segmentation and show that the memorization capacity can be improved using a shared external memory. Bahdanau et al. (2014) and Wang et al. (2016) proposed a memory-based strategy for the task of machine translation. They use a bidirectional RNN to encode the input and save the concatenation of the outputs of the two units in a memory. After the sequence is processed and saved in the memory, a decoder reads from the memory and outputs the final predictions. In contrast, our proposed method allows information exchange between SHAMANN actors while processing the input sequence thereby enabling each agent to access global image context. To the best of our knowledge, this is the first paper that proposes a method based on memory networks for the task of image segmentation. 3 PROPOSED METHODS In this section, we present our main contribution, the shared memory augmented neural networks (SHAMANN) architecture for semantic segmentation. Our observation is that in a bidirectional setup, information from different directions is not being explicitly exchanged. We hypothesize that by sharing an external memory, our networks can better capture global context, leading to a more accurate segmentation. 3.1 PROBLEM FORMULATION In the following x and xT will denote a row and column vector respectively, and A a matrix. Following formulations are focused on but not limited to 2D images. Formally, let us consider an input image I : Ω → RHI×WI×C , with Ω ⊂ R2 the image domain; HI ,WI and C denoting the height, width and number of channels of the image signal. The goal of the segmentation task is to assign a label to every pixel/voxel x in the input image, considering a predefined set of K object classes {y1, . . . , yK}. The segmentation result can be represented as a set of segmentation channels Y : Ω → RHI×WI×K , where the value of a pixel (x, y) of a given channel k encodes the probability of observing the class yk. A final segmentation mask can then be obtained by applying a softmax function along the different class-specific channels. In this work, we propose to reformulate the segmentation problem as a sequential learning task. Let us consider a sequence of T patches P = {P0, . . . ,PT } covering the image domain, with Pt : Ω→ RHp×Wp×C , where Hp,Wp are the height and width of the patch. For example, these patches may be extracted using uniform sampling. 
We propose to learn a function f that maps the sequence of input patches to a sequence of patch segmentation masks as f((P_t)_{t=0}^T) = (Φ_t)_{t=0}^T, with f : ℝ^{T×H_p×W_p×C} → ℝ^{T×H_p×W_p×K}.

3.2 ARCHITECTURE OVERVIEW

In this section, we introduce in more detail the components of our model (see Figure 1). The encoder extracts a rich visual representation from the raw patch intensities. We model it as a function (e.g., a convolutional network) mapping the input to a d-dimensional feature space: E((P_t)_{t=0}^T) = (ψ_t)_{t=0}^T, with E : ℝ^{H_p×W_p×C} → ℝ^d. The actor module, defined as component 2, learns from the sequence of input feature vectors Ψ = {ψ_0, ..., ψ_T} and captures distal spatial dependencies. Each actor scans the input sequence Ψ differently to produce an output sequence H^J = {h^J_0, ..., h^J_T}, with h^J_t ∈ ℝ^d. Here, we use two actors, one scanning the input in the forward direction (J := F) and the other in the backward direction (J := B). The patch-level fusion step combines the outputs of the actors as σ(H^F ⊕ H^B) = H, with σ : ℝ^{2×d} → ℝ^d and ⊕ the concatenation operator. The mapping σ could be a simple function, e.g., an average or a concatenation operation. In our work, we propose to explicitly learn how to combine the different outputs using a neural network with a single fully connected layer. The decoder maps the fused outputs of the actors to patch segmentation masks as D((h_t)_{t=0}^T) = (Φ_t)_{t=0}^T, with D : ℝ^d → ℝ^{H_p×W_p×K}. In the final image-level fusion step (see component 4), all patch segmentation masks Φ = {Φ_0, ..., Φ_T} are aggregated over the full image domain to generate the final segmentation mask Y. For fusion, we propose to use averaging (Iglesias & Sabuncu, 2015).

3.3 IMAGE SEGMENTATION AS A SEQUENTIAL LEARNING TASK

In the following sections, we present three different alternatives for the actor module: the bidirectional long short-term memory units (Bi-LSTM), described in Section 3.3.1; the bidirectional memory-augmented neural networks (Bi-MANN), described in Section 3.3.2; and our proposed SHAMANN framework (see Section 3.3.3).

3.3.1 BIDIRECTIONAL LONG SHORT-TERM MEMORY NETWORKS: BI-LSTM

One of the most common challenges in training a recurrent neural network is the vanishing gradient effect. To address this challenge, LSTM units were proposed by Hochreiter & Schmidhuber (1997). These units have achieved high performance on real-world problems such as image captioning (Vinyals et al., 2015). The core element of the LSTM unit is the memory cell c_t, which is an abstract representation of the previously observed input. The computation of the output h_t and cell c_t can be summarized as:

[h_t, c_t] = LSTM(ψ_t, h_{t−1}, c_{t−1}),   (1)

where LSTM stands for the gated processing structure. The output of an LSTM unit is the sequence of output vectors H = {h_0, ..., h_T}. The bidirectional LSTM processes the sequence data in both forward and backward directions with separate LSTM units. Thus, the forward LSTM unit processes the input sequence Ψ^F = {ψ_0, ..., ψ_T} and produces the output sequence H^F = {h^F_0, ..., h^F_T}, while the backward LSTM unit processes the reverse input sequence Ψ^B = {ψ_T, ..., ψ_0} and produces the output sequence H^B = {h^B_T, ..., h^B_0}. The final output of the Bi-LSTM is given by H = σ(H^F ⊕ H^B), where ⊕ denotes the concatenation operator.
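To make the actor and fusion steps concrete, the following is a minimal PyTorch sketch of the Bi-LSTM actor module with the learned patch-level fusion σ; the encoder E and decoder D are omitted, and all names are illustrative rather than taken from the authors' implementation.

import torch
import torch.nn as nn

class BiLSTMActors(nn.Module):
    # Two actors scan the feature sequence in opposite directions; a single
    # fully connected layer implements the learned fusion sigma.
    def __init__(self, d):
        super().__init__()
        self.forward_actor = nn.LSTM(input_size=d, hidden_size=d, batch_first=True)
        self.backward_actor = nn.LSTM(input_size=d, hidden_size=d, batch_first=True)
        self.fuse = nn.Linear(2 * d, d)            # sigma: R^{2 x d} -> R^d

    def forward(self, psi):                        # psi: (batch, T, d) patch features
        h_f, _ = self.forward_actor(psi)           # H^F
        h_b, _ = self.backward_actor(psi.flip(1))  # scan the reversed sequence
        h_b = h_b.flip(1)                          # realign H^B with forward time order
        return self.fuse(torch.cat([h_f, h_b], dim=-1))  # H = sigma(H^F ⊕ H^B)

actors = BiLSTMActors(d=128)
fused = actors(torch.randn(2, 49, 128))            # e.g., 49 patches per MNIST image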
3.3.2 BIDIRECTIONAL MEMORY-AUGMENTED NEURAL NETWORKS: BI-MANN

One limitation of Bi-LSTM is that the number of network parameters grows proportionally to the memorization capacity, making it unsuitable for sequences with long-range dependencies. These types of dependencies often occur in our formulation of the segmentation task, depending on the image content, the sequence length, and the patch size. One can alleviate this issue and increase the memorization capacity by making use of an external memory. These networks, called memory-augmented neural networks (MANN), use a controller network, i.e., an LSTM, to access an external, addressable memory M ∈ ℝ^{Q×N}, where N is the number of memory cells and Q is the dimension of each cell (Graves et al., 2016).

Following these principles, we propose to enhance each actor with an external memory capability. Figure 2 illustrates how a forward actor addresses such a memory module to perform a sequential segmentation task. At every time iteration t, the actor produces write and read heads to interact with a small portion of the memory, constrained by weights associated with previous observations. The write operation uses the write weights w^w_t ∈ ℝ^N to remove content from the memory with an erase vector e_t ∈ [0, 1]^Q, and then writes the add vector a_t ∈ ℝ^Q:

M_t[i] ← (1 − w^w_t[i] · e_t) ◦ M_{t−1}[i] + w^w_t[i] · a_t,

where ◦ and · denote element-wise and scalar multiplication, respectively, and 1 ∈ ℝ^Q is a vector of ones. Similarly, the output of a read operation using the read weights w^r_t ∈ ℝ^N is the weighted sum over the memory locations: r_t(M) = Σ_{i=1}^N w^r_t[i] · M_t[i]. We use content lookup to define the read weights, in which a key k^r_t ∈ ℝ^Q emitted by the actor is compared to the content of each memory location. The attention score for a read operation at row i is the i-th value in the column vector w^r_t, with w^r_t[i] = exp(F(k^r_t, M_t[i])) / Σ_{j=1}^N exp(F(k^r_t, M_t[j])), where F computes the similarity between two vectors, i.e., cosine similarity, and [·] is the row operator. The content lookup weights ŵ^w_t ∈ ℝ^N allow the write operation to update content in the memory. In order to also allocate new memory slots, we extend the addressing with a mechanism that returns the most unused location w̃^w_t ∈ {0, 1}^N (as a one-hot vector). At every iteration, the write operation uses an allocation gate α to either update the content of a location or write to a new, unused location: w^w_t = α ŵ^w_t + (1 − α) w̃^w_t. The read and write keys, the erase and add vectors, and the allocation gate are linear mappings of the memory cell of an actor.

We extend MANNs to a bidirectional formulation, where two actors, each with its own external memory module, scan the input sequence in a forward (J := F) and backward (J := B) manner and produce the output and memory cell at time t as:

[h^J_t, c^J_t] = LSTM(ψ_t ⊕ r_{t−1}(M^J), g^J_{t−1}, c^J_{t−1}),   (2)

where g^J_t = W^J_g (h^J_t ⊕ r_t(M^J)) + b^J_g is a linear mapping of the concatenation of the output vector and the currently read information from the memory module. The final output of Bi-MANN is given by H = σ({g^F_0, g^F_1, ..., g^F_T} ⊕ {g^B_T, g^B_{T−1}, ..., g^B_0}).
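The read and write operations above amount to a few lines of array arithmetic. A minimal NumPy sketch, storing one Q-dimensional cell per row of an N × Q array (transposed relative to the notation above) and omitting the allocation gate; names are illustrative:

import numpy as np

def read(M, k_r):
    # Content lookup: softmax over cosine similarities F(k, M[i]), then the
    # weighted sum r_t(M) = sum_i w^r_t[i] * M_t[i].
    sim = (M @ k_r) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k_r) + 1e-8)
    w_r = np.exp(sim) / np.exp(sim).sum()
    return w_r @ M

def write(M, w_w, e, a):
    # Erase then add: M_t[i] = (1 - w^w_t[i] * e) o M_{t-1}[i] + w^w_t[i] * a.
    return M * (1.0 - np.outer(w_w, e)) + np.outer(w_w, a)

M = np.zeros((64, 32))                               # N = 64 cells, Q = 32
M = write(M, np.eye(64)[0], np.ones(32), np.random.randn(32))
r = read(M, k_r=M[0])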
3.3.3 SHARED MEMORY-AUGMENTED NEURAL NETWORKS: SHAMANN

While the external memory module addresses the limited memorization capability of standard Bi-LSTM units, the sequence processing by the different actors remains suboptimal, in the sense that there is no active exchange of context information between them. The hypothesis is that through such an exchange, individual actors can observe more global context, leading to a more robust segmentation. With this in mind, we propose to share the external memory module between the actors. By reading and writing information to the same memory module, the actors can interact in an implicit way. The output and memory cell at time iteration t are defined as follows:

[h^J_t, c^J_t] = LSTM(ψ_t ⊕ r_{t−1}(M), g^J_{t−1}, c^J_{t−1}),   (3)

where g^J_t = W^J_g (h^J_t ⊕ r_t(M)) + b^J_g is a linear mapping of the concatenation of the output vector and the currently read information from the shared memory module. Note that the matrix M in Equation 3 represents the memory module shared by both the forward and backward actors, in contrast to Equation 2, where each actor has its own memory module, i.e., M^F and M^B. The two actors write and read alternately from the memory: first the forward actor, then the backward actor. To ensure the correct allocation of free memory, the two actors also share the usage vector. The final output of the SHAMANN framework is given by H = σ({g^F_0, g^F_1, ..., g^F_T} ⊕ {g^B_T, g^B_{T−1}, ..., g^B_0}). Our network is fully differentiable and can be trained end-to-end via back-propagation through time (Werbos, 1990).

4 EXPERIMENTS

In this section, we present the results of the proposed methods on real-world and synthetic applications. We benchmarked our method on a large chest X-Ray dataset and compared it to state-of-the-art methods. Additionally, we conducted two synthetic experiments on MNIST (Lecun et al., 1998) with the goal of analyzing the memorization capacity of the different models and providing insights into the benefits of sharing an external memory module.

4.1 CHEST X-RAY LUNG SEGMENTATION

This is a fundamental preprocessing task towards automated diagnosis of lung diseases, e.g., nodules, tumors, etc. (Wang et al., 2017). To meet high clinical standards, an accurate and robust segmentation of the lungs is required. For this problem, important challenges are the variability in shape and intensities of the lungs, as well as reduced anatomy contrast due to pleural effusion. The chest X-Ray dataset consists of 7083 images of 7083 patients selected from the public database ChestX-Ray8 (Wang et al., 2017), each of size 1024×1024 pixels. Ground truth segmentation masks were provided by clinical experts. We performed a random patient-based split into 5000 training, 583 validation and 1500 test images. The patch size was set to 160×160 pixels with a stride of 80×80, resulting in a sequence of 169 patches per image.

Table 1 shows quantitative results. We compute the dice score from the counts of true positives (TP), false positives (FP) and false negatives (FN) as 2·TP / (2·TP + FP + FN).
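For reference, the dice score defined above can be computed directly from binary masks; a minimal sketch:

import numpy as np

def dice_score(pred, target):
    # Dice = 2*TP / (2*TP + FP + FN) for binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return 2.0 * tp / (2.0 * tp + fp + fn)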
The experiment demonstrates that, even though we use sequences of local patches, our algorithm reaches state-of-the-art performance by effectively capturing the global context through the shared memory. In particular, our model requires significantly fewer parameters than the reference methods. In theory, this allows a more memory-efficient application to high-resolution (volumetric) data. Furthermore, in our formulation one can dynamically split the sequence length (both at training and testing time) and maintain the global context in the shared memory to achieve an even higher degree of flexibility. We are currently investigating these benefits on large volumetric medical scans.

An additional important property of our method is its robustness on difficult cases caused by, e.g., large scale variations between children and adults, different image artifacts, and abnormalities such as pleural effusion or large lesions. We manage to reduce the number of cases with large error, i.e., a dice score below 0.9, by at least half. Figure 3 shows qualitative results.

4.2 MNIST IMAGE COMPLETION: MEMORY ANALYSIS

To investigate the benefits of extending neural networks with an external memory, we designed two synthetic tasks based on the MNIST dataset. First, we deleted the bottom half of the input images and trained our models to complete the missing information. The goal of this experiment was to observe the networks' capacity to extrapolate the missing data based only on the first half of the image. In a second experiment, we removed random patches from the input images. Since in this case the location of the missing data is not deterministic, the networks have to adaptively learn a more complex strategy for the memorization and lookup of information to better extrapolate the missing data. For both experiments we used the original MNIST images as labels. The MNIST data consists of 70000 pictures of handwritten digits (55000 train, 5000 validation and 10000 test) and their associated labels. We considered patches of size 8×8 with a stride of 4×4, resulting in a sequence of 49 patches per image. For the quantitative evaluation we measured dice scores, as well as classification accuracy on the reconstructed digits. To measure the classification accuracy, we trained a deep neural network classifier on the original MNIST dataset and used this network to evaluate the images reconstructed by our methods. The accuracy of this classifier on the original MNIST dataset was 99.23%. On the altered test sets, without applying any completion, the accuracy was 56.14% for the first and 67.8% for the second experiment.

Figure 4 shows quantitative results. Using SHAMANN to perform image completion on the altered data, the classification accuracy increased to 95.2% for the first and 96.9% for the second experiment. In both experiments, the networks augmented with memory outperform the Bi-LSTM network and especially the model without memory (called NO MEM). This demonstrates that more effective image completion strategies can be learned with the use of an external memory module, reaching the best performance when the memory is shared. Note that as the capacity of the Bi-LSTM units increases, the difference in reconstruction performance to both Bi-MANN and SHAMANN shrinks. As expected, given a large enough cell size, LSTM units can emulate the high memorization capacity of an external memory. While in the first experiment the methods perform similarly at the largest cell size, in the second experiment the differences between the methods remain considerable, even at the largest cell size. This indicates that for more complex problems the performance of the Bi-LSTM is limited, even for larger cell sizes.

Figures 5a and 5b show qualitative results. While in the first rows the first three methods fail to correctly extrapolate the missing parts of the digits, the networks using a shared memory module produce an accurate shape reconstruction that leads to correct classifications. The last row shows a failure case, where all four methods fail to correctly recognize the digit.
However, considering the high difficulty of reconstructing these two digits, one can argue that the output of the SHAMANN method is reasonable. Figure 5c shows the benefits of sharing the memory module by comparing the predictions of individual actors with and without the information exchange via the shared memory. Table 2 shows the hyperparameters used for the experiments. For training we used the RMSProp optimizer with a learning rate of 10^{-3} and minimized the mean squared error in all experiments.

5 CONCLUSION AND FUTURE WORK

In this paper, we presented a novel, memory-efficient segmentation approach based on sequence learning and memory-augmented neural networks. Based on a decomposition of the original image into a sequence of image patches, we trained two SHAMANN actors that traverse the sequence bidirectionally and segment each image subregion. An external memory module enables each actor to capture relations between different subregions within its sequence and increases the robustness of its predictions. In particular, the shared nature of the external module serves as a means to implicitly exchange local image context information between actors to better capture shape priors. Despite the fact that we learn the segmentation module at patch level, our algorithm matches the state-of-the-art performance of image-to-image architectures on a challenging lung segmentation task based on an X-Ray dataset. In addition, we conducted a detailed analysis on two synthetic tasks based on the MNIST dataset, demonstrating the benefits of sharing the external memory among different actors. In our future work, we plan to extend our model to large 3D/4D medical scans and investigate the improved scalability and memory efficiency. We also plan to investigate the benefits of increasing the number of actors with different scanning strategies.
1. What is the main contribution of the paper in the field of semantic segmentation?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and potential impact?
3. Do you have any questions or concerns regarding the processing order of patches and its potential impact on the results?
4. Could you clarify the number and type of actors used in the study, and their roles in the segmentation process?
5. Can you provide additional information or explanations regarding the surprising results in section 4.2, especially the recovery of accuracy to 96% without seeing the trained classifier?
6. How does the reviewer assess the clarity and novelty of the paper's content, and what suggestions do they have for improving its impact?
Review
The authors present a model for semantic segmentation. The proposed method casts full-image segmentation as a sequence of local segmentation predictions. The image is split into multiple patches and processed sequentially in some order. A shared memory allows the local patch predictions to propagate information to improve other patch predictions, which is necessary for resolving ambiguities. They show a set of results on an X-Ray segmentation dataset with a reasonable ablation and baseline study, as well as a somewhat unclear result on image completion. The paper is well written, mostly clear, and novel to the best of my knowledge.

Pros:
- Semantic segmentation is clearly a very important problem with many applications.
- The method seems clean and promising.

Cons:
- The segmentation community is much more familiar with MS-COCO and VOC. I think results on those datasets would make the paper much more impactful and clear any doubts about the method.
- It is not clear in what order the patches are processed. Does that matter? This should be clearer in the paper.
- There is a brief mention of multiple actors, but it seems to me there is just one Bi-MANN actor; is that true?
- Sec. 4.2 is very surprising to me. From what is written, I understand that an MNIST classifier is trained on the original MNIST dataset and still works to 56% on the test set with the bottom blanked out. Is that correct? What architecture is this? I also find it very surprising that you can recover accuracy to 96% without seeing the trained classifier at all. Anything that can help me understand how that is possible would be appreciated. Are you aware of anyone else matching these results in the literature?
ICLR
Title SHAMANN: Shared Memory Augmented Neural Networks

Abstract Current state-of-the-art methods for semantic segmentation use deep neural networks to learn the segmentation mask from the input image signal as an image-to-image mapping. While these methods effectively exploit global image context, the learning and computational complexities are high. We propose shared memory augmented neural network actors as a dynamically scalable alternative. Based on a decomposition of the image into a sequence of local patches, we train such actors to sequentially segment each patch. To further increase the robustness and better capture shape priors, an external memory module is shared between different actors, providing an implicit mechanism for image information exchange. Finally, the patch-wise predictions are aggregated to a complete segmentation mask. We demonstrate the benefits of the new paradigm on a challenging lung segmentation problem based on chest X-Ray images, as well as on two synthetic tasks based on the MNIST dataset. On the X-Ray data, our method achieves state-of-the-art accuracy with a significantly reduced model size compared to reference methods. In addition, we reduce the number of failure cases by at least half.

1 INTRODUCTION

In the medical image analysis domain, the automatic parsing of medical images represents a fundamental task that impacts the efficiency of the entire clinical workflow from diagnosis to therapy planning, intervention and follow-up investigations. An essential step in this sense is the semantic segmentation of anatomical structures, which supports the radiologist in reading and understanding the image content. Recent approaches are inspired by the vision domain and rely on fully convolutional neural networks, e.g., (Ronneberger et al., 2015; Yang et al., 2017), to achieve state-of-the-art results on various segmentation problems (Menze et al., 2015). Usually, these methods use the entire image to directly predict the complete segmentation mask. While this facilitates the incorporation of valuable global image context, it also increases the complexity of the learning task, requiring the models to capture the complete variability in the shape and structure of different objects. In addition, this strategy does not scale well to (volumetric) high-resolution data due to memory limitations.
1. What is the focus of the paper regarding semantic segmentation?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or suggestions regarding the experimental results and comparisons?
4. What are some specific questions regarding the methodology and technical aspects of the paper?
Review
Summary: The paper proposes a system for semantic segmentation based on sequential processing of the image in a patch-wise manner with multiple "actors" sharing a common external memory. This approach stands in contrast to the more usual approach of single-shot prediction for the whole image, where encoder-decoder architectures or dilated convolutions are used to capture the global context. The authors then discuss three variants of this method, of which two use an external memory (Bi-MANN, SHAMANN), and one uses memory shared between actors (SHAMANN). Results are presented on segmentation of lung X-ray data and on MNIST digit completion.

Comments: The paper is easy to read. The authors cite the relevant literature on the baseline semantic segmentation methods, as well as on neural networks with external memories. However, similar patch-wise and sequential methods have been presented in the literature (e.g. https://arxiv.org/abs/1506.07452), including ones with external storage (e.g. https://www.nature.com/articles/s41592-018-0049-4), but these are not discussed as prior work. Overall, the proposed approach is interesting, but significantly more complex than both the baselines and prior work. As is, the experimental results are not compelling enough to justify this (there is no clear quantitative improvement over the state of the art). My recommendation would be to conduct additional experiments on semantic segmentation benchmark datasets. The proposed method seems promising for volumetric data, as the authors note, but this also needs to be demonstrated experimentally.

Some more specific and technical questions follow:
- In Table 1, how is the confidence interval for the Dice score computed?
- Have any experiments been done with more than 2 actors?
- How exactly is the patch sequence formed, i.e., what is the spatial order of the patches? How much do the results depend on this order, if at all?
- In the discussion on page 6, it seems to be implied that the reduced parameter count should allow more efficient application to volumetric data. This is a bit surprising, since with modern networks it is usually the input size that is limiting, not the number of network parameters.
- Have experiments with Bi-MANN and Bi-LSTM been done on the X-ray segmentation data? How do the results compare to SHAMANN?
- How does the inference and training time compare to the baseline methods?
ICLR
Title CNN Compression and Search Using Set Transformations with Width Modifiers on Network Architectures

Abstract We propose a new approach, based on discrete filter pruning, to adapt off-the-shelf models to an embedded environment. Importantly, we circumvent the usually prohibitive costs of model compression. Our method, Structured Coarse Block Pruning (SCBP), prunes whole CNN kernels using width modifiers applied to a novel transformation of convlayers into superblocks. SCBP uses set representations to construct a rudimentary search that provides candidate networks. To test our approach, the original ResNet architectures serve as the baseline and also provide the 'seeds' for our candidate search. The search produces a configurable number of compressed (derived) models. These derived models are often 20% faster and 50% smaller than their unmodified counterparts. At the expense of accuracy, the size can be made even smaller and the inference latency lowered even further. The unique SCBP transformations yield many new model variants, each with its own trade-offs, and do not require GPU clusters or expert humans for training or design.

1 INTRODUCTION

Modern Computer Vision (CV) is dominated by the convolution operation introduced by Fukushima & Miyake (1982) and later advanced into a Convolutional Neural Network (CNN or convnet) by LeCun et al. (1989). Until recently, these convnets were limited to rudimentary CV tasks such as classifying handwritten digits LeCun et al. (1998). Present-day convnets have far surpassed other CV approaches by improving their framework to include faster activations Nair & Hinton (2010), stacked convolutional layers (convlayers) Krizhevsky et al. (2012), and better optimizers Kingma & Ba (2014). These multi-layer deep convnets require big data in the form of datasets such as ImageNet Deng et al. (2009) to enable deep learning LeCun et al. (2015) of the feature space. However effective, convnets are held back by their high resource consumption. Utilizing an effective convnet on the edge presents new challenges in latency, energy, and memory costs Chen & Ran (2019). Additionally, many tasks, such as autonomous robotics, require real-time processing and cannot be offloaded to the cloud. As such, resource-constrained platforms, such as embedded systems, lack the compute and memory to use convnets in their default constructions. Analysis of convnets reveals that they are overparameterized Denil et al.
(2013) and that reducing this overparameterization can be a key mechanism in compressing convnets Hanson & Pratt (1988); LeCun et al. (1990); Han et al. (2015a). The many weights that form a network are not necessarily of the same entropy and can therefore be seen as scaffolding to be removed during a compression step Hassibi & Stork (1993); Han et al. (2015b); Tessier et al. (2021). In this work, our objective is to reduce the size of any given convnet using an automated approach that requires little human engineering and few compute resources. To that end, we design Structured Coarse Block Pruning (SCBP), a compression mechanism that requires no iterative retraining or fine-tuning. SCBP uses a low-cost search method, seeded with an off-the-shelf network, to generate compressed model derivatives with unique accuracy, size, and latency trade-offs.

The remainder of this paper is organized as follows. Section 2 focuses on closely related works. Section 3 details the methodology and implementation of SCBP. Section 4 discusses experimental findings, and finally we conclude with key takeaways and future directions in Section 5.

2 RELATED WORKS

Early work on removing parameters from Artificial Neural Networks (ANNs) focused on gaining insights into the purpose of those parameters Hanson & Pratt (1988). Prior to AlexNet Krizhevsky et al. (2012), exploiting ANN overparameterization was used as a regularization mechanism LeCun et al. (1990); Hassibi & Stork (1993). Recently, ANN overparameterization has been exploited to reduce the size of models Han et al. (2015b); Zhou et al. (2017); Tessier et al. (2021). Removing parameters compresses the memory footprint of CV models, which can then allow their deployment on embedded systems. Compression additionally facilitates reduced energy costs and reduces latency by greatly reducing memory traffic Han et al. (2015a); Zhou et al. (2017). Model accuracy is sustained or reduced, depending on the method of compression.

Preserving a compressed CV model's baseline accuracy is challenging and requires large compute Han et al. (2015b); Zhou et al. (2017). A common mechanism for maintaining a trained model's accuracy is to iteratively reduce its size in prune-retrain cycles. Another mechanism is leveraging a Network Automated Search (NAS), often using reinforcement learning, to build networks from scratch that are both small and accurate Zoph et al. (2018); Cai et al. (2020). However, both prune-retrain and NAS are exorbitant in compute usage, typically on the order of 10^3 and 10^5 GPU hours, respectively. When computing resources are limited, faster mechanisms for compression are needed. A range of techniques is available, such as tensor factorization Kim et al. (2015); Phan et al. (2020); Swaminathan et al. (2020) and Fast Fourier Transforms (FFT) on CV models' weight tensors. Hashing can also be used to group similar weights into buckets Chen et al. (2015); Hu et al. (2018). These techniques, while faster to train, do not maintain the original network's accuracy and often produce larger models relative to prune-retrain and NAS approaches. Quantization is also frequently used Gong et al. (2014); Wu et al. (2016) to reduce the bit-width of parameters from 64-bit floats down to 5-bit ints or less Wu et al. (2016); Zhou et al. (2017). In special cases, only 2-bit parameters are sufficient Rastegari et al. (2016); Courbariaux et al. (2016; 2015). Other techniques include those based on weight decay and entropy Luo & Wu (2017); Min et al. (2018); Tessier et al. (2021).
In our proposed mechanism, we bridge a gap between manual and NAS approaches by using a low-cost search to order the network width attributes of any given CV model, which is partitioned by a novel algorithm into multiple segments, each assigned its own width modifier. A close work is from Howard et al. (2017) in the form of MobileNets, which are a family of networks using different uniform width modifiers on a manually engineered baseline model. Similarly, EfficientNets Tan & Le (2019) expand the idea of modifiers to include depth and input-resolution modifiers. Our approach benefits from generalized compression that can be applied to any model, because we do not require a new baseline that needs to be engineered, and can thus keep cost within 10^1 GPU hours.

3 METHOD

The compression approach detailed below is realized by a novel combination of convlayer binning, width modification, and a unique search-train step based on set transforms of the aforementioned combination. Unlike most network architecture search methods, which impose prohibitively long search and train times, our work circumvents the cost problem by providing a halfway point between NAS and human-engineered architectures. In doing so, we present a rudimentary proof of concept which, in our evaluations, can produce an efficient search and thus generate derived models when configured by simple human-defined search-domain set constraints. The SCBP version we use stands on four foundations: (1) a seed architecture from which derivative architectures will be produced; (2) a network segmentation mechanism for the seed architecture, used for binning and to assist in derivation; (3) a set of compression ratios (c-ratios) for each segment of the seed network; and (4) a one-shot search for network instantiation based on (1)-(3).

3.1 SEED ARCHITECTURES

The instantiation of compressed, derivative architectures is sourced from a seed architecture. In this work, we use the ResNet family as seed architectures. Initializing SCBP with a seed network helps cut down on search times by leveraging already known and working architectures to generate new derived variants. Currently, these variants do require training to convergence to determine their accuracy, but their latency, memory footprint, and power statistics are immediately known. As an aside, the residual connection paradigm in ResNets is widely used today as the foundation of a variety of architectures; as such, using the ResNet testbed here allows for potential extension to later-developed networks.

To further accelerate the search, the seed architecture needs to be segmented into three portions, each of which undergoes its own unique compression. ResNet on CIFAR data consists of three superblocks, where we define a superblock to be all ResNet blocks of the same filter dimensions. Thus, we use three segments because our seed architecture accommodates it with little engineering. The input layer and its subsequent down-sampling layers, plus the output layers, are left untouched. Once the seed network is divided into well-defined segments, each segment is selected for width modification by applying c-ratios. Different segment and c-ratio pairings show changed weight distributions and residual functions and hence result in new derived networks. Interestingly, the features learned per superblock can, to a certain extent, change to maintain accuracy when adjacent superblocks are compressed; i.e., over-parameterized segments may absorb entropy that may be lost from adjacent segments' c-ratios.
In detail, each derived network requires an ordered tuple of c-ratios, where each element in the tuple corresponds to a segment and hence encodes the compression factor for that segment. The space of derived networks can thus be constructed using the Cartesian product of the c-ratio set R and the set S of partitioned segments. Both S and R must be small countable sets to prevent combinatorial explosion. In section 4, we find excellent results with |S| = 3 and |R| = 4. If a seed architecture can potentially benefit from different S and R, these hyperparameters can be easily changed. The emergent property of the architecture derivation process above is an effective and quick representation component for enumerating architectures that circumvents intractable search costs. The simplicity of the method allows its application to the growing library of modern convnets. It is therefore possible for many of these off-the-shelf convnets to be automatically modified via compression to meet embedded-system resource constraints. This provides a low-cost approach to leverage past and present architecture engineering effort in embedded use cases. 3.2 SELECTING AND APPLYING C-RATIOS TO NETWORK SEGMENTS A segment is a binning of sequentially stacked convolutional layers of the same dimensions. These segments are carved out of the seed network. The number of segments in a seed network depends on two factors: the seed's architecture and the segmenting procedure. For ResNet-like models, segmenting is straightforward: use each superblock as its own segment. In other architectures, such as VGG-19, segmenting can be done based on like-dimension convolutional layers. The procedure should be adapted based on the seed architecture, the cardinality of R, and available compute. In algorithm 1, we provide a generalized segmenting procedure for arbitrary convnets. Convlayers can be represented in the ordered set C, and it is from here that S is constructed. Both S and C are in the same set family. A one-to-one correspondence between S and C is to be avoided, since it would mean each convlayer is its own segment. A small segment count is crucial to an efficient search because it limits the number of generated derived networks, as described in equation 4. We can understand each segment s ∈ S as a coarse representation of multiple c ∈ C. In practice, algorithm 1 implements the following superblock segment definition:

  S = {s : (c_1, ..., c_k) ∈ C, |s| ∈ ℕ, 1 ≤ k, |s| ≤ |C|}    (1)

where each s is a tuple of multiple c. The cardinality of S is not required to be identical to that of C. After network segmentation into superblocks, we need to pair these segments with a c-ratio. The c-ratio, at its core, is a multiplicative factor applied to the filter dimension of convlayers. It is a number strictly ≤ 1.00. All baseline convolutional layers are at a default c-ratio of 1.00, and this is a valid configuration for SCBP. The c-ratio r is drawn from the set R, defined as follows:

  R = {r : r ∈ ℝ and 0.0 < r ≤ 1.00}    (2)

where r values < 1.00 compress convlayers by discrete filter pruning (truncation), r = 1.00 preserves the original convlayer width attributes, and c-ratios > 1.00 are undefined in terms of compression. The operation of applying a c-ratio to a segment is essentially a transform T such that:

  T(m × n × f × k) = m × n × [f × r] × k    (3)

Here m × n represents the filter's spatial dimensions, f is the filter count, and [f × r] is the filter count scaled by r and truncated to an integer; k denotes the segment depth (simply, the stacked convlayer count).
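As a concrete illustration of the transform in Eq. (3), the sketch below applies a c-ratio to a segment's shape tuple. The function name and tuple layout are our own assumptions; the scaled filter count is truncated to match the paper's description of discrete filter pruning as truncation.

  def apply_c_ratio(shape, r):
      """Sketch of Eq. (3): scale a segment's filter count f by c-ratio r,
      truncating to an integer; spatial dims (m, n) and stacked depth k
      are unchanged. Names here are illustrative assumptions."""
      assert 0.0 < r <= 1.00, "c-ratios outside (0, 1] are undefined"
      m, n, f, k = shape
      return (m, n, int(f * r), k)   # truncation = discrete filter pruning

  # e.g. a superblock of 3x3 convlayers with 64 filters, 7 layers deep:
  print(apply_c_ratio((3, 3, 64, 7), 0.75))   # -> (3, 3, 48, 7)
  print(apply_c_ratio((3, 3, 64, 7), 0.25))   # -> (3, 3, 16, 7)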
Selecting the correct c-ratios can be done either with a search, such as a grid search, or manually. Our experiments in section 4 use four manually selected c-ratios. In our experience, SCBP shows resiliency and works well without fine-tuned c-ratios. With that said, a low-cost fine-tuning step for c-ratios is a possible future direction. In this work's seed networks, the four c-ratios were hand-picked as equidistant points from one another to provide coarse, uniform coverage of the c-ratio search space. They are the set R = {0.25, 0.50, 0.75, 1.00}. The segment and c-ratio sets are used in conjunction to produce compressed derived networks. Derived networks can be thought of as subnets of their overparameterized seed networks. Each subnet behaves uniquely, due in part to its differing weight distribution. The construction of the collection of derived networks requires both sets R and S. Each c-ratio r ∈ R is crossed with a seed segment s ∈ S using the Cartesian product between the sets:

  ∏_{i=1}^{|S|} R_i × S = {(s_1 r_1, ..., s_n r_n) : ∀s ∃r, where r ∈ R, s ∈ S, and n = |S|}    (4)

Each segment has its own c-ratio, where the c-ratio set is transformed into a multiset with multiplicity equal to the segment-set cardinality. The cross product between the c-ratio multiset and the segment set yields the space of derived networks. As such, each derived network is configured with a unique memory footprint and weight tensors; these configurations can then allow similarly accurate networks to be profiled and culled for the best hardware fit. There are no prune-retrain cycles, and hyperparameters are simply adopted from the seed network. The derived network tensors are not sparse, and this non-sparsity is of great benefit; it allows hardware acceleration of multiply-add on virtually all platforms without the need for ASICs. 3.3 APPLYING SCBP TO RESNETS AND OTHER ARCHITECTURES CV models typically utilize stacked convlayers, where many successive layers are of identical dimensions. Residual connections are also common in these modern convnets. While innumerable improvements have been made in activations, bottleneck layers, etc., the underlying data structures of most contemporary models stand on the foundations of convlayers and residual connections. Algorithm 1 forms the basis of SCBP, as it segments the seed network into superblocks by binning like-dimension convlayers. The bins are sorted according to parameter count, and the largest k bins are selected as segments, where k is a configurable hyperparameter (the segment-set cardinality).

Algorithm 1: segmenting procedure
  seed ← select_convnet(model_constraints);
  layers ← extract_dims(seed);
  buckets ← bin_like_dims(layers);
  s_buckets ← sort_descending(buckets);
  segments ← s_buckets[0...k];
  return segments;

Algorithm 2: derivation procedure
  segments ← segmenting_procedure();
  cratios ← multiset_transform(cratio_set, multiplicity ← segments.length);
  derived_segments ← cartesian_product(segments, cratios);
  return derived_segments;

To create derived segments from a binned seed network, we must first identify a set of c-ratios. In this work, we use a default set, given in algorithm 2, for two reasons: first, no additional search is incurred, and second, the c-ratios evenly cover the subspace of widths no larger than the baseline width.
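The enumeration in Eq. (4) and Algorithm 2 can be sketched in a few lines of Python. The segment descriptors below are illustrative assumptions (in practice they come from Algorithm 1), while the c-ratio set is the paper's R.

  from itertools import product

  # Illustrative (filter width, depth) descriptors for the three
  # superblocks of a CIFAR ResNet seed, e.g. ResNet44.
  segments = [(16, 7), (32, 7), (64, 7)]
  c_ratios = [0.25, 0.50, 0.75, 1.00]

  # Eq. (4): give every segment its own c-ratio by crossing the c-ratio
  # multiset (multiplicity |S|) with the segment set. |R|^|S| = 4^3 = 64
  # derived configurations per seed, matching the 64 models in section 4.
  derived = list(product(c_ratios, repeat=len(segments)))
  assert len(derived) == len(c_ratios) ** len(segments)   # 64

  # One derived network's compressed widths, e.g. c-ratios (1.0, 0.5, 1.0):
  widths = [int(f * r) for (f, _), r in zip(segments, (1.0, 0.5, 1.0))]
  print(widths)   # -> [16, 16, 64]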
We exploit the block structure of our seed architecture by coalescing blocks into superblocks and then selectively compressing these superblocks, such that non-uniform compression can be performed to derive both networks and insights into the original seed. Additionally, limiting c-ratios to superblocks (instead of convlayers) significantly cuts down on the enumeration cost of the architecture search. The final derived segments are a product of both the binned convlayers and the c-ratios. This product is the building unit of each derived network. The generation of derived networks starts at the seed network and ends with a pool of derived networks, as given in Algorithm 3. The procedure acquires a seed, creates segments from the seed, and then pairs the segments with c-ratios to construct derived segments. The derived segments are then pieced back together to form the new derived networks.

Algorithm 3: network generation
  seed ← select_convnet(model_constraints);
  segments ← segmenting_procedure();
  derived_segments ← derivation_procedure(segments);
  derived_networks ← list();
  for segment_tuple in derived_segments do
    template ← seed.deepcopy();
    network ← replace_layers(template, segment_tuple);
    derived_networks.append(network);
  end
  return derived_networks;

Algorithm 4: train, test, rank
  derived_networks ← network_generation();
  models ← list();
  performance ← list();
  for convnet in derived_networks do
    model ← train(convnet);
    metrics ← test(model);
    models.append(model);
    performance.append(metrics);
  end
  candidate_models ← rank(models, performance);

(A short Python sketch of these two procedures is given below.) The train-test regimen remains the same as the seed's, but instead of one network, many more networks are trained in parallel. Each derived model has its metrics logged for later ranking so that the best one can be determined based on user-specific constraints. In this work, our c-ratio and segmenting procedures construct 64 unique models per seed. We randomly initialize each model, train-validate it, and finally collect metrics on the test set. SCBP generalizes to most deep convnet architectures that use sequentially stacked convlayers. However, when the segmenting procedure is unable to bin layers, as is the case when all convlayers are of different dimensions, SCBP can become limited, especially when segments cover a minor subset of all the parameters. In practice, modern convnets typically use many stacked convolutional layers and thus are receptive to SCBP. In sum, SCBP can generate compressed models without the need for modified training regimens, continuous fine-tuning, prune-retrain cycles, or even modified hyperparameters. The compression of superblocks to different c-ratios can provide valuable insights into their relative importance to the rest of the network. These insights can guide efforts to build better architectures and may also help find hardware quirks that adapt better to certain derived architectures. 4 EVALUATIONS AND DISCUSSIONS 4.1 SETUP, METRICS, AND EXPERIMENTS The CIFAR dataset is used to determine the effectiveness of SCBP on the compressed models. CIFAR provides a complex data distribution with coarse labels that are appropriate for approximating tasks on the edge. The dataset consists of 50K train images and 10K test images. In total, these 60K images, of dimensions 32x32x3, cover a variety of categories such as dogs and trucks. The dataset has two label sets over the same data, one with 10 coarse labels and another with 100 finer labels. We benchmark on both using the top-1 metric.
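Before turning to results, the two procedures from section 3.3 (Algorithms 3 and 4) can be rendered as a minimal, runnable Python sketch. Because the paper leaves their internals open, replace_layers, train, and test are passed in as callables, and all names are illustrative.

  import copy

  def network_generation(seed, derived_segments, replace_layers):
      """Algorithm 3 (sketch): stamp each derived segment tuple into a
      deep copy of the seed, yielding one derived network per config."""
      derived_networks = []
      for segment_tuple in derived_segments:
          template = copy.deepcopy(seed)
          derived_networks.append(replace_layers(template, segment_tuple))
      return derived_networks

  def train_test_rank(derived_networks, train, test):
      """Algorithm 4 (sketch): train every candidate with the seed's
      unmodified regimen, log metrics, and rank for hardware fit."""
      models, performance = [], []
      for convnet in derived_networks:
          model = train(convnet)          # seed's training loop, unchanged
          performance.append(test(model)) # e.g. {"top1": ...} on held-out set
          models.append(model)
      # Rank by top-1 accuracy; size/latency/power can reweight the key.
      return sorted(zip(models, performance),
                    key=lambda mp: mp[1]["top1"], reverse=True)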
Inference statistics are measured on the Jetson Xavier NX and a desktop WorkStation. The platforms serve to approximate real-world hardware constraints so that the effects of compression can be evaluated in terms of latency and power. In table 3, the WorkStation is the least constrained, with few memory bottlenecks, while the Xavier NX provides a typical embedded environment for benchmarking.

  sblock1   sblock2   sblock3
  5.196%    18.935%   75.455%

Table 1: Parameter distribution. The density of weights in convnets is usually spread non-uniformly. For example, our ResNet testbed has almost all parameters concentrated in the last two superblocks. Compression of high-density superblocks yields faster and smaller models, while identical c-ratios on lower-density superblocks are correlated with worse accuracy.

  decay   epoch   momentum   learning rate
  1e-4    1e2     9e-1       1e-1

Table 2: Hyperparameters. Every derived network repurposes the seed network's hyperparameters. Here, the training regimen is adapted from He et al. (2016) with no modifications to demonstrate the applicability of SCBP.

Training and testing are done using PyTorch v1.9 with two RTX 2080Ti GPUs. Performance is composed of several metrics. In our experiments, we settle on four measures of performance: accuracy, latency, size, and power. These metrics help determine a model's holistic fit in an embedded environment. Top-1 accuracy is measured using the whole held-out test set. The model size is tied to superblock c-ratios, and table 1 provides a reference. Latency and energy-interval are dynamic metrics that need averaging over many readings to reduce their margin of error. Both the model latency and power draw are processed as arithmetic means over 10k inference frames. The seed architectures are unadulterated models pulled from He et al. (2016). We use ResNet20, ResNet44, and ResNet110 to provide 'source' candidates for the SCBP-based search. Additionally, using ResNets helps to indirectly approximate their innumerable successor networks, many of which are minor architectural changes and most of which are prevalent in production today. For CIFAR100 labels (new models), the final dense layer of the ResNets is modified to simply increase the node count from 10 to 100. All models use zero-padded summation for their residual connections. 4.2 EFFECT OF SCBP ON ACCURACY AND MODEL SIZE This section provides an overview of the empirical effects of SCBP and further analyzes its effects on model architecture and performance. In particular, SCBP reveals interesting architecture patterns that we discuss and highlight. Parameter updates during training change based on the label granularity. That is, the superblock-to-accuracy relation begins to shift with just a change in the number of labels. When controlling for parameter count, coarse labels on shallow networks usually outperform deeper networks. Conversely, deeper networks outperform shallow networks on finer labels, even when parameter counts are similar. Label granularity additionally influences the distribution of redundant parameters, where coarse labels primarily increase redundancy in the final superblock and fine labels distribute low-entropy parameters across all superblocks; see table 5. Interestingly, the accuracy effect of a given c-ratio does not track superblock filter density. For example, a c-ratio of 0.25 on the first superblock usually harms accuracy much more than the same c-ratio on the significantly denser last superblock.
These patterns are evident from the experiments, especially with derived nets seeded from ResNet44; see table 4. The tables also indicate the learning ability of different portions of the seed networks by evaluating their response to SCBP. In terms of accuracy resilience to truncation, both the learning task and the cumulative layers up to a given superblock need to be considered. The compression impact on accuracy is better correlated with layer count and data granularity than with the actual parameter count. For example, the finer-grained labels in CIFAR100 make compression more challenging because each filter encodes more information and therefore incurs a larger cost when it is pruned. This can be inferred from table 4 and figure 1, where the simple CIFAR10 labels allow SCBP to produce 110% more derived networks than CIFAR100's finer labels. Within the derived networks, we note several learning patterns. One such pattern is that the final superblock, the largest in our experiments, counter-intuitively contributes the least to final model accuracy. In fact, even removing 50% to 75% of its filters incurs marginal accuracy effects. The large truncation allowance on the final superblock is beneficial because it means that many derived models can be compressed 40-50%. Another pattern SCBP exposes is a weak co-dependence between the first two superblocks, where many models lose >2% accuracy if 75% of their parameters are truncated in either superblock. Extensive data tables are provided in Appendix 5. The co-dependence may be due to the low parameter redundancy present in the first two superblocks. Indeed, these possibly higher-entropy shallow superblocks may indicate that overparameterization is mainly a problem of deeper layers. The relative compression-accuracy trade-off between superblocks means it is better to focus on compressing the latter superblocks. Additionally, it should be noted from table 6 and the accompanying figures that the middle superblock has a large facility to absorb critical weights from its neighbors when they are highly compressed. More generally, high compression tends to cause adjacent superblocks to compensate for missing filters. After a certain amount, typically 30% size reduction, lowering model size correlates positively with lowering accuracy, albeit with some derived-net exceptions that are more accurate after compression (see ResNet44-1.0-0.5-1.0, etc.). Our takeaway is that SCBP, as applied to convnets, can effectively compress models with very low search costs. The c-ratios on superblocks help illuminate the relative contribution of parameters to final performance, thereby providing insights into the seed, and can possibly assist in designing more compressible seed networks. And while SCBP size reductions are moderate when compared to unstructured pruning, SCBP does not result in weight-matrix sparsity and thus benefits from BLAS-based hardware acceleration. Lastly, SCBP saves time by not requiring hyperparameter tuning or modification of the seed's training regimen. These attributes, coupled with no prune-retrain cycles, elevate SCBP far above many comparable techniques in terms of GPU costs. Typically, these costs are in the range of 10^3 hours for pruning and up to 10^5 hours for NAS. In comparison, SCBP consumes less than 72 GPU hours from start to finish. 4.3 EFFECT OF SCBP ON LATENCY AND ENERGY Inference time is measured end-to-end, meaning it encompasses image preprocessing, data moves, model execution, and postprocessing.
It is this time that is averaged for the latency, and it is during this interval that power draw is also measured. For power draw, the Xavier NX polls its INA3221 power monitor, while the WorkStation samples from turbostat and nvidia-smi. The main determining factor for the latency and power metrics is model size. No matter the architecture or hardware, frequent cache misses and, worse, memory swaps severely degrade real-time performance and inflate latency. The speed and capacity of the memory hierarchy is the determining factor for latency and hence the energy-interval. Because the main contributor to latency and energy costs is memory access, designing smaller convnets is an effective approach for resource-constrained platforms, which benefit heavily from compression due to their limited memory hierarchies. In figure 1, we see that derived nets can maintain their accuracy while compressed. This means there are smaller models, with possibly fewer cache misses, which could be faster. We then see in figure 3, (a)-(d), that latency is indeed greatly reduced, by around 20%, and by 30% in some cases. Meanwhile, the actual power draw seen in figure 3, (e)-(h), does not show significant changes, meaning that the overall energy-interval is reduced for our derived networks. Lastly, we find that once a memory bottleneck is successfully eliminated, further compression comes with diminishing returns, as can be observed in the WorkStation experiments in the Appendix, figure 4. The data indicate that excessive parameter pruning past memory bottlenecks is wasted computation. Pruning as deployed in many unstructured compression techniques does not lead to faster, less energy-hungry models; often it only results in smaller models that come at the cost of very expensive and long training cycles. 5 CONCLUSION AND FUTURE WORK We demonstrate a novel mechanism that segments convlayers into superblocks and sets them with different compression ratios using a set-represented network search. The SCBP framework constructs a pool of models with different attributes, which can help with different hardware fits. These models are much smaller and faster than their unmodified counterparts. The training cost of these models is low and feasible on regular workstations. As such, embedded-native models can be designed without prohibitive costs, allowing rapid iteration to find the best model-hardware pairing. For future work, we are extending SCBP into an iterative mode that operates on pre-trained networks.
1. What is the focus and contribution of the paper on compressing convolutional networks? 2. What are the strengths and weaknesses of the proposed SCBP framework, particularly regarding its effectiveness and novelty? 3. Do you have any concerns or suggestions regarding the segmentation and c-ratio set in the method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially compared to prior works like AutoSlim? 5. Are there any questions or issues regarding the experimental evaluation, comparison to other methods, and reporting of results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper demonstrates SCBP, a framework to compress convolutional networks by first segmenting the layers into superblocks and then associating them with different compression ratios. The SCBP framework constructs a pool of models with different attributes that meet different needs. Strengths And Weaknesses The method proposed is effective and not hard to follow. However, it brings limited new insights to me. The authors claim the core of the method is the segmentation and the c-ratio set. However, is the segmentation really needed? The authors claim that the segmentation can reduce the search cost by binning similar layers together in a bucket/segment. It appears to me that this is very trivial. The c-ratio, in the paper, is also trivial to set. Therefore, the framework is a bit of engineering with very limited research insight. A previous work, AutoSlim [1], associates a ratio with each single layer and applies an algorithm to automatically learn the ratio, which should be discussed and compared. The work does not compare to other methods. The experimental evaluation is also limited to only CIFAR. The authors claim there is no retrain cycle. However, after the compression, the candidate architectures must be trained and evaluated. This step is effectively 'retraining' and would cost a lot of time. The resulting network sizes and FLOPs are not reported. The writing should be largely improved; there are many typos. Reference: [1] https://arxiv.org/abs/1903.11728 Clarity, Quality, Novelty And Reproducibility The work is not of good quality or novelty.
1. What is the focus of the paper regarding convolutional neural network architecture? 2. What are the strengths and weaknesses of the proposed approach in comparison to other methods? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a systematic method of creating variations of a convolutional architecture by reducing the number of filters at certain groups of layers. Subsequently training and evaluating these new architectures shows that some are significantly more efficient than the initial "seed" architecture, achieving similar accuracy scores with significantly fewer computational resources. Strengths And Weaknesses The method proposed in the paper requires retraining every variation from scratch, which is not network compression but simply architecture search. Since there is very little information proposed to guide this architecture search, the paper performs either brute-force architecture search (using a grid search) or manual architecture search, which is simply CNN training. To strengthen the paper, the authors should compare with network compression techniques (such as the ones listed in the related work section) and show whether the proposed method finds architectures with specific efficiency characteristics more efficiently than network compression. Clarity, Quality, Novelty And Reproducibility The paper is clearly written and the method is reproducible. There is very little novelty in the proposed method as it amounts to manual CNN design based on a seed architecture.
ICLR
Title CNN Compression and Search Using Set Transformations with Width Modifiers on Network Architectures Abstract We propose a new approach, based on discrete filter pruning, to adapt off-the-shelf models into an embedded environment. Importantly, we circumvent the usually prohibitive costs of model compression. Our method, Structured Coarse Block Pruning (SCBP), prunes whole CNN kernels using width modifiers applied to a novel transformation of convlayers into superblocks. SCBP uses set representations to construct a rudimentary search to provide candidate networks. To test our approach, the original ResNet architectures serve as the baseline and also provide the ’seeds’ for our candidate search. The search produces a configurable number of compressed (derived) models. These derived models are often 20% faster and 50% smaller than their unmodified counterparts. At the expense of accuracy, the size can become even smaller and the inference latency lowered even further. The unique SCBP transformations yield many new model variants, each with their own trade-offs, and does not require GPU clusters or expert humans for training or design. N/A We propose a new approach, based on discrete filter pruning, to adapt off-the-shelf models into an embedded environment. Importantly, we circumvent the usually prohibitive costs of model compression. Our method, Structured Coarse Block Pruning (SCBP), prunes whole CNN kernels using width modifiers applied to a novel transformation of convlayers into superblocks. SCBP uses set representations to construct a rudimentary search to provide candidate networks. To test our approach, the original ResNet architectures serve as the baseline and also provide the ’seeds’ for our candidate search. The search produces a configurable number of compressed (derived) models. These derived models are often 20% faster and 50% smaller than their unmodified counterparts. At the expense of accuracy, the size can become even smaller and the inference latency lowered even further. The unique SCBP transformations yield many new model variants, each with their own trade-offs, and does not require GPU clusters or expert humans for training or design. 1 INTRODUCTION Modern Computer Vision (CV) is dominated by the convolution operation introduced by Fukushima & Miyake (1982) and later advanced into a Convolutional Neural Network (CNN or convnet) by LeCun et al. (1989). Until recently, these convnets were limited to rudimentary CV tasks such as classifying handwritten digits LeCun et al. (1998). Present-day convnets have far surpassed other CV approaches by improving their framework to include faster activations Nair & Hinton (2010), stacked convolutional layers (convlayers) Krizhevsky et al. (2012), and better optimizers Kingma & Ba (2014). These multi-layer deep convnets require big data in the form of datasets such as ImageNet Deng et al. (2009) to enable deep learning LeCun et al. (2015) of the feature space. However effective, convnets are held back by their high resource consumption. Utilizing an effective convnet on the edge presents new challenges in latency, energy, and memory costs Chen & Ran (2019). Additionally, many tasks, such as autonomous robotics, require realtime processing and cannot be offloaded to the cloud. As such. resource constrained platforms, such as embedded systems, lack the compute and memory to use convnets in their default constructions. Analysis into convnets reveals that they are overparameterized Denil et al. 
(2013) and that reducing this overparameterization can be a key mechanism in compressing convnets Hanson & Pratt (1988); LeCun et al. (1990); Han et al. (2015a). The many weights that form a network are not necessarily of the same entropy and can therefore be seen as scaffolding to be removed during a compression step Hassibi & Stork (1993); Han et al. (2015b); Tessier et al. (2021). In this work, our objective is to reduce the size of any given convnet using an automated approach that requires little human engineering and few compute resources. To that end, we design Structured Coarse Block Pruning (SCBP), a compression mechanism that requires no iterative retraining or fine-tuning. SCBP uses a low-cost search method, seeded with an off-the-shelf network, to generate compressed model derivatives with unique accuracy, size, and latency trade-offs. The remainder of this paper is organized as follows. Section 2 focuses on closely related works. Section 3 details the methodology and implementation of SCBP. Section 4 discusses experimental findings, and finally we conclude with key takeaways and future directions in Section 5. 2 RELATED WORKS Early work on removing parameters from Artificial Neural Networks (ANNs) focused on gaining insights into the purpose of those parameters Hanson & Pratt (1988). Prior to AlexNet Krizhevsky et al. (2012), exploiting ANN overparameterization was used as a regularization mechanism LeCun et al. (1990); Hassibi & Stork (1993). Recently, ANN overparameterization has been exploited to reduce the size of models Han et al. (2015b); Zhou et al. (2017); Tessier et al. (2021). Removing parameters compresses the memory footprint of CV models, which can then allow their deployment on embedded systems. Compression additionally facilitates reduced energy costs while also reducing latency by greatly reducing memory traffic Han et al. (2015a); Zhou et al. (2017). Model accuracy is sustained or reduced, depending on the method of compression. Preserving a compressed CV model's baseline accuracy is challenging and requires substantial compute Han et al. (2015b); Zhou et al. (2017). A common mechanism for maintaining a trained model's accuracy is to iteratively reduce its size in prune-retrain cycles. Another mechanism is leveraging a Network Automated Search (NAS), often using reinforcement learning, to build networks from scratch that are both small and accurate Zoph et al. (2018); Cai et al. (2020). However, both prune-retrain and NAS are exorbitant in compute usage, typically on the order of 10^3 and 10^5 GPU hours, respectively. When computing resources are limited, faster mechanisms for compression are needed. A range of techniques is available, such as tensor factorization Kim et al. (2015); Phan et al. (2020); Swaminathan et al. (2020) and Fast Fourier Transforms (FFTs) applied to CV models' weight tensors. Hashing can also be used to group similar weights into buckets Chen et al. (2015); Hu et al. (2018). These techniques, while faster to train, do not maintain the original network's accuracy and often produce larger models relative to prune-retrain and NAS approaches. Quantization is also frequently used Gong et al. (2014); Wu et al. (2016) to reduce the bit-width of parameters from 64-bit floats down to 5-bit ints or less Wu et al. (2016); Zhou et al. (2017). In special cases, only 2-bit parameters are sufficient Rastegari et al. (2016); Courbariaux et al. (2016; 2015). Other techniques include those based on weight decay and entropy Luo & Wu (2017); Min et al. (2018); Tessier et al. (2021).
In our proposed mechanism, we bridge a gap between manual and NAS approaches by using a low-cost search over the network width attributes of any given CV model, which is partitioned by a novel algorithm into multiple segments, each assigned its own width modifier. Closely related work includes MobileNets Howard et al. (2017), a family of networks using different uniform width modifiers on a manually engineered baseline model. Similarly, EfficientNets Tan & Le (2019) extend the idea of modifiers to include depth and input resolution modifiers. Our approach benefits from generalized compression that can be applied to any model: because we do not require a newly engineered baseline, we can keep costs within 10^1 GPU hours. 3 METHOD The compression approach detailed below is realized by a novel combination of convlayer binning, width modification, and a unique search-train step based on set transforms of the aforementioned combination. Unlike most network architecture search methods, which impose prohibitively long search and train times, our work circumvents the cost problem by providing a halfway point between NAS and human-engineered architectures. In doing so, we present a rudimentary proof of concept which, in our evaluations, can produce an efficient search and thus generate derived models when configured by simple human-defined search domain set constraints. The SCBP version we use stands on four foundations: (1) a seed architecture from which derivative architectures will be produced; (2) a network segmentation mechanism for the seed architecture to bin layers and assist in derivation; (3) a set of compression ratios (c-ratios) for each segment of the seed network; and (4) a one-shot search for network instantiation based on (1)-(3). 3.1 SEED ARCHITECTURES The instantiation of compressed, derivative architectures is sourced from a seed architecture. In this work, we use the ResNet family as seed architectures. Initializing SCBP with a seed network helps cut down on search times by leveraging already known and working architectures to generate new derived variants. Currently, these variants do require training to convergence to determine their accuracy, but their latency, memory footprint, and power statistics are immediately known. As an aside, the residual connection paradigm in ResNets is widely used today as the foundation of a variety of architectures. As such, using the ResNet testbed here allows for potential extension to later-developed networks. To further accelerate search, the seed architecture needs to be segmented into three portions, each of which undergoes its own unique compression. ResNet on CIFAR data consists of three superblocks, where we define a superblock to be all ResNet blocks of the same filter dimensions. Thus, we use three segments because our seed architecture accommodates it with little engineering. The input layer and its subsequent down-sampling layers, plus the output layers, are left untouched. Once the seed network is divided into well-defined segments, each segment is selected for width modification by applying c-ratios. Different segment and c-ratio pairings produce changed weight distributions and residual functions and hence result in new derived networks. Interestingly, the features learned per superblock can, to a certain extent, change to maintain accuracy when adjacent superblocks are compressed; i.e., over-parameterized segments may absorb entropy that would otherwise be lost to adjacent c-ratios.
To detail, each derived network requires an ordered tuple of c-ratios, where each element in the tuple corresponds to a segment and hence encodes the compression factor for that segment. Thus, the space of derived networks can be constructed using the Cartesian product of the c-ratio set R and the set S of partitioned segments. Both S and R must be small countable sets to prevent combinatorial explosion. In Section 4, we find excellent results with |S| = 3 and |R| = 4. If a seed architecture can potentially benefit from different S and R, these hyperparameters can be easily changed. The emergent property of the architecture derivation process above is an effective and quick representation for enumerating architectures that circumvents intractable search costs. The simplicity of the method allows its application to the growing library of modern convnets. It is therefore possible for many of these off-the-shelf convnets to be automatically modified via compression to meet embedded system resource constraints. This provides a low-cost approach to leverage past and present architecture engineering effort in embedded use cases. 3.2 SELECTING AND APPLYING C-RATIOS TO NETWORK SEGMENTS A segment is a binning of sequentially stacked convolutional layers of the same dimensions. These segments are carved out of the seed network. The number of segments in a seed network depends on two factors: the seed's architecture and the segmenting procedure. For ResNet-like models, segmenting is straightforward: use each superblock as its own segment. In different architectures, such as VGG-19, segmenting can be done based on like-dimension convolutional layers. The procedure should be adapted based on the seed architecture, the cardinality of R, and available compute. In Algorithm 1, we provide a generalized segmenting procedure for arbitrary convnets. Convlayers can be represented in the ordered set C, and it is from here that S is constructed. Both S and C are in the same set family. A one-to-one correspondence between S and C is to be avoided, as it would mean each convlayer is its own segment. A small segment count is crucial to an efficient search because it limits the number of generated derived networks, as described in equation 4. We can understand each segment s ∈ S as a coarse representation of multiple c ∈ C. In practice, Algorithm 1 implements the following superblock segment definition:
S = {s : (c_1, ..., c_k) ∈ C, |s| ∈ ℕ, 1 ≤ k, |s| ≤ |C|}  (1)
where each s is a tuple of multiple c. The cardinality of S is not required to be identical to that of C. After network segmentation into superblocks, we need to pair these segments with a c-ratio. The c-ratio, at its core, is a multiplicative factor applied to the filter dimension of convlayers. It is a number strictly ≤ 1.00. All baseline convolutional layers are at a default c-ratio of 1.00, and this is a valid configuration for SCBP. The c-ratio r is drawn from the set R, defined as follows:
R = {r : r ∈ ℝ and 0.0 < r ≤ 1.00}  (2)
where r values < 1.00 compress convlayers by discrete filter pruning (truncation), r = 1.00 preserves the original convlayer width attributes, and c-ratios > 1.00 are undefined in terms of compression. The operation of applying a c-ratio to a segment is essentially a transform T such that:
T(m × n × f × k) = m × n × ⌈f × r⌉ × k  (3)
Here m × n represents the filter's spatial dimensions and f is the filter count, which is modified by r; k denotes the segment depth (simply, the stacked convlayer count).
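To make the transform concrete, here is a minimal PyTorch sketch of equation (3) applied to a single convolutional layer; the function name is ours, and the sketch assumes the derived layer is trained from random initialization, as in the paper. Note that in a full derived network the in_channels of the layer that follows must be shrunk to match.

import math
import torch.nn as nn

def apply_c_ratio(conv: nn.Conv2d, r: float) -> nn.Conv2d:
    # Width transform T from equation (3): the filter count f becomes
    # ceil(f * r), while the spatial kernel (m x n) is left untouched.
    assert 0.0 < r <= 1.0, "c-ratios above 1.00 are undefined for compression"
    new_f = math.ceil(conv.out_channels * r)
    return nn.Conv2d(conv.in_channels, new_f,
                     kernel_size=conv.kernel_size,
                     stride=conv.stride,
                     padding=conv.padding,
                     bias=conv.bias is not None)

# Example: r = 0.50 halves a 64-filter layer to 32 filters.
# layer = apply_c_ratio(nn.Conv2d(16, 64, 3, padding=1), 0.50)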
Selecting the correct c-ratios can be done either with a search, such as a grid search, or manually. Our experiments in Section 4 use four manually selected c-ratios. In our experience, SCBP shows resiliency and works well without fine-tuned c-ratios. With that said, a low-cost fine-tuning step for c-ratios is a possible future direction. In this work's seed networks, the four c-ratios were hand-picked as equidistant points from one another to provide coarse, uniform coverage of the c-ratio search space. They form the set R = {0.25, 0.50, 0.75, 1.00}. The segment and c-ratio sets are used in conjunction to produce compressed derived networks. Derived networks can be thought of as subnets of their overparameterized seed networks. Each subnet behaves uniquely, due in part to its differing weight distribution. The construction of the collection of derived networks requires both sets R and S. Each c-ratio r ∈ R is crossed with a seed segment s ∈ S using the Cartesian product between the sets:
∏_{i=1}^{|S|} R_i × S = {(s_1 r_1, ..., s_n r_n) : ∀s ∃r, where r ∈ R, s ∈ S, and n = |S|}  (4)
Each segment has its own c-ratio, where the c-ratio set is transformed into a multiset with multiplicity equal to the segment set cardinality. The cross product between the c-ratio multiset and the segment set yields the space of derived networks. As such, each derived network is configured with a unique memory footprint and weight tensors; these configurations then allow similarly accurate networks to be profiled and culled for the best hardware fit. There are no prune-retrain cycles, and hyperparameters are simply adopted from the seed network. The derived network tensors are not sparse, and this non-sparsity is of great benefit: it allows hardware acceleration of multiply-add on virtually all platforms without the need for ASICs. 3.3 APPLYING SCBP TO RESNETS AND OTHER ARCHITECTURES CV models typically utilize stacked convlayers, where many successive layers are of identical dimensions. Residual connections are also common in these modern convnets. While innumerable improvements have been made in activations, bottleneck layers, etc., the underlying data structures of most contemporary models stand on the foundations of convlayers and residual connections. Algorithm 1 forms the basis of SCBP as it segments the seed network into superblocks by binning like-dimension convlayers. The bins are sorted according to parameter count, and the largest k bins are selected as segments, where k is a configurable hyperparameter (the segment set cardinality).
Algorithm 1: segmenting procedure
  seed ← select_convnet(model_constraints);
  layers ← extract_dims(seed);
  buckets ← bin_like_dims(layers);
  s_buckets ← sort_descending(buckets);
  segments ← s_buckets[0...k];
  return segments;
Algorithm 2: derivation procedure
  segments ← segmenting_procedure();
  cratios ← multiset_transform(cratio_set, multiplicity ← segments.length);
  derived_segments ← cartesian_product(segments, cratios);
  return derived_segments;
To create derived segments from a binned seed network, we must first identify a set of c-ratios. In this work, we use a default set, given in Algorithm 2, for two reasons: first, no additional search is incurred, and second, the c-ratios evenly cover the subspace of widths not larger than the baseline width.
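As a runnable illustration of the set construction in equation (4) and Algorithm 2, the sketch below enumerates derived configurations with itertools.product; the segment names are ours, and the c-ratio set is the default one given above.

from itertools import product

# Default c-ratio set R and the three ResNet superblock segments (|S| = 3):
# the Cartesian product assigns one c-ratio per segment, giving
# |R|^|S| = 4^3 = 64 derived configurations per seed.
C_RATIOS = (0.25, 0.50, 0.75, 1.00)
SEGMENTS = ("sblock1", "sblock2", "sblock3")

configs = list(product(C_RATIOS, repeat=len(SEGMENTS)))
assert len(configs) == len(C_RATIOS) ** len(SEGMENTS)  # 64

for cfg in configs[:2]:
    print(dict(zip(SEGMENTS, cfg)))  # e.g. {'sblock1': 0.25, 'sblock2': 0.25, ...}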
We exploit the block structure of our seed architecture by coalescing blocks into superblocks and then selectively compressing these superblocks, such that non-uniform compression can be performed to derive both networks and insights into the original seed. Additionally, limiting c-ratios to superblocks (instead of convlayers) significantly cuts down the enumeration cost of the architecture search. The final derived segments are a product of both the binned convlayers and the c-ratios. This product is the building unit of each derived network. The network generation for derived networks starts at the seed network and ends with a pool of derived networks, as given in Algorithm 3. The procedure acquires a seed, creates segments from the seed, and then pairs the segments with c-ratios to construct derived segments. The derived segments are then pieced back together to form the new derived networks.
Algorithm 3: network generation
  seed ← select_convnet(model_constraints);
  segments ← segmenting_procedure();
  derived_segments ← derivation_procedure(segments);
  derived_networks ← list();
  for segment_tuple in derived_segments do
    template ← seed.deepcopy();
    network ← replace_layers(template, segment_tuple);
    derived_networks.append(network);
  end
  return derived_networks;
Algorithm 4: train, test, rank
  derived_networks ← network_generation();
  models ← list();
  performance ← list();
  for convnet in derived_networks do
    model ← train(convnet);
    metrics ← test(model);
    models.append(model);
    performance.append(metrics);
  end
  candidate_models ← rank(models, performance);
The train-test regimen remains the same as the seed's, but instead of one network, many more networks are trained in parallel. Each derived model has its metrics logged for later ranking, so that the best one can be determined based on user-specific constraints. In this work, our c-ratio and segmenting procedures construct 64 unique models per seed. We randomly initialize each model, train and validate it, and finally collect metrics on the test set. SCBP is generalizable to most deep convnet architectures that use sequentially stacked convlayers. However, when the segmenting procedure is unable to bin layers, as is the case when all convlayers are of different dimensions, SCBP can become limited, especially when segments cover a minor subset of all the parameters. In practice, modern convnets typically use many stacked convolutional layers and thus are receptive to SCBP. In sum, SCBP can generate compressed models without the need to modify training regimens, continuous fine-tuning, prune-retrain cycles, or even modified hyperparameters. The compression of superblocks to different c-ratios can provide valuable insights into their relative importance to the rest of the network. These insights can guide efforts to build better architectures and may also help find hardware quirks that adapt better to certain derived architectures.
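A hedged sketch of the one-shot search in Algorithms 3 and 4: the paper's training, testing, and model-building procedures are not published, so they are injected here as callables; only the ranking logic is shown.

from typing import Any, Callable, Iterable, List, Tuple

def train_and_rank(derived: Iterable[Any],
                   train: Callable[[Any], None],
                   test: Callable[[Any], float],
                   size_mb: Callable[[Any], float],
                   budget_mb: float = float("inf")) -> List[Tuple[float, float, Any]]:
    # Every derived network is trained once with the seed's hyperparameters
    # (no prune-retrain cycles), then ranked by accuracy among the models
    # that fit an optional memory budget.
    ranked = []
    for net in derived:
        train(net)
        acc = test(net)
        mb = size_mb(net)
        if mb <= budget_mb:
            ranked.append((acc, mb, net))
    ranked.sort(key=lambda t: t[0], reverse=True)  # best accuracy first
    return ranked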
4 EVALUATIONS AND DISCUSSIONS 4.1 SETUP, METRICS, AND EXPERIMENTS The CIFAR dataset is used to determine the effectiveness of SCBP on the compressed models. CIFAR provides a complex data distribution with coarse labels that are appropriate for approximating tasks on the edge. The dataset consists of 50K train images and 10K test images. In total, these 60K images, of dimensions 32x32x3, cover a variety of categories such as dogs and trucks. The dataset has two label sets over the same data, one with 10 coarse labels and another with 100 finer labels. We benchmark on both using the top-1 metric. Inference statistics are measured on the Jetson Xavier NX and a desktop WorkStation. The platforms serve to approximate real-world hardware constraints so that the effects of compression can be evaluated in terms of latency and power. In table 3, the WorkStation is the least constrained platform, with few memory bottlenecks, while the Xavier NX provides a typical embedded environment for benchmarking.
Table 1: Parameter distribution.
  sblock1    sblock2    sblock3
  5.196%     18.935%    75.455%
The density of weights in convnets is usually spread non-uniformly. For example, our ResNet testbed has almost all parameters concentrated in the last two superblocks. Compression on high-density superblocks yields faster and smaller models, while identical c-ratios on lower-density superblocks are correlated with worse accuracy.
Table 2: Hyperparameters.
  decay    epochs    momentum    learning rate
  1e-4     1e2       9e-1        1e-1
Every derived network repurposes the seed network's hyperparameters. Here, the training regimen is adapted from He et al. (2016) with no modifications, to demonstrate the applicability of SCBP.
Training and testing are done using PyTorch 1.9 with two RTX 2080Ti GPUs. Performance is composed of several metrics. In our experiments, we settle on four measures of performance: accuracy, latency, size, and power. These metrics help determine a model's holistic fit in an embedded environment. Top-1 accuracy is measured using the whole held-out test set. The model size is tied to superblock c-ratios, and table 1 provides a reference. Latency and energy interval are dynamic metrics that need averaging over many readings to reduce their margin of error. Both the model latency and power draw are computed as arithmetic means over 10k inference frames. The seed architectures are unadulterated models pulled from He et al. (2016). We use ResNet20, ResNet44, and ResNet110 to provide 'source' candidates for the SCBP-based search. Additionally, using ResNets helps to indirectly approximate their innumerable successor networks, many of which are minor architectural changes and most of which are prevalent in production today. For CIFAR100 labels (new models), the final dense layer of the ResNets is modified to simply increase the node count from 10 to 100. All models use zero-padded summation for their residual connections. 4.2 EFFECT OF SCBP ON ACCURACY AND MODEL SIZE This section provides an overview of the empirical effects of SCBP and further analyzes its effects on model architecture and performance. In particular, SCBP reveals interesting architecture patterns that we discuss and highlight. Parameter updates during training change based on the label granularity. That is, the superblock-to-accuracy relation begins to shift with just a change in the number of labels. When controlling for parameter count, coarse labels on shallow networks usually outperform deeper networks. Conversely, deeper networks outperform shallow networks on finer labels, even when parameter counts are similar. Label granularity additionally influences the distribution of redundant parameters: coarse labels primarily increase redundancy in the final superblock, while fine labels distribute low-entropy parameters across all superblocks; see table 5. Interestingly, the accuracy effect of a c-ratio does not track superblock filter density. For example, a c-ratio of 0.25 on the first superblock usually harms accuracy much more than the same c-ratio on the significantly denser last superblock.
These patterns are evident from the experiments, especially with derived nets seeded with ResNet44; see table 4. The tables also indicate the learning ability of different portions of the seed networks by evaluating their response to SCBP. In terms of accuracy resilience to truncations, both the learning task and the cumulative layers up to a given superblock need to be considered. The compression impact on accuracy is better correlated with layer count and data granularity than with the actual parameter count. For example, the finer-grained labels in CIFAR100 make compression more challenging because each filter encodes more information and therefore incurs a larger cost when it is pruned. This can be inferred from table 4 and figure 1, where the simple CIFAR10 labels allow SCBP to produce 110% more derived networks than CIFAR100's finer labels. Within the derived networks, we note several learning patterns. One such pattern is that the final superblock, the largest in our experiments, counter-intuitively contributes the least to final model accuracy. In fact, even removing 50% to 75% of its filters incurs marginal accuracy effects. The large truncation allowance on the final superblock is beneficial because it means that many derived models can be compressed by 40-50%. Another pattern SCBP exposes is a weak co-dependence between the first two superblocks, where many models lose >2% accuracy if 75% of their parameters are truncated in either superblock. Extensive data tables are provided in Appendix 5. The co-dependence may be due to the low parameter redundancy present in the first two superblocks. Indeed, these possibly higher-entropy shallow superblocks may indicate that overparameterization is mainly a problem of deeper layers. The relative compression-accuracy trade-off between superblocks means it is better to focus on compressing the latter superblocks. Additionally, it should be noted from table 6 and the associated figures that the middle superblock has a large capacity to absorb critical weights from its neighbors when they are highly compressed. More generally, high compression tends to cause adjacent superblocks to compensate for missing filters. After a certain amount, typically a 30% size reduction, lowering model size correlates positively with lowering accuracy, albeit with some derived-net exceptions that are more accurate after compression (see ResNet44-1.0-0.5-1.0, etc.). Our takeaway is that SCBP, as applied to convnets, can effectively compress models with very low search costs. The c-ratios on superblocks help illuminate the relative contribution of parameters to final performance; they therefore provide insights into the seed and can possibly assist in designing more compressible seed networks. And while SCBP size reductions are moderate when compared to unstructured pruning, SCBP does not result in weight matrix sparsity and thus benefits from BLAS-based hardware acceleration. Lastly, SCBP saves time by not requiring hyperparameter tuning or modification of the seed's training regimen. These attributes, coupled with no prune-retrain cycles, elevate SCBP far above many comparable techniques in terms of GPU costs. Typically, these costs are in the range of 10^3 GPU hours for pruning and up to 10^5 GPU hours for NAS. In comparison, SCBP consumes less than 72 GPU hours from start to finish. 4.3 EFFECT OF SCBP ON LATENCY AND ENERGY Inference time is end-to-end, meaning it encompasses image preprocessing, data moves, model execution, and postprocessing.
It is this time that is averaged for the latency, and it is during this window that power draw is also measured. For power draw, the Xavier NX polls its onboard INA3221 power monitor, while the WorkStation samples from turbostat and nvidia-smi. The main determining factor for latency and power metrics is model size. No matter the architecture or hardware, frequent cache misses and, worse, memory swaps crush real-time performance and balloon latency. The speed and capacity of the memory hierarchy are the determining factors of latency and hence of the energy interval. Because the main contributor to latency and energy costs is memory access, designing smaller convnets is an effective approach for resource-constrained platforms, which heavily benefit from compression due to their limited memory hierarchy. In figure 1, we see that derived nets can maintain their accuracy while compressed. This means there are smaller models, with possibly fewer cache misses, which could be faster. We then see in figures 3 (a)-(d) that latency is indeed greatly reduced, by around 20%, and by 30% in some cases. Meanwhile, the actual power draw, seen in figures 3 (e)-(h), does not change significantly, meaning that the overall energy interval is reduced for our derived networks. Lastly, we find that once a memory bottleneck is successfully eliminated, further compression comes with diminishing returns, as can be observed in the WorkStation experiments in the Appendix, figure 4. The data indicate that excessive parameter pruning past memory bottlenecks is wasted computation. Pruning, as deployed in many unstructured compression techniques, does not lead to faster, less energy-hungry models; often it only results in smaller models at the cost of very expensive and long training cycles. 5 CONCLUSION AND FUTURE WORK We demonstrate a novel mechanism to segment convlayers into superblocks and assign them different compression ratios using a set-represented network search. The SCBP framework constructs a pool of models with different attributes, which can help with different hardware fits. These models are much smaller and faster than their unmodified counterparts. The training cost of these models is low and feasible on regular workstations. As such, embedded-native models can be designed without prohibitive costs, allowing rapid iteration to find the best model-hardware pairing. For future work, we are extending SCBP into an iterative mode that operates on pre-trained networks.
1. What is the focus of the paper regarding efficient neural network design? 2. What are the strengths of the proposed approach, particularly in tackling the problem of designing efficient neural architectures? 3. What are the weaknesses of the paper, especially regarding the grid search method and lack of proper justifications for layer partitioning? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the experimental evaluations and comparisons with other works in the field?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies efficient neural network design for faster inference. The authors propose Structured Coarse Block Pruning (SCBP), which first partitions all layers into segments, then assigns a width multiplier to each stage, and finally explores the best configuration by grid search. The proposed SCBP delivers 20% faster and 50% smaller models on CIFAR-10 and CIFAR-100 benchmarks. Strengths And Weaknesses Strengths: This paper targets a critical problem: designing efficient neural architectures without much human effort or computational resources. Weaknesses: The proposed method is essentially a grid search over a simplified design space. The main contribution of this paper lies in partitioning layers into segments. However, the authors have provided no insights into why layers with the same feature map size should have the same width multiplier (i.e., pruning ratio). The current solution appears to be an arbitrary design space simplification without proper justification. The paper is hard to follow. The algorithm boxes in Section 3 are composed of many sub-procedure (function) calls, which I find hard to understand without more clarification. Though the authors claim that they base their method on filter pruning, the proposed algorithm seems to retrain all the candidate networks from scratch without inheriting the weights from the original model. This paper does not provide any baselines in its experimental evaluation. As the proposed method is highly related to pruning and neural architecture search, it is more than necessary to include some numbers from related baselines. The experimental results are all on small-scale benchmarks, which is less representative. It would be essential to include some results on large-scale datasets, such as ImageNet. The authors claim that their search cost is much lower than that of other methods, which is actually because the dataset they are using is much smaller. Clarity, Quality, Novelty And Reproducibility This paper has poor quality, clarity and novelty.
ICLR
Title Cross-Corpus Training with TreeLSTM for the Extraction of Biomedical Relationships from Text Abstract A bottleneck problem in machine learning-based relationship extraction (RE) algorithms, and particularly in deep learning-based ones, is the availability of training data in the form of annotated corpora. For specific domains, such as biomedicine, the long time and high expertise required for the development of manually annotated corpora explain why most of the existing ones are relatively small (i.e., hundreds of sentences). Besides, larger corpora focusing on general or domain-specific relationships (such as citizenship or drug-drug interactions) have been developed. In this paper, we study how large annotated corpora developed for alternative tasks may improve the performance on biomedicine-related tasks, for which few annotated resources are available. We experiment with two deep learning-based models to extract relationships from biomedical texts with high performance. The first one combines locally extracted features using a Convolutional Neural Network (CNN) model, while the second exploits the syntactic structure of sentences using a Recursive Neural Network (RNN) architecture. Our experiments show that, contrary to the former, the latter benefits from a cross-corpus learning strategy to improve the performance of relationship extraction tasks. Indeed, our approach leads to the best published performances for two biomedical RE tasks, and to state-of-the-art results for two other biomedical RE tasks, for which few annotated resources are available (less than 400 manually annotated sentences). This may be particularly impactful in specialized domains in which training resources are scarce, because they would benefit from the training data of other domains for which large annotated corpora do exist. 1 INTRODUCTION Relationship Extraction (RE) from text is a Natural Language Processing (NLP) task that aims at automatically extracting and summarizing in a structured form the unstructured information of texts. A relationship takes the form of a labeled link between two named entities, as illustrated in Figure 1. Given two identified entities, the RE task consists in predicting whether there is a relation between them and, if so, the type of the relation. It can be seen as a classification task that computes a score for each possible relation type, given a sentence and two identified entities. Deep learning methods have demonstrated good ability for such tasks Zeng et al. (2014), but one of their drawbacks is that they generally require a large amount of training data, i.e., text corpora where entities and relationships between them are annotated, in order to obtain reasonable performance. The building of such a corpus for a specific task, such as those of interest in biomedicine, is time-consuming and expensive because it involves complex entities (e.g., genomic variations, complex symptoms) and complex relationships (which may be hypothetical, contextualized, negated, n-ary), and requires trained annotators. This explains why only few and relatively small (i.e., a few hundred sentences) corpora are available, making these resources particularly valuable. Among these tasks, one can mention the extraction of genomic variation-phenotype relationships, for which only a manually annotated corpus of 362 sentences, named SNPPhenA, exists (Bokharaeian et al., 2017).
Besides, several larger corpora have been manually annotated with biomedical or general-domain relationships and made available (Hachey et al., 2012; Herrero-Zazo et al., 2013; Gurulingappa et al., 2012). Because these corpora share the same language (i.e., English) and thus a common syntax, we wonder whether these resources, developed for slightly different tasks, may be reused for extracting relationships in domains with scarce resources. Several multi-task learning approaches have been proposed to improve performance for a given task using corpora developed for related tasks Collobert et al. (2011). In this paper, we investigate a cross-corpus strategy to improve performance for biomedical RE tasks for which few training data are available, using larger additional corpora developed for other specific RE tasks. This is done by jointly training deep learning-based models while sharing some of the parameters. Before or beside deep learning methods, other approaches for RE have been proposed. Co-occurrence-based methods, for instance, assume that two entities mentioned frequently in the same unit of text (such as a sentence or a paragraph) are related (Garten & Altman, 2009). Rule-based methods use manually designed, or learned, rules consisting of word morphosyntactic features or sentence-level syntactic features (Fundel et al., 2007). These methods have the advantage of requiring few or no annotated data. Within machine learning methods, deep learning ones make it possible to model complex structures such as natural language, and they have been successfully applied to various NLP tasks, including RE by training on annotated corpora (Zeng et al., 2014). While other methods mainly depend on the quality of features extracted by preexisting NLP systems (e.g., POS tagger, stemmer, lemmatizer or syntactic parser), deep learning models automatically learn lexical features using continuous word vector representations, usually named word embeddings, and sentence-level features using deep neural networks such as Convolutional Neural Networks (CNN) (LeCun et al., 1998) or Recursive Neural Networks (RNN) (Pollack, 1990). These models achieve good performances, but strongly depend on the existence of large training corpora, which makes them difficult to use for tasks associated with scarce resources. In this paper, we investigate, within four specific RE tasks for which only few training data are available, how large annotated corpora can be used to improve the performance of deep neural networks. We experiment with two different deep learning approaches that have previously been used for RE. The first is a Multi-Channel CNN (MCCNN)-based model used in (Quan et al., 2016) for biomedical RE, and the second is the tree-structured Long Short-Term Memory (TreeLSTM) model (Tai et al., 2015), which has been adapted with success for RE (Miwa & Bansal, 2016). The main difference between these two models is the ability of the latter to exploit the syntax of the language by including a dependency tree structure in the vector representation of sentences. We conduct our experiments using two relatively small biomedical corpora, SNPPhenA and EU-ADR. Both contain fewer than 400 manually annotated sentences per task; note that EU-ADR covers three different tasks. As supplementary data, we used three larger corpora: SemEval 2013 DDI, ADE and reACE. Details on these five corpora are provided in Section 4.
Our experiments show that, contrary to the MCCNN model, the TreeLSTM model benefits from a cross-corpus learning strategy to improve RE performance for tasks associated with scarce resources. This is done by training a model with data from two distinct corpora, one small and one large, while sharing the model parameters. In addition, our approach led to state-of-the-art performances for the four biomedical tasks associated with scarce resources. Section 2 reviews various deep learning methods used for RE and previous multi-task learning approaches. Section 3 details the MCCNN and TreeLSTM models we use. Section 4 describes the corpora used in this study, and Section 5 presents our experiments and results. We then conclude with a short discussion section. 2 RELATED WORK 2.1 DEEP LEARNING-BASED RELATION EXTRACTION Deep learning models, based on continuous word representations, have been proposed to overcome the problem of sparsity inherent to NLP (Huang & Yates, 2009). In Collobert et al. (2011), the authors proposed a unified CNN architecture to tackle various NLP problems traditionally handled with statistical approaches. They obtained state-of-the-art performances for several tasks, while avoiding the hand design of task-specific features. These results led to progress on NLP topics such as machine translation (Cho et al.), question answering (Bordes et al., 2014) and RE. In particular, Zeng et al. (2014) showed that CNN models can also be applied to the task of RE. In this study, they learn a vectorial sentence representation by applying a CNN model over word and word-position embeddings. This representation is then used to feed a softmax classifier (Bishop, 2006). To improve RE performance, other authors incorporate elements of syntax within the embeddings provided to the model: Xu et al. (2015) use the path of grammatical dependencies between two entities, which is provided by a dependency parsing; Yang et al. (2016) include the relative positions of words in a dependency tree. They also take dependency-based context (i.e., child and parent nodes) into account during the convolution. Besides CNN models that incorporate syntactic knowledge in their embeddings, other approaches go further by proposing neural networks whose topology adapts to the syntactic structure of the sentence. In particular, RNNs have been proposed to adapt to tree structures resulting from constituency parsing (Socher et al., 2013; Legrand & Collobert, 2014). In that vein, Tai et al. (2015) introduced the TreeLSTM, a generalization of the LSTM for tree-structured network topologies, which allows processing trees with arbitrary branching factors. The first model to make use of an RNN for an RE task was proposed by Liu et al. (2015). The authors introduced a CNN-based model applied on the shortest dependency path, augmented with an RNN-based feature designed to model subtrees attached to the shortest path. Miwa & Bansal (2016) introduced a variant of the TreeLSTM used to compute bidirectional (bottom-up and top-down) tree representations for performing relationship classification. Their model uses different weight matrices depending on whether a node belongs to the shortest path or not. In this paper, we use two deep learning strategies to address the problem of RE. The first one is a Multi-Channel Convolutional Neural Network (MCCNN) introduced in Quan et al. (2016) for biomedical RE.
Inspired by three-channel RGB image processing models, it considers different embedding channels (i.e., different word embedding versions for each word), allowing it to capture different aspects of input words. The second model we use is the TreeLSTM model described in Tai et al. (2015), and more specifically its Child-Sum version. This model is suitable for processing dependency trees since it handles trees with arbitrary branching factors and no order between the children of a node. 2.2 MULTI-TASK LEARNING Machine learning methods, and particularly deep learning ones, usually require lots of annotated data in order to obtain reasonable performances. For certain tasks that do not require expert knowledge, such as the recognition of simple objects in an image, gathering lots of annotated data is relatively easy, using for instance crowd-sourcing. Some tasks, such as recognizing a relationship between complex entities mentioned in a biomedical scientific publication, are more complex, and obtaining large corpora in this case can be expensive and time-consuming. Several methods have been explored to deal with the lack of training data, such as bootstrapping (Jones et al., 1999), which allows accurate training from a small amount of labeled data along with a large amount of unlabeled data, or self-training approaches McClosky et al. (2006), which artificially augment the labeled training set with examples from unlabeled datasets, using labels predicted by the model itself. Besides, several studies have focused on transferring knowledge acquired from related tasks to help perform a new related task. For instance, Fei-fei et al. (2006) proposed a Bayesian approach to perform one-shot learning (i.e., learning to categorize objects from a single example) that takes advantage of knowledge coming from previously learned categories. Multi-task learning is a learning approach in which performance on a given task is improved using information contained in the training signals of auxiliary related tasks Caruana (1997). It is a form of inductive transfer where the auxiliary tasks introduce an inductive bias during training. This is usually done by training tasks in parallel while using a shared representation (Sutton et al., 2007; Ando & Zhang, 2005). In Collobert et al. (2011), the authors jointly trained a CNN on various natural language processing tasks, including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. They showed that sharing a portion of the network weights during training led to better performances for all the individual tasks. 3 MODELS We consider in this article MCCNN and TreeLSTM models, which both compute a fixed-size vector representation for the whole sentence by composing input embeddings. A score is then computed for each possible type of relationship (e.g., negative, positive or speculative) between two identified entities. The number of possible relationship types depends on the task (see Section 4). In this section, we first introduce the embedding input layer, which is common to both approaches (i.e., MCCNN and TreeLSTM); then, we detail how each approach composes sequences of embeddings in order to compute a unique vectorial sentence representation; finally, we present the scoring layer, which is also common to both approaches. 3.1 INPUT LAYER Both models are fed with word embeddings (i.e., continuous vectors) of dimension d_w, along with extra entity embeddings of size d_e, which are concatenated to the word embeddings.
Formally, given a sentence of N words, w_1, w_2, ..., w_N, each word w_i ∈ W is first embedded into a d_w-dimensional vector space by applying a lookup-table operation: LT_W(w_i) = W_{w_i}, where the matrix W ∈ R^{d_w×|W|} represents the parameters to be trained in this lookup layer. Each column W_{w_i} ∈ R^{d_w} corresponds to the vector embedding of the w_i-th word in our dictionary W. Three entity embeddings (coming from a simple 3-element dictionary) enable the models to distinguish between words which compose the first entity, compose the second entity, or are not part of any entity in a sentence. They are respectively called first-entity, second-entity and other embeddings. Finally, word and entity embeddings are concatenated to form the input corresponding to a given word. Let us denote x_i the concatenated input corresponding to the i-th word. 3.2 COMPOSITION LAYERS Both models take the embeddings as input and output a fixed-size representation r_s of size d_s. This section details the two models used in this study. 3.2.1 MCCNN The MCCNN model applies a variable-kernel-size CNN to multiple input channels of word embeddings. More formally, given an input sequence x_1, ..., x_N, applying a kernel of size k to the i-th window is done using the following formula:
C = h( Σ_{j=1}^{c} W [x_{i-(k-1)/2}, ..., x_i, ..., x_{i+(k-1)/2}]_j + b )
where [·]_j denotes the concatenation of inputs from channel j, W ∈ R^{(d_w+d_e)×d_h} and b ∈ R^{d_h} are the parameters, h is a pointwise non-linear function such as the hyperbolic tangent, and c is the number of input channels. Inputs with indices exceeding the input boundaries (i - (k-1)/2 < 1 or i + (k-1)/2 > N) are mapped to a special padding vector (which is also learned). A fixed-size representation r_h ∈ R^{d_h} is then obtained by applying a max-pooling over time: r_h = max C. We denote by K the number of kernels with different sizes. A sentence representation r_s ∈ R^{d_s} (with d_s = K × d_h) is finally obtained by concatenating the outputs corresponding to the K kernels: r_s = [r_h^1, ..., r_h^K], where r_h^k corresponds to the output of the k-th kernel. Figure 2 illustrates the structure of a two-channel CNN, with two kernels of size 2 and 3, on a four-word sentence. 3.2.2 TREELSTM The TreeLSTM model (Tai et al., 2015) processes the dependency tree associated with an input sentence in a bottom-up manner. This is done by recursively processing the nodes of the tree, using their child representations as input. The transition function for a node j and a set of children C(j) is given by the following set of equations:
h̃_j = Σ_{k∈C(j)} h_k
i_j = σ(W^(i) x_j + U^(i) h̃_j + b^(i))
f_jk = σ(W^(f) x_j + U^(f) h_k + b^(f))
o_j = σ(W^(o) x_j + U^(o) h̃_j + b^(o))
u_j = tanh(W^(u) x_j + U^(u) h̃_j + b^(u))
c_j = i_j ⊙ u_j + Σ_{k∈C(j)} f_jk ⊙ c_k
h_j = o_j ⊙ tanh(c_j)
where σ denotes the logistic function, ⊙ the element-wise multiplication, x_j ∈ R^{d_w+d_e} is the input for node j, and h_k ∈ R^{d_h} is the hidden state of the k-th child. Each TreeLSTM unit is a collection of vectors: an input gate i_j, forget gates f_jk, an output gate o_j, a memory cell c_j and a hidden state h_j. The matrices W and U and the vectors b are the weight and bias parameters to train. The TreeLSTM outputs a sentence representation r_s ∈ R^{d_s} corresponding to the output state o_j of the top tree node (i.e., the root node of the dependency tree that spans all the others). Figure 3 illustrates the structure of the TreeLSTM computed for a four-word sentence.
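For concreteness, here is a minimal PyTorch sketch of a single Child-Sum TreeLSTM node update implementing the transition equations above; it is our illustrative reconstruction, not the authors' code, and batching plus the recursive tree traversal are left out.

import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    # One Child-Sum TreeLSTM node update (Tai et al., 2015).
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # W^(i,o,u), U^(i,o,u) and b^(i,o,u) packed into two linear maps
        self.iou_x = nn.Linear(input_dim, 3 * hidden_dim)
        self.iou_h = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        # forget gate f_jk: one gate per child, from x_j and that child's h_k
        self.f_x = nn.Linear(input_dim, hidden_dim)
        self.f_h = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_j, child_h, child_c):
        # child_h, child_c: (num_children, hidden_dim); empty for a leaf
        h_tilde = child_h.sum(dim=0)                 # h~_j = sum_k h_k
        i, o, u = (self.iou_x(x_j) + self.iou_h(h_tilde)).chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f_x(x_j) + self.f_h(child_h))  # one row per child
        c_j = i * u + (f * child_c).sum(dim=0)
        h_j = o * torch.tanh(c_j)
        return h_j, c_j

# Usage on a leaf node, with d_w + d_e = 110 and d_h = 200 as in Section 5:
# cell = ChildSumTreeLSTMCell(110, 200)
# h, c = cell(torch.randn(110), torch.zeros(0, 200), torch.zeros(0, 200))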
3.3 SCORING LAYER Both the MCCNN and the TreeLSTM models output a unique vector representation r_s ∈ R^{d_s} that takes the entire sentence into account. This representation is used to feed a single-layer neural network classifier, which outputs a score vector with one score for each possible type of relationship. This vector of scores is obtained using the following formula:
s(r_s) = W^(s) r_s + b^(s)
where W^(s) ∈ R^{d_s×|S|} and b^(s) ∈ R^{|S|} are the trained parameters of the scorer and |S| is the number of possible relations. The scores are interpreted as probabilities using a softmax layer (Bishop, 2006). 4 DATASETS We explore how RE tasks that focus on a type of relationship associated with scarce resources may take advantage of existing corpora; in other words, how completing a small training corpus with a larger one may help the RE task when the latter is annotated with a different type of relationships. For this purpose, we selected (i) two small biomedical corpora, SNPPhenA and the EU-ADR corpus, and (ii) three larger corpora, the SemEval 2013 DDI corpus, the ADE corpus and the reACE corpus. These corpora are publicly available and detailed in the following section. Table 4.2 summarizes the main characteristics of these five corpora, and the following section details them. 4.1 SMALL CORPORA
• SNPPhenA (Bokharaeian et al., 2017) is a corpus of abstracts of biomedical publications, obtained from PubMed¹, annotated with two types of entities: single nucleotide polymorphisms (SNPs) and phenotypes. Relationships between these entities are annotated and classified in 3 categories: positive, negative and neutral relationships. The neutral relationship type is used when no relationship is mentioned in the sentence between two annotated entities.
• EU-ADR (van Mulligen et al., 2012) is a corpus of abstracts obtained from PubMed and annotated with drug, disorder and drug-target (proteins/genes or gene variants) entities. It is composed of 3 subcorpora, focusing either on target-disease, target-drug or drug-disease relationships. Each of them consists of 100 abstracts. Annotated relationships are classified in 3 categories: positive, speculative and negative associations (PA, SA and NA, respectively). In Bravo et al. (2015), performances are assessed over the TRUE class, which is composed of the classes PA, SA and NA, in contrast with the FALSE class, composed of sentences where two entities co-occur but no relationship is annotated between them.
4.2 LARGE CORPORA
• SemEval 2013 DDI (Herrero-Zazo et al., 2013) consists of texts from DrugBank and MEDLINE annotated with drugs. Drug mentions are categorized into several types: drug, brand, group and drug_n (i.e., active substances not approved for human use). Relationships between two drug mentions are annotated and classified in 4 categories: mechanism, effect, advice and int. int is the broader and default category for DDI, used when no more detail can be provided.
• ADE-EXT (Adverse Drug Effect corpus, extended) (Gurulingappa et al., 2012) consists of MEDLINE case reports, annotated with drugs and conditions (e.g., diseases, signs and symptoms), along with untyped relationships between them, when one is mentioned.
• reACE (Edinburgh Regularized Automatic Content Extraction) (Hachey et al., 2012) consists of English broadcast news and newswire annotated with organization, person, fvw (facility, vehicle or weapon) and gpl (geographical, political or location) entities, along with relationships between them. Relationships are classified in five categories (general-affiliation, organisation-affiliation, part-whole, personal-social and agent-artifact).
¹ https://www.ncbi.nlm.nih.gov/pubmed/
5 EXPERIMENTS 5.1 TRAINING AND EXPERIMENTAL SETTINGS Our models are trained by minimizing a log-likelihood function over the training data. All parameters, including weights, biases and embeddings, were updated via backpropagation for the MCCNN and Backpropagation Through Structure (BPTS) (Goller & Kuchler, 1996) for the TreeLSTM. All the hyper-parameters were tuned using a 10-fold cross-validation, by selecting the values leading to the best averaged performance, and then fixed for the rest of the experiments. Word embeddings were pre-trained on PubMed abstracts using the method described in Lebret & Collobert (2013). These abstracts correspond to all the abstracts published between January 1, 2014 and December 31, 2016, and available on PubMed (around 3.4 million). MCCNN model. Following Kim (2014), both channels are initialized with pre-trained word embeddings, but gradients were back-propagated only through one of the channels. Hyper-parameters were fixed to d_w = 100, d_e = 10, d_h = 100 and d_s = 200. We applied a dropout regularization after the embedding layers. TreeLSTM model. Dependency trees were obtained using the Stanford Parser (Chen & Manning, 2014). Hyper-parameters were fixed to d_w = 100, d_e = 10, d_h = 200 and d_s = 200. We applied a dropout regularization (Srivastava et al., 2014) after every TreeLSTM unit and after the embedding layers. The drop probability for each connection was fixed to 0.25. All the parameters are initialized randomly except the word embeddings. We evaluated performances in terms of precision (P), recall (R) and F-measure (F). For multi-label classifications, we report the macro-average performance².
² The macro-average score is less impacted by the performance on classes with very few test samples (and thus a high variance). For that reason, this score is more representative of the performance of our model.
Because no proper test corpus is provided with EU-ADR, we performed a 10-fold cross-validation using 10% of the corpus for the validation and 10% for the test of our models. For SNPPhenA, we performed a cross-validation using 10% of the corpus for validation and the provided test corpus for testing. 5.2 CROSS-CORPUS STUDY In this subsection, we present our cross-corpus training strategy and its results. For each fold of our cross-corpus experiments, the same network, initialized with random weights, is used for the different corpora (i.e., same embedding layer and TreeLSTM weights), except for the scorer, which is different for each corpus, as the number and types of relationships may change. During the training phase, we randomly pick training sentences from the mixed corpora. Table 2 presents the results of the cross-corpus study. For each of the 10 folds, we performed 10 experiments starting from different random weight initializations. Thus, each result is an average of 100 experiments. We observe that, for the TreeLSTM model, additional data consistently improved the performances. More interestingly, this phenomenon occurred even for corpora with different types of entities, such as the combination of SNPPhenA and SemEval 2013 DDI, and, to a lesser extent, for a corpus outside of the biomedical domain (reACE). This phenomenon was not observed for the MCCNN model, for which performance tended to decrease slightly when using the cross-corpus learning strategy.
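A hedged sketch of one update of the cross-corpus strategy described above: the encoder (embedding layer plus TreeLSTM) is shared, each corpus has its own scorer, and training sentences are drawn at random from the mixed corpora. The encoder interface and all names are our assumptions, not the authors' code.

import random
import torch
import torch.nn as nn

def cross_corpus_step(encoder: nn.Module,
                      scorers: dict,   # corpus name -> nn.Linear scoring head
                      batches: dict,   # corpus name -> iterator of (inputs, labels)
                      optimizer: torch.optim.Optimizer) -> float:
    # Pick a training batch at random from the mixed corpora, score it with
    # the corpus-specific head, and update the shared encoder parameters.
    corpus = random.choice(list(batches))
    inputs, labels = next(batches[corpus])
    r_s = encoder(inputs)                 # shared sentence representation
    scores = scorers[corpus](r_s)         # corpus-specific relation scores
    loss = nn.functional.cross_entropy(scores, labels)  # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()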
5.3 COMPARISON WITH THE STATE OF THE ART Table 3 presents a comparison of the performances obtained with our approach versus two state-of-the-art systems applied to the RE tasks associated with SNPPhenA and EU-ADR, reported in Bokharaeian et al. (2017) and Bravo et al. (2015), respectively. Our results are obtained using, for each fold, an ensemble of the 5 best models (according to the validation) starting from different random initializations. The ensembling was done by averaging the scores s(r_s) of the individual models, following Legrand & Collobert (2014). We report the 10-fold average performance. Both state-of-the-art systems use a combination of a shallow linguistic kernel with a kernel that exploits deep syntactic features. Our approach outperforms the performances reported for SNPPhenA and for one of the EU-ADR subtasks, and leads to similar performances for the two remaining EU-ADR subtasks. 6 DISCUSSION Results presented in Table 2 show that, in our settings, the TreeLSTM model benefits from a cross-corpus learning strategy, while the strategy is useless, or sometimes counterproductive, for the MCCNN model. One may think that the TreeLSTM model, due to its ability to exploit the syntactic structure of the sentence, is better at understanding the sentences from the small datasets by exploiting the syntactic patterns observed in the additional data. This idea is reinforced by the fact that even a corpus that shares neither the same entities nor a close vocabulary, such as reACE, in which no biomedical vocabulary appears, can be helpful for biomedical RE. This hypothesis could be interestingly explored in further work. Surprisingly, the best results were consistently obtained using the SemEval 2013 DDI corpus as additional data, even for RE tasks that do not involve drugs, like EU-ADR target-disease. Likewise, one might have thought that the ADE-EXT corpus would be more suitable for the EU-ADR drug-disease corpus, since it shares common entities. Several ideas should be explored to better understand this phenomenon, such as the differences in relation and entity types between the corpora, as well as the differences in the types of source texts (e.g., medical case reports for ADE-EXT, news for reACE, research articles for the others). Higher-level syntactic analysis (such as the average distance between the two entities, or the nature of the lowest common ancestor in the dependency graph) could provide insights into this question and help characterize the right corpus to select for cross-corpus training. For the TreeLSTM model, we also tried to train models with multiple additional corpora but did not obtain better performances. For each of the 4 RE tasks studied, the results were consistently on par with the performances obtained using only the additional corpus leading to the worst cross-corpus performances. Further work should be done to better understand this phenomenon. Finally, it would be interesting to enrich our model with additional features such as POS or morphosyntactic ones. A more sophisticated TreeLSTM model, taking the dependency tags into account in addition to the dependency structure, would also be worth exploring. 7 CONCLUSION In this paper, we empirically demonstrated that a cross-corpus learning strategy can be beneficial for tackling biomedical RE tasks for which few annotated resources are available, when using the TreeLSTM model.
7 CONCLUSION

In this paper, we empirically demonstrated that a cross-corpus learning strategy can be beneficial for tackling biomedical RE tasks for which few annotated resources are available, when using the TreeLSTM model. Interestingly, we showed that any additional corpus, even one focusing on an unrelated domain, can carry useful information and lead to improved performances. Additionally, the cross-corpus approach led to the best published results for two biomedical RE tasks, focusing on SNP-phenotype and drug-disease relationships, and to state-of-the-art results for two others, focusing on target-disease and target-drug relationships. We think that cross-corpus training could be reproduced, and would thus be valuable, in other specialized domains in which training resources are scarce.
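As announced in Section 5.3, here is a minimal sketch of the score-averaging ensembling. It assumes PyTorch modules as scorers; the model list and sizes are illustrative stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn

def ensemble_scores(models, r_s):
    # Average the raw score vectors s(r_s) of the individual models;
    # the predicted relation is the one with the highest mean score.
    with torch.no_grad():
        return torch.stack([m(r_s) for m in models]).mean(dim=0)

# Toy usage: five "best" scorers over a d_s = 200 representation, 3 relation types.
models = [nn.Linear(200, 3) for _ in range(5)]
prediction = ensemble_scores(models, torch.randn(200)).argmax().item()
```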
1. What is the main contribution of the paper in relation extraction?
2. What are the strengths and weaknesses of the proposed cross-corpus approach?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the concerns regarding the experimental setup and results?
5. Do you have any suggestions for improving the paper, such as exploring the use of all auxiliary datasets together or providing a brief explanation of relation extraction?
Review
SUMMARY. The paper presents a cross-corpus approach for relation extraction from text. The main idea is to complement small training data for relation extraction with training data annotated with different relation types. The model is also connected with multi-task learning approaches, where the encoder for the input is shared but the output layer is different for each task. In this work, the output/softmax layer is different for each data type, while the encoder is shared. The authors tried two different sentence encoders (CNN-based and TreeLSTM-based), and final results are calculated on the low-resource dataset. Experimental results show that the tree-structured encoder is able to capture valuable information from the auxiliary data, while the CNN-based one does not.

----------

OVERALL JUDGMENT. The paper presents an interesting approach to data augmentation with data of a different type for relation extraction. I would have appreciated a section where the authors briefly explain what relation extraction is, perhaps with an example. The paper is overall clear, although I believe the experimental section needs improvement. From Section 5.2, I am not able to understand the experimental setting the authors used: is it 10-fold CV? Did the authors tune the hyperparameters for each fold? Are the results in Table 3 obtained with the TreeLSTM? What kind of ensembling did the authors choose for those experiments? The authors overstate that their model outperforms the state-of-the-art models they compare to, but that is not true for the EU-ADR dataset, where in 2 out of 3 relation types the proposed model performs on par with the state-of-the-art model. Finally, the authors used only one auxiliary dataset at a time; it would be interesting to see whether using all the auxiliary datasets together would improve results even more. I would also suggest that the authors check and revise their citations (CNNs are not Collobert et al.'s invention, and the same goes for the maximum-likelihood objective) and, more generally, improve the references to the relation extraction literature.
ICLR
Title
Cross-Corpus Training with TreeLSTM for the Extraction of Biomedical Relationships from Text

Abstract
A bottleneck problem in machine learning-based relationship extraction (RE) algorithms, and particularly in deep learning-based ones, is the availability of training data in the form of annotated corpora. For specific domains, such as biomedicine, the long time and high expertise required for the development of manually annotated corpora explain why most of the existing ones are relatively small (i.e., hundreds of sentences). Besides, larger corpora focusing on general or domain-specific relationships (such as citizenship or drug-drug interactions) have been developed. In this paper, we study how large annotated corpora developed for alternative tasks may improve the performances on biomedicine-related tasks, for which few annotated resources are available. We experiment with two deep learning-based models to extract relationships from biomedical texts with high performance. The first one combines locally extracted features using a Convolutional Neural Network (CNN) model, while the second exploits the syntactic structure of sentences using a Recursive Neural Network (RNN) architecture. Our experiments show that, contrary to the former, the latter benefits from a cross-corpus learning strategy to improve the performance of relationship extraction tasks. Indeed, our approach leads to the best published performances for two biomedical RE tasks, and to state-of-the-art results for two other biomedical RE tasks, for which few annotated resources are available (fewer than 400 manually annotated sentences). This may be particularly impactful in specialized domains in which training resources are scarce, because they would benefit from the training data of other domains for which large annotated corpora do exist.

1 INTRODUCTION

Relationship Extraction (RE) from text is a Natural Language Processing (NLP) task that aims at automatically extracting and summarizing in a structured form the unstructured information of texts. A relationship takes the form of a labeled link between two named entities, as illustrated in Figure 1. Given two identified entities, the RE task consists in predicting whether there is a relation between them and, if so, the type of the relation. It can be seen as a classification task, computing a score for each possible relation type, given a sentence and two identified entities. Deep learning methods have demonstrated a good ability for such tasks (Zeng et al., 2014), but one of their drawbacks is that they generally require a large amount of training data, i.e., text corpora where entities and relationships between them are annotated, in order to obtain reasonable performances. Building such a corpus for a specific task, such as those of interest in biomedicine, is time-consuming and expensive, because it involves complex entities (e.g., genomic variations, complex symptoms), complex relationships (which may be hypothetical, contextualized, negated or n-ary) and requires trained annotators. This explains why only few and relatively small (i.e., a few hundred sentences) corpora are available, making these resources particularly valuable. Among these tasks, one can mention the extraction of genomic variation-phenotype relationships, for which only a manually annotated corpus of 362 sentences, named SNPPhenA, exists (Bokharaeian et al., 2017).
Besides, several larger corpora have been manually annotated with biomedical or general-domain relationships and made available (Hachey et al., 2012; Herrero-Zazo et al., 2013; Gurulingappa et al., 2012). Because these corpora share the same language (i.e., English), and thus a common syntax, we wonder whether these resources, developed for slightly different tasks, may be reused for extracting relationships in domains with scarce resources. Several multi-task learning approaches have been proposed to improve performance on a given task using corpora developed for related tasks (Collobert et al., 2011). In this paper, we investigate a cross-corpus strategy to improve performances on biomedical RE tasks for which few training data are available, using larger additional corpora developed for other specific RE tasks. This is done by jointly training deep learning-based models while sharing some of the parameters.

Before or beside deep learning methods, other approaches for RE have been proposed. Co-occurrence-based methods, for instance, assume that two entities mentioned frequently in the same unit of text (such as a sentence or a paragraph) are related (Garten & Altman, 2009). Rule-based methods use manually designed, or learned, rules consisting of word morphosyntactic features or sentence-level syntactic features (Fundel et al., 2007). These methods have the advantage of requiring few or no annotated data. Within machine learning methods, deep learning ones make it possible to model complex structures such as natural language, and have been successfully applied to various NLP tasks, including RE trained on annotated corpora (Zeng et al., 2014). While other methods mainly depend on the quality of features extracted by preexisting NLP systems (e.g., POS tagger, stemmer, lemmatizer or syntactic parser), deep learning models automatically learn lexical features, using continuous word vector representations usually named word embeddings, and sentence-level features, using deep neural networks such as Convolutional Neural Networks (CNN) (LeCun et al., 1998) or Recursive Neural Networks (RNN) (Pollack, 1990). These models achieve good performances, but strongly depend on the existence of large training corpora, which makes them difficult to use for tasks associated with scarce resources.

In this paper, we investigate, within four specific RE tasks for which only few training data are available, how large annotated corpora can be used to improve the performances of deep neural networks. We experiment with two different deep learning approaches that have previously been used for RE. The first is a Multi-Channel CNN (MCCNN)-based model used in Quan et al. (2016) for biomedical RE, and the second is the tree-structured Long Short-Term Memory (TreeLSTM) model (Tai et al., 2015), which has been adapted with success for RE (Miwa & Bansal, 2016). The main difference between these two models is the ability of the latter to exploit the syntax of the language, by including a dependency tree structure in the vector representation of sentences.

We conduct our experiments using two relatively small biomedical corpora, SNPPhenA and EU-ADR. Both contain fewer than 400 manually annotated sentences for each task, but note that EU-ADR focuses on three different tasks. As supplementary data, we used three larger corpora: SemEval 2013 DDI, ADE and reACE. Details on these five corpora are provided in Section 4.
Our experiments show that, contrary to the MCCNN model, the TreeLSTM model benefits from a cross-corpus learning strategy to improve the RE performances for tasks associated with scarce resources. This is done by training a model with data from two distinct corpora, one small and one large, while sharing the model parameters. In addition, our approach led to state-of-the-art performances for the four biomedical tasks associated with scarce resources.

Section 2 reviews various deep learning methods used for RE and previous multi-task learning approaches. Section 3 details the MCCNN and TreeLSTM models we use. Section 4 describes the corpora used in this study, and Section 5 presents our experiments and results. We then conclude with a short discussion section.

2 RELATED WORK

2.1 DEEP LEARNING-BASED RELATION EXTRACTION

Deep learning models, based on continuous word representations, have been proposed to overcome the problem of sparsity inherent to NLP (Huang & Yates, 2009). In Collobert et al. (2011), the authors proposed a unified CNN architecture to tackle various NLP problems traditionally handled with statistical approaches. They obtained state-of-the-art performances for several tasks, while avoiding the hand-design of task-specific features. These results led to progress on NLP topics such as machine translation (Cho et al.), question answering (Bordes et al., 2014) and RE. In particular, Zeng et al. (2014) showed that CNN models can also be applied to the task of RE. In their study, they learn a vectorial sentence representation by applying a CNN model over word and word-position embeddings. This representation is then used to feed a softmax classifier (Bishop, 2006). To improve RE performance, other authors consider elements of syntax within the embeddings provided to the model: Xu et al. (2015) use the path of grammatical dependencies between two entities, which is provided by dependency parsing; Yang et al. (2016) include the relative positions of words in a dependency tree. They also take dependency-based context (i.e., child and parent nodes) into account during the convolution.

Beside CNN models that incorporate syntactic knowledge in their embeddings, other approaches go further by proposing neural networks whose topology adapts to the syntactic structure of the sentence. In particular, RNNs have been proposed to adapt to tree structures resulting from constituency parsing (Socher et al., 2013; Legrand & Collobert, 2014). In that vein, Tai et al. (2015) introduced the TreeLSTM, a generalization of LSTMs for tree-structured network topologies, which allows processing trees with arbitrary branching factors. The first model to make use of RNNs for an RE task was proposed by Liu et al. (2015). The authors introduced a CNN-based model applied to the shortest dependency path, augmented with an RNN-based feature designed to model subtrees attached to the shortest path. Miwa & Bansal (2016) introduced a variant of the TreeLSTM used to compute bidirectional (bottom-up and top-down) tree representations for performing relationship classification. Their model uses different weight matrices depending on whether a node belongs to the shortest path or not.

In this paper, we use two deep learning strategies to address the problem of RE. The first one is a Multi-Channel Convolutional Neural Network (MCCNN), introduced in Quan et al. (2016) for biomedical RE.
Inspired by three-channel RGB image processing models, it considers different embedding channels (i.e., different word embedding versions for each word), allowing it to capture different aspects of input words. The second model we use is the TreeLSTM model described in Tai et al. (2015), and more specifically its Child-Sum version. This model is suitable for processing dependency trees, since it handles trees with arbitrary branching factors and no order between the children of a node.

2.2 MULTI-TASK LEARNING

Machine learning methods, and particularly deep learning ones, usually require lots of annotated data in order to obtain reasonable performances. For certain tasks that do not require expert knowledge, such as the recognition of simple objects in an image, gathering lots of annotated data is relatively easy, using for instance crowd-sourcing. Some tasks, such as recognizing a relationship between complex entities mentioned in a biomedical scientific publication, are more complex, and obtaining large corpora in this case can be expensive and time-consuming. Several methods have been explored to deal with the lack of training data, such as bootstrapping (Jones et al., 1999), which allows accurate training from a small amount of labeled data along with a large amount of unlabeled data, or self-training approaches (McClosky et al., 2006), which artificially augment the labeled training set with examples from unlabeled datasets, using labels predicted by the model itself. Besides, several studies have focused on transferring knowledge acquired from related tasks to help perform a new related task. For instance, Fei-fei et al. (2006) proposed a Bayesian approach to perform one-shot learning (i.e., learning to categorize objects from a single example) that takes advantage of knowledge coming from previously learned categories.

Multi-task learning is a learning approach in which performances on a given task are improved using information contained in the training signals of auxiliary related tasks (Caruana, 1997). It is a form of inductive transfer, where the auxiliary tasks introduce an inductive bias during training. This is usually done by training tasks in parallel while using a shared representation (Sutton et al., 2007; Ando & Zhang, 2005). In Collobert et al. (2011), the authors jointly trained a CNN on various natural language processing tasks, including part-of-speech tagging, chunking, named entity recognition and semantic role labeling. They showed that sharing a portion of the network weights during training led to better performances for all the individual tasks.

3 MODELS

We consider in this article MCCNN and TreeLSTM models, which both compute a fixed-size vector representation for the whole sentence by composing input embeddings. A score is then computed for each possible type of relationship (e.g., negative, positive or speculative) between two identified entities. The number of possible relationship types depends on the task (see Section 4). In this section, we first introduce the embedding input layer, which is common to both approaches (i.e., MCCNN and TreeLSTM); then, we detail how each approach composes sequences of embeddings in order to compute a unique vectorial sentence representation; finally, we present the scoring layer, which is also common to both approaches.

3.1 INPUT LAYER

Both models are fed with word embeddings (i.e., continuous vectors) of dimension $d_w$, along with extra entity embeddings of size $d_e$, which are concatenated to the word embeddings. Formally, given a sentence of $N$ words $w_1, w_2, \ldots, w_N$, each word $w_i \in \mathcal{W}$ is first embedded in a $d_w$-dimensional vector space by applying a lookup-table operation $LT_W(w_i) = W_{w_i}$, where the matrix $W \in \mathbb{R}^{d_w \times |\mathcal{W}|}$ contains the parameters to be trained in this lookup layer. Each column $W_{w_i} \in \mathbb{R}^{d_w}$ corresponds to the vector embedding of the $w_i$-th word in our dictionary $\mathcal{W}$. Three entity embeddings (coming from a simple 3-element dictionary) make it possible to distinguish between words which compose the first entity, the second entity, or are not part of any entity in the sentence. They are respectively called the first entity, second entity and other embeddings. Finally, the word and entity embeddings are concatenated to form the input corresponding to a given word. We denote by $x_i$ the concatenated input corresponding to the $i$-th word.
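A minimal PyTorch sketch of this input layer, with the embedding sizes $d_w = 100$ and $d_e = 10$ from Section 5; the vocabulary size and the toy word indices are illustrative assumptions.

```python
import torch
import torch.nn as nn

d_w, d_e = 100, 10                        # embedding sizes used in the paper
vocab_size = 5000                         # illustrative dictionary size |W|

word_emb = nn.Embedding(vocab_size, d_w)  # lookup table LT_W
entity_emb = nn.Embedding(3, d_e)         # first entity / second entity / other

# A toy 6-word sentence: word indices plus entity tags
# (1 = first entity, 2 = second entity, 0 = other).
words = torch.tensor([12, 7, 431, 9, 88, 3])
tags = torch.tensor([1, 0, 0, 0, 2, 0])

x = torch.cat([word_emb(words), entity_emb(tags)], dim=-1)  # each x_i in R^(d_w+d_e)
assert x.shape == (6, d_w + d_e)
```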
3.2 COMPOSITION LAYERS

Both models take the embeddings as input and output a fixed-size representation $r_s$ of size $d_s$. This section details the two models used in this study.

3.2.1 MCCNN

The MCCNN model applies a CNN with variable kernel sizes to multiple input channels of word embeddings. More formally, given an input sequence $x_1, \ldots, x_N$, applying a kernel of size $k$ to the $i$-th window is done using the following formula:

$C_i = h\left(\sum_{j=1}^{c} W \left[x_{i-\frac{k-1}{2}}, \ldots, x_i, \ldots, x_{i+\frac{k-1}{2}}\right]_j + b\right)$,

where $[\,\cdot\,]_j$ denotes the concatenation of inputs from channel $j$, $W \in \mathbb{R}^{d_h \times k(d_w+d_e)}$ and $b \in \mathbb{R}^{d_h}$ are the parameters, $h$ is a pointwise non-linear function such as the hyperbolic tangent, and $c$ is the number of input channels. Inputs with indices exceeding the input boundaries ($i - \frac{k-1}{2} < 1$ or $i + \frac{k-1}{2} > N$) are mapped to a special padding vector (which is also learned). A fixed-size representation $r_h \in \mathbb{R}^{d_h}$ is then obtained by applying a max-pooling over time: $r_h = \max_i C_i$. We denote by $K$ the number of kernels with different sizes. A sentence representation $r_s \in \mathbb{R}^{d_s}$ (with $d_s = K \cdot d_h$) is finally obtained by concatenating the outputs corresponding to the $K$ kernels: $r_s = [r_h^1, \ldots, r_h^K]$, where $r_h^k$ corresponds to the output of the $k$-th kernel. Figure 2 illustrates the structure of a two-channel CNN, with two kernels of sizes 2 and 3, on a four-word sentence.

3.2.2 TREELSTM

The TreeLSTM model (Tai et al., 2015) processes the dependency tree associated with an input sentence in a bottom-up manner. This is done by recursively processing the nodes of the tree, using their child representations as input. The transition function for a node $j$ with a set of children $C(j)$ is given by the following set of equations:

$\tilde{h}_j = \sum_{k \in C(j)} h_k$
$i_j = \sigma(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)})$
$f_{jk} = \sigma(W^{(f)} x_j + U^{(f)} h_k + b^{(f)})$
$o_j = \sigma(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)})$
$u_j = \tanh(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)})$
$c_j = i_j \odot u_j + \sum_{k \in C(j)} f_{jk} \odot c_k$
$h_j = o_j \odot \tanh(c_j)$,

where $\sigma$ denotes the logistic function, $\odot$ the element-wise multiplication, $x_j \in \mathbb{R}^{d_w+d_e}$ is the input for node $j$, and $h_k \in \mathbb{R}^{d_h}$ is the hidden state of the $k$-th child. Each TreeLSTM unit is a collection of vectors: an input gate $i_j$, forget gates $f_{jk}$, an output gate $o_j$, a memory cell $c_j$ and a hidden state $h_j$. The matrices $W$ and $U$ and the vectors $b$ are the weight and bias parameters to train. The TreeLSTM outputs a sentence representation $r_s \in \mathbb{R}^{d_s}$ corresponding to the hidden state $h_j$ of the top tree node (i.e., the root node of the dependency tree, which spans all the others). Figure 3 illustrates the structure of the TreeLSTM computed for a four-word sentence.
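The Child-Sum node transition above can be sketched as follows in PyTorch. This is a schematic re-implementation of the equations, not the authors' code; stacking the four $W$ projections into one linear layer is an implementation choice, and leaf nodes are handled with empty child tensors.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """One node transition of the Child-Sum TreeLSTM (Tai et al., 2015)."""
    def __init__(self, d_in, d_h):
        super().__init__()
        self.W = nn.Linear(d_in, 4 * d_h)                 # W^(i), W^(f), W^(o), W^(u) stacked
        self.U_iou = nn.Linear(d_h, 3 * d_h, bias=False)  # U^(i), U^(o), U^(u), applied to h~_j
        self.U_f = nn.Linear(d_h, d_h, bias=False)        # U^(f), applied per child

    def forward(self, x_j, child_h, child_c):
        # child_h, child_c: (num_children, d_h); empty tensors for leaf nodes.
        h_tilde = child_h.sum(dim=0)                  # h~_j = sum over children of h_k
        wi, wf, wo, wu = self.W(x_j).chunk(4, dim=-1)
        ui, uo, uu = self.U_iou(h_tilde).chunk(3, dim=-1)
        i = torch.sigmoid(wi + ui)                    # input gate i_j
        o = torch.sigmoid(wo + uo)                    # output gate o_j
        u = torch.tanh(wu + uu)                       # candidate update u_j
        f = torch.sigmoid(wf + self.U_f(child_h))     # one forget gate f_jk per child
        c = i * u + (f * child_c).sum(dim=0)          # memory cell c_j
        h = o * torch.tanh(c)                         # hidden state h_j
        return h, c

cell = ChildSumTreeLSTMCell(d_in=110, d_h=200)        # d_w + d_e = 110, d_h = 200
h, c = cell(torch.randn(110), torch.zeros(0, 200), torch.zeros(0, 200))  # a leaf node
```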
3.3 SCORING LAYER

Both the MCCNN and the TreeLSTM models output a unique vector representation $r_s \in \mathbb{R}^{d_s}$ that takes the entire sentence into account. This representation is used to feed a single-layer neural network classifier, which outputs a score vector with one score for each possible type of relationship. This vector of scores is obtained using the following formula:

$s(r_s) = W^{(s)} r_s + b^{(s)}$,

where $W^{(s)} \in \mathbb{R}^{|S| \times d_s}$ and $b^{(s)} \in \mathbb{R}^{|S|}$ are the trained parameters of the scorer, and $|S|$ is the number of possible relations. The scores are interpreted as probabilities using a softmax layer (Bishop, 2006).

4 DATASETS

We explore how RE tasks that focus on a type of relationship associated with scarce resources may take advantage of existing corpora; in other words, how completing a small training corpus with a larger one may help the RE task when the latter corpus is annotated with a different type of relationships. For this purpose, we selected (i) two small biomedical corpora, SNPPhenA and the EU-ADR corpus, and (ii) three larger corpora, the SemEval 2013 DDI corpus, the ADE corpus and the reACE corpus. These corpora are publicly available. Table 4.2 summarizes the main characteristics of these five corpora, and the following sections detail them.

4.1 SMALL CORPORA

• SNPPhenA (Bokharaeian et al., 2017) is a corpus of abstracts of biomedical publications, obtained from PubMed (https://www.ncbi.nlm.nih.gov/pubmed/), annotated with two types of entities: single nucleotide polymorphisms (SNPs) and phenotypes. Relationships between these entities are annotated and classified into 3 categories: positive, negative and neutral relationships. The neutral relationship type is used when no relationship is mentioned in the sentence between two annotated entities.

• EU-ADR (van Mulligen et al., 2012) is a corpus of abstracts obtained from PubMed and annotated with drug, disorder and drug-target (proteins/genes or gene variants) entities. It is composed of 3 subcorpora, focusing either on target-disease, target-drug or drug-disease relationships. Each of them consists of 100 abstracts. Annotated relationships are classified into 3 categories: positive, speculative and negative associations (PA, SA and NA, respectively). In Bravo et al. (2015), performances are assessed over the TRUE class, which is composed of the classes PA, SA and NA, in contrast with the FALSE class, composed of sentences where two entities co-occur but with no relationship annotated between them.

4.2 LARGE CORPORA

• SemEval 2013 DDI (Herrero-Zazo et al., 2013) consists of texts from DrugBank and MEDLINE and is annotated with drugs. Drug mentions are categorized into several types: drug, brand, group and drug_n (i.e., active substances not approved for human use). Relationships between two drug mentions are annotated and classified into 4 categories: mechanism, effect, advice and int. int is the broader and default category for DDIs, used when no more detail can be provided.

• ADE-EXT (Adverse Drug Effect corpus, extended) (Gurulingappa et al., 2012) consists of MEDLINE case reports, annotated with drugs and conditions (e.g., diseases, signs and symptoms), along with untyped relationships between them, when one is mentioned.

• reACE (Edinburgh Regularized Automatic Content Extraction) (Hachey et al., 2012) consists of English broadcast news and newswire annotated with organization, person, fvw (facility, vehicle or weapon) and gpl (geographical, political or location) entities, along with relationships between them. Relationships are classified into five categories (general-affiliation, organisation-affiliation, part-whole, personal-social and agent-artifact).
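Closing the loop on Section 3.3, here is a minimal sketch of the scoring layer followed by the softmax, with illustrative sizes (3 relation categories, as in SNPPhenA):

```python
import torch
import torch.nn as nn

d_s, n_rel = 200, 3                          # representation size and |S|
scorer = nn.Linear(d_s, n_rel)               # s(r_s) = W^(s) r_s + b^(s)

r_s = torch.randn(d_s)                       # sentence representation (MCCNN or TreeLSTM)
probs = torch.softmax(scorer(r_s), dim=-1)   # scores interpreted as probabilities
prediction = probs.argmax().item()           # most likely relation type
```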
1. What are the strengths and weaknesses of the paper's experimental results?
2. Are there any concerns or suggestions regarding the presentation of the research findings?
3. Is there any question about the methodology used in the study, such as the use of entity type information?
4. Are there any inconsistencies in the data presented in different tables or sections of the paper?
5. Do the authors provide sufficient comparison between their approach and previous works in the field?
6. What are the minor comments or suggestions for improving the paper's quality?
Review
This is a well-written paper with sound experiments. However, the research outcome is not very surprising.
- Only macro-average F-scores are reported. Please present micro-average scores as well.
- The detailed procedure of relation extraction should be described. How do you use entity type information? (Probably, you did not use entity types.)
- Table 3: The SotA score for EU-ADR target-disease (i.e., 84.6) should be in bold face.
- Section 5.3: Your system scores in Table 3 are not consistent with the Table 2 scores.
- Page 8, "Our approach outperforms ...": the improvement is clear only for SNPPhenA and EU-ADR drug-disease.

Minor comments:
- TreeLSTM --> Tree-LSTM
- Page 7: connexion --> connection
- Page 8: four EU-ADR subtasks --> three ...
- I suggest conducting transfer learning studies in similar settings.
ICLR
Title Cross-Corpus Training with TreeLSTM for the Extraction of Biomedical Relationships from Text Abstract A bottleneck problem in machine learning-based relationship extraction (RE) algorithms, and particularly of deep learning-based ones, is the availability of training data in the form of annotated corpora. For specific domains, such as biomedicine, the long time and high expertise required for the development of manually annotated corpora explain that most of the existing one are relatively small (i.e., hundreds of sentences). Beside, larger corpora focusing on general or domain-specific relationships (such as citizenship or drug-drug interactions) have been developed. In this paper, we study how large annotated corpora developed for alternative tasks may improve the performances on biomedicine related tasks, for which few annotated resources are available. We experiment two deep learning-based models to extract relationships from biomedical texts with high performance. The first one combine locally extracted features using a Convolutional Neural Network (CNN) model, while the second exploit the syntactic structure of sentences using a Recursive Neural Network (RNN) architecture. Our experiments show that, contrary to the former, the latter benefits from a cross-corpus learning strategy to improve the performance of relationship extraction tasks. Indeed our approach leads to the best published performances for two biomedical RE tasks, and to state-of-the-art results for two other biomedical RE tasks, for which few annotated resources are available (less than 400 manually annotated sentences). This may be particularly impactful in specialized domains in which training resources are scarce, because they would benefit from the training data of other domains for which large annotated corpora does exist. 1 INTRODUCTION Relationship Extraction (RE) from text is a Natural Language Processing (NLP) task that aims at extracting automatically and summarizing in a structured form the unstructured information of texts. A relationships takes the form of a labeled link between two named entities as illustrated in Figure 1. Given two identified entities, the RE extraction task consists in predicting whether their is a relation between them and if so, the type of the relation. It can be seen as a classification task by computing a score for each possible relation type, given a sentence and two identified entities. Deep learning methods have demonstrated good ability for such tasks Zeng et al. (2014), but one of their drawbacks is that they generally require a large amount of training data, i.e., text corpora where entities and relationships between them are annotated, in order to obtain reasonable performances. The building of such a corpus for a specific task, such as those of interest in biomedicine, is time consuming and expensive because it implies complex entities (e.g., genomic variations, complex symptoms), complex relationships (which may be hypothetical, contextualized, negated, n-ary) and requires trained annotators. This explain why only few and relatively small (i.e., few hundreds of sentences) corpora are available, making these resources particularly valuable. Among these tasks, one can mention the extraction of genomic variations-phenotype relationships for which only a manually annotated corpus of 362 sentences, named SNPPhena exists (Bokharaeian et al., 2017). 
Beside, several larger corpora have been manually annotated with biomedical or general-domain relationships and made available (Hachey et al., 2012; Herrero-Zazo et al., 2013; Gurulingappa et al., 2012). Because these corpora share the same language (i.e., English) and thus a common syntax, we wonder if these resources, developed for slightly different tasks, may be reused for extracting relationships in domain with scarce resources. Several multi-task learning approaches have been proposed to improve performance for a given task using corpus developed for related tasks Collobert et al. (2011). In this paper, we investigate a cross-corpus strategy to improve performances for biomedical RE tasks for which few training data are available, using larger additional corpora developed for other specific RE tasks. This is done by jointly training deep learning-based models while sharing some of the parameters. Before or beside deep learning methods, other approaches for RE have been proposed. Cooccurrence-based methods for instance assumes that two entities mentioned frequently in the same unit of text (such as a sentence or a paragraph) are related (Garten & Altman, 2009). Rule-based methods use manually designed, or learned, rules consisting of word morphosyntactic features or sentence-level syntactic features (Fundel et al., 2007). These methods have the advantage of requiring few or no annotated data. Within machine learning methods, deep learning ones enable to model complex structures such as natural language and successfully applied to various NLP tasks. In particular, it as been successfully applied to RE by training from annotated corpora (Zeng et al., 2014) While other methods mainly depend on the quality of extracted features derived from preexisting NLP systems (e.g., POS tagger, stemmer, lemmatizer or syntactic parser), deep learning models automatically learn lexical features using continuous word vector representations, usually named word embeddings, and sentence level features using deep neural network such as Convolutional Neural Network (CNN) (LeCun et al., 1998) or Recursive Neural Networks (RNN) (Pollack, 1990).These models achieve good performances, but strongly depend on the existence of large training corpora, which make them difficult to use for tasks associated with scarce resources. In this paper investigate within four specific RE tasks, for which only few training data are available, how large annotated corpora can be used to improve performances of deep neural networks. We experiment two different deep learning approaches that have been previously used for RE. The first is a Multi-Channel CNN (MCCNN)-based model used in (Quan et al., 2016) for biomedical RE and the second is the tree-structured Long Short Term Memory (TreeLSTM) model (Tai et al., 2015), which have been adapted with success for RE (Miwa & Bansal, 2016). The main difference between these two models is the ability of the latter to exploit the syntax of the language by including a dependency tree structure in the vector representation of sentences. We conduct our experiments using two relatively small biomedical corpora, SNPPhenA and EUADR. Both contains less than 400 manually annotated sentences for each task, but note that EUADR focus on three different tasks. As supplementary data, we used three larger corpora: SemEval 2013 DDI, ADE and reACE. Details on these five corpora are provided Section 4. 
Our experiments show that contrary to the MCCNN model, the TreeLSTM model benefit from a cross-corpus learning strategy to improve the RE performances for tasks associated with scarce resources. This is done by training a model with data from two distinct corpora, one small and one large, while sharing the model parameters. In addition, our approach led to state-of-the-art performances for the four biomedical tasks associated with scarce resources. Section 2 review various deep learning methods used for RE and previous multi-task learning approaches. Section 3 details the MCCNN and TreeLSTM models we use. Section 4 describes corpora used in this study and Section 5 presents our experiments and results. We then conclude with a short discussion section. 2 RELATED WORK 2.1 DEEP LEARNING-BASE RELATION EXTRACTION Deep learning models, based on continuous word representations have been proposed to overcome the problem of sparsity inherent to NLP (Huang & Yates, 2009). In Collobert et al. (2011), the authors proposed an unified CNN architecture to tackle various NLP problems traditionally handle with statistical approaches. They obtained state-of-the-art performances for several tasks, while avoiding the hand design of task specific features. These results led to progress on NLP topics such as machine translation (Cho et al.), question-answering (Bordes et al., 2014) and RE. In particular, Zeng et al. (2014) showed that CNN models can also be applied to the task of RE. In this study, they learn a vectorial sentence representation, by applying a CNN model over word and word position embeddings. This representation is then used to feed a softmax classifier (Bishop, 2006). To improve the performance of the RE, other authors consider elements of syntax within the embedding provided to the model: Xu et al. (2015) use the path of grammatical dependencies between two entities, which is provided by a dependency parsing; Yang et al. (2016) include the relative positions of words in a dependency tree. They also take dependency based context (i.e., child and parent nodes) into account during the convolution. Beside CNN models that incorporate syntactic knowledge in their embeddings, other approaches go further by proposing neural networks which topology is adapting to the syntactic structure of the sentence. In particular, RNN have been proposed to adapt to tree structures resulting from constituency parsing (Socher et al., 2013; Legrand & Collobert, 2014). In that vein, Tai et al. (2015) introduced a TreeLSTM, a generalization of LSTM for tree-structured network topologies, which allows to process trees with arbitrary branching factors. The first model to make use of RNN for a RE task was proposed by Liu et al. (2015). The authors introduced a CNN-based model applied on the shortest dependency path, augmented with a RNNbased feature designed to model subtrees attached to the shortest path. Miwa & Bansal (2016) introduced a variant of the TreeLSTM used to compute bidirectional (bottom-up and top-down) tree representations for performing relationship classification. Their model uses different weight matrices depending on whether a node belong to the shortest path or not. In this paper, we use two deep-learning strategies to address the problem of RE. The first one is a MultiChannel Convolutional Neural Network (MCCNN) introduced in Quan et al. (2016) for biomedical RE. 
Inspired by the three-channel RGB image processing models, it consider different embedding channels (i.e., different word embeddings versions for each word), allowing to capture different aspects of input words. The second model we used is the TreeLSTM model described in Tai et al. (2015) and more specifically its Child-Sum version. This model is suitable for processing dependency trees since it handles trees with arbitrary branching factors and no order between children of a node. 2.2 MULTI-TASK LEARNING Machine learning methods and particularly deep learning ones usually require lots of annotated data in order to obtain reasonable performances. For certain tasks that does not require expert knowledge, such as the recognition of simple objects in an image, gathering lots of annotated data is relatively, easy using for instance crowd-sourcing. Some tasks, such as recognizing a relationship between complex entities that is mentioned in a biomedical scientific publication, are more complex, and the obtention of large corpora in this case can be expensive and time consuming. Several methods have been explored to deal with the lack of training data, such as bootstrapping (Jones et al., 1999), which allows accurate training from a small amount of labeled data, along with a large amount of unlabeled data; or self-training approaches McClosky et al. (2006) that artificially augment the labeled training set with examples from unlabeled datasets, using labels predicted by the model itself. Beside, several studies have focused on transferring knowledge acquired from related tasks to help perform a new related task. For instance, Fei-fei et al. (2006) proposed a Bayesian approach to perform one shot learning, (i.e., learning to categorize objects from a single example) that takes advantage of knowledge coming from previously learned categories. Multi-task Learning is a learning approach in which performances on a given task are improved using information contained in the training signals of auxiliary related tasks Caruana (1997). It is a form of inductive transfer where the auxiliary task introduce an inductive bias during training. This is usually done by training tasks in parallel while using a shared representation (Sutton et al., 2007; Ando & Zhang, 2005). In Collobert et al. (2011), the authors jointly trained a CNN on various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. They showed that sharing a portion of the network weights during training led to better performances for all the individual tasks. 3 MODELS We consider in this article MCCNN and TreeLSTM models that both compute a fixed-size vector representation for the whole sentence by composing input embeddings. A score is then computed for each possible type or relationship (e.g., negative, positive or speculative) between two identified entities. The number of possible relationship types depends on the task (see Section 4). In this section, we first introduce the embedding input layer, which in common to both approaches (i.e., MCCNN and TreeLSTM); Then, we detail how each approach composes sequences of embedding in order to compute an unique vectorial sentence representation; Finally, we present the scoring layer, which is common to both approaches. 3.1 INPUT LAYER Both models are fed with word embeddings (i.e., continuous vectors) of dimension dw, along with extra entity embeddings of size de, which are concatenated to word embeddings. 
Formally, given a sentence of N words, w1, w2, . . . , wN , each word wi ∈ W is first embedded in a dw-dimensional vector space by applying a lookup-table operation: LTW (wi) =Wwi , where the matrix W ∈ Rdw×|W | represents the parameters to be trained in this lookup layer. Each column Wwi ∈ Rdw corresponds to the vector embedding of the wi th word in our dictionary W . Three entity embeddings (coming from a simple 3-elements dictionary) enable to distinguish between words which compose either the first entity, the second entity or are not part of any entity in a sentence. They are respectively called first entity, second entity and other embeddings. Finally, word and entity embeddings are concatenated to form the input corresponding to a given word. Let’s denote xi the concatenated input corresponding to the ith word. 3.2 COMPOSITION LAYERS Both models take the embeddings as input and output a fixed-size representation rs of size ds. This section details the two models used in this study. 3.2.1 MCCNN The MCCNN models applies a variable kernel size CNN to multiple input channels of word embeddings. More formally, given an input sequence x1, . . . , xN , applying a kernel of size k to the ith window is done using the following formula: C = h( c∑ j=1 W [x i−1 2 , . . . , xi, . . . , x i+1 2 ]j + b) where [ ]j denotes the concatenation of inputs from channel j, W ∈ R(dw+de)×dh and b ∈ Rdh are the parameters , h is a pointwise non-linear function such as the hyperbolic tangent and c is the number of input channels. Inputs with indices exceeding the input boundaries ( i−12 < 1 or i+1 2 > N ) are mapped to a special padding vector (which is also learned). A fixed size representation rh ∈ Rdh is then obtain by applying a max-pooling over time: rh = maxC We denote K the number of kernel with different sizes. A sentence representation rs ∈ Rds (with ds = K ∗ dh) is finally obtained by concatenating the output corresponding to the K kernels rs = [r 1 h, . . . , r k h] , where rkh correspond to the output of the k th kernel. Figure 2 illustrates the structure of a twochannel CNN, with two kernels of size 2 and 3, on a four-words sentence. 3.2.2 TREELSTM The TreeLSTM model (Tai et al., 2015) processes the dependency tree associated with an input sentence in a bottom-up manner. This is done by recursively processing the nodes of the tree, using their child representations as input. The transition function for a node j and a set of children C(j) is given by the following set of equations: h̃t = ∑ k∈C(j) hk ij = σ(W (i)xj + U (i)h̃j + b (i)) fjk = σ(W (f)xj + U (f)hk + b (f)) oj = σ(W (o)xj + U (o)h̃j + b (o)) uj = tanh(W (u)xj + U (u)h̃j + b (u)) cj = ij uj + ∑ k∈C(j) fjk ck hj = oj tanh(cj), where σ denotes the logistic function, the element-wise multiplication, xj ∈ Rdw+de is the input for node j, hk ∈ Rdh is the hidden state of the kth child. Each TreeLSTM unit is a collection of vectors: an input gate ij , a forget gate fjk, an output gate oj , a memory cell cj and and hidden state hj . The matrices W and U and the vectors b are the weight and bias parameters to train. The TreeLSTM outputs a sentence representation rs ∈ Rds corresponding to the output state oj of the top tree node (i.e., the root node of the dependency tree that spans all the others). Figure 3 illustrates the structure of the TreeLSTM computed for a four-words sentence. 3.3 SCORING LAYER Both the MCCNN and the TreeLSTM models output an unique vector representation rs ∈ Rds that takes the entire sentence into account. 
This representation is used to feed a single layer neural network classifier, which outputs a score vector with one score for each possible type of relationships. This vector of scores is obtained using the following formula: s(rs) =W (s)rs + b (s) , whereW (s) ∈ Rds×|S| and b(s) ∈ R|S| are the trained parameters of the scorer, |S| is the number of possible relations. The scores are interpreted as probabilities using a softmax layer (Bishop, 2006). 4 DATASETS We explore how RE tasks that focus on a type of relationship associated with scarce resources may take advantage from existing corpora, in other words how completing a small training corpus with a larger one may help the RE task when the latter is annotated with a different type of relationships. For this purpose, we selected (i) two small biomedical corpora, SNPPhenA and the EU-ADR corpus and (ii) three larger corpora, the SemEval 2013 DDI corpus, the ADE corpus and the reACE corpus. These corpora are publicly available and detailed in the following section. Table 4.2 summarizes the main characteristics of these five corpora and the following section details them. 4.1 SMALL CORPORA • SNPPhenA (Bokharaeian et al., 2017) is a corpus of abstracts of biomedical publications, obtained from PubMed1, annotated with two types of entities: single nucleotide polymorphisms (SNPs) and phenotypes. Relationships between these entities are annotated and classified in 3 categories: positive, negative and neutral relationships. The neutral relationship type is used when no relationship is mentioned in the sentence between two annotated entities. • EU-ADR (van Mulligen et al., 2012) is a corpus of abstracts obtained from PubMed and annotated with drugs, disorders and drug targets (proteins/genes or gene variants) entities. It is composed of 3 subcorpora, focusing either on target-disease, target-drug or drug-disease relationships. Each of them consist of 100 abstracts. Annotated relationships are classified in 3 categories: positive, speculative and negative associations (PA, SA and NA respectively). In Bravo et al. (2015), performances are assessed over the TRUE class, which is composed of the classes PA, SA and NA, in contrast with the FALSE class composed of sentences where two entities are co-occurring, but without relationship annotated between them. 4.2 LARGE CORPORA • SemEval 2013 DDI (Herrero-Zazo et al., 2013) consists of texts from DrugBank and MEDLINE and is annotated with drugs. Drug mentions are categorized in several types: drug, brand, group and drug_n (i.e., active substances not approved for human use). Relationships between two drug mentions are annotated and classified in 4 categories: mechanism, effect, advice and int. int is the broader and default category for DDI, when no more detail can be provided. • ADE-EXT (Adverse Drug Effect corpus, extended) (Gurulingappa et al., 2012) consists of MEDLINE case reports, annotated with drug and conditions (e.g., diseases, signs and symptoms) along with untyped relationships between them, when one is mentioned. 1https://www.ncbi.nlm.nih.gov/pubmed/ • reACE (Edinburgh Regularized Automatic Content Extraction) (Hachey et al., 2012) consists of English broadcast news and newswire annotated with organization, person, fvw (facility, vehicle or weapon) and gpl (geographical, political or location) entities along with relationships between them. Relationships are classified in five categories (generalaffiliation, organisation-affiliation, part-whole, personal-social and agent-artifact). 
5 EXPERIMENTS 5.1 TRAINING AND EXPERIMENTAL SETTINGS Our models are trained by minimizing a log-likelihood function over the training data. All parameters, including weights, biases and embeddings were updated via Backpropagation for the MCCNN and Backpropagation through Structure (BPTS) (Goller & Kuchler, 1996) for the TreeLSTM. All the hyper-parameters were tuned using a 10 fold cross-validation by selecting the values leading to the best averaged performance, and fixed for the rest of the experiments. Word embeddings were pre-trained PubMed abstracts using the method described in Lebret & Collobert (2013). These abstracts correspond to all the abstracts published between January 1, 2014 and December 31, 2016, and available on Pubmed (around 3.4 million). MCCNN model. Following Kim (2014) both channels are initialized with pre-trained word embeddings but gradients were back-propagated only through one of the channels. Hyper-parameters were fixed to dw = 100, de = 10, dh = 100 and ds = 200. We applied a dropout regularization after the embedding layers. TreeLSTM model. Dependency trees were obtained using the Stanford Parser (Chen & Manning, 2014). Hyper-parameters were fixed to dw = 100, de = 10, dh = 200 and ds = 200. We applied a dropout regularization (Srivastava et al., 2014) after every TreeLSTM unit and after the embedding layers. The drop probability for each connexion was fixed to 0.25. All the parameters are initialized randomly except the word embeddings. We evaluated performances in terms of precision (P), recall (R) and f-measure (F). For multi-label classifications, we report the macro-average performance2. Because no proper test corpus is provided with EU-ADR, we performed a 10 fold cross-validation using 10% of the corpus for the validation and 10% for the test of our models. For SNPPhenA, we performed a cross-validation using 10% of the corpus for the validation and the provided test corpus for testing. 5.2 CROSS-CORPUS STUDY In this subsection, we present our cross-corpus training strategy and its results. For each fold of our cross-corpus experiments, the same network, initialized with random weight, is used for the different corpora (i.e., same embedding layer and TreeLSTM weights), except for the scorer, which is 2The macro-average score is less impacted by the performance for classes whith very few test samples (and thus a high variance). For that reason, this score is more representatative of the performance of our model. different for each corpus as the number and types of relationships may change. During the training phase, we randomly pick training sentences from the mixed corpora. Table 2 presents the results of the cross-corpus study. For each of the 10 folds, we performed 10 experiments starting from different random weight initializations. Thus, each result is an average of 100 experiments. We observe that for the TreeLSTM model, additional data consistently improved the performances. More interestingly, this phenomenon occurred even for corpora with different types of entities such as the combination of SNPPhenA and SemEval 2013 DDI and, to a lesser extend, for a corpus outside of the biomedical domain (reACE). This phenomenon was not observed for the MCCNN model for which performance tended to decrease slightly when using the cross-corpus learning strategy. 
5.3 COMPARISON WITH THE STATE OF THE ART Table 3 presents a comparison of the performances obtained with our approach versus two state-of-the-art systems applied to the RE tasks associated with SNPPhenA and EU-ADR, reported in Bokharaeian et al. (2017) and Bravo et al. (2015), respectively. Our results are obtained using, for each fold, an ensemble of the 5 best models (according to the validation) starting from different random initializations. The ensembling was done by averaging the scores s(r_s) of each individual model, following Legrand & Collobert (2014). We report the 10-fold average performance. Both state-of-the-art systems use a combination of a shallow linguistic kernel with a kernel that exploits deep syntactic features. Our approach outperforms the performances reported for SNPPhenA and for one of the EU-ADR subtasks, and leads to similar performances for the two remaining EU-ADR subtasks. 6 DISCUSSION Results presented in Table 2 show that, in our settings, the TreeLSTM model benefits from a cross-corpus learning strategy, while it is useless, or sometimes counterproductive, for the MCCNN model. One may think that the TreeLSTM model, due to its ability to exploit the syntactic structure of the sentence, is better at understanding the sentences from the small datasets by exploiting the syntactic patterns observed in the additional data. This idea is reinforced by the fact that even a corpus that shares neither the same entities nor a close vocabulary, such as reACE, in which no biomedical vocabulary appears, can be helpful for biomedical RE. This hypothesis could be explored in further work. Surprisingly, the best results were consistently obtained using the SemEval 2013 DDI corpus as additional data, even for RE tasks that do not involve drugs, like EU-ADR target-disease. Likewise, one might have thought that the ADE-EXT corpus would have been more suitable for the EU-ADR drug-disease corpus, since it shares common entities. Several ideas should be explored to better understand this phenomenon, such as the differences in relation and entity types between the different corpora, as well as the differences in the types of source texts (e.g., medical case reports for ADE-EXT, news for reACE, research articles for the others). Higher-level syntactic analysis (such as the average distance between the two entities or the nature of the lowest common ancestor in the dependency graph) could provide insights on this question, and help in characterizing the right corpus to select for a cross-corpus training. For the TreeLSTM model, we also tried to train models with multiple additional corpora but did not obtain better performances. For each of the 4 RE tasks studied, the results were consistently on par with the performances obtained using only the additional corpus leading to the worst cross-corpus performances. Further work should be done to better understand this phenomenon. Finally, it would be interesting to enrich our model with additional features such as POS or morphosyntactic ones. More sophisticated TreeLSTM models, taking the dependency tags into account in addition to the dependency structure, would also be worth exploring. 7 CONCLUSION In this paper, we empirically demonstrated that a cross-corpus learning strategy can be beneficial to tackle biomedical RE tasks for which few annotated resources are available, when using the TreeLSTM model.
Interestingly, we showed that any additional corpus, even one focusing on an unrelated domain, can carry useful information and lead to improved performances. Additionally, the cross-corpus approach led to the best published results for 2 biomedical RE tasks, focusing on SNP-phenotype and drug-disease relationships, and to state-of-the-art results for two others, focusing on target-disease and target-drug relationships. We think that cross-corpus training could be reproduced, and would thus be valuable, in other specialized domains in which training resources are scarce.
1. What are the major issues with the paper's writing and formatting? 2. How does the proposed method handle instances with more than two entities? 3. What makes the paper's approach novel, despite using existing deep learning models? 4. Are there any relevant references or citations missing from the paper?
Review
Review This paper proposes to use cross-corpus training for biomedical relationship extraction from text. - Many wording issues, like citation formats, grammar mistakes, missing words, e.g., page 2: "it as been" - The description of the methods should be improved. For instance, why does the input have only two entities? In many biomedical sentences, there are more than two entities. How can the proposed two models handle these cases? - The paper just proposes to train on a larger labeled corpus and test on a task with a smaller labeled set. Why is this novel? Nothing is novel in the deep models (CNN and TreeLSTM). - Missing refs, like: A simple neural network module for relational reasoning, arXiv 2017
ICLR
Title Linear algebra with transformers Abstract Most applications of transformers to mathematics, from integration to theorem proving, focus on symbolic computation. In this paper, we show that transformers can be trained to perform numerical calculations with high accuracy. We consider problems of linear algebra: matrix transposition, addition, multiplication, eigenvalues and vectors, singular value decomposition, and inversion. Training small transformers (up to six layers) over datasets of random matrices, we achieve high accuracies (over 90%) on all problems. We also show that trained models can generalize out of their training distribution, and that out-of-domain accuracy can be greatly improved by working from more diverse datasets (in particular, by training on matrices with non-independent-and-identically-distributed coefficients). Finally, we show that few-shot learning can be leveraged to re-train models to solve larger problems. 1 INTRODUCTION Since their introduction by Vaswani et al. (2017), transformers, originally designed for machine translation, were applied to various problems, from text generation (Radford et al., 2018; 2019) to image processing (Carion et al., 2020) and speech recognition (Dong et al., 2018), where they soon achieved state-of-the-art performance (Dosovitskiy et al., 2021; Wang et al., 2020b). In mathematics, transformers were used for symbolic integration (Lample & Charton, 2019), theorem proving (Polu & Sutskever, 2020), formal logic (Hahn et al., 2021), SAT solving (Shi et al., 2021), symbolic regression (Biggio et al., 2021) and dynamical systems (Charton et al., 2020). All these problems pertain to symbolic mathematics, or involve a large amount of symbolic computation. When working on these tasks, transformers manipulate mathematical symbols, just like words in natural language. But mathematics is not limited to symbol manipulation: many practical applications involve numerical calculations, either exact (e.g. arithmetic) or approximate (e.g. function evaluation, numerical solutions of equations). The use of transformers for numerical computation has been less studied, and many early experiments with arithmetic have proved disappointing (Nogueira et al., 2021). This is, nevertheless, an important question: most problems in mathematics and science involve both symbolic and numerical computations. If we want transformers to solve these problems end-to-end, they need to be able to perform numerical calculations with high accuracy. In this paper, we train transformers to compute solutions of problems of linear algebra, which serve as fundamental building blocks in many scientific problems: basic operations on matrices, matrix inversion, eigenvalue and singular value decompositions. We introduce and discuss four encodings to represent problems and solutions as sequences that transformers can process, and train small transformers (up to 6 layers, 10 to 50 million trainable parameters) over generated datasets of random matrices. Trained models can compute approximate solutions to these problems (to within a few percent of their L1 norm) with over 90% accuracy (99% in most cases). We also show that they can generalize out of their training distribution, and be retrained to extrapolate to larger problems than the ones they were trained on. We believe these results pave the way for using transformers as end-to-end solvers for problems of mathematics and science.
After introducing the problems of linear algebra we are studying and presenting the encodings we use to represent them as sequences that can be used by our models, we discuss data generation, architecture and experimental settings. Then, we present our experiments on nine different problems, and discuss out-of-distribution generalization and few-shot learning for eigenvalue computation. Finally, we discuss our results and future directions for research, and present related works. 2 PROBLEMS AND DATASETS Let M and N be m×n matrices and V ∈ R^m. We study nine problems of linear algebra: • matrix transposition: find M^T, an n×m matrix, • matrix addition: find M + N, an m×n matrix, • matrix-vector multiplication: find M^T V, a vector in R^n, • matrix multiplication: find M^T N, an n×n matrix, • eigenvalues: M symmetric, find its n (real) eigenvalues, sorted in descending order, • eigenvectors: M symmetric, find D diagonal and Q orthogonal such that M = Q^T DQ, set as an (n+1)×n matrix, with (sorted) eigenvalues in its first line, • singular values: find the n eigenvalues of M^T M, sorted in descending order, • singular value decomposition: find orthogonal U, V and diagonal S such that M = USV, set as an (m+n+1)×min(m,n) matrix, • inversion: M square and invertible, find its inverse P, such that MP = PM = Id. These problems range from basic operations on individual coefficients of the input matrices (transposition and addition), to computations involving several arithmetic operations over many coefficients (multiplication), and complex nonlinear transformations involving the whole matrix, with cubic complexity (decompositions and inversion). For each problem, we generate datasets of pairs of matrices (I, O), by sampling random input matrices I (see section 2.2), and computing the output O with a linear algebra package (NumPy linalg). When a problem has several input or output matrices, they are concatenated into one (for instance, the two m×n operands of the addition task are concatenated into one m×2n matrix I). All coefficients in I and O are set in base ten floating-point representation, and rounded to three significant digits in the mantissa. 2.1 ENCODING MATRICES AS SEQUENCES The input and output to our problems are matrices. To be processed by transformers, they need to be converted into sequences of tokens. We encode an m×n matrix by first coding its dimensions as two symbolic tokens (Vm and Vn), followed by its mn coefficients, encoded as sequences. Throughout this paper, we will use four encoding schemes for matrix coefficients: P10, P1000, B1999, and FP15. In base 10 positional encoding (P10), a number is represented as a sequence of five tokens: one sign token (+ or -), 3 digits (from 0 to 9) for the mantissa, and a symbolic token (from E-100 to E+100) for the exponent. For instance, 3.14 will be represented as 314·10⁻², and encoded as [+, 3, 1, 4, E-2]. P1000 (positional base 1000) provides a more compact representation by encoding the mantissa as a single token (from 0 to 999), and representing a number as the triplet (sign, mantissa, exponent). B1999 (balanced base 1999) pushes this one step further by encoding together the sign and mantissa (from -999 to 999). Finally, FP15 encodes each floating-point number x = m·10^b as a unique token FPm/b. Table 1 provides examples of these encodings. Additional details and examples can be found in Appendix A. Selecting an encoding is a trade-off. Long encodings (P10, P1000) embed knowledge about numbers that the model can leverage (e.g.
numbers can be compared using their signs and exponents, addition and multiplication can be learned by memorizing small tables). Compact encodings use a larger vocabulary (harder to learn), but generate shorter sequences that facilitate training with transformers. In P10, a 20×20 matrix is a sequence of 2002 tokens, close to the practical limit of transformer implementations that use a quadratic attention mechanism. In FP15, it is only 402 tokens long. 2.2 RANDOM MATRIX GENERATION In most of our experiments, we train models over datasets of random matrices with uniformly distributed coefficients in [−A, A] (with A = 10). Occasionally, we sample gaussian coefficients with the same standard deviation (σ = A/√3). In the symmetric case, these matrices are known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (Mehta, 2004). As n increases, this distribution converges to the semi-circle law (p(λ) = √(4σ² − λ²)/(2πσ²)) for all coefficient distributions with bounded variance. If the coefficients are gaussian, the associated eigenvectors are uniformly distributed over the unit sphere. When investigating out-of-distribution generalization for the eigenvalue problem, we will need to generate random symmetric matrices with different distributions of their eigenvalues (corresponding to random matrices with non-iid coefficients). To this effect, we randomly sample symmetric matrices M, with gaussian coefficients, and compute their eigenvalue decomposition M = PDP^T, with P the orthogonal matrix of eigenvectors (uniformly distributed over the unit sphere since the coefficients are gaussian). We then replace D, the diagonal matrix of eigenvalues of M, with a diagonal D′ sampled from another distribution. Finally, we recompute M′ = PD′P^T, a symmetric matrix (because P is orthogonal) with eigenvalues distributed as we choose, and eigenvectors uniformly distributed over the unit sphere. 3 MODELS AND EXPERIMENTAL SETTINGS We use the standard transformer architecture introduced in Vaswani et al. (2017), with an encoder and a decoder connected by a cross-attention mechanism. Most of our models have 512 dimensions, 8 attention heads and up to 6 layers. We experiment with different numbers of layers and attention heads in the encoder and decoder. All training is supervised, and minimizes the cross-entropy between model predictions and the correct solution. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10⁻⁴, an initial warm-up phase of 10,000 steps and cosine scheduling (Loshchilov & Hutter, 2016). All training data is generated on the fly, in batches of 64 examples. Every 300,000 examples, 10,000 random problems are generated and used to evaluate the model. When evaluating, we consider that a predicted sequence seqP is a correct solution to the problem (I, O) (I and O the input and output matrices) if it can be decoded as a valid matrix P (several matrices for singular and eigen decomposition) that approximates the correct solution to a given tolerance τ (τ ∈ {5, 2, 1, 0.5%}). For addition, transposition, multiplication, eigen and singular values, we check that P verifies ‖P − O‖ < τ‖O‖ (with ‖A‖ = Σi,j |ai,j| for A = (ai,j), i.e. the L1 norm). For eigenvalue decomposition, we check that the solution (Q, D) predicted by the model can reconstruct the input matrix, i.e. ‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖.
For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. The choice of the L1 norm is important: norms like L2 and L∞ will favor models that correctly predict the largest coefficients in the solution. For eigen and singular value problems, this amounts to predicting the largest values, an easier problem than the one we want to solve. We consider different tolerances for different problems. Since we round numbers to three significant digits, 0.5% is the best we can hope for. In fact, a number x with mantissa 1.00 is subject to a maximal rounding error of 0.5% (x ∈ ]0.995, 1.005]), which may accumulate when several (rounded) numbers are summed, and increase again when nonlinear operations are considered. When discussing results, we consider tolerances of 0% for transposition, which involves no arithmetic, 1% for basic matrix operations (addition, multiplication), and 2 or 5% for nonlinear operations (decomposition, inversion), but we usually provide results for all tolerance levels. Most of our experiments focus on 5×5 square matrices, or rectangular matrices with as many coefficients (e.g. 6×4, 2×13). This helps when comparing encodings: for larger dimensions, varying sequence lengths make comparisons difficult. We also study scaled-up versions of the problems (from 8×8 to 15×15), and datasets with matrices of variable dimensions (5-10 or 5-15). In this paper, we limit ourselves to problems that can be solved using small models (with up to 6 layers). Scaling to larger problems, and leveraging deeper architectures, is left for future research. 4 EXPERIMENTS AND RESULTS 4.1 TRANSPOSITION Learning to transpose a matrix amounts to learning a permutation of its elements. For a square matrix, the permutation is composed of cycles of two elements. Permutations for rectangular matrices involve longer cycles. This task involves no arithmetic operations: tokens from the input sequence are merely copied to the output, in different positions. We investigate two formulations of this problem: a fixed-size case, where all matrices in the dataset have the same dimension and only one permutation is to be learned, and a variable-size case, where the dataset includes matrices of different dimensions, with as many permutations to learn. We train transformers with one layer, 256 dimensions and 8 attention heads in the encoder and decoder, over datasets using our four encoding schemes. All models learn to predict the exact solution (with 0% tolerance) in more than 99% of test cases, for fixed-size matrices, square or rectangular, with dimensions up to 30×30. This holds for all encodings, and for input or output sequences up to 2000 tokens long. Similar accuracies are achieved for variable-size datasets (over 99% for 5-15 and 96% for 5-20), with the rectangular cases proving slightly more difficult to train. Table 2 summarizes our results. 4.2 ADDITION Learning to add two m×n matrices amounts to learning the correspondence between the positions of input and output (as in the transposition task) and the algorithm for adding two numbers in floating-point representation, which will be performed on mn pairs of elements. We train transformers with 1 or 2 layers, 8 attention heads and 512 dimensions. Sums of fixed-size matrices with dimensions up to 10, both square and rectangular, are predicted with over 99% accuracy within 1% tolerance (and over 98% within 0.5%), with all four encodings.
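Since results throughout this section are reported per encoding, a minimal sketch of the P10 and P1000 schemes of Section 2.1 may help fix ideas (our own illustration of the conventions described there, not the released code; exact token spellings such as 'E-2' are assumptions):

```python
# Minimal sketch of the P10 / P1000 encodings: a number is rounded to a
# three-significant-digit mantissa in [100, 999] and an exponent, then
# tokenized as [sign, mantissa digits..., exponent token].
import math

def encode(x: float, base: int = 10):
    if x == 0:
        sign, n, e = "+", 100, -3          # 0 encoded as +0.100 by convention
    else:
        sign = "+" if x > 0 else "-"
        e = math.floor(math.log10(abs(x))) - 2
        n = round(abs(x) / 10 ** e)        # mantissa, 100 <= n <= 999
        if n == 1000:                      # rounding overflow, e.g. 999.6
            n, e = 100, e + 1
    if base == 10:                         # P10: one token per digit
        digits = list(str(n))
    else:                                  # P1000: the mantissa is one token
        digits = [str(n)]
    return [sign] + digits + [f"E{e}"]

print(encode(3.14))          # ['+', '3', '1', '4', 'E-2']
print(encode(3.14, 1000))    # ['+', '314', 'E-2']
```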
As dimensions increase, models using the P10 and P1000 encodings become more difficult to train as input sequences grow longer: adding two 15×15 matrices involves 450 input coefficients: a sequence of 1352 tokens in P1000 and 2252 in P10. Nevertheless, FP15 models achieve 99.5% accuracy within 0.5% tolerance for 15×15 matrices, and B1999 models 89.7% accuracy within 1% tolerance on 20×20 matrices. Variable-size matrices with dimensions up to 10 are predicted by 2-layer transformers using the B1999 encoding with over 99.5% accuracy within 1% tolerance. Over matrices with larger dimensions (5-15), shallow models with 1 or 2 layers struggle, and their accuracy drops to 48 and 37% in the square and rectangular case. This can be mitigated by using deeper decoders: models with one layer in the encoder and 6 in the decoder achieve 77 and 87% accuracy on the same datasets. Table 3 summarizes our results. 4.3 MULTIPLICATION Multiplication of a matrix M of dimension m×n by a vector V ∈ R^n amounts to computing the m dot products between V and the rows of M. Each calculation features n multiplications and n−1 additions, and involves one row in the matrix and all coefficients in the vector. The model must now learn the position of the 2n elements in the computation, and two operations (add and multiply). Experimenting with models with 1 or 2 layers, over 5×5 matrices, we observe that only models using the P10 and P1000 encodings can be trained to high accuracy. The P1000 encoding performs best, with little difference between two- and one-layer models. Accuracies over 99.9%, at 1% tolerance, are achieved by 2-layer transformers using the P1000 encoding for 5×5 and 10×10 square matrices. Comparable accuracies are achieved when multiplying rectangular matrices by vectors with the same overall number of coefficients (30). Experiments with datasets of matrices with variable size (from 5 to 10) achieve non-trivial performance (from 48% with 1% tolerance, to 72% with 5% tolerance, for square matrices). Results are summarized in table 4. Multiplication of matrices M and P is a scaled-up version of the matrix-vector multiplication, which is now performed for every column in matrix P. As previously, only models using the P10 and P1000 encodings can be trained to predict to high accuracy. Over 5×5 matrices and rectangular matrices of similar size, trained model accuracy is the same as for vector multiplication (over 99% at 1% tolerance, see table 5), but deeper decoders (with 4 to 6 layers) are needed. 4.4 EIGENVALUES We now turn to non-linear problems that are usually solved by iterative algorithms. We train models with 4 or 6 layers in the encoder or the decoder to predict the eigenvalues of symmetric matrices. Over samples of 5×5 random matrices, we reach 100% accuracy at 5% tolerance, and 98.5% at 1%, for all four encodings. For 8×8 matrices, we achieve accuracies of 100 and 85% at 5 and 1% tolerance. Larger problems, however, prove difficult to learn: on 10×10 matrices, 25% accuracy at 5% tolerance is reached after 360 million examples. As a comparison, 5×5 models train to maximum accuracy in about 40 million examples, and 8×8 models in about 60 million. We can overcome this limitation by training models on variable-size datasets.
On samples of matrices with 5-10, 5-15 and 5-20 dimensions, we achieve 100% accuracy at 5% tolerance, and 88, 94 and 45% at 1%. Using the 5-15 model, the eigenvalues of 10×10 matrices can be predicted with 100% accuracy at 2% tolerance, and 73% at 1%. Table 6 summarizes our results. 4.5 EIGENVECTORS This is an expanded version of the previous task: together with the eigenvalues, we predict the orthogonal matrix of eigenvectors. Over 5×5 matrices, models using the P10 and P1000 encodings achieve 97.0 and 94.0% accuracy with 5% tolerance. FP15 models fare less well, with an accuracy of 51.6%, but asymmetric models, with a 6-layer FP15 encoder and a 1-layer P1000 decoder, achieve 93.5% accuracy at 5% and 67.5% at 1% tolerance. The eigenvectors of 6×6 matrices can be predicted by P1000 models with an accuracy of 81.5%. Table 7 summarizes our results. 4.6 INVERSION Inversion of 5×5 matrices proves more difficult than previous tasks, with accuracies of 73.6% for P10 models, and 80.4% for P1000 models (5% tolerance, 6-layer encoders and 1-layer decoders). Increasing the number of attention heads to 10 and 12 brings little improvement in accuracy, but allows for faster training: 8-head models are trained to 75% accuracy in about 250 million examples, 10- and 12-head models in only 120. The highest accuracies (90.0%) are achieved by asymmetric models: a 6-layer FP15 encoder with 12 attention heads, and a 1-layer P1000 decoder with 8 heads. 4.7 SINGULAR VALUE DECOMPOSITION Whereas this task is related to eigen decomposition (the singular values of a symmetric matrix are the absolute values of its eigenvalues), it proves more difficult to learn: transformers with up to 6 layers, using the P10 or P1000 encoding, can predict the singular decomposition of 4×4, but not 5×5, matrices. Accuracies remain high: 100 and 86.7% for singular values (5 and 1% tolerance), and 98.9 and 75.3% for the full decomposition. 5 OUT-OF-DOMAIN GENERALIZATION AND RETRAINING In this section, we focus on the prediction of eigenvalues of symmetric matrices. To train our models, we generate random n×n matrices with independent and identically distributed (iid) coefficients, sampled from a uniform distribution over [−A, A]. They belong to a common class of random matrices, known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (s = A/√3 when uniform). As n increases, this distribution converges to the semi-circle law (Mehta, 2004). Whereas Wigner matrices are very common in science, random matrices with different eigenvalue distributions (and non-iid coefficients) appear in many practical cases. For instance, statistical covariance matrices have all their eigenvalues positive, and the adjacency matrices of scale-free and other non-Erdos-Renyi graphs have centered but non-semi-circle distributions of eigenvalues (Preciado & Rahimian, 2017). It is, therefore, important to understand how models trained on Wigner matrices perform on matrices with different distributions of eigenvalues. To this effect, we create test sets of 10,000 matrices with different distributions than the training set. First, we generate matrices with uniform iid coefficients (as in the training set), but different standard deviations: σtest ∈ [0.1σtrain, 1.5σtrain]. Over these test sets, our trained models achieve over 96% accuracy (with 2% tolerance) for σtest ∈ [0.6σtrain, σtrain].
However, model accuracy drops when σtest is out of this range: 54% for 0.4σtrain, 0% for 0.2σtrain, 26% for 1.1σtrain and 2% for 1.3σtrain. Out-of-distribution generalization only takes place when the test set variance is lower than, and not too far from (over 25% of), the training set variance. Then, we generate test sets of matrices with different eigenvalue distributions: positive eigenvalues (Wigner matrices with eigenvalues replaced by their absolute values), and eigenvalue distributions following the uniform, gaussian or Laplace law (see section 2.2), with standard deviation σtrain and 0.6σtrain. Over test sets with σtest = σtrain, accuracies are 26% for Laplace, 25% for gaussian, 19% for uniform, and 0% for positive. Results are slightly better for test sets with lower standard deviation (0.6σtrain): 28, 44, 60 and 0% for Laplace, gaussian, uniform and positive, but out-of-distribution accuracies are low, and matrices with positive eigenvalues cannot be predicted at all. To improve out-of-distribution accuracy, we train new models on datasets with different distributions of eigenvalues, and evaluate them on the test sets previously created. First, we generate matrices with uniform coefficients but variable standard deviation (by randomly selecting A ∈ [1, 100] for each matrix). Unsurprisingly, models trained on this dataset achieve high accuracies on test sets of Wigner matrices with high or low variance. Performances also increase over the gaussian, uniform and Laplace-distributed test sets (from 25-60% to 53-68%). Yet, matrices with positive eigenvalues cannot be predicted. Training models over a mixture of (Wigner) matrices with uniform iid coefficients and matrices with positive eigenvalues results in better prediction of positive eigenvalues, but degrades performances over all other test sets. However, models trained on a mixture of matrices with uniform coefficients and matrices with gaussian eigenvalues, or uniform iid and Laplace eigenvalues, achieve high accuracies over all test sets, as do models trained on matrices with Laplace eigenvalues only, or a mixture of uniform, gaussian and Laplace eigenvalues (all non-Wigner matrices). These experiments are presented in table 10. This is an important result: it suggests that Wigner matrices, often considered as the default model for random matrices, might not be the best choice for training transformers. Models trained on non-Wigner matrices (non-iid coefficients, limit distribution of eigenvalues not a semi-circle) generalize to matrices with iid coefficients, whereas the reverse is not true. This confirms that out-of-distribution generalization requires that particular attention is paid to training data generation. Models trained on matrices of a given size do not generalize to different dimensions, but they can be retrained over samples of matrices of different sizes. This takes comparatively few examples: a 5×5 model, which takes 40 million examples to be trained, can learn to predict with high accuracy the eigenvalues of matrices of dimension 6 and 7 with about 25 million additional examples. Table 11 presents those results. The capacity of pre-trained large transformers (such as GPT-3) to perform few-shot learning is well documented, but it is interesting to observe the same phenomenon in smaller models. 6 DISCUSSION AND FUTURE DIRECTIONS Our experiments demonstrate that transformers can be trained to solve problems of linear algebra, using randomly generated datasets.
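The randomly generated datasets mentioned above rely on the eigenvalue-substitution trick of Section 2.2; a minimal NumPy sketch follows (our own illustration, with hypothetical function names):

```python
# Minimal sketch of the generation trick from Section 2.2: sample a
# symmetric gaussian matrix, then substitute its spectrum with eigenvalues
# drawn from a chosen law, keeping the (uniformly distributed) eigenvectors.
import numpy as np

def matrix_with_spectrum(n, eigval_sampler, rng):
    A = rng.standard_normal((n, n))
    M = (A + A.T) / 2                   # symmetric, gaussian coefficients
    _, P = np.linalg.eigh(M)            # orthogonal matrix of eigenvectors
    d = eigval_sampler(n)               # eigenvalues from the chosen law
    return P @ np.diag(d) @ P.T         # M' = P D' P^T, still symmetric

# Example: a 5x5 matrix with Laplace-distributed eigenvalues
rng = np.random.default_rng(0)
M = matrix_with_spectrum(5, lambda n: rng.laplace(scale=5.0, size=n), rng)
np.testing.assert_allclose(M, M.T)      # symmetric by construction
```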
However, their accuracy depends on the encodings used to represent matrix coefficients. We introduce four encoding schemes, and our experiments suggest that P10 is generally dominated by P1000, which is also more economical, and that B1999 never really finds its use, as FP15 is more compact and P1000 more efficient. P1000 seems to be a good choice for problems of moderate size, and FP15 for longer sequences. For advanced problems like eigenvectors and inversion, asymmetric architectures, with a deep FP15 encoder and a shallow P1000 decoder, achieve the best performances. Our interpretation is that P1000 in the decoder facilitates training, because the meaningful representation of the output as (sign, mantissa, exponent) triplets allows for better error feedback during training. On the other hand, a deep FP15 encoder can provide more complex representations of the input matrix, while being easier to train thanks to the shorter sequences. Such asymmetric architectures also benefit from more attention heads (10 to 12) in the encoder, while fewer heads (4) in the decoder improve training stability at no cost in accuracy. Those asymmetric architectures deserve further study. Most of our experiments focus on matrices with 5 to 10 rows and columns. Our results on the eigenvalue problem suggest that larger problems can be solved by training over matrices of variable size, or by retraining over larger matrices. In this work, matrices of different dimensions are sampled in equal proportion and presented for training in random order. Varying their proportions and scheduling (i.e. curriculum learning) should result in better performance. Yet, as dimension increases, sequence lengths will reach the practical limits of quadratic attention mechanisms. Experimenting with transformers with linear or log-linear attention (Zaheer et al., 2021; Wang et al., 2020a; Vyas et al., 2020; Child et al., 2019) is a natural extension of our work. In terms of asymptotic complexity, matrix inversion (and the other non-linear tasks) is usually handled by O(n³) algorithms (although O(n^2.37) methods are known). Since our sequence length is O(n²), transformers with quadratic attention mechanisms are O(n⁴). Linear attention would reduce this to O(n²). The out-of-distribution experiments are our most significant results. They prove that models trained on random data can generalize to a wide range of test distributions. They also confirm the importance of wisely selecting training data distributions, a process that can be counter-intuitive. In our specific case, the “obvious” random model (Wigner matrices) is not the best for out-of-domain generalization. In fact, we show that sets of “special” matrices (non-iid coefficients with Laplace eigenvalues) can produce models with a better capability for generalization, notably on Wigner matrices. This matches the intuitive idea that we learn more from edge cases than from averages. 7 RELATED WORK Algorithms using neural networks to compute eigenvalues and eigenvectors have been proposed since the early 1990s (Samardzija & Waterland, 1991; Cichocki & Unbehauen, 1992; Yi et al., 2004; Tang & Li, 2010; Oja, 1992), and improvements to the original techniques have been suggested until recently (Finol et al., 2019). Similar approaches have been proposed for other problems in linear algebra (Wang, 1993a;b; Zhang et al., 2008).
All these methods leverage the Universal Approximation Theorem (Cybenko, 1989; Hornik, 1991), which states that, under weak conditions on their activation functions, neural networks can approximate any continuous mapping (in our case, the mapping between the coefficients of a matrix and their associated eigenvalues and vectors). These approaches rely on the fact that eigenvalues and vectors appear in the solutions of particular differential equations involving the matrix coefficients (e.g. Brockett (1991)). By designing a neural network that represents this differential equation, with the matrix to decompose as the input, and the coefficients in the output layer as the solution, and by defining a loss function that measures how well the output layer approximates the correct solution, the network can be trained to find better and better approximations to the solution. These techniques have two main limitations: they rely on a problem-specific network architecture that has to be hand-coded, and computation is done at train time, which makes them slow and implies retraining the network every time a new matrix is to be processed. In comparison, the techniques proposed in this paper are trained once, and can compute solutions at inference for any matrix input. Techniques have been proposed to train neural networks to compute basic mathematical operations, and use them as building blocks for larger components. Kaiser & Sutskever (2015) introduced the Neural GPU, which could learn addition and multiplication over binary representations of integers. Trask et al. (2018) proposed Neural Arithmetic Logic Units (NALU), which can learn to perform exact additions, subtractions, multiplications and divisions by constraining the weights of a linear network to remain close to 0, 1 or -1. Both Neural GPU and NALU have been shown to be able to extrapolate to numbers far larger than those they were trained on. For matrix multiplication, Blalock & Guttag (2021) use learning techniques to improve on known approximate techniques. Use of transformers in mathematics has mostly focused on symbolic computation. Lample & Charton (2019) showed that transformers could be trained to integrate functions and solve ordinary differential equations and, in a follow-up work (Charton et al., 2020), predict properties of differential systems. Transformers have also been applied to formal systems, in theorem proving (Polu & Sutskever, 2020) and temporal logic (Hahn et al., 2021). The use of sequence-to-sequence models for arithmetic and the exact solution of mathematical problems was studied by Saxton et al. (2019). In a recent paper, Nogueira et al. (2021) point to the limitations of experiments on arithmetic. 8 CONCLUSION We have shown that transformers can be trained over generated data to solve problems of linear algebra with high accuracy, and that careful selection of the generative model for their training data can allow them to generalize out of their training distribution. This demonstrates that applications of transformers to mathematics are not limited to symbolic calculation, and can cover a wide range of scientific problems, featuring numerical computations. We believe our results pave the way for wider applicability of transformers in science. Reproducibility statement The transformer implementation and the framework for running the experiments were used in several prior works, and rely on standard libraries (PyTorch for the models, NumPy for mathematical calculations).
The model source code, data generation code, and parameters for experiments will be open-sourced and made publicly available. All experiments were run several times (on average 10 times), with multiple random seeds and light modifications of the hyperparameters (e.g. small changes in model size, weight initialization, activation functions), to guarantee their robustness. Ethics statement Given the subject of the paper, and the fact that all data used are randomly generated, we believe that no potential ethical concerns are raised by this research. A NUMBER ENCODINGS Let x be a non-zero real number; it can be represented uniquely as x = s·m·10^e, with s ∈ {−1, 1}, m ∈ [100, 1000[, e ∈ Z. Rounding m to the nearest integer n, we get the base ten, floating-point representation of x, with three significant digits: x ≈ s·n·10^e, (s, n, e) ∈ Z³. By convention, 0 is encoded as +0.100. All our encodings are possible representations of the triplets (s, n, e). In this paper, we limit e to the range [−100, 100], and n to the range [100, 999]. In base N positional encoding, we encode s (the sign) and e (the exponent) as unique tokens: + or - for s, and a token from E-100 to E100 for e. The mantissa, n, is encoded as the representation of n in base N (e.g. binary representation if N = 2, decimal representation if N = 10), a sequence of ⌈log_N(1000)⌉ tokens from 0 to N-1. Overall, a number will be encoded as a sequence of ⌈log_N(1000)⌉ + 2 tokens, from a vocabulary of 202 + N tokens. For instance, x = e^π ≈ 23.14069 will be represented by +231·10⁻¹, and encoded in P10 (base 10 positional) as the sequence [+,2,3,1,E-1], and in P1000 (base 1000 positional) as [+,231,E-1]. x = −0.5 will be represented as −500·10⁻³, and encoded in P10 as [-,5,0,0,E-3], and in P1000 as [-,500,E-3]. Other bases N could be considered, as well as different bases for the exponent, and different sizes of the mantissa. In this paper, we use P10 to encode numbers with absolute value in [10⁻¹⁰⁰, 10¹⁰¹] as sequences of 5 tokens, using a vocabulary of 213 tokens (10 digits, 2 signs, and 201 values of the exponent), and P1000 as sequences of 3 tokens, with a vocabulary of 1104. Balanced base 2a + 1 uses digits between −a and a (Knuth, 1997). For instance, in balanced base 11, digits range from −5 to 5 (an everyday example of a balanced base can be found in the way we state the hour, as "twenty to two" or "twenty past two"). Setting a to 999, we define B1999, and encode the sign and mantissa as a single token between −999 and 999. Numbers are then encoded on two tokens with a vocabulary of 2004. Finally, we encode floating-point numbers as unique tokens by rewriting any number as x = m·10^b, with m ∈ [−999, 999], b ∈ [−(p + 2)/2, (p + 2)/2] (p + 2 even), and encoding it as the unique token FPm/b. This allows us to represent numbers with 3 significant digits and a dynamic range of 10^(p+2), using a vocabulary of 1.8(p + 2)·10³ tokens. In this paper, we use p = 16: encoding numbers as unique tokens, with a vocabulary of 30,000 (FP15). B L1, L2 AND L∞ NORMS FOR EVALUATION We evaluate the accuracy of our trained models by decoding model predictions and verifying that they approximate the correct solution up to a fixed tolerance τ. In the general case, if the model predicts a sequence seqP, and the solution of the problem is O, we consider that the prediction is correct if seqP can be decoded into a matrix P and ‖P − O‖ < τ‖O‖ (1). For eigenvalue decomposition, we check that the solution (Q, D) predicted by the model can reconstruct the input matrix, i.e.
‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖. For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. All our published results use the L1 norm: ‖A‖ = Σi,j |ai,j|, for A = (ai,j). In this section, we discuss the impact of using different norms, namely L2 (‖A‖ = Σi,j a²i,j) or L∞ (‖A‖ = maxi,j |ai,j|). Using the L1 norm in equation 1 amounts to comparing the average absolute errors on the predicted coefficients (P − O) to the average absolute value of the coefficients of O. Using L2 compares the squared errors, and will bias the estimation towards large absolute errors, and coefficients of O with large absolute values. L∞ will compare the largest absolute error to the largest coefficient in |O|. The choice of the norm has a different impact for different problems. Figure 1 presents learning curves using the three norms for our best models on different problems. For basic arithmetic operations (transposition, addition, multiplication), there is little difference between L1 and L2 accuracies, and therefore no reason to prefer one over the other for model evaluation. For eigen and singular value problems, L2 accuracies reach a high value early during training, long before the model begins to learn according to the other norms. This is due to the fact that the eigenvalues of Wigner matrices tend to be regularly spaced over the interval [−2σ, 2σ] (σ = √n·s, with s the standard deviation of the coefficients and n the dimension of the matrix). This means that, in many cases, the model can predict the largest absolute eigenvalues from the bounds of the interval (which can be estimated from the dataset). For this reason, L2 accuracy is not a good evaluation metric for the eigenvalue or singular value problem. This is particularly clear in the 10×10 case: transformers struggle with such matrices, and L1 and L∞ accuracies remain very low even after a thousand epochs (300 million examples), but L2 accuracy is close to 100% from the beginning of training. A similar phenomenon takes place for eigenvector calculations: L2 and L∞ accuracies rise steeply, long before the model begins to learn (according to the L1 norm). This justifies the choice of L1 as our evaluation norm. C ADDITIONAL EXPERIMENTAL RESULTS C.1 LEARNING CURVES FOR DIFFERENT ENCODINGS AND ARCHITECTURES Figure 2 presents learning curves for loss and accuracy (with 5 and 1% tolerance) for different models, on four problems. These curves indicate the number of training examples needed for each problem. On average, our best models learn basic operations on matrices in less than 50 epochs (15 million examples). Training size requirements increase with operation complexity: from 30 million examples for eigenvalues, to 120 million for eigenvectors, and over 150 million for matrix inversion. On the inversion problem, we experiment with the number of attention heads in the encoder. Increasing the number of heads from 8 to 10 and 12 improves learning speed and accuracy. Over 12 heads, this benefit disappears: with 16 heads, our models need 800 epochs to train to 55% accuracy (with 5% tolerance). We believe that this reflects the trade-off between the number of heads (more heads catch more dependencies between elements in the input sequence) and the downsampling of attention patterns (when the internal model dimension remains fixed). Finally, we notice that the learning curves for the harder problems (eigenvalues, vectors and inversion) are noisy.
This is caused by the learning rates: our models usually need small learning rates (5·10⁻⁴ before scheduling is typical), and there is a trade-off between low rates that stabilize the learning curve and larger rates that accelerate training. C.2 MODEL SIZE The two main factors influencing model size are depth and the number of dimensions (see Appendix F). In this section, we discuss how model size impacts accuracy for the addition of 10×10 matrices, the multiplication of a 5×5 matrix by a vector, and the computation of the eigenvalues of a 5×5 matrix. All models in this section are symmetric (same dimension and number of layers in the encoder and decoder) and have 8 attention heads. For the addition task, tables 12 and 13 present the accuracy reached after 60 epochs (18 million examples) and the number of epochs (of 300,000 examples) needed to reach 95% accuracy, for models using the P1000 and B1999 encodings. Both encodings allow shallow architectures (1/1 and 2/2 layers) to learn addition with high accuracy, but the more compact B1999 supports smaller models (256 dimensions). In terms of speed, with B1999, shallow models are trained very fast, but it takes a lot of examples to train deeper models. The opposite is true for P1000 models. Table 14 presents the learning speed of models of different sizes for the matrix/vector product and eigenvalue computation tasks (5×5 matrices, and P1000 encoding). For each problem, there exists a minimal dimension and depth under which models struggle to learn: one layer and 128 dimensions for products, one layer or 128 dimensions for eigenvalues. Over that limit, increasing the dimension accelerates learning. Increasing the depth, on the other hand, brings no clear improvement in speed or accuracy. Finally, we experiment with larger models on larger problems. We trained models with 8 to 12 layers and 512 to 2048 dimensions on sets of 10×10 matrices, without success. As discussed in section 4.4, those problems are out of reach of the models we use in this paper (unless we use curriculum learning and train on mixed-size datasets). Increasing model size does not seem to help scaling to larger matrices. C.3 MODEL PERFORMANCE ON DIFFERENT DATASETS Table 15 summarizes in-domain performance (i.e. accuracy when the test set is generated with the same procedure as the training set) for different datasets. On Wigner matrices (i.e. matrices with independent and identically distributed, iid, coefficients), uniformly or normally distributed, with fixed-range coefficients (i.e. all matrices in the dataset have coefficients uniformly sampled from the same interval) or variable-range coefficients (i.e. the coefficient range varies from one matrix to another), all models achieve very high (99+%) accuracy. The eigenvalues of non-Wigner matrices with gaussian or Laplace-distributed eigenvalues are also predicted to high accuracy by all models. Over matrices with positive or uniformly distributed eigenvalues, smaller models using the FP15 encoding prove difficult to train. Finally, on mixtures of Wigner and non-Wigner matrices, all models predict to high accuracy. D ALTERNATIVE ARCHITECTURES D.1 OTHER SEQUENCE-TO-SEQUENCE MODELS: LSTM AND GRU We experimented with two popular recurrent architectures, long short-term memories (Hochreiter & Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014), on three tasks: addition of 5×5 and 10×10 matrices, and eigenvalues and matrix inversion of 5×5 matrices.
We experiment with sequence-to-sequence models, featuring an encoder and a decoder (LSTM or GRU), with 2 to 8 layers, and 1024 or 2048 hidden dimensions. The input and output sequences, encoded as in the rest of the paper, are pre-processed (and decoded) via an embedding layer with 256 or 512 dimensions. Addition, a very easy task for transformers (see section 4.2), proves difficult for LSTM and GRU. None of our models can learn addition of 10×10 matrices. Some models can learn addition of 5×5 matrices, but whereas transformers achieve 100% accuracy for all tolerances, our best LSTM and GRU only exceed 90% at 1% tolerance. GRU seem to perform better than LSTM on this task, and 2-layer models perform better than 4-layer models, but transformers have a distinct advantage over LSTM and GRU on addition. Both LSTM and GRU can be trained to predict the eigenvalues of 5×5 matrices with the same accuracy as transformers, for the P1000 and FP15 encodings (table 17). Matrix inversion, on the other hand, cannot be learned. Overall, these experiments show that other sequence-to-sequence architectures, LSTM and GRU, can learn tasks like eigenvalues and addition of small matrices. However, they are less efficient on addition (in terms of precision and scaling to larger matrices) and fail on more complex tasks, like matrix inversion. D.2 SHARED-LAYER TRANSFORMERS: UNIVERSAL TRANSFORMERS In the Universal Transformer (Dehghani et al., 2018), the stacked layers of usual transformer implementations are replaced by one layer that is looped through a fixed number of times (feeding the output of one iteration into the input of the next). This amounts to sharing the weights of the different layers, therefore greatly reducing the number of parameters in the model. This technique can be applied to the encoder, the decoder or both. The number of iterations is a fixed hyperparameter, but the original paper also proposed a halting mechanism inspired by Adaptive Computation Time (Graves, 2016), to adaptively control loop length at the token level. In this version, a stopping probability is learned for every position in the input sequence, and once it reaches a certain threshold, the layer merely copies the input onto the output. The iteration stops when all positions have halted, or when a maximum number of iterations is reached. A recent paper (Anonymous, 2022) proposed to use a similar copy-gating mechanism to skip iterations in a fixed-length loop. We experiment with these three variants (fixed length, adaptive halting, copy gating) on the addition (of 10×10 matrices), eigenvalue and matrix inversion tasks (5×5 matrices). For the addition task, we train universal transformers with one layer in the encoder and decoder, 256 or 512 dimensions and 8 attention heads. We use the B1999 encoding for the data. We experiment with a looped encoder, a looped decoder, and loops in both, with a loop length of 4, and with copy-gating and ACT (4 being then the maximum number of iterations). Table 18 summarizes our findings. Only models with encoder loops learn to add, and models with 512 dimensions learn with over 95% accuracy for all tolerances. Universal Transformers with one layer (looped-encoder only) perform as well as 2/2 transformers. On the eigenvalue task, we experiment with the P1000 and FP15 encodings, with encoder-loop-only 1/1 Universal Transformers with 4 or 8 loops.
Universal transformers using the P1000 encoding achieve the same performance (with only one layer) as the transformers in our main experiments. 4-loop transformers seem best, with gating not improving performance and ACT slightly degrading it. With the FP15 encoding, universal transformers become very difficult to train: only the 4-loop gated version achieves significant accuracy (still lower than the 6/1 transformers). Finally, we experimented with matrix inversion, with FP15/P1000 and P1000/P1000 encodings, and 4 or 8 loops in the encoder. A gated universal transformer using FP15 in the input and P1000 in the output achieved 73% accuracy, a significant result albeit lower than the best result achieved with 6/1 transformers using the same encodings (90%). With the P1000 encoding, the best universal transformers reach 55% accuracy, compared to 80% for their 6/1 transformer counterparts. Overall, Universal Transformers seem to achieve performance comparable with deep transformers (except on the inversion task), using fewer parameters. This makes shared-layer transformers an interesting direction for future work. E ADDITIONAL EXPERIMENTS E.1 NOISY DATA Experimental data is often noisy, and it is interesting to see how our models behave in the presence of random error. To this effect, we trained models to perform matrix addition and eigenvalue computations on noisy data, by adding a random gaussian error to all coefficients in the input (5×5) matrices. Three levels of noise were tested, with standard deviation equal to 1, 2 and 5% of the standard deviation of the matrix coefficients (σ = 5.77 for uniform coefficients in [−10, 10]). For linear operations like addition, we expect the model to predict correct results as long as the tolerance τ remains larger than the error. Table 20 demonstrates that models can be trained on noisy data without loss of accuracy, as long as the ratio between the standard deviation of the error and that of the coefficients is lower than the tolerance. Accuracy drops to about 40% when error levels are approximately equal to the tolerance, and to zero once the error exceeds the tolerance. It is worth noticing that model size and encoding have no apparent impact on robustness to noise. A similar pattern appears in eigenvalue calculations (table 21), but trained models prove more resistant to noise in the data than for addition. For instance, the eigenvalues of matrices with error standard deviation up to 0.05σ can be learned to high accuracy within 5% tolerance (vs 0.02σ for addition). As before, model size has no impact on robustness. However, FP15 models seem more difficult to train over noisy data than P1000. E.2 CO-TRAINING We have shown that transformers can be trained to perform all the tasks mentioned above, training one specific model for each task. In this section, we experiment with co-training: learning several tasks at once. We add a token at the beginning of the input and output sequences indicating the task to be solved (e.g. Transpose or Add), and generate data by randomly selecting a task (with equal probability for all tasks) and producing the corresponding pairs.
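A minimal sketch of this co-training data generation follows (our own illustration; all names are hypothetical):

```python
# Minimal sketch of co-training data generation: a task token is prepended
# to both the input and output sequences, and tasks are sampled with equal
# probability.
import random

TASKS = ["Transpose", "Add", "Dot", "Multiply", "Eigenvalues"]

def cotraining_pair(generators: dict):
    """generators maps a task name to a function returning an
    (input_tokens, output_tokens) pair for that task."""
    task = random.choice(TASKS)                  # equal probability
    inp, out = generators[task]()
    return [task] + inp, [task] + out            # prepend the task token

# Usage: pairs from cotraining_pair are batched exactly like single-task
# data, with the task token as the first token of each sequence.
```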
We train transformers with 4 or 6 layers, 512 dimensions and 8 attention heads on eight datasets corresponding to different co-training tasks: • Transpose and add (TA) • Transpose, add and dot product (vector-matrix multiplication) (TAD) • Transpose, add, dot product and matrix multiplication (TADM) • Transpose, add, dot product, matrix multiplication and eigenvalues (TADME) • Transpose, add, dot product, matrix multiplication, eigenvalues and eigenvectors (TADMEF) • Transpose, add, dot product, matrix multiplication, eigenvalues, eigenvectors and matrix inversion (TADMEFI) • Eigenvalues, eigenvectors and matrix inversion (EFI) Table 22 summarizes our findings. Lines correspond to co-training tasks, columns to the performance achieved on each specific task (with 5% tolerance). Co-training over a mixture of basic operations (transposition, addition, dot products and multiplication: the TA, TAD and TADM tasks) learns to predict the results of all operations with almost perfect accuracy. Co-training on the basic operations and eigenvalue computations (the TADME task) allows the model to predict eigenvalues with 80% accuracy, in exchange for a loss of performance on the dot product task. In other experiments with this task, the model learned all basic operations to 100% accuracy (as in the TADM setting), and the eigenvalues to a few percent. Adding more tasks, eigenvectors and inversion, results in the same performance. Co-training on the advanced tasks only (eigenvalues, vectors and inversion) results in 100% accuracy on eigenvalue computation, 22% on eigenvectors, and 0% on inversion. These results demonstrate the feasibility of co-training on basic matrix operations, but also suggest that further research is needed if one wants to extend it to all the tasks considered in this paper. F NUMBER OF PARAMETERS The number of parameters in the sequence-to-sequence transformer we use in this paper can be calculated as follows. • A self-attention mechanism with dimension d has 4d(d + 1) parameters: it is composed of four linear layers (K, Q, V and the output layer), with d inputs, d outputs and a bias. • A cross-attention mechanism with de dimensions in the encoder, and d in the decoder, has 2d(d + de + 2) parameters (K and V are de × d layers). • A FFN with one hidden layer, d inputs and outputs, and h hidden units has d(h + 1) + h(d + 1) parameters. • A layer normalization with d dimensions has 2d parameters. • An encoder layer with dimension d has a self-attention mechanism, a FFN with 4d hidden units (in our implementation) and two layer normalizations, for a total number of parameters of 12d² + 13d. • A decoder layer has a cross-attention layer (encoding dimension de) and a layer normalization on top of an encoder layer, for a total of 14d² + 19d + 2de·d parameters. • An embedding of dimension d for a vocabulary of w words uses dw parameters, and 2d more if it is coupled to a layer normalization. • The final prediction layer with an output dimension of d and a decoded vocabulary of w words uses (d + 1)w parameters (but in our case, dw will be shared with the decoder embedding).
Overall, the number of parameters for a transformer with n_e encoding layers of dimension d_e, n_d decoding layers of dimension d_d, an input vocabulary of w_i words, an output vocabulary of w_o words and a positional embedding of w_p words (corresponding to the maximum sequence length) can be computed by the formula: P = d_e(w_i + w_p + 2) + ((w_o + w_p + 2)d_d + w_o) + n_e·d_e(12d_e + 13) + n_d·d_d(14d_d + 2d_e + 19), the four terms in the sum corresponding to the input embedding, the output embedding, the encoder and the decoder. Table 23 provides the number of parameters for some of the models used in this paper. For the positional embedding, we set the number of words to the longest input and output sequence studied with that model. G EIGENVALUE DISTRIBUTION OF WIGNER MATRICES, AN EMPIRICAL JUSTIFICATION Figure 3 provides an empirical confirmation of the property of Wigner matrices mentioned in sections 2.2 and 5: the standard deviation of their eigenvalues is a function of their dimension and of the standard deviation of their coefficients only, and does not depend on the actual distribution of the coefficients. In particular, for coefficients with standard deviation σ = 10/√3 = 5.77, we expect the standard deviation of their eigenvalue distribution to be σ = 12.91, 18.26, 22.36 and 25.81 for square matrices of dimension 5, 10, 15 and 20. For three distributions (uniform, Laplace and gaussian) and four dimensions (5, 10, 15 and 20), we generated 100,000 random matrices with the same standard deviation of coefficients, and computed their eigenvalues. Standard deviations are within 0.01 of the theoretical values for all distributions and dimensions. It is interesting to note how the distributions (which correspond to the original coefficient distribution for n = 1) resemble the semi-circle as dimension increases.
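A short NumPy sketch reproducing this empirical check follows (our own illustration; we use far fewer samples than the 100,000 in the paper, for speed):

```python
# Empirical check: for symmetric (Wigner) matrices with iid coefficients of
# standard deviation s, the eigenvalue standard deviation is sqrt(n) * s,
# regardless of the coefficient distribution.
import numpy as np

rng = np.random.default_rng(0)
s = 10 / np.sqrt(3)                    # std of uniform coefficients on [-10, 10]

for n in (5, 10, 15, 20):
    eigs = []
    for _ in range(1000):
        M = rng.uniform(-10, 10, (n, n))
        iu = np.triu_indices(n, 1)
        M[(iu[1], iu[0])] = M[iu]      # mirror the upper triangle: symmetric,
                                       # every coefficient still uniform
        eigs.extend(np.linalg.eigvalsh(M))
    print(n, np.std(eigs), np.sqrt(n) * s)   # empirical vs theoretical std
```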
1. What is the focus of the paper "Linear Algebra with Transformers"?
2. What are the strengths of the paper regarding its novel approach?
3. What are the weaknesses of the paper, particularly in terms of experimentation and limitations?
4. How can the findings of the paper be generalized to different distributions and applications?
5. Are there any suggestions or recommendations for future research related to the topic?
Summary Of The Paper Review
Summary Of The Paper
This paper, “Linear algebra with transformers”, studies the application of seq2seq transformers to matrix operations. It studies their performance across different encodings of floating-point numbers, different sizes of matrices, different operations, and different (synthetic) data distributions. The main findings are that transformers work surprisingly well on various matrix operations (addition, multiplication, eigenvalues, inversion, SVD, …) for small matrices (e.g. 5x5), and that generalization to OOD problems is not symmetric (i.e. generalization from one distribution to another does not imply generalization the other way round).
Review
Strengths:
- The authors present a detailed study of many important questions. This is a fairly comprehensive work on the idea.
- Thought-provoking application of transformers.
- Very well written.
Weaknesses:
- I would have loved to see more than just L1 distances.
- The paper studies the question only on random matrices. In other symbolic domains we have seen that insights gained from machine learning approaches trained on random data do not necessarily carry over to “real-world” distributions. I would have loved to see a study that includes a wider variety of training and evaluation data.
- The models used in this paper sometimes have rather odd (small) hyperparameters. E.g., for many experiments the models have only 2 layers. I’d love to see larger models and whether they improve the results (and the exact number of parameters).
ICLR
1. What is the focus of the paper regarding algebraic computations?
2. What are the strengths and weaknesses of the proposed approach in addressing the problem?
3. Do you have any concerns about the experimental design and its limitations?
4. How does the reviewer assess the impact and novelty of the paper's content?
5. What are some potential research directions related to the topic that the reviewer would like to see explored?
Summary Of The Paper Review
Summary Of The Paper
This paper considers the problem of approximating algebraic computations over matrices using transformers. Experiments with different encodings are presented, investigating the use of transformers for approximating a number of algebraic operations.
Review
While I found the paper well written, I didn't find it very impactful. The authors are not proposing a novel technique for addressing the problem, but only report some experiments with different encodings for the matrices and a standard transformer architecture. I was hoping to get more insights after reading this article, such as which architectural choices are beneficial for each specific problem and why. Instead, the proposed approach is training an off-the-shelf model with randomly distributed examples, and the experiments do not consider alternative techniques. In my opinion, the motivation for this approach is also lacking, since the reported experiments only consider very small problems that can be solved exactly. I wish that the paper had instead considered cases that need to be approximated, or possibly shown that transformers trained on smaller problems can generalize to much higher dimensions. Since the learning algorithm has access to an oracle (NumPy) that can provide exact supervision, an interesting problem is how to select the most informative input instances to be solved during training. Another aspect that is worth investigating, in my opinion, is how to cope with noise in the training data, allowing the training of transformers with an approximate oracle.
ICLR
Title Linear algebra with transformers Abstract Most applications of transformers to mathematics, from integration to theorem proving, focus on symbolic computation. In this paper, we show that transformers can be trained to perform numerical calculations with high accuracy. We consider problems of linear algebra: matrix transposition, addition, multiplication, eigenvalues and eigenvectors, singular value decomposition, and inversion. Training small transformers (up to six layers) over datasets of random matrices, we achieve high accuracies (over 90%) on all problems. We also show that trained models can generalize out of their training distribution, and that out-of-domain accuracy can be greatly improved by working from more diverse datasets (in particular, by training on matrices whose coefficients are not independent and identically distributed). Finally, we show that few-shot learning can be leveraged to re-train models to solve larger problems. 1 INTRODUCTION Since their introduction by Vaswani et al. (2017), transformers, originally designed for machine translation, were applied to various problems, from text generation (Radford et al., 2018; 2019) to image processing (Carion et al., 2020) and speech recognition (Dong et al., 2018), where they soon achieved state-of-the-art performance (Dosovitskiy et al., 2021; Wang et al., 2020b). In mathematics, transformers were used for symbolic integration (Lample & Charton, 2019), theorem proving (Polu & Sutskever, 2020), formal logic (Hahn et al., 2021), SAT solving (Shi et al., 2021), symbolic regression (Biggio et al., 2021) and dynamical systems (Charton et al., 2020). All these problems pertain to symbolic mathematics, or involve a large amount of symbolic computation. When working on these tasks, transformers manipulate mathematical symbols, just like words in natural language. But mathematics is not limited to symbol manipulation: many practical applications involve numerical calculations, either exact (e.g. arithmetic) or approximate (e.g. function evaluation, numerical solutions of equations). The use of transformers for numerical computation has been less studied, and many early experiments with arithmetic have proved disappointing (Nogueira et al., 2021). This is, nevertheless, an important question: most problems in mathematics and science involve both symbolic and numerical computations. If we want transformers to solve these problems end-to-end, they need to be able to perform numerical calculations with high accuracy. In this paper, we train transformers to compute solutions of problems of linear algebra, which serve as fundamental building blocks in many scientific problems: basic operations on matrices, matrix inversion, eigenvalue and singular value decompositions. We introduce and discuss four encodings to represent problems and solutions as sequences that transformers can process, and train small transformers (up to 6 layers, 10 to 50 million trainable parameters) over generated datasets of random matrices. Trained models can compute approximate solutions to these problems (to within a few percent of their L1 norm) with over 90% accuracy (99% in most cases). We also show that they can generalize out of their training distribution, and be retrained to extrapolate to larger problems than the ones they were trained on. We believe these results pave the way for using transformers as end-to-end solvers for problems of mathematics and science.
After introducing the problems of linear algebra we are studying and presenting the encodings we use to represent them as sequences that can be used by our models, we discuss data generation, architecture and experimental settings. Then, we present our experiments on nine different problems, and discuss out-of-distribution generalization and few-shot learning for eigenvalue computation. Finally, we discuss our results and future directions for research, and present related work. 2 PROBLEMS AND DATASETS Let M and N be m × n matrices and V ∈ R^m. We study nine problems of linear algebra:
• matrix transposition: find M^T, an n × m matrix,
• matrix addition: find M + N, an m × n matrix,
• matrix-vector multiplication: find M^T V, a vector in R^n,
• matrix multiplication: find M^T N, an n × n matrix,
• eigenvalues: M symmetric, find its n (real) eigenvalues, sorted in descending order,
• eigenvectors: M symmetric, find D diagonal and Q orthogonal such that M = Q^T DQ, set as an (n + 1) × n matrix, with the (sorted) eigenvalues in its first line,
• singular values: find the n eigenvalues of M^T M, sorted in descending order,
• singular value decomposition: find orthogonal U, V and diagonal S such that M = USV, set as an (m + n + 1) × min(m, n) matrix,
• inversion: M square and invertible, find its inverse P, such that MP = PM = Id.
These problems range from basic operations on individual coefficients of the input matrices (transposition and addition), to computations involving several arithmetic operations over many coefficients (multiplication), and complex nonlinear transformations involving the whole matrix, with cubic complexity (decompositions and inversion). For each problem, we generate datasets of pairs of matrices (I,O), by sampling random input matrices I (see section 2.2), and computing the output O with a linear algebra package (NumPy linalg). When a problem has several input or output matrices, they are concatenated into one (for instance, the two m × n operands of the addition task are concatenated into one m × 2n matrix I). All coefficients in I and O are set in base ten floating-point representation, and rounded to three significant digits in the mantissa. 2.1 ENCODING MATRICES AS SEQUENCES The inputs and outputs to our problems are matrices. To be processed by transformers, they need to be converted into sequences of tokens. We encode an m × n matrix by first coding its dimensions as two symbolic tokens (V_m and V_n), followed by its mn coefficients, encoded as sequences. Throughout this paper, we will use four encoding schemes for matrix coefficients: P10, P1000, B1999, and FP15. In base 10 positional encoding (P10), a number is represented as a sequence of five tokens: one sign token (+ or -), 3 digits (from 0 to 9) for the mantissa, and a symbolic token (from E-100 to E+100) for the exponent. For instance, 3.14 will be represented as 314·10^−2, and encoded as [+, 3, 1, 4, E-2]. P1000 (positional base 1000) provides a more compact representation by encoding the mantissa as a single token (from 0 to 999), and representing a number as the triplet (sign, mantissa, exponent). B1999 (balanced base 1999) pushes this one step further by encoding the sign and mantissa together (from -999 to 999). Finally, FP15 encodes each floating-point number x = m·10^b as a unique token FP_{m/b}. Table 1 provides examples of these encodings. Additional details and examples can be found in Appendix A. Selecting an encoding is a trade-off. Long encodings (P10, P1000) embed knowledge about numbers that the model can leverage (e.g.
numbers can be compared using their signs and exponents, and addition and multiplication can be learned by memorizing small tables). Compact encodings use a larger vocabulary (harder to learn), but generate shorter sequences that facilitate training with transformers. In P10, a 20 × 20 matrix is a sequence of 2002 tokens, close to the practical limit of transformer implementations that use a quadratic attention mechanism. In FP15, it is only 402 tokens long. 2.2 RANDOM MATRIX GENERATION In most of our experiments, we train models over datasets of random matrices with uniformly distributed coefficients in [−A,A] (with A = 10). Occasionally, we sample gaussian coefficients with the same standard deviation (σ = A/√3). In the symmetric case, these matrices are known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (Mehta, 2004). As n increases, this distribution converges to the semi-circle law (p(λ) = √(4σ^2 − λ^2)/(2πσ^2)) for all coefficient distributions with bounded variance. If the coefficients are gaussian, the associated eigenvectors are uniformly distributed over the unit sphere. When investigating out-of-distribution generalization for the eigenvalue problem, we will need to generate random symmetric matrices with different distributions of their eigenvalues (corresponding to random matrices with non-iid coefficients). To this effect, we randomly sample symmetric matrices M, with gaussian coefficients, and compute their eigenvalue decomposition M = PDP^T, with P the orthogonal matrix of eigenvectors (uniformly distributed over the unit sphere since the coefficients are gaussian). We then replace D, the diagonal matrix of eigenvalues of M, with a diagonal D′ sampled from another distribution. Finally, we recompute M′ = PD′P^T, a symmetric matrix (because P is orthogonal) with eigenvalues distributed as we choose, and eigenvectors uniformly distributed over the unit sphere. 3 MODELS AND EXPERIMENTAL SETTINGS We use the standard transformer architecture introduced in Vaswani et al. (2017), with an encoder and a decoder connected by a cross-attention mechanism. Most of our models have 512 dimensions, 8 attention heads and up to 6 layers. We experiment with different numbers of layers and attention heads in the encoder and decoder. All training is supervised, and minimizes the cross-entropy between the model prediction and the correct solution. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^−4, an initial warm-up phase of 10,000 steps and cosine scheduling (Loshchilov & Hutter, 2016). All training data is generated on the fly, in batches of 64 examples. Every 300,000 examples, 10,000 random problems are generated and used to evaluate the model. When evaluating, we consider that a predicted sequence seq_P is a correct solution to the problem (I,O) (I and O the input and output matrices) if it can be decoded as a valid matrix P (several matrices for singular and eigen decomposition) that approximates the correct solution to a given tolerance τ (τ ∈ {5%, 2%, 1%, 0.5%}). For addition, transposition, multiplication, and eigen and singular values, we check that P verifies ‖P − O‖ < τ‖O‖ (with ‖A‖ = Σ_{i,j} |a_{i,j}| for A = (a_{i,j}), i.e. the L1 norm). For eigenvalue decomposition, we check that the solution (Q,D) predicted by the model can reconstruct the input matrix, i.e. ‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖.
For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. The choice of the L1 norm is important: norms like L2 and L∞ will favor models that correctly predict the largest coefficients in the solution. For eigen and singular value problems, this amounts to predicting the largest values, an easier problem than the one we want to solve. We consider different tolerances for different problems. Since we round numbers to three significant digits, 0.5% is the best we can hope for. In fact, a number x with mantissa 1.00 is subject to a maximal rounding error of 0.5% (x ∈ ]0.995, 1.005]), which may accumulate when several (rounded) numbers are summed, and increase again when nonlinear operations are considered. When discussing results, we consider tolerances of 0% for transposition, which involves no arithmetic, 1% for basic matrix operations (addition, multiplication), and 2 or 5% for nonlinear operations (decomposition, inversion), but we usually provide results for all tolerance levels. Most of our experiments focus on 5 × 5 square matrices, or rectangular matrices with as many coefficients (e.g. 6 × 4, 2 × 13). This helps when comparing encodings: for larger dimensions, varying sequence lengths make comparisons difficult. We also study scaled-up versions of the problems (from 8 × 8 to 15 × 15), and datasets with matrices of variable dimensions (5-10 or 5-15). In this paper, we limit ourselves to problems that can be solved using small models (with up to 6 layers). Scaling to larger problems, and leveraging deeper architectures, is left for future research. 4 EXPERIMENTS AND RESULTS 4.1 TRANSPOSITION Learning to transpose a matrix amounts to learning a permutation of its elements. For a square matrix, the permutation is composed of cycles of two elements. Permutations for rectangular matrices involve longer cycles. This task involves no arithmetic operations: tokens from the input sequence are merely copied to the output, in different positions. We investigate two formulations of this problem: a fixed-size case, where all matrices in the dataset have the same dimensions and only one permutation is to be learned, and a variable-size case, where the dataset includes matrices of different dimensions, with as many permutations to learn. We train transformers with one layer, 256 dimensions and 8 attention heads in the encoder and decoder, over datasets using our four encoding schemes. All models learn to predict the exact solution (with 0% tolerance) in more than 99% of test cases, for fixed-size matrices, square or rectangular, with dimensions up to 30 × 30. This holds for all encodings, and for input or output sequences up to 2000 tokens long. Similar accuracies are achieved for variable-size datasets (over 99% for 5-15 and 96% for 5-20), with the rectangular cases proving slightly more difficult to train. Table 2 summarizes our results. 4.2 ADDITION Learning to add two m × n matrices amounts to learning the correspondence between the positions of input and output (as in the transposition task) and the algorithm for adding two numbers in floating-point representation, which will be performed on mn pairs of elements. We train transformers with 1 or 2 layers, 8 attention heads and 512 dimensions. Sums of fixed-size matrices with dimensions up to 10, both square and rectangular, are predicted with over 99% accuracy within 1% tolerance (and over 98% within 0.5%), with all four encodings.
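To make the rounding and evaluation protocol of Sections 2 and 3 concrete, here is a minimal sketch of the three-significant-digit rounding and the relative L1-norm tolerance check; this is our own illustration rather than the authors' released code, and the function names are ours:

```python
import numpy as np

def round_sig(x, digits=3):
    # Round each coefficient to `digits` significant digits in the mantissa.
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0, 1.0, np.abs(x))      # avoid log10(0)
    mag = 10.0 ** np.floor(np.log10(safe))       # magnitude of each coefficient
    return np.round(x / mag, digits - 1) * mag

def is_correct(pred, target, tol=0.01):
    # Relative L1 criterion of Section 3: ||P - O||_1 < tol * ||O||_1.
    return np.abs(pred - target).sum() < tol * np.abs(target).sum()

# Example: a prediction within 1% relative L1 error counts as correct.
O = round_sig(np.random.uniform(-10, 10, (5, 5)))
P = O + 0.001 * np.random.randn(5, 5)
print(is_correct(P, O, tol=0.01))
```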
As dimensions increase, models using the P10 and P1000 encodings become more difficult to train as input sequences grow longer: adding two 15 × 15 matrices involves 450 input coefficients, i.e. a sequence of 1352 tokens in P1000 and 2252 in P10. Nevertheless, FP15 models achieve 99.5% accuracy within 0.5% tolerance for 15×15 matrices, and B1999 models 89.7% accuracy with 1% tolerance on 20 × 20 matrices. Variable-size matrices with dimensions up to 10 are predicted by 2-layer transformers using the B1999 encoding with over 99.5% accuracy within 1% tolerance. Over matrices with larger dimensions (5-15), shallow models with 1 or 2 layers struggle, and their accuracy drops to 48 and 37% in the square and rectangular cases. This can be mitigated by using deeper decoders: models with one layer in the encoder and 6 in the decoder achieve 77 and 87% accuracy on the same datasets. Table 3 summarizes our results. 4.3 MULTIPLICATION Multiplication of a matrix M of dimension m × n by a vector V ∈ R^n amounts to computing the m dot products between V and the rows of M. Each calculation features n multiplications and n − 1 additions, and involves one row of the matrix and all coefficients of the vector. The model must now learn the position of the 2n elements in the computation, and two operations (add and multiply). Experimenting with models with 1 or 2 layers over 5×5 matrices, we observe that only models using the P10 and P1000 encodings can be trained to high accuracy. The P1000 encoding performs best, with little difference between one- and two-layer models. Accuracies over 99.9%, at 1% tolerance, are achieved by 2-layer transformers using the P1000 encoding for 5×5 and 10×10 square matrices. Comparable accuracies are achieved when multiplying rectangular matrices by vectors with the same overall number of coefficients (30). Experiments with datasets of matrices of variable size (from 5 to 10) achieve non-trivial performance (from 48% with 1% tolerance to 72% with 5% tolerance, for square matrices). Results are summarized in table 4. Multiplication of matrices M and P is a scaled-up version of matrix-vector multiplication, which is now performed for every column of matrix P. As previously, only models using the P10 and P1000 encodings can be trained to predict to high accuracy. Over 5×5 matrices and rectangular matrices of similar size, trained model accuracy is the same as for vector multiplication (over 99% at 1% tolerance, see table 5), but deeper decoders (with 4 to 6 layers) are needed. 4.4 EIGENVALUES We now turn to nonlinear problems that are usually solved by iterative algorithms. We train models with 4 or 6 layers in the encoder or the decoder to predict the eigenvalues of symmetric matrices. Over samples of 5×5 random matrices, we reach 100% accuracy at 5% tolerance, and 98.5% at 1%, for all four encodings. For 8×8 matrices, we achieve accuracies of 100 and 85% at 5 and 1% tolerance. Larger problems, however, prove difficult to learn: on 10×10 matrices, 25% accuracy at 5% tolerance is reached after 360 million examples. As a comparison, 5×5 models train to maximum accuracy in about 40 million examples, and 8×8 models in about 60 million. We can overcome this limitation by training models on variable-size datasets.
On samples of matrices with 5-10, 5-15 and 5-20 dimensions, we achieve 100% accuracy at 5% tolerance, and 88, 94 and 45% at 1%. Using the 5-15 model, the eigenvalues of 10×10 matrices can be predicted with 100% accuracy at 2% tolerance, and 73% at 1%. Table 6 summarizes our results. 4.5 EIGENVECTORS This is an expanded version of the previous task: together with the eigenvalues, we predict the orthogonal matrix of eigenvectors. Over 5 × 5 matrices, models using the P10 and P1000 encodings achieve 97.0 and 94.0% accuracy with 5% tolerance. FP15 models fare less well, with an accuracy of 51.6%, but asymmetric models, with a 6-layer FP15 encoder and a 1-layer P1000 decoder, achieve 93.5% accuracy at 5% and 67.5% at 1% tolerance. The eigenvectors of 6 × 6 matrices can be predicted by P1000 models with an accuracy of 81.5%. Table 7 summarizes our results. 4.6 INVERSION Inversion of 5×5 matrices proves more difficult than the previous tasks, with accuracies of 73.6% for P10 models and 80.4% for P1000 models (5% tolerance, 6-layer encoders and 1-layer decoders). Increasing the number of attention heads to 10 and 12 brings little improvement in accuracy, but allows for faster training: 8-head models are trained to 75% accuracy in about 250 million examples, 10- and 12-head models in only 120 million. The highest accuracies (90.0%) are achieved by asymmetric models: a 6-layer FP15 encoder with 12 attention heads, and a 1-layer P1000 decoder with 8 heads. 4.7 SINGULAR VALUE DECOMPOSITION Although this task is related to eigen decomposition (the singular values of a symmetric matrix are the absolute values of its eigenvalues), it proves more difficult to learn: transformers with up to 6 layers, using the P10 or P1000 encoding, can predict the singular value decomposition of 4×4, but not 5×5, matrices. Accuracies remain high: 100 and 86.7% for the singular values (at 5 and 1% tolerance), and 98.9 and 75.3% for the full decomposition. 5 OUT-OF-DOMAIN GENERALIZATION AND RETRAINING In this section, we focus on the prediction of eigenvalues of symmetric matrices. To train our models, we generate random n×n matrices with independent and identically distributed (iid) coefficients, sampled from a uniform distribution over [−A,A]. They belong to a common class of random matrices, known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (s = A/√3 when they are uniform). As n increases, this distribution converges to the semi-circle law (Mehta, 2004). Whereas Wigner matrices are very common in science, random matrices with different eigenvalue distributions (and non-iid coefficients) appear in many practical cases. For instance, statistical covariance matrices have all their eigenvalues positive, and the adjacency matrices of scale-free and other non-Erdos-Renyi graphs have centered but non-semi-circle distributions of eigenvalues (Preciado & Rahimian, 2017). It is, therefore, important to understand how models trained on Wigner matrices perform on matrices with different distributions of eigenvalues. To this effect, we create test sets of 10,000 matrices with different distributions from the training set. First, we generate matrices with uniform iid coefficients (as in the training set) but different standard deviations: σ_test ∈ [0.1σ_train, 1.5σ_train]. Over these test sets, our trained models achieve over 96% accuracy (with 2% tolerance) for σ_test ∈ [0.6σ_train, σ_train].
However, model accuracy drops when σ_test is out of this range: 54% for 0.4σ_train, 0% for 0.2σ_train, 26% for 1.1σ_train and 2% for 1.3σ_train. Out-of-distribution generalization only takes place when the test set variance is lower than, but not too far below (over 25% of), the training set variance. Then, we generate test sets of matrices with different eigenvalue distributions: positive eigenvalues (Wigner matrices with their eigenvalues replaced by their absolute values), and eigenvalues distributed according to the uniform, gaussian or Laplace law (see section 2.2), with standard deviation σ_train and 0.6σ_train. Over test sets with σ_test = σ_train, accuracies are 26% for Laplace, 25% for gaussian, 19% for uniform, and 0% for positive. Results are slightly better for test sets with lower standard deviation (0.6σ_train): 28, 44, 60 and 0% for Laplace, gaussian, uniform and positive, but out-of-distribution accuracies are low, and matrices with positive eigenvalues cannot be predicted at all. To improve out-of-distribution accuracy, we train new models on datasets with different distributions of eigenvalues, and evaluate them on the test sets previously created. First, we generate matrices with uniform coefficients but variable standard deviation (by randomly selecting A ∈ [1, 100] for each matrix). Unsurprisingly, models trained on this dataset achieve high accuracies on test sets of Wigner matrices with high or low variance. Performance also increases over the gaussian, uniform and Laplace-distributed test sets (from 25-60% to 53-68%). Yet, matrices with positive eigenvalues cannot be predicted. Training models over a mixture of (Wigner) matrices with uniform iid coefficients and matrices with positive eigenvalues results in better prediction of positive eigenvalues, but degrades performance over all other test sets. However, models trained on a mixture of matrices with uniform coefficients and matrices with gaussian eigenvalues, or of uniform iid and Laplace eigenvalues, achieve high accuracies over all test sets, as do models trained on matrices with Laplace eigenvalues only, or on a mixture of uniform, gaussian and Laplace eigenvalues (all non-Wigner matrices). These experiments are presented in table 10. This is an important result: it suggests that Wigner matrices, often considered the default model for random matrices, might not be the best choice for training transformers. Models trained on non-Wigner matrices (non-iid coefficients, limit distribution of eigenvalues not a semi-circle) generalize to matrices with iid coefficients, whereas the reverse is not true. This confirms that out-of-distribution generalization requires that particular attention be paid to training data generation. Models trained on matrices of a given size do not generalize to different dimensions, but they can be retrained over samples of matrices of different sizes. This takes comparatively few examples: a 5×5 model, which takes 40 million examples to be trained, can learn to predict with high accuracy the eigenvalues of matrices of dimensions 6 and 7 with about 25 million additional examples. Table 11 presents those results. The capacity of pre-trained large transformers (such as GPT-3) to perform few-shot learning is well documented, but it is interesting to observe the same phenomenon in smaller models. 6 DISCUSSION AND FUTURE DIRECTIONS Our experiments demonstrate that transformers can be trained to solve problems of linear algebra, using randomly generated datasets.
However, their accuracy depends on the encodings used to represent matrix coefficients. We introduce four encoding schemes, and our experiments suggest that P10 is generally dominated by P1000, which is also more economical, and that B1999 never really finds its use, as FP15 is more compact and P1000 more efficient. P1000 seems to be a good choice for problems of moderate size, and FP15 for longer sequences. For advanced problems like eigenvectors and inversion, asymmetric architectures, with a deep FP15 encoder and a shallow P1000 decoder, achieve the best performance. Our interpretation is that P1000 in the decoder facilitates training because the meaningful representation of the output as (sign, mantissa, exponent) triplets allows for better error feedback during training. On the other hand, a deep FP15 encoder can provide more complex representations of the input matrix, while being easier to train thanks to the shorter sequences. Such asymmetric architectures also benefit from more attention heads (10 to 12) in the encoder, while fewer heads (4) in the decoder improve training stability at no cost in accuracy. These asymmetric architectures deserve further study. Most of our experiments focus on matrices with 5 to 10 rows and columns. Our results on the eigenvalue problem suggest that larger problems can be solved by training over matrices of variable size, or by retraining over larger matrices. In this work, matrices of different dimensions are sampled in equal proportion and presented for training in random order. Varying their proportions and scheduling (i.e. curriculum learning) should result in better performance. Yet, as dimension increases, sequence lengths will reach the practical limits of quadratic attention mechanisms. Experimenting with transformers with linear or log-linear attention (Zaheer et al., 2021; Wang et al., 2020a; Vyas et al., 2020; Child et al., 2019) is a natural extension of our work. In terms of asymptotic complexity, matrix inversion (and the other nonlinear tasks) is usually handled by O(n^3) algorithms (although O(n^2.37) methods are known). Since our sequence length is O(n^2), transformers with quadratic attention mechanisms are O(n^4). Linear attention would reduce this to O(n^2). The out-of-distribution experiments are our most significant results. They prove that models trained on random data can generalize to a wide range of test distributions. They also confirm the importance of wisely selecting training data distributions, a process that can be counter-intuitive. In our specific case, the “obvious” random model (Wigner matrices) is not the best for out-of-domain generalization. In fact, we show that sets of “special” matrices (non-iid coefficients with Laplace eigenvalues) can produce models with a better capability for generalization, notably on Wigner matrices. This matches the intuitive idea that we learn more from edge cases than from averages. 7 RELATED WORK Algorithms using neural networks to compute eigenvalues and eigenvectors have been proposed since the early 1990s (Samardzija & Waterland, 1991; Cichocki & Unbehauen, 1992; Yi et al., 2004; Tang & Li, 2010; Oja, 1992), and improvements to the original techniques have been suggested until recently (Finol et al., 2019). Similar approaches have been proposed for other problems in linear algebra (Wang, 1993a;b; Zhang et al., 2008).
All these methods leverage the Universal Approximation Theorem (Cybenko, 1989; Hornik, 1991), which states that, under weak conditions on their activation functions, neural networks can approximate any continuous mapping (in our case, the mapping between the coefficients of a matrix and its associated eigenvalues and eigenvectors). These approaches rely on the fact that eigenvalues and eigenvectors appear in the solutions of particular differential equations involving the matrix coefficients (e.g. Brockett (1991)). By designing a neural network that represents this differential equation, with the matrix to decompose as the input and the coefficients of the output layer as the solution, and by defining a loss function that measures how well the output layer approximates the correct solution, the network can be trained to find better and better approximations to the solution. These techniques have two main limitations: they rely on a problem-specific network architecture that has to be hand-coded, and computation is done at training time, which makes them slow and implies retraining the network every time a new matrix is to be processed. In comparison, the techniques proposed in this paper are trained once, and can then process any input matrix at inference time. Techniques have been proposed to train neural networks to compute basic mathematical operations, and to use them as building blocks for larger components. Kaiser & Sutskever (2015) introduced the Neural GPU, which could learn addition and multiplication over binary representations of integers. Trask et al. (2018) proposed Neural Arithmetic Logic Units (NALU), which can learn to perform exact additions, subtractions, multiplications and divisions by constraining the weights of a linear network to remain close to 0, 1 or -1. Both the Neural GPU and NALU have been shown to be able to extrapolate to numbers far larger than those they were trained on. For matrix multiplication, Blalock & Guttag (2021) use learning techniques to improve on known approximate techniques. The use of transformers in mathematics has mostly focused on symbolic computation. Lample & Charton (2019) showed that transformers could be trained to integrate functions and solve ordinary differential equations and, in a follow-up work (Charton et al., 2020), to predict properties of differential systems. Transformers have also been applied to formal systems, in theorem proving (Polu & Sutskever, 2020) and temporal logic (Hahn et al., 2021). The use of sequence to sequence models for arithmetic and the exact solution of mathematical problems has been studied by Saxton et al. (2019). In a recent paper, Nogueira et al. (2021) point to the limitations of experiments on arithmetic. 8 CONCLUSION We have shown that transformers can be trained over generated data to solve problems of linear algebra with high accuracy, and that careful selection of the generative model for their training data can allow them to generalize out of their training distribution. This demonstrates that applications of transformers to mathematics are not limited to symbolic calculation, and can cover a wide range of scientific problems featuring numerical computations. We believe our results pave the way for a wider applicability of transformers in science. Reproducibility statement The transformer implementation and the framework for running the experiments were used in several prior works, and rely on standard libraries (PyTorch for the models, NumPy for mathematical calculations).
The model source code, data generation code, and parameters for experiments will be open-sourced and made publicly available. All experiments were run several times (on average 10 times), with multiple random seeds and light modifications of the hyperparameters (e.g. small changes in model size, weight initialization, activation functions), to guarantee their robustness. Ethics statement Given the subject of the paper, and the fact that all data used are randomly generated, we believe that no potential ethical concerns are raised by this research. A NUMBER ENCODINGS Let x be a non-zero real number; it can be represented uniquely as x = s·m·10^e, with s ∈ {−1, 1}, m ∈ [100, 1000[, e ∈ Z. Rounding m to the nearest integer n, we get the base ten, floating-point representation of x with three significant digits: x ≈ s·n·10^e, with (s, n, e) ∈ Z^3. By convention, 0 is encoded as +0·10^0. All our encodings are possible representations of the triplets (s, n, e). In this paper, we limit e to the range [−100, 100], and n to the range [100, 999]. In base N positional encoding, we encode s (the sign) and e (the exponent) as unique tokens: + or - for s, and a token from E-100 to E100 for e. The mantissa, n, is encoded as the representation of n in base N (e.g. binary representation if N = 2, decimal representation if N = 10), a sequence of ⌈log_N(1000)⌉ tokens from 0 to N-1. Overall, a number will be encoded as a sequence of ⌈log_N(1000)⌉ + 2 tokens, from a vocabulary of 203 + N tokens. For instance, x = e^π ≈ 23.14069 will be represented by +231·10^−1, and encoded in P10 (base 10 positional) as the sequence [+,2,3,1,E-1], and in P1000 (base 1000 positional) as [+,231,E-1]. x = −0.5 will be represented as −500·10^−3, and encoded in P10 as [-,5,0,0,E-3], and in P1000 as [-,500,E-3]. Other bases N could be considered, as well as different bases for the exponent, and different sizes of the mantissa. In this paper, we use P10 to encode numbers with absolute value in [10^−100, 10^101] as sequences of 5 tokens, using a vocabulary of 213 tokens (10 digits, 2 signs, and 201 values of the exponent), and P1000 as sequences of 3 tokens, with a vocabulary of 1104. Balanced base 2a + 1 uses digits between −a and a (Knuth, 1997). For instance, in balanced base 11, digits range from −5 to 5 (an everyday example of a balanced base can be found in the way we state the hour, as “twenty to two” or “twenty past two”). Setting a to 999, we define B1999, and encode the sign and mantissa as a single token between −999 and 999. Numbers are then encoded on two tokens, with a vocabulary of 2004. Finally, we encode floating-point numbers as unique tokens, by rewriting any number as x = m·10^b, with m ∈ [−999, 999], b ∈ [−(p + 2)/2, (p + 2)/2], p + 2 ≡ 0 (mod 2), and encoding it as the unique token FP_{m,b}. This allows us to represent numbers with 3 significant digits and a dynamic range of 10^(p+2), using a vocabulary of (1.8p + 2)·10^3 tokens. In this paper, we use p = 16, encoding numbers as unique tokens with a vocabulary of 30,000 (FP15). B L1, L2 AND L∞ NORMS FOR EVALUATION We evaluate the accuracy of our trained models by decoding model predictions and verifying that they approximate the correct solution up to a fixed tolerance τ. In the general case, if the model predicts a sequence seq_P, and the solution of the problem is O, we consider that the prediction is correct if seq_P can be decoded into a matrix P and ‖P − O‖ < τ‖O‖ (1). For eigenvalue decomposition, we check that the solution (Q,D) predicted by the model can reconstruct the input matrix, i.e.
‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖. For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. All our published results use the L1 norm: ‖A‖ = Σ_{i,j} |a_{i,j}|, for A = (a_{i,j}). In this section, we discuss the impact of using different norms, namely L2 (‖A‖ = Σ_{i,j} a_{i,j}^2) and L∞ (‖A‖ = max_{i,j} |a_{i,j}|). Using the L1 norm in equation 1 amounts to comparing the average absolute error on the predicted coefficients (P − O) to the average absolute value of the coefficients of O. Using L2 compares the squared errors, and will bias the estimate towards large absolute errors and coefficients of O with large absolute values. L∞ compares the largest absolute error to the largest coefficient in |O|. The choice of the norm has a different impact on different problems. Figure 1 presents learning curves using the three norms for our best models on different problems. For basic arithmetic operations (transposition, addition, multiplication), there is little difference between L1 and L2 accuracies, and therefore no reason to prefer one over the other for model evaluation. For eigen and singular value problems, L2 accuracies reach a high value early during training, long before the model begins to learn according to the other norms. This is due to the fact that the eigenvalues of Wigner matrices tend to be regularly spaced over the interval [−2σ, 2σ] (σ = √n·s, with s the standard deviation of the coefficients and n the dimension of the matrix). This means that, in many cases, the model can predict the largest absolute eigenvalues from the bounds of the interval (which can be estimated from the dataset). For this reason, L2 accuracy is not a good evaluation metric for the eigenvalue or singular value problems. This is particularly clear in the 10×10 case: transformers struggle with such matrices, and L1 and L∞ accuracies remain very low even after a thousand epochs (300 million examples), but L2 accuracy is close to 100% from the beginning of training. A similar phenomenon takes place for eigenvector calculations: L2 and L∞ accuracies rise steeply, long before the model begins to learn (according to the L1 norm). This justifies the choice of L1 as our evaluation norm. C ADDITIONAL EXPERIMENTAL RESULTS C.1 LEARNING CURVES FOR DIFFERENT ENCODINGS AND ARCHITECTURES Figure 2 presents learning curves for loss and accuracy (with 5 and 1% tolerance) on different models, for four problems. These curves indicate the number of training examples needed for each problem. On average, our best models learn basic operations on matrices in less than 50 epochs (15 million examples). Training size requirements increase with operation complexity: from 30 million examples for eigenvalues, to 120 million for eigenvectors, and over 150 million for matrix inversion. On the inversion problem, we experiment with the number of attention heads in the encoder. Increasing the number of heads from 8 to 10 and 12 improves learning speed and accuracy. Over 12 heads, this benefit disappears: with 16 heads, our models need 800 epochs to train to 55% accuracy (with 5% tolerance). We believe that this reflects the trade-off between the number of heads (more heads catch more dependencies between elements in the input sequence) and the downsampling of attention patterns (when the internal model dimension remains fixed). Finally, we notice that the learning curves for the harder problems (eigenvalues, vectors and inversion) are noisy.
This is caused by the learning rates: our models usually need small learning rates (5·10^−4 before scheduling is typical), and there is a trade-off between low rates that stabilize the learning curve and larger rates that accelerate training. C.2 MODEL SIZE The two main factors influencing model size are depth and the number of dimensions (see Appendix F). In this section, we discuss how model size impacts accuracy for the addition of 10 × 10 matrices, the multiplication of a 5 × 5 matrix by a vector, and the computation of the eigenvalues of a 5 × 5 matrix. All models in this section are symmetric (same dimension and number of layers in the encoder and decoder) and have 8 attention heads. For the addition task, tables 12 and 13 present the accuracy reached after 60 epochs (18 million examples) and the number of epochs (of 300,000 examples) needed to reach 95% accuracy, for models using the P1000 and B1999 encodings. Both encodings allow shallow architectures (1/1 and 2/2 layers) to learn addition with high accuracy, but the more compact B1999 supports smaller models (256 dimensions). In terms of speed, with B1999, shallow models train very fast, but it takes a lot of examples to train deeper models. The opposite is true for P1000 models. Table 14 presents the learning speed of models of different sizes for the matrix/vector product and eigenvalue computation tasks (5 × 5 matrices, P1000 encoding). For each problem, there exists a minimal dimension and depth under which models struggle to learn: one layer and 128 dimensions for products, one layer or 128 dimensions for eigenvalues. Over that limit, increasing the dimension accelerates learning. Increasing the depth, on the other hand, brings no clear improvement in speed or accuracy. Finally, we experiment with larger models on larger problems. We trained models with 8 to 12 layers and 512 to 2048 dimensions on sets of 10 × 10 matrices, without success. As discussed in section 4.4, those problems are out of reach of the models we use in this paper (unless we use curriculum learning and train on mixed-size datasets). Increasing model size does not seem to help scaling to larger matrices. C.3 MODEL PERFORMANCE ON DIFFERENT DATASETS Table 15 summarizes in-domain performance (i.e. accuracy when the test set is generated with the same procedure as the training set) for different datasets. On Wigner matrices (i.e. matrices with independent and identically distributed, iid, coefficients), uniformly or normally distributed, with fixed-range coefficients (i.e. all matrices in the dataset have coefficients uniformly sampled from the same interval) or variable-range coefficients (i.e. the coefficient range varies from one matrix to another), all models achieve very high (99+%) accuracy. The eigenvalues of non-Wigner matrices with gaussian or Laplace distributed eigenvalues are also predicted to high accuracy by all models. Over matrices with positive or uniformly distributed eigenvalues, smaller models using the FP15 encoding prove difficult to train. Finally, on mixtures of Wigner and non-Wigner matrices, all models predict to high accuracy. D ALTERNATIVE ARCHITECTURES D.1 OTHER SEQUENCE TO SEQUENCE MODELS: LSTM AND GRU We experimented with two popular recurrent architectures, long short-term memories (Hochreiter & Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014), on three tasks: addition of 5 × 5 and 10 × 10 matrices, and eigenvalues and matrix inversion of 5 × 5 matrices.
We experiment with sequence to sequence models, featuring an encoder and a decoder (LSTM or GRU), with 2 to 8 layers and 1024 or 2048 hidden dimensions. The input and output sequences, encoded as in the rest of the paper, are pre-processed (and decoded) via an embedding layer with 256 or 512 dimensions. Addition, a very easy task for transformers (see section 4.2), proves difficult for LSTM and GRU. None of our models can learn addition of 10×10 matrices. Some models can learn addition of 5×5 matrices, but whereas transformers achieve 100% accuracy for all tolerances, our best LSTM and GRU only exceed 90% at 1% tolerance. GRU seem to perform better than LSTM on this task, and 2-layer models perform better than 4-layer models, but transformers have a distinct advantage over LSTM and GRU on addition. Both LSTM and GRU can be trained to predict the eigenvalues of 5×5 matrices with the same accuracy as transformers, for the P1000 and FP15 encodings (table 17). Matrix inversion, on the other hand, cannot be learned. Overall, these experiments show that other sequence to sequence architectures, LSTM and GRU, can learn tasks like eigenvalues and addition of small matrices. However, they are less efficient on addition (in terms of precision and scaling to larger matrices) and fail on more complex tasks, like matrix inversion. D.2 SHARED-LAYER TRANSFORMERS: UNIVERSAL TRANSFORMERS In the Universal Transformer (Dehghani et al., 2018), the stacked layers of usual transformer implementations are replaced by one layer that is looped through a fixed number of times (feeding the output of one iteration into the input of the next). This amounts to sharing the weights of the different layers, therefore greatly reducing the number of parameters in the model. This technique can be applied to the encoder, the decoder or both. The number of iterations is a fixed hyperparameter, but the original paper also proposed a halting mechanism inspired by Adaptive Computation Time (Graves, 2016) to adaptively control loop length at the token level. In this version, a stopping probability is learned for every position in the input sequence, and once it reaches a certain threshold, the layer merely copies the input onto the output. The iteration stops when all positions have halted, or when a fixed maximum number of iterations is reached. A recent paper (Anonymous, 2022) proposed to use a similar copy-gating mechanism to skip iterations in a fixed-length loop. We experiment with these three variants (fixed length, adaptive halting, copy gating) on the addition (of 10 × 10 matrices), eigenvalue and matrix inversion (5 × 5 matrices) tasks. For the addition task, we train universal transformers with one layer in the encoder and the decoder, 256 or 512 dimensions and 8 attention heads. We use the B1999 encoding for the data. We experiment with loops in the encoder, in the decoder, and in both, with a loop length of 4, with copy-gating, and with ACT (the 4 loops then being a maximum number of iterations). Table 18 summarizes our findings. Only models with encoder loops learn to add, and models with 512 dimensions learn with over 95% accuracy for all tolerances. Universal Transformers with one layer (looped-encoder only) perform as well as 2/2 transformers. On the eigenvalue task, we experiment with the P1000 and FP15 encodings, with encoder-loop-only 1/1 Universal Transformers with 4 or 8 loops.
Universal transformers using the P1000 encoding achieve the same performance (with only one layer) as the transformers in our main experiments. 4-loop transformers seem best, with gates not improving performance and ACT slightly degrading it. With the FP15 encoding, universal transformers become very difficult to train: only the 4-loop gated version achieves significant accuracy (still lower than the 6/1 transformers). Finally, we experimented with matrix inversion, with FP15/P1000 and P1000/P1000 encodings, and 4 or 8 loops in the encoder. A gated universal transformer using FP15 in the input and P1000 in the output achieved 73% accuracy, a significant result, albeit lower than the best result achieved with 6/1 transformers using the same encodings (90%). With the P1000 encoding, the best universal transformers reach 55% accuracy, compared to 80% for their 6/1 transformer counterparts. Overall, Universal Transformers seem to achieve performance comparable to deep transformers (except on the inversion task), using fewer parameters. This makes shared-layer transformers an interesting direction for future work. E ADDITIONAL EXPERIMENTS E.1 NOISY DATA Experimental data is often noisy, and it is interesting to see how our models behave in the presence of random error. To this effect, we trained models to perform matrix addition and eigenvalue computations on noisy data, by adding a random gaussian error to all coefficients in the input (5 × 5) matrices. Three levels of noise were tested, with standard deviation equal to 1, 2 and 5% of the standard deviation of the matrix coefficients (σ = 5.77 for uniform coefficients in [−10, 10]). For linear operations like addition, we expect the model to predict correct results so long as the tolerance τ remains larger than the error. Table 20 demonstrates that models can be trained on noisy data without loss of accuracy, so long as the ratio between the standard deviation of the error and that of the coefficients is lower than the tolerance. Accuracy drops to about 40% when error levels are approximately equal to the tolerance, and to zero once the error exceeds the tolerance. It is worth noticing that model size and encoding have no apparent impact on robustness to noise. A similar pattern appears in eigenvalue calculations (table 21), but trained models prove more resistant to noise in the data than for addition. For instance, the eigenvalues of matrices with error standard deviation up to 0.05σ can be learnt to high accuracy within 5% tolerance (vs 0.02σ for addition). As before, model size has no impact on robustness. However, FP15 models seem more difficult to train over noisy data than P1000 models. E.2 CO-TRAINING We have shown that transformers can be trained to perform all the tasks mentioned above, training one specific model for each task. In this section, we experiment with co-training: learning several tasks at once. We add a token at the beginning of the input and output sequences indicating the task to be solved (e.g. Transpose or Add), and generate data by randomly selecting a task (with equal probability for all tasks) and producing the corresponding pairs.
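A minimal sketch of such a co-training data pipeline follows; this is our own illustration (the task subset, names and generator interface are hypothetical, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
TASKS = ("Transpose", "Add", "Eigenvalues")  # subset of tasks, for illustration

def sample_pair(task, n=5):
    # Uniform coefficients in [-10, 10], as in Section 2.2.
    m = rng.uniform(-10, 10, (n, n))
    if task == "Transpose":
        return m, m.T
    if task == "Add":
        m2 = rng.uniform(-10, 10, (n, n))
        return np.concatenate([m, m2], axis=1), m + m2  # operands concatenated
    if task == "Eigenvalues":
        sym = (m + m.T) / 2  # symmetrize
        return sym, np.sort(np.linalg.eigvalsh(sym))[::-1]  # descending order
    raise ValueError(task)

def co_training_example():
    # Pick a task uniformly at random; the task token is prepended to both
    # the input and the output sequence before tokenization.
    task = rng.choice(TASKS)
    inp, out = sample_pair(task)
    return task, inp, out
```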
We train transformers with 4 or 6 layers, 512 dimensions and 8 attention heads on eight datasets corresponding to different co-training tasks:
• Transpose and add (TA)
• Transpose, add and dot product (vector-matrix multiplication) (TAD)
• Transpose, add, dot product and matrix multiplication (TADM)
• Transpose, add, dot product, matrix multiplication and eigenvalues (TADME)
• Transpose, add, dot product, matrix multiplication, eigenvalues and eigenvectors (TADMEF)
• Transpose, add, dot product, matrix multiplication, eigenvalues, eigenvectors and matrix inversion (TADMEFI)
• Eigenvalues, eigenvectors and matrix inversion (EFI)
Table 22 summarizes our findings. Lines correspond to co-training tasks, columns to the performance achieved on each specific task (with 5% tolerance). Co-training over a mixture of basic operations (transposition, addition, dot products and multiplication: the TA, TAD and TADM tasks) learns to predict the results of all operations with almost perfect accuracy. Co-training on the basic operations and eigenvalue computations (the TADME task) allows the model to predict eigenvalues with 80% accuracy, in exchange for a loss of performance on the dot product task. In other experiments with this task, the model learned all basic operations to 100% accuracy (as in the TADM setting), and the eigenvalues to a few percent. Adding more tasks, eigenvectors and inversion, results in the same performance. Co-training on the advanced tasks only (eigenvalues, vectors and inversion) results in 100% accuracy on eigenvalue computation, 22% on eigenvectors, and 0% on inversion. These results demonstrate the feasibility of co-training on basic matrix operations, but also suggest that further research is needed if one wants to extend it to all the tasks considered in this paper. F NUMBER OF PARAMETERS The number of parameters in the sequence to sequence transformer we use in this paper can be calculated as follows.
• A self-attention mechanism with dimension d has 4d(d + 1) parameters: it is composed of four linear layers (K, Q, V and the output layer), with d inputs, d outputs and a bias.
• A cross-attention mechanism with d_e dimensions in the encoder and d in the decoder has 2d(d + d_e + 2) parameters (K and V are d_e × d layers).
• An FFN with one hidden layer, d inputs and outputs, and h hidden units has d(h + 1) + h(d + 1) parameters.
• A layer normalization with d dimensions has 2d parameters.
• An encoder layer with dimension d has a self-attention mechanism, an FFN with 4d hidden units (in our implementation) and two layer normalizations, for a total of 12d^2 + 13d parameters.
• A decoder layer has a cross-attention layer (encoding dimension d_e) and a layer normalization on top of an encoder layer, for a total of 14d^2 + 19d + 2d_e d parameters.
• An embedding of dimension d for a vocabulary of w words uses dw parameters, and 2d more if it is coupled with a layer normalization.
• The final prediction layer with an output dimension of d and a decoded vocabulary of w words uses (d + 1)w parameters (but in our case, dw are shared with the decoder embedding).
Overall, the number of parameters for a transformer with n_e encoding layers of dimension d_e, n_d decoding layers of dimension d_d, an input vocabulary of w_i words, an output vocabulary of w_o words and a positional embedding of w_p words (corresponding to the maximum sequence length) can be computed by the formula
P = d_e(w_i + w_p + 2) + ((w_o + w_p + 2)d_d + w_o) + n_e d_e(12d_e + 13) + n_d d_d(14d_d + 2d_e + 19),
the four terms in the sum corresponding to the input embedding, the output embedding, the encoder and the decoder. Table 23 provides the number of parameters for some of the models used in this paper. For the positional embedding, we set the number of words to the longest input and output sequence studied with that model. G EIGENVALUE DISTRIBUTION OF WIGNER MATRICES, AN EMPIRICAL JUSTIFICATION Figure 3 provides an empirical confirmation of the property of Wigner matrices mentioned in sections 2.2 and 5: the standard deviation of their eigenvalues is a function of their dimension and of the standard deviation of their coefficients only, and does not depend on the actual distribution of the coefficients. In particular, for coefficients with standard deviation σ = 10/√3 ≈ 5.77, we expect the standard deviations of the eigenvalue distributions to be 12.91, 18.26, 22.36 and 25.81 for square matrices of dimension 5, 10, 15 and 20. For three distributions (uniform, Laplace and gaussian) and four dimensions (5, 10, 15 and 20), we generated 100,000 random matrices with the same standard deviation of coefficients, and computed their eigenvalues. Standard deviations are within 0.01 of the theoretical values for all distributions and dimensions. It is interesting to note how the distributions (which correspond to the original coefficient distribution for n = 1) come to resemble the semi-circle as dimension increases.
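The parameter-count formula above translates directly into code; the following is a minimal sketch for checking entries of Table 23 (our own illustration, with hypothetical argument names and illustrative values only):

```python
def transformer_params(n_e, d_e, n_d, d_d, w_i, w_o, w_p):
    # P = d_e(w_i + w_p + 2) + ((w_o + w_p + 2)d_d + w_o)
    #     + n_e d_e(12 d_e + 13) + n_d d_d(14 d_d + 2 d_e + 19)
    input_embedding = d_e * (w_i + w_p + 2)
    output_embedding = (w_o + w_p + 2) * d_d + w_o
    encoder = n_e * d_e * (12 * d_e + 13)
    decoder = n_d * d_d * (14 * d_d + 2 * d_e + 19)
    return input_embedding + output_embedding + encoder + decoder

# Example: a 6/1 model with 512 dimensions, the P10 vocabulary (213 tokens)
# on both sides, and a positional embedding of 2048 words.
print(transformer_params(n_e=6, d_e=512, n_d=1, d_d=512,
                         w_i=213, w_o=213, w_p=2048))
```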
1. What is the focus of the paper regarding linear algebra computations?
2. What are the strengths and weaknesses of the transformer model in performing various linear algebra tasks?
3. How does the reviewer assess the paper's framing and claims regarding real-world problems?
4. What additional data or analyses would the reviewer like to see in the paper?
5. How does the reviewer view the out-of-distribution findings presented in the paper?
6. Is there any suggestion for further research or improvements related to the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The authors train generic, dense transformers to perform several standard linear algebra computations, ranging from simple tasks like transposition to complex nonlinear tasks such as matrix inversion. They restrict themselves to relatively small matrices due to the practical limits of the dense, quadratic attention mechanism. The main result of the paper is that transformers can perform fairly well on all tasks, meaning that they can usually produce outputs that are correct up to a relatively small tolerance. The paper also shows that some forms of out-of-distribution generalization are possible, and that this phenomenon is sensitive to the details of the training distribution.
Review
I found this to be an interesting paper overall.
Framing
I found the following claim to be problematic: 'Our results on out-of-distribution generalization provide justification to the idea that models trained over random data can be used to solve "real world" problems'. First, the authors only evaluate on other synthetic distributions. Second, in most "real world" problems, matrices are gigantic relative to the tiny context windows of dense transformers. Third, since traditional methods are always perfectly accurate on all distributions, I think the claim carries with it some burden to elaborate on why such noisy (and much less scalable) methods might prove useful in practice. Note that even if the potential practicality cannot be argued for, I think the experiments are interesting enough to stand on their own.
Sparse data reporting
I would have liked to see much more data collected from the experiments, especially train-loss, validation-loss, and correctness-up-to-tolerance curves over a range of architectures. The curves would also make it clear how many samples had been trained on for each measurement, which would be useful for understanding the relative performance of the different encodings. Note: it is not always clear from the current tables which encoding is even being used. It would also be interesting to see some analysis/visualization of the attention patterns, at least for tasks with relatively simple ground-truth algorithms like transposition and addition. Some experimental results also seem to be omitted; for example, S4.3 claims that "deeper decoders are needed" for matrix-matrix multiplication, but Table 5 does not include enough data to defend this claim.
Out-of-distribution findings seem unsurprising
It is great that the authors assess out-of-distribution generalization, and I appreciate the negative result of generalizing from Wigner matrices to matrices with positive eigenvalues. However, although the details of the subsequent out-of-distribution experiments are interesting, I found it generally unsurprising that models trained on non-Wigner matrices would generalize better, and I thought that the authors tried to make too big a point of this finding.
Co-training?
Although not essential, I would be interested to see how co-training on many of the tasks at once affects sample efficiency.
ICLR
Title Linear algebra with transformers Abstract Most applications of transformers to mathematics, from integration to theorem proving, focus on symbolic computation. In this paper, we show that transformers can be trained to perform numerical calculations with high accuracy. We consider problems of linear algebra: matrix transposition, addition, multiplication, eigenvalues and eigenvectors, singular value decomposition, and inversion. Training small transformers (up to six layers) over datasets of random matrices, we achieve high accuracies (over 90%) on all problems. We also show that trained models can generalize out of their training distribution, and that out-of-domain accuracy can be greatly improved by working from more diverse datasets (in particular, by training on matrices whose coefficients are not independent and identically distributed). Finally, we show that few-shot learning can be leveraged to re-train models to solve larger problems. 1 INTRODUCTION Since their introduction by Vaswani et al. (2017), transformers, originally designed for machine translation, were applied to various problems, from text generation (Radford et al., 2018; 2019) to image processing (Carion et al., 2020) and speech recognition (Dong et al., 2018), where they soon achieved state-of-the-art performance (Dosovitskiy et al., 2021; Wang et al., 2020b). In mathematics, transformers were used for symbolic integration (Lample & Charton, 2019), theorem proving (Polu & Sutskever, 2020), formal logic (Hahn et al., 2021), SAT solving (Shi et al., 2021), symbolic regression (Biggio et al., 2021) and dynamical systems (Charton et al., 2020). All these problems pertain to symbolic mathematics, or involve a large amount of symbolic computation. When working on these tasks, transformers manipulate mathematical symbols, just like words in natural language. But mathematics is not limited to symbol manipulation: many practical applications involve numerical calculations, either exact (e.g. arithmetic) or approximate (e.g. function evaluation, numerical solutions of equations). The use of transformers for numerical computation has been less studied, and many early experiments with arithmetic have proved disappointing (Nogueira et al., 2021). This is, nevertheless, an important question: most problems in mathematics and science involve both symbolic and numerical computations. If we want transformers to solve these problems end-to-end, they need to be able to perform numerical calculations with high accuracy. In this paper, we train transformers to compute solutions of problems of linear algebra, which serve as fundamental building blocks in many scientific problems: basic operations on matrices, matrix inversion, eigenvalue and singular value decompositions. We introduce and discuss four encodings to represent problems and solutions as sequences that transformers can process, and train small transformers (up to 6 layers, 10 to 50 million trainable parameters) over generated datasets of random matrices. Trained models can compute approximate solutions to these problems (to within a few percent of their L1 norm) with over 90% accuracy (99% in most cases). We also show that they can generalize out of their training distribution, and be retrained to extrapolate to larger problems than the ones they were trained on. We believe these results pave the way for using transformers as end-to-end solvers for problems of mathematics and science.
After introducing the problems of linear algebra we are studying and presenting the encodings we use to represent them as sequences that can be processed by our models, we discuss data generation, architecture and experimental settings. Then, we present our experiments on nine different problems, and discuss out-of-distribution generalization and few-shot learning for eigenvalue computation. Finally, we discuss our results and future directions for research, and present related works. 2 PROBLEMS AND DATASETS Let M and N be m × n matrices and V ∈ R^m. We study nine problems of linear algebra: • matrix transposition: find M^T, an n × m matrix, • matrix addition: find M + N, an m × n matrix, • matrix-vector multiplication: find M^T V, a vector in R^n, • matrix multiplication: find M^T N, an n × n matrix, • eigenvalues: M symmetric, find its n (real) eigenvalues, sorted in descending order, • eigenvectors: M symmetric, find D diagonal and Q orthogonal such that M = Q^T D Q, set as an (n + 1) × n matrix, with the (sorted) eigenvalues in its first line, • singular values: find the n eigenvalues of M^T M, sorted in descending order, • singular value decomposition: find orthogonal U, V and diagonal S such that M = USV, set as an (m + n + 1) × min(m, n) matrix, • inversion: M square and invertible, find its inverse P, such that MP = PM = Id. These problems range from basic operations on individual coefficients of the input matrices (transposition and addition), to computations involving several arithmetic operations over many coefficients (multiplication), and complex nonlinear transformations involving the whole matrix, with cubic complexity (decompositions and inversion). For each problem, we generate datasets of pairs of matrices (I, O) by sampling random input matrices I (see section 2.2) and computing the output O with a linear algebra package (NumPy linalg). When a problem has several input or output matrices, they are concatenated into one (for instance, the two m × n operands of the addition task are concatenated into one m × 2n matrix I). All coefficients in I and O are represented in base-ten floating point, rounded to three significant digits in the mantissa. 2.1 ENCODING MATRICES AS SEQUENCES The inputs and outputs of our problems are matrices. To be processed by transformers, they need to be converted into sequences of tokens. We encode an m × n matrix by first coding its dimensions as two symbolic tokens (Vm and Vn), followed by its mn coefficients, encoded as sequences. Throughout this paper, we will use four encoding schemes for matrix coefficients: P10, P1000, B1999, and FP15. In base 10 positional encoding (P10), a number is represented as a sequence of five tokens: one sign token (+ or -), 3 digits (from 0 to 9) for the mantissa, and a symbolic token (from E-100 to E+100) for the exponent. For instance, 3.14 will be represented as 314·10^-2 and encoded as [+, 3, 1, 4, E-2]. P1000 (positional base 1000) provides a more compact representation by encoding the mantissa as a single token (from 0 to 999), and representing a number as the triplet (sign, mantissa, exponent). B1999 (balanced base 1999) pushes this one step further by encoding the sign and mantissa together (from -999 to 999). Finally, FP15 encodes each floating-point number x = m·10^b as a single token FP(m, b). Table 1 provides examples of these encodings. Additional details and examples can be found in Appendix A. Selecting an encoding is a trade-off. Long encodings (P10, P1000) embed knowledge about numbers that the model can leverage (e.g.
numbers can be compared using their signs and exponents, and addition and multiplication can be learned by memorizing small tables). Compact encodings use a larger vocabulary (harder to learn), but generate shorter sequences that facilitate training with transformers. In P10, a 20 × 20 matrix is a sequence of 2002 tokens, close to the practical limit of transformer implementations that use a quadratic attention mechanism. In FP15, it is only 402 tokens long. 2.2 RANDOM MATRIX GENERATION In most of our experiments, we train models over datasets of random matrices with uniformly distributed coefficients in [-A, A] (with A = 10). Occasionally, we sample gaussian coefficients with the same standard deviation (σ = A/√3). In the symmetric case, these matrices are known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (Mehta, 2004). As n increases, this distribution converges to the semi-circle law, p(λ) = √(4σ² − λ²)/(2πσ²), for all coefficient distributions with bounded variance. If the coefficients are gaussian, the associated eigenvectors are uniformly distributed over the unit sphere. When investigating out-of-distribution generalization for the eigenvalue problem, we will need to generate random symmetric matrices with different distributions of their eigenvalues (corresponding to random matrices with non-iid coefficients). To this effect, we randomly sample symmetric matrices M with gaussian coefficients, and compute their eigenvalue decomposition M = PDP^T, with P the orthogonal matrix of eigenvectors (uniformly distributed over the unit sphere since the coefficients are gaussian). We then replace D, the diagonal matrix of eigenvalues of M, with a diagonal matrix D' sampled from another distribution. Finally, we recompute M' = PD'P^T, a symmetric matrix (because P is orthogonal) with eigenvalues distributed as we choose, and eigenvectors uniformly distributed over the unit sphere. 3 MODELS AND EXPERIMENTAL SETTINGS We use the standard transformer architecture introduced in Vaswani et al. (2017), with an encoder and a decoder connected by a cross-attention mechanism. Most of our models have 512 dimensions, 8 attention heads and up to 6 layers. We experiment with different numbers of layers and attention heads in the encoder and decoder. All training is supervised, and minimizes the cross-entropy between the model prediction and the correct solution. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-4, an initial warm-up phase of 10,000 steps and cosine scheduling (Loshchilov & Hutter, 2016). All training data is generated on the fly, in batches of 64 examples. Every 300,000 examples, 10,000 random problems are generated and used to evaluate the model. When evaluating, we consider that a predicted sequence seqP is a correct solution to the problem (I, O) (I and O being the input and output matrices) if it can be decoded as a valid matrix P (several matrices for singular and eigen decomposition) that approximates the correct solution to a given tolerance τ (τ ∈ {5%, 2%, 1%, 0.5%}). For addition, transposition, multiplication, and eigen and singular values, we check that P verifies ‖P − O‖ < τ‖O‖ (with ‖A‖ = Σ_{i,j} |a_{i,j}| for A = (a_{i,j}), i.e. the L1 norm). For eigenvalue decomposition, we check that the solution (Q, D) predicted by the model can reconstruct the input matrix, i.e. ‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖.
For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. The choice of the L1 norm is important: norms like L2 and L∞ will favor models that correctly predict the largest coefficients in the solution. For eigen and singular value problems, this amounts to predicting the largest values, an easier problem than the one we want to solve. We consider different tolerances for different problems. Since we round numbers to three significant digits, 0.5% is the best we can hope for. In fact, a number x with mantissa 1.00 is subject to a maximal rounding error of 0.5% (x ∈ ]0.995, 1.005]), which may accumulate when several (rounded) numbers are summed, and increase again when nonlinear operations are considered. When discussing results, we consider tolerances of 0% for transposition, which involves no arithmetic, 1% for basic matrix operations (addition, multiplication), and 2 or 5% for nonlinear operations (decomposition, inversion), but we usually provide results for all tolerance levels. Most of our experiments focus on 5 × 5 square matrices, or rectangular matrices with as many coefficients (e.g. 6 × 4, 2 × 13). This helps when comparing encodings: for larger dimensions, varying sequence lengths make comparisons difficult. We also study scaled-up versions of the problems (from 8 × 8 to 15 × 15), and datasets with matrices of variable dimensions (5-10 or 5-15). In this paper, we limit ourselves to problems that can be solved using small models (with up to 6 layers). Scaling to larger problems, and leveraging deeper architectures, is left for future research. 4 EXPERIMENTS AND RESULTS 4.1 TRANSPOSITION Learning to transpose a matrix amounts to learning a permutation of its elements. For a square matrix, the permutation is composed of cycles of two elements. Permutations for rectangular matrices involve longer cycles. This task involves no arithmetic operations: tokens from the input sequence are merely copied to the output, in different positions. We investigate two formulations of this problem: a fixed-size case, where all matrices in the dataset have the same dimension and only one permutation is to be learned, and a variable-size case, where the dataset includes matrices of different dimensions, with as many permutations to learn. We train transformers with one layer, 256 dimensions and 8 attention heads in the encoder and decoder, over datasets using our four encoding schemes. All models learn to predict the exact solution (with 0% tolerance) in more than 99% of test cases, for fixed-size matrices, square or rectangular, with dimensions up to 30 × 30. This holds for all encodings, and for input or output sequences up to 2000 tokens long. Similar accuracies are achieved for variable-size datasets (over 99% for 5-15 and 96% for 5-20), with the rectangular cases proving slightly more difficult to train. Table 2 summarizes our results. 4.2 ADDITION Learning to add two m × n matrices amounts to learning the correspondence between the positions of input and output (as in the transposition task) and the algorithm for adding two numbers in floating-point representation, which will be performed on mn pairs of elements. We train transformers with 1 or 2 layers, 8 attention heads and 512 dimensions. Sums of fixed-size matrices with dimensions up to 10, both square and rectangular, are predicted with over 99% accuracy within 1% tolerance (and over 98% within 0.5% tolerance), with all four encodings.
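As an aside on how such accuracy figures are computed, the tolerance test of section 3 can be sketched in a few lines (our reconstruction; it assumes the predicted sequence has already been decoded into a NumPy array, with tol = 0.01 corresponding to the 1% tolerance quoted above):

```python
import numpy as np

def is_correct(P, O, tol=0.01):
    """Accept prediction P if its L1 distance to the target O is below tol * ||O||_1,
    where ||A||_1 is the sum of the absolute values of the coefficients."""
    if P is None or P.shape != O.shape:  # undecodable or malformed model output
        return False
    return float(np.abs(P - O).sum()) < tol * float(np.abs(O).sum())
```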
As dimensions increase, models using the P10 and P1000 encodings become more difficult to train as input sequences grow longer: adding two 15 × 15 matrices involves 450 input coefficients, a sequence of 1352 tokens in P1000 and 2252 in P10. Nevertheless, FP15 models achieve 99.5% accuracy within 0.5% tolerance on 15 × 15 matrices, and B1999 models 89.7% accuracy within 1% tolerance on 20 × 20 matrices. Variable-size matrices with dimensions up to 10 are predicted by 2-layer transformers using the B1999 encoding with over 99.5% accuracy within 1% tolerance. Over matrices with larger dimensions (5-15), shallow models with 1 or 2 layers struggle, and their accuracy drops to 48 and 37% in the square and rectangular cases. This can be mitigated by using deeper decoders: models with one layer in the encoder and 6 in the decoder achieve 77 and 87% accuracy on the same datasets. Table 3 summarizes our results. 4.3 MULTIPLICATION Multiplication of a matrix M of dimension m × n by a vector V ∈ R^n amounts to computing the m dot products between V and the rows of M. Each calculation features n multiplications and n − 1 additions, and involves one row of the matrix and all coefficients of the vector. The model must now learn the positions of the 2n elements in the computation, and two operations (add and multiply). Experimenting with models with 1 or 2 layers, over 5 × 5 matrices, we observe that only models using the P10 and P1000 encodings can be trained to high accuracy. The P1000 encoding performs best, with little difference between two- and one-layer models. Accuracies over 99.9%, at 1% tolerance, are achieved by 2-layer transformers using the P1000 encoding for 5 × 5 and 10 × 10 square matrices. Comparable accuracies are achieved when multiplying rectangular matrices by vectors with the same overall number of coefficients (30). Experiments with datasets of matrices of variable size (from 5 to 10) achieve non-trivial performance (from 48% with 1% tolerance to 72% with 5% tolerance, for square matrices). Results are summarized in table 4. Multiplication of matrices M and P is a scaled-up version of the matrix-vector multiplication, which is now performed for every column in matrix P. As before, only models using the P10 and P1000 encodings can be trained to predict to high accuracy. Over 5 × 5 matrices and rectangular matrices of similar size, trained model accuracy is the same as for vector multiplication (over 99% at 1% tolerance, see table 5), but deeper decoders (with 4 to 6 layers) are needed. 4.4 EIGENVALUES We now turn to nonlinear problems that are usually solved by iterative algorithms. We train models with 4 or 6 layers in the encoder or the decoder to predict the eigenvalues of symmetric matrices. Over samples of 5 × 5 random matrices, we reach 100% accuracy at 5% tolerance, and 98.5% at 1%, for all four encodings. For 8 × 8 matrices, we achieve accuracies of 100 and 85% at 5 and 1% tolerance. Larger problems, however, prove difficult to learn: on 10 × 10 matrices, 25% accuracy at 5% tolerance is reached only after 360 million examples. As a comparison, 5 × 5 models train to maximum accuracy in about 40 million examples, and 8 × 8 models in about 60 million. We can overcome this limitation by training models on variable-size datasets.
On samples of matrices with dimensions 5-10, 5-15 and 5-20, we achieve 100% accuracy at 5% tolerance, and 88, 94 and 45% at 1%. Using the 5-15 model, the eigenvalues of 10 × 10 matrices can be predicted with 100% accuracy at 2% tolerance, and 73% at 1%. Table 6 summarizes our results. 4.5 EIGENVECTORS This is an expanded version of the previous task: together with the eigenvalues, we predict the orthogonal matrix of eigenvectors. Over 5 × 5 matrices, models using the P10 and P1000 encodings achieve 97.0 and 94.0% accuracy at 5% tolerance. FP15 models fare less well, with an accuracy of 51.6%, but asymmetric models, with a 6-layer FP15 encoder and a 1-layer P1000 decoder, achieve 93.5% accuracy at 5% and 67.5% at 1% tolerance. The eigenvectors of 6 × 6 matrices can be predicted by P1000 models with an accuracy of 81.5%. Table 7 summarizes our results. 4.6 INVERSION Inversion of 5 × 5 matrices proves more difficult than the previous tasks, with accuracies of 73.6% for P10 models and 80.4% for P1000 models (5% tolerance, 6-layer encoders and 1-layer decoders). Increasing the number of attention heads to 10 and 12 brings little improvement in accuracy, but allows for faster training: 8-head models are trained to 75% accuracy in about 250 million examples, 10- and 12-head models in only 120 million. The highest accuracies (90.0%) are achieved by asymmetric models: a 6-layer FP15 encoder with 12 attention heads, and a 1-layer P1000 decoder with 8 heads. 4.7 SINGULAR VALUE DECOMPOSITION Whereas this task is related to eigen decomposition (the singular values of a symmetric matrix are the absolute values of its eigenvalues), it proves more difficult to learn: transformers with up to 6 layers, using the P10 or P1000 encoding, can predict the singular value decomposition of 4 × 4, but not 5 × 5, matrices. Accuracies remain high: 100 and 86.7% for singular values (at 5 and 1% tolerance), and 98.9 and 75.3% for the full decomposition. 5 OUT-OF-DOMAIN GENERALIZATION AND RETRAINING In this section, we focus on the prediction of eigenvalues of symmetric matrices. To train our models, we generate random n × n matrices with independent and identically distributed (iid) coefficients, sampled from a uniform distribution over [-A, A]. These belong to a common class of random matrices, known as Wigner matrices. Their eigenvalues have a centered distribution with standard deviation σ = √n·s, where s is the standard deviation of the coefficients (s = A/√3 when uniform). As n increases, this distribution converges to the semi-circle law (Mehta, 2004). Whereas Wigner matrices are very common in science, random matrices with different eigenvalue distributions (and non-iid coefficients) appear in many practical cases. For instance, statistical covariance matrices have all their eigenvalues positive, and the adjacency matrices of scale-free and other non-Erdos-Renyi graphs have centered but non-semi-circle distributions of eigenvalues (Preciado & Rahimian, 2017). It is, therefore, important to understand how models trained on Wigner matrices perform on matrices with different distributions of eigenvalues. To this effect, we create test sets of 10,000 matrices with distributions different from that of the training set. First, we generate matrices with uniform iid coefficients (as in the training set), but a different standard deviation: σ_test ∈ [0.1σ_train, 1.5σ_train]. Over these test sets, our trained models achieve over 96% accuracy (with 2% tolerance) for σ_test ∈ [0.6σ_train, σ_train].
However, model accuracy drops when σ_test is out of this range: 54% for 0.4σ_train, 0% for 0.2σ_train, 26% for 1.1σ_train and 2% for 1.3σ_train. Out-of-distribution generalization only takes place when the test set variance is lower than, but not too far from (above roughly 25% of), the training set variance. Then, we generate test sets of matrices with different eigenvalue distributions: positive eigenvalues (Wigner matrices with eigenvalues replaced by their absolute values), and eigenvalues distributed according to the uniform, gaussian or Laplace law (see section 2.2), with standard deviation σ_train and 0.6σ_train. Over test sets with σ_test = σ_train, accuracies are 26% for Laplace, 25% for gaussian, 19% for uniform, and 0% for positive eigenvalues. Results are slightly better for test sets with lower standard deviation (0.6σ_train): 28, 44, 60 and 0% for Laplace, gaussian, uniform and positive, respectively. Overall, out-of-distribution accuracies are low, and matrices with positive eigenvalues cannot be predicted at all. To improve out-of-distribution accuracy, we train new models on datasets with different distributions of eigenvalues, and evaluate them on the test sets previously created. First, we generate matrices with uniform coefficients but variable standard deviation (by randomly selecting A ∈ [1, 100] for each matrix). Unsurprisingly, models trained on this dataset achieve high accuracies on test sets of Wigner matrices with high or low variance. Performance also increases over the gaussian, uniform and Laplace-distributed test sets (from 25-60% to 53-68%). Yet, matrices with positive eigenvalues still cannot be predicted. Training models over a mixture of (Wigner) matrices with uniform iid coefficients and matrices with positive eigenvalues results in better prediction of positive eigenvalues, but degrades performance over all other test sets. However, models trained on a mixture of matrices with uniform coefficients and matrices with gaussian eigenvalues, or on uniform iid and Laplace eigenvalues, achieve high accuracies over all test sets, as do models trained on matrices with Laplace eigenvalues only, or on a mixture of uniform, gaussian and Laplace eigenvalues (all non-Wigner matrices). These experiments are presented in table 10. This is an important result: it suggests that Wigner matrices, often considered the default model for random matrices, might not be the best choice for training transformers. Models trained on non-Wigner matrices (non-iid coefficients, limit distribution of eigenvalues not a semi-circle) generalize to matrices with iid coefficients, whereas the reverse is not true. This confirms that out-of-distribution generalization requires that particular attention be paid to training data generation. Models trained on matrices of a given size do not generalize to different dimensions, but they can be retrained over samples of matrices of different sizes. This takes comparatively few examples: a 5 × 5 model, which takes 40 million examples to train, can learn to predict the eigenvalues of matrices of dimension 6 and 7 with high accuracy from about 25 million additional examples. Table 11 presents those results. The capacity of pre-trained large transformers (such as GPT-3) to perform few-shot learning is well documented, but it is interesting to observe the same phenomenon in smaller models. 6 DISCUSSION AND FUTURE DIRECTIONS Our experiments demonstrate that transformers can be trained to solve problems of linear algebra, using randomly generated datasets.
However, their accuracy depends on the encodings used to represent matrix coefficients. We introduce four encoding schemes, and our experiments suggest that P10 is generally dominated by P1000, which is also more economical, and that B1999 never really finds its use, as FP15 is more compact and P1000 more efficient. P1000 seems to be a good choice for problems of moderate size, and FP15 for longer sequences. For advanced problems like eigenvectors and inversion, asymmetric architectures, with a deep FP15 encoder and a shallow P1000 decoder, achieve the best performance. Our interpretation is that P1000 in the decoder facilitates training because the meaningful representation of the output as (sign, mantissa, exponent) triplets allows for better error feedback during training. On the other hand, a deep FP15 encoder can provide more complex representations of the input matrix, while being easier to train thanks to the shorter sequences. Such asymmetric architectures also benefit from more attention heads (10 to 12) in the encoder, while fewer heads (4) in the decoder improve training stability at no cost in accuracy. These asymmetric architectures deserve further study. Most of our experiments focus on matrices with 5 to 10 lines and columns. Our results on the eigenvalue problem suggest that larger problems can be solved by training over matrices of variable size, or by retraining over larger matrices. In this work, matrices of different dimensions are sampled in equal proportion and presented for training in random order. Varying their proportions and scheduling (i.e. curriculum learning) should result in better performance. Yet, as dimension increases, sequence lengths will reach the practical limits of quadratic attention mechanisms. Experimenting with transformers with linear or log-linear attention (Zaheer et al., 2021; Wang et al., 2020a; Vyas et al., 2020; Child et al., 2019) is a natural extension of our work. In terms of asymptotic complexity, matrix inversion (and the other nonlinear tasks) is usually handled by O(n^3) algorithms (although O(n^2.37) methods are known). Since our sequence length is O(n^2), transformers with quadratic attention mechanisms are O(n^4). Linear attention would reduce this to O(n^2). The out-of-distribution experiments are our most significant results. They prove that models trained on random data can generalize to a wide range of test distributions. They also confirm the importance of wisely selecting training data distributions, a process that can be counter-intuitive. In our specific case, the “obvious” random model (Wigner matrices) is not the best for out-of-domain generalization. In fact, we show that sets of “special” matrices (non-iid coefficients with Laplace eigenvalues) can produce models with a better capability for generalization, notably on Wigner matrices. This matches the intuitive idea that we learn more from edge cases than from averages. 7 RELATED WORK Algorithms using neural networks to compute eigenvalues and eigenvectors have been proposed since the early 1990s (Samardzija & Waterland, 1991; Cichocki & Unbehauen, 1992; Yi et al., 2004; Tang & Li, 2010; Oja, 1992), and improvements to the original techniques have been suggested until recently (Finol et al., 2019). Similar approaches have been proposed for other problems in linear algebra (Wang, 1993a;b; Zhang et al., 2008).
All these methods leverage the Universal Approximation Theorem (Cybenko, 1989; Hornik, 1991), which states that, under weak conditions on their activation functions, neural networks can approximate any continuous mapping (in our case, the mapping between the coefficients of a matrix and its associated eigenvalues and vectors). These approaches rely on the fact that eigenvalues and vectors appear in the solutions of particular differential equations involving the matrix coefficients (e.g. Brockett (1991)). By designing a neural network that represents this differential equation, with the matrix to decompose as the input and the coefficients of the output layer as the solution, and by defining a loss function that measures how well the output layer approximates the correct solution, the network can be trained to find better and better approximations to the solution. These techniques have two main limitations: they rely on a problem-specific network architecture that has to be hand-coded, and computation is done at train time, which makes them slow and implies retraining the network every time a new matrix is to be processed. In comparison, the techniques proposed in this paper are trained once, and can compute solutions at inference time for any input matrix. Techniques have been proposed to train neural networks to compute basic mathematical operations, and to use them as building blocks for larger components. Kaiser & Sutskever (2015) introduced the Neural GPU, which could learn addition and multiplication over binary representations of integers. Trask et al. (2018) proposed Neural Arithmetic Logic Units (NALU), which can learn to perform exact additions, subtractions, multiplications and divisions by constraining the weights of a linear network to remain close to 0, 1 or -1. Both Neural GPU and NALU have been shown to be able to extrapolate to numbers far larger than those they were trained on. For matrix multiplication, Blalock & Guttag (2021) use learning techniques to improve on known approximation methods. The use of transformers in mathematics has mostly focused on symbolic computations. Lample & Charton (2019) showed that transformers could be trained to integrate functions and solve ordinary differential equations and, in a follow-up work (Charton et al., 2020), to predict properties of differential systems. Transformers have also been applied to formal systems, in theorem proving (Polu & Sutskever, 2020) and temporal logic (Hahn et al., 2021). The use of sequence-to-sequence models for arithmetic and the exact solution of mathematical problems has been studied by Saxton et al. (2019). In a recent paper, Nogueira et al. (2021) point to the limitations of experiments on arithmetic. 8 CONCLUSION We have shown that transformers can be trained over generated data to solve problems of linear algebra with high accuracy, and that careful selection of the generative model for their training data can allow them to generalize out of their training distribution. This demonstrates that applications of transformers to mathematics are not limited to symbolic calculation, and can cover a wide range of scientific problems featuring numerical computations. We believe our results pave the way for wider applicability of transformers in science. Reproducibility statement The transformer implementation and the framework for running the experiments were used in several prior works, and rely on standard libraries (PyTorch for the models, NumPy for mathematical calculations).
The model source code, data generation code, and parameters for experiments will be open-sourced and made publicly available. All experiments were run several times (on average 10 times), with multiple random seeds and light modifications of the hyperparameters (e.g. small changes in model size, weight initialization, activation functions), to guarantee their robustness. Ethics statement Given the subject of the paper, and the fact that all data used are randomly generated, we believe that no potential ethical concerns are raised by this research. A NUMBER ENCODINGS Let x be a non-zero real number; it can be represented uniquely as x = s·m·10^e, with s ∈ {-1, 1}, m ∈ [100, 1000), e ∈ Z. Rounding m to the nearest integer n, we get the base-ten, floating-point representation of x with three significant digits: x ≈ s·n·10^e, (s, n, e) ∈ Z³. By convention, 0 is encoded as +0·10^0. All our encodings are possible representations of the triplets (s, n, e). In this paper, we limit e to the range [-100, 100] and n to the range [100, 999]. In base-N positional encoding, we encode s (the sign) and e (the exponent) as unique tokens: + or - for s, and a token from E-100 to E100 for e. The mantissa n is encoded as the representation of n in base N (e.g. binary representation if N = 2, decimal representation if N = 10), a sequence of ⌈log_N(1000)⌉ tokens from 0 to N-1. Overall, a number will be encoded as a sequence of ⌈log_N(1000)⌉ + 2 tokens, from a vocabulary of 202 + N tokens. For instance, x = e^π ≈ 23.14069 will be represented by 231·10^-1, and encoded in P10 (base 10 positional) as the sequence [+,2,3,1,E-1], and in P1000 (base 1000 positional) as [+,231,E-1]. x = -0.5 will be represented as -500·10^-3, and encoded in P10 as [-,5,0,0,E-3], and in P1000 as [-,500,E-3]. Other bases N could be considered, as well as different bases for the exponent, and different sizes of the mantissa. In this paper, we use P10 to encode numbers with absolute value in [10^-100, 10^101] as sequences of 5 tokens, using a vocabulary of 213 tokens (10 digits, 2 signs, and 201 values of the exponent), and P1000 as sequences of 3 tokens, with a vocabulary of 1104. Balanced base 2a + 1 uses digits between -a and a (Knuth, 1997). For instance, in balanced base 11, digits range from -5 to 5 (an everyday example of a balanced base can be found in the way we state the hour, as "twenty to two" or "twenty past two"). Setting a to 999, we define B1999, and encode the sign and mantissa as a single token between -999 and 999. Numbers are then encoded on two tokens, with a vocabulary of 2004. Finally, we encode floating-point numbers as unique tokens by rewriting any number as x = m·10^b, with m ∈ [-999, 999], b ∈ [-(p+2)/2, (p+2)/2], p + 2 ≡ 0 (mod 2), and encoding it as the unique token FP(m, b). This allows us to represent numbers with 3 significant digits and a dynamic range of 10^(p+2), using a vocabulary of (1.8p + 2)·10^3 tokens. In this paper, we use p = 16, encoding numbers as unique tokens with a vocabulary of 30,000 (FP15). B L1, L2 AND L∞ NORMS FOR EVALUATION We evaluate the accuracy of our trained models by decoding model predictions and verifying that they approximate the correct solution up to a fixed tolerance τ. In the general case, if the model predicts a sequence seqP, and the solution of the problem is O, we consider that the prediction is correct if seqP can be decoded into a matrix P and ‖P − O‖ < τ‖O‖ (1). For eigenvalue decomposition, we check that the solution (Q, D) predicted by the model can reconstruct the input matrix, i.e.
‖Q^T DQ − I‖ < τ‖I‖. For singular value decomposition, we check that ‖USV − I‖ < τ‖I‖. For matrix inversion, we check that ‖PI − Id‖ < τ‖Id‖ = τ. All our published results use the L1 norm: ‖A‖ = Σ_{i,j} |a_{i,j}| for A = (a_{i,j}). In this section, we discuss the impact of using different norms, namely L2 (‖A‖ = Σ_{i,j} a_{i,j}²) or L∞ (‖A‖ = max_{i,j} |a_{i,j}|). Using the L1 norm in equation 1 amounts to comparing the average absolute error on the predicted coefficients (P − O) to the average absolute value of the coefficients of O. Using L2 compares the squared errors, and will bias the estimation towards large absolute errors and coefficients of O with large absolute values. L∞ compares the largest absolute error to the largest coefficient in |O|. The choice of the norm has a different impact for different problems. Figure 1 presents learning curves using the three norms for our best models on different problems. For basic arithmetic operations (transposition, addition, multiplication), there is little difference between L1 and L2 accuracies, and therefore no reason to prefer one over the other for model evaluation. For eigen and singular value problems, L2 accuracies reach a high value early during training, long before the model begins to learn according to the other norms. This is due to the fact that the eigenvalues of Wigner matrices tend to be regularly spaced over the interval [-2σ, 2σ] (σ = √n·s, with s the standard deviation of the coefficients and n the dimension of the matrix). This means that, in many cases, the model can predict the largest absolute eigenvalues from the bounds of the interval (which can be estimated from the dataset). For this reason, L2 accuracy is not a good evaluation metric for the eigenvalue or singular value problem. This is particularly clear in the 10 × 10 case: transformers struggle with such matrices, and L1 and L∞ accuracies remain very low even after a thousand epochs (300 million examples), but L2 accuracy is close to 100% from the beginning of training. A similar phenomenon takes place for eigenvector calculations: L2 and L∞ accuracies rise steeply, long before the model begins to learn (according to the L1 norm). This justifies the choice of L1 as our evaluation norm. C ADDITIONAL EXPERIMENTAL RESULTS C.1 LEARNING CURVES FOR DIFFERENT ENCODINGS AND ARCHITECTURES Figure 2 presents learning curves for loss and accuracy (with 5 and 1% tolerance) of different models, for four problems. These curves indicate the number of training examples needed for each problem. On average, our best models learn basic operations on matrices in less than 50 epochs (15 million examples). Training size requirements increase with operation complexity: from 30 million examples for eigenvalues, to 120 million for eigenvectors, and over 150 million for matrix inversion. On the inversion problem, we experiment with the number of attention heads in the encoder. Increasing the number of heads from 8 to 10 and 12 improves learning speed and accuracy. Over 12 heads, this benefit disappears: with 16 heads, our models need 800 epochs to train to 55% accuracy (with 5% tolerance). We believe that this reflects the trade-off between the number of heads (more heads catch more dependencies between elements in the input sequence) and the downsampling of attention patterns (when the internal model dimension remains fixed). Finally, we notice that the learning curves for the harder problems (eigenvalues, vectors and inversion) are noisy.
This is caused by the learning rates: our models usually need small learning rates (5·10^-4 before scheduling is typical), and there is a trade-off between low rates that stabilize the learning curve and larger rates that accelerate training. C.2 MODEL SIZE The two main factors influencing model size are depth and the number of dimensions (see Appendix F). In this section we discuss how model size impacts accuracy for the addition of 10 × 10 matrices, the multiplication of a 5 × 5 matrix by a vector, and the computation of the eigenvalues of a 5 × 5 matrix. All models in this section are symmetric (same dimension and number of layers in the encoder and decoder) and have 8 attention heads. For the addition task, tables 12 and 13 present the accuracy reached after 60 epochs (18 million examples) and the number of epochs (of 300,000 examples) needed to reach 95% accuracy, for models using the P1000 and B1999 encodings. Both encodings allow shallow architectures (1/1 and 2/2 layers) to learn addition with high accuracy, but the more compact B1999 supports smaller models (256 dimensions). In terms of speed, with B1999, shallow models train very fast, but it takes many examples to train deeper models. The opposite is true for P1000 models. Table 14 presents the learning speed of models of different sizes for the matrix-vector product and eigenvalue computation tasks (5 × 5 matrices, P1000 encoding). For each problem, there exists a minimal dimension and depth below which models struggle to learn: one layer and 128 dimensions for products, one layer or 128 dimensions for eigenvalues. Above that limit, increasing the dimension accelerates learning. Increasing the depth, on the other hand, brings no clear improvement in speed or accuracy. Finally, we experiment with larger models on larger problems. We trained models with 8 to 12 layers and 512 to 2048 dimensions on sets of 10 × 10 matrices, without success. As discussed in section 4.4, those problems are out of reach of the models we use in this paper (unless we use curriculum learning and train on mixed-size datasets). Increasing model size does not seem to help scaling to larger matrices. C.3 MODEL PERFORMANCE ON DIFFERENT DATASETS Table 15 summarizes in-domain performance (i.e. accuracy when the test set is generated with the same procedure as the training set) for different datasets. On Wigner matrices (i.e. matrices with independent and identically distributed, iid, coefficients), uniformly or normally distributed, with fixed-range coefficients (i.e. all matrices in the dataset have coefficients uniformly sampled from the same interval) or variable-range coefficients (i.e. the coefficient range varies from one matrix to another), all models achieve very high (99+%) accuracy. The eigenvalues of non-Wigner matrices with gaussian- or Laplace-distributed eigenvalues are also predicted to high accuracy by all models. Over matrices with positive or uniformly distributed eigenvalues, smaller models using the FP15 encoding prove difficult to train. Finally, on mixtures of Wigner and non-Wigner matrices, all models predict to high accuracy. D ALTERNATIVE ARCHITECTURES D.1 OTHER SEQUENCE TO SEQUENCE MODELS: LSTM AND GRU We experimented with two popular recurrent architectures, long short-term memories (Hochreiter & Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014), on three tasks: addition of 5 × 5 and 10 × 10 matrices, and eigenvalues and matrix inversion of 5 × 5 matrices.
We experiment with sequence-to-sequence models, featuring an encoder and a decoder (LSTM or GRU), with 2 to 8 layers and 1024 or 2048 hidden dimensions. The input and output sequences, encoded as in the rest of the paper, are pre-processed (and decoded) via an embedding layer with 256 or 512 dimensions. Addition, a very easy task for transformers (see section 4.2), proves difficult for LSTM and GRU. None of our models can learn addition of 10 × 10 matrices. Some models can learn addition of 5 × 5 matrices, but whereas transformers achieve 100% accuracy for all tolerances, our best LSTM and GRU only exceed 90% at 1% tolerance. GRU seem to perform better than LSTM on this task, and 2-layer models perform better than 4-layer models, but transformers have a distinct advantage over LSTM and GRU on addition. Both LSTM and GRU can be trained to predict eigenvalues of 5 × 5 matrices with the same accuracy as transformers, for the P1000 and FP15 encodings (table 17). Matrix inversion, on the other hand, cannot be learned. Overall, these experiments show that other sequence-to-sequence architectures, LSTM and GRU, can learn tasks like eigenvalues and addition of small matrices. However, they are less efficient on addition (in terms of precision and scaling to larger matrices) and fail on more complex tasks, like matrix inversion. D.2 SHARED-LAYER TRANSFORMERS: UNIVERSAL TRANSFORMERS In the Universal Transformer (Dehghani et al., 2018), the stacked layers of usual transformer implementations are replaced by one layer that is looped through a fixed number of times (feeding the output of one iteration into the input of the next). This amounts to sharing the weights of the different layers, therefore greatly reducing the number of parameters in the model. This technique can be applied to the encoder, the decoder or both. The number of iterations is a fixed hyperparameter, but the original paper also proposed a halting mechanism inspired by Adaptive Computation Time (Graves, 2016), to adaptively control loop length at the token level. In this version, a stopping probability is learned for every position in the input sequence, and once it reaches a certain threshold, the layer merely copies the input onto the output. The iteration stops when all positions have halted, or when a preset iteration limit is reached. A recent paper (Anonymous, 2022) proposed to use a similar copy-gating mechanism to skip iterations in a fixed-length loop. We experiment with these three variants (fixed length, adaptive halting, copy gating) on the addition (of 10 × 10 matrices), eigenvalue and matrix inversion (of 5 × 5 matrices) tasks. For the addition task, we train universal transformers with one layer in the encoder and decoder, 256 or 512 dimensions and 8 attention heads. We use the B1999 encoding for the data. We experiment with a looped encoder, a looped decoder, and loops in both, with a loop length of 4, either with copy-gating or with ACT (4 then being the maximum number of iterations). Table 18 summarizes our findings. Only models with encoder loops learn to add, and models with 512 dimensions learn with over 95% accuracy for all tolerances. Universal transformers with one layer (looped encoder only) perform as well as 2/2 transformers. On the eigenvalue task, we experiment with the P1000 and FP15 encodings, on encoder-loop-only 1/1 universal transformers with 4 or 8 loops.
Universal transformers using the P1000 encoding achieve the same performance (with only one layer) as the transformers in our main experiments. Four-loop transformers seem best, with gates not improving performance and ACT slightly degrading it. With the FP15 encoding, universal transformers become very difficult to train: only the 4-loop gated version achieves significant accuracy (still lower than the 6/1 transformers). Finally, we experimented with matrix inversion, with FP15/P1000 and P1000/P1000 encodings, and 4 or 8 loops in the encoder. A gated universal transformer using FP15 on the input and P1000 on the output achieved 73% accuracy, a significant result, albeit lower than the best result achieved with 6/1 transformers using the same encodings (90%). With the P1000 encoding, the best universal transformers reach 55% accuracy, compared to 80% for their 6/1 transformer counterparts. Overall, universal transformers seem to achieve performance comparable to deep transformers (except on the inversion task), while using fewer parameters. This makes shared-layer transformers an interesting direction for future work. E ADDITIONAL EXPERIMENTS E.1 NOISY DATA Experimental data is often noisy, and it is interesting to see how our models behave in the presence of random error. To this effect, we trained models to perform matrix addition and eigenvalue computations on noisy data, by adding a random gaussian error to all coefficients in the input (5 × 5) matrices. Three levels of noise were tested, with standard deviation equal to 1, 2 and 5% of the standard deviation of the matrix coefficients (σ = 5.77 for uniform coefficients in [-10, 10]). For linear operations like addition, we expect the model to predict correct results so long as the tolerance τ remains larger than the error. Table 20 demonstrates that models can be trained on noisy data without loss of accuracy, so long as the ratio between the standard deviation of the error and that of the coefficients is lower than the tolerance. Accuracy drops to about 40% when error levels are approximately equal to the tolerance, and to zero once the error exceeds the tolerance. It is worth noticing that model size and encoding have no apparent impact on robustness to noise. A similar pattern appears in eigenvalue calculations (table 21), but trained models prove more resistant to noise in the data than for addition. For instance, the eigenvalues of matrices with error standard deviation up to 0.05σ can be learned to high accuracy within 5% tolerance (vs 0.02σ for addition). As before, model size has no impact on robustness. However, FP15 models seem more difficult to train over noisy data than P1000 models. E.2 CO-TRAINING We have shown that transformers can be trained to perform all the tasks mentioned above, training one specific model for each task. In this section, we experiment with co-training: learning several tasks at once. We add a token at the beginning of the input and output sequences indicating the task to be solved (e.g. Transpose or Add), and generate data by randomly selecting a task (with equal probability for all tasks) and producing the corresponding pairs.
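A sketch of this sampling scheme (ours; `generators` stands for the per-task example generators described in sections 2 and 4, and the task-token names are illustrative):

```python
import random

TASKS = ["Transpose", "Add", "Dot", "Matmul", "Eigenvalues", "Eigenvectors", "Invert"]

def cotraining_example(generators, tasks=TASKS):
    """One co-training pair: sample a task uniformly, generate an (input, output)
    token pair for it, and prepend the task token to both sequences."""
    task = random.choice(tasks)
    src, tgt = generators[task]()   # single-task generators, as described in the text
    return [task] + src, [task] + tgt
```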
We train transformers with 4 or 6 layers, 512 dimensions and 8 attention heads on eight datasets corresponding to different co-training tasks: • Transpose and add (TA) • Transpose, add and dot product (vector-matrix multiplication) (TAD) • Transpose, add, dot product and matrix multiplication (TADM) • Transpose, add, dot product, matrix multiplication and eigenvalues (TADME) • Transpose, add, dot product, matrix multiplication, eigenvalues and eigenvectors (TADMEF) • Transpose, add, dot product, matrix multiplication, eigenvalues, eigenvectors and matrix inversion (TADMEFI) • Eigenvalues, eigenvectors and matrix inversion (EFI) Table 22 summarizes our findings. Lines correspond to co-training tasks, columns to the performance achieved on each specific task (with 5% tolerance). Co-training over a mixture of basic operations (transposition, addition, dot products and multiplication: the TA, TAD and TADM tasks) learns to predict the results of all operations with almost perfect accuracy. Co-training on the basic operations and eigenvalue computations (the TADME task) allows the model to predict eigenvalues with 80% accuracy, in exchange for a loss of performance on the dot product task. In other experiments with this task, the model learned all basic operations to 100% accuracy (as in the TADM setting), but eigenvalues only to a few percent accuracy. Adding more tasks, eigenvectors and inversion, results in the same performance. Co-training on the advanced tasks only (eigenvalues, vectors and inversion) results in 100% accuracy on eigenvalue computation, 22% on eigenvectors, and 0% on inversion. These results demonstrate the feasibility of co-training on basic matrix operations, but also suggest that further research is needed if one wants to extend it to all the tasks considered in this paper. F NUMBER OF PARAMETERS The number of parameters in the sequence-to-sequence transformer we use in this paper can be calculated as follows (these components are combined in the sketch below, and in the closed-form formula that follows it). • A self-attention mechanism with dimension d has 4d(d + 1) parameters: it is composed of four linear layers (K, Q, V and the output layer), with d inputs, d outputs and a bias. • A cross-attention mechanism with d_e dimensions in the encoder and d in the decoder has 2d(d + d_e + 2) parameters (K and V are d_e × d layers). • A FFN with one hidden layer, d inputs and outputs, and h hidden units has d(h + 1) + h(d + 1) parameters. • A layer normalization with d dimensions has 2d parameters. • An encoder layer with dimension d has a self-attention mechanism, a FFN with 4d hidden units (in our implementation) and two layer normalizations, for a total of 12d² + 13d parameters. • A decoder layer has a cross-attention layer (encoding dimension d_e) and a layer normalization on top of an encoder layer, for a total of 14d² + 19d + 2d_e·d parameters. • An embedding of dimension d for a vocabulary of w words uses dw parameters, and 2d more if it is coupled to a layer normalization. • The final prediction layer with an output dimension of d and a decoded vocabulary of w words uses (d + 1)w parameters (but in our case, dw of them are shared with the decoder embedding).
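Putting these pieces together in code (a sketch; the argument names are ours, and it simply mirrors the closed-form formula stated in the next paragraph):

```python
def transformer_params(n_enc, d_enc, n_dec, d_dec, w_in, w_out, w_pos):
    """Total parameter count assembled from the per-component figures above."""
    input_embedding  = d_enc * (w_in + w_pos + 2)           # two embeddings + layer norm
    output_embedding = (w_out + w_pos + 2) * d_dec + w_out  # + prediction-layer biases
    encoder = n_enc * d_enc * (12 * d_enc + 13)             # n_e layers of 12d^2 + 13d
    decoder = n_dec * d_dec * (14 * d_dec + 2 * d_enc + 19) # adds cross-attention terms
    return input_embedding + output_embedding + encoder + decoder
```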
Overall, the number of parameters for a transformer with n_e encoder layers of dimension d_e, n_d decoder layers of dimension d_d, an input vocabulary of w_i words, an output vocabulary of w_o words and a positional embedding of w_p words (corresponding to the maximum sequence length) can be computed by the formula: P = d_e(w_i + w_p + 2) + ((w_o + w_p + 2)d_d + w_o) + n_e·d_e(12d_e + 13) + n_d·d_d(14d_d + 2d_e + 19), the four terms in the sum corresponding to the input embedding, the output embedding, the encoder and the decoder. Table 23 provides the number of parameters for some of the models used in this paper. For the positional embedding, we set the number of words to the longest input or output sequence studied with that model. G EIGENVALUE DISTRIBUTION OF WIGNER MATRICES, AN EMPIRICAL JUSTIFICATION Figure 3 provides an empirical confirmation of the property of Wigner matrices mentioned in sections 2.2 and 5: the standard deviation of their eigenvalues is a function of their dimension and of the standard deviation of their coefficients only, and does not depend on the actual distribution of the coefficients. In particular, for coefficients with standard deviation σ = 10/√3 ≈ 5.77, we expect the standard deviation of the eigenvalue distribution to be σ = 12.91, 18.26, 22.36 and 25.81 for square matrices of dimension 5, 10, 15 and 20. For three distributions (uniform, Laplace and gaussian) and four dimensions (5, 10, 15 and 20), we generated 100,000 random matrices with the same standard deviation of coefficients, and computed their eigenvalues. Standard deviations are within 0.01 of the theoretical values for all distributions and dimensions. It is interesting to note how the distributions (which correspond to the original coefficient distribution for n = 1) come to resemble the semi-circle as dimension increases.
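For readers who want to reproduce this check numerically, here is a small NumPy sketch (ours; it uses 10,000 trials per setting rather than the paper's 100,000 to keep runtime modest). It also includes the spectrum-resampling trick of section 2.2 for generating symmetric matrices with a chosen eigenvalue distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n, sampler):
    """Symmetric matrix with iid coefficients: draw the upper triangle
    (diagonal included) and mirror it, so every entry keeps the sampled std."""
    M = np.zeros((n, n))
    iu = np.triu_indices(n)
    M[iu] = sampler(len(iu[0]))
    return M + np.triu(M, 1).T

def empirical_eig_std(n, dist="uniform", trials=10_000):
    s = 10 / np.sqrt(3)  # std of U(-10, 10), about 5.77
    samplers = {
        "uniform":  lambda k: rng.uniform(-10, 10, k),
        "gaussian": lambda k: rng.normal(0, s, k),
        "laplace":  lambda k: rng.laplace(0, s / np.sqrt(2), k),  # Laplace(0, b) has std b*sqrt(2)
    }
    eigs = np.concatenate([np.linalg.eigvalsh(random_symmetric(n, samplers[dist]))
                           for _ in range(trials)])
    return eigs.std()  # theory: sqrt(n)*s, i.e. 12.91, 18.26, 22.36, 25.81 for n = 5, 10, 15, 20

def with_chosen_spectrum(n, eig_sampler):
    """Section 2.2 trick: keep the eigenvectors of a gaussian symmetric matrix
    but replace its spectrum, giving a symmetric matrix with chosen eigenvalues."""
    M = random_symmetric(n, lambda k: rng.normal(0, 1, k))
    _, P = np.linalg.eigh(M)
    return P @ np.diag(eig_sampler(n)) @ P.T
```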
1. What is the focus of the paper, and what are the contributions of the proposed approach? 2. What are the strengths and weaknesses of the paper regarding its experiments, results, and conclusions? 3. Do you have any concerns about the significance or applicability of the paper's findings? 4. Are there any suggestions for future research directions related to the paper's topic? 5. Are there any minor issues or typos in the paper that could be improved?
Summary Of The Paper Review
Summary Of The Paper This paper describes several experiments where transformers are trained to perform real-valued linear algebra calculations (matrix transposition, addition, multiplication, eigenvalues & eigenvectors of symmetric matrices, SVD, inversion). In-distribution accuracy is generally very high, whereas care is needed in order to obtain out-of-domain generalization. Review The paper carries out a very thorough set of experiments on linear algebra calculations with transformers, using four different encodings of input matrices. In addition, the authors are aware of the importance of out-of-distribution generalization, varying both the matrix size and the distribution of input matrices of a given size. Results appear to be complete, and the conclusions drawn from them are generally sound. However, the problem tackled in this paper does not appear to be particularly useful. In my opinion, the conclusions and findings of this paper are only interesting on a theoretical level (perhaps they can help understand what transformers can or cannot do), rather than being directly applicable in a meaningful way. After all, we do have algorithms for all linear algebra problems considered, and they work with 100% accuracy, perfect out-of-domain generalization, and faster run time. As the authors note in the discussion, at the current stage transformers have quadratic complexity in the number of tokens, which translates into O(n^4) complexity for n × n input matrices, and this is asymptotically slower than the exact algorithms we have. A potentially interesting future direction (which perhaps can be advertised more by the authors, to strengthen the claim that this paper is useful) is to investigate linear-time transformers on tasks where the exact algorithm requires more than O(n^2) time, so that perhaps transformers can be used to perform approximate computations in less time. I also have the following minor comments. First line of page 2: m × n should be in a math formula. Section 5, fifth line: what does "for small values of n" mean here? The statement is very precise, so it is hard to believe that it is true up to a certain small number (say, 5) and false for larger n. End of page 7, "This confirms that out-of-distribution generalization is possible when particular attention...": actually, you showed that it is necessary to pay particular attention, not that it is sufficient. So I would write "out-of-distribution generalization requires particular attention...".
ICLR
Title Improving Model Robustness with Latent Distribution Locally and Globally Abstract We propose a novel adversarial training method which leverages both local and global information to defend against adversarial attacks. Existing adversarial training methods usually generate adversarial perturbations locally in a supervised manner and fail to consider the data manifold information in a global way. Consequently, the resulting adversarial examples may corrupt the underlying data structure and are typically biased towards the decision boundary. In this work, we exploit both the local and global information of the data manifold to generate adversarial examples in an unsupervised manner. Specifically, we design our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as to enforce the invariance between the two latent distributions. We conduct a series of analyses on the model robustness and also verify the effectiveness of our proposed method empirically. Experimental results show that our method substantially outperforms the recent state-of-the-art (i.e. Feature Scattering) in defending against adversarial attacks by a large accuracy margin (e.g. 17.0% and 18.1% on the SVHN dataset, 9.3% and 17.4% on the CIFAR-10 dataset, and 6.0% and 16.2% on the CIFAR-100 dataset for defending against PGD20 and CW20 attacks, respectively). 1 INTRODUCTION Deep Neural Networks (DNNs) have achieved impressive performance on a broad range of datasets, yet can be easily fooled by adversarial examples or perturbations (LeCun et al., 2015; He et al., 2016; Gers et al., 1999). Adversarial examples have been shown to be ubiquitous across different tasks such as image classification (Goodfellow et al., 2014), segmentation (Fischer et al., 2017), and speech recognition (Carlini & Wagner, 2018). Overall, adversarial examples raise great concerns about the robustness of learning models, and have drawn enormous attention over recent years. To defend against adversarial examples, great efforts have been made to improve model robustness (Kannan et al., 2018; You et al., 2019; Wang & Zhang, 2019; Zhang & Wang, 2019). Most of these are based on adversarial training, i.e. training the model with adversarially-perturbed samples rather than clean data (Goodfellow et al., 2014; Madry et al., 2017; Lyu et al., 2015). In principle, adversarial training is a min-max game between the adversarial perturbations and the classifier: the indistinguishable adversarial perturbations are designed to mislead the output of the classifier, while the classifier is trained to produce accurate predictions for these perturbed input data. Currently, the adversarial perturbations are mainly computed by enforcing the output invariance in a supervised manner (Madry et al., 2017). Despite its effectiveness in some scenarios, it has been observed recently that these approaches may still be limited in defending against adversarial examples. In particular, we argue that current adversarial training approaches are typically conducted in a local and supervised way and fail to consider globally the overall data manifold information; such information, however, proves crucially important for attaining better generalization.
As a result, the generated adversarial examples may corrupt the underlying data structure and would typically be biased towards the decision boundary. Therefore, the well-generalizing features inherent to the data distribution might be lost, which limits the ability of DNNs to defend against adversarial examples even if adversarial training is applied (Ilyas et al., 2019a; Schmidt et al., 2018). For illustration, we show a toy example in Figure 1. As can clearly be observed, adversarially-perturbed examples generated by PGD, one of the most successful adversarial training methods, corrupt the data manifold, which would inevitably lead to poor performance if training is conducted on these perturbed examples. On the other hand, the current state-of-the-art method Feature Scattering (Zhang & Wang, 2019) can partially alleviate this problem but still leads to corruption of the data manifold. To address this limitation, we propose a novel method called Adversarial Training with Latent Distribution (ATLD) which additionally considers the data distribution globally in an unsupervised fashion. In this way, the data manifold can be well preserved, which is beneficial for attaining better model generalization. Moreover, since the label information is not required when computing the adversarial perturbations, the resulting adversarial examples are not biased towards the decision boundary. This can be clearly observed in Figure 1(d). Our method can be divided into two steps. First, we train the deep model with adversarial examples which maximize the divergence between the latent distributions of the clean data and the adversarial counterpart, rather than maximizing the loss function. We reformulate this as a minimax game between a discriminator and a classifier: the adversarial examples are crafted by the discriminator to implicitly make the latent distributions of clean and perturbed data differ, while the classifier is trained to decrease the discrepancy between these two latent distributions as well as to promote accurate classification of the adversarial examples, as Figure 2 shows. Then, during the inference procedure, we generate specific perturbations through the discriminator network to diminish the impact of the adversarial attack, as shown in Figure 6 in the Appendix. On the empirical front, with toy examples, we show that our proposed method can preserve more information of the original distribution and learn a better decision boundary than the existing adversarial training methods. We also test our method on three different datasets, CIFAR-10, CIFAR-100 and SVHN, with the well-known PGD, CW and FGSM attacks. Our ATLD method outperforms the state-of-the-art methods by a large margin: e.g. ATLD improves over Feature Scattering (Zhang & Wang, 2019) by 17.0% and 18.1% on SVHN for PGD20 and CW20 attacks. Our method also shows a large superiority over the conventional adversarial training method (Madry et al., 2017), boosting performance by 32.0% and 30.7% on SVHN for PGD20 and CW20 attacks. 2 RELATED WORK Adversarial Training. Adversarial training is a family of techniques to improve model robustness (Madry et al., 2017; Lyu et al., 2015). It trains DNNs with adversarially-perturbed samples instead of clean data. Some approaches extend conventional adversarial training by injecting adversarial noise into the hidden layers to boost the robustness of the latent space (Ilyas et al., 2019b; You et al., 2019; Santurkar et al., 2019; Liu et al., 2019).
All of these approaches generate adversarial examples by maximizing the loss function with label information. However, the structure of the data distribution is destroyed, since the perturbed samples can be highly biased towards the non-optimal decision boundary (Zhang & Wang, 2019). Our proposed method has a training scheme similar to adversarial training in that it replaces clean data with the perturbed counterpart. Nevertheless, our method generates the adversarial perturbations without label information, which weakens the impact of a non-optimal decision boundary and retains more information of the underlying data distribution.

Manifold-based Adversarial Training. Song et al. (2017) propose to generate adversarial examples by projecting onto a proper manifold. Zhang & Wang (2019) leverage manifold information in the form of inter-sample relationships within a batch to generate adversarial perturbations. Virtual Adversarial Training and Manifold Adversarial Training are proposed to improve model generalization and robustness against adversarial examples by ensuring the local smoothness of the data distribution (Zhang et al., 2018; Miyato et al., 2017). Some methods are designed to enforce local smoothness around the natural examples by penalizing the difference between the outputs of adversarial examples and their clean counterparts (Kannan et al., 2018; Chan et al., 2020; Jakubovitz & Giryes, 2018). All of these methods leverage only the local information of the distribution or manifold. Differently, our method generates the perturbations by additionally considering the structure of the distribution globally.

Unsupervised Domain Adversarial Training. Domain Adversarial Training shares a training scheme similar to our method, in which the classifier and discriminator compete with each other (Odena et al., 2017; Long et al., 2018; Ganin et al., 2016). However, its objective is to reduce the gap between the source and target distributions in the latent space, and the discriminator is used to measure the divergence between these two distributions. The training scheme of our method is also based on competition between the classifier and discriminator; different from the previous framework, the discriminator of our method is used to capture the information of the latent distributions of adversarial examples and their clean counterparts, which helps generate the adversarial perturbations.

GAN-based Adversarial Training Methods. Several GAN-based methods leverage GANs to learn the clean data distribution and purify adversarial examples by projecting them onto the clean data manifold before classification (Meng & Chen, 2017; Metzen et al., 2017). The GAN framework can also be used to generate adversarial examples (Baluja & Fischer, 2018): the generator produces adversarial examples to deceive both the discriminator and the classifier, while the discriminator and classifier attempt to differentiate the adversaries from clean data and produce the correct labels, respectively. Some adversary detector networks have been proposed to detect adversarial examples, which aligns well with our method (Gong et al., 2017; Grosse et al., 2017). In these works, a pretrained network is augmented with a binary detector network, and the training of the pretrained network and detector involves generating adversarial examples to maximize their losses.
Differently, our method generates the adversarial examples only to minimize the loss of the discriminator and feeds them as the training set to the classifier. Such adversarial examples are deemed to induce the most different latent representations from their clean counterparts.

3 BACKGROUND

3.1 ADVERSARIAL TRAINING
Let us first introduce the widely-adopted adversarial training method for defending against adversarial attacks. Specifically, it solves the following minimax optimization problem through training:

$$\min_\theta \Big\{ \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\max_{x' \in S_x} L(x', y; \theta)\big] \Big\}, \qquad (1)$$

where $x \in \mathbb{R}^n$ and $y \in \mathbb{R}$ are respectively the clean data samples and the corresponding labels drawn from the dataset $\mathcal{D}$, and $L(\cdot)$ is the loss function of the DNN with model parameter $\theta \in \mathbb{R}^m$. Furthermore, we denote the clean data distribution as $Q_0$, i.e., $x \sim Q_0$, and denote $x' \in \mathbb{R}^n$ as perturbed samples in a feasible region $S_x \triangleq \{z : z \in B(x, \epsilon) \cap [-1.0, 1.0]^n\}$, with $B(x, \epsilon) \triangleq \{z : \|x - z\|_\infty \leq \epsilon\}$ being the $\ell_\infty$-ball at center $x$ with radius $\epsilon$. By defining $f_\theta(\cdot)$ as the mapping function from the input layer to the last latent layer, we can also rewrite the loss function of the DNN as $l(f_\theta(x), y)$, where $l(\cdot)$ denotes the loss function calculated from the last hidden layer of the DNN, e.g., the cross-entropy loss as typically used in DNNs.

Whilst the outer minimization can be conducted by training to find the optimal model parameters $\theta$, the inner maximization essentially generates the strongest adversarial attacks for a given set of model parameters $\theta$. In general, the solution to the minimax problem can be found by training a network to minimize the loss on worst-case adversarial examples, so as to attain adversarial robustness. Given a set of model parameters $\theta$, the commonly adopted solution to the inner maximization problem leads to either a one-step (e.g., FGSM) or multi-step (e.g., PGD) approach (Madry et al., 2017). In particular, for a given single point $x$, the strongest adversarial example $x'$ at the $t$-th iteration can be obtained iteratively by the following update rule:

$$x^{t+1} = \Pi_{S_x}\big(x^t + \alpha \cdot \mathrm{sgn}(\nabla_x L(x^t, y; \theta))\big), \qquad (2)$$

where $\Pi_{S_x}(\cdot)$ is a projection operator projecting the input onto the region $S_x$, $\mathrm{sgn}(\cdot)$ is the sign function, and $\alpha$ is the update step size. For initialization, $x^0$ can be generated by random sampling in $B(x, \epsilon)$.

It appears in (1) that each perturbed sample $x'$ is obtained individually by leveraging its loss function $L(x', y; \theta)$ with its label $y$. However, without considering the inter-relationship between samples, we may lose the global knowledge of the data manifold structure, which proves highly useful for attaining better generalization. This issue has been studied in a recent work (Zhang & Wang, 2019), where a new method named Feature Scattering made a first step towards considering the inter-sample relationship within a batch; unfortunately, this approach does not take full advantage of the global knowledge of the entire data distribution. In addition, relying on the maximization of the loss function, the adversarially-perturbed data samples may be highly biased towards the decision boundary, which potentially corrupts the structure of the original data distribution, especially when the decision boundary is non-optimal (see Figure 1 again for an illustration).
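For concreteness, the update rule (2) can be sketched in a few lines of PyTorch. This is a minimal illustration rather than a reference implementation: the model, budget $\epsilon$, step size $\alpha$, and iteration count are placeholder assumptions, and inputs are assumed to lie in $[-1, 1]^n$ as in the definition of $S_x$.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, num_steps=20):
    """Multi-step PGD of Eq. (2): sign-gradient ascent, then projection onto S_x."""
    # Random start: x^0 sampled uniformly in B(x, epsilon).
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(-1.0, 1.0).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # inner objective L(x', y; theta)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Projection Pi_{S_x}: clip to the l_inf-ball, then to the input domain.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(-1.0, 1.0)
    return x_adv.detach()
```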
3.2 DIVERGENCE ESTIMATION
To measure the discrepancy between two distributions, statistical divergence measures (e.g., the Kullback-Leibler and Jensen-Shannon divergences) have been proposed. In general, given two distributions $P$ and $Q$ with continuous density functions $p(x)$ and $q(x)$ respectively, the $f$-divergence is defined as

$$D_f(P\|Q) \triangleq \int_{\mathcal{X}} q(x)\, f\Big(\frac{p(x)}{q(x)}\Big)\, dx.$$

The exact computation of the $f$-divergence is challenging, and its estimation from samples has attracted much interest. For instance, leveraging variational methods, Nguyen et al. (2010) propose a method for estimating the $f$-divergence from samples only; Nowozin et al. (2016) extend this method by estimating the divergence while learning the parameters of a discriminator. Specifically, the $f$-divergence between two distributions $P$ and $Q$ can be lower-bounded using the Fenchel conjugate and Jensen's inequality (Nowozin et al., 2016):

$$D_f(P\|Q) = \int_{\mathcal{X}} q(x) \sup_{t \in \mathrm{dom}\, f^*} \Big\{ t\,\frac{p(x)}{q(x)} - f^*(t) \Big\}\, dx$$
$$\geq \sup_{T \in \tau} \Big( \int_{\mathcal{X}} p(x)\, T(x)\, dx - \int_{\mathcal{X}} q(x)\, f^*(T(x))\, dx \Big)$$
$$= \sup_{W} \big( \mathbb{E}_{x\sim P}[g_f(V_W(x))] + \mathbb{E}_{x\sim Q}[-f^*(g_f(V_W(x)))] \big), \qquad (3)$$

where $V_W : \mathcal{X} \to \mathbb{R}$ is a discriminator network with parameter $W$, and $g_f : \mathbb{R} \to \mathrm{dom}\, f^*$ is an output activation function determined by the type of discriminator. $\tau$ is an arbitrary class of functions $T : \mathcal{X} \to \mathbb{R}$, $f$ is a convex lower-semicontinuous function, and $f^*$ is its conjugate, defined by $f^*(t) = \sup_{u \in \mathrm{dom}\, f}[ut - f(u)]$. The discriminator objective of GANs is a special case of (3) with the activation function $g_f(t) = -\log(1 + e^{-t})$ and $f^*(g) = -\log(2 - e^g)$; it approximates the Jensen-Shannon divergence between the real and fake distributions. Arjovsky et al. (2017) also develop a method to estimate the Wasserstein distance with a neural network. In this paper, these methods are used to estimate the Jensen-Shannon divergence between the latent distributions induced by adversarial and clean examples.
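To make the estimator in Eq. (3) concrete, the sketch below implements the sample-based lower bound with the GAN activation $g_f(t) = -\log(1 + e^{-t})$, written as the familiar $\mathbb{E}_P[\log D] + \mathbb{E}_Q[\log(1 - D)]$ objective with $D = \sigma(V_W)$, a common simplification (up to additive constants) of the $f$-GAN Jensen-Shannon objective. The critic architecture and dimensions are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """V_W: maps latent features to a real-valued score."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z):
        return self.net(z).squeeze(-1)

def js_lower_bound(critic, z_p, z_q):
    """Sample-based lower bound of Eq. (3). With D = sigmoid(V_W) this equals
    E_P[log D] + E_Q[log(1 - D)], which at the optimum approximates
    2 * JS(P || Q) - log 4."""
    term_p = F.logsigmoid(critic(z_p)).mean()    # log D(z) for z ~ P
    term_q = F.logsigmoid(-critic(z_q)).mean()   # log(1 - D(z)) for z ~ Q
    return term_p + term_q

# Tightening the bound over W yields the divergence estimate, e.g.:
#   loss = -js_lower_bound(critic, z_adv, z_clean); loss.backward(); optimizer.step()
```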
4 ADVERSARIAL TRAINING WITH LATENT DISTRIBUTION
As discussed in Section 3.1, conventional adversarial training methods rely on knowledge of the data labels. As a result, the local information used to generate adversarial examples may be biased towards the decision boundary, and such individual adversarial example generation does not capture the global knowledge of the data manifold. To alleviate these limitations, we propose a novel method that computes the perturbed samples by leveraging the global knowledge of the whole data distribution and disentangles them from the data labels and the loss function. Generally speaking, the perturbations are generated to enlarge the divergence between the latent distributions induced by clean and adversarial data.

Formally, we try to identify the set of adversarial examples $X^{adv}$ that yield in the latent space a distribution $P^*_\theta$ through $f_\theta(\cdot)$ that is most different from the latent distribution $Q_\theta$ induced by the clean samples $X^{org} = \{x : x \sim Q_0\}$, without resorting to the corresponding labels $Y$. In other words, the resulting adversarial examples can be deemed manifold adversarial examples, which 'deceive' the manifold rather than fool the classifier as in the traditional definition of adversarial examples. It is noted that the latent space to be perturbed could be any hidden layer, though in this paper it is defined as the last hidden layer before the softmax of the DNN. The optimization problem of the proposed adversarial training can then be formulated as follows:

$$\min_\theta \Big\{ \mathbb{E}_{f_\theta(x^{adv}) \sim P^*_\theta}\big[l(f_\theta(x^{adv}), y)\big] + D_f(P^*_\theta \| Q_\theta) \Big\} \qquad (4)$$
$$\text{s.t.}\quad P^*_\theta = \arg\max_{P_\theta \in \mathcal{P}} \big[D_f(P_\theta \| Q_\theta)\big] \qquad (5)$$

where $l(\cdot)$ and $y$ are defined as before, and $D_f(\cdot)$ is the $f$-divergence measure between two distributions. $\mathcal{P} = \{P : f_\theta(x') \sim P \text{ subject to } \forall x \sim Q_0, x' \in B(x, \epsilon)\}$ is the feasible region for the latent distribution $P_\theta$, which is induced by the set of perturbed examples $X^p$ through $f_\theta(\cdot)$; $f_\theta(x')$ and $f_\theta(x^{adv})$ represent the latent features of the perturbed example $x'$ and the adversarial example $x^{adv}$ respectively. Intuitively, we try to obtain the worst latent distribution $P^*_\theta$, induced by $X^{adv}$ through $f_\theta(\cdot)$ within the region $\mathcal{P}$, while the model parameter $\theta$ is learned to minimize both the classification loss on the latent features $f_\theta(x^{adv}) \sim P^*_\theta$ (or, equivalently, on the adversarial examples $x^{adv} \in X^{adv}$) and the $f$-divergence between the latent distributions $P^*_\theta$ and $Q_\theta$ induced by the adversarial examples $X^{adv}$ and the clean data $X^{org}$.

It is still challenging to solve the above optimization problem, since both the objective function and the constraint are entangled with the adversarial examples $X^{adv}$ and the model parameters $\theta$. To make the problem more tractable, we propose a novel Adversarial Training with Latent Distribution (ATLD) method. In the next subsection, by taking into account the entire data distribution globally, we first focus on the constraint and identify the adversarial samples $X^{adv}$ through the maximization problem. We then solve the minimization of the objective function with the adversarial training procedure. To further enhance the performance, we add specific perturbations, named Inference with Manifold Transformation (IMT, Section 4.2), to input samples so as to push them towards the more separable natural data manifold. Finally, we classify the transformed data points with the adversarially-trained model.

4.1 GENERATING ADVERSARIAL EXAMPLES FOR TRAINING
First, we optimize the constraint (5) to generate the adversarial examples, or equivalently their induced distribution $P^*_\theta$, for training. Intuitively, the adversarial examples $X^{adv}$ are crafted to maximize the divergence between the latent distributions induced by the natural examples $X^{org}$ and their adversarial counterpart $X^{adv}$ in an unsupervised fashion, since no knowledge of the labels $Y$ is required. Together with the objective function in (4), our proposed adversarial training method minimizes this divergence as well as the classification error on the adversarial examples $X^{adv}$.

However, it is a challenging task to evaluate the divergence between two latent distributions. To make it more tractable, we leverage a discriminator network to estimate the Jensen-Shannon divergence between the two distributions $P^*_\theta / P_\theta$ and $Q_\theta$ according to Section 3.2. It is noted again that the class label information is not used for generating adversarial examples; hence the adversarial examples are still generated in an unsupervised way. Then, by using (3), the optimization problem in (4) and (5) can be approximated in a tractable way as follows:

$$\min_\theta \Big\{ \underbrace{\sum_{i=1}^N L(x_i^{adv}, y_i; \theta)}_{L_f} + \sup_W \sum_{i=1}^N \underbrace{\big[\log D_W(f_\theta(x_i^{adv})) + \big(1 - \log D_W(f_\theta(x_i))\big)\big]}_{L_d} \Big\}$$
$$\text{s.t.}\quad x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D_W(f_\theta(x_i')) + \big(1 - \log D_W(f_\theta(x_i))\big)\big]}_{L_d} \qquad (6)$$

where $N$ denotes the number of training samples and $D_W$ denotes the discriminator network with sigmoid output activation and parameter $W$. $f_\theta(x_i)$ is the latent feature of the clean sample $x_i$. $D_W$ is used to determine whether a latent feature comes from the adversarial manifold (i.e., it outputs the manifold label of the latent feature). For ease of description, we denote the components in Eq. (6) as two parts: $L_d$, the manifold loss, and $L_f$, the loss function of the classifier network. A sketch of the inner maximization follows.
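The inner maximization in (6) can be approximated with the same sign-gradient machinery as PGD, but driven by the manifold loss $L_d$ instead of the supervised loss, so no labels are needed. Below is a minimal sketch under the assumptions that `f_theta` returns latent features, `D_W` outputs the sigmoid manifold probability, and inputs lie in $[-1, 1]$; the step size and iteration count are illustrative, not the authors' settings.

```python
import torch

def craft_manifold_adv(f_theta, D_W, x, epsilon=8/255, alpha=2/255, num_steps=1):
    """Inner maximization of Eq. (6): perturb x, without labels, so that the
    discriminator separates the latent features of x' from those of clean x.
    Only the first term of L_d depends on x', so it alone drives the gradient."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(-1.0, 1.0).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        L_d = torch.log(D_W(f_theta(x_adv)) + 1e-12).mean()  # log D_W(f_theta(x'))
        grad = torch.autograd.grad(L_d, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(-1.0, 1.0)
    return x_adv.detach()
```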
We now interpret the above optimization problem. By comparing Eq. (6) and Eq. (4), it is observed that the Jensen-Shannon divergence between $P^*_\theta$ and $Q_\theta$ is approximated by $\sup_W \sum_{i=1}^N L_d$, and the minimization of the classification loss on adversarial examples is given by $\min_\theta \sum_{i=1}^N L_f$. Problem (6) is optimized by updating the parameters $\theta$ and $W$ and crafting the adversarial examples $\{x_i^{adv}\}_{i=1}^N$ iteratively. The whole training procedure can be viewed as a game among three players: the classifier, the discriminator, and the adversarial examples. The discriminator $D_W$ is learned to differentiate the latent distributions of the perturbed examples and the clean data by maximizing the loss $L_d$, while the classifier $f_\theta$ is trained to (1) enforce the invariance between these two distributions so as to confuse the discriminator $D_W$ by minimizing the loss $L_d$, and (2) classify the adversarial examples as accurately as possible by minimizing $L_f$. At each training iteration, the adversarial examples are crafted to separate the adversarial latent distribution from the natural one by maximizing $L_d$. Although $D_W$ cannot measure the divergence between the two latent distributions exactly during the first several training steps, it can help evaluate the divergence between the distributions induced by perturbed and clean examples once the parameters $W$ converge.

However, when the latent distributions are multi-modal, which is the realistic scenario given the multi-class nature of classification, it is challenging for the discriminator to measure the divergence between such distributions. Several works reveal that there is a high risk of failure when discriminator networks capture only a fraction of the components underlying different distributions (Arjovsky & Bottou, 2017; Che et al., 2016). Ma (2018) also shows that two different distributions are not guaranteed to be identical even if the discriminator is fully confused. To alleviate this problem, we additionally train the discriminator $D_W$ to predict the class labels of the latent features, as in (Odena et al., 2017; Long et al., 2018). Problem (6) can then be reformulated as:

$$\min_\theta \Big\{ \underbrace{\sum_{i=1}^N L(x_i^{adv}, y_i; \theta)}_{L_f} + \sup_W \sum_{i=1}^N \underbrace{\big[\log D^0_W(f_\theta(x_i^{adv})) + \big(1 - \log D^0_W(f_\theta(x_i))\big)\big]}_{L_d^0} + \min_W \underbrace{\big[l(D^{1:C}_W(f_\theta(x_i)), y_i) + l(D^{1:C}_W(f_\theta(x_i^{adv})), y_i)\big]}_{L_d^{1:C}} \Big\}$$
$$\text{s.t.}\quad x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \big(1 - \log D^0_W(f_\theta(x_i))\big)\big]}_{L_d^0} \qquad (7)$$

Here $D^0_W$ is the first dimension of the discriminator output, which indicates the manifold label of the latent features; $D^{1:C}_W$ denotes the remaining $C$ dimensions of the output of $D_W$, used to output the class label of the latent feature; $C$ denotes the number of classes; and $L_d^0$ and $L_d^{1:C}$ are the manifold loss and the classification loss for the discriminator network, respectively. (The detailed derivations of Eq. (6) and Eq. (7) are given in the Appendix.) The detailed training procedure of our framework is depicted in Figure 2.

Remarks. It is worth noting that label information is not required for generating adversarial examples. Therefore, our method prevents the perturbed examples from being highly biased towards the decision boundary, and more information about the original distribution structure is preserved. In addition, since the discriminator is trained on the whole dataset (both clean and adversarial examples), it captures the global information of the data manifold. Consequently, by training with adversarial examples generated according to the manifold loss of the discriminator, our method improves model robustness against adversarial examples while respecting the global structure of the data distribution.
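One possible realization of the extended discriminator in Eq. (7) is a single network with $C + 1$ outputs: one manifold logit $D^0_W$ and $C$ class logits $D^{1:C}_W$. The sketch below writes the manifold loss in the standard GAN form ($\log\sigma$ and $\log(1-\sigma)$ on logits, for numerical stability) rather than the literal $(1 - \log D)$ expression of Eq. (7); the architecture and all hyperparameters are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ATLDDiscriminator(nn.Module):
    """D_W with C + 1 outputs: index 0 is the manifold logit D^0_W,
    indices 1..C are the class logits D^{1:C}_W."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1 + num_classes))

    def forward(self, z):
        out = self.net(z)
        return out[:, 0], out[:, 1:]   # manifold logit, class logits

def discriminator_losses(disc, z_clean, z_adv, y):
    """Loss terms of Eq. (7): the discriminator ascends L_d^0 (separating the two
    latent distributions) and descends L_d^{1:C} (predicting class labels)."""
    m_adv, c_adv = disc(z_adv)
    m_cln, c_cln = disc(z_clean)
    L_d0 = F.logsigmoid(m_adv).mean() + F.logsigmoid(-m_cln).mean()
    L_d1C = F.cross_entropy(c_cln, y) + F.cross_entropy(c_adv, y)
    return L_d0, L_d1C

# One iteration (illustrative): update W to maximize L_d0 and minimize L_d1C,
# while the classifier f_theta minimizes L_f + L_d0 to confuse the discriminator.
```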
4.2 INFERENCE WITH MANIFOLD TRANSFORMATION
To enhance the generalization of ATLD, we further develop a new inference method with manifold transformation. Although adversarially-trained models can recognize adversarial examples well, there remain examples that are easily misclassified, especially among unseen data. In other words, generalization to adversarial examples is hard to achieve due to the more complex distribution of adversarial examples (Schmidt et al., 2018; Zhai et al., 2019). To alleviate this problem, our proposed inference method first pushes adversarial examples towards the manifold of natural examples, which is simpler and further away from the decision boundary than the adversarial distribution. The more separable adjusted examples are then classified by the adversarially-trained model.

Specifically, an input sample is fed into our adversarially-trained model, and the discriminator outputs the probability that the sample lies on the adversarial manifold. If this probability is higher than a certain threshold, we compute the transformed example $x_t$ by adding a specific perturbation $r^*$ to the input sample $x$ so as to reduce this probability. The perturbation is computed as:

$$r^* = \arg\min_{\|r\|_\infty \leq \epsilon} \log D^0_W(f_\theta(x + r)). \qquad (8)$$

Intuitively, reducing the probability that the data point lies on the adversarial manifold means that it moves towards the benign example manifold after adding the perturbation $r^*$. In other words, it becomes more separable, since the benign example manifold is further away from the decision boundary. When the probability of the image lying on the adversarial manifold is lower than the threshold, we still add such a perturbation to the input image to make it more separable, but with a smaller magnitude. In the experimental section, we show that this perturbation can move adversarial examples away from the decision boundary. The whole inference procedure is shown in Figure 5 in the Appendix.

5 EXPERIMENTS
We conduct experiments on the widely-used CIFAR-10, SVHN, and CIFAR-100 datasets. Following the Feature Scattering method (Zhang & Wang, 2019), we use WideResNet (Zagoruyko & Komodakis, 2016) as the basic structure of both our classifier and discriminator. During the training phase, the initial learning rate is empirically set to 0.1 for all three datasets. We train our model for 400 epochs with transition epochs 60 and 90 and a decay rate of 0.1. The input perturbation budget is set to $\epsilon = 8$ with a label smoothing rate of 0.5. We use $L_\infty$ perturbations throughout this paper, for both training and evaluation.

We evaluate the various models under white-box and black-box attacks. Under white-box attacks, we compare the accuracy of the proposed method with several competitive methods, including: (1) the original WideResNet (Standard) trained with natural examples; (2) traditional Adversarial Training with PGD (AT) (Madry et al., 2017); (3) Triplet Loss Adversarial training (TLA) (Mao et al., 2019); (4) Layer-wise Adversarial Training (LAT), which injects adversarial perturbations into the latent space (Sinha et al., 2019); (5) Bilateral, which adversarially perturbs both examples and labels (Wang & Zhang, 2019); and (6) Feature Scattering, which generates adversarial examples by considering inter-sample relationships (Zhang & Wang, 2019).
These comparison algorithms provide the most competitive performance in defending against adversarial attacks. Under black-box attacks, we compare four different models used to generate the test-time attacks: vanilla training with natural examples, adversarial training with PGD, Feature Scattering, and our proposed model.

5.1 DEFENDING WHITE-BOX ATTACKS
We report the classification accuracy under several white-box attacks on CIFAR-10, CIFAR-100, and SVHN in this section. We first report the accuracy on CIFAR-10 in Table 1 with attack iterations T = 20, 40, 100 for PGD (Madry et al., 2017) and CW (Carlini & Wagner, 2017). We also conduct further experiments to evaluate the robustness of our proposed method against more recent attacks, e.g., AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020), as shown in Appendix B.2.

Overall, our proposed method achieves a clear superiority over all the defense approaches on both clean data and adversarial examples (except that it is slightly inferior to Feature Scattering under FGSM), with the one exception that the standard model performs best on clean data. Our approach performs much better than the other baseline models under the PGD and CW attacks. In particular, we improve over the recent state-of-the-art method Feature Scattering by almost 3.1% and 5.2% under the PGD20 and CW20 attacks respectively. With Inference with Manifold Transformation (IMT) enabled, our approach (ATLD-IMT) is 8.9% and 17.4% higher than Feature Scattering under the PGD20 and CW20 attacks respectively. However, the accuracy on clean data drops from 93.3% to 86.4%, since IMT appears to have a negative effect on classifying clean data. To reduce the impact of IMT on natural data, a threshold is used to limit the IMT perturbation based on the output of the discriminator: the perturbation is halved if the output of the discriminator lies within the range [0.3, 0.7] (ATLD-IMT+). Under this setting, our approach achieves high performance under adversarial attacks without sacrificing accuracy on clean data.

Similarly, the accuracies on CIFAR-100 and SVHN are shown in Table 2 with attack iterations T = 20, 100 for both PGD and CW, for conciseness. Although our method is slightly weaker than Feature Scattering under the FGSM attack on CIFAR-100, overall our proposed ATLD method achieves state-of-the-art performance over all the other approaches under various adversarial attacks. Furthermore, our ATLD-IMT version exceeds Feature Scattering by almost 19.2% and 23.8% under the CW100 attack on CIFAR-100 and SVHN respectively. More details on defending white-box attacks under different attack budgets can be found in the Appendix.

5.2 DEFENDING BLACK-BOX ATTACKS
To further verify the robustness of ATLD, we conduct transfer-based black-box attack experiments on CIFAR-10; additional black-box results on CIFAR-100 and SVHN are listed in the Appendix. Four different models are used to generate the test-time attacks: the vanilla training model, the adversarial training with PGD model, the Feature Scattering model, and our model. As demonstrated by the results in Table 3, our proposed approach achieves competitive performance in almost all cases. Specifically, ATLD outperforms Feature Scattering significantly in 8 cases, while it shows comparable or slightly worse accuracy in the other 3 cases.
It deserves our attention that ATLD-IMT appears to have a negative impact under black-box attacks, though it still performs much better than PGD. This may be explained from several aspects: on one hand, the distributions of adversarial examples produced by different models may differ significantly in the latent space; on the other hand, our discriminator lacks the ability to deal with unseen distributions, since during training it only learns to distinguish one type of adversarial example from the natural data. We leave the investigation of this topic for future work.

6 CONCLUSION
We have developed a novel adversarial training method which leverages both local and global information to defend against adversarial attacks. In contrast, existing adversarial training methods mainly generate adversarial perturbations in a local and supervised fashion, which could limit the model's generalization. We have established our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate globally the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as to enforce the invariance between the two latent distributions. Extensive empirical evaluations have shown the effectiveness of our proposed model compared with the recent state-of-the-art in defending against adversarial attacks in both the white-box and black-box settings.

APPENDIX
A LIST OF MAJOR NOTATION
For clarity, we list the major notations used in our model.
• $X^{org} = \{x : x \sim Q_0\}$: the set of clean data samples, where $Q_0$ is its underlying distribution;
• $X^p = \{x' : x' \in B(x, \epsilon), \forall x \sim Q_0\}$: the set of perturbed samples, where each element $x' \in X^p$ lies in the $\epsilon$-neighborhood of a clean example $x \sim Q_0$;
• $f_\theta$: the mapping function from the input to the latent features of the last hidden layer (i.e., the layer before the softmax layer);
• $Q_\theta$: the underlying distribution of the latent features $f_\theta(x)$ for all $x \in X^{org}$;
• $P_\theta$: the underlying distribution of the latent features $f_\theta(x')$ for all $x' \in X^p$;
• $\mathcal{P}$: the feasible region of the latent distribution $P_\theta$, defined as $\mathcal{P} \triangleq \{P : f_\theta(x') \sim P \text{ subject to } \forall x \sim Q_0, x' \in B(x, \epsilon)\}$;
• $X^{adv}$: the set of worst-case perturbed samples, or manifold adversarial examples, where each element $x^{adv} \in X^{adv}$ lies in the $\epsilon$-neighborhood of a clean example $x \sim Q_0$;
• $P^*_\theta$: the worst-case latent distribution within the feasible region $\mathcal{P}$, i.e., the one yielding the largest divergence; equivalently, the underlying distribution of the latent features $f_\theta(x^{adv})$ for all $x^{adv} \in X^{adv}$.

B ADDITIONAL EXPERIMENT DETAILS
B.1 MODEL ROBUSTNESS AGAINST PGD AND CW ATTACKS UNDER DIFFERENT ATTACK BUDGETS
We further evaluate the model robustness against PGD and CW attacks under different attack budgets, with the number of attack iterations fixed at 20. These results are shown in Figure 3. It is observed that the performance of Adversarial Training with PGD (AT) drops quickly as the attack budget increases. The Feature Scattering method (FS) improves the model robustness across a wide range of attack budgets. The proposed ATLD-IMT approach further boosts the performance over Feature Scattering by a large margin under different attack budgets, especially under the CW attack, except that ATLD-IMT is slightly inferior to Feature Scattering under the PGD attack with budget $\epsilon = 20$ on CIFAR-10.
B.2 MODEL ROBUSTNESS AGAINST AUTOATTACK AND RAYS
As shown in (Croce & Hein, 2020; Chen & Gu, 2020), several models (such as Feature Scattering) achieve high robustness against the PGD and CW attacks but may fail to defend against stronger attacks. To further evaluate the model robustness against stronger attacks, we evaluate the robustness of our proposed method ATLD-IMT+ against the AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020) attacks with $L_\infty$ budget $\epsilon = 8$ on CIFAR-10 and CIFAR-100.

We first compare the accuracy of the proposed ATLD-IMT+ against the AutoAttack (AA) and RayS attacks with several competitive methods on CIFAR-10 in Table 4, including: (1) traditional Adversarial Training with PGD (AT) (Madry et al., 2017); (2) TRADES, which trades adversarial robustness off against accuracy (Zhang et al., 2019); (3) Feature Scattering, which generates adversarial examples by considering inter-sample relationships (Zhang & Wang, 2019); (4) Robust-overfitting, which improves adversarial robustness by simply using early stopping (Rice et al., 2020); (5) Pretraining, which improves adversarial robustness with pre-training (Hendrycks et al., 2019); (6) WAR, which mitigates the perturbation stability deterioration on wider models (Wu et al., 2020); (7) RTS, which achieves high robust accuracy with a semi-supervised learning procedure (self-training) (Carmon et al., 2019); and (8) Gowal et al. (2020), which achieves state-of-the-art results by combining larger models, Swish/SiLU activations, and model weight averaging. These comparison algorithms attain the most competitive performance in defending against the AA attack. Overall, our proposed method achieves a clear superiority over all the defense approaches on both clean data and adversarial examples (except on clean data, where ours is slightly inferior to Gowal et al. (2020), which is, however, trained with additional data). Note that Pretraining, WAR, and Gowal et al. (2020) require additional data for training (e.g., unlabeled data or pre-training).

We also report the accuracy of ATLD-IMT+ against AutoAttack (AA) on CIFAR-100, compared with the state-of-the-art methods, in Table 5. Our proposed method again achieves significantly better performance on both clean data and AA-attacked examples than all the other defense approaches (without data augmentation). Furthermore, while our ATLD-IMT+ method is just slightly inferior to Gowal et al. (2020) (which is trained with additional data), it is substantially ahead of the normal version of Gowal et al. (2020). A sketch of the evaluation interface we refer to is given below.
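For reference, AutoAttack numbers such as those in Table 4 are typically produced with the attack authors' published package; the following is a minimal evaluation sketch, not the exact script used here. It assumes `model` maps images in $[0, 1]$ to logits (so the paper's budget $\epsilon = 8$ becomes $8/255$) and that any inference-time transformation such as IMT is wrapped inside the model's forward pass.

```python
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

def autoattack_eval(model, x_test, y_test, epsilon=8/255, batch_size=128):
    """Evaluate L_inf robustness with the standard AutoAttack suite
    (APGD-CE, APGD-T, FAB-T and Square, run sequentially)."""
    model.eval()
    adversary = AutoAttack(model, norm='Linf', eps=epsilon, version='standard')
    return adversary.run_standard_evaluation(x_test, y_test, bs=batch_size)
```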
B.3 BLACK-BOX RESULTS ON SVHN AND CIFAR-100
We conduct further evaluations of transfer-based black-box attacks on SVHN and CIFAR-100, reported in Table 6. It can be observed that our proposed method overall outperforms Feature Scattering in most cases on SVHN. Surprisingly, the Adversarial Training method (PGD) performs better than both our method and Feature Scattering in three cases; this also partially reveals the more challenging nature of defending black-box attacks compared to white-box attacks. On CIFAR-100, our method and Feature Scattering are comparable: the performance of the two methods differs little, though our method outperforms Feature Scattering significantly under PGD20 and CW20 against adversarial attacks generated from the Feature Scattering model. Overall, though the proposed ATLD method may not deliver remarkably higher performance than the current state-of-the-art algorithms in defending black-box attacks (as it does in the white-box setting), it still achieves overall better or comparable performance. We again leave further exploration of defending black-box attacks for future work.

B.4 ILLUSTRATION OF THE OVERLAID BOUNDARY CHANGE OF DIFFERENT METHODS
We present a toy example in Figure 4 to illustrate how the various methods affect the decision boundary after adversarial training is applied. In Figure 4, (a) shows the decision boundary trained with clean data; (b) shows the decision boundary adversarially trained with samples perturbed by PGD; (c) presents the decision boundary given by the adversarial training of Feature Scattering; and (d) illustrates the decision boundary trained with our proposed ATLD. Clearly, both PGD (Figure 4(b)) and FS (Figure 4(c)) change the original decision boundary significantly. Moreover, adversarial training with PGD corrupts the data manifold completely. FS, on the other hand, appears able to partially retain the data manifold information, since it considers the inter-sample relationship locally; nonetheless, its decision boundary appears non-smooth, which may degrade performance. In contrast, as shown in Figure 4(d), our proposed method aims to retain the data manifold globally and changes the decision boundary only slightly. This may explain why our proposed ATLD method outperforms the other approaches.

B.5 FURTHER DETAILS OF ATLD-IMT
We elaborate the inference procedure of our IMT in this section. The overall architecture of ATLD-IMT is plotted in Figure 5. A test sample x is fed into the classifier, and the discriminator outputs its prediction. A specific IMT perturbation is then computed from the loss of D_W and added back to x; in this way, the sample is pushed towards the manifold of natural samples, which is supposed to be further away from the decision boundary. The prediction for the transformed sample x_t by the adversarially-trained classifier is then output as the label of x.

To clearly illustrate the effect of our ATLD-IMT, we conduct additional toy experiments, shown in Figure 6, where we plot the clean or natural data, the perturbed data attacked by PGD, and the data adjusted by ATLD-IMT in (a), (b), and (c) respectively; the decision boundary in all three sub-figures is given by ATLD. In (a), it is worth noting that the boundary learned by ATLD classifies natural data well compared to PGD and Feature Scattering, as shown in Section B.4. As observed in (b), the perturbations generated by PGD push the natural samples towards, or even across, the decision boundary. Our proposed IMT pushes the samples towards the manifold of natural examples, as observed in (c). Since the manifold of natural examples is more separable, this may further increase the classification performance, as observed in the experiments.

B.6 ILLUSTRATION OF VECTOR FIELDS OF DIFFERENT PERTURBATION SCHEMES
(This section consists of a figure illustrating the vector fields of the different perturbation schemes.)

C DETAILED DERIVATION
In this section, we provide the details of the derivation of the main objective function (6) and elaborate how to compute the adversarial examples and the transformed examples.
C.1 DERIVATION OF THE MAIN OBJECTIVE FUNCTION (6)
We start by minimizing the largest $f$-divergence between the latent distributions $P_\theta$ and $Q_\theta$, induced by the perturbed examples $x'$ and the natural examples $x$, whose probability density functions we denote by $p(z)$ and $q(z)$ respectively. According to Eq. (3), we have

$$\min_\theta \max_{P_\theta} D_f(P_\theta \| Q_\theta) = \min_\theta \max_{p(z)} \int_{\mathcal{Z}} q(z) \sup_{t \in \mathrm{dom}\, f^*}\Big\{ t\,\frac{p(z)}{q(z)} - f^*(t) \Big\}\, dz$$
$$\geq \min_\theta \max_{p(z)} \sup_{T \in \tau}\Big( \int_{\mathcal{Z}} p(z)\, T(z)\, dz - \int_{\mathcal{Z}} q(z)\, f^*(T(z))\, dz \Big)$$
$$= \min_\theta \max_{P_\theta} \sup_W \Big\{ \mathbb{E}_{z \sim P_\theta}[g_f(V_W(z))] + \mathbb{E}_{z \sim Q_\theta}[-f^*(g_f(V_W(z)))] \Big\}$$
$$= \min_\theta \sup_W \Big\{ \mathbb{E}_{x \sim \mathcal{D}} \Big\{ \max_{x' \in B(x,\epsilon)} \big[g_f(V_W(f_\theta(x')))\big] + \big[-f^*(g_f(V_W(f_\theta(x))))\big] \Big\} \Big\} \qquad (9)$$

To compute the Jensen-Shannon divergence between $P_\theta$ and $Q_\theta$, we set $g_f(t) = -\log(1 + e^{-t})$ and $f^*(g) = -\log(2 - e^g)$. Then we have

$$\min_\theta \max_{P_\theta} D_{JS}(P_\theta \| Q_\theta) \geq \min_\theta \sup_W \Big\{ \mathbb{E}_{x \sim \mathcal{D}} \Big\{ \max_{x' \in B(x,\epsilon)} \big[\log D_W(f_\theta(x'))\big] + \big[1 - \log D_W(f_\theta(x))\big] \Big\} \Big\} \qquad (10)$$

where $D_W(x) = 1/(1 + e^{-V_W(x)})$. Optimizing (10) is equivalent to optimizing the lower bound of the Jensen-Shannon divergence between $P_\theta$ and $Q_\theta$. By disentangling the computation of the adversarial examples from Eq. (10) and further adding the classification loss $L_f$ for the classifier and $L_d^{1:C}$ for the discriminator, we obtain the final objective:

$$\min_\theta \Big\{ \sup_W \sum_{i=1}^N \underbrace{\big[\log D^0_W(f_\theta(x_i^{adv})) + \big(1 - \log D^0_W(f_\theta(x_i))\big)\big]}_{L_d^0} + \underbrace{L(x_i^{adv}, y_i; \theta)}_{L_f} + \min_W \underbrace{\big[l(D^{1:C}_W(f_\theta(x_i)), y_i) + l(D^{1:C}_W(f_\theta(x_i^{adv})), y_i)\big]}_{L_d^{1:C}} \Big\},$$
$$\text{s.t.}\quad x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \big(1 - \log D^0_W(f_\theta(x_i))\big)\big]}_{L_d^0} \qquad (11)$$

C.2 COMPUTATION OF THE ADVERSARIAL EXAMPLE AND THE TRANSFORMED EXAMPLE
To compute an adversarial example, we need to solve the following problem:

$$x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \big(1 - \log D^0_W(f_\theta(x_i))\big)\big]}_{L_d^0} \qquad (12)$$

This can be reformulated as computing the adversarial perturbation:

$$r_i^{adv} = \arg\max_{\|r_i\|_\infty \leq \epsilon} \big[L_d^0(x_i + r_i, \theta)\big] \qquad (13)$$

We first consider the more general case $\|r_i\|_p \leq \epsilon$ and expand (13) with a first-order Taylor expansion:

$$r_i^{adv} = \arg\max_{\|r_i\|_p \leq \epsilon} \big[L_d^0(x_i, \theta) + \nabla_x F^\top r_i\big] \qquad (14)$$

where $F = L_d^0(x_i, \theta)$. Problem (14) reduces to

$$\max_{\|r_i\|_p = \epsilon} \nabla_x F^\top r_i \qquad (15)$$

We solve it with the method of Lagrange multipliers, with Lagrangian

$$\mathcal{L}(r_i, \lambda) = \nabla_x F^\top r_i - \lambda\big(\|r_i\|_p - \epsilon\big) \qquad (16)$$

Setting the derivative with respect to $r_i$ to zero gives

$$\nabla_x F = \lambda\, \frac{r_i^{\,p-1}}{p\,\big(\sum_j (r_i^j)^p\big)^{1-\frac{1}{p}}} \qquad (17)$$

$$\nabla_x F = \frac{\lambda}{p}\Big(\frac{r_i}{\epsilon}\Big)^{p-1}, \qquad (\nabla_x F)^{\frac{p}{p-1}} = \Big(\frac{\lambda}{p}\Big)^{\frac{p}{p-1}} \Big(\frac{r_i}{\epsilon}\Big)^{p} \qquad (18)$$

Summing over both sides, we have

$$\sum (\nabla_x F)^{\frac{p}{p-1}} = \sum \Big(\frac{\lambda}{p}\Big)^{\frac{p}{p-1}} \Big(\frac{r_i}{\epsilon}\Big)^{p} \qquad (19)$$

$$\|\nabla_x F\|_{p^*}^{p^*} = \Big(\frac{\lambda}{p}\Big)^{p^*} \cdot 1 \qquad (20)$$

where $p^*$ is the dual of $p$, i.e., $\frac{1}{p} + \frac{1}{p^*} = 1$. We therefore have

$$\frac{\lambda}{p} = \|\nabla_x F\|_{p^*} \qquad (21)$$

Combining (18) and (21), we obtain

$$r_i^* = \epsilon\, \mathrm{sgn}(\nabla_x F)\Big(\frac{|\nabla_x F|}{\|\nabla_x F\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_{p^*}}\Big)^{\frac{1}{p-1}} \qquad (22)$$

In this paper, we set $p$ to $\infty$. Then we have

$$r_i^* = \lim_{p \to \infty} \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_1}\Big)^{0} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0) \qquad (23)$$

Then we can obtain the adversarial example:

$$x_i^* = x_i + \epsilon\, \mathrm{sgn}(\nabla_x L_d^0) \qquad (24)$$

To compute the transformed example, we need to solve the following problem:

$$r^* = \arg\min_{\|r\|_\infty \leq \epsilon} \log D^0_W(f_\theta(x + r)). \qquad (25)$$

With a similar method, we can easily obtain the transformed example $x_t$:

$$x_t = x - \epsilon\, \mathrm{sgn}(\nabla_x \log D^0_W). \qquad (26)$$
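The closed-form solutions (24) and (26) translate directly into code. Below is a minimal sketch of the IMT step at inference time, including the smaller-magnitude step applied when the manifold probability falls below the threshold (cf. Sections 4.2 and 5.1); the halving rule, the function names, and 4D image inputs are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def imt_transform(f_theta, manifold_logit, x, epsilon=8/255, threshold=0.5):
    """Eq. (26): x_t = x - epsilon * sgn(grad_x log D^0_W(f_theta(x))).
    `manifold_logit` returns the pre-sigmoid manifold score behind D^0_W."""
    x = x.clone().detach().requires_grad_(True)
    logits = manifold_logit(f_theta(x))
    log_d0 = F.logsigmoid(logits).sum()          # sum of log D^0_W over the batch
    grad = torch.autograd.grad(log_d0, x)[0]
    with torch.no_grad():
        p_adv = torch.sigmoid(logits)
        # Full step when the sample looks adversarial; a halved step otherwise
        # (cf. the ATLD-IMT+ rule that shrinks the perturbation near 0.5).
        step = torch.where(p_adv > threshold,
                           torch.full_like(p_adv, epsilon),
                           torch.full_like(p_adv, epsilon / 2))
        x_t = (x - step.view(-1, 1, 1, 1) * grad.sign()).clamp(-1.0, 1.0)
    return x_t.detach()
```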
1. How does the paper analyze the properties of local and global data manifolds in adversarial training?
2. What are the strengths of the proposed method, particularly in terms of its intuition and theoretical background?
3. What are the weaknesses of the paper, especially regarding the realization of equations 4 and 5 and the choice of perturbations?
4. Do you have any concerns about the effectiveness of the method against potential attacks that leverage global and local data manifolds?
Review
The paper analyzes the properties of local and global data manifolds for adversarial training. In particular, the authors use a discriminator-classifier model, where the discriminator tries to differentiate between the natural and adversarial latent spaces, and the classifier aims to classify the perturbed examples correctly while maintaining the constraints between the local and global distributions. The authors evaluate the proposed method on several datasets and achieve good performance. They also compare with several white-box and black-box methods and demonstrate superiority.

This paper is, in general, well written. The authors provide a good visualization of their analysis. Using local and global information for adversarial training is intuitive, and the authors provide a good theoretical background to establish their method. The empirical evaluations show promising results.

Some major concerns are listed as follows:
- It is not clear how equations 4 and 5 are realized using the discriminator and classifier.
- What kind of perturbations are chosen? It looks like all the experiments use L-infinity; does the observation hold for other norms?
- If attackers leverage the global and local data manifold, can they bypass this defense?
ICLR
Title Improving Model Robustness with Latent Distribution Locally and Globally Abstract We propose a novel adversarial training method which leverages both the local and global information to defend adversarial attacks. Existing adversarial training methods usually generate adversarial perturbations locally in a supervised manner and fail to consider the data manifold information in a global way. Consequently, the resulting adversarial examples may corrupt the underlying data structure and are typically biased towards the decision boundary. In this work, we exploit both the local and global information of data manifold to generate adversarial examples in an unsupervised manner. Specifically, we design our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as enforcing the invariance between the two latent distributions. We conduct a series of analysis on the model robustness and also verify the effectiveness of our proposed method empirically. Experimental results show that our method substantially outperforms the recent state-of-the-art (i.e. Feature Scattering) in defending adversarial attacks by a large accuracy margin (e.g. 17.0% and 18.1% on SVHN dataset, 9.3% and 17.4% on CIFAR-10 dataset, 6.0% and 16.2% on CIFAR-100 dataset for defending PGD20 and CW20 attacks respectively). 1 INTRODUCTION Deep Neural Networks (DNNs) have achieved impressive performance on a broad range of datasets, yet can be easily fooled by adversarial examples or perturbations (LeCun et al., 2015; He et al., 2016; Gers et al., 1999). Adversarial examples have been shown to be ubiquitous beyond different tasks such as image classification (Goodfellow et al., 2014), segmentation (Fischer et al., 2017), and speech recognition (Carlini & Wagner, 2018). Overall, adversarial examples raise great concerns about the robustness of learning models, and have drawn enormous attention over recent years. To defend adversarial examples, great efforts have been made to improve the model robustness (Kannan et al., 2018; You et al., 2019; Wang & Zhang, 2019; Zhang & Wang, 2019). Most of them are based on the adversarial training, i.e. training the model with adversarially-perturbed samples rather than clean data (Goodfellow et al., 2014; Madry et al., 2017; Lyu et al., 2015). In principle, adversarial training is a min-max game between the adversarial perturbations and classifier. Namely, the indistinguishable adversarial perturbations are designed to mislead the output of the classifier, while the classifier is trained to produce the accurate predictions for these perturbed input data. Currently, the adversarial perturbations are mainly computed by enforcing the output invariance in a supervised manner (Madry et al., 2017). Despite its effectiveness in some scenarios, it is observed recently that these approaches may still be limited in defending adversarial examples. In particular, we argue that these current adversarial training approaches are typically conducted in a local and supervised way and fail to consider globally the overall data manifold information; such information however proves crucially important for attaining better generalization. 
As a result, the generated adversarial examples may corrupt the underlying data structure and would be typically biased towards the decision boundary. Therefore, the well-generalizing features inherent to the data distribution might be lost, which limits the performance of the DNNs to defend adversarial examples even if adversarial training is applied (Ilyas et al., 2019a; Schmidt et al., 2018). For illustration, we have shown a toy example in Figure 1. As clearly observed, adversarially-perturbed examples gen- erated by PGD, one of the most successful adversarial training method, corrupt the data manifold, which would inevitably lead to poor performance if the training is conducted based on these perturbed examples. On the other hand, the current state-of-the-art method Feature Scattering (Zhang & Wang, 2019) can partially alleviate this problem but still leads to corruptions on the data manifold. To address this limitation, we propose a novel method called Adversarial Training with Latent Distribution (ATLD) which additionally considers the data distribution globally in an unsupervised fashion. In this way, the data manifold could be well preserved, which is beneficial to attain better model generalization. Moreover, since the label information is not required when computing the adversarial perturbations, the resulting adversarial examples would not be biased towards the decision boundary. This can be clearly observed in Figure 1(d). Our method can be divided into two steps: first, we train the deep model with the adversarial examples which maximize the variance between latent distributions of clean data and adversarial counterpart rather than maximizing the loss function. We reformulate it as a minimax game between a discriminator and a classifier. The adversarial examples are crafted by the discriminator to make different implicitly the latent distributions of clean and perturbed data, while the classifier is trained to decrease the discrepancy between these two latent distributions as well as promoting accurate classification on the adversarial examples as Figure 2 shows. Then, during the inference procedure, we generate the specific perturbations through the discriminator network to diminish the impact of the adversarial attack as shown in Figure 6 in Appendix. On the empirical front, with the toy examples, we show that our proposed method can preserve more information of the original distribution and learn a better decision boundary than the existing adversarial training method. We also test our method on three different datasets: CIFAR-10, CIFAR100 and SVHN with the famous PGD, CW and FGSM attacks. Our ATLD method outperforms the state-of-the-art methods by a large margin. e.g. ATLD improves over Feature Scattering (Zhang & Wang, 2019) by 17.0% and 18.1% on SVHN for PGD20 and CW20 attacks. Our method also shows a large superiority to the conventional adversarial training method (Madry et al., 2017), boosting the performance by 32.0% and 30.7% on SVHN for PGD20 and CW20 attacks. 2 RELATED WORK Adversarial Training. Adversarial training is a family of techniques to improve the model robustness (Madry et al., 2017; Lyu et al., 2015). It trains the DNNs with adversarially-perturbed samples instead of clean data. Some approaches extend the conventional adversarial training by injecting the adversarial noise to hidden layers to boost the robustness of latent space (Ilyas et al., 2019b; You et al., 2019; Santurkar et al., 2019; Liu et al., 2019). 
All of these approaches generate the adversarial examples by maximizing the loss function with the label information. However, the structure of the data distribution is destroyed since the perturbed samples could be highly biased towards the non-optimal decision boundary (Zhang & Wang, 2019). Our proposed method has a similar training scheme with adversarial training by replacing clean data with the perturbed one. Nevertheless, our method generates the adversarial perturbations without the label information which weakens the impact of non-optimal decision boundary and can retain more information of the underlying data distribution. Manifold-based Adversarial Training. Song et al. (2017) propose to generate the adversarial examples by projecting on a proper manifold. Zhang & Wang (2019) leverage the manifold information in the forms of inter-sample relationship within the batch to generate adversarial adversarial perturbations. Virtual Adversarial Training and Manifold Adversarial Training are proposed improve model generalization and robustness against adversarial examples by ensuring the local smoothness of the data distribution (Zhang et al., 2018; Miyato et al., 2017). Some methods are designed to enforce the local smoothness around the natural examples by penalizing the difference between the outputs of adversarial examples and clean counterparts (Kannan et al., 2018; Chan et al., 2020; Jakubovitz & Giryes, 2018). All of these methods just leverage the local information of the distribution or manifold. Differently, our method generates the perturbations additionally considering the structure of distribution globally. Unsupervised Domain Adversarial Training. Domain Adversarial Training shares a training scheme similar to our method where the classifier and discriminator compete with each other (Odena et al., 2017; Long et al., 2018; Ganin et al., 2016). However, its objective is to reduce the gap between the source and target distributions in the latent space. The discriminator is used to measure the divergence between these two distributions in the latent space. The training scheme of our method is also based on competition between the classifier and discriminator. Different from the previous framework, the discriminator of our method is used to capture the information of distributions of adversarial examples and clean counterparts in the latent space which helps generate the adversarial perturbations. GAN-based Adversarial Training Methods. Several GAN-based methods leverage GANs to learn the clean data distribution and purify the adversarial examples by projecting them on clean data manifold before classification (Meng & Chen, 2017; Metzen et al., 2017). The framework of GAN can also be used to generate the adversarial examples (Baluja & Fischer, 2018). The generator produces the adversarial examples to deceive both the discriminator and classifier; the discriminator and classifier attempt to differentiate the adversaries from clean data and produce the correct labels respectively. Some adversary detector networks are proposed to detect the adversarial examples which can be well aligned with our method (Gong et al., 2017; Grosse et al., 2017). In these works, a pretrained network is augmented with a binary detector network. The training of the pretrained network and detector involves generating adversarial examples to maximize their losses. 
Differently, our method generates the adversarial examples just to minimize the loss of the discriminator and feed them as the training set to the classifier. Such adversarial examples are deemed to induce most different latent representations from the clean counterpart. 3 BACKGROUND 3.1 ADVERSARIAL TRAINING Let us first introduce the widely-adopted adversarial training method for defending against adversarial attacks. Specifically, it solves the following minimax optimization problem through training. min θ {E(x,y)∼D[ max x′∈Sx L(x′, y; θ)]}, (1) where x ∈ Rn and y ∈ R are respectively the clean data samples and the corresponding labels drawn from the dataset D, and L(·) is the loss function of the DNN with the model parameter θ ∈ Rm. Furthermore, we denote the clean data distribution as Q0, i.e. x ∼ Q0. , and denote x′ ∈ Rn as perturbed samples in a feasible region Sx , {z : z ∈ B(x, ) ∩ [−1.0, 1.0]n} with B(z, ) , {z : ‖x− z‖∞ ≤ } being the `∞-ball at center x with radius . By defining fθ(·) as the mapping function from the input layer to the last latent layer, we can also rewrite the loss function of the DNN as l(fθ(x), y) where l(·) denotes the loss function calculated from the last hidden layer of the DNN, e.g. the cross entropy loss as typically used in DNN. Whilst the outer minimization can be conducted by training to find the optimal model parameters θ, the inner maximization essentially generates the strongest adversarial attacks on a given set of model parameters θ. In general, the solution to the minimax problem can be found by training a network minimizing the loss for worst-case adversarial examples, so as to attain adversarial robustness. Given a set of model parameters θ, the commonly adopted solution to the inner maximization problem can lead to either one-step (e.g., FGSM) or multi-step (e.g., PGD) approach (Madry et al., 2017). In particular, for a given single point x, the strongest adversarial example x′ at the t-th iteration can be iteratively obtained by the following updating rule: xt+1 = ΠSx(x t + α · sgn(∇xL(xt, y; θ))), (2) where ΠSx(·) is a projection operator to project the inputs onto the region Sx, sgn(·) is the sign function, and α is the updating step size. For the initialization, x0 can be generated by randomly sampling in B(x, ). It appears in (1) that each perturbed sample x′ is obtained individually by leveraging its loss function L(x′, y; θ) with its label y. However, without considering the inter-relationship between samples, we may lose the global knowledge of the data manifold structure which proves highly useful for attaining better generalization. This issue has been studied in a recent work (Zhang & Wang, 2019) where a new method named feature scattering made a first step to consider the inter-sample relationship within the batch; unfortunately this approach did not take the full advantages of the global knowledge of the entire data distribution. In addition, relying on the maximization of the loss function, the adversarially-perturbed data samples may be highly biased towards the decision boundary, which potentially corrupts the structure of the original data distribution, especially when the decision boundary is non-optimal (see Figure 1 again for the illustration). 3.2 DIVERGENCE ESTIMATION To measure the discrepancy of two distributions, statistical divergence measures (e.g., KullbackLeibler and Jensen-Shannon divergence) have been proposed. 
In general, given two distributions P and Q with a continuous density function p(x) and q(x) respectively, f -divergence is defined as Df (P||Q) , ∫ X q(x)f ( p(x) q(x) ) dx. The exact computation of f -divergence is challenging, and the estimation from samples has attracted much interest. For instance, leveraging the variational methods, Nguyen et al. (2010) propose a method for estimating f -divergence from only samples; Nowozin et al. (2016) extend this method by estimating the divergence with learning the parameters of discriminator. Specifically, the f -divergence between two distributions P and Q can be lowerbounded using Fenchel conjugate and Jensen’s inequality (Nowozin et al., 2016). Df (P||Q) = ∫ X q(x) sup t∈domf∗ {tp(x) q(x) − f∗(t)}dx ≥ sup T∈τ ( ∫ X p(x)T (x)dx− ∫ X q(x)f∗(T (x))dx) = sup W (Ex∼P[gf (VW (x))] + Ex∼Q[−f∗(gf (VW (x)))]), (3) where VW : X → R is a discriminator network with parameter W and gf : R → domf∗ is an output activation function which is determined by the type of discriminator. τ is an arbitrary class of functions T : X → R. f is a convex lower-semicontinous function and f∗ is its conjugate defined by f∗(t) = supu∈domf [ut− f(u)]. The objective of discriminator for GANs is a special case of (3) with the activation function gf (t) = − log(1+e−t) and f∗(g) = − log(2−eg). It approximates the Jense-Shannon divergence between real and fake distributions. Arjovsky et al. (2017) also develop a method to estimate the Wasserstein-distance by neural network. In this paper, these methods will be used to estimate the Jensen-Shannon divergence between latent distributions induced by adversarial and clean examples. 4 ADVERSARIAL TRAINING WITH LATENT DISTRIBUTION As discussed in Section 3.1, the conventional adversarial training methods rely on the knowledge of data labels. As a result, the local information to generate adversarial examples may be biased toward the decision boundary; such individual adversarial example generation does not capture the global knowledge of the data manifold. To alleviate these limitations, we propose a novel method to compute the perturbed samples by leveraging the global knowledge of the whole data distribution and then disentangling them from the data labels and the loss function. Generally speaking, the perturbations are generated to enlarge the variance between latent distributions induced by clean and adversarial data. Formally, we try to identify the set of adversarial examples Xadv that yield in the latent space a distribution P ∗θ through fθ(·) that is the most different from the latent distribution Qθ induced by the clean samples Xorg = {x : x ∼ Q0}, without resorting to the corresponding labels Y . In other words, the resulting adversarial examples can be deemed as manifold adversarial examples, which ‘deceive’ the manifold rather than fool the classifier as defined in the traditional adversarial examples. It is noted that the latent space to be perturbed could be any hidden layer though it is defined in the last hidden layer before the softmax of a DNN in this paper. The optimization problem of the proposed adversarial training can then be reformulated as follows: min θ {Efθ(xadv)∼P∗θ [l(fθ(x adv), y)] +Df (P ∗ θ ||Qθ)} (4) s.t. P ∗θ = arg max Pθ∈P [Df (Pθ||Qθ)] (5) where l(·) and y are similarly defined as before, and Df (·) is the f -divergence measure of two distributions. 
P = {P : fθ(x′) ∼ P subject to ∀x ∼ Q0, x′ ∈ B(x, )} is the feasible region for the latent distribution Pθ which is induced by the set of perturbed examplesXp through fθ(·). fθ(x′) and fθ(xadv) represents the latent features of the perturbed example x′ and adversarial example xadv respectively. Intuitively, we try to obtain the worst latent distribution P ∗θ which is induced by Xadv through fθ(·) within the region P , while the model parameter θ is learned to minimize the classification loss on the latent feature fθ(xadv) ∼ P ∗θ (or equivalently adversarial example xadv ∈ Xadv) and the f -divergence between the latent distributions P ∗θ and Qθ induced by adversarial examples Xadv and clean data Xorg. It is still challenging to solve the above optimization problem, since both the objective function and the constraint are entangled with the adversarial examples Xadv and the model parameters θ. To make the problem more tractable, we propose a novel Adversarial Training with Latent Distribution (ATLD) method. In the next subsection, by taking into account the entire data distribution globally, we first focus on the constraint and identify the adversarial samples Xadv through the maximization problem. We then solve the minimization of the objective function with the adversarial training procedure. To further enhance the performance, we add specific perturbations named Inference with Manifold Transformation (IMT) in Section 4.2 to input samples for enforcing them towards the more separable natural data manifold. Finally, we classify the transformed data points with the adversarially-trained model. 4.1 GENERATING ADVERSARIAL EXAMPLES FOR TRAINING First, we optimize the constraint (5) to generate the adversarial examples or its induced distribution P ∗θ for training. Intuitively, the adversarial examples Xadv are crafted to maximize the divergence between the latent distributions induced by natural examplesXorg and adversarial counterpartXadv in an unsupervised fashion since no knowledge of labels Y is required. Together with the objective function in (4), our proposed adversarial training method is to minimize such divergence as well as the classification error for adversarial examples Xadv . However, it is a challenging task to evaluate the divergence between two latent distributions. To make it more tractable, we leverage a discriminator network for estimating the Jensen-Shannon divergence between two distributions P ∗θ /Pθ and Qθ according to Section 3.2. It is noted again that the class label information is not used for generating adversarial examples. Hence the adversarial examples are still generated in an unsupervised way. Then, by using (3), the optimization problem in (4) and (5) can be approximated as follows in a tractable way. min θ { N∑ i=1 L(xadvi , yi; θ)︸ ︷︷ ︸ Lf + sup W N∑ i=1 [logDW (fθ(x adv i )) + (1− logDW (fθ(xi)))︸ ︷︷ ︸ Ld ] } s.t. xadvi = arg max x′i∈B(xi, ) [logDW (fθ(x ′ i)) + (1− logDW (fθ(xi)))︸ ︷︷ ︸ Ld ] (6) where N denotes the number of training samples and DW denotes the discriminator network with the sigmoid activation function and parameter W . fθ(xi) is the latent feature of the clean sample xi. DW is used to determine whether the latent feature is from adversary manifold (output the manifold label of the latent feature). For ease of description, we represent the components in Eq. (6) as two parts: Lf and Ld. Ld is the manifold loss and Lf represents the loss function of the classifier network. We now interpret the above optimization problem. By comparing Eq. 
We now interpret the above optimization problem. Comparing Eq. (6) with Eq. (4), we observe that the Jensen-Shannon divergence between $P^*_\theta$ and $Q_\theta$ is approximated by $\sup_W \sum_{i=1}^{N} L_d$, and the minimization of the classification loss on adversarial examples is given by $\min_\theta \sum_{i=1}^{N} L_f$. Problem (6) is optimized by updating the parameters $\theta$ and $W$ and crafting the adversarial examples $\{x_i^{adv}\}_{i=1}^{N}$ iteratively. The whole training procedure can be viewed as a game among three players: the classifier, the discriminator, and the adversarial examples. The discriminator $D_W$ is learned to differentiate the latent distributions of the perturbed examples and the clean data by maximizing the loss $L_d$, while the classifier $f_\theta$ is trained to (1) enforce the invariance between these two distributions to confuse the discriminator $D_W$ by minimizing the loss $L_d$, and (2) classify the adversarial examples as accurately as possible by minimizing $L_f$. At each training iteration, the adversarial examples are crafted to separate the adversarial latent distribution from the natural one by maximizing $L_d$. Although $D_W$ cannot measure the divergence between the two latent distributions exactly in the first several training steps, it can help evaluate the divergence between the distributions induced by perturbed and clean examples once the parameters $W$ converge.

However, when the latent distributions are multi-modal, which is the realistic scenario given the nature of multi-class classification, it is challenging for the discriminator to measure the divergence between such distributions. Several works reveal a high risk that discriminator networks capture only a fraction of the components underlying different distributions (Arjovsky & Bottou, 2017; Che et al., 2016). Ma (2018) also shows that two different distributions are not guaranteed to be identical even if the discriminator is fully confused. To alleviate this problem, we additionally train the discriminator $D_W$ to predict the class labels of the latent features, as in (Odena et al., 2017; Long et al., 2018). Problem (6) can then be reformulated as:

$$\min_\theta \Big\{ \underbrace{\sum_{i=1}^{N} L(x_i^{adv}, y_i; \theta)}_{L_f} + \sup_W \underbrace{\sum_{i=1}^{N} \big[\log D^0_W(f_\theta(x_i^{adv})) + \log(1 - D^0_W(f_\theta(x_i)))\big]}_{L_d^0} + \min_W \underbrace{\sum_{i=1}^{N} \big[\ell(D^{1:C}_W(f_\theta(x_i)), y_i) + \ell(D^{1:C}_W(f_\theta(x_i^{adv})), y_i)\big]}_{L_d^{1:C}} \Big\}$$
$$\text{s.t.} \quad x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \log(1 - D^0_W(f_\theta(x_i)))\big]}_{L_d^0} \quad (7)$$

Here $D^0_W$ is the first dimension of the discriminator output, which indicates the manifold label of a latent feature; $D^{1:C}_W$ denotes the remaining $C$ dimensions, used to output the class label of the latent feature; $C$ is the number of classes; and $L_d^0$ and $L_d^{1:C}$ are the manifold loss and the classification loss of the discriminator network, respectively. (The detailed derivations of Eq. (6) and Eq. (7) can be found in the Appendix.) The detailed training procedure of our framework is depicted in Figure 2, and a code sketch of the discriminator and the three losses follows below.

Remarks. It is worth noting that label information is not required for generating the adversarial examples. Therefore, our method prevents the perturbed examples from being highly biased towards the decision boundary, and more information about the original distribution structure is preserved. In addition, since the discriminator is trained on the whole dataset (both clean and adversarial examples), it captures the global information of the data manifold. Consequently, by training with adversarial examples generated according to the manifold loss of the discriminator, our method can improve the model's robustness against adversarial examples with the global structure of the data distribution.
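The sketch below spells out one way to realize the $(1{+}C)$-dimensional discriminator of Eq. (7) and to assemble $L_f$, $L_d^0$, and $L_d^{1:C}$ for a batch. The module names, the single linear head, and the assumption that the classifier returns a (features, logits) pair are ours, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldDiscriminator(nn.Module):
    """Discriminator D_W of Eq. (7): output dim 0 is the manifold logit
    (adversarial vs. clean latent); dims 1..C are class logits."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.head = nn.Linear(feat_dim, 1 + n_classes)

    def forward(self, z):
        out = self.head(z)
        return out[:, 0], out[:, 1:]  # (manifold logit D^0_W, class logits D^{1:C}_W)

def atld_losses(clf, disc, x, x_adv, y):
    """Assemble the three losses of Eq. (7) for one batch. `clf(x)` is
    assumed to return (latent features f_theta(x), class logits)."""
    z, _ = clf(x)
    z_adv, logits_adv = clf(x_adv)
    m_clean, c_clean = disc(z)
    m_adv, c_adv = disc(z_adv)
    l_f = F.cross_entropy(logits_adv, y)                               # L_f
    l_d0 = F.logsigmoid(m_adv).mean() + F.logsigmoid(-m_clean).mean()  # L_d^0
    l_d1c = F.cross_entropy(c_clean, y) + F.cross_entropy(c_adv, y)    # L_d^{1:C}
    return l_f, l_d0, l_d1c

# Update directions per Eq. (7): the classifier minimizes l_f + l_d0,
# while the discriminator maximizes l_d0 and minimizes l_d1c.
```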
4.2 INFERENCE WITH MANIFOLD TRANSFORMATION

To enhance the generalization of ATLD, we further develop a new inference method with manifold transformation. Although adversarially-trained models can recognize adversarial examples well, some examples remain easily misclassified, especially unseen data. In other words, generalization to adversarial examples is hard to achieve due to the more complex distribution of adversarial examples (Schmidt et al., 2018; Zhai et al., 2019). To alleviate this problem, our proposed inference method first pushes adversarial examples towards the manifold of natural examples, which is simpler and further away from the decision boundary than the adversarial distribution. The more separable adjusted examples are then classified by the adversarially-trained model. Specifically, the input sample is fed into our adversarially-trained model, and the discriminator outputs the probability that the sample lies on the adversarial manifold. If this probability is higher than a certain threshold, we compute the transformed example $x_t$ by adding a specific perturbation $r^*$ to the input sample $x$ so as to reduce this probability. The perturbation is computed as:

$$r^* = \arg\min_{\|r\|_\infty \le \epsilon} \log D^0_W(f_\theta(x + r)). \quad (8)$$

Intuitively, reducing the probability that a data point lies on the adversarial manifold means that the point moves towards the benign-example manifold after the perturbation $r^*$ is added. In other words, it becomes more separable, since the benign-example manifold is further away from the decision boundary. When the probability of the image lying on the adversarial manifold is lower than the threshold, we still add such a perturbation to the input image to make it more separable, but with a smaller magnitude. In the experimental part, we show that this perturbation can move adversarial examples away from the decision boundary. The whole inference procedure is shown in Figure 5 in the Appendix.

5 EXPERIMENTS

We conduct experiments on the widely-used CIFAR-10, SVHN, and CIFAR-100 datasets. Following the Feature Scattering method (Zhang & Wang, 2019), we use WideResNet (Zagoruyko & Komodakis, 2016) as the basic structure of both our classifier and discriminator. During the training phase, the initial learning rate is empirically set to 0.1 for all three datasets. We train our model for 400 epochs with transition epochs 60 and 90 and a decay rate of 0.1. The input perturbation budget is set to $\epsilon = 8$, with a label smoothing rate of 0.5. We use $L_\infty$ perturbations throughout the paper, for both training and evaluation. We evaluate the various models under white-box and black-box attacks. Under white-box attacks, we compare the accuracy of the proposed method with several competitive methods, including: (1) the original WideResNet (Standard) trained on natural examples; (2) traditional Adversarial Training with PGD (AT) (Madry et al., 2017); (3) Triplet Loss Adversarial training (TLA) (Mao et al., 2019); (4) Layer-wise Adversarial Training (LAT), which injects adversarial perturbations into the latent space (Sinha et al., 2019); (5) Bilateral, which adversarially perturbs both examples and labels (Wang & Zhang, 2019); and (6) Feature Scattering, which generates adversarial examples by considering the inter-sample relationship (Zhang & Wang, 2019).
These comparison algorithms present the most competitive performance in defending against adversarial attacks. Under black-box attacks, we compare four different algorithms used to generate the test-time attacks: vanilla training with natural examples, adversarial training with PGD, Feature Scattering, and our proposed model.

5.1 DEFENDING WHITE-BOX ATTACKS

We report the classification accuracy under several white-box attacks on CIFAR-10, CIFAR-100, and SVHN in this section. We first report the accuracy on CIFAR-10 in Table 1 with attack iterations T = 20, 40, 100 for PGD (Madry et al., 2017) and CW (Carlini & Wagner, 2017). We also conduct further experiments to evaluate the robustness of our proposed method against more recent attacks, e.g. AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020), as shown in Appendix B.2. Overall, our proposed method achieves a clear superiority over all the defense approaches on both clean data and adversarial examples (except that it is slightly inferior to Feature Scattering under FGSM). We also observe one exception: the standard model performs best on clean data. Our approach performs much better than the other baseline models under the PGD and CW attacks. In particular, we improve upon the recent state-of-the-art method Feature Scattering by almost 3.1% and 5.2% under the PGD20 and CW20 attacks, respectively. With Inference with Manifold Transformation (IMT), our approach (ATLD-IMT) is 8.9% and 17.4% higher than Feature Scattering under the PGD20 and CW20 attacks, respectively. However, the performance on clean data declines from 93.3% to 86.4%, since IMT appears to have a negative effect when classifying clean data. To reduce the impact of IMT on natural data, a threshold is used to limit the IMT perturbation based on the output of the discriminator: the perturbation is halved if the discriminator output falls within the range [0.3, 0.7] (ATLD-IMT+). Under this setting, our approach achieves high performance against adversarial attacks without sacrificing accuracy on clean data.

Similarly, the accuracies on CIFAR-100 and SVHN are shown in Table 2, with attack iterations T = 20, 100 for both PGD and CW for conciseness. Although our method is slightly weaker than Feature Scattering under the FGSM attack on CIFAR-100, overall our proposed method ATLD achieves state-of-the-art performance over all the other approaches under various adversarial attacks. Furthermore, our ATLD-IMT version exceeds Feature Scattering by almost 19.2% and 23.8% against the CW100 attack on CIFAR-100 and SVHN, respectively. More details on the defense against white-box attacks under different attack budgets can be found in the Appendix.

5.2 DEFENDING BLACK-BOX ATTACKS

To further verify the robustness of ATLD, we conduct transfer-based black-box attack experiments on CIFAR-10. More black-box attack results on CIFAR-100 and SVHN are listed in the Appendix. Four different models are used to generate the test-time attacks: the vanilla training model, the adversarial training with PGD model, the Feature Scattering model, and our model. As demonstrated by the results in Table 3, our proposed approach achieves competitive performance in almost all the cases. Specifically, ATLD outperforms Feature Scattering significantly in 8 cases, while it demonstrates comparable or slightly worse accuracy in the other 3 cases.
It deserves our attention that ATLD-IMT appears to have a negative impact under black-box attacks, though it still performs much better than PGD. This may be explained from several aspects. On the one hand, the distributions of adversarial examples produced by different models may differ significantly in the latent space; on the other hand, our discriminator lacks the ability to deal with unseen distributions, since during training it only distinguishes one type of adversarial examples from the natural data. We leave the investigation of this topic for future work.

6 CONCLUSION

We have developed a novel adversarial training method which leverages both local and global information to defend against adversarial attacks. In contrast, existing adversarial training methods mainly generate adversarial perturbations in a local and supervised fashion, which could limit the model's generalization. We have established our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate globally the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize the perturbed examples accurately as well as to enforce the invariance between the two latent distributions. Extensive empirical evaluations have shown the effectiveness of our proposed model when compared with the recent state-of-the-art in defending against adversarial attacks in both the white-box and black-box settings.

APPENDIX

A LIST OF MAJOR NOTATION

For clarity, we list the major notations used in our model.

• $X_{org} = \{x : x \sim Q_0\}$: the set of clean data samples, where $Q_0$ is their underlying distribution;
• $X_p = \{x' : x' \in B(x, \epsilon),\ \forall x \sim Q_0\}$: the set of perturbed samples; each element $x' \in X_p$ lies in the $\epsilon$-neighborhood of a clean example $x \sim Q_0$;
• $f_\theta$: the mapping from the input to the latent features of the last hidden layer (i.e., the layer before the softmax layer);
• $Q_\theta$: the underlying distribution of the latent features $f_\theta(x)$ for all $x \in X_{org}$;
• $P_\theta$: the underlying distribution of the latent features $f_\theta(x')$ for all $x' \in X_p$;
• $\mathcal{P}$: the feasible region of the latent distribution $P_\theta$, defined as $\mathcal{P} \triangleq \{P : f_\theta(x') \sim P \text{ subject to } \forall x \sim Q_0,\ x' \in B(x, \epsilon)\}$;
• $X_{adv}$: the set of the worst perturbed samples, or manifold adversarial examples; each element $x^{adv} \in X_{adv}$ lies in the $\epsilon$-neighborhood of a clean example $x \sim Q_0$;
• $P^*_\theta$: the worst latent distribution within the feasible region $\mathcal{P}$, i.e., the one leading to the largest divergence; equivalently, the underlying distribution of the latent features $f_\theta(x^{adv})$ for all $x^{adv} \in X_{adv}$.

B ADDITIONAL EXPERIMENT DETAILS

B.1 MODEL ROBUSTNESS AGAINST PGD AND CW ATTACKERS UNDER DIFFERENT ATTACK BUDGETS

We further evaluate model robustness against PGD and CW attacks under different attack budgets, with the number of attack steps fixed at 20. These results are shown in Figure 3. We observe that the performance of adversarial training with PGD (AT) drops quickly as the attack budget increases. The Feature Scattering method (FS) improves model robustness across a wide range of attack budgets. The proposed approach ATLD-IMT further boosts the performance over Feature Scattering by a large margin under different attack budgets, especially under the CW attack, except that ATLD-IMT is slightly inferior to Feature Scattering under the PGD attack with budget $\epsilon = 20$ on CIFAR-10.
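A minimal sketch of the budget-sweep protocol in B.1 follows, assuming a plain PGD attacker following the update rule of Eq. (2) in Section 3.1; the step size, the /255 pixel scaling, and the helper names are illustrative assumptions rather than the paper's exact evaluation code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step, n_iter=20):
    """Label-based PGD following Eq. (2); used here only for evaluation."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_iter):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
    return x_adv.detach()

def budget_sweep(model, loader, budgets=(2, 4, 8, 12, 16, 20)):
    """Robust accuracy under PGD-20 for a range of L_inf budgets (cf. B.1)."""
    model.eval()
    for b in budgets:
        correct = total = 0
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps=b / 255, step=b / 255 / 4)
            with torch.no_grad():
                correct += (model(x_adv).argmax(1) == y).sum().item()
            total += y.numel()
        print(f"eps={b}/255: robust accuracy = {correct / total:.4f}")
```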
B.2 MODEL ROBUSTNESS AGAINST AUTOATTACK AND RAYS

As shown in (Croce & Hein, 2020; Chen & Gu, 2020), several models (such as Feature Scattering) can achieve sufficiently high robustness against the PGD and CW attacks, yet may fail to defend against stronger attacks. To further evaluate model robustness in this regime, we evaluate our proposed method ATLD-IMT+ against the AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020) attacks with $L_\infty$ budget $\epsilon = 8$ on CIFAR-10 and CIFAR-100. We first compare the accuracy of the proposed ATLD-IMT+ with several competitive methods in defending against AutoAttack (AA) and RayS on CIFAR-10 in Table 4, including: (1) traditional Adversarial Training with PGD (AT) (Madry et al., 2017); (2) TRADES, which trades adversarial robustness off against accuracy (Zhang et al., 2019); (3) Feature Scattering, which generates adversarial examples by considering the inter-sample relationship (Zhang & Wang, 2019); (4) Robust-overfitting, which improves adversarial robustness by simply using early stopping (Rice et al., 2020); (5) Pretraining, which improves adversarial robustness with pre-training (Hendrycks et al., 2019); (6) WAR, which mitigates the perturbation stability deterioration on wider models (Wu et al., 2020); (7) RTS, which achieves high robust accuracy with a semi-supervised (self-training) learning procedure (Carmon et al., 2019); and (8) Gowal et al. (2020), which achieves state-of-the-art results by combining larger models, Swish/SiLU activations, and model weight averaging. These comparison algorithms attain the most competitive performance in defending against the AA attack. Overall, our proposed method achieves a clear superiority over all the defense approaches on both clean data and adversarial examples (except on clean data, where ours is slightly inferior to Gowal et al. (2020), which is however trained with additional data). Note that Pretraining, WAR, and Gowal et al. (2020), marked with a footnote, require additional data for training (e.g. unlabeled data or pre-training). We also report the accuracy of ATLD-IMT+ against the state-of-the-art methods on CIFAR-100 in Table 5 under AutoAttack (AA). Our proposed method again achieves significantly better performance on both clean data and AA-attacked examples than all the other defense approaches (without data augmentation). Furthermore, while our ATLD-IMT+ method is just slightly inferior to Gowal et al. (2020) (which is trained with additional data), it is substantially ahead of the normal version of Gowal et al. (2020).
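For reference, here is a sketch of the AA evaluation in B.2 using the public auto-attack package (https://github.com/fra31/auto-attack); the batch size and pixel scaling are assumptions, and the model is assumed to output logits. Evaluating ATLD-IMT+ this way would additionally require wrapping the IMT transformation into the model's forward pass.

```python
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

def evaluate_autoattack(model, x_test, y_test, eps=8 / 255, batch_size=250):
    """Run the standard AutoAttack ensemble at L_inf budget eps (cf. B.2)."""
    model.eval()
    adversary = AutoAttack(model, norm='Linf', eps=eps, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=batch_size)
    with torch.no_grad():
        robust_acc = (model(x_adv).argmax(1) == y_test).float().mean().item()
    return robust_acc
```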
B.3 BLACK-BOX RESULTS ON SVHN AND CIFAR-100

We conduct further evaluations of transfer-based black-box attacks on SVHN and CIFAR-100, with results reported in Table 6. Our proposed method overall outperforms Feature Scattering in most of the cases on SVHN. Surprisingly, the adversarial training method, i.e. PGD, performs better than both our method and Feature Scattering in three cases. This partially reveals the more challenging nature of defending against black-box attacks compared with white-box attacks. On CIFAR-100, our method and Feature Scattering are comparable: the performance of the two methods differs little, though our method outperforms Feature Scattering significantly under PGD20 and CW20 against adversarial attacks generated from the Feature Scattering model. Overall, though the proposed ATLD method may not lead to remarkably higher performance than the current state-of-the-art algorithms in defending against black-box attacks (as we observed in the white-box case), it still delivers overall better or comparable performance. We again leave further exploration of defending against black-box attacks for future work.

B.4 ILLUSTRATION OF THE OVERLAID BOUNDARY CHANGE OF DIFFERENT METHODS

We present a toy example in Figure 4 to illustrate how the various methods affect the decision boundary once adversarial training is applied. In Figure 4, (a) shows the decision boundary trained with clean data; (b) shows the decision boundary adversarially trained with samples perturbed by PGD; (c) presents the decision boundary given by the adversarial training of Feature Scattering; and (d) illustrates the decision boundary trained with our proposed ATLD. Clearly, both PGD (Figure 4(b)) and FS (Figure 4(c)) vary the original decision boundary significantly. Moreover, adversarial training with PGD corrupts the data manifold completely, while FS appears able to retain the data manifold information partially, since it considers the inter-sample relationship locally. Nonetheless, its decision boundary appears non-smooth, which may degrade performance. In contrast, as shown in Figure 4(d), our proposed method retains the data manifold globally and varies the decision boundary only slightly. This may explain why our proposed ATLD method outperforms the other approaches.

B.5 FURTHER DETAILS OF ATLD-IMT

We elaborate the inference procedure of our IMT in this section. The overall architecture of ATLD-IMT is plotted in Figure 5. A test sample $x$ is fed into the classifier, and the discriminator outputs its prediction. A special IMT perturbation is then computed from the discriminator loss $\log D^0_W$ and added back to $x$; in this way, the sample is pushed towards the manifold of natural samples, which is supposed to be further away from the decision boundary. The prediction of the transformed sample $x_t$ by the adversarially-trained classifier is then output as the label of $x$. To illustrate the effect of ATLD-IMT clearly, we conduct additional toy experiments, shown in Figure 6, where we plot the clean or natural data, the data perturbed by the PGD attack, and the data adjusted by ATLD-IMT in (a), (b), and (c), respectively. The decision boundary is given by ATLD in all three sub-figures. In (a), it deserves our attention that the boundary learned by ATLD classifies natural data well compared to PGD and Feature Scattering, as shown in Section B.4. As observed in (b), the perturbations generated by PGD push the natural samples toward, or even across, the decision boundary. Our proposed IMT pushes the samples towards the manifold of natural examples, as observed in (c). Since the manifold of natural examples is more separable, this may further increase the classification performance, as observed in the experiments.

B.6 ILLUSTRATION OF VECTOR FIELD OF DIFFERENT PERTURBATION SCHEMES

C DETAILED DERIVATION

In this section, we provide the details of the derivation of the main objective function (6) and elaborate on how to compute the adversarial examples and the transformed examples.
C.1 DERIVATION FOR THE MAIN OBJECTIVE FUNCTION (6)

We start with minimizing the largest $f$-divergence between the latent distributions $P_\theta$ and $Q_\theta$, induced by the perturbed examples $x'$ and the natural examples $x$, and denote their corresponding probability density functions by $p(z)$ and $q(z)$. According to Eq. (3), we have

$$\min_\theta \max_{P_\theta \in \mathcal{P}} D_f(P_\theta \| Q_\theta) = \min_\theta \max_{p(z)} \int_{\mathcal{Z}} q(z) \sup_{t \in \mathrm{dom} f^*} \Big\{ t \frac{p(z)}{q(z)} - f^*(t) \Big\} dz$$
$$\ge \min_\theta \max_{p(z)} \sup_{T \in \mathcal{T}} \Big( \int_{\mathcal{Z}} p(z) T(z)\, dz - \int_{\mathcal{Z}} q(z) f^*(T(z))\, dz \Big)$$
$$= \min_\theta \max_{P_\theta \in \mathcal{P}} \sup_W \big\{ \mathbb{E}_{z \sim P_\theta}[g_f(V_W(z))] + \mathbb{E}_{z \sim Q_\theta}[-f^*(g_f(V_W(z)))] \big\}$$
$$= \min_\theta \sup_W \Big\{ \mathbb{E}_{x \sim \mathcal{D}} \Big\{ \max_{x' \in B(x, \epsilon)} \big[g_f(V_W(f_\theta(x')))\big] + \big[-f^*(g_f(V_W(f_\theta(x))))\big] \Big\} \Big\} \quad (9)$$

To compute the Jensen-Shannon divergence between $P_\theta$ and $Q_\theta$, we set $g_f(t) = -\log(1 + e^{-t})$ and $f^*(g) = -\log(2 - e^g)$. Then we have

$$\min_\theta \max_{P_\theta \in \mathcal{P}} D_{JS}(P_\theta \| Q_\theta) \ge \min_\theta \sup_W \Big\{ \mathbb{E}_{x \sim \mathcal{D}} \Big\{ \max_{x' \in B(x, \epsilon)} \big[\log D_W(f_\theta(x'))\big] + \big[\log(1 - D_W(f_\theta(x)))\big] \Big\} \Big\} \quad (10)$$

where $D_W(x) = 1/(1 + e^{-V_W(x)})$. Optimizing (10) is equivalent to optimizing a lower bound of the Jensen-Shannon divergence between $P_\theta$ and $Q_\theta$. By disentangling the computation of adversarial examples from Eq. (10), and further adding the classification loss of the classifier, $L_f$, and that of the discriminator, $L_d^{1:C}$, we obtain the final objective:

$$\min_\theta \Big\{ \sup_W \underbrace{\sum_{i=1}^{N} \big[\log D^0_W(f_\theta(x_i^{adv})) + \log(1 - D^0_W(f_\theta(x_i)))\big]}_{L_d^0} + \underbrace{\sum_{i=1}^{N} L(x_i^{adv}, y_i; \theta)}_{L_f} + \min_W \underbrace{\sum_{i=1}^{N} \big[\ell(D^{1:C}_W(f_\theta(x_i)), y_i) + \ell(D^{1:C}_W(f_\theta(x_i^{adv})), y_i)\big]}_{L_d^{1:C}} \Big\},$$
$$\text{s.t.} \quad x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \log(1 - D^0_W(f_\theta(x_i)))\big]}_{L_d^0} \quad (11)$$

C.2 COMPUTATION OF THE ADVERSARIAL AND TRANSFORMED EXAMPLES

To compute the adversarial example, we need to solve the following problem:

$$x_i^{adv} = \arg\max_{x_i' \in B(x_i, \epsilon)} \underbrace{\big[\log D^0_W(f_\theta(x_i')) + \log(1 - D^0_W(f_\theta(x_i)))\big]}_{L_d^0} \quad (12)$$

This can be reformulated as computing the adversarial perturbation:

$$r_i^{adv} = \arg\max_{\|r_i\|_\infty \le \epsilon} L_d^0(x_i + r_i, \theta) \quad (13)$$

We first consider the more general case $\|r_i\|_p \le \epsilon$ and expand (13) with a first-order Taylor expansion:

$$r_i^{adv} = \arg\max_{\|r_i\|_p \le \epsilon} L_d^0(x_i, \theta) + \nabla_x F^{T} r_i \quad (14)$$

where $F = L_d^0(x_i, \theta)$. Problem (14) reduces to

$$\max_{\|r_i\|_p = \epsilon} \nabla_x F^{T} r_i \quad (15)$$

We solve it with the Lagrange multiplier method:

$$\nabla_x F^{T} r_i = \lambda(\|r_i\|_p - \epsilon) \quad (16)$$

Taking the first derivative with respect to $r_i$:

$$\nabla_x F = \lambda \frac{r_i^{p-1}}{p\big(\sum_j (r_i^j)^p\big)^{1-\frac{1}{p}}} \quad (17)$$

Using the constraint $\|r_i\|_p = \epsilon$, this becomes

$$\nabla_x F = \frac{\lambda}{p}\Big(\frac{r_i}{\epsilon}\Big)^{p-1}, \qquad (\nabla_x F)^{\frac{p}{p-1}} = \Big(\frac{\lambda}{p}\Big)^{\frac{p}{p-1}}\Big(\frac{r_i}{\epsilon}\Big)^{p} \quad (18)$$

Summing over both sides, we have

$$\sum_j (\nabla_x F)_j^{\frac{p}{p-1}} = \sum_j \Big(\frac{\lambda}{p}\Big)^{\frac{p}{p-1}}\Big(\frac{r_i^j}{\epsilon}\Big)^{p} \quad (19)$$

$$\|\nabla_x F\|_{p^*}^{p^*} = \Big(\frac{\lambda}{p}\Big)^{p^*} \cdot 1 \quad (20)$$

where $p^*$ is the dual of $p$, i.e. $\frac{1}{p} + \frac{1}{p^*} = 1$, and we use $\sum_j (r_i^j/\epsilon)^p = 1$. We therefore have

$$\frac{\lambda}{p} = \|\nabla_x F\|_{p^*} \quad (21)$$

Combining (18) and (21), we obtain

$$r_i^* = \epsilon\, \mathrm{sgn}(\nabla_x F)\Big(\frac{|\nabla_x F|}{\|\nabla_x F\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_{p^*}}\Big)^{\frac{1}{p-1}} \quad (22)$$

In this paper, we set $p$ to $\infty$. Then we have

$$r_i^* = \lim_{p \to \infty} \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0)\Big(\frac{|\nabla_x L_d^0|}{\|\nabla_x L_d^0\|_{1}}\Big)^{0} = \epsilon\, \mathrm{sgn}(\nabla_x L_d^0) \quad (23)$$

We then obtain the adversarial example:

$$x_i^* = x_i + \epsilon\, \mathrm{sgn}(\nabla_x L_d^0) \quad (24)$$

To compute the transformed example, we need to solve the following problem:

$$r^* = \arg\min_{\|r\|_\infty \le \epsilon} \log D^0_W(f_\theta(x + r)). \quad (25)$$

With a similar method, we can easily obtain the transformed example $x_t$:

$$x_t = x - \epsilon\, \mathrm{sgn}(\nabla_x \log D^0_W). \quad (26)$$
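As a companion to Eqs. (24) and (26), the sketch below computes the closed-form one-step adversarial example and the IMT-transformed example via a single sign-gradient step on $\log D^0_W(f_\theta(\cdot))$. It reuses the assumed (features, logits) classifier signature and the discriminator head from the earlier sketches, so the names are illustrative rather than the paper's.

```python
import torch
import torch.nn.functional as F

def _manifold_logprob_grad(clf, disc, x):
    """Gradient of log D_W^0(f_theta(x)) with respect to the input x."""
    x = x.detach().requires_grad_(True)
    z, _ = clf(x)                       # latent features f_theta(x)
    m_logit, _ = disc(z)                # manifold logit, so log D^0 = logsigmoid
    loss = F.logsigmoid(m_logit).sum()
    grad, = torch.autograd.grad(loss, x)
    return grad

def closed_form_adversary(clf, disc, x, eps):
    """Eq. (24): x* = x + eps * sgn(grad), ascending the x'-dependent part
    of L_d^0, namely log D_W^0(f_theta(x'))."""
    return (x + eps * _manifold_logprob_grad(clf, disc, x).sign()).detach()

def imt_transform(clf, disc, x, eps):
    """Eq. (26): x_t = x - eps * sgn(grad of log D_W^0), pushing the input
    towards the natural-example manifold before classification."""
    return (x - eps * _manifold_logprob_grad(clf, disc, x).sign()).detach()
```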
1. What is the focus of the paper regarding adversarial attacks and training? 2. What are the strengths of the proposed approach in combining different ideas? 3. What are the weaknesses of the paper regarding its evaluations and comparisons with other works? 4. Do you have any concerns about the significance of the novel approach? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review Summary: This paper considers both the local and global information in adversarial attacks for adversarial training, where the authors design an adversarial framework containing a discriminator and a classifier. The idea is interesting and the paper is easy to follow. However, I still have some concerns below:
1. The novelty of this work is that it combines the idea of PGD (local information) and Feature Scattering (global information).
2. More importantly, the evaluation is not sufficient. Even though Feature Scattering considers global information, many attack methods have shown that the robustness of Feature Scattering was overestimated, such as [1][2][3]. So I think evaluating only on PGD and CW is not enough.
3. There are few analysis experiments for the proposed method; more analysis experiments are needed besides the comparisons.
[1] Feature Attack: https://openreview.net/forum?id=Syejj0NYvr&noteId=rkeBhuBMjS [2] RayS: A Ray Searching Method for Hard-label Adversarial Attack. KDD 2020. [3] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. ICML 2020.
ICLR
Title
Improving Model Robustness with Latent Distribution Locally and Globally
Abstract
We propose a novel adversarial training method which leverages both local and global information to defend against adversarial attacks. Existing adversarial training methods usually generate adversarial perturbations locally in a supervised manner and fail to consider the data manifold information in a global way. Consequently, the resulting adversarial examples may corrupt the underlying data structure and are typically biased towards the decision boundary. In this work, we exploit both the local and global information of the data manifold to generate adversarial examples in an unsupervised manner. Specifically, we design our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as to enforce the invariance between the two latent distributions. We conduct a series of analyses on the model robustness and also verify the effectiveness of our proposed method empirically. Experimental results show that our method substantially outperforms the recent state-of-the-art (i.e. Feature Scattering) in defending against adversarial attacks by a large accuracy margin (e.g. 17.0% and 18.1% on the SVHN dataset, 9.3% and 17.4% on the CIFAR-10 dataset, and 6.0% and 16.2% on the CIFAR-100 dataset for defending against the PGD20 and CW20 attacks, respectively).
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved impressive performance on a broad range of datasets, yet they can be easily fooled by adversarial examples or perturbations (LeCun et al., 2015; He et al., 2016; Gers et al., 1999). Adversarial examples have been shown to be ubiquitous across tasks such as image classification (Goodfellow et al., 2014), segmentation (Fischer et al., 2017), and speech recognition (Carlini & Wagner, 2018). Overall, adversarial examples raise great concerns about the robustness of learning models and have drawn enormous attention in recent years. To defend against adversarial examples, great efforts have been made to improve model robustness (Kannan et al., 2018; You et al., 2019; Wang & Zhang, 2019; Zhang & Wang, 2019). Most of them are based on adversarial training, i.e. training the model with adversarially-perturbed samples rather than clean data (Goodfellow et al., 2014; Madry et al., 2017; Lyu et al., 2015). In principle, adversarial training is a min-max game between the adversarial perturbations and the classifier: the indistinguishable adversarial perturbations are designed to mislead the output of the classifier, while the classifier is trained to produce accurate predictions for these perturbed inputs. Currently, the adversarial perturbations are mainly computed by enforcing the output invariance in a supervised manner (Madry et al., 2017). Despite its effectiveness in some scenarios, it has recently been observed that these approaches may still be limited in defending against adversarial examples. In particular, we argue that current adversarial training approaches are typically conducted in a local and supervised way and fail to consider globally the overall data manifold information; such information, however, proves crucially important for attaining better generalization.
As a result, the generated adversarial examples may corrupt the underlying data structure and are typically biased towards the decision boundary. Therefore, the well-generalizing features inherent to the data distribution might be lost, which limits the ability of DNNs to defend against adversarial examples even when adversarial training is applied (Ilyas et al., 2019a; Schmidt et al., 2018). For illustration, we show a toy example in Figure 1. As clearly observed, adversarially-perturbed examples generated by PGD, one of the most successful adversarial training methods, corrupt the data manifold, which would inevitably lead to poor performance if training is conducted on these perturbed examples. On the other hand, the current state-of-the-art method Feature Scattering (Zhang & Wang, 2019) can partially alleviate this problem but still leads to corruption of the data manifold. To address this limitation, we propose a novel method called Adversarial Training with Latent Distribution (ATLD), which additionally considers the data distribution globally in an unsupervised fashion. In this way, the data manifold can be well preserved, which is beneficial for attaining better model generalization. Moreover, since label information is not required when computing the adversarial perturbations, the resulting adversarial examples are not biased towards the decision boundary. This can be clearly observed in Figure 1(d). Our method can be divided into two steps. First, we train the deep model with adversarial examples that maximize the variance between the latent distributions of clean data and the adversarial counterpart, rather than maximizing the loss function. We reformulate this as a minimax game between a discriminator and a classifier: the adversarial examples are crafted by the discriminator to implicitly separate the latent distributions of clean and perturbed data, while the classifier is trained to decrease the discrepancy between these two latent distributions as well as to promote accurate classification of the adversarial examples, as Figure 2 shows. Then, during the inference procedure, we generate specific perturbations through the discriminator network to diminish the impact of the adversarial attack, as shown in Figure 6 in the Appendix. On the empirical front, with toy examples we show that our proposed method can preserve more information of the original distribution and learn a better decision boundary than the existing adversarial training methods. We also test our method on three different datasets, CIFAR-10, CIFAR-100, and SVHN, with the well-known PGD, CW, and FGSM attacks. Our ATLD method outperforms the state-of-the-art methods by a large margin; e.g. ATLD improves over Feature Scattering (Zhang & Wang, 2019) by 17.0% and 18.1% on SVHN for the PGD20 and CW20 attacks. Our method also shows a large superiority over the conventional adversarial training method (Madry et al., 2017), boosting the performance by 32.0% and 30.7% on SVHN for the PGD20 and CW20 attacks.
2 RELATED WORK
Adversarial Training. Adversarial training is a family of techniques to improve model robustness (Madry et al., 2017; Lyu et al., 2015). It trains DNNs with adversarially-perturbed samples instead of clean data. Some approaches extend conventional adversarial training by injecting adversarial noise into the hidden layers to boost the robustness of the latent space (Ilyas et al., 2019b; You et al., 2019; Santurkar et al., 2019; Liu et al., 2019).
All of these approaches generate adversarial examples by maximizing the loss function with label information. However, the structure of the data distribution is destroyed, since the perturbed samples can be highly biased towards a non-optimal decision boundary (Zhang & Wang, 2019). Our proposed method follows a training scheme similar to adversarial training, replacing clean data with the perturbed counterpart. Nevertheless, our method generates the adversarial perturbations without label information, which weakens the impact of a non-optimal decision boundary and retains more information of the underlying data distribution.
Manifold-based Adversarial Training. Song et al. (2017) propose to generate adversarial examples by projecting onto a proper manifold. Zhang & Wang (2019) leverage manifold information, in the form of the inter-sample relationship within a batch, to generate adversarial perturbations. Virtual Adversarial Training and Manifold Adversarial Training have been proposed to improve model generalization and robustness against adversarial examples by ensuring the local smoothness of the data distribution (Zhang et al., 2018; Miyato et al., 2017). Other methods are designed to enforce local smoothness around the natural examples by penalizing the difference between the outputs of adversarial examples and their clean counterparts (Kannan et al., 2018; Chan et al., 2020; Jakubovitz & Giryes, 2018). All of these methods leverage only the local information of the distribution or manifold. In contrast, our method generates the perturbations by additionally considering the structure of the distribution globally.
Unsupervised Domain Adversarial Training. Domain Adversarial Training shares a training scheme similar to our method, in which the classifier and discriminator compete with each other (Odena et al., 2017; Long et al., 2018; Ganin et al., 2016). However, its objective is to reduce the gap between the source and target distributions in the latent space, and the discriminator is used to measure the divergence between these two distributions. The training scheme of our method is also based on competition between a classifier and a discriminator; different from the previous framework, however, the discriminator in our method is used to capture the information of the distributions of adversarial examples and their clean counterparts in the latent space, which helps generate the adversarial perturbations.
GAN-based Adversarial Training Methods. Several GAN-based methods leverage GANs to learn the clean data distribution and purify adversarial examples by projecting them onto the clean data manifold before classification (Meng & Chen, 2017; Metzen et al., 2017). The GAN framework can also be used to generate adversarial examples (Baluja & Fischer, 2018): the generator produces adversarial examples to deceive both the discriminator and the classifier, while the discriminator and classifier attempt to differentiate the adversaries from clean data and to produce the correct labels, respectively. Some adversary detector networks, which can be well aligned with our method, have been proposed to detect adversarial examples (Gong et al., 2017; Grosse et al., 2017). In these works, a pretrained network is augmented with a binary detector network, and the training of the pretrained network and the detector involves generating adversarial examples that maximize their losses.
Differently, our method generates adversarial examples simply to minimize the loss of the discriminator and feeds them as the training set to the classifier. Such adversarial examples are deemed to induce latent representations that differ most from those of the clean counterparts.
3 BACKGROUND
3.1 ADVERSARIAL TRAINING
Let us first introduce the widely-adopted adversarial training method for defending against adversarial attacks. Specifically, it solves the following minimax optimization problem through training:

$$\min_\theta \left\{ \mathbb{E}_{(x,y) \sim \mathcal{D}}\Big[ \max_{x' \in S_x} L(x', y; \theta) \Big] \right\}, \quad (1)$$

where $x \in \mathbb{R}^n$ and $y \in \mathbb{R}$ are respectively the clean data samples and the corresponding labels drawn from the dataset $\mathcal{D}$, and $L(\cdot)$ is the loss function of the DNN with model parameters $\theta \in \mathbb{R}^m$. Furthermore, we denote the clean data distribution by $Q_0$, i.e. $x \sim Q_0$, and denote by $x' \in \mathbb{R}^n$ the perturbed samples in the feasible region $S_x \triangleq \{z : z \in B(x, \epsilon) \cap [-1.0, 1.0]^n\}$, with $B(x, \epsilon) \triangleq \{z : \|x - z\|_\infty \le \epsilon\}$ being the $\ell_\infty$-ball centered at $x$ with radius $\epsilon$. By defining $f_\theta(\cdot)$ as the mapping from the input layer to the last latent layer, we can also rewrite the loss function of the DNN as $\ell(f_\theta(x), y)$, where $\ell(\cdot)$ denotes the loss computed from the last hidden layer of the DNN, e.g. the cross-entropy loss typically used in DNNs.
Whilst the outer minimization can be conducted by training to find the optimal model parameters $\theta$, the inner maximization essentially generates the strongest adversarial attacks for a given set of model parameters $\theta$. In general, the solution to the minimax problem can be found by training a network to minimize the loss on worst-case adversarial examples, so as to attain adversarial robustness. Given a set of model parameters $\theta$, the commonly adopted solution to the inner maximization problem leads to either a one-step (e.g. FGSM) or a multi-step (e.g. PGD) approach (Madry et al., 2017). In particular, for a given single point $x$, the strongest adversarial example $x'$ at the $t$-th iteration can be obtained iteratively by the following update rule:

$$x^{t+1} = \Pi_{S_x}\big(x^t + \alpha \cdot \mathrm{sgn}(\nabla_x L(x^t, y; \theta))\big), \quad (2)$$

where $\Pi_{S_x}(\cdot)$ is a projection operator onto the region $S_x$, $\mathrm{sgn}(\cdot)$ is the sign function, and $\alpha$ is the update step size. For initialization, $x^0$ can be generated by random sampling in $B(x, \epsilon)$.
It appears in (1) that each perturbed sample $x'$ is obtained individually by leveraging its loss function $L(x', y; \theta)$ with its label $y$. However, without considering the inter-relationship between samples, we may lose the global knowledge of the data manifold structure, which proves highly useful for attaining better generalization. This issue has been studied in a recent work (Zhang & Wang, 2019), where a new method named Feature Scattering made a first step towards considering the inter-sample relationship within a batch; unfortunately, this approach did not take full advantage of the global knowledge of the entire data distribution. In addition, by relying on the maximization of the loss function, the adversarially-perturbed data samples may be highly biased towards the decision boundary, which potentially corrupts the structure of the original data distribution, especially when the decision boundary is non-optimal (see Figure 1 again for an illustration).
3.2 DIVERGENCE ESTIMATION
To measure the discrepancy between two distributions, statistical divergence measures (e.g. Kullback-Leibler and Jensen-Shannon divergence) have been proposed.
In general, given two distributions P and Q with a continuous density function p(x) and q(x) respectively, f -divergence is defined as Df (P||Q) , ∫ X q(x)f ( p(x) q(x) ) dx. The exact computation of f -divergence is challenging, and the estimation from samples has attracted much interest. For instance, leveraging the variational methods, Nguyen et al. (2010) propose a method for estimating f -divergence from only samples; Nowozin et al. (2016) extend this method by estimating the divergence with learning the parameters of discriminator. Specifically, the f -divergence between two distributions P and Q can be lowerbounded using Fenchel conjugate and Jensen’s inequality (Nowozin et al., 2016). Df (P||Q) = ∫ X q(x) sup t∈domf∗ {tp(x) q(x) − f∗(t)}dx ≥ sup T∈τ ( ∫ X p(x)T (x)dx− ∫ X q(x)f∗(T (x))dx) = sup W (Ex∼P[gf (VW (x))] + Ex∼Q[−f∗(gf (VW (x)))]), (3) where VW : X → R is a discriminator network with parameter W and gf : R → domf∗ is an output activation function which is determined by the type of discriminator. τ is an arbitrary class of functions T : X → R. f is a convex lower-semicontinous function and f∗ is its conjugate defined by f∗(t) = supu∈domf [ut− f(u)]. The objective of discriminator for GANs is a special case of (3) with the activation function gf (t) = − log(1+e−t) and f∗(g) = − log(2−eg). It approximates the Jense-Shannon divergence between real and fake distributions. Arjovsky et al. (2017) also develop a method to estimate the Wasserstein-distance by neural network. In this paper, these methods will be used to estimate the Jensen-Shannon divergence between latent distributions induced by adversarial and clean examples. 4 ADVERSARIAL TRAINING WITH LATENT DISTRIBUTION As discussed in Section 3.1, the conventional adversarial training methods rely on the knowledge of data labels. As a result, the local information to generate adversarial examples may be biased toward the decision boundary; such individual adversarial example generation does not capture the global knowledge of the data manifold. To alleviate these limitations, we propose a novel method to compute the perturbed samples by leveraging the global knowledge of the whole data distribution and then disentangling them from the data labels and the loss function. Generally speaking, the perturbations are generated to enlarge the variance between latent distributions induced by clean and adversarial data. Formally, we try to identify the set of adversarial examples Xadv that yield in the latent space a distribution P ∗θ through fθ(·) that is the most different from the latent distribution Qθ induced by the clean samples Xorg = {x : x ∼ Q0}, without resorting to the corresponding labels Y . In other words, the resulting adversarial examples can be deemed as manifold adversarial examples, which ‘deceive’ the manifold rather than fool the classifier as defined in the traditional adversarial examples. It is noted that the latent space to be perturbed could be any hidden layer though it is defined in the last hidden layer before the softmax of a DNN in this paper. The optimization problem of the proposed adversarial training can then be reformulated as follows: min θ {Efθ(xadv)∼P∗θ [l(fθ(x adv), y)] +Df (P ∗ θ ||Qθ)} (4) s.t. P ∗θ = arg max Pθ∈P [Df (Pθ||Qθ)] (5) where l(·) and y are similarly defined as before, and Df (·) is the f -divergence measure of two distributions. 
P = {P : fθ(x′) ∼ P subject to ∀x ∼ Q0, x′ ∈ B(x, )} is the feasible region for the latent distribution Pθ which is induced by the set of perturbed examplesXp through fθ(·). fθ(x′) and fθ(xadv) represents the latent features of the perturbed example x′ and adversarial example xadv respectively. Intuitively, we try to obtain the worst latent distribution P ∗θ which is induced by Xadv through fθ(·) within the region P , while the model parameter θ is learned to minimize the classification loss on the latent feature fθ(xadv) ∼ P ∗θ (or equivalently adversarial example xadv ∈ Xadv) and the f -divergence between the latent distributions P ∗θ and Qθ induced by adversarial examples Xadv and clean data Xorg. It is still challenging to solve the above optimization problem, since both the objective function and the constraint are entangled with the adversarial examples Xadv and the model parameters θ. To make the problem more tractable, we propose a novel Adversarial Training with Latent Distribution (ATLD) method. In the next subsection, by taking into account the entire data distribution globally, we first focus on the constraint and identify the adversarial samples Xadv through the maximization problem. We then solve the minimization of the objective function with the adversarial training procedure. To further enhance the performance, we add specific perturbations named Inference with Manifold Transformation (IMT) in Section 4.2 to input samples for enforcing them towards the more separable natural data manifold. Finally, we classify the transformed data points with the adversarially-trained model. 4.1 GENERATING ADVERSARIAL EXAMPLES FOR TRAINING First, we optimize the constraint (5) to generate the adversarial examples or its induced distribution P ∗θ for training. Intuitively, the adversarial examples Xadv are crafted to maximize the divergence between the latent distributions induced by natural examplesXorg and adversarial counterpartXadv in an unsupervised fashion since no knowledge of labels Y is required. Together with the objective function in (4), our proposed adversarial training method is to minimize such divergence as well as the classification error for adversarial examples Xadv . However, it is a challenging task to evaluate the divergence between two latent distributions. To make it more tractable, we leverage a discriminator network for estimating the Jensen-Shannon divergence between two distributions P ∗θ /Pθ and Qθ according to Section 3.2. It is noted again that the class label information is not used for generating adversarial examples. Hence the adversarial examples are still generated in an unsupervised way. Then, by using (3), the optimization problem in (4) and (5) can be approximated as follows in a tractable way. min θ { N∑ i=1 L(xadvi , yi; θ)︸ ︷︷ ︸ Lf + sup W N∑ i=1 [logDW (fθ(x adv i )) + (1− logDW (fθ(xi)))︸ ︷︷ ︸ Ld ] } s.t. xadvi = arg max x′i∈B(xi, ) [logDW (fθ(x ′ i)) + (1− logDW (fθ(xi)))︸ ︷︷ ︸ Ld ] (6) where N denotes the number of training samples and DW denotes the discriminator network with the sigmoid activation function and parameter W . fθ(xi) is the latent feature of the clean sample xi. DW is used to determine whether the latent feature is from adversary manifold (output the manifold label of the latent feature). For ease of description, we represent the components in Eq. (6) as two parts: Lf and Ld. Ld is the manifold loss and Lf represents the loss function of the classifier network. We now interpret the above optimization problem. By comparing Eq. 
(6) and Eq. (4), it is observed that the Jensen-Shannon divergence between P ∗θ andQθ is approximated by supW ∑N i=1 Ld, and the minimization of the classification loss on adversarial examples is given by minθ ∑N i=1 Lf . The problem (6) is optimized by updating parameters θ and W and crafting adversarial examples {xadvi }Ni=0 iteratively. The whole training procedure can be viewed as the game among three players: the classifier, discriminator, and adversarial examples. The discriminator DW is learned to differentiate the latent distributions of the perturbed examples and clean data via maximizing the loss Ld while the classifier fθ is trained to (1) enforce the invariance between these two distributions to confuse the discriminatorDW by minimizing the loss Ld, and (2) classify the adversarial examples as accurately as possible by minimizing Lf . For each training iteration, the adversarial examples are crafted to make different the adversarial latent distribution and natural one by maximizing Ld. Although DW cannot measure the divergence between the two latent distributions exactly at the first several training steps, it can help evaluate the divergence between distributions induced by perturbed examples and clean ones when the parameters W converges. However, when the latent distributions are multi-modal, which is a real scenario due to the nature of multi-class classification, it is challenging for the discriminator to measure the divergence between such distributions. Several work reveals that there is a high risk of failure in using the discriminator networks to measure only a fraction of components underlying different distributions (Arjovsky & Bottou, 2017; Che et al., 2016). Ma (2018) also shows that two different distributions are not guaranteed to be identical even if the discriminator is fully confused. To alleviate such the problem, we additionally train the discriminator DW to predict the class labels for latent features as (Odena et al., 2017; Long et al., 2018). As a result, the problem (6) can then be reformulated as: min θ { N∑ i=1 L(xadvi , yi; θ)︸ ︷︷ ︸ Lf + sup W N∑ i=1 [logD0W (fθ(x adv i )) + (1− logD0W (fθ(xi)))︸ ︷︷ ︸ L0d ] + min W [l(D1:CW (fθ(xi)), yi) + l(D 1:C W (fθ(x adv i )), yi)]︸ ︷︷ ︸ L1:Cd } s.t. xadvi = arg max x′i∈B(xi, ) [logD0W (fθ(x ′ i)) + (1− logD0W (fθ(xi))︸ ︷︷ ︸ L0d ] (7) HereD0W is the first dimension of the output of the discriminator, which indicates the manifold label of the latent features; D1:CW are the remaining C dimensions of the output of DW , used to output the class label of the latent feature; C denotes the number of classes, and L0d and L 1:C d are the manifold loss and the classification loss for the discriminator network respectively. (The detailed derivation for Eq. (6) and Eq. (7) can be seen in Appendix.) The detailed training procedure of our framework is depicted in Figure 2. Remarks. It is worth noting that the labeled information is not required for generating adversarial examples. Therefore, our method prevents the perturbed examples from highly biasing towards the decision boundary and more information of the original distribution structure is preserved. In addition, since the discriminator is trained with the whole dataset (both clean and adversarial examples), it captures the global information of data manifold. 
Consequently, by training with adversarial examples generated according to the manifold loss of the discriminator, our method can improve the model robustness against adversarial examples with the global structure of data distribution. 4.2 INFERENCE WITH MANIFOLD TRANSFORMATION To enhance the generalization of ATLD, we further develop a new inference method with manifold transformation. Although adversarially-trained models can well recognize adversarial examples, there are still potential examples which are easily misclassified especially for unseen data. In other words, the generalization to adversarial examples is hard to achieve due to the more complex distribution of adversarial examples (Schmidt et al., 2018; Zhai et al., 2019). To alleviate this problem, our proposed inference method first pushes adversarial examples towards the manifold of natural examples which is simpler and further away from the decision boundary than the adversarial distribution. Then the more separable adjusted examples are classified by the adversarially-trained model. Specifically, the input sample is fed into our adversarially-trained model and the discriminator outputs the probability of such a sample lying on the adversarial manifold. If this probability is higher than a certain threshold, we compute the transformed example xt by adding the specific perturbation r∗ to the input sample x to reduce such a probability. This perturbation can be computed as: r∗ = arg min ‖r‖∞≤ logD0W (fθ(x+ r)). (8) Intuitively, the reduction of probability of this data point lying on adversarial manifold indicates that this point moves towards the benign example manifold after adding perturbation r∗. In other words, it becomes more separable since the benign example manifold is further away from the decision boundary. When the probability of the image lying on adversary manifold is lower than threshold, we still add such a perturbation to input image to make it more separable but with smaller magnitude. In the experiment part, we show this perturbation can move the adversarial examples away from the decision boundary. The whole inference procedure can be seen in Figure 5 in Appendix. 5 EXPERIMENTS We conduct experiments on the widely-used datasets, e.g., CIFAR-10, SVHN, and CIFAR-100. Following the Feature Scattering method (Zhang & Wang, 2019), we leverage the wideresnet (Zagoruyko & Komodakis, 2016) as our basic classifier and discriminator model structure. During the training phase, the initial learning rate is empirically set to 0.1 for all three datasets. We train our model 400 epochs with the transition epoch 60, 90 and the decay rate 0.1. The input perturbation budget is set to = 8 with the label smoothing rate as 0.5. We use L∞ perturbation in this paper including all the training and evaluation. We evaluate the various models on white-box attacks and black-box attacks. Under the white-box attacks, we compare the accuracy of the proposed method with several competitive methods, including: (1) the original wideresnet (Standard) trained with natural examples; (2) Traditional Adversarial Training with PGD (AT) (Madry et al., 2017); (3) Triplet Loss Adversarial training (TLA) (Mao et al., 2019); (4) Layer-wise Adversarial Training (LAT): injecting adversarial perturbation into the latent space (Sinha et al., 2019); (5) Bilateral: adversarial perturb on examples and labels both (Wang & Zhang, 2019); (6) Feature-scattering: generating adversarial examples with considering interrelationship of samples (Zhang & Wang, 2019). 
These comparison algorithms present the most competitive performance on defending adversarial attack. Under the black-box attacks, we compare four different algorithms used to generate the test time attacks: Vanilla training with natural examples, adversarial training with PGD, Feature Scattering, and our proposed model. 5.1 DEFENDING WHITE-BOX ATTACKS We show the classification accuracy under several white-box attacks on CIFAR-10, CIFAR-100, SVHN in this section. We first report the accuracy on CIFAR-10 in Table 1 with the attack iterations T = 20, 40, 100 for PGD (Madry et al., 2017) and CW (Carlini & Wagner, 2017). We also conduct more experiments to further evaluate the robustness of our proposed method against more recent attacks, e.g. AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020)) as shown in Appendix B.2. As observed, overall, our proposed method achieves a clear superiority over all the defence approaches on both the clean data and adversarial examples (except that it is slightly inferior to Feature Scattering in FGSM). We also observe one exception that the standard model performs the best on clean data. Our approach performs much better than the other baseline models on PGD and CW attack. Particularly, we improve the performance of the recent state-of-the-art method Feature Scattering almost 3.1% and 5.2% under PGD20 and CW20 attack respectively. With the implementation of Inference with Manifold Transformation (IMT), our approach (ATLD-IMT) is 8.9% and 17.4% higher than the Feature Scattering under PGD20 and CW20 attack respectively. However, the performance on clean data is declined from 93.3% to 86.4% since IMT appears to have a negative effect for classifying clean data. In order to reduce the impact of IMT on the natural data, a threshold is used to limit the perturbation of IMT based on the output of discriminator. The perturbation is halved if the output of discriminator is within the range of [0.3, 0.7] (ATLD-IMT+). Under such setting, our approach could achieve high performance on adversarial attacks without sacrificing its accuracy on clean data. Similarly, the accuracy on CIFAR-100 and SVHN are shown in Table 2 with the attack iterations T = 20, 100 for both PGD and CW for conciseness. Although our method is slightly weaker than Feature Scattering under FGSM attack on CIFAR-100, overall, our proposed method ATLD achieves state-of-the-art performance over all the other approaches under various adversarial attacks. Furthermore, our ATLD-IMT version exceeds Feature Scattering by almost 19.2% and 23.8% against the attack of CW100 on CIFAR-100 and SVHN respectively. More details about the defense of whitebox attacks under different attack budgets can be seen in Appendix. 5.2 DEFENDING BLACK-BOX ATTACKS To further verify the robustness of ATLD, we conduct transfer-based black-box attack experiments on CIFAR-10. More black-box attack results on CIFAR-100 and SVHN are listed in Appendix. Four different models are used for generating test time attacks including the Vanilla Training model, the Adversarial Training with PGD model, the Feature Scattering Training model and our model. As demonstrated by the results in Table 3, our proposed approach can achieve competitive performance almost in all the cases. Specifically, ATLD outperforms Feature Scattering significantly in 8 cases while it demonstrates comparable or slightly worse accuracy in the other 3 cases. 
It deserves our attention that ATLD-IMT appears to have a negative impact on the black-box attacks though it stills performs much better than PGD. This may be explained in several aspects. On one hand, the distributions of adversarial examples produced by different models may differ significantly in the latent space; on the other hand, our discriminator lacks the ability to deal with the unseen distributions since the discriminator only distinguishes one type of adversarial examples from the natural data during training. We will leave the investigation of this topic as future work. 6 CONCLUSION We have developed a novel adversarial training method which leverages both the local and global information to defend adversarial attacks in this paper. In contrast, existing adversarial training methods mainly generate adversarial perturbations in a local and supervised fashion, which could however limit the model’s generalization. We have established our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate globally the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as enforcing the invariance between the two latent distributions. Extensive empirical evaluations have shown the effectiveness of our proposed model when compared with the recent state-of-the-art in defending adversarial attacks in both the white-box and black-box settings. APPENDIX A LIST OF MAJOR NOTATION For clarity, we list the major notations that are used in our model. • Xorg = {x : x ∼ Q0}: the set of clean data samples, where Q0 is its underlying distribution; • Xp = {x′ : x′ ∈ B(x, ), ∀x ∼ Q0}: the set of perturbed samples, the element x′ ∈ Xp is in the -neighborhood of the clean example x ∼ Q0; • fθ: the mapping function from input to the latent features of the last hidden layer (i.e., the layer before the softmax layer); • Qθ: the underlying distribution of the latent feature fθ(x) for all x ∈ Xorg; • Pθ: the underlying distribution of the latent feature fθ(x′) for all x′ ∈ Xp; • P: the feasible region of the latent distribution Pθ, which is defined as P , {P : fθ(x′) ∼ P subject to ∀x ∼ Q0, x′ ∈ B(x, )}. • Xadv: the set of the worst perturbed samples or manifold adversarial examples, the element xadv ∈ Xadv are in the -neighborhood of clean example x ∼ Q0; • P ∗θ : the worst latent distribution within the feasible region P which leads to the largest divergence or the underlying distribution of the latent feature fθ(xadv) for all xadv ∈ Xadv; B ADDITIONAL EXPERIMENT DETAILS B.1 MODEL ROBUSTNESS AGAINST PGD AND CW ATTACKER UNDER DIFFERENT ATTACK BUDGETS We further evaluate the model robustness against PGD and CW attacks under different attack budgets with a fixed attack step of 20. These results are shown in Figure 3. It is observed that the performance of Adversarial Training with the PGD method (AT) drops quickly as the attack budget increases. The Feature Scattering method (FS) can improve the model robustness across a wide range of attack budgets. The proposed approach ADLT-IMT further boosts the performance over Feature Scattering by a large margin under different attack budgets especially under CW attack, except that our ADLTIMT is slightly inferior to Feature Scattering under PGD attack with budget = 20 on CIFAR-10. 
B.2 MODEL ROBUSTNESS AGAINST AUTOATTACK AND RAYS

As shown in (Croce & Hein, 2020; Chen & Gu, 2020), several models (such as Feature Scattering) achieve high robustness against the PGD and CW attacks but may fail to defend against stronger attacks. To further evaluate the model robustness in this regime, we evaluate our proposed method ATLD-IMT+ against the AutoAttack (Croce & Hein, 2020) and RayS (Chen & Gu, 2020) attacks with L∞ budget ε = 8 on CIFAR-10 and CIFAR-100.

We first compare the accuracy of the proposed ATLD-IMT+ in defending AutoAttack (AA) and RayS with several competitive methods on CIFAR-10 in Table 4, including: (1) traditional adversarial training with PGD (AT) (Madry et al., 2017); (2) TRADES: trading adversarial robustness off against accuracy (Zhang et al., 2019); (3) Feature Scattering: generating adversarial examples by considering the inter-sample relationship (Zhang & Wang, 2019); (4) Robust Overfitting: improving adversarial robustness by simply using early stopping (Rice et al., 2020); (5) Pretraining: improving adversarial robustness with pre-training (Hendrycks et al., 2019); (6) WAR: mitigating the perturbation stability deterioration on wider models (Wu et al., 2020); (7) RTS: achieving high robust accuracy with a semi-supervised learning procedure (self-training) (Carmon et al., 2019); (8) Gowal et al. (2020): achieving state-of-the-art results by combining larger models, Swish/SiLU activations, and model weight averaging. These comparison algorithms attain the most competitive performance in defending the AA attack. As observed, overall, our proposed method achieves a clear superiority over all the defense approaches on both the clean data and adversarial examples (except on clean data, where ours is slightly inferior to Gowal et al. (2020), which is however trained with additional data). Note that Pretraining, WAR, and Gowal et al. (2020) (marked with a footnote) require additional data for training (e.g. unlabeled data or pre-training).

We also compare the accuracy of ATLD-IMT+ with the state-of-the-art methods on CIFAR-100 in Table 5 against AutoAttack (AA). Our proposed method again achieves significantly better performance than all the other defense approaches (without data augmentation) on both the clean data and the AA-attacked examples. Furthermore, while our ATLD-IMT+ method is just slightly inferior to Gowal et al. (2020) (which is trained with additional data), it is substantially ahead of the normal version of Gowal et al. (2020).

B.3 BLACK-BOX RESULTS ON SVHN AND CIFAR-100

We conduct more evaluations on the transfer-based black-box attacks on SVHN and CIFAR-100 and report the results in Table 6. It can be observed that our proposed method overall outperforms Feature Scattering in most of the cases on SVHN. Surprisingly, the adversarial training method, i.e. PGD, performs better than both our method and Feature Scattering in three cases. This partially reveals the more challenging nature of defending black-box attacks compared with white-box attacks. On CIFAR-100, our method and Feature Scattering are comparable: the performance of the two methods differs little, though our method outperforms Feature Scattering significantly under PGD20 and CW20 against adversarial attacks generated from the Feature Scattering model.
Overall, though the proposed ATLD method may not lead to remarkably higher performance than the current state-of-the-art algorithms in defending black-box attacks (unlike what we observed in the white-box case), it still yields overall better or comparable performance. We again leave the further exploration of defending black-box attacks as future work.

B.4 ILLUSTRATION OF THE OVERLAID BOUNDARY CHANGE OF DIFFERENT METHODS

We present a toy example in Figure 4 to illustrate how the various methods affect the decision boundary after adversarial training is applied. In Figure 4, (a) shows the decision boundary trained with clean data; (b) shows the decision boundary adversarially trained with the samples perturbed by PGD; (c) presents the decision boundary given by the adversarial training of Feature Scattering; and (d) illustrates the decision boundary trained with our proposed ATLD. Clearly, both PGD (Figure 4(b)) and FS (Figure 4(c)) vary the original decision boundary significantly. Moreover, it can be observed that adversarial training with PGD corrupts the data manifold completely. On the other hand, FS appears able to retain the data manifold information partially, since it considers the inter-sample relationship locally. Nonetheless, its decision boundary appears non-smooth, which may degrade the performance. In contrast, as shown in Figure 4(d), our proposed method aims to retain the data manifold globally and varies the decision boundary only slightly. This may explain why our proposed ATLD method outperforms the other approaches.

B.5 FURTHER DETAILS OF ATLD-IMT

We elaborate on the procedure of our IMT in this section. The overall architecture of ATLD-IMT is plotted in Figure 5. A test sample x is fed into the classifier, and the discriminator outputs its prediction. A special perturbation in IMT is then computed from the discriminator loss and added back to x; in this way, the sample is pushed towards the manifold of natural samples, which is supposed to be further away from the decision boundary. The prediction of the transformed sample xt by the adversarially-trained classifier is then output as the label of x.

To illustrate clearly the effect of our ATLD-IMT, we conduct additional toy experiments as shown in Figure 6, where we respectively plot the clean or natural data, the data perturbed by PGD, and the data adjusted by ATLD-IMT in (a), (b), and (c). Moreover, the decision boundary is given by ATLD in all three sub-figures. In (a), it deserves our attention that the boundary learned by ATLD classifies natural data well compared to PGD and Feature Scattering, as shown in Appendix B.4. As observed in (b), the perturbations generated by PGD push the natural samples toward or even across the decision boundary. Our proposed IMT pushes the samples towards the manifold of natural examples, as observed in (c). Since the manifold of natural examples is more separable, this may further increase the classification performance, as observed in the experiments.

B.6 ILLUSTRATION OF VECTOR FIELD OF DIFFERENT PERTURBATION SCHEMES

[Figure omitted: vector fields of the different perturbation schemes.]
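The end-to-end inference pipeline described in B.5 can be summarized in a few lines. The sketch below is our illustrative PyTorch rendering under assumed module names (`f_theta` for the feature extractor, `clf_head` for the classifier's softmax head, `d0_w` for the discriminator's manifold head); it applies the single-step transformation of Eq. (26) and then classifies the transformed input.

```python
import torch

@torch.enable_grad()
def atld_imt_predict(x, f_theta, clf_head, d0_w, eps):
    """Sketch of ATLD-IMT inference (Figure 5): push x toward the
    natural-data manifold with one signed-gradient step on
    log D^0_W(f_theta(x)), then classify the transformed sample.
    All module names are assumptions made for illustration."""
    x = x.clone().detach().requires_grad_(True)
    log_p = torch.log(d0_w(f_theta(x)) + 1e-12).sum()
    grad = torch.autograd.grad(log_p, x)[0]
    x_t = (x - eps * grad.sign()).detach()   # Eq. (26): x_t = x - eps * sgn(grad)
    with torch.no_grad():
        logits = clf_head(f_theta(x_t))
    return logits.argmax(dim=-1)
```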
C DETAILED DERIVATION

In this section, we provide the details of the derivation of the main objective function (6) and elaborate how to compute the adversarial examples and the transformed examples.

C.1 DERIVATION OF THE MAIN OBJECTIVE FUNCTION (6)

We start with minimizing the largest f-divergence between the latent distributions Pθ and Qθ induced by the perturbed examples x′ and the natural examples x, and denote their corresponding probability density functions by p(z) and q(z). According to Eq. (3), we have

$$\begin{aligned} \min_\theta \max_{P_\theta} D_f(P_\theta\|Q_\theta) &= \min_\theta \max_{p(z)} \int_{\mathcal{Z}} q(z) \sup_{t \in \mathrm{dom}_{f^*}} \Big\{ t\,\frac{p(z)}{q(z)} - f^*(t) \Big\}\, dz \\ &\geq \min_\theta \max_{p(z)} \sup_{T \in \tau} \Big( \int_{\mathcal{Z}} p(z)T(z)\, dz - \int_{\mathcal{Z}} q(z)f^*(T(z))\, dz \Big) \\ &= \min_\theta \max_{P_\theta} \sup_{W} \big\{ \mathbb{E}_{z \sim P_\theta}[g_f(V_W(z))] + \mathbb{E}_{z \sim Q_\theta}[-f^*(g_f(V_W(z)))] \big\} \\ &= \min_\theta \sup_{W} \Big\{ \mathbb{E}_{x \sim \mathcal{D}} \Big\{ \max_{x' \in B(x,\epsilon)} [g_f(V_W(f_\theta(x')))] + [-f^*(g_f(V_W(f_\theta(x))))] \Big\} \Big\} \end{aligned} \tag{9}$$

To compute the Jensen-Shannon divergence between Pθ and Qθ, we set g_f(t) = −log(1+e^{−t}) and f∗(g) = −log(2−e^g). Then we have

$$\min_\theta \max_{P_\theta} D_{JS}(P_\theta\|Q_\theta) \geq \min_\theta \sup_W \Big\{ \mathbb{E}_{x\sim\mathcal{D}} \Big\{ \max_{x'\in B(x,\epsilon)} [\log D_W(f_\theta(x'))] + [\log(1 - D_W(f_\theta(x)))] \Big\} \Big\} \tag{10}$$

where D_W(x) = 1/(1+e^{−V_W(x)}). Optimizing (10) is equivalent to optimizing the lower bound of the Jensen-Shannon divergence between Pθ and Qθ. By disentangling the computation of adversarial examples from Eq. (10) and further considering the classification losses for the classifier (L_f) and the discriminator (L^{1:C}_d), we obtain the final objective:

$$\min_\theta \Big\{ \sup_W \sum_{i=1}^N \underbrace{[\log D^0_W(f_\theta(x^{adv}_i)) + \log(1 - D^0_W(f_\theta(x_i)))]}_{L^0_d} + \underbrace{L(x^{adv}_i, y_i; \theta)}_{L_f} + \min_W \underbrace{[l(D^{1:C}_W(f_\theta(x_i)), y_i) + l(D^{1:C}_W(f_\theta(x^{adv}_i)), y_i)]}_{L^{1:C}_d} \Big\},$$

$$\text{s.t.}\quad x^{adv}_i = \arg\max_{x'_i \in B(x_i,\epsilon)} \underbrace{[\log D^0_W(f_\theta(x'_i)) + \log(1 - D^0_W(f_\theta(x_i)))]}_{L^0_d} \tag{11}$$

C.2 COMPUTATION OF THE ADVERSARIAL EXAMPLE AND THE TRANSFORMED EXAMPLE

To compute the adversarial example, we need to solve the following problem:

$$x^{adv}_i = \arg\max_{x'_i \in B(x_i,\epsilon)} \underbrace{[\log D^0_W(f_\theta(x'_i)) + \log(1 - D^0_W(f_\theta(x_i)))]}_{L^0_d} \tag{12}$$

It can be reformulated as computing the adversarial perturbation:

$$r^{adv}_i = \arg\max_{\|r_i\|_\infty \leq \epsilon} L^0_d(x_i + r_i, \theta) \tag{13}$$

We first consider the more general case ‖r_i‖_p ≤ ε and expand (13) with a first-order Taylor expansion:

$$r^{adv}_i = \arg\max_{\|r_i\|_p \leq \epsilon} L^0_d(x_i, \theta) + \nabla_x F^T r_i \tag{14}$$

where F = L^0_d(x_i, θ). Problem (14) reduces to

$$\max_{\|r_i\|_p = \epsilon} \nabla_x F^T r_i \tag{15}$$

We solve it with the method of Lagrange multipliers, using the Lagrangian

$$\nabla_x F^T r_i - \lambda(\|r_i\|_p - \epsilon) \tag{16}$$

Setting the derivative with respect to r_i to zero gives

$$\nabla_x F = \lambda \frac{r_i^{p-1}}{\big(\sum_j (r_i^j)^p\big)^{1-\frac{1}{p}}} \tag{17}$$

and, using ‖r_i‖_p = ε,

$$\nabla_x F = \lambda \Big(\frac{r_i}{\epsilon}\Big)^{p-1}, \qquad (\nabla_x F)^{\frac{p}{p-1}} = \lambda^{\frac{p}{p-1}} \Big(\frac{r_i}{\epsilon}\Big)^{p} \tag{18}$$

Summing both sides over the coordinates, we have

$$\sum_j (\nabla_x F_j)^{\frac{p}{p-1}} = \lambda^{\frac{p}{p-1}} \sum_j \Big(\frac{r_i^j}{\epsilon}\Big)^{p} \tag{19}$$

$$\|\nabla_x F\|_{p^*}^{p^*} = \lambda^{p^*} \cdot 1 \tag{20}$$

where p∗ is the dual exponent of p, i.e. 1/p + 1/p∗ = 1. Hence

$$\lambda = \|\nabla_x F\|_{p^*} \tag{21}$$

Combining (18) and (21), we have

$$r^*_i = \epsilon\, \mathrm{sgn}(\nabla_x F) \Big(\frac{|\nabla_x F|}{\|\nabla_x F\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L^0_d) \Big(\frac{|\nabla_x L^0_d|}{\|\nabla_x L^0_d\|_{p^*}}\Big)^{\frac{1}{p-1}} \tag{22}$$

In this paper we set p to ∞. Then

$$r^*_i = \lim_{p\to\infty} \epsilon\, \mathrm{sgn}(\nabla_x L^0_d) \Big(\frac{|\nabla_x L^0_d|}{\|\nabla_x L^0_d\|_{p^*}}\Big)^{\frac{1}{p-1}} = \epsilon\, \mathrm{sgn}(\nabla_x L^0_d) \Big(\frac{|\nabla_x L^0_d|}{\|\nabla_x L^0_d\|_1}\Big)^{0} = \epsilon\, \mathrm{sgn}(\nabla_x L^0_d) \tag{23}$$

and we obtain the adversarial example:

$$x^*_i = x_i + \epsilon\, \mathrm{sgn}(\nabla_x L^0_d) \tag{24}$$

To compute the transformed example, we need to solve

$$r^* = \arg\min_{\|r\|_\infty \leq \epsilon} \log D^0_W(f_\theta(x + r)) \tag{25}$$

With the same method, we readily obtain the transformed example x_t:

$$x_t = x - \epsilon\, \mathrm{sgn}(\nabla_x \log D^0_W(f_\theta(x))) \tag{26}$$
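As a quick sanity check of Eq. (22)-(23), the short NumPy snippet below evaluates the closed-form maximizer for increasing p and confirms numerically that it approaches ε·sgn(∇F) as p → ∞. The gradient vector and the value of ε are arbitrary stand-in data; all names are illustrative.

```python
import numpy as np

def r_star(g, p, eps):
    """Closed-form maximizer of the linearized problem (Eq. (22)):
    r* = eps * sgn(g) * (|g| / ||g||_{p*})^{1/(p-1)}, with p* = p/(p-1)."""
    p_star = p / (p - 1.0)
    return eps * np.sign(g) * (np.abs(g) / np.linalg.norm(g, ord=p_star)) ** (1.0 / (p - 1.0))

rng = np.random.default_rng(0)
g = rng.normal(size=8)          # stands in for grad_x L_d^0
eps = 8.0 / 255.0
for p in (2.0, 8.0, 64.0, 1024.0):
    gap = np.max(np.abs(r_star(g, p, eps) - eps * np.sign(g)))
    print(f"p = {p:6.0f}: max deviation from eps*sgn(g) = {gap:.2e}")
```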
1. What is the novel approach proposed by the paper in improving model robustness?
2. How does the method differ from existing literature in terms of regularization?
3. Are there any concerns regarding the experimental results or comparisons with baselines?
4. Is there any ambiguity in certain sentences or sections that need further clarification?
Review
The paper proposes a new method of improving model robustness by generating adversarial samples that are regularized by their latent distribution through f-divergence, whereas existing literature only uses local manifold properties such as smoothness. The method is well motivated and the clarity of the paper is good. The experimental results are compared with several competitive baselines and the improvement looks significant (although I am not familiar with the state-of-the-art experimental results). Proofreading is needed for the sentence "The adversarial examples are crafted by ..." on page 2 and for several other small typos.
ICLR
Title
Improving Model Robustness with Latent Distribution Locally and Globally

Abstract
We propose a novel adversarial training method which leverages both the local and the global information to defend against adversarial attacks. Existing adversarial training methods usually generate adversarial perturbations locally in a supervised manner and fail to consider the data manifold information in a global way. Consequently, the resulting adversarial examples may corrupt the underlying data structure and are typically biased towards the decision boundary. In this work, we exploit both the local and the global information of the data manifold to generate adversarial examples in an unsupervised manner. Specifically, we design our novel framework via an adversarial game between a discriminator and a classifier: the discriminator is learned to differentiate the latent distributions of the natural data and the perturbed counterpart, while the classifier is trained to recognize accurately the perturbed examples as well as to enforce the invariance between the two latent distributions. We conduct a series of analyses on the model robustness and also verify the effectiveness of our proposed method empirically. Experimental results show that our method substantially outperforms the recent state-of-the-art (i.e. Feature Scattering) in defending adversarial attacks by a large accuracy margin (e.g. 17.0% and 18.1% on the SVHN dataset, 9.3% and 17.4% on the CIFAR-10 dataset, and 6.0% and 16.2% on the CIFAR-100 dataset for defending PGD20 and CW20 attacks, respectively).

1 INTRODUCTION

Deep Neural Networks (DNNs) have achieved impressive performance on a broad range of datasets, yet they can be easily fooled by adversarial examples or perturbations (LeCun et al., 2015; He et al., 2016; Gers et al., 1999). Adversarial examples have been shown to be ubiquitous across different tasks such as image classification (Goodfellow et al., 2014), segmentation (Fischer et al., 2017), and speech recognition (Carlini & Wagner, 2018). Overall, adversarial examples raise great concerns about the robustness of learning models and have drawn enormous attention over recent years.

To defend against adversarial examples, great efforts have been made to improve the model robustness (Kannan et al., 2018; You et al., 2019; Wang & Zhang, 2019; Zhang & Wang, 2019). Most of them are based on adversarial training, i.e. training the model with adversarially-perturbed samples rather than clean data (Goodfellow et al., 2014; Madry et al., 2017; Lyu et al., 2015). In principle, adversarial training is a min-max game between the adversarial perturbations and the classifier: the indistinguishable adversarial perturbations are designed to mislead the output of the classifier, while the classifier is trained to produce accurate predictions for these perturbed input data. Currently, the adversarial perturbations are mainly computed by enforcing the output invariance in a supervised manner (Madry et al., 2017). Despite their effectiveness in some scenarios, it has recently been observed that these approaches may still be limited in defending adversarial examples. In particular, we argue that current adversarial training approaches are typically conducted in a local and supervised way and fail to consider globally the overall data manifold information; such information, however, proves crucially important for attaining better generalization.
As a result, the generated adversarial examples may corrupt the underlying data structure and would typically be biased towards the decision boundary. Therefore, the well-generalizing features inherent to the data distribution might be lost, which limits the ability of DNNs to defend adversarial examples even if adversarial training is applied (Ilyas et al., 2019a; Schmidt et al., 2018). For illustration, we show a toy example in Figure 1. As clearly observed, adversarially-perturbed examples generated by PGD, one of the most successful adversarial training methods, corrupt the data manifold, which would inevitably lead to poor performance if the training is conducted based on these perturbed examples. The current state-of-the-art method Feature Scattering (Zhang & Wang, 2019) can partially alleviate this problem but still leads to corruptions on the data manifold.

To address this limitation, we propose a novel method called Adversarial Training with Latent Distribution (ATLD) which additionally considers the data distribution globally in an unsupervised fashion. In this way, the data manifold can be well preserved, which is beneficial for attaining better model generalization. Moreover, since the label information is not required when computing the adversarial perturbations, the resulting adversarial examples are not biased towards the decision boundary. This can be clearly observed in Figure 1(d).

Our method can be divided into two steps. First, we train the deep model with adversarial examples which maximize the variance between the latent distributions of the clean data and the adversarial counterpart, rather than maximizing the loss function. We reformulate this as a minimax game between a discriminator and a classifier: the adversarial examples are crafted by the discriminator to implicitly make different the latent distributions of the clean and perturbed data, while the classifier is trained to decrease the discrepancy between these two latent distributions as well as to promote accurate classification of the adversarial examples, as Figure 2 shows. Then, during the inference procedure, we generate specific perturbations through the discriminator network to diminish the impact of the adversarial attack, as shown in Figure 6 in the Appendix.

On the empirical front, with toy examples, we show that our proposed method can preserve more information of the original distribution and learn a better decision boundary than the existing adversarial training methods. We also test our method on three different datasets, CIFAR-10, CIFAR-100, and SVHN, with the well-known PGD, CW, and FGSM attacks. Our ATLD method outperforms the state-of-the-art methods by a large margin, e.g., ATLD improves over Feature Scattering (Zhang & Wang, 2019) by 17.0% and 18.1% on SVHN for the PGD20 and CW20 attacks. Our method also shows a large superiority over the conventional adversarial training method (Madry et al., 2017), boosting the performance by 32.0% and 30.7% on SVHN for the PGD20 and CW20 attacks.

2 RELATED WORK

Adversarial Training. Adversarial training is a family of techniques to improve the model robustness (Madry et al., 2017; Lyu et al., 2015). It trains the DNN with adversarially-perturbed samples instead of clean data. Some approaches extend conventional adversarial training by injecting the adversarial noise into the hidden layers to boost the robustness of the latent space (Ilyas et al., 2019b; You et al., 2019; Santurkar et al., 2019; Liu et al., 2019).
All of these approaches generate the adversarial examples by maximizing the loss function with the label information. However, the structure of the data distribution is destroyed, since the perturbed samples can be highly biased towards the non-optimal decision boundary (Zhang & Wang, 2019). Our proposed method has a training scheme similar to adversarial training in that it replaces clean data with the perturbed data. Nevertheless, our method generates the adversarial perturbations without the label information, which weakens the impact of the non-optimal decision boundary and retains more information of the underlying data distribution.

Manifold-based Adversarial Training. Song et al. (2017) propose to generate the adversarial examples by projecting onto a proper manifold. Zhang & Wang (2019) leverage the manifold information in the form of the inter-sample relationship within a batch to generate adversarial perturbations. Virtual Adversarial Training and Manifold Adversarial Training have been proposed to improve model generalization and robustness against adversarial examples by ensuring the local smoothness of the data distribution (Zhang et al., 2018; Miyato et al., 2017). Other methods are designed to enforce local smoothness around the natural examples by penalizing the difference between the outputs of adversarial examples and their clean counterparts (Kannan et al., 2018; Chan et al., 2020; Jakubovitz & Giryes, 2018). All of these methods leverage only the local information of the distribution or manifold. Differently, our method generates the perturbations by additionally considering the structure of the distribution globally.

Unsupervised Domain Adversarial Training. Domain adversarial training shares a training scheme similar to our method, where the classifier and the discriminator compete with each other (Odena et al., 2017; Long et al., 2018; Ganin et al., 2016). However, its objective is to reduce the gap between the source and target distributions in the latent space, and the discriminator is used to measure the divergence between these two distributions. The training scheme of our method is also based on the competition between a classifier and a discriminator. Different from the previous framework, the discriminator of our method is used to capture the information of the distributions of the adversarial examples and the clean counterparts in the latent space, which helps generate the adversarial perturbations.

GAN-based Adversarial Training Methods. Several GAN-based methods leverage GANs to learn the clean data distribution and purify the adversarial examples by projecting them onto the clean data manifold before classification (Meng & Chen, 2017; Metzen et al., 2017). The GAN framework can also be used to generate adversarial examples (Baluja & Fischer, 2018): the generator produces adversarial examples to deceive both the discriminator and the classifier, while the discriminator and the classifier attempt to differentiate the adversaries from the clean data and to produce the correct labels, respectively. Some adversary detector networks, which align well with our method, have been proposed to detect adversarial examples (Gong et al., 2017; Grosse et al., 2017). In these works, a pretrained network is augmented with a binary detector network, and the training of the pretrained network and the detector involves generating adversarial examples that maximize their losses.
Differently, our method generates the adversarial examples just to minimize the loss of the discriminator and feeds them as the training set to the classifier. Such adversarial examples are deemed to induce latent representations that are most different from those of the clean counterparts.

3 BACKGROUND

3.1 ADVERSARIAL TRAINING

Let us first introduce the widely-adopted adversarial training method for defending against adversarial attacks. It solves the following minimax optimization problem through training:

$$\min_\theta \Big\{ \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{x' \in S_x} L(x', y; \theta) \Big] \Big\}, \tag{1}$$

where x ∈ R^n and y ∈ R are respectively the clean data samples and the corresponding labels drawn from the dataset D, and L(·) is the loss function of the DNN with model parameters θ ∈ R^m. We denote the clean data distribution as Q0, i.e. x ∼ Q0, and denote x′ ∈ R^n as a perturbed sample in the feasible region S_x ≜ {z : z ∈ B(x, ε) ∩ [−1.0, 1.0]^n}, with B(x, ε) ≜ {z : ‖x − z‖∞ ≤ ε} being the ℓ∞-ball centered at x with radius ε. By defining fθ(·) as the mapping function from the input layer to the last latent layer, we can also rewrite the loss function of the DNN as l(fθ(x), y), where l(·) denotes the loss function calculated from the last hidden layer of the DNN, e.g. the cross-entropy loss as typically used in DNNs.

Whilst the outer minimization can be conducted by training to find the optimal model parameters θ, the inner maximization essentially generates the strongest adversarial attacks for a given set of model parameters θ. In general, the solution to the minimax problem can be found by training a network to minimize the loss for worst-case adversarial examples, so as to attain adversarial robustness. Given a set of model parameters θ, the commonly adopted solutions to the inner maximization problem lead to either one-step (e.g., FGSM) or multi-step (e.g., PGD) approaches (Madry et al., 2017). In particular, for a given single point x, the strongest adversarial example x′ at the t-th iteration can be obtained by the following iterative updating rule:

$$x^{t+1} = \Pi_{S_x}\big(x^t + \alpha \cdot \mathrm{sgn}(\nabla_x L(x^t, y; \theta))\big), \tag{2}$$

where Π_{S_x}(·) is a projection operator that projects the input onto the region S_x, sgn(·) is the sign function, and α is the updating step size. For the initialization, x^0 can be generated by randomly sampling in B(x, ε).

It appears in (1) that each perturbed sample x′ is obtained individually by leveraging its loss function L(x′, y; θ) with its label y. However, without considering the inter-relationship between samples, we may lose the global knowledge of the data manifold structure, which proves highly useful for attaining better generalization. This issue has been studied in a recent work (Zhang & Wang, 2019), where a new method named Feature Scattering made a first step towards considering the inter-sample relationship within a batch; unfortunately, this approach does not take full advantage of the global knowledge of the entire data distribution. In addition, relying on the maximization of the loss function, the adversarially-perturbed data samples may be highly biased towards the decision boundary, which potentially corrupts the structure of the original data distribution, especially when the decision boundary is non-optimal (see Figure 1 again for an illustration).
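For concreteness, the multi-step update of Eq. (2) can be implemented in a few lines of PyTorch. The sketch below is a minimal illustration under our own naming conventions (`model` returns logits, and inputs live in [−1, 1]^n as in the definition of S_x); it is not tied to any particular codebase.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step PGD (Eq. (2)): x_{t+1} = Pi_{S_x}(x_t + alpha * sgn(grad L)),
    with a random start in B(x, eps) and S_x = B(x, eps) intersected with
    the valid input range [-1, 1]^n."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(-1.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # projection Pi_{S_x} onto B(x, eps) intersected with [-1, 1]^n
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(-1.0, 1.0)
    return x_adv.detach()
```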
3.2 DIVERGENCE ESTIMATION

To measure the discrepancy between two distributions, statistical divergence measures (e.g., the Kullback-Leibler and Jensen-Shannon divergences) have been proposed. In general, given two distributions P and Q with continuous density functions p(x) and q(x) respectively, the f-divergence is defined as

$$D_f(\mathbb{P}\|\mathbb{Q}) \triangleq \int_{\mathcal{X}} q(x)\, f\Big(\frac{p(x)}{q(x)}\Big)\, dx.$$

The exact computation of the f-divergence is challenging, and its estimation from samples has attracted much interest. For instance, leveraging variational methods, Nguyen et al. (2010) propose a method for estimating the f-divergence from samples only; Nowozin et al. (2016) extend this method by estimating the divergence while learning the parameters of a discriminator. Specifically, the f-divergence between two distributions P and Q can be lower-bounded using the Fenchel conjugate and Jensen's inequality (Nowozin et al., 2016):

$$\begin{aligned} D_f(\mathbb{P}\|\mathbb{Q}) &= \int_{\mathcal{X}} q(x) \sup_{t\in\mathrm{dom}_{f^*}} \Big\{ t\,\frac{p(x)}{q(x)} - f^*(t) \Big\}\, dx \\ &\geq \sup_{T\in\tau} \Big( \int_{\mathcal{X}} p(x)T(x)\, dx - \int_{\mathcal{X}} q(x)f^*(T(x))\, dx \Big) \\ &= \sup_{W} \big( \mathbb{E}_{x\sim\mathbb{P}}[g_f(V_W(x))] + \mathbb{E}_{x\sim\mathbb{Q}}[-f^*(g_f(V_W(x)))] \big), \end{aligned} \tag{3}$$

where V_W : X → R is a discriminator network with parameters W, and g_f : R → dom_{f∗} is an output activation function determined by the type of divergence; τ is an arbitrary class of functions T : X → R; f is a convex, lower-semicontinuous function and f∗ is its conjugate, defined by f∗(t) = sup_{u∈dom_f}[ut − f(u)]. The objective of the discriminator in GANs is a special case of (3), with the activation function g_f(t) = −log(1+e^{−t}) and f∗(g) = −log(2−e^g); it approximates the Jensen-Shannon divergence between the real and fake distributions. Arjovsky et al. (2017) also develop a method to estimate the Wasserstein distance with a neural network. In this paper, these methods will be used to estimate the Jensen-Shannon divergence between the latent distributions induced by adversarial and clean examples.
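The lower bound in Eq. (3), instantiated for the Jensen-Shannon divergence, is straightforward to render in code. The sketch below is our illustrative PyTorch version: `v_w` stands for the unconstrained critic V_W, and `z_p`, `z_q` are batches of latent features drawn from P and Q; maximizing the returned value over W tightens the bound.

```python
import torch
import torch.nn.functional as F

def js_lower_bound(v_w, z_p, z_q):
    """Variational JS bound from Eq. (3) with g_f(t) = -log(1 + e^{-t})
    and f*(g) = -log(2 - e^g):
        E_{z~P}[g_f(V_W(z))] + E_{z~Q}[log(2 - exp(g_f(V_W(z))))].
    Since exp(g_f(t)) = sigmoid(t) = D_W, this is the familiar GAN-style
    objective expressed via the sigmoid discriminator D_W."""
    g_p = -F.softplus(-v_w(z_p))        # g_f(V_W(z)) = log sigmoid(V_W(z))
    g_q = -F.softplus(-v_w(z_q))
    return g_p.mean() + torch.log(2.0 - torch.exp(g_q)).mean()
```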
4 ADVERSARIAL TRAINING WITH LATENT DISTRIBUTION

As discussed in Section 3.1, conventional adversarial training methods rely on the knowledge of data labels. As a result, the local information used to generate adversarial examples may be biased towards the decision boundary, and such individual adversarial example generation does not capture the global knowledge of the data manifold. To alleviate these limitations, we propose a novel method to compute the perturbed samples by leveraging the global knowledge of the whole data distribution and then disentangling them from the data labels and the loss function. Generally speaking, the perturbations are generated to enlarge the variance between the latent distributions induced by the clean and adversarial data.

Formally, we try to identify the set of adversarial examples Xadv that yield in the latent space a distribution P∗θ through fθ(·) that is the most different from the latent distribution Qθ induced by the clean samples Xorg = {x : x ∼ Q0}, without resorting to the corresponding labels Y. In other words, the resulting adversarial examples can be deemed manifold adversarial examples, which 'deceive' the manifold rather than fool the classifier as in the traditional definition of adversarial examples. It is noted that the latent space to be perturbed could be any hidden layer, though in this paper it is the last hidden layer before the softmax of the DNN. The optimization problem of the proposed adversarial training can then be formulated as:

$$\min_\theta \big\{ \mathbb{E}_{f_\theta(x^{adv})\sim P^*_\theta} [l(f_\theta(x^{adv}), y)] + D_f(P^*_\theta\|Q_\theta) \big\} \tag{4}$$

$$\text{s.t.}\quad P^*_\theta = \arg\max_{P_\theta \in \mathcal{P}} [D_f(P_\theta\|Q_\theta)] \tag{5}$$

where l(·) and y are defined as before, and D_f(·) is the f-divergence measure between two distributions. P = {P : fθ(x′) ∼ P subject to ∀x ∼ Q0, x′ ∈ B(x, ε)} is the feasible region for the latent distribution Pθ, which is induced by the set of perturbed examples Xp through fθ(·); fθ(x′) and fθ(x^adv) represent the latent features of the perturbed example x′ and the adversarial example x^adv, respectively. Intuitively, we try to obtain the worst latent distribution P∗θ, induced by Xadv through fθ(·), within the region P, while the model parameters θ are learned to minimize both the classification loss on the latent features fθ(x^adv) ∼ P∗θ (or equivalently on the adversarial examples x^adv ∈ Xadv) and the f-divergence between the latent distributions P∗θ and Qθ induced by the adversarial examples Xadv and the clean data Xorg.

It is still challenging to solve the above optimization problem, since both the objective function and the constraint are entangled with the adversarial examples Xadv and the model parameters θ. To make the problem more tractable, we propose a novel Adversarial Training with Latent Distribution (ATLD) method. In the next subsection, by taking into account the entire data distribution globally, we first focus on the constraint and identify the adversarial samples Xadv through the maximization problem. We then solve the minimization of the objective function with the adversarial training procedure. To further enhance the performance, we add specific perturbations, named Inference with Manifold Transformation (IMT) and described in Section 4.2, to the input samples to push them towards the more separable natural data manifold. Finally, we classify the transformed data points with the adversarially-trained model.

4.1 GENERATING ADVERSARIAL EXAMPLES FOR TRAINING

First, we optimize the constraint (5) to generate the adversarial examples, or equivalently their induced distribution P∗θ, for training. Intuitively, the adversarial examples Xadv are crafted to maximize the divergence between the latent distributions induced by the natural examples Xorg and the adversarial counterpart Xadv in an unsupervised fashion, since no knowledge of the labels Y is required. Together with the objective function in (4), our proposed adversarial training method minimizes this divergence as well as the classification error for the adversarial examples Xadv.

However, it is challenging to evaluate the divergence between two latent distributions. To make it more tractable, we leverage a discriminator network to estimate the Jensen-Shannon divergence between the two distributions P∗θ/Pθ and Qθ, following Section 3.2. Note again that the class label information is not used for generating the adversarial examples; hence they are still generated in an unsupervised way. Then, by using (3), the optimization problem in (4) and (5) can be approximated in a tractable way as follows:

$$\min_\theta \Big\{ \sum_{i=1}^N \underbrace{L(x^{adv}_i, y_i; \theta)}_{L_f} + \sup_W \sum_{i=1}^N \underbrace{[\log D_W(f_\theta(x^{adv}_i)) + \log(1 - D_W(f_\theta(x_i)))]}_{L_d} \Big\}$$

$$\text{s.t.}\quad x^{adv}_i = \arg\max_{x'_i \in B(x_i,\epsilon)} \underbrace{[\log D_W(f_\theta(x'_i)) + \log(1 - D_W(f_\theta(x_i)))]}_{L_d} \tag{6}$$

where N denotes the number of training samples and D_W denotes the discriminator network with the sigmoid activation function and parameters W; fθ(x_i) is the latent feature of the clean sample x_i. D_W is used to determine whether a latent feature comes from the adversarial manifold (i.e., it outputs the manifold label of the latent feature). For ease of description, we split the components in Eq. (6) into two parts, L_f and L_d: L_d is the manifold loss and L_f is the loss function of the classifier network.
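The inner maximization in Eq. (6) depends on x′ only through log D_W(fθ(x′)), so the adversarial examples can be crafted by label-free gradient ascent on that term alone. The following PyTorch sketch mirrors the PGD-style update of Eq. (2) under assumed module names (`f_theta`, `d_w`); the number of steps and the step size are illustrative choices.

```python
import torch

def craft_manifold_adversary(x, f_theta, d_w, eps, alpha, steps):
    """Inner maximization of Eq. (6): find x' in B(x, eps) maximizing
    log D_W(f_theta(x')). No labels are used; the second term of L_d
    does not depend on x' and is therefore dropped."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = torch.log(d_w(f_theta(x_adv)) + 1e-12).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto B(x, eps)
    return x_adv.detach()
```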
We now interpret the above optimization problem. Comparing Eq. (6) with Eq. (4), we observe that the Jensen-Shannon divergence between P∗θ and Qθ is approximated by sup_W Σ_{i=1}^N L_d, and the minimization of the classification loss on the adversarial examples is given by min_θ Σ_{i=1}^N L_f. Problem (6) is optimized by updating the parameters θ and W and crafting the adversarial examples {x_i^adv}_{i=1}^N iteratively. The whole training procedure can be viewed as a game among three players: the classifier, the discriminator, and the adversarial examples. The discriminator D_W is learned to differentiate the latent distributions of the perturbed examples and the clean data by maximizing the loss L_d, while the classifier fθ is trained to (1) enforce the invariance between these two distributions to confuse the discriminator D_W by minimizing the loss L_d, and (2) classify the adversarial examples as accurately as possible by minimizing L_f. In each training iteration, the adversarial examples are crafted to make the adversarial latent distribution different from the natural one by maximizing L_d. Although D_W cannot measure the divergence between the two latent distributions exactly during the first several training steps, it can evaluate the divergence between the distributions induced by the perturbed examples and the clean ones once the parameters W converge.

However, when the latent distributions are multi-modal, which is a real scenario due to the nature of multi-class classification, it is challenging for the discriminator to measure the divergence between such distributions. Several works reveal that there is a high risk of the discriminator network measuring only a fraction of the components underlying different distributions (Arjovsky & Bottou, 2017; Che et al., 2016). Ma (2018) also shows that two different distributions are not guaranteed to be identical even if the discriminator is fully confused. To alleviate this problem, we additionally train the discriminator D_W to predict the class labels of the latent features, as in (Odena et al., 2017; Long et al., 2018). Problem (6) can then be reformulated as:

$$\min_\theta \Big\{ \sum_{i=1}^N \underbrace{L(x^{adv}_i, y_i; \theta)}_{L_f} + \sup_W \sum_{i=1}^N \underbrace{[\log D^0_W(f_\theta(x^{adv}_i)) + \log(1 - D^0_W(f_\theta(x_i)))]}_{L^0_d} + \min_W \underbrace{[l(D^{1:C}_W(f_\theta(x_i)), y_i) + l(D^{1:C}_W(f_\theta(x^{adv}_i)), y_i)]}_{L^{1:C}_d} \Big\}$$

$$\text{s.t.}\quad x^{adv}_i = \arg\max_{x'_i \in B(x_i,\epsilon)} \underbrace{[\log D^0_W(f_\theta(x'_i)) + \log(1 - D^0_W(f_\theta(x_i)))]}_{L^0_d} \tag{7}$$

Here D^0_W is the first dimension of the output of the discriminator, which indicates the manifold label of a latent feature, and D^{1:C}_W denotes the remaining C dimensions of the output of D_W, used to output the class label of the latent feature; C denotes the number of classes, and L^0_d and L^{1:C}_d are the manifold loss and the classification loss for the discriminator network, respectively. (The detailed derivations of Eq. (6) and Eq. (7) can be found in the Appendix.) The detailed training procedure of our framework is depicted in Figure 2.

Remarks. It is worth noting that the label information is not required for generating the adversarial examples. Therefore, our method prevents the perturbed examples from being highly biased towards the decision boundary, and more information about the structure of the original distribution is preserved. In addition, since the discriminator is trained with the whole dataset (both clean and adversarial examples), it captures the global information of the data manifold. Consequently, by training with adversarial examples generated according to the manifold loss of the discriminator, our method can improve the model robustness against adversarial examples with the global structure of the data distribution.
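Putting the pieces of Eq. (7) together, one training iteration alternates the three players. The sketch below is our own schematic rendering, not the authors' implementation: `f_theta` is the classifier backbone, `clf_head` its softmax head, and `disc` a (C+1)-way discriminator whose first output (after a sigmoid) is the manifold head D^0_W and whose remaining C outputs form D^{1:C}_W; `craft_manifold_adversary` is the label-free inner maximization sketched above.

```python
import torch
import torch.nn.functional as F

def atld_train_step(x, y, f_theta, clf_head, disc, opt_theta, opt_w,
                    craft_manifold_adversary, eps, alpha, steps):
    """One iteration for Eq. (7). The adversarial examples maximize L^0_d,
    the discriminator maximizes L^0_d and minimizes L^{1:C}_d, and the
    classifier minimizes L_f + L^0_d (to confuse the discriminator)."""
    d0 = lambda z: torch.sigmoid(disc(z)[:, 0])
    d1c = lambda z: disc(z)[:, 1:]
    x_adv = craft_manifold_adversary(x, f_theta, d0, eps, alpha, steps)

    # Discriminator update (features detached so only W is trained here).
    z_nat, z_adv = f_theta(x).detach(), f_theta(x_adv).detach()
    l0 = torch.log(d0(z_adv) + 1e-12).mean() + torch.log(1 - d0(z_nat) + 1e-12).mean()
    l1c = F.cross_entropy(d1c(z_nat), y) + F.cross_entropy(d1c(z_adv), y)
    opt_w.zero_grad(); (-l0 + l1c).backward(); opt_w.step()

    # Classifier update: minimize L_f on adversarial examples plus L^0_d.
    z_nat, z_adv = f_theta(x), f_theta(x_adv)
    lf = F.cross_entropy(clf_head(z_adv), y)
    l0 = torch.log(d0(z_adv) + 1e-12).mean() + torch.log(1 - d0(z_nat) + 1e-12).mean()
    opt_theta.zero_grad(); (lf + l0).backward(); opt_theta.step()
```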
4.2 INFERENCE WITH MANIFOLD TRANSFORMATION

To enhance the generalization of ATLD, we further develop a new inference method with manifold transformation. Although adversarially-trained models can recognize adversarial examples well, there are still potential examples which are easily misclassified, especially among unseen data. In other words, generalization to adversarial examples is hard to achieve due to the more complex distribution of adversarial examples (Schmidt et al., 2018; Zhai et al., 2019). To alleviate this problem, our proposed inference method first pushes adversarial examples towards the manifold of natural examples, which is simpler and further away from the decision boundary than the adversarial distribution. The adjusted examples, being more separable, are then classified by the adversarially-trained model. Specifically, the input sample is fed into our adversarially-trained model, and the discriminator outputs the probability of this sample lying on the adversarial manifold. If this probability is higher than a certain threshold, we compute the transformed example x_t by adding a specific perturbation r∗ to the input sample x to reduce this probability. The perturbation is computed as:

$$r^* = \arg\min_{\|r\|_\infty \leq \epsilon} \log D^0_W(f_\theta(x + r)). \tag{8}$$

Intuitively, reducing the probability of a data point lying on the adversarial manifold indicates that this point moves towards the benign example manifold after the perturbation r∗ is added; in other words, it becomes more separable, since the benign example manifold is further away from the decision boundary. When the probability of the image lying on the adversarial manifold is lower than the threshold, we still add such a perturbation to the input image to make it more separable, but with a smaller magnitude. In the experiments, we show that this perturbation can move adversarial examples away from the decision boundary. The whole inference procedure is illustrated in Figure 5 in the Appendix.

5 EXPERIMENTS

We conduct experiments on the widely-used datasets CIFAR-10, SVHN, and CIFAR-100. Following the Feature Scattering method (Zhang & Wang, 2019), we use WideResNet (Zagoruyko & Komodakis, 2016) as the basic structure of both our classifier and discriminator. During the training phase, the initial learning rate is empirically set to 0.1 for all three datasets. We train our model for 400 epochs with transition epochs 60 and 90 and a decay rate of 0.1. The input perturbation budget is set to ε = 8 with a label smoothing rate of 0.5. We use L∞ perturbations throughout this paper, for both training and evaluation.

We evaluate the various models under white-box and black-box attacks. Under the white-box attacks, we compare the accuracy of the proposed method with several competitive methods, including: (1) the original WideResNet (Standard) trained with natural examples; (2) traditional adversarial training with PGD (AT) (Madry et al., 2017); (3) Triplet Loss Adversarial training (TLA) (Mao et al., 2019); (4) Layer-wise Adversarial Training (LAT), which injects adversarial perturbations into the latent space (Sinha et al., 2019); (5) Bilateral, which adversarially perturbs both examples and labels (Wang & Zhang, 2019); (6) Feature Scattering, which generates adversarial examples by considering the inter-sample relationship (Zhang & Wang, 2019).
1. What is the main contribution of the paper regarding adversarial robustness?
2. What are the strengths and weaknesses of the proposed framework for incorporating local and global structure of the data manifold?
3. Do you have any questions or concerns about the objective presented in Equations 4 and 5?
4. How does the reviewer assess the clarity and organization of the paper, particularly in Sections 4.1 and 4?
5. What are the minor comments and suggestions mentioned by the reviewer?
Review
Review This paper presents a framework for adversarial robustness via incorporating local and global structure of the data manifold. Specifically, the key motivation is that standard adversarial methods typically use only sample specific perturbations for generating the adversarial examples, and thus using them for robustness of the learning model is limited. Instead the paper proposes to capture the global data manifold as well in the robustifying framework. To this end, an objective is presented (4,5) that uses latent data distributions, with the goal that the adversarial perturbations should maximize the f-divergence against the latent distribution of the clean samples. Experiments are provided on several datasets and demonstrate significant performance improvements. Pros: The key idea of using the global data manifold into the robustifying framework is quite interesting. Experiments demonstrate good empirical benefits of the approach. Cons: While, the paper seemed well organized in the beginning, I got totally lost with Eq. (4-5). As I see, this objective is inaccurate and needs significant refinement. Specifically, it is unclear how is x^{adv} is related to x, and how is x^{adv} related to P*theta? The paper tries to explain this objective in the paragraph below, but the explanation is very confusing as well. A few other things that could help here: a) It is said that "Q_theta and P\theta* are the latent distributions induced by the natural example x". How can a single data point induce a distribution? Do you assume the feature map from a hidden layer of a network represents a distribution? If so, in what sense? b) "The adversarial example is crafted to induce the worst case distribution P*". How is it crafted and what is the relation between P* and x? This is the key connection that is missing from (4-5). Moving along, Section 4.1 is organized very poorly as well. I believe too many concepts are tied together into one formulation in (6), making it hard to decipher. For example, why to include the classifier D^{1:C} within this formulation? Why not talk about it elsewhere and focus on the meat of the objective, systematically? Further, as I understand, x^{adv} is the first step that happens in (6), however, there is no "adversary" in this case, instead is finding a perturbed sample x' that maximizes the f-divergence. In what sense is x^{adv} then an adversarial sample? Perhaps the paper should re-define what is the definition of an adversarial example that it is using, to clearly state what the idea is. Technically, there is no requirement that the point x^{adv} found by this step will promote any data misclassification; however can be any point that is within a B(x,\eps) ball from x, and that happens to maximize this divergence loss. Note that none of the other components D_W, f_theta, etc. are well trained in doing this optimization. So they could also be sub-optimal (in the sense of what the paper argues in the beginning of Page 4). Why is the middle formula in (6) minimizing over W to have both x and x^adv matched with the same label? Again, where is the adversary here? Or for that matter, how will the proposed approach achieve adversarial robustness ? Minor comments: a. What is \tau and T in (3)? b. How is f_\theta defined in (6)? c. The paper writes that back and forth that there is no use of label information in the setup, however has labels used in discriminator in (6). This is very confusing. d. 
There is also a reference to the data manifold and manifold label in Figure 2, but these are not clearly explained. What precisely is the data manifold? Is it the latent distribution for a specific label? e. Page 4, top paragraph: "without considering the inter-relationship between data samples". Won't this relationship be captured implicitly through the neural network parameters theta when perturbations of all the samples are used in the training process? Overall, I think this paper needs a thorough revision to explain its technical contributions well.
ICLR
Title Representational correlates of hierarchical phrase structure in deep language models Abstract While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformer networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to the syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.1 1 INTRODUCTION AND RELATED WORK It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on potential models and their implementations of hierarchical tasks, such as predictive maps and graphs. Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides a source of “ground truth” structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top of pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree-distances as distances in the representational subspace, and Kim et al.
(2020) show that these distances can be used even without any labeled information to induce hierarchical structure. However, recent work has highlighted issues with correlating supervised probe performance with the amount of language structure encoded in such representations (Hewitt & Liang, 2019). (Footnote 1: Datasets, extracted features and code will be publicly available upon publication.) Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way. More recent techniques such as the replica-based mean-field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020) connect representation geometry with linear classification performance, but the method is limited to categorization tasks. In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring the change in population activity in response to controlled input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of hierarchical phrase structure encoded in deep contextual representations, and present several key findings: 1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying that the characteristic context length increases from input to downstream layers. 2. Representational distortion caused by swapping two constituent phrases is smaller than when control sequences of the same length are swapped, indicating that the BERT representations are sensitive to hierarchical phrase structure. 3. Representational distortion caused by swapping adjacent words across a phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers. 4. Early layers pay more attention between syntactically closer adjacent pairs and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers. Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapts experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020). 2 METHODS Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting perturbed representation and the original.
While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have been primarily utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018). A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks. 2.1 SENTENCE PERTURBATIONS In this work we consider three different types of sentence perturbations designed to probe for different phenomena. n-gram shuffling In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example). While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect the representations selective to contexts of size n or higher within the sentence, and that lower n will result in greater distortion in sentence representations. Phrase swaps The n-gram shuffling experiments probe for sensitivity of representations to local context without taking into account syntactic structure. In the phrase swap experiments, we perturb a sentence by swapping two randomly chosen spans. We explore two ways of swapping spans. In the first setting, the spans are chosen such that they are valid phrases according to the sentence's parse tree.2 In the second setting, the spans are chosen such that they are invalid phrases. Importantly, in the second, control setting, we fix the length of the spans such that the lengths of the spans chosen to be swapped are the same as in the first setting (see Fig. 3a for an example). We hypothesize that swapping invalid phrases will result in more distortion than swapping valid phrases, since invalid swaps will result in greater degradation of syntactic structure. Adjacent word swaps In the adjacent word swapping experiments, we swap two adjacent words in a sentence. We again experiment with two settings – in the first setting, the swapped words stay within the phrase boundary (i.e., the two words share the same parent), while in the second setting, the swapped words cross phrase boundaries. We also perform a more fine-grained analysis where we condition the swaps based on the “syntactic distance” between the swapped words, where syntactic distance is defined as the distance between the two words in the parse tree (see Fig. 4c). Since a phrase corresponds to a subtree of the parse tree, this distance also quantifies the number of nested phrase boundaries between two adjacent words. Here, we expect the amount of distortion to be positively correlated with the syntactic distance of the words that are swapped. 2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS For our sentence representation, we focus on the Transformer family of models pretrained on large-scale language datasets (BERT and its variants).
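As a concrete reference for the perturbations just described in Sec. 2.1, here is a minimal Python sketch. It assumes tokenized sentences; the chunking rule follows the procedure detailed in A.1, while the function names and the caller-supplied swap position are our own illustrative choices, not the authors' released code.

```python
import random

def ngram_shuffle(words, n, rng=random):
    """Split the sentence into sequential non-overlapping n-grams from
    left to right (a shorter remainder m-gram, m < n, if needed), then
    randomly shuffle the list of chunks, as specified in A.1."""
    chunks = [words[i:i + n] for i in range(0, len(words), n)]
    rng.shuffle(chunks)
    return [w for chunk in chunks for w in chunk]

def adjacent_swap(words, i):
    """Swap the adjacent words at positions i and i+1; the paper performs
    one randomly placed swap per sentence."""
    out = list(words)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

sent = "The market 's pessimism reflects the gloomy outlook in Detroit".split()
print(ngram_shuffle(sent, 3))   # one possible 3-gram shuffle of the A.1 example
```

The phrase-swap control from Sec. 2.1 would follow the same pattern, exchanging two disjoint spans of matched length instead of adjacent words.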
Given an input word embedding matrix $X \in \mathbb{R}^{T \times d}$ for a sentence of length $T$, the Transformer applies self-attention over the previous layer's representation to produce a new representation, $X_l = f_l([H_{l,1}, \ldots, H_{l,H}])$, $H_{l,i} = A_{l,i} X_{l-1} V_{l,i}$, $A_{l,i} = \operatorname{softmax}\big((X_{l-1} Q_{l,i})(X_{l-1} K_{l,i})^{\top} / \sqrt{d_k}\big)$ (1), where $f_l$ is an MLP layer, $H$ is the number of heads, $d_H = d/H$ is the head embedding dimension, and $Q_{l,i}, K_{l,i}, V_{l,i} \in \mathbb{R}^{d \times d_k}$ are respectively the learned query, key, and value projection matrices at layer $l$ for head $i$. (Footnote 2: We use constituency parse trees from the English Penn Treebank (Marcus et al., 1994).) The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation $X_0$ is obtained by adding the position embeddings and the segment embeddings to the input token embeddings $X$, and passing it through a normalization layer.3 In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations $X_l = [x_{l,1}, \ldots, x_{l,T}]$, where $x_{l,t} \in \mathbb{R}^d$ is the contextualized representation for word $t$ at layer $l$.4 We analyze the trend in distortion as a function of layer depth $l$ for the different perturbations. We also explore the different attention heads $H_{l,i} \in \mathbb{R}^{T \times d_H}$ and the associated attention matrix $A_{l,i} \in \mathbb{R}^{T \times T}$ to inspect whether certain attention heads specialize at encoding syntactic information. 2.3 DISTORTION METRIC Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation. We thus represent each sentence as a matrix and use the distance induced by the matrix 2-norm. Specifically, let $P \in \{0,1\}^{T \times T}$ be the binary matrix representation of a permutation that perturbs the input sentence, i.e., $\tilde{X} = PX$. Further let $\tilde{X}_l$ and $X_l$ be the corresponding sentence representations at the $l$-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we also z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation, where these statistics are obtained from the full Penn Treebank corpus) to give $\tilde{Z}_l$ and $Z_l$. Our distortion metric for layer $l$ is then defined as $\|Z_l - P^{-1}\tilde{Z}_l\| / \sqrt{Td}$, where $\|\cdot\|$ is the matrix 2-norm (i.e., Frobenius norm).5 Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change, rather than distortion due to simple differences in surface form. We divide by $\sqrt{Td}$ to make the metric comparable between sentences (with different $T$) and networks (with different $d$). Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations, $z_l \in \mathbb{R}^{Td}$. Because each dimension is independently centered and standardized, the maximally unstructured distribution of $z_l$ is an isotropic $Td$-dimensional Gaussian. The expected distance between two such vectors is approximately $\sqrt{2Td}$. Therefore, we can interpret a distortion value approaching $\sqrt{2}$ as comparable to what we would expect if we had randomly redrawn the perturbed representation.
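To make the metric concrete, below is a minimal NumPy sketch of the distortion computation, under stated assumptions: layer features arrive as (T, d) arrays, the permutation is supplied in index form rather than as a matrix, and the per-feature z-scoring statistics have been precomputed over the corpus. The function name and argument layout are ours for illustration, not from the paper's code.

```python
import numpy as np

def layer_distortion(X_l, X_l_pert, perm, mu, sigma):
    """Scaled Frobenius distortion between original and perturbed
    layer-l representations (Sec. 2.3).

    X_l, X_l_pert : (T, d) word-level features for the original and
        perturbed sentence at layer l.
    perm : length-T index array with perturbed_sentence[i] =
        original_sentence[perm[i]] (the row form of the matrix P).
    mu, sigma : (d,) per-feature mean and standard deviation estimated
        over the full corpus, used to z-score each feature dimension.
    """
    T, d = X_l.shape
    Z = (X_l - mu) / sigma            # z-scored original, Z_l
    Z_pert = (X_l_pert - mu) / sigma  # z-scored perturbed, tilde{Z}_l
    # Undo the permutation so rows are compared word-by-word;
    # this realizes the P^{-1} tilde{Z}_l term in the metric.
    Z_unperm = np.empty_like(Z_pert)
    Z_unperm[perm] = Z_pert
    return np.linalg.norm(Z - Z_unperm, ord="fro") / np.sqrt(T * d)
```

Under this normalization, values approaching $\sqrt{2}$ would indicate distortion comparable to redrawing the perturbed representation at random, matching the interpretation given above.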
3 EXPERIMENTAL SETUP We apply our perturbation-based analysis to sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by the length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted by the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure. As we found the trend to be largely similar across the different models, in the following section we primarily discuss results from BERT (bert-base-cased), which has 12 layers and a hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2). (Footnote 3: However, the exact specification for the MLP and $X_0$ may vary across different pretrained models.) (Footnote 4: BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multiple tokens. Since we wish to evaluate representations at the word level, if a word is split into multiple tokens, its word representation is computed as the average of all its token representations.) (Footnote 5: There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms. We observed the results to be qualitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. We show the results from additional distortion metrics in Sec. A.3.) 4 RESULTS We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e., untrained) models (somewhat unsurprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model's structural knowledge acquired through training, and not simply due to the underlying architecture. 4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS Shuffling in units of larger n-grams introduces distortion only in the deeper BERT layers, whereas smaller n-gram shuffles introduce distortion earlier. The n-gram-sized shuffles break contexts larger than n while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for 'longer' (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper-layer representations. Phrasal boundaries matter Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact.
We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 3a), where we control for the length of the n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the positions of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary. 4.2 PHRASE HIERARCHY MATTERS Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a).6 (Footnote 6: Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two.) This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words). If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken. Controlling for co-occurrence Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute – indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT's resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not. As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words $w$ and $v$, the PMI is $\log \frac{p(w,v)}{p(w)p(v)}$, which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion.
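As a rough illustration of the co-occurrence control, the PMI of adjacent word pairs can be estimated from corpus counts as sketched below. The unsmoothed maximum-likelihood estimator is our assumption; the paper only states that PMI is estimated from empirical probabilities.

```python
import math
from collections import Counter

def make_pmi(sentences):
    """Estimate PMI(w, v) = log[ p(w, v) / (p(w) p(v)) ] for adjacent
    word pairs, with p(w, v) from adjacent-pair counts and p(w), p(v)
    from unigram counts (no smoothing; querying an unseen pair raises
    a math domain error)."""
    unigrams, pairs = Counter(), Counter()
    n_uni = n_pair = 0
    for sent in sentences:          # each sentence: a list of tokens
        unigrams.update(sent)
        n_uni += len(sent)
        for w, v in zip(sent, sent[1:]):
            pairs[w, v] += 1
            n_pair += 1
    def pmi(w, v):
        p_wv = pairs[w, v] / n_pair
        return math.log(p_wv / ((unigrams[w] / n_uni) * (unigrams[v] / n_uni)))
    return pmi

pmi = make_pmi([["the", "gloomy", "outlook"], ["the", "market"]])
print(pmi("the", "gloomy"))
```

In the paper's setting, the estimate is only needed for word pairs that actually occur adjacently, so the unsmoothed estimator is well defined there.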
The regression results thus indicate that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics. Effects on linguistic information Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently in BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information. Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019).7 The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint $x$ (i.e., BERT representations from different layers) to a conditional distribution of the labels, $p(y|\theta(x))$ (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$, where $\tilde{x}_i$ is the same datum as $x_i$ in the perturbed representation.8 Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call “inference impairment”. Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information. 4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion. We find that in many Transformer heads, the attention—much like distortion—is proportional to the syntactic distance between two words. Fig.
5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize in syntactic phenomena. (Footnote 7: While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance.) (Footnote 8: POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then $\log p(y|\theta(x_i))$ is simply the log probability of an individual label.) We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance. 5 DISCUSSION In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and outside of phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems. In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity of phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT; see SM for other models). Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure.
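For concreteness, here is a minimal sketch of extracting layer-wise, word-level features in the spirit of the pipeline described in Sec. 3, including the per-word averaging of BPE token vectors from footnote 4. It assumes the HuggingFace transformers package; the function name and the exact bookkeeping are our reconstruction, not the authors' released code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased",
                                  output_hidden_states=True).eval()

def word_level_features(words):
    """Return one (T, d) tensor per layer (embeddings + 12 layers for
    bert-base); row t is the representation of words[t], averaged over
    its BPE tokens."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states
    word_ids = enc.word_ids()   # token -> word index (None for [CLS]/[SEP])
    feats = []
    for layer in hidden:
        layer = layer[0]        # drop the batch dimension -> (num_tokens, d)
        rows = [layer[[i for i, wid in enumerate(word_ids) if wid == t]].mean(dim=0)
                for t in range(len(words))]
        feats.append(torch.stack(rows))
    return feats

feats = word_level_features(
    "The market 's pessimism reflects the gloomy outlook".split())
```

The resulting per-layer matrices are exactly the inputs the distortion metric of Sec. 2.3 expects for the original and perturbed versions of a sentence.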
A SUPPLEMENTARY MATERIAL (SM) A.1 ADDITIONAL DETAILS ON THE DATASET In this section, we describe additional details of the manipulations done on the datasets. n-gram shuffling For a given sentence, we split it into sequential non-overlapping n-grams from left to right; if the length of the sentence is not a multiple of n, the remaining words form an additional m-gram, m < n. The list of n-grams is then randomly shuffled. Note that the 1-gram case is equivalent to a random shuffling of the words. In our analysis, we consider n-grams with n varying from 1 (i.e., individual words) to 7, and all the sentences have at least 10 words. We provide here an example of n-gram shuffling. • Original: The market 's pessimism reflects the gloomy outlook in Detroit • 1-gram : market pessimism the 's Detroit in The gloomy reflects outlook • 2-gram : 's pessimism in Detroit The market reflects the gloomy outlook • 3-gram : The market 's gloomy outlook in pessimism reflects the Detroit • 4-gram : in Detroit The market 's pessimism reflects the gloomy outlook • 5-gram : the gloomy outlook in Detroit The market 's pessimism reflects • 6-gram : outlook in Detroit The market 's pessimism reflects the gloomy • 7-gram : in Detroit The market 's pessimism reflects the gloomy outlook Phrase swaps Using constituency trees from the Penn Treebank (Marcus et al., 1994), we define phrases as constituents which don't contain any others within them. (See Fig. 2c or Fig. 3a in the main text.) Phrase swaps thus consist of swapping one phrase with another, leaving the other words intact. To provide an appropriate control perturbation, we swap two disjoint n-grams, which are the same size as true phrases but cross phrase boundaries. Adjacent word swaps To better isolate the effect of broken phrase boundaries, we used adjacent word swaps. Adjacent words were chosen randomly, and one swap was performed per sentence. A.2 ADDITIONAL MODELS A.2.1 PRE-TRAINED MODELS We briefly present the pre-trained models that we used for the experiments.9 (Footnote 9: We use the implementation from https://github.com/huggingface/transformers.) • BERT bert-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters. • RoBERTa roberta-base. 12-layer, 768-hidden, 12-heads, 125M parameters. • ALBERT albert-base-v1. 12 repeating layers, 128 embedding dimension, 768-hidden, 12-heads, 11M parameters. • DistilBERT distilbert-uncased. 6-layer, 768-hidden, 12-heads, 66M parameters. The model is distilled from the BERT model bert-base-uncased checkpoint. • XLNet xlnet-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters. Note that the hidden size is 768 across all the models. For each pre-trained model, input text is tokenized using its default tokenizer and features are extracted at the token level. A.2.2 UNTRAINED MODELS To control for properties which come purely from the architecture, we also run our analyses on randomly-initialized (untrained) models. All model weights are set to random values. Note that this random initialization also has an impact on the embedding layer. Here, we provide a side-by-side comparison of results on a trained and an untrained model from each model class (n-gram: Fig. 7; adjacent: Fig. 8). Across different model classes and tasks, none of our results were replicated in the untrained models. Thus the pattern of invariance we report cannot be explained by architecture alone. A.3 ADDITIONAL DISTORTION METRICS In addition to the scaled Frobenius distance, we considered other ways of measuring distortion in the representation.
Here we report results for two different metrics – canonical correlation analysis and a shifted cosine similarity. CCA Canonical correlation analysis (CCA) (Raghu et al., 2017) measures the similarity of two sets of variables using many samples from each. Given two sets of random variables $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_m)$, CCA finds linear weights $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ which maximize $\operatorname{cov}(a \cdot x, b \cdot y)$. In our context, we treat the representation of the original sentence as $x$ and the representation of the perturbed sentence as $y$, and the resulting correlation as a similarity measure. Since CCA requires many samples, we use the set of all word-level representations across all perturbed sentences. For example, to construct the samples of $x$ from $S$ perturbed sentences, we use $[X_1 | X_2 | \ldots | X_S]$, where each $X_i \in \mathbb{R}^{768 \times T_i}$. Unless specified otherwise, $S = 400$. For good estimates, CCA requires many samples (on the order of at least the number of dimensions), and we facilitate this by first reducing the dimension of the matrices using PCA. Using 400 components preserves ∼90% of the variance. Thus, while CCA gives a good principled measure of representational similarity, its hunger for samples makes it unsuitable as a per-sentence metric. We also measured distortion using Projection Weighted Canonical Correlation Analysis (PWCCA), an improved version of CCA for estimating the true correlation between tensors (Morcos et al., 2018).10 (Footnote 10: For both CCA and PWCCA, we use the implementation from https://github.com/google/svcca.) As reported in Figure 9, we did not find any qualitative differences between PWCCA and CCA in our experiments. Cosine A similarity measure defined on individual sentences is the cosine between the sentence-level representations. By sentence-level representation, we mean the concatenation of the word-level vectors into a single vector $s \in \mathbb{R}^{NT}$ (where $N$ is the dimension of each feature vector). Treating each dimension of the vector as a sample, we can then define the following metric: $\operatorname{corr}(s_i^{\mathrm{original}}, s_i^{\mathrm{swapped}})$. This is equivalent to computing the cosine of the vectors after subtracting the (scalar) mean across dimensions, hence we will refer to it as 'cosine'. A.4 PARTIAL LINEAR REGRESSION In order to control for uninteresting explanations of our results, we often make use of a simple method for regressing out confounds. Generally, we want to assess the linear relationship between $X$ and $Y$ when accounting for the (potentially non-linear) effect of another variable $Z$. In our experiments, $X$ is always the swap-induced distortion and $Y$ is the swap type, like integer-valued tree distance or binary-valued in/out phrase. We wish to allow $E[Y|Z]$ and $E[X|Z]$ to be any smooth function of $Z$, which is achieved by the least-squares solution to the following partially linear model: $Y \sim \beta_x X + \beta_z \cdot f(Z) \iff (Y - E[Y|Z]) \sim \beta_x (X - E[X|Z])$, where $f(z)$ is a vector of several (we use 10) basis functions (cubic splines with knots at 10 quantiles) of $Z$. Both regressions have the same optimal $\beta_x$, but the one on the left is computationally simpler (Hansen, 2000). The standard confidence intervals on $\beta_x$ apply. Intuitively, the $\beta_x$ obtained by the partially linear regression above is related to the conditional correlation of $X$ and $Y$ given $Z$: $\rho(X, Y|Z)$.
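A compact NumPy sketch of the residualized form on the right-hand side above (the Frisch-Waugh view): both X and Y are regressed on a basis expansion of Z, and beta_x is the slope between the residuals. We substitute a simple polynomial basis for the paper's cubic splines to keep the example self-contained; that substitution, and the function name, are ours.

```python
import numpy as np

def partial_beta(X, Y, Z, degree=5):
    """Estimate beta_x in Y ~ beta_x * X + beta_z . f(Z) by
    residualizing both X and Y on a basis expansion of Z
    (equivalently, (Y - E[Y|Z]) ~ beta_x (X - E[X|Z]))."""
    # Basis expansion of the confound Z (polynomial stand-in for the
    # paper's cubic splines); k = 0 supplies the intercept column.
    F = np.column_stack([Z ** k for k in range(degree + 1)])
    def residualize(v):
        coef, *_ = np.linalg.lstsq(F, v, rcond=None)
        return v - F @ coef
    rX, rY = residualize(X), residualize(Y)
    return (rX @ rY) / (rX @ rX)   # OLS slope on the residuals
```

For the conditional rank correlations used with the attention analysis (Fig. 5), X and Y would be rank-transformed before calling this function.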
Like an unconditional correlation, this conditional correlation will be zero if $X$ and $Y$ are conditionally independent given $Z$, but not necessarily vice versa (both $X$ and $Y$ must be Gaussian for the other direction to be true). To compute conditional rank correlations (which assess a monotonic relationship between $X$ and $Y$), we rank-transform $X$ and $Y$ (this changes the confidence interval calculations). We apply this method to attentions in Fig. 5. In these supplemental materials, we also report the results when $X$ is the binary in/out-of-phrase variable and $Z$ is PMI. The full p-values and coefficients of the uncontrolled and controlled regressions can be found in Table 1, where we observe that past layer 2, the p-value on the phrase boundary is very significant ($p < 10^{-12}$). A.5 SUPERVISED PROBES In this section, we describe the experiments based on the three linguistic tasks: parts of speech (POS); grandparent tags (GP); and constituency tree distance. The POS and GP classifiers were multinomial logistic regressions trained to classify each word's POS tag (e.g., 'NNP', 'VB') and the tag of its grandparent in the constituency tree, respectively. If a word has no grandparent, its label is the root token 'S'. The probes were optimized with standard stochastic gradient descent, with 50 sentences from the PTB per mini-batch. 10 epochs at a $10^{-3}$ learning rate were sufficient to reach convergence. The distance probe is a linear map $B$ applied to each word vector $w$ in the sentence, trained such that, for all word pairs $i, j$, $\mathrm{TreeDist}(i,j)$ matches $\|B(w_i - w_j)\|_2^2$ as closely as possible. Unlike the classifiers, there is freedom in the output dimension of $B$; we used 100, although performance and results are empirically the same for any choice greater than ∼64. Our probes are different from Hewitt & Manning (2019) in two ways: (1) we use constituency trees instead of dependency trees, and (2) instead of an L1 loss function, we use the Poisson (negative) log-likelihood as the loss function. That is, if $\lambda_{i,j} = \|B(w_i - w_j)\|_2^2$ and $y_{i,j} = \mathrm{TreeDist}(i,j)$, the per-pair loss $\ell_{i,j}$ satisfies $-\ell_{i,j} = y_{i,j} \log \lambda_{i,j} - \lambda_{i,j} - \log y_{i,j}!$. Otherwise, the probes are trained exactly as in Hewitt & Manning (2019). Specifically, we used standard SGD with 20 sentences from the PTB in each mini-batch, for 40 epochs. Evaluation A linear model is fit to maximize $p(y|\theta(x))$, with $p$ a probability function (multinomial for the classifiers, Poisson for distance) and $x$ coming from the unperturbed Transformer representation. We evaluate the model on $\tilde{x}$, the representations generated from a perturbed sentence. We take the average of $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$ over all the data $i$ in all sentences – for example, all words for the classifiers, and all pairs of words for the distance probe. Concretely, we are just measuring the difference in validation loss of the same probe on the $x$ data and the $\tilde{x}$ data. But because the loss is an appropriate probability function, we can interpret the same quantity as a difference in log-likelihood between the distribution conditioned on the regular representation and that conditioned on the perturbed representation. Distortion is similarly computed using the full sentence, providing a number for each swap in each sentence.
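To illustrate the distance probe's Poisson objective, here is a hedged PyTorch sketch of one loss evaluation. The initialization scale, the numerical epsilon, and dropping the $\log y_{i,j}!$ term (constant in $B$) are our choices, not details specified by the paper.

```python
import torch

d_model, d_probe = 768, 100
B = torch.nn.Parameter(torch.randn(d_probe, d_model) * 0.01)
opt = torch.optim.SGD([B], lr=1e-3)

def poisson_probe_loss(W, tree_dist):
    """W: (T, d_model) word vectors; tree_dist: (T, T) float tensor of
    constituency tree distances. Returns the Poisson negative
    log-likelihood of y = TreeDist(i, j) under the rate
    lambda_{ij} = ||B (w_i - w_j)||_2^2 (A.5)."""
    diffs = W.unsqueeze(1) - W.unsqueeze(0)     # (T, T, d_model) pairwise differences
    lam = (diffs @ B.T).pow(2).sum(-1)          # (T, T) predicted rates
    i, j = torch.triu_indices(len(W), len(W), offset=1)
    y, lam_ij = tree_dist[i, j], lam[i, j]
    # Poisson NLL: -(y log(lam) - lam - log(y!)); log(y!) is constant
    # in B and dropped here.
    return (lam_ij - y * torch.log(lam_ij + 1e-8)).mean()

# One optimization step on a single sentence:
# loss = poisson_probe_loss(W, tree_dist)
# opt.zero_grad(); loss.backward(); opt.step()
```

torch.nn.PoissonNLLLoss(log_input=False) implements the same objective up to the dropped constant and could be substituted here.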
1. What is the main contribution of the paper regarding linguistic structure and transformer models? 2. What are the strengths and weaknesses of the proposed distortion metric? 3. How does the reviewer assess the effectiveness and novelty of the experimental approach? 4. Do you have any suggestions for improving the paper, such as introducing additional probes or comparing different models? 5. Is there a clear connection between the paper's contributions and neuroscience research?
Review
The paper investigates the extent to which pre-trained transformer models successfully capture linguistic structure. The approach taken is to present the model with carefully constructed pairs of linguistic probes and then measure the difference in response to a naturally occurring sentence versus one that has been mutated using one of three different strategies. The first is to permute n-grams of a predetermined order. The next is to swap phrases within a parse tree, with the results being contrasted with swapping n-grams that don't correspond to phrase boundaries. Finally, the authors explore swapping words that cross varying amounts of syntactic structure. The part of the paper that I liked best was the introduction of the distortion metric in section 2.3. This seems like it could be a generally useful means for probing networks both for NLP as well as possibly for other domains (e.g., CV). The paper would have benefited from spending more time developing and motivating the metric. The construction looks well thought out; however, it would have been good to include at least some ablation experiments to show that z-score normalization and including the perturbation matrix in the formulation are necessary for the types of experiments performed later in the paper. I imagine the z-score normalization is helpful, but I wonder if the probe would also work without the perturbation matrix. If this is the case, it would allow the metric to be used to explore manipulations that go beyond word re-orderings. The experiments in the paper are interesting in that they support the case that large pre-trained transformer models do capture linguistic structure. However, there is also a complementary weakness in that the paper doesn't add anything particularly surprising given prior work in this area. The paper would be strengthened by introducing additional probes in order to more deeply understand what is and isn't captured by existing pretrained models. I would have also liked to see some treatment of other pretrained models, including near-BERT variants such as RoBERTa or ALBERT as well as other models with more distinct architectures and training objectives (e.g., T5 or a GPT model). Alternatively, the paper could have taken the approach of developing the best overall probe to detect what is captured by a model. The latter would involve running experiments with different probes and, in the best case, on different models to discover which method is the most discriminating. As a minor nit, the paper attempts to make a connection to neuroscience. This would be better done if there were a clearer and more explicit connection between the techniques explored in the paper and existing neuroscience work, beyond just the fact that the paper uses probes and measures network activity.
ICLR
Title Representational correlates of hierarchical phrase structure in deep language models Abstract While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformer networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to the syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.1 1 INTRODUCTION AND RELATED WORK It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on potential models and their implementations of hierarchical tasks, such as predictive maps and graphs. Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides a source of “ground truth” structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top of pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree-distances as distances in the representational subspace, and Kim et al.
(2020) show that these distances can be used even without any labeled information to induce hierarchical structure. However, recent work has highlighted issues with correlating supervised probe performance with the amount of language structure encoded in such representations (Hewitt & Liang, 2019). (Footnote 1: Datasets, extracted features and code will be publicly available upon publication.) Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way. More recent techniques such as the replica-based mean-field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020) connect representation geometry with linear classification performance, but the method is limited to categorization tasks. In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring the change in population activity in response to controlled input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of hierarchical phrase structure encoded in deep contextual representations, and present several key findings: 1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying that the characteristic context length increases from input to downstream layers. 2. Representational distortion caused by swapping two constituent phrases is smaller than when control sequences of the same length are swapped, indicating that the BERT representations are sensitive to hierarchical phrase structure. 3. Representational distortion caused by swapping adjacent words across a phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers. 4. Early layers pay more attention between syntactically closer adjacent pairs and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers. Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapts experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020). 2 METHODS Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting perturbed representation and the original.
While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have been primarily utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018). A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks. 2.1 SENTENCE PERTURBATIONS In this work we consider three different types of sentence perturbations designed to probe for different phenomena. n-gram shuffling In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example). While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect the representations selective to contexts of size n or higher within the sentence, and that lower n will result in greater distortion in sentence representations. Phrase swaps The n-gram shuffling experiments probe for sensitivity of representations to local context without taking into account syntactic structure. In the phrase swap experiments, we perturb a sentence by swapping two randomly chosen spans. We explore two ways of swapping spans. In the first setting, the spans are chosen such that they are valid phrases according to the sentence's parse tree.2 In the second setting, the spans are chosen such that they are invalid phrases. Importantly, in the second, control setting, we fix the length of the spans such that the lengths of the spans chosen to be swapped are the same as in the first setting (see Fig. 3a for an example). We hypothesize that swapping invalid phrases will result in more distortion than swapping valid phrases, since invalid swaps will result in greater degradation of syntactic structure. Adjacent word swaps In the adjacent word swapping experiments, we swap two adjacent words in a sentence. We again experiment with two settings – in the first setting, the swapped words stay within the phrase boundary (i.e., the two words share the same parent), while in the second setting, the swapped words cross phrase boundaries. We also perform a more fine-grained analysis where we condition the swaps based on the “syntactic distance” between the swapped words, where syntactic distance is defined as the distance between the two words in the parse tree (see Fig. 4c). Since a phrase corresponds to a subtree of the parse tree, this distance also quantifies the number of nested phrase boundaries between two adjacent words. Here, we expect the amount of distortion to be positively correlated with the syntactic distance of the words that are swapped. 2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS For our sentence representation, we focus on the Transformer family of models pretrained on large-scale language datasets (BERT and its variants).
Given an input word embedding matrix $X \in \mathbb{R}^{T \times d}$ for a sentence of length $T$, the Transformer applies self-attention over the previous layer's representation to produce a new representation, $X_l = f_l([H_{l,1}, \ldots, H_{l,H}])$, $H_{l,i} = A_{l,i} X_{l-1} V_{l,i}$, $A_{l,i} = \operatorname{softmax}\big((X_{l-1} Q_{l,i})(X_{l-1} K_{l,i})^{\top} / \sqrt{d_k}\big)$ (1), where $f_l$ is an MLP layer, $H$ is the number of heads, $d_H = d/H$ is the head embedding dimension, and $Q_{l,i}, K_{l,i}, V_{l,i} \in \mathbb{R}^{d \times d_k}$ are respectively the learned query, key, and value projection matrices at layer $l$ for head $i$. (Footnote 2: We use constituency parse trees from the English Penn Treebank (Marcus et al., 1994).) The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation $X_0$ is obtained by adding the position embeddings and the segment embeddings to the input token embeddings $X$, and passing it through a normalization layer.3 In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations $X_l = [x_{l,1}, \ldots, x_{l,T}]$, where $x_{l,t} \in \mathbb{R}^d$ is the contextualized representation for word $t$ at layer $l$.4 We analyze the trend in distortion as a function of layer depth $l$ for the different perturbations. We also explore the different attention heads $H_{l,i} \in \mathbb{R}^{T \times d_H}$ and the associated attention matrix $A_{l,i} \in \mathbb{R}^{T \times T}$ to inspect whether certain attention heads specialize at encoding syntactic information. 2.3 DISTORTION METRIC Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation. We thus represent each sentence as a matrix and use the distance induced by the matrix 2-norm. Specifically, let $P \in \{0,1\}^{T \times T}$ be the binary matrix representation of a permutation that perturbs the input sentence, i.e., $\tilde{X} = PX$. Further let $\tilde{X}_l$ and $X_l$ be the corresponding sentence representations at the $l$-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we also z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation, where these statistics are obtained from the full Penn Treebank corpus) to give $\tilde{Z}_l$ and $Z_l$. Our distortion metric for layer $l$ is then defined as $\|Z_l - P^{-1}\tilde{Z}_l\| / \sqrt{Td}$, where $\|\cdot\|$ is the matrix 2-norm (i.e., Frobenius norm).5 Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change, rather than distortion due to simple differences in surface form. We divide by $\sqrt{Td}$ to make the metric comparable between sentences (with different $T$) and networks (with different $d$). Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations, $z_l \in \mathbb{R}^{Td}$. Because each dimension is independently centered and standardized, the maximally unstructured distribution of $z_l$ is an isotropic $Td$-dimensional Gaussian. The expected distance between two such vectors is approximately $\sqrt{2Td}$. Therefore, we can interpret a distortion value approaching $\sqrt{2}$ as comparable to what we would expect if we had randomly redrawn the perturbed representation.
3 EXPERIMENTAL SETUP We apply our perturbation-based analysis to sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by the length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted by the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure. As we found the trend to be largely similar across the different models, in the following section we primarily discuss results from BERT (bert-base-cased), which has 12 layers and a hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2). (Footnote 3: However, the exact specification for the MLP and $X_0$ may vary across different pretrained models.) (Footnote 4: BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multiple tokens. Since we wish to evaluate representations at the word level, if a word is split into multiple tokens, its word representation is computed as the average of all its token representations.) (Footnote 5: There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms. We observed the results to be qualitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. We show the results from additional distortion metrics in Sec. A.3.) 4 RESULTS We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e., untrained) models (somewhat unsurprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model's structural knowledge acquired through training, and not simply due to the underlying architecture. 4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS Shuffling in units of larger n-grams introduces distortion only in the deeper BERT layers, whereas smaller n-gram shuffles introduce distortion earlier. The n-gram-sized shuffles break contexts larger than n while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for 'longer' (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper-layer representations. Phrasal boundaries matter Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact.
Phrasal boundaries matter. Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact. We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 3a), where we control for the length of the n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the positions of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary.

4.2 PHRASE HIERARCHY MATTERS

Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a).⁶ This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words). If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken.

⁶ Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two.

Controlling for co-occurrence. Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute; indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT's resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not. As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words $w$ and $v$, the PMI is $\log \frac{p(w,v)}{p(w)p(v)}$, which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics.
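For concreteness, PMI can be estimated from simple corpus counts. The sketch below is our own minimal implementation with unsmoothed relative frequencies (the paper does not specify its estimator beyond "empirical probabilities").

```python
import math
from collections import Counter

def pmi_table(sentences):
    """Estimate PMI for adjacent word pairs from a corpus given as a list
    of tokenized sentences. Uses raw relative frequencies, no smoothing."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())

    def pmi(w, v):
        p_wv = bigrams[(w, v)] / n_bi
        if p_wv == 0:
            return float("-inf")          # unseen pair
        p_w, p_v = unigrams[w] / n_uni, unigrams[v] / n_uni
        return math.log(p_wv / (p_w * p_v))

    return pmi

pmi = pmi_table([["the", "gloomy", "outlook"], ["the", "market"]])
print(pmi("the", "gloomy"))
```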
Effects on linguistic information. Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently in BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information.

Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019).⁷ The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint $x$ (i.e., BERT representations from different layers) to a conditional distribution of the labels, $p(y|\theta(x))$ (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$, where $\tilde{x}_i$ is the same datum as $x_i$ in the perturbed representation.⁸ Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call "inference impairment".

Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (Fig. 4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (Fig. 4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information.

⁷ While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance.
⁸ POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then $\log p(y|\theta(x_i))$ is simply the log probability of an individual label.
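Below is a sketch of the "inference impairment" computation for a classification probe. The scikit-learn probe and random data are stand-ins of our own choosing; since the paper's POS/GP probes are multinomial logistic regressions, `predict_log_proba` plays the role of $\log p(y|\theta(x))$.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inference_impairment(probe, X_orig, X_pert, y):
    """Mean drop in per-datapoint log-likelihood when the probe reads the
    perturbed rather than the original representations.
    Assumes integer labels y in 0..K-1, matching probe.classes_."""
    idx = np.arange(len(y))
    lp_orig = probe.predict_log_proba(X_orig)[idx, y]
    lp_pert = probe.predict_log_proba(X_pert)[idx, y]
    return float(np.mean(lp_orig - lp_pert))

# Toy usage: fit a probe on random "representations", then perturb them
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))
y = rng.integers(0, 3, 200)
probe = LogisticRegression(max_iter=1000).fit(X, y)
X_tilde = X + 0.5 * rng.standard_normal(X.shape)   # stand-in perturbation
print(inference_impairment(probe, X, X_tilde, y))
```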
4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION

In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion.

We find that in many Transformer heads, the attention, much like distortion, is proportional to the syntactic distance between two words. Fig. 5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize to syntactic phenomena.

We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance.
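A per-head version of this correlation can be computed as in the hedged sketch below, using `scipy.stats.spearmanr`. Symmetrizing the (non-symmetric) attention matrix is our own simplifying choice; the paper does not specify its exact aggregation, and the toy data here are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def head_syntax_correlation(attn, pairs, tree_dist):
    """Spearman rank correlation between attention and constituency tree
    distance over adjacent word pairs, for one head.

    attn      : (T, T) attention matrix A_{l,i} for a sentence
    pairs     : list of (t, t+1) adjacent word-position pairs
    tree_dist : tree distances for the same pairs
    """
    a = [0.5 * (attn[s, t] + attn[t, s]) for s, t in pairs]  # symmetrize
    rho, _ = spearmanr(a, tree_dist)
    return rho

rng = np.random.default_rng(0)
A = rng.random((10, 10))
A /= A.sum(axis=1, keepdims=True)              # toy row-normalized attention
pairs = [(t, t + 1) for t in range(9)]
dist = rng.integers(2, 8, size=9)              # toy tree distances
print(head_syntax_correlation(A, pairs, dist))
```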
5 DISCUSSION

In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and across phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems.

In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (e.g., edges) are combined to form larger receptive fields tuned to more complex stimuli (e.g., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity of phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT; see SM for other models).

Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; and (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure.

A SUPPLEMENTARY MATERIAL (SM)

A.1 ADDITIONAL DETAILS ON THE DATASET

In this section, we describe additional details of the manipulations done on the datasets.

n-gram shuffling. For a given sentence, we split it into sequential non-overlapping n-grams from left to right; if the length of the sentence is not a multiple of n, the remaining words form an additional m-gram, m < n. The list of n-grams is then randomly shuffled. Note that the 1-gram case is equivalent to a random shuffling of the words. In our analysis, we consider n-grams with n varying from 1 (i.e., individual words) to 7, and all the sentences have at least 10 words. We provide here an example of n-gram shuffling.

• Original: The market 's pessimism reflects the gloomy outlook in Detroit
• 1-gram: market pessimism the 's Detroit in The gloomy reflects outlook
• 2-gram: 's pessimism in Detroit The market reflects the gloomy outlook
• 3-gram: The market 's gloomy outlook in pessimism reflects the Detroit
• 4-gram: in Detroit The market 's pessimism reflects the gloomy outlook
• 5-gram: the gloomy outlook in Detroit The market 's pessimism reflects
• 6-gram: outlook in Detroit The market 's pessimism reflects the gloomy
• 7-gram: in Detroit The market 's pessimism reflects the gloomy outlook

Phrase swaps. Using constituency trees from the Penn Treebank (Marcus et al., 1994), we define phrases as constituents which don't contain any others within them. (See Fig. 2c or Fig. 3a in the main text.) Phrase swaps thus consist of swapping one phrase with another, leaving the other words intact. To provide an appropriate control perturbation, we swap two disjoint n-grams, which are the same size as true phrases but cross phrase boundaries.

Adjacent word swaps. To better isolate the effect of broken phrase boundaries, we used adjacent word swaps. Adjacent words were chosen randomly, and one swap was performed per sentence.
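The two simplest perturbations can be implemented in a few lines. The following is our own minimal reference implementation (the phrase swap additionally requires a constituency parse and is omitted here):

```python
import random

def ngram_shuffle(words, n, rng=random):
    """Split a sentence into consecutive n-grams (the remainder forms a
    shorter final chunk) and shuffle the chunks; n=1 is a full word shuffle."""
    chunks = [words[i:i + n] for i in range(0, len(words), n)]
    rng.shuffle(chunks)
    return [w for chunk in chunks for w in chunk]

def adjacent_swap(words, t):
    """Swap the adjacent words at positions t and t+1."""
    out = list(words)
    out[t], out[t + 1] = out[t + 1], out[t]
    return out

sent = "The market 's pessimism reflects the gloomy outlook in Detroit".split()
print(ngram_shuffle(sent, 3))
print(adjacent_swap(sent, 4))
```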
A.2 ADDITIONAL MODELS

A.2.1 PRE-TRAINED MODELS

We briefly present the pre-trained models that we used for the experiments.⁹

• BERT bert-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters.
• RoBERTa roberta-base. 12-layer, 768-hidden, 12-heads, 125M parameters.
• ALBERT albert-base-v1. 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters.
• DistilBERT distilbert-base-uncased. 6-layer, 768-hidden, 12-heads, 66M parameters. The model is distilled from the bert-base-uncased checkpoint.
• XLNet xlnet-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters.

Note that the hidden size is 768 across all the models. For each pre-trained model, input text is tokenized using its default tokenizer and features are extracted at the token level.

⁹ We use the implementation from https://github.com/huggingface/transformers.

A.2.2 UNTRAINED MODELS

To control for properties which come purely from the architecture, we also compute with randomly-initialized (untrained) models. All model weights are set to random values. Note that this random initialization also affects the embedding layer. Here, we provide a side-by-side comparison of results on a trained and an untrained model from each model class (n-gram: Fig. 7; adjacent: Fig. 8). Across different model classes and tasks, none of our results were replicated in the untrained models. Thus the pattern of invariance we report cannot be explained by architecture alone.

A.3 ADDITIONAL DISTORTION METRICS

In addition to the scaled Frobenius distance, we considered other ways of measuring distortion in the representation. Here we report results for two different metrics: canonical correlation analysis and a shifted cosine similarity.

CCA. Canonical correlation analysis (CCA; Raghu et al., 2017) measures the similarity of two sets of variables using many samples from each. Given two sets of random variables $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_m)$, CCA finds linear weights $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ which maximize $\mathrm{corr}(a \cdot x, b \cdot y)$. In our context, we treat the representation of the original sentence as $x$ and the representation of the perturbed sentence as $y$, and use the resulting correlation as a similarity measure.

Since CCA requires many samples, we use the set of all word-level representations across all perturbed sentences. For example, to construct the samples of $x$ from $S$ perturbed sentences, we use $[X_1 | X_2 | \cdots | X_S]$, where each $X_i \in \mathbb{R}^{768 \times T_i}$. Unless specified otherwise, $S = 400$. For good estimates, CCA requires many samples (on the order of at least the number of dimensions), and we facilitate this by first reducing the dimension of the matrices using PCA. Using 400 components preserves ∼90% of the variance. Thus, while CCA gives a principled measure of representational similarity, its hunger for samples makes it unsuitable as a per-sentence metric.

We also measured distortion using Projection Weighted Canonical Correlation Analysis (PWCCA), an improved version of CCA for estimating the true correlation between tensors (Morcos et al., 2018).¹⁰ As reported in Figure 9, we did not find any qualitative differences between PWCCA and CCA in our experiments.

¹⁰ For both CCA and PWCCA, we use the implementation from https://github.com/google/svcca.

Cosine. A similarity measure defined on individual sentences is the cosine between the sentence-level representations. By sentence-level representation, we mean the concatenation of the word-level vectors into a single vector $s \in \mathbb{R}^{NT}$ (where $N$ is the dimension of each feature vector). Treating each dimension of the vector as a sample, we can then define the metric $\mathrm{corr}(s_i^{\text{original}}, s_i^{\text{swapped}})$. This is equivalent to computing the cosine of the vectors after subtracting the (scalar) mean across dimensions, hence we will refer to it as 'cosine'.

A.4 PARTIAL LINEAR REGRESSION

In order to control for uninteresting explanations of our results, we often make use of a simple method for regressing out confounds. Generally, we want to assess the linear relationship between $X$ and $Y$ when accounting for the (potentially non-linear) effect of another variable $Z$. In our experiments, $X$ is always the swap-induced distortion and $Y$ is the swap type, like integer-valued tree distance or binary-valued in/out phrase. We wish to allow $E[Y|Z]$ and $E[X|Z]$ to be any smooth function of $Z$, which is achieved by the least-squares solution to the following partially linear model:

$$Y \sim \beta_x X + \beta_z \cdot f(Z) \iff (Y - E[Y|Z]) \sim \beta_x (X - E[X|Z])$$

where $f(z)$ is a vector of several (we use 10) basis functions of $Z$ (we used cubic splines with knots at 10 quantiles). Both regressions have the same optimal $\beta_x$, but the one on the left is computationally simpler (Hansen, 2000). The standard confidence intervals on $\beta_x$ apply. Intuitively, the $\beta_x$ obtained by the partially linear regression above is related to the conditional correlation of $X$ and $Y$ given $Z$: $\rho(X, Y|Z)$. Like an unconditional correlation, it will be zero if $X$ and $Y$ are conditionally independent given $Z$, but not necessarily vice versa (both $X$ and $Y$ must be Gaussian for the other direction to be true). To compute conditional rank correlations (which assess a monotonic relationship between $X$ and $Y$), we rank-transform $X$ and $Y$ (this changes the confidence interval calculations). We apply this method to attentions in Fig. 5. In these supplemental materials, we also report the results when $X$ is the binary in/out-phrase variable and $Z$ is PMI. The full p-values and coefficients of the uncontrolled and controlled regressions can be found in Table 1, where we observe that past layer 2, the p-value on the phrase boundary is very significant ($p < 10^{-12}$).
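A NumPy sketch of the residual-on-residual (Frisch-Waugh-Lovell) estimate of $\beta_x$ follows. This is our own illustration: we substitute a truncated-power cubic spline basis with knots at quantiles for the paper's spline basis, so the basis details are an assumption.

```python
import numpy as np

def spline_basis(z, n_knots=10):
    """Truncated-power cubic spline basis with knots at quantiles of z,
    a simple stand-in for the cubic splines used in A.4."""
    knots = np.quantile(z, np.linspace(0.05, 0.95, n_knots))
    cols = [np.ones_like(z), z, z**2, z**3]
    cols += [np.clip(z - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def residualize(v, F):
    """Residual of v after least-squares projection onto the columns of F,
    i.e., an estimate of v - E[v | Z]."""
    coef, *_ = np.linalg.lstsq(F, v, rcond=None)
    return v - F @ coef

def partial_linear_beta(x, y, z):
    """Slope of Y on X controlling for any smooth function of Z:
    regress the residual of y on the residual of x."""
    F = spline_basis(z)
    rx, ry = residualize(x, F), residualize(y, F)
    return (rx @ ry) / (rx @ rx)

# Toy check: y = 2x + z^2 + noise, with x itself depending on z
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
x = np.sin(z) + rng.standard_normal(500)
y = 2.0 * x + z**2 + rng.standard_normal(500)
print(partial_linear_beta(x, y, z))   # ~ 2.0 despite the confound z
```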
A.5 SUPERVISED PROBES

In this section, we describe the experiments based on the three linguistic tasks: parts of speech (POS); grandparent tags (GP); and constituency tree distance.

The POS and GP classifiers were multinomial logistic regressions trained to classify each word's POS tag (e.g., 'NNP', 'VB') and the tag of its grandparent in the constituency tree, respectively. If a word has no grandparent, its label is the root token 'S'. The probes were optimized with standard stochastic gradient descent, with 50 sentences from the PTB per mini-batch; 10 epochs at a learning rate of $10^{-3}$ were sufficient to reach convergence.

The distance probe is a linear map $B$ applied to each word vector $w$ in the sentence, trained such that, for all word pairs $i, j$, $\mathrm{TreeDist}(i,j)$ matches $\|B(w_i - w_j)\|_2^2$ as closely as possible. Unlike the classifiers, there is freedom in the output dimension of $B$; we used 100, although performance and results are empirically the same for any choice greater than ∼64. Our probes differ from Hewitt & Manning (2019) in two ways: (1) we use constituency trees instead of dependency trees, and (2) instead of an L1 loss function, we use the Poisson (negative) log-likelihood as the loss function. That is, if $\lambda_{i,j} = \|B(w_i - w_j)\|_2^2$ and $y_{i,j} = \mathrm{TreeDist}(i,j)$, then

$$-\ell_{i,j} = y_{i,j} \log \lambda_{i,j} - \lambda_{i,j} - \log(y_{i,j}!)$$

Otherwise, the probes are trained exactly as in Hewitt & Manning (2019). Specifically, we used standard SGD with 20 sentences from the PTB in each mini-batch, for 40 epochs.

Evaluation. A linear model is fit to maximize $p(y|\theta(x))$, with $p$ a probability function (multinomial for the classifiers, Poisson for distance), and $x$ coming from the unperturbed Transformer representation. We evaluate the model on $\tilde{x}$, the representations generated from a perturbed sentence. We take the average of $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$ over all the data $i$ in all sentences: all words for the classifiers, and all pairs of words for the distance probe. Concretely, we are just measuring the difference in validation loss of the same probe on the $x$ data and the $\tilde{x}$ data. But because the loss is an appropriate probability function, we can interpret the same quantity as a difference in log-likelihood between the distribution conditioned on the regular representation and that conditioned on the perturbed representation. Distortion is similarly computed using the full sentence, providing a number for each swap in each sentence.
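A schematic PyTorch version of the distance probe and its Poisson objective is sketched below, with toy data as placeholders. Note that `torch.nn.PoissonNLLLoss(log_input=False, full=True)` implements the negative of the log-likelihood above, with a Stirling approximation in place of the exact $\log(y!)$ term; the dimensions and training loop are ours.

```python
import torch
import torch.nn as nn

d, out_dim = 768, 100
B = nn.Linear(d, out_dim, bias=False)        # the probe's linear map B
opt = torch.optim.SGD(B.parameters(), lr=1e-3)
poisson_nll = nn.PoissonNLLLoss(log_input=False, full=True)

def probe_step(W, tree_dist):
    """One SGD step on a single sentence.
    W         : (T, d) frozen BERT word vectors for one sentence
    tree_dist : (T, T) constituency tree distances between word pairs"""
    diffs = W.unsqueeze(0) - W.unsqueeze(1)  # (T, T, d) pairwise differences
    lam = (B(diffs) ** 2).sum(-1)            # rates lambda = ||B(w_i - w_j)||^2
    loss = poisson_nll(lam, tree_dist)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-ins for BERT features and tree distances
T = 12
W = torch.randn(T, d)
dist = torch.randint(2, 9, (T, T)).float()
dist.fill_diagonal_(0)                       # a word is at distance 0 from itself
print(probe_step(W, dist))
```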
1. What is the focus of the review, particularly regarding the research question and methodology?
2. What are the strengths and weaknesses of the proposed approach, especially concerning the measurement used for analyzing representation sensitivity?
3. Are there any concerns or questions regarding the results and their interpretation, including the validity of the measure used?
4. Are there any missing references or relevant works that should be considered in the discussion?
5. How can the presentation be improved, such as figure readability?
Review
The paper investigates the sensitivity of BERT representations to different kinds of permutations in the input sentence. These transformations include n-gram permutation, span swaps (with or without crossing syntactic phrase boundaries), and adjacent token swaps (with differing syntactic distances). The authors measure the l2 distance between representations coming from the original and perturbed input. Overall, the results suggest that BERT is sensitive to hierarchical phrase structure.

Strengths

The idea of measuring a network's response to input transformations is nice and could potentially be used to test different kinds of hypotheses.

Weaknesses

My main concern is the measure used for the distance between representations. Namely, this is the l2 distance, which accounts for distance in neurons. Therefore, it does not tell us to what extent representations encode different things, but rather how different their individual neurons are. For example, different phenomena can be either focused in a network (encoded by only a few neurons) or distributed (see, e.g., the paper [1] or the more recent [2]). Therefore, the phenomena that suffer from perturbations have a different impact on the l2 distance because they affect different numbers of neurons. As a consequence of the above, it is not clear what to conclude from the proposed results: they are likely due not to high-level differences in what the original/perturbed representations encode, but rather to how these underlying phenomena affect individual neurons. (To measure differences in what representations encode you can use, for example, PWCCA (NeurIPS 2018, "Insights on representational similarity in neural networks with canonical correlation") or other related measures.)

[1] AAAI 2019 "What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models"
[2] EMNLP 2020 "Analyzing Individual Neurons in Pre-trained Language Models"

Other concerns. If we put aside the validity of the measure, it is still not clear what to take away from these results: they show that the method passes sanity checks rather than telling us something we didn't know before. For example, there is a long line of work showing that BERT "understands" syntax and phrase composition (looking at representations, geometry, attention heads, etc.). Hence the results stated in contributions 2, 3, and 4 are not surprising. Contribution 1 is also more of a sanity check: of course, shuffling smaller parts has to cause more distortion than shuffling longer phrases.

Questions

Section 4.1, paragraph 1: when referring to Figure 2b, you say "When we shuffle in units of larger n-grams, it only introduces distortions in the deeper BERT layers compared to smaller n-gram shuffles." I have trouble seeing this from the figure. For all lines, the distortion goes up almost linearly from layer 2 to layers 9-10. Yes, shuffling larger n-grams causes lower distortion, but this is expected. Am I missing something?

Missing references (in addition to those mentioned above)

When hypothesizing about heads specializing in syntax (Section 4.3), none of the relevant previous work is mentioned. For example:
[3] - for the NMT Transformer, showed that the important heads are specialized, and many of them are syntactic.
[4] - repeated the previous syntax-head evaluation for BERT.
[5] - also mentions that some heads track syntax.
[6] - looks relevant: it also measures changes in representations across models, layers, for different tokens, etc.
Maybe the most relevant to your work is the part showing how different words influence each other (e.g., rare words influence others more than frequent ones; the same analysis for POS). Note also which measure they use - PWCCA, which is invariant to linear transformations of representations.

[3] ACL 2019 "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned"
[4] "Do Attention Heads in BERT Track Syntactic Dependencies?"
[5] BlackBoxNLP 2019 "What Does BERT Look At? An Analysis of BERT's Attention"
[6] EMNLP 2019 "The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives"

Presentation

The paper is overall fairly clear, but Figures 2d and 3e are not readable. Namely, (even at maximum zoom) it is very hard to distinguish between the solid, dashed, and dash-dot lines of the same color.
Given an input word embedding matrix X ∈ RT×d for a sentence of length T , the Transformer applies self attention over the previous layer’s representation to produce a new representation, Xl = fl([Hl,1, . . . ,Hl,H ]), Hl,i = Al,iXl−1Vl,i, Al,i = softmax ( (Xl−1Ql,i)(Xl−1Kl,i) > √ dk ) , (1) 2We use constituency parse trees from the English Penn Treebank (Marcus et al., 1994). where fl is an MLP layer, H is the number of heads, dH = dH is the head embedding dimension, and Ql,i,Kl,i,Vl,i ∈ Rd×dk are respectively the learned query, key, and value projection matrices at layer l for head i. The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation X0 is obtained by adding the position embeddings and the segment embeddings to the input token embeddings X, and passing it through normalization layer.3 In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations Xl = [xl,1, . . . ,xl,T ], where xl,t ∈ Rd is the contextualized representation for word t at layer l.4 We analyze the trend in distortion as a function of layer depth l for the different perturbations. We also explore the different attention heads Hl,i ∈ RT×dH and the associated attention matrix Al,i ∈ RT×T to inspect whether certain attention heads specialize at encoding syntactic information. 2.3 DISTORTION METRIC Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation. We thus represent each sentence as a matrix and use the distance induced by matrix 2-norm. Specifically, let P ∈ {0, 1}T×T be the binary matrix representation of a permutation that perturbs the input sentence, i.e., X̃ = PX. Further let X̃l and Xl be the corresponding sentence representations for the l-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we also z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation where these statistics are obtained from the full Penn Treebank corpus) to give Z̃l and Zl. Our distortion metric for layer l is then defined as ‖Zl −P−1Z̃l‖/ √ Td, where ‖ · ‖ is the matrix 2-norm (i.e., Frobenius norm).5 Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change, rather than distortion due to simple differences in surface form. We divide by √ Td to make the metric comparable between sentences (with different T ) and networks (with different d). Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations, zl ∈ RTd. Because each dimension is independently centered and standardized, the maximally unstructured distribution of zl is an isotropic Td-dimensional Gaussian. The expected distance between two such vectors is approximately √ 2Td. Therefore, we can interpret a distortion value approaching √ 2 as comparable to if we had randomly redrawn the perturbed representation. 
3 EXPERIMENTAL SETUP We apply our perturbation-based analysis on sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted from the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure. As we found the trend to be largely similar across the different models, in the following section, we primarily discuss results from BERT (bert-base-cased), which has 12 layers and hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2). 3However, the exact specification for the MLP and X0 may vary across different pretrained models. 4BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multiple tokens. Since we wish to evaluate representations at word-level, if a word is split into multiple tokens, its word representation is computed as the average of all its token representations. 5There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms. We observed the results to be qualitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. We show the results from additional distortion metrics in Sec. A.3. 4 RESULTS We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e. untrained) models (somewhat unsuprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model’s structural knowledge acquired through training, and not simply due to the underlying architecture. 4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS When we shuffle in units of larger n-grams, it only introduces distortions in the deeper BERT layers compared to smaller n-gram shuffles. The n-gram sized shuffles break contexts larger than n, while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for ‘longer’ (Fig. 2b, bottom curve), implying that long-range contexts play a larger role deeper layer representations. Phrasal boundaries matter Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represented in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact. 
We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 3a), where we control for the length of n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the position of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary. 4.2 PHRASE HIERARCHY MATTERS Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, which is a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a)6. This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words). If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a 6Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two. positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken. Controlling for co-occurrence Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute – indeed it is used to train many non-contextual word embeddings (e.g., word2vec Mikolov et al. (2013), GloVe Pennington et al. (2014)). So it is natural to ask whether BERT’s resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not. As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words w and v, the PMI is log p(w,v)p(w)p(v) , which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. 
This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics. Effects on linguistic information Do our input perturbations, and the resulting the distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information. Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part of speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and a parse tree distance reconstruction (Hewitt & Manning, 2019)7. The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint x (i.e. BERT representations from different layers) to a conditional distribution of the labels, p(y|θ(x)) (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is log p(y|θ(xi)) − log p(y|θ(x̃i)), where x̃i is same datum as xi in the perturbed representation8. Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call “inference impairment”. Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e. distance >2). Meanwhile, impairment of the distance probe (4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportionate to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information. 4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION In the transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion. We find that in many Transformer heads, the attention—much like distortion—is proportional to the syntactic distance between two words. Fig. 
5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs, and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, and with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize to syntactic phenomena. 7While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance. 8POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then log p(y|θ(xi)) is simply the log probability of an individual label. We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test a whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance. 5 DISCUSSION In this paper, we used the representational change in response to perturbed input in order to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion to the magnitude of attention paid to words within and out of phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems. In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity of phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT, see SM for other models). Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure. 
A SUPPLEMENTARY MATERIAL (SM) A.1 ADDITIONAL DETAILS ON THE DATASET In this section, we describe additional details of the manipulations done on the datasets. n-gram shuffling For a given a sentence, we split it into sequential non-overlapping n-gram’s from left to right; if the length of the sentence is not a multiple of n, the remaining words form an additional m-gram, m < n. The list of the n-gram’s is randomly shuffled. Note that the 1-gram case is equivalent to a random shuffling of the words. In our analysis, we consider n-grams, with n varying from 1 (i.e., individual words) to 7 and all the sentences have at least 10 words. We provide here an example of n-gram shuffling. • Original: The market ’s pessimism reflects the gloomy outlook in Detroit • 1-gram : market pessimism the ’s Detroit in The gloomy reflects outlook • 2-gram : ’s pessimism in Detroit The market reflects the gloomy outlook • 3-gram : The market ’s gloomy outlook in pessimism reflects the Detroit • 4-gram : in Detroit The market ’s pessimism reflects the gloomy outlook • 5-gram : the gloomy outlook in Detroit The market ’s pessimism reflects • 6-gram : outlook in Detroit The market ’s pessimism reflects the gloomy • 7-gram : in Detroit The market ’s pessimism reflects the gloomy outlook Phrase swaps Using constituency trees from the Penn TreebankMarcus et al. (1994), we define phrases as constituents which don’t contain any others within them. (See Fig. 2c or Fig. 3a in the main text.) Phrase swaps thus consist of swapping one phrase with another, and leaving other words intact. To provide an appropriate control perturbation, we swap two disjoint n-grams, which are the same size as true phrases but cross phrase boundaries. Adjacent word swaps To better isolate the effect of broken phrase boundaries, we used adjacent word swaps. Adjacent words were chosen randomly, and one swap was performed per sentence. A.2 ADDITIONAL MODELS A.2.1 PRE-TRAINED MODELS We present briefly the pre-trained models that we used for the experiments.9 • BERT bert-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters. • RoBERTa roberta-base. 12-layer, 768-hidden, 12-heads, 125M parameters. • ALBERT albert-base-v1. 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters. • DistilBERT distilbert-uncased. 6-layer, 768-hidden, 12-heads, 66M parameters. The model distilled from the BERT model bert-base-uncased checkpoint. • XLNet xlnet-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters. Note that the hidden size is 768 across all the models. For each pre-trained model, input text is tokenized using its default tokenizer and features are extracted at token level. A.2.2 UNTRAINED MODELS To control for properties which come purely from the architecture, we also compute with randomlyinitialized (untrained) models. All model weights are set to a random number. Note that this random initialization has also an impact on the embedding layer. Here, we provide a side-by-side comparison of results on a trained an untrained model from each model class (n-gram: Fig. 7; adjacent: Fig. 8). Across different model classes and tasks, none of our results were replicated in the untrained models. Thus the pattern of invariance we report cannot be explained by architecture alone. A.3 ADDITIONAL DISTORTION METRICS In addition to the scaled Frobenius distance, we considered other ways of measuring distortion in the representation. 
Here we report results for two different metrics – canonical correlation analysis and a shifted cosine similarity. CCA Canonical correlations analysis (CCA) Raghu et al. (2017) measures the similarity of two sets of variables using many samples from each. Given two sets of random variables x = (x1, x2, ..., xn) and y = (y1, y2, ..., ym), CCA finds linear weights a ∈ Rn and b ∈ Rm which maximise cov(a · x,b·y). In our context, we treat the representation of the original sentence as x, and the representation of the perturbed sentence as y, and the resulting correlation as a similarity measure. 9We use the implementation from https://github.com/huggingface/transformers. Since CCA requires many samples, we use the set of all word-level representations across all perturbed sentences. For example, to construct the samples of x from S perturbed sentences, we get use [X1|X2|...|XS ], where each Xi ∈ R768×Ti . Unless specified otherwise, S = 400. For good estimates, CCA requires many samples (on the order of at least the number of dimensions), and we facilitate this by first reducing the dimension of the matrices using PCA. Using 400 components preserves ∼ 90% of the variance. Thus, while CCA gives a good principled measure of representational similarity, its hunger for samples makes it unsuitable as a per-sentence metric. We also measured distortion using Projection Weighted Canonical Correlation Analysis (PWCCA), an improved version of CCA to estimate the true correlation between tensors Morcos et al. (2018).10 As reported in Figure 9, we did not find any qualitative differences between PWCCA and CCA in our experiments. Cosine A similarity measure defined on individual sentences is the cosine between the sentencelevel representations. By sentence-level representation, we mean the concatenation of the wordlevel vectors into a single vector s ∈ RNT (where N is the dimension of each feature vector). Treating each dimension of the vector as a sample, we can then define the following metric: corr ( soriginali , s swapped i ) . This is equivalent to computing the cosine of the vectors after subtracting the (scalar) mean across dimensions, hence we will refer to it as ‘cosine’. A.4 PARTIAL LINEAR REGRESSION In order to control for uninteresting explanations of our results, we often make use of a simple method for regressing out confounds. Generally, we want to assess the linear relationship between X and Y , 10For both CCA and PWCCA, we use the implementation from https://github.com/google/ svcca. when accounting for the (potentially non-linear) effect of another variable Z. In our experiments, X is always the swap-induced distortion and Y is the swap type, like integer-valued tree distance or binary-valued in/out phrase. We wish to allow E[Y |Z] and E[X|Z] to be any smooth function of Z, which is achieved by the least-squares solution to the following partially linear model: Y ∼ βxX + βz · f(Z)⇐⇒ (Y − E[Y |Z]) ∼ βx(X − E[X|Z]) where f(z) is a vector of several (we use 10) basis functions (we used cubic splines with knots at 10 quantiles) of Z. Both regressions have the same optimal βx, but the one on the left is computationally simpler (Hansen, 2000). The standard confidence intervals on βx apply. Intuitively, the βx obtained by the partially linear regression above is related to the conditional correlation of X and Y given Z: ρ(X,Y |Z). 
We apply this method to attentions in Fig. 5. In these supplemental materials, we also report the results when X is the binary in/out-phrase variable and Z is PMI. The full p-values and coefficients of the uncontrolled and controlled regressions can be found in Table 1, where we observe that past layer 2, the p-value on the phrase boundary is very significant (p < 10^-12).

A.5 SUPERVISED PROBES

In this section, we describe the experiments based on the three linguistic tasks: parts of speech (POS), grandparent tags (GP), and constituency tree distance. The POS and GP classifiers were multinomial logistic regressions trained to classify each word’s POS tag (e.g., ‘NNP’, ‘VB’) and the tag of its grandparent in the constituency tree, respectively. If a word has no grandparent, its label is the root token ‘S’. The probes were optimized with standard stochastic gradient descent, using 50 sentences from the PTB per mini-batch; 10 epochs at a learning rate of 10^-3 were sufficient to reach convergence.

The distance probe is a linear map B applied to each word-vector w in the sentence, trained such that, for all word pairs i, j, TreeDist(i, j) matches ‖B(w_i − w_j)‖₂² as closely as possible. Unlike the classifiers, there is freedom in the output dimension of B; we used 100, although performance and results are empirically the same for any choice greater than ∼64. Our probes differ from Hewitt & Manning (2019) in two ways: (1) we use constituency trees instead of dependency trees, and (2) instead of an L1 loss function, we use the Poisson (negative) log-likelihood as the loss function. That is, if λ_{i,j} = ‖B(w_i − w_j)‖₂² and y_{i,j} = TreeDist(i, j), then

−ℓ_{i,j} = y_{i,j} log λ_{i,j} − λ_{i,j} − log(y_{i,j}!).

Otherwise, the probes are trained exactly as in Hewitt & Manning (2019). Specifically, we used standard SGD with 20 sentences from the PTB in each mini-batch, for 40 epochs.

Evaluation A linear model is fit to maximize p(y|θ(x)), with p a probability function (multinomial for the classifiers, Poisson for distance) and x coming from the unperturbed transformer representation. We evaluate the model on x̃, the representations generated from a perturbed sentence. We take the average of log p(y|θ(x_i)) − log p(y|θ(x̃_i)) over all the data i in all sentences: all words for the classifiers, and all pairs of words for the distance probe. Concretely, we are simply measuring the difference in validation loss of the same probe on the x data and the x̃ data. But because the loss is an appropriate probability function, we can interpret the same quantity as a difference in log-likelihood between the distribution conditioned on the regular representation and that conditioned on the perturbed representation. Distortion is similarly computed using the full sentence, providing a number for each swap in each sentence.
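To make the distance probe's loss concrete, here is a small numpy sketch of the per-sentence Poisson negative log-likelihood above; shapes and names are illustrative, and gammaln(y + 1) computes log(y!).

```python
import numpy as np
from scipy.special import gammaln  # log(y!) = gammaln(y + 1)

def distance_probe_loss(B, words, tree_dist):
    """Poisson negative log-likelihood of the distance probe on one sentence.

    B: (100, d) linear map; words: (T, d) word vectors;
    tree_dist: (T, T) gold constituency tree distances.
    """
    proj = words @ B.T                        # (T, 100)
    diff = proj[:, None, :] - proj[None, :, :]
    lam = (diff ** 2).sum(axis=-1)            # lambda_ij = ||B(w_i - w_j)||^2
    i, j = np.triu_indices(len(words), k=1)   # all distinct word pairs
    y = tree_dist[i, j]
    return np.sum(lam[i, j] - y * np.log(lam[i, j]) + gammaln(y + 1))
```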
1. What is the focus of the paper, and what does it aim to achieve? 2. What is the methodology used in the paper, and how effective is it? 3. What are the results of the paper, and how significant are they? 4. Are there any limitations to the study, and how might they affect the conclusions drawn from it? 5. How does the reviewer assess the impact of the work, and why do they feel it is not substantial enough? 6. What are the suggestions made by the reviewer for improving the study, and how might they enhance its value?
Review
This paper analyzes the ability of the BERT model to learn good representations of sentences through a purely empirical study; there is no other contribution in the form of a new model or a new analysis technique. In the experimental analysis, phrases or words are swapped and the corresponding changes in the sentence representations are analyzed across all the hidden layers. I didn't see any surprising outcomes from the analysis, so I am not sure how impactful this work is. I would rather see this kind of work in a workshop track for ICLR. A minor comment: only one dataset is considered in the analysis. I would like to see the analysis on sentences from the training set as well as from many test sets.
ICLR
Title Representational correlates of hierarchical phrase structure in deep language models

Abstract While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built up along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformer networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to the syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words. (Datasets, extracted features, and code will be publicly available upon publication.)

1 INTRODUCTION AND RELATED WORK

It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on potential models and their implementations of hierarchical tasks, such as predictive maps and graphs. Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides a source of “ground truth” structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top of pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree distances as distances in the representational subspace, and Kim et al.
(2020) show that these distances can be used even without any labeled information to induce hierarchical structure. However, recent work has highlighted issues with correlating supervised probe performance with the amount of language structure encoded in such representations (Hewitt & Liang, 2019). Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way. More recent techniques, such as the replica-based mean-field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020), connect representation geometry with linear classification performance, but the method is limited to categorization tasks.

In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring the change in population activity in response to controlled input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of the hierarchical phrase structure encoded in deep contextual representations, and present several key findings:

1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying that the characteristic context scale increases from the input to downstream layers.

2. Representational distortion caused by swapping two constituent phrases is smaller than when control sequences of the same length are swapped, indicating that the BERT representations are sensitive to hierarchical phrase structure.

3. Representational distortion caused by swapping adjacent words across a phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers.

4. Early layers pay more attention between syntactically closer adjacent pairs, and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers.

Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapts experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020).

2 METHODS

Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting, perturbed representation and the original.
While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have been primarily utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018). A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks.

2.1 SENTENCE PERTURBATIONS

In this work we consider three different types of sentence perturbations designed to probe for different phenomena.

n-gram shuffling In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example). While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect the representations selective to context of size n or higher within the sentence, and that lower n will result in greater distortion in sentence representations.

Phrase swaps The n-gram shuffling experiments probe for the sensitivity of representations to local context without taking into account syntactic structure. In the phrase swap experiments, we perturb a sentence by swapping two randomly chosen spans. We explore two ways of swapping spans. In the first setting, the spans are chosen such that they are valid phrases according to the sentence's parse tree (we use constituency parse trees from the English Penn Treebank; Marcus et al., 1994). In the second setting, the spans are chosen such that they are not valid phrases. Importantly, in the second, control setting, we fix the lengths of the swapped spans to match those in the first setting (see Fig. 3a for an example). We hypothesize that swapping invalid phrases will result in more distortion than swapping valid phrases, since invalid swaps cause greater degradation of the syntactic structure.

Adjacent word swaps In the adjacent word swap experiments, we swap two adjacent words in a sentence. We again experiment with two settings: in the first setting, the swapped words stay within the phrase boundary (i.e., the two words share the same parent), while in the second setting, the swapped words cross phrase boundaries. We also perform a more fine-grained analysis where we condition the swaps on the “syntactic distance” between the swapped words, where syntactic distance is defined as the distance between the two words in the parse tree (see Fig. 4c). Since a phrase corresponds to a subtree of the parse tree, this distance also quantifies the number of nested phrase boundaries between two adjacent words. Here, we expect the amount of distortion to be positively correlated with the syntactic distance of the words that are swapped.
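To make the notion of syntactic distance concrete, here is a small sketch that computes the tree distance between two words, assuming nltk-style constituency trees; the parse string and function name are illustrative.

```python
from nltk import Tree

def syntactic_distance(parse, i, j):
    """Tree distance between words i and j, measured between their
    preterminal (POS) nodes, so two words sharing a parent are at distance 2."""
    # Drop the last index to move from each leaf to its preterminal node.
    pi = parse.leaf_treeposition(i)[:-1]
    pj = parse.leaf_treeposition(j)[:-1]
    common = 0  # depth of the lowest common ancestor
    for a, b in zip(pi, pj):
        if a != b:
            break
        common += 1
    return (len(pi) - common) + (len(pj) - common)

parse = Tree.fromstring(
    "(S (NP (DT The) (NN market)) (VP (VBZ reflects) (NP (DT the) (NN outlook))))")
print(syntactic_distance(parse, 0, 1))  # within the NP: 2
print(syntactic_distance(parse, 1, 2))  # across the NP-VP boundary: 4
```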
2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS

For our sentence representations, we focus on the Transformer family of models pretrained on large-scale language datasets (BERT and its variants). Given an input word embedding matrix X ∈ R^{T×d} for a sentence of length T, the Transformer applies self-attention over the previous layer’s representation to produce a new representation,

X_l = f_l([H_{l,1}, ..., H_{l,H}]),  H_{l,i} = A_{l,i} X_{l−1} V_{l,i},  A_{l,i} = softmax((X_{l−1} Q_{l,i})(X_{l−1} K_{l,i})^T / √d_k),  (1)

where f_l is an MLP layer, H is the number of heads, d_H = d/H is the head embedding dimension, and Q_{l,i}, K_{l,i}, V_{l,i} ∈ R^{d×d_k} are respectively the learned query, key, and value projection matrices at layer l for head i. The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation X_0 is obtained by adding the position embeddings and the segment embeddings to the input token embeddings X, and passing the result through a normalization layer (the exact specification of the MLP and X_0 may vary across pretrained models).

In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations X_l = [x_{l,1}, ..., x_{l,T}], where x_{l,t} ∈ R^d is the contextualized representation for word t at layer l. Since BERT uses BPE tokenization (Sennrich et al., 2015), some words are split into multiple tokens; as we wish to evaluate representations at the word level, the representation of such a word is computed as the average of all its token representations. We analyze the trend in distortion as a function of layer depth l for the different perturbations. We also explore the different attention heads H_{l,i} ∈ R^{T×d_H} and the associated attention matrix A_{l,i} ∈ R^{T×T} to inspect whether certain attention heads specialize at encoding syntactic information.

2.3 DISTORTION METRIC

Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation. We thus represent each sentence as a matrix and use the distance induced by the entrywise matrix 2-norm. Specifically, let P ∈ {0, 1}^{T×T} be the binary matrix representation of a permutation that perturbs the input sentence, i.e., X̃ = PX. Further let X̃_l and X_l be the corresponding sentence representations at the l-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation, where these statistics are obtained from the full Penn Treebank corpus) to give Z̃_l and Z_l. Our distortion metric for layer l is then defined as

‖Z_l − P^{−1} Z̃_l‖ / √(Td),

where ‖·‖ is the Frobenius norm. (There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms; we observed the results to be qualitatively similar for different measures, and hence focus on the Frobenius norm in our main results. Results for additional distortion metrics are shown in Sec. A.3.) Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change rather than distortion due to simple differences in surface form. We divide by √(Td) to make the metric comparable between sentences (with different T) and networks (with different d).

Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations z_l ∈ R^{Td}. Because each dimension is independently centered and standardized, the maximally unstructured distribution of z_l is an isotropic Td-dimensional Gaussian, and the expected distance between two such vectors is approximately √(2Td). Therefore, we can interpret a distortion value approaching √2 as comparable to having randomly redrawn the perturbed representation.
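A minimal numpy sketch of this metric, under the definition above (the permutation convention and names are illustrative):

```python
import numpy as np

def distortion(Z, Z_tilde, perm):
    """Scaled Frobenius distance between original and perturbed representations.

    Z, Z_tilde: (T, d) z-scored layer representations of the original and the
    perturbed sentence; perm[k] is the original position of perturbed word k.
    """
    T, d = Z.shape
    inv = np.empty_like(perm)
    inv[perm] = np.arange(T)  # invert the permutation P
    aligned = Z_tilde[inv]    # restore the original word order
    # Rows now match word-for-word, so any remaining difference reflects the
    # changed context rather than the changed surface order.
    return np.linalg.norm(Z - aligned) / np.sqrt(T * d)
```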
3 EXPERIMENTAL SETUP

We apply our perturbation-based analysis to sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by the length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted by the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure. As we found the trends to be largely similar across the different models, in the following section we primarily discuss results from BERT (bert-base-cased), which has 12 layers and a hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2).

4 RESULTS

We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e., untrained) models (somewhat unsurprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model’s structural knowledge acquired through training, and not simply due to the underlying architecture.

4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS

When we shuffle in units of larger n-grams, distortion appears only in the deeper BERT layers, compared to smaller n-gram shuffles. The n-gram-sized shuffles break contexts larger than n while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for ‘longer’ (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper layer representations.

Phrasal boundaries matter Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact.
We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 3a), where we control for the length of the n-grams swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the positions of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary.

4.2 PHRASE HIERARCHY MATTERS

Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a); note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two. This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words). If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken.

Controlling for co-occurrence Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute; indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT’s resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not. As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words w and v, the PMI is log [p(w, v) / (p(w) p(v))], which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics.
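For reference, here is a minimal sketch of the empirical PMI estimate used above, assuming a tokenized corpus; the names are illustrative, and zero counts would need smoothing in practice.

```python
import math
from collections import Counter

def make_pmi(sentences):
    """Return a PMI function for adjacent word pairs, estimated from a corpus
    given as a list of token lists."""
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

    def pmi(w, v):
        if bigrams[(w, v)] == 0:
            return float("-inf")  # unseen pair; smooth in practice
        p_wv = bigrams[(w, v)] / n_bi
        return math.log(p_wv / ((unigrams[w] / n_uni) * (unigrams[v] / n_uni)))

    return pmi
```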
Effects on linguistic information Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information. Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019) (while the original paper predicted dependency tree distance, we instead predict constituency tree distance). The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint x (i.e., BERT representations from different layers) to a conditional distribution of the labels, p(y|θ(x)) (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is log p(y|θ(x_i)) − log p(y|θ(x̃_i)), where x̃_i is the same datum as x_i in the perturbed representation. (POS and GP tag prediction output a label for each word, while the distance probe outputs the constituency tree distance between each pair of words; log p(y|θ(x_i)) is then simply the log probability of an individual label.) Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we call “inference impairment”.

Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (Fig. 4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (Fig. 4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information.
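A minimal numpy sketch of this inference-impairment measure for one of the classification probes (the probe parameters and names are illustrative):

```python
import numpy as np

def log_probs(W, b, X, y):
    """Per-item log p(y | theta(x)) for a multinomial probe with weights (W, b)."""
    logits = X @ W.T + b                         # (n, n_classes)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return log_softmax[np.arange(len(y)), y]

def inference_impairment(W, b, X, X_pert, y):
    """Average drop in log-likelihood when the probe reads perturbed features."""
    return float(np.mean(log_probs(W, b, X, y) - log_probs(W, b, X_pert, y)))
```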
4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION

In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention allows input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion.

We find that in many Transformer heads, the attention, much like distortion, is proportional to the syntactic distance between two words. Fig. 5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize in syntactic phenomena.

We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance.

5 DISCUSSION

In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and across phrase boundaries, as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems.

In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity to phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Figs. 2-5 for BERT; see the SM for other models).

Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, representational geometry, and model performance; (3) potential applications in network pruning studies; and (4) an extension of the current work as a hypothesis generator in neuroscience, to understand how neural populations implement tasks with an underlying compositional structure.
1. What is the focus of the paper, and what are the novel findings regarding how pre-trained Transformer models build their contextual representations? 2. What are the strengths of the paper, particularly in its approach to studying the behavior of Transformer models? 3. What are the weaknesses of the paper, such as difficulties in following some of the implications of the results and the lack of a baseline for comparing the distortions among tree distance and rank correlations? 4. Do you have any questions or suggestions for improving the experiment, such as including different model architectures or analyzing changes or distortion of attention weights? 5. Are there any concerns or limitations regarding the methodology used in the paper, such as handling subwords and aggregating them into words when probing?
Review
Summary: This paper addresses how pre-trained Transformer models build their contextual representations along the layers by measuring changes in the outputs on a series of probes. Specifically, a probe involves swapping words in a sentence and measuring the distortion in the representations. The series of probes is designed to test how the models respond to different syntax-related swaps, including n-gram size and syntactic phrase boundary and distance. The results mainly focus on BERT (base-cased), but some results from the other variants show the same trend. The authors first confirm that the observed distortions are learned behavior by showing a "flat" distortion trend for untrained models. When subjected to n-gram swaps, the results show that smaller n-grams cause larger distortion. We can also see that later layers are affected more than earlier layers (for all kinds of swaps). Regarding hierarchical phrase structure, the results show that phrasal boundaries are important, even when accounting for the PMI of the swapped words. The rank correlations between distortion and syntactic distance are higher in the deeper layers across all attention heads, but the correlations are somewhat weak (around 0.3). In addition, distortions have a larger impact on relatively more complex tasks (parse tree distance reconstruction > POS tagging). Finally, an analysis of the attention shows that the attention layers contribute to the observed distortion. The main conclusion is that BERT (and its variants) builds its contextual representation by increasingly incorporating more phrasal units along the layers.

Recommendation: Overall, I would like to recommend this paper for the conference. The paper presents a novel approach to studying the behavior of Transformer models, and the findings are quite interesting. My main concerns are that some of the implications of the results are quite hard to follow, and I am not sure how they might impact future downstream research.

Pros: I would like to commend the authors for clear writing. The paper was very well written and I enjoyed reading it. The main goal of the paper is to understand (or at least shed some light on) how BERT works. I think this is just as important as the interpretability of the models (i.e., explaining the predictions). The perturbation-based probes are more easily related to language features than other methods such as attention visualization or salience-based approaches. Although there exists work that uses the same approach, I think this paper presents novel findings on how the representations of the pre-trained models "distort" given different perturbations. The paper provides comprehensive experiments, including many carefully designed experiments to reveal the key insights.

Cons: Some of the key findings, such as "long-range contexts play a larger role in deeper layer representations", are very hard to follow. The differences in distortion across tree distances, and the rank correlations, are somewhat weak, or rather we do not have a baseline to compare against. I am not sure how strongly finding 3 (in the introduction) is supported by the results. Perhaps, to improve the experiments further, the authors could include results from different model architectures (e.g., RNN-based models) to compare and contrast. I think the experiment on the attention weights is not that helpful. Since attention layers are the main mechanism in Transformer models, it is not surprising that they play an important role.
In the end, it is still inconclusive how attention captures the contextual information. The authors could analyze the changes or distortion of the attention weights.

Questions: It is very important how the authors handle subwords. The explanation should not be a footnote, and there should be more detail on when the "average" happens (i.e., at the input to BERT or only in the distortion computation). I can see why the authors would like to aggregate subwords into words when probing. How much does averaging the embeddings change the distortion? Does it affect the findings? Wouldn't it be more helpful to include a probe regarding subwords? What is the corpus that the PMI is computed on? What are the base likelihoods (without the perturbation)? Could we infer that the average decrease in log-likelihood is explained by the base likelihoods?

Comments: A.4 is not quite related to the attention covariate, though the intention is understandable. Fig. 5e does not exist; I think you mean Fig. 5d (which is quite hard to interpret). I think a more careful analysis of the distortion vectors (Fig. 1c) should be done (i.e., why is it desirable that the models be robust to some perturbations? And what is the trend of the distortions across different types of perturbations?).
ICLR
Title Representational correlates of hierarchical phrase structure in deep language models Abstract While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformers networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.1 1 INTRODUCTION AND RELATED WORK It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on the potential models and their implementations of hierarchical tasks, such as predictive maps and graphs. Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides source of “ground truth” structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree-distances as distances in the representational subspace, and Kim et al. 
(2020) show that these distances can be used even without any labeled information to induce hierarchical structure. However, recent work has highlighted issues with correlating supervised probe performance with the amount 1Datasets, extracted features and code will be publicly available upon publication. of language structure encoded in such representations (Hewitt & Liang, 2019). Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way. More recent techniques such as replica-based mean field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020) connects representation geometry with linear classification performance, but the method is limited to categorization tasks. In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring change in the population activity in response to controlled, input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of hierarchical phrase structure encoded deep contextual representations, and present several key findings: 1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying the scale of characteristic word length increases from input to downstream layers. 2. Representational distortion caused by swapping two constituent phrases is smaller than when the control sequences of the same length are swapped, indicating that the BERT representations are sensitive to hierarchical phrase structure. 3. Representational distortion caused by swapping adjacent words across phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers. 4. Early layers pay more attention between syntactically closer adjacent pairs and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers. Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapt experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020). 2 METHODS Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting, perturbed representation against the original. 
While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have been primarily utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018). A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks. 2.1 SENTENCE PERTURBATIONS In this work we consider three different types of sentence perturbations designed to probe for different phenomena. n-gram shuffling In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example). While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect the representations selective to the context with size n or higher within the sentence, and that lower n will result in greater distortion in sentence representations. Phrase swaps The n-gram shuffling experiments probe for sensitivity of representations to local context without taking into account syntactic structure. In the phrase swap experiments, we perturb a sentence by swapping two randomly chosen spans. We explore two ways of swapping spans. In the first setting, the spans are chosen such that they are valid phrases according to its parse tree.2 In the second setting, the spans are chosen that they are invalid phrases. Importantly, in the second, control setting, we fix the length of the spans such that the lengths of spans that are chosen to be swapped are the same as in the first setting (see Fig. 3a for an example). We hypothesize that swapping invalid phrases will result in more distortion than swapping valid phrases, since invalid swaps will result in greater denigration of syntactic structure. Adjacent word swaps In the adjacent word swapping experiments, we swap two adjacent words in a sentence. We again experiment with two settings – in the first setting, the swapped words stay within the phrase boundary (i.e., the two words share the same parent), while in the second setting, the swapped words cross phrase boundaries. We also perform a more fine-grained analysis where we condition the swaps based on the “syntactic distance” between the swapped words, where syntactic distance is defined as the distance between the two words in the parse tree (see Fig. 4c). Since a phrase corresponds to a subtree of the parse tree, this distance also quantifies the number of nested phrase boundaries between two adjacent words. Here, we expect the amount of distortion to be positively correlated with the syntactic distance of the words that are swapped. 2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS For our sentence representation, we focus on the Transformer-family of models pretrained on large-scale language datasets (BERT and its variants). 
2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS
For our sentence representation, we focus on the Transformer family of models pretrained on large-scale language datasets (BERT and its variants). Given an input word embedding matrix $X \in \mathbb{R}^{T \times d}$ for a sentence of length $T$, the Transformer applies self-attention over the previous layer's representation to produce a new representation,

$$X_l = f_l([H_{l,1}, \ldots, H_{l,H}]), \qquad H_{l,i} = A_{l,i} X_{l-1} V_{l,i}, \qquad A_{l,i} = \mathrm{softmax}\!\left(\frac{(X_{l-1} Q_{l,i})(X_{l-1} K_{l,i})^{\top}}{\sqrt{d_k}}\right), \tag{1}$$

where $f_l$ is an MLP layer, $H$ is the number of heads, $d_H = d/H$ is the head embedding dimension, and $Q_{l,i}, K_{l,i}, V_{l,i} \in \mathbb{R}^{d \times d_k}$ are respectively the learned query, key, and value projection matrices at layer $l$ for head $i$. The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation $X_0$ is obtained by adding the position embeddings and the segment embeddings to the input token embeddings $X$, and passing it through a normalization layer.3

3However, the exact specification for the MLP and $X_0$ may vary across different pretrained models.

In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations $X_l = [x_{l,1}, \ldots, x_{l,T}]$, where $x_{l,t} \in \mathbb{R}^d$ is the contextualized representation for word $t$ at layer $l$.4 We analyze the trend in distortion as a function of layer depth $l$ for the different perturbations. We also explore the different attention heads $H_{l,i} \in \mathbb{R}^{T \times d_H}$ and the associated attention matrix $A_{l,i} \in \mathbb{R}^{T \times T}$ to inspect whether certain attention heads specialize at encoding syntactic information.

4BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multiple tokens. Since we wish to evaluate representations at word level, if a word is split into multiple tokens, its word representation is computed as the average of all its token representations.

2.3 DISTORTION METRIC
Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation. We thus represent each sentence as a matrix and use the distance induced by the matrix 2-norm. Specifically, let $P \in \{0,1\}^{T \times T}$ be the binary matrix representation of a permutation that perturbs the input sentence, i.e., $\tilde{X} = PX$. Further let $\tilde{X}_l$ and $X_l$ be the corresponding sentence representations for the $l$-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we also z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation, where these statistics are obtained from the full Penn Treebank corpus) to give $\tilde{Z}_l$ and $Z_l$. Our distortion metric for layer $l$ is then defined as $\|Z_l - P^{-1}\tilde{Z}_l\| / \sqrt{Td}$, where $\|\cdot\|$ is the matrix 2-norm (i.e., Frobenius norm).5 Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change, rather than distortion due to simple differences in surface form. We divide by $\sqrt{Td}$ to make the metric comparable between sentences (with different $T$) and networks (with different $d$).

5There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms. We observed the results to be qualitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. We show the results from additional distortion metrics in Sec. A.3.

Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations, $z_l \in \mathbb{R}^{Td}$. Because each dimension is independently centered and standardized, the maximally unstructured distribution of $z_l$ is an isotropic $Td$-dimensional Gaussian. The expected distance between two such vectors is approximately $\sqrt{2Td}$. Therefore, we can interpret a distortion value approaching $\sqrt{2}$ as comparable to what we would obtain if we had randomly redrawn the perturbed representation.
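A minimal numpy sketch of this distortion metric, assuming the permutation is given as an index array and the z-scoring statistics have been precomputed on the corpus (variable names are ours):

```python
import numpy as np

def distortion(X_orig, X_pert, perm, mean, std):
    """Scaled Frobenius distance between original and perturbed
    layer representations.

    X_orig, X_pert : (T, d) arrays of layer-l features for the original
        and perturbed sentences.
    perm : length-T integer array such that perturbed token i is
        original token perm[i] (the index form of the matrix P).
    mean, std : (d,) per-dimension statistics from the full corpus.
    """
    T, d = X_orig.shape
    Z_orig = (X_orig - mean) / std
    Z_pert = (X_pert - mean) / std
    # Undo the permutation so that row t of both matrices refers to the
    # same original token (the P^{-1} in the metric definition).
    Z_pert_unpermuted = np.empty_like(Z_pert)
    Z_pert_unpermuted[perm] = Z_pert
    return np.linalg.norm(Z_orig - Z_pert_unpermuted) / np.sqrt(T * d)
```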
3 EXPERIMENTAL SETUP
We apply our perturbation-based analysis on sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by the length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted by the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure. As we found the trend to be largely similar across the different models, in the following section we primarily discuss results from BERT (bert-base-cased), which has 12 layers and a hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2).

4 RESULTS
We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e., untrained) models (somewhat unsurprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model's structural knowledge acquired through training, and not simply due to the underlying architecture.

4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS
Shuffling in units of larger n-grams introduces distortion only in the deeper BERT layers, compared to smaller n-gram shuffles. The n-gram-sized shuffles break contexts larger than n while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for more layers (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper-layer representations.

Phrasal boundaries matter Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential. If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact.
We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 3a), where we control for the length of the n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the position of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary.

4.2 PHRASE HIERARCHY MATTERS
Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a).6 This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words). If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken.

6Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two.

Controlling for co-occurrence Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute; indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT's resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not. As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words $w$ and $v$, the PMI is $\log \frac{p(w,v)}{p(w)\,p(v)}$, which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for any smooth dependence on PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics.
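A sketch of the empirical PMI estimate used for this control, assuming unigram and adjacent-bigram counts from tokenized corpus sentences (the smoothing-free estimator and function names are our reading of the description above, not the authors' code):

```python
import math
from collections import Counter

def pmi_table(sentences):
    """Return a PMI function for adjacent word pairs, estimated from
    empirical unigram and bigram probabilities over the corpus."""
    unigrams, bigrams = Counter(), Counter()
    n_uni = n_bi = 0
    for toks in sentences:
        unigrams.update(toks)
        n_uni += len(toks)
        pairs = list(zip(toks, toks[1:]))
        bigrams.update(pairs)
        n_bi += len(pairs)

    def pmi(w, v):
        # Assumes the pair (w, v) was observed at least once in the corpus.
        p_wv = bigrams[(w, v)] / n_bi
        p_w, p_v = unigrams[w] / n_uni, unigrams[v] / n_uni
        return math.log(p_wv / (p_w * p_v))

    return pmi
```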
Effects on linguistic information Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently in BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information. Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019).7 The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint $x$ (i.e., BERT representations from different layers) to a conditional distribution of the labels, $p(y|\theta(x))$ (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$, where $\tilde{x}_i$ is the same datum as $x_i$ in the perturbed representation.8 Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call "inference impairment".

7While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance.

8POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then $\log p(y|\theta(x_i))$ is simply the log probability of an individual label.

Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (Fig. 4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (Fig. 4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information.

4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION
In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion. We find that in many Transformer heads, the attention, much like distortion, is proportional to the syntactic distance between two words.
Fig. 5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize in syntactic phenomena.

We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance.

5 DISCUSSION
In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and across phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems.

In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity to phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT; see SM for other models).

Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure.
A SUPPLEMENTARY MATERIAL (SM)
A.1 ADDITIONAL DETAILS ON THE DATASET
In this section, we describe additional details of the manipulations done on the datasets.

n-gram shuffling For a given sentence, we split it into sequential non-overlapping n-grams from left to right; if the length of the sentence is not a multiple of n, the remaining words form an additional m-gram, m < n. The list of n-grams is then randomly shuffled. Note that the 1-gram case is equivalent to a random shuffling of the words. In our analysis, we consider n-grams with n varying from 1 (i.e., individual words) to 7, and all the sentences have at least 10 words. We provide here an example of n-gram shuffling.
• Original: The market 's pessimism reflects the gloomy outlook in Detroit
• 1-gram: market pessimism the 's Detroit in The gloomy reflects outlook
• 2-gram: 's pessimism in Detroit The market reflects the gloomy outlook
• 3-gram: The market 's gloomy outlook in pessimism reflects the Detroit
• 4-gram: in Detroit The market 's pessimism reflects the gloomy outlook
• 5-gram: the gloomy outlook in Detroit The market 's pessimism reflects
• 6-gram: outlook in Detroit The market 's pessimism reflects the gloomy
• 7-gram: in Detroit The market 's pessimism reflects the gloomy outlook

Phrase swaps Using constituency trees from the Penn Treebank (Marcus et al., 1994), we define phrases as constituents which don't contain any others within them. (See Fig. 2c or Fig. 3a in the main text.) Phrase swaps thus consist of swapping one phrase with another and leaving the other words intact. To provide an appropriate control perturbation, we swap two disjoint n-grams which are the same size as true phrases but cross phrase boundaries.

Adjacent word swaps To better isolate the effect of broken phrase boundaries, we used adjacent word swaps. Adjacent words were chosen randomly, and one swap was performed per sentence.

A.2 ADDITIONAL MODELS
A.2.1 PRE-TRAINED MODELS
We briefly present the pre-trained models that we used for the experiments.9
• BERT bert-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters.
• RoBERTa roberta-base. 12-layer, 768-hidden, 12-heads, 125M parameters.
• ALBERT albert-base-v1. 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters.
• DistilBERT distilbert-uncased. 6-layer, 768-hidden, 12-heads, 66M parameters. The model distilled from the BERT model bert-base-uncased checkpoint.
• XLNet xlnet-base-cased. 12-layer, 768-hidden, 12-heads, 110M parameters.
Note that the hidden size is 768 across all the models. For each pre-trained model, input text is tokenized using its default tokenizer and features are extracted at token level.

9We use the implementation from https://github.com/huggingface/transformers.

A.2.2 UNTRAINED MODELS
To control for properties which come purely from the architecture, we also compute with randomly-initialized (untrained) models, where all model weights are set to random values. Note that this random initialization also affects the embedding layer. Here, we provide a side-by-side comparison of results on a trained and an untrained model from each model class (n-gram: Fig. 7; adjacent: Fig. 8). Across different model classes and tasks, none of our results were replicated in the untrained models. Thus the pattern of invariance we report cannot be explained by architecture alone.

A.3 ADDITIONAL DISTORTION METRICS
In addition to the scaled Frobenius distance, we considered other ways of measuring distortion in the representation.
Here we report results for two additional metrics: canonical correlation analysis and a shifted cosine similarity.

CCA Canonical correlation analysis (CCA) (Raghu et al., 2017) measures the similarity of two sets of variables using many samples from each. Given two sets of random variables $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_m)$, CCA finds linear weights $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ which maximise the correlation between $a \cdot x$ and $b \cdot y$. In our context, we treat the representation of the original sentence as $x$ and the representation of the perturbed sentence as $y$, and use the resulting correlation as a similarity measure. Since CCA requires many samples, we use the set of all word-level representations across all perturbed sentences. For example, to construct the samples of $x$ from $S$ perturbed sentences, we use $[X_1 | X_2 | \ldots | X_S]$, where each $X_i \in \mathbb{R}^{768 \times T_i}$. Unless specified otherwise, $S = 400$. For good estimates, CCA requires many samples (on the order of at least the number of dimensions), and we facilitate this by first reducing the dimension of the matrices using PCA. Using 400 components preserves about 90% of the variance. Thus, while CCA gives a good principled measure of representational similarity, its hunger for samples makes it unsuitable as a per-sentence metric. We also measured distortion using Projection Weighted Canonical Correlation Analysis (PWCCA), an improved version of CCA for estimating the true correlation between tensors (Morcos et al., 2018).10 As reported in Figure 9, we did not find any qualitative differences between PWCCA and CCA in our experiments.

10For both CCA and PWCCA, we use the implementation from https://github.com/google/svcca.

Cosine A similarity measure defined on individual sentences is the cosine between the sentence-level representations. By sentence-level representation, we mean the concatenation of the word-level vectors into a single vector $s \in \mathbb{R}^{NT}$ (where $N$ is the dimension of each feature vector). Treating each dimension of the vector as a sample, we can then define the following metric: $\mathrm{corr}(s_i^{\mathrm{original}}, s_i^{\mathrm{swapped}})$. This is equivalent to computing the cosine of the vectors after subtracting the (scalar) mean across dimensions, hence we refer to it as 'cosine'.

A.4 PARTIAL LINEAR REGRESSION
In order to control for uninteresting explanations of our results, we make use of a simple method for regressing out confounds. Generally, we want to assess the linear relationship between $X$ and $Y$ when accounting for the (potentially non-linear) effect of another variable $Z$. In our experiments, $X$ is always the swap-induced distortion and $Y$ is the swap type, like integer-valued tree distance or binary-valued in/out phrase. We wish to allow $E[Y|Z]$ and $E[X|Z]$ to be any smooth function of $Z$, which is achieved by the least-squares solution to the following partially linear model:

$$Y \sim \beta_x X + \beta_z \cdot f(Z) \iff (Y - E[Y|Z]) \sim \beta_x (X - E[X|Z])$$

where $f(z)$ is a vector of several (we use 10) basis functions (we used cubic splines with knots at 10 quantiles) of $Z$. Both regressions have the same optimal $\beta_x$, but the one on the left is computationally simpler (Hansen, 2000). The standard confidence intervals on $\beta_x$ apply. Intuitively, the $\beta_x$ obtained by the partially linear regression above is related to the conditional correlation of $X$ and $Y$ given $Z$: $\rho(X, Y | Z)$. Like an unconditional correlation, it will be zero if $X$ and $Y$ are conditionally independent given $Z$, but not necessarily vice versa (both $X$ and $Y$ must be Gaussian for the other direction to be true).
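A minimal sketch of this residualize-then-regress procedure, assuming 1-D numpy arrays for X, Y, and Z (the exact spline basis is our illustrative choice and may differ in detail from the authors' implementation):

```python
import numpy as np

def spline_basis(z, n_knots=10):
    """Truncated-power cubic spline basis with knots at quantiles of z."""
    knots = np.quantile(z, np.linspace(0, 1, n_knots))
    cols = [np.ones_like(z), z, z**2, z**3]
    cols += [np.clip(z - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def residualize(v, basis):
    """Remove E[v | Z], estimated by least squares on the spline basis."""
    coef, *_ = np.linalg.lstsq(basis, v, rcond=None)
    return v - basis @ coef

def partial_linear_coef(x, y, z):
    """Slope beta_x of (Y - E[Y|Z]) ~ beta_x (X - E[X|Z])."""
    B = spline_basis(z)
    xr, yr = residualize(x, B), residualize(y, B)
    return float(xr @ yr / (xr @ xr))
```

Rank-transforming x and y before calling partial_linear_coef gives the conditional rank correlation discussed next.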
To compute conditional rank correlations (which assess a monotonic relationship between $X$ and $Y$), we rank-transform $X$ and $Y$ (this changes the confidence interval calculations). We apply this method to attentions in Fig. 5. In these supplemental materials, we also report the results when $X$ is the binary in/out-phrase variable and $Z$ is PMI. The full p-values and coefficients of the uncontrolled and controlled regressions can be found in Table 1, where we observe that past layer 2, the p-value on the phrase boundary is very significant ($p < 10^{-12}$).

A.5 SUPERVISED PROBES
In this section, we describe the experiments based on the three linguistic tasks: parts of speech (POS), grandparent tags (GP), and constituency tree distance. The POS and GP classifiers were multinomial logistic regressions trained to classify each word's POS tag (e.g., 'NNP', 'VB') and the tag of its grandparent in the constituency tree, respectively. If a word has no grandparent, its label is the root token 'S'. The probes were optimized with standard stochastic gradient descent, with 50 sentences from the PTB per mini-batch. Ten epochs at a learning rate of $10^{-3}$ were sufficient to reach convergence.

The distance probe is a linear map $B$ applied to each word vector $w$ in the sentence, trained such that, for all word pairs $i, j$, $\mathrm{TreeDist}(i,j)$ matches $\|B(w_i - w_j)\|_2^2$ as closely as possible. Unlike the classifiers, there is freedom in the output dimension of $B$; we used 100, although performance and results are empirically the same for any choice greater than about 64. Our probes differ from Hewitt & Manning (2019) in two ways: (1) we use constituency trees instead of dependency trees, and (2) instead of an L1 loss function, we use the Poisson (negative) log-likelihood as the loss function. That is, if $\lambda_{i,j} = \|B(w_i - w_j)\|_2^2$ and $y_{i,j} = \mathrm{TreeDist}(i,j)$, the per-pair log-likelihood is

$$\ell_{i,j} = y_{i,j} \log \lambda_{i,j} - \lambda_{i,j} - \log y_{i,j}!$$

and the loss is $-\ell_{i,j}$. Otherwise, the probes are trained exactly as in Hewitt & Manning (2019). Specifically, we used standard SGD with 20 sentences from the PTB in each mini-batch, for 40 epochs.

Evaluation A linear model is fit to maximize $p(y|\theta(x))$, with $p$ a probability function (multinomial for the classifiers, Poisson for distance) and $x$ coming from the unperturbed Transformer representation. We evaluate the model on $\tilde{x}$, the representations generated from a perturbed sentence. We take the average of $\log p(y|\theta(x_i)) - \log p(y|\theta(\tilde{x}_i))$ over all the data $i$ in all sentences; for example, all words for the classifiers, and all pairs of words for the distance probe. Concretely, we are just measuring the difference in validation loss of the same probe on the $x$ data and the $\tilde{x}$ data. But because the loss is an appropriate probability function, we can interpret the same quantity as a difference in log-likelihood between the distribution conditioned on the regular representation and that conditioned on the perturbed representation. Distortion is similarly computed using the full sentence, providing a number for each swap in each sentence.
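A PyTorch sketch of the distance probe's Poisson loss (the probe matrix B, the clamping, and the use of torch.lgamma for the log y! term are our own illustrative choices):

```python
import torch

def distance_probe_loss(W, tree_dist, B):
    """Poisson negative log-likelihood for the constituency distance probe.

    W : (T, d) word vectors for one sentence.
    tree_dist : (T, T) float tensor of tree distances y_{i,j}.
    B : (d, 100) learnable probe matrix.
    """
    proj = W @ B                                     # (T, 100)
    diff = proj.unsqueeze(0) - proj.unsqueeze(1)     # (T, T, 100)
    lam = (diff ** 2).sum(-1).clamp_min(1e-8)        # lambda_{i,j}
    y = tree_dist
    # -log p(y | lambda) for a Poisson, including the log y! term
    nll = lam - y * torch.log(lam) + torch.lgamma(y + 1.0)
    # Exclude the diagonal (i == j), where the distance is trivially zero.
    mask = ~torch.eye(len(W), dtype=torch.bool)
    return nll[mask].mean()
```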
1. What are the main contributions and strengths of the paper regarding BERT's output and parse trees?
2. What are the weaknesses or limitations of the paper, particularly in terms of theoretical explanations and novelty?
3. Do you have any concerns or suggestions regarding the experiments, such as factoring out BERT's own randomness or testing on different domains?
4. Are there any questions or ambiguities regarding specific details in the paper, such as X~ and X in Section 2.3 or the settings in Fig. 2(d)?
5. Is there any confusion regarding the step back on certain layers in Fig. 4(b) and 4(c)?
Review
--- Overall ---
This paper provides some insights into the relation of BERT's output w.r.t. the parse tree (in terms of constituent phrases) of the input sentence. As some previous work has pointed out, the BERT model contains parsing information (e.g., Hewitt & Manning, NAACL'19). This work can be regarded as a moderate reification and improvement of that thought (but it is still limited to existing scopes and methodologies). The merits of this paper: (1) the paper reveals some interesting facts, such as that BERT is sensitive to phrasal hierarchy and that there are behavioural discrepancies between different layers; (2) the experiments are comprehensive, including both the distortion analysis and conventional probe approaches. In my point of view, the main issue of this paper is that, like many other works, there is no strong theoretical explanation for the phenomenon being investigated. In this sense, the novelty of this paper is not so strong.

--- Major comments ---
In the experiments, it is not clear whether the randomness of BERT itself has been factored out. The randomness could be caused by the dropout operation, which may lead to discrepancies in the output even for the same sentence.

In a future version, I recommend further providing two tests: (1) in the current settings, the distortions are shown at sentence level (e.g., by summing up all distortions within a sentence?). I would like to see a finer-grained test, i.e., whether the most distorted parts are produced by the swapped words; (2) the genre of the Penn Treebank dataset is limited to general texts or articles that may have been seen in the training corpus of BERT. I would recommend testing on other domains (e.g., biomedical or academic) that BERT never saw before (or structure-less data that do not present a syntactic structure).

Do X~ and X in Section 2.3 use the same mean and deviation?

It seems that Fig. 2(c) takes into account both NP and VP; what if we constrain the phrases to be only NP? Will the distortion become large, since swapping subject and object will lead to totally different meanings?

There is a lack of explanation of the "all words", "swapped", and "unswapped" settings of Fig. 2(d).

Is there any intuition for the step back at layers 11 and 12 in Fig. 4(b) and 4(c)?

--- Minor comments ---
Strictly speaking, the terminology "gram" in Fig. 2(a) should be "chunk", as "grams" are usually associated with sliding windows and thus overlap with each other.

"Fig. 3(a)" in the paragraph just above Section 4.2 should be "Fig. 2(c)".
ICLR
Title MGMA: Mesh Graph Masked Autoencoders for Self-supervised Learning on 3D Shape

Abstract We introduce a self-supervised learning model to extract face-node and global graph embeddings on meshes. We define a model with graph masking on a mesh graph composed of faces and pre-train it on self-supervised tasks. We evaluate our pre-trained model on shape classification and segmentation benchmarks. The results suggest that our model outperforms prior state-of-the-art mesh encoders: in the ModelNet40 classification task, it achieves an accuracy of 89.8%, and in the ShapeNet segmentation task, it achieves a mean Intersection-over-Union (mIoU) of 78.5. Further, we explore and explain the correlation between test and training masking ratios in Mesh Graph Masked Autoencoders (MGMA), and we find that the best performance is obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. Our work may open up new opportunities to address label scarcity and improve the learning power in geometric deep learning research.

1 INTRODUCTION
Mesh is a data format widely used in computer graphics and is used more and more frequently in computer vision tasks as additional supervision or as an inference target. It provides an accurate, efficient, and irregular representation of three-dimensional shapes. These properties make it a popular format for capturing continuous underlying surfaces. Many commonly used datasets, such as ModelNet (Wu et al., 2015), ShapeNet (Chang et al., 2015), ScanNet (Dai et al., 2017), and Pix3D (Sun et al., 2018), utilize meshes as the core or intermediate representation. A number of 3D data formats can be derived from the mesh structure, such as voxel grids, point clouds, and implicit surfaces. Researchers have customized a series of deep learning methods to analyze those regular data formats, such as 3D convolutions for voxel grids (Wu et al., 2016), symmetric functions (Qi et al., 2017a) for point clouds, and signed distance fields for implicit surfaces (Cruz et al., 2021; Park et al., 2019). The mesh representation itself provides excellent quality and computational efficiency while preserving sharp shape features. Deep learning with data formats extracted from meshes has gained more and more success in 3D shape analysis, while analyzing the original mesh format with deep learning approaches is still an open problem, so studies on developing deep learning methods for mesh data attract a lot of interest.

Traditional approaches treat a mesh as a graph with vertices as nodes (Hanocka et al., 2019b; Verma et al., 2018) and develop methods akin to CNNs, containing convolution and pooling operations, to learn shared filters that extract features from edges in meshes. However, such approaches ignore the rich manifold structure meshes can represent, such as topology and the Riemannian metric. Moreover, most current mesh-based networks validate themselves only on small or synthetic datasets. The dearth of studies demonstrating the effectiveness of meshes on large datasets limits the development of deep learning applications on meshes. The compact and efficient nature of the mesh representation should also be better utilized in ongoing geometric deep learning research, and a powerful tool to analyze 3D meshes would benefit computer graphics and computer vision researchers. There are significant challenges in developing mesh-based geometric deep learning methods.
The first challenge is passing a mesh, an irregular data format, forward through a neural network. In our work, we treat the mesh as a graph whose nodes are the faces of the mesh. The recent success of graph processing provides us with models to handle graph data; the mesh is thus simply another data format that a graph model naturally processes. Further, we design an attention mechanism alongside graph convolution on meshes to leverage its excellent feature extraction ability. Meanwhile, because of the high cost and high variability associated with manual data labeling, there is more and more unlabeled 3D data. Traditional studies do not consider unlabeled data, sacrificing a large amount of untapped information. Therefore, unsupervised learning attracts more attention and has become an important concept for extracting information from unlabeled data. Reviewing the trend and development of artificial intelligence, self-supervised training on large datasets that produces pre-trained models for downstream tasks is becoming a predominant approach for processing and extracting important features from millions or billions of samples (Chen et al., 2020b;a; Dosovitskiy et al., 2021; Brown et al., 2020). Training an autoencoder with masking (Devlin et al., 2019) on the input data has proven to be an effective method for image classification (He et al., 2022). In this paper, benefiting from the mesh data representation, we propose to apply graph masking and point cloud reconstruction to support our self-supervised learning architecture and advance 3D deep learning research.

We present a mesh-based framework, Mesh Graph Masked Autoencoder (MGMA), which is pre-trained by self-analyzing the mesh data, and we apply the pre-trained model to large-scale 3D datasets. Our network is designed to be suitable for different kinds of mesh representations, increasing flexibility and supporting a variety of available data. MGMA exhibits state-of-the-art performance on supervised tasks. Furthermore, it can perform unsupervised and semi-supervised classification and segmentation tasks. We show in Figure 1 that a mesh can be considered as a graph with faces as nodes and pre-trained to obtain a model applicable to multiple recognition tasks. To demonstrate the effectiveness of our method, we perform a variety of experiments and show state-of-the-art performance among mesh-based shape feature extractors. The key contributions of our work are as follows:
1. We introduce a mesh graph autoencoder and train it with graph masking.
2. With our novel MGMA encoder, our self-supervised learning model incorporates unlabeled data into the training stage and enhances 3D data learning power.
3. We comprehensively evaluate our model on SHREC11, ModelNet40 supervised and unsupervised classification, and ShapeNetPart semi-supervised segmentation benchmarks, and show that our model achieves state-of-the-art results w.r.t. prior mesh-based neural network models.
4. We explore and explain the correlation between test and training masking ratios in MGMA, and find that the best performance is obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. This insight may guide future self-supervised learning algorithm development.

2 RELATED WORK
Deep Learning on Meshes Treating a polygon mesh as a graph allows graph-based methods to be applied to it directly.
There are two existing categories of graph methods: spectral methods (Bruna et al., 2013; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Levie et al., 2019) and spatial methods (Micheli, 2009; Atwood & Towsley, 2016; Niepert et al., 2016; Gilmer et al., 2017; Fey et al., 2018; Masci et al., 2015; Monti et al., 2017; Huang et al., 2019). Convolution in the spectral domain amounts to non-localized filtering (Defferrard et al., 2016); Chebyshev polynomial expansion is one method to solve this non-localization problem (Defferrard et al., 2016). On the other hand, there is no easy way to induce weight sharing across different locations of the graph, due to the difficulty of matching local neighborhoods in the spatial domain (Bruna et al., 2013). Nevertheless, Atwood & Towsley (2016) proposed a spatial filtering method that assumes information is transferred from a vertex to its adjacent vertices with a specific transition probability. The power of the transition probability matrix implies that farther adjacent vertices provide little information for the central vertex. Furthermore, Geodesic CNN (Masci et al., 2015), MoNet (Monti et al., 2017), and SplineCNN (Fey et al., 2018) deal with the weight sharing problem by designing local coordinate systems for the central vertex in a local patch: they apply a set of weighting functions to aggregate the features at adjacent vertices and then calculate a weighted mean of these aggregates. However, these methods are computationally expensive and require pre-defined local coordinate systems. In addition, Neural3DMM (Bouritsas et al., 2019) introduces the spiral convolution operation by enforcing a local ordering of vertices through the spiral operator. The initial point for each spiral is the vertex with the shortest geodesic path to a fixed reference point on a template shape, and the remaining vertices of the spiral are ordered in the clockwise or counterclockwise direction inductively. However, finding a reference point for an arbitrary shape is challenging, and the initial point is not unique once two adjacent vertices have the same shortest path to the reference point. FeaStNet (Verma et al., 2018) proposes a graph neural network in which the neighborhood of each vertex used for the convolution operation is not preset but computed dynamically. Tangent convolution is introduced in Tatarchenko et al. (2018), where a small neighborhood around each vertex is used to reconstruct the local function upon which convolution is applied. Some generative models have also been tried on meshes: Litany et al. (2018) perform shape completion via a graph autoencoder. MeshCNN (Hanocka et al., 2019b) utilizes the particular properties of edges in a triangle mesh to extract edge features. Yang et al. (2021) apply continuous convolution on geodesic regions of a mesh.

Self-Supervised Learning Self-supervised learning defines tasks from the data itself, and these human-defined tasks are used to pre-train the model. It is used in computer vision with proxy tasks such as predicting order in time (Wei et al., 2018), finding missing pixels (Pathak et al., 2016), locating patches (Doersch et al., 2015), predicting image orientations (Gidaris et al., 2018), detecting human-made artifacts (Jenni & Favaro, 2018), clustering images (Caron et al., 2018), predicting camera locations (Agrawal et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), colorizing videos (Vondrick et al., 2018), and tracking image patches (Wang & Gupta, 2015).
These works demonstrate promising results in transferring visual features from proxy tasks to other tasks. Thus, defining proxy tasks that are sufficiently related to the downstream task is quite important (Jenni & Favaro, 2018). On the other hand, supervision such as density estimation or clustering is not domain-specific (Caron et al., 2018). Deep clustering models (Aljalbout et al., 2018; Min et al., 2018; Yang et al., 2017; Hershey et al., 2016; Xie et al., 2016; Ghasedi Dizaji et al., 2017; Shaham et al., 2018; Yang et al., 2016; Hsu & Lin, 2018) jointly train with a network-specific loss. Many works explore self-supervised learning on point clouds. They use multi-task learning (Hassani & Haley, 2019), reconstruction (Achlioptas et al., 2018; Yang et al., 2018), contrastive learning (Zhang & Zhu, 2019), point cloud restoration (Shi et al., 2020), point cloud autoregression (Sun et al., 2019b), orientation prediction (Poursaeed et al., 2020; Han et al., 2019), and approximate convex decomposition (Gadelha et al., 2020) to pre-train models, achieving state-of-the-art results on point cloud classification and segmentation tasks. Recently, masked autoencoders have been used for self-supervised learning on image classification tasks (He et al., 2022).

Transformer Applications The Transformer, proposed by Vaswani et al. (2017), has been widely used in natural language processing (NLP) and subsequently in computer vision. In NLP, large Transformer-based models are often pre-trained on large datasets and then fine-tuned for downstream tasks, as in BERT (Devlin et al., 2019) and GPT (Radford et al., 2018; 2019; Brown et al., 2020). In computer vision, applying Transformers to image processing has progressed from local to global attention and from low to high resolution. Image Transformer (Parmar et al., 2018) applies self-attention to local neighborhoods, and this local attention can replace convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). Sparse Transformers (Child et al., 2019) use scalable approximations to global self-attention for images. Another approach applies attention to blocks of varying sizes (Weissenborn et al., 2019), in particular along individual axes (Ho et al., 2019; Wang et al., 2020). Some models (Cordonnier et al., 2020; Dosovitskiy et al., 2021; Bello et al., 2019; Wu et al., 2020; Chen et al., 2020a) extract patches of size 2 × 2 or 7 × 7 from the input image and then apply CNN and Transformer layers sequentially. These works enable Transformers to achieve state-of-the-art results on small and medium resolution images. Beyond classification, Transformers are also used in video processing (Wang et al., 2018; Sun et al., 2019a), object detection (Hu et al., 2018; Carion et al., 2020), unsupervised object discovery (Locatello et al., 2020), and unified text-vision tasks (Chen et al., 2020c; Lu et al., 2019; Li et al., 2019). Recently, Liang et al. (2022) used a Transformer as an autoencoder network for mesh reconstruction and self-supervised learning.

3 METHOD
MGMA is a masked autoencoder that interprets the mesh as a graph in which each node is a face of the mesh. The features on the face nodes are first randomly masked and then passed through multiple face graph attention layers. Max-pooling is applied to obtain the global graph embedding, which is passed to a point cloud decoder for the reconstruction pre-training task.
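To make the graph construction concrete, the following is a minimal sketch of building a face-node graph from a triangle mesh, where two faces are connected when they share an edge (a 1-ring face neighborhood; the function and variable names are our own illustration):

```python
from collections import defaultdict

def face_adjacency(faces):
    """Build a face-node adjacency structure for a triangle mesh.

    faces : iterable of (a, b, c) vertex-index triples, one per face.
    Returns a dict mapping each face index to the set of faces that
    share an edge with it (its 1-ring face neighbors).
    """
    edge_to_faces = defaultdict(list)
    for f, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            # Use a sorted vertex pair so both orientations map to one edge.
            edge_to_faces[(min(u, v), max(u, v))].append(f)
    adj = defaultdict(set)
    for fs in edge_to_faces.values():
        for f in fs:
            adj[f].update(g for g in fs if g != f)
    return adj
```

Higher n-ring neighborhoods, as used in the deeper attention blocks described in Section A.2, can be obtained by repeatedly expanding this 1-ring adjacency.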
Masking on the face graph is achieved by randomly selecting nodes on the graph according to the masking ratio. After a node is selected as a masked node, a learnable masking embedding takes the place of its original embedding, a design adopted from He et al. (2022) and Devlin et al. (2019).

Face graph attention layer The face graph attention layer is the core of our network, as shown in Figure 2. The layer takes a graph and the features on each node of the graph as input. For each node in the graph, the layer first gathers its neighbors according to an adjacency matrix, which can be an n-ring neighbor adjacency matrix in our architecture. We denote by $r$ the feature of the root node and by $n$ the gathered features of the root node and its neighbors. Three linear layers $f_V$, $f_Q$, and $f_K$ take $n$, $r$, and $n$ as input to compute $V$, $Q$, and $K$. In our work, we keep the output dimension of key, value, and query fixed to 64. After obtaining $V$, $Q$, and $K$, we use Equation 1 to get the embedding of each face node:

$$\mathrm{FaceNodeEmbedding} = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V \tag{1}$$

In Equation 1, $d_k$ stands for the dimension of $K$. Details of composing the layers into an encoder are given in Section A.2.

Reconstruction loss For the reconstruction loss, we create a reconstruction decoder. The input to this decoder is the graph embedding of the mesh, and the expected output is a point cloud sampled from the mesh. Following Achlioptas et al. (2018), we use a similar network architecture $f_D$ for decoding a point cloud; we therefore choose the point cloud as the target for the decoder to generate. The loss function is the Chamfer Distance (CD), shown in Equation 2:

$$L_{CD} = \frac{1}{N}\sum_{n=1}^{N} \min_{\hat{p} \in \hat{s}} \|p_n - \hat{p}\|_2^2 + \frac{1}{M}\sum_{m=1}^{M} \min_{p \in s} \|\hat{p}_m - p\|_2^2 \tag{2}$$

where $s$ and $\hat{s}$ are the ground-truth and predicted point sets, $N$ and $M$ denote the numbers of points in the ground-truth and predicted point sets, respectively, and $p_n$ and $\hat{p}_m$ are points in $s$ and $\hat{s}$.
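A minimal PyTorch sketch of the face graph attention in Equation 1 and the Chamfer loss in Equation 2 (the module interface, tensor shapes, and batching of gathered neighbors are our illustrative choices, not the released implementation):

```python
import torch
import torch.nn as nn

class FaceGraphAttention(nn.Module):
    """Per-node attention over a face and its gathered neighbors (Eq. 1)."""
    def __init__(self, in_dim, dk=64):
        super().__init__()
        self.f_q = nn.Linear(in_dim, dk)   # query from the root node r
        self.f_k = nn.Linear(in_dim, dk)   # keys from root + neighbors n
        self.f_v = nn.Linear(in_dim, dk)   # values from root + neighbors n
        self.dk = dk

    def forward(self, r, n):
        # r: (F, in_dim) root-node features; n: (F, K, in_dim) gathered
        # features of each root node and its K-1 neighbors.
        q = self.f_q(r).unsqueeze(1)                  # (F, 1, dk)
        k, v = self.f_k(n), self.f_v(n)               # (F, K, dk)
        att = torch.softmax(q @ k.transpose(1, 2) / self.dk ** 0.5, dim=-1)
        return (att @ v).squeeze(1)                   # (F, dk) per-face embedding

def chamfer_distance(s, s_hat):
    # s: (N, 3) ground-truth points; s_hat: (M, 3) predicted points (Eq. 2).
    d = torch.cdist(s, s_hat) ** 2                    # (N, M) squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```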
4 EXPERIMENTS AND RESULTS
In this section, we introduce experiments to validate the effectiveness of our neural network. First, we demonstrate the effectiveness of the encoder part of our network on two supervised classification tasks. Then, we verify our work by pre-training the network for an unsupervised classification task. Finally, we conduct a semi-supervised experiment for part segmentation on 3D shapes.

4.1 SUPERVISED CLASSIFICATION
We first verify that our network's encoder can outperform other networks. Using the designed mesh graph attention encoder, we achieve state-of-the-art performance on SHREC11 and ModelNet40 when meshes are the input data modality. SHREC11 is a dataset introduced by Lian et al. (2011) that contains 30 classes, with 20 3D objects in each class. We follow the setup in which 16 or 10 objects per class are used for training (split 16 and split 10), making split 10 a harder classification task than split 16. We use the meshes processed by Hanocka et al. (2019a), where each mesh contains 500 faces. Our results are reported in Table 1. We train our encoder for 300 epochs with the Adam optimizer (Kingma & Ba, 2015), with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, a learning rate of 0.0002, and weight decay of 0.0. We compare our mesh graph attention encoder against eight methods that also take meshes as input; our encoder reaches 100% accuracy on both setups. Because SHREC11 is a relatively small dataset for supervised classification and some methods have already reached 100% accuracy, we further validate our mesh graph attention encoder on ModelNet40 (Wu et al., 2015). ModelNet40 contains 40 classes, with 9840 meshes for training and 2468 meshes for testing. Because the meshes in ModelNet40 have different numbers of faces, to fit the meshes onto the GPU and improve GPU utilization, we follow the method in Huang et al. (2018) to first make each mesh watertight and then simplify it to 2048 faces. We train our encoder for 300 epochs with the same optimizer settings as for SHREC11. The learning rate is decayed by a multiplicative factor of 0.1 at epochs 30 and 60. Our method achieves 92.95% test accuracy on ModelNet40. The results are reported in Table 2, where we compare our encoder with nine other methods; our results are on par with state-of-the-art classification on ModelNet40. These experiments validate that our encoder achieves state-of-the-art performance on 3D shape classification tasks. The next experiments validate the model's performance on unsupervised tasks.

4.2 UNSUPERVISED CLASSIFICATION
We pre-train the model on all the provided training data in ModelNet40, keep the pre-trained model's weights, and use them for the classification task without any fine-tuning. After obtaining the graph embedding, we use a linear SVM as the unsupervised classification tool on ModelNet40. The process of our unsupervised learning is as follows. We first pre-train the masked autoencoder on the training data with the same hyper-parameter settings as in Section 4.1. After pre-training, we pick the model with the lowest Chamfer Distance on the provided test data. Because the data used for pre-training does not contain label information, we do not consider computing the test data's Chamfer Distance as information leaking. We use the best model to extract global embeddings, vectors of dimension 1024, from the training and test data. Once we obtain the global embeddings, we train linear SVMs on the ModelNet40 training data's global embeddings. We use 5-fold cross-validation to compute the average validation accuracy on splits of the training data, and we perform a logarithmic search on the regularization parameter C of the SVM from 1 to 1000 with 10 steps. We then pick the SVM model with the best average validation accuracy to compute the test accuracy. In Section A.1, we visualize the graph embeddings using t-SNE (Van der Maaten & Hinton, 2008). As shown in Table 3, our method performs best among mesh-based neural networks on unsupervised learning on ModelNet40. There are two reasons our method outperforms other mesh-based methods. First, our encoder utilizes an attention mechanism to pick out important nodes while suppressing noisy information by assigning lower weights to noisy neighbors. Second, the masking mechanism provides additional data augmentation and forces the model to focus less on the details of the shapes and more on the general information in the graph. The three methods (Han et al., 2019; Chen et al., 2021; Poursaeed et al., 2020) that outperform ours are point cloud-based methods. A possible reason is that data augmentation, such as rotation (Han et al., 2019), is not considered in the design of our framework; adding such design components will be explored in future work.
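A sketch of this linear-SVM evaluation protocol using scikit-learn (the specific estimator and grid utilities are our illustrative choices; the grid matches the 10-step logarithmic search over C from 1 to 1000 described above):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def svm_probe(train_emb, train_y, test_emb, test_y):
    """5-fold CV over a logarithmic grid on C, then report test accuracy."""
    best_c, best_cv = None, -np.inf
    for c in np.logspace(0, 3, 10):            # C from 1 to 1000, 10 steps
        cv = cross_val_score(LinearSVC(C=c), train_emb, train_y, cv=5).mean()
        if cv > best_cv:
            best_c, best_cv = c, cv
    clf = LinearSVC(C=best_c).fit(train_emb, train_y)
    return clf.score(test_emb, test_y)
```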
In Figure 3, we show the reconstruction results on ModelNet40 test data. To some extent, the autoencoder ignores the input mesh's detailed features while preserving its overall structure. Detailed features, such as an airplane's engines, a chair's arms, and the leg style of a table, are ignored during reconstruction. This means the encoder retains the information needed to decode an average shape of the class but discards the details. For reconstruction this is undesirable, but for classification this process acts like cleaning redundant information from the input shape. More reconstruction visualization results are shown in Figure 10.

4.3 PART SEGMENTATION
Part segmentation is a fine-grained point-wise classification task that aims to predict the part category label of each point in a given shape; in our work, we predict the part category label of each face in a mesh. We evaluate the learned features on the ShapeNetPart dataset (Yi et al., 2016), which contains 16,881 objects from 16 categories (12,149 for training, 2,874 for testing, and 1,858 for validation). Each object consists of 2 to 6 parts, with a total of 50 distinct parts among all categories. We use the mean Intersection-over-Union (mIoU) as the measurement, calculated by averaging the IoUs of the different parts occurring in one shape. For the segmentation results, we follow the protocol from Hassani & Haley (2019); the results are shown in Table 4. The original dataset provides only point clouds and their point-wise labels. To obtain ground truth for meshes, we first align each mesh with the point cloud by sampling points on the mesh and aligning the centers of the sampled point cloud with the provided point cloud. After alignment, for each face of the mesh, we sample points uniformly on the face and compute the nearest point in the ground-truth point cloud; the face's label is then determined by a majority vote over the sampled points' labels (a short sketch of this labeling step is given at the end of this subsection). After this processing, we follow Zhao et al. (2019) and randomly use 5% and 1% of the ShapeNetPart training data to evaluate the part segmentation task in a semi-supervised setting. We use the same pre-trained model to extract face features of the sampled training data, along with the validation and test samples, without any fine-tuning. Following Hassani & Haley (2019), we then train a 4-layer MLP [2048, 4096, 1024, 50] on the sampled training sets and evaluate it on all test data. The input feature to the MLP is the concatenation of the face node embedding and the global graph embedding, giving an input dimension of 2048. We train the model with the Adam optimizer with a fixed learning rate of 0.002. This training takes 30 epochs and converges very quickly; because the features are easy for the MLP to distinguish, the entire process takes about 15 minutes, including testing after each epoch's training. During testing, we project the labels computed on the mesh's faces back to the provided point clouds according to the distance between points and faces. The results in Table 4 suggest that our method performs on par with the point cloud baselines on the ShapeNetPart semi-supervised segmentation task. In Figure 3, we show a visualization of our semi-supervised segmentation; more segmentation visualization results are shown in Figure 9.
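A sketch of the majority-vote face-labeling step described above, assuming per-face sampled points and a labeled ground-truth point cloud (scipy's KD-tree is our choice for the nearest-point lookup):

```python
import numpy as np
from scipy.spatial import cKDTree

def label_faces(face_samples, gt_points, gt_labels):
    """Assign each face the majority label of its sampled points.

    face_samples : list of (k_i, 3) arrays, points sampled on each face.
    gt_points : (P, 3) ground-truth point cloud; gt_labels : (P,) int labels.
    """
    tree = cKDTree(gt_points)
    face_labels = []
    for pts in face_samples:
        _, idx = tree.query(pts)             # nearest ground-truth point
        votes = np.bincount(gt_labels[idx])
        face_labels.append(votes.argmax())   # majority vote over samples
    return np.asarray(face_labels)
```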
5 DISCUSSION
5.1 IS A TEST MASKING RATIO OF 0 THE BEST CHOICE FOR EVALUATING MGMA?
In He et al. (2022), the masking ratio at testing is fixed at 0, under the assumption that providing as much information as possible to the trained masked autoencoder is the best choice. We explore the effect of test masking ratios on the unsupervised classification task: in our experiments, the test masking ratio is not fixed but varied when evaluating the pre-trained model. In Figure 4(a), we fix the test masking ratio to 0.0 and vary the training masking ratio from 0.1 to 0.9, which demonstrates that varying the training masking ratio changes performance on the unsupervised learning task. In Figure 4(b) and (c), we vary the masking ratio not only during training but also during testing and validation. It turns out that the maximum test accuracy is obtained when the training masking ratio is 0.6 and the test masking ratio is 0.1 or 0.3. This result suggests that choosing 0 as the test masking ratio is not the only choice for evaluating a model trained with masking. For convenience, we denote by a 2D coordinate (a, b) the situation where the training masking ratio is a and the test masking ratio is b. In Figure 5, we investigate why the best test accuracy occurs at (0.6, 0.1) and (0.6, 0.3). We compute the difference between validation and test accuracy, which is usually taken as a symptom of overfitting or underfitting. It turns out that in most cases our model overfits the task, but the maximum-test-accuracy points coincide with points of less overfitting. Another point exhibiting this property is (0.7, 0.7) in the difference map, but at that point more information about the mesh is lost. In total, three regions of the heat map in Figure 5 exhibit the less-overfitting property; the last is at (0.2, 0.6), but there the test masking ratio is so high that the model, while not overfitting, also extracts less useful information. Even though 0.75 is the best masking choice in MaskMAE (He et al., 2022), our 3D mesh dataset differs from image datasets: in 3D, the best masking ratio is lower, which suggests that a face in a mesh plays a more important role in classification than a single pixel does in an image. Also, the point (0.5, 0.5) yields the most overfitting. There are two possible reasons. First, training with a masking ratio of 0.5 gives the model the most freedom, making validation easier and testing harder. Second, using the same masking ratio at training and test time could make the model rely on exploiting information about the masking itself. An opposite example is the point (0.6, 0.1): the model is trained at a masking ratio of 0.6 but tested at 0.1. Here the masking still helps purge redundant information unrelated to classification, while the mismatch between training and testing forces the model to discard masking-specific information and find common details. For more accuracy curves under different training and test masking ratios, see Section A.3.
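A sketch of the train/test masking-ratio sweep behind Figures 4 and 5 (the encoder and probe interfaces here are hypothetical placeholders for whichever pre-trained models and SVM evaluation are available, not the authors' actual pipeline):

```python
import numpy as np

def ratio_sweep(pretrained, embed, evaluate_svm,
                train_ratios=np.arange(0.1, 1.0, 0.1),
                test_ratios=np.arange(0.0, 1.0, 0.1)):
    """Accuracy heat map over (training masking ratio, test masking ratio).

    pretrained : dict mapping training ratio -> pre-trained encoder.
    embed : fn(encoder, test_ratio) -> (train_emb, y_tr, test_emb, y_te).
    evaluate_svm : fn(train_emb, y_tr, test_emb, y_te) -> test accuracy.
    """
    acc = np.zeros((len(train_ratios), len(test_ratios)))
    for i, tr in enumerate(train_ratios):
        for j, te in enumerate(test_ratios):
            acc[i, j] = evaluate_svm(*embed(pretrained[tr], te))
    return acc
```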
6 CONCLUSION

We propose a self-supervised mesh encoding approach, MGMA, that learns face-level and shape-level features on meshes by pre-training a masked mesh graph autoencoder, built on a multi-scale graph-based encoder, with a point cloud reconstruction loss. We thoroughly evaluated our model on mesh classification and segmentation benchmarks. The results suggest that the learned face-level and shape-level features outperform prior state-of-the-art models in self-supervised representation learning. For instance, in the ModelNet40 shape classification task, our model achieved a state-of-the-art (among self-supervised mesh encoders) accuracy of 89.8%. We also find that different combinations of test and training masking ratios in MGMA provide different information to downstream tasks. In the ShapeNetPart segmentation task, MGMA achieved a mIoU of 78.5, which outperforms the state-of-the-art mesh encoders. We hope our work provides a new direction for deep learning analysis and self-supervised learning on mesh data.

A APPENDIX

A.1 T-SNE VISUALIZATION

We visualize graph embeddings obtained by fixing the test masking ratio to 0 using t-SNE (Van der Maaten & Hinton, 2008). We observe that plant and chair are two classes that cluster close together but remain easy to distinguish. The reason could be that both are tall cuboids, while chairs have a more regular appearance than plants. The piano and range hood are a similar case: they have a similar outline but differ in detail; usually, the mesh of a range hood has a hole inside its body. Another source of confusion for the model is the nightstand and dresser, two visually similar objects. The t-SNE plots at ratios 0.3 and 0.6 are quite similar in clustering, while the plot at a ratio of 0.9 begins to confuse objects such as desks and pianos (see the bed category move from the corner to the center).

A.2 NETWORK ARCHITECTURE

The overall architecture of our network is shown in Figure 7. It has a heavier encoder than decoder, following the design logic of MAE (He et al., 2022): after pre-training, we no longer need the decoder. We do not use batch normalization in the decoder, following (Achlioptas et al., 2018); the decoder is not the main focus of our paper. The input features to the graph are computed using the descriptors defined in (Singh et al., 2021b), which are 320-dimensional vectors for face nodes. For the first two mesh graph attention blocks, we use 1-ring neighbors for the neighbor lookup; for the third mesh graph attention block, we use 2-ring neighbors. As a concrete reference, one face graph attention layer can be sketched as follows (a minimal PyTorch version of Equation 1; the class name and the padded neighbor-index layout are our assumptions, while the 64-dimensional key/query/value size follows Section 3):
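```python
import torch
import torch.nn as nn

class FaceGraphAttention(nn.Module):
    """One face graph attention layer (a sketch of Eq. 1 and Figure 2).

    For each face node, the query comes from the root node's feature and
    the keys/values come from the root plus its n-ring neighbors.
    Padding neighbor_idx with the root index is our assumption.
    """
    def __init__(self, in_dim, att_dim=64):
        super().__init__()
        self.f_q = nn.Linear(in_dim, att_dim)  # query from root feature r
        self.f_k = nn.Linear(in_dim, att_dim)  # keys from neighborhood n
        self.f_v = nn.Linear(in_dim, att_dim)  # values from neighborhood n

    def forward(self, feats, neighbor_idx):
        # feats: (F, D) per-face features; neighbor_idx: (F, K) indices of
        # the root and its n-ring neighbors (padded with the root index).
        n = feats[neighbor_idx]                 # (F, K, D) gathered neighborhood
        q = self.f_q(feats).unsqueeze(1)        # (F, 1, A)
        k, v = self.f_k(n), self.f_v(n)         # (F, K, A)
        att = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return (att @ v).squeeze(1)             # (F, A) new face node embedding
```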
A.3 MASKING RATIO ANALYSIS

In Figure 8, we plot the accuracy curves under different training and test masking ratios. Three patterns of accuracy curve appear when the test masking ratio is fixed. The first happens at test masking ratios of 0.0, 0.1, and 0.2: the accuracy goes up and then down. The second appears at test masking ratios of 0.3, 0.4, 0.5, and 0.6: the accuracy goes up, down, and up again. The last one happens at test masking ratios of 0.7, 0.8, and 0.9: the accuracy only goes up. The explanation is straightforward for the first and third patterns. For the first pattern, the test masking ratio is low; as the training masking ratio increases, the models focus on extracting information beyond the masking itself, which explains the increasing curve at the beginning, and when the training ratio is too high there is not enough information left, so the curve begins to drop. The third pattern is caused by test masking ratios so high that models trained with low masking ratios cannot efficiently capture the information of the test meshes; only models trained under high masking ratios can. The second pattern arises where the first and third patterns merge. Only one pattern of accuracy curve is found when the training masking ratio is fixed.

[Figure 8: accuracy curves for test masking ratios 0.0-0.9, with one curve per training masking ratio 0.1-0.9.]

A.4 SEGMENTATION VISUALIZATION RESULTS

More segmentation results are shown in Figure 9.

A.5 RECONSTRUCTION VISUALIZATION RESULTS

More reconstruction results are shown in Figure 10.
1. What is the focus of the paper regarding mesh encoding?
2. What are the strengths and weaknesses of the proposed attention-based encoder architecture?
3. Do you have any concerns or questions about the contribution of the paper, particularly in comparison to other works like Point Transformer?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a novel attention-based encoder architecture for meshes: each node corresponds to a face on the mesh, the attention module aggregates information from the nodes, and the face node encodings are pooled into a global feature. The paper also proposes a self-supervised task to train the encoder by randomly masking out some of the node features and then trying to reconstruct the shape of the mesh.

Strengths And Weaknesses
Strengths:
- The idea and implementation are very simple, yet effective.
- Competitive with SOTA.

Weaknesses:
- The contribution is incremental in my opinion: the encoder is a twist on point-cloud-based transformers (instead of using the vertices, the paper uses midpoints of faces), and the self-supervised task is a natural extension of [He et al. 2022] to mesh data.
- Sometimes there is a lack of clarity; more precise writing would be helpful for the reader.
- It is weird that point-cloud-only methods perform better than mesh-based ones (Tab. 3), especially as a point cloud can easily be extracted from a mesh, as if using the neighbourhood info hurts. So why not just use, e.g., Point Transformer [https://arxiv.org/abs/2012.09164]? It has the highest baseline for supervised classification, and the self-supervised representation learning task would work exactly the same way. Why is the mesh advantageous (for the tasks it is applied to)?

Clarity, Quality, Novelty And Reproducibility
Clarity: A lot of details are missing, and the reader can only guess them from the context. What are the face node features? Were they 3D coordinates of face midpoints? In Eq. 2 again, what are the input features to the encoder? Face midpoints? In 4.1 it is not explicitly shown with a loss equation that the output of the encoder is used to predict the classes and that cross-entropy is used (?), while the decoder and the self-supervised loss do not play any part here.
Quality: The results are competitive with SOTA.
Novelty: Incremental, but novel. I have not seen this encoder before, nor the extension of [He et al. 2022] to mesh data.
Reproducibility: From the paper, a competent practitioner could reimplement it.
1. What is the main contribution of the paper on mesh autoencoders?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to graph neural networks?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any questions regarding the paper's adaptation of masked autoencoders to meshes?
5. Are there any concerns about the limited novelty and marginal improvements in performance?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents masked autoencoders for meshes. The idea is motivated by masked autoencoders in the case of images. The work treats meshes as graphs with mesh faces as nodes and edges that capture mesh topology/connectivity. Masking is done by randomly removing nodes/faces along with associated edges, similar to the idea of masked autoencoders. The actual method is a pretty direct application of masked AE to the graph network setting by adopting a face attention mechanism. Comparison is provided on SHREC11 and ModelNet40 (both datasets are close to being saturated), with very marginal improvement, if any. The paper has limited novelty and results in very marginal improvements.

Strengths And Weaknesses
Strengths:
- Uses masked AE in the context of meshes.
- A simple adaptation and application to graph neural networks.

Weaknesses:
- Very limited novelty.
- Performance improvements are marginal at best.

Scores low w.r.t. novelty and performance. Would have liked to see tests on more challenging datasets or real/scan data.

Clarity, Quality, Novelty And Reproducibility
Section 3 is simple and concise. It is appropriate given the (limited) contribution of the work. Novelty is low: a pretty direct adaptation of masked AE for images. Should be reproducible.
ICLR
Title MGMA: Mesh Graph Masked Autoencoders for Self-supervised Learning on 3D Shape Abstract We introduce a self-supervised learning model to extract face nodes and global graph embeddings on meshes. We define a model with graph masking on a mesh graph composed of faces to pre-train on self-supervised tasks. We evaluate our pre-trained model on shape classification and segmentation benchmarks. The results suggest that our model outperforms prior state-of-the-art mesh encoders: In ModelNet40 classification task, it achieves an accuracy of 89.8%, and in ShapeNet segmentation task, it performs a mean Intersection-over-Union (mIoU) of 78.5. Further, we explore and explain the correlation between test and training masking ratios on Mesh Graph Masked Autoencoders (MGMA). And we find best performances are obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. Our work may open up new opportunities to address label scarcity and improve the learning power in geometric deep learning research. 1 INTRODUCTION Mesh is a data format widely used in computer graphics and is used more and more frequently in computer vision tasks as additional supervision or inference targets. It provides an accurate, efficient, and irregular representation of three-dimensional shapes. These properties make it a popular format for capturing continuous underlying surfaces. Many commonly used datasets, such as ModelNet (Wu et al., 2015), ShapeNet (Chang et al., 2015), ScanNet (Dai et al., 2017), and Pix3D (Sun et al., 2018), utilize meshes as the core or intermediate agent. A number of 3D data formats can be derived from the mesh structure, such as voxel grids, point clouds, and implicit surfaces. Researchers customize a series of methods to analyze those regular data formats using deep learning, like using 3D convolution to parse 3D voxel grids (Wu et al., 2016), using symmetric functions (Qi et al., 2017a) to process point clouds, and using signed distance fields to represent the implicit surfaces (Cruz et al., 2021; Park et al., 2019). Mesh representation itself could provide excellent quality and computational efficiency while preserving sharp shape features. Deep learning with data formats extracted from meshes have gained more and more success in 3D shape analysis, while analyzing their original data format with deep learning approaches is still an open problem. So studies on developing deep learning methods on mesh data attract lots of interest. Traditional approaches treat a mesh as a graph with vertices as nodes (Hanocka et al., 2019b; Verma et al., 2018) and develop methods akin to CNN, which contains convolution and pooling operations, to learn shared filters to extract features from edges in meshes. However, such approaches ignore the rich manifold structure meshes can represent, such as topology and Riemannian metric. On the other hand, most of the current mesh-based networks validate themselves on small or synthetic datasets. The dearth of studies that demonstrate the effectiveness of mesh on large datasets limits the development of deep learning applications on meshes. Moreover, the compact and efficient essence of mesh data representation should also be well utilized in ongoing geometric deep learning research. A powerful tool to analyze 3D meshes would benefit computer graphics and computer vision researchers. There are significant challenges in developing mesh-based geometric deep learning methods. 
The first challenge is passing mesh, an irregular data format, forward in a neural network. In our work, we take the mesh as a graph composed of multiple faces as nodes of the graph. The emergence of success in graph processing provides us with a model to handle graph data. Thus, the mesh is another data format that a graph model naturally processes. Further, we design an attention mechanism along graph convolution on meshes to leverage its excellent feature extraction ability. Meanwhile, because of the high cost and high variability associated with manual data labeling, there are more and more unlabeled 3D data. Traditional studies do not consider unlabeled data, which induces a huge sacrifice of untapped information. Therefore, unsupervised learning attracts more attention and has become an important concept for extracting information from unlabeled data. When we review the trend and development of artificial intelligence, self-supervised training on large datasets and producing pre-trained models for downstream tasks is becoming a predominant power in processing and extracting important features from billions and millions of data (Chen et al., 2020b;a; Dosovitskiy et al., 2021; Brown et al., 2020). Training an au- toencoder with masking (Devlin et al., 2019) on the input data during training has been proved to be an effective method for image classification (He et al., 2022). In this paper, benefiting from using the mesh data representation, we propose to apply graph masking and point cloud reconstruction to support our self-supervised learning architecture and advance 3D deep learning research. In our paper, we present a mesh-based framework, Mesh Graph Masked Autoencoder (MGMA), which is pre-trained on self-analyzing the mesh data, and apply the pre-trained model to large-scale 3D imaging datasets. Our network is designed to be suitable for different kinds of mesh representations to increase flexibility and support a variety of available data. MGMA exhibits state-of-the-art performance on supervised tasks. Furthermore, it could perform unsupervised and semi-supervised classification and segmentation tasks. We show in Figure 1 that a mesh could be considered as a graph with faces as nodes and pre-trained to have a model which could be applied to multiple tasks in recognition tasks. To demonstrate the effectiveness of our method, we perform a variety of experiments and show state-of-the-art performance among the mesh-based shape feature extractors. The key contributions of our work are as follows: 1. We introduce a mesh graph autoencoder and train it with graph masking. 2. With our novel MGMA encoder, our self-supervised learning model incorporates unlabeled data into the training stage and enhances the 3D data learning power. 3. We comprehensively evaluate our model under various learning benchmarks on SHREC11, ModelNet40 supervised and unsupervised classification, and ShapeNetPart semi-supervised segmentation tasks and show that our model achieves state-of-the-art results w.r.t prior mesh-based neural network models. 4. We explore and explain the correlation between test and training masking ratios on MGMA. And we find best performances are obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. This gained insight may guide future self-supervised learning algorithm development. 2 RELATE WORK Deep Learning on Meshes Treating a polygon mesh as a graph would accordingly apply graphbased methods on it. 
There are two existing categories for graph methods: spectral methods (Bruna et al., 2013; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Levie et al., 2019) and spatial methods (Micheli, 2009; Atwood & Towsley, 2016; Niepert et al., 2016; Gilmer et al., 2017; Fey et al., 2018; Masci et al., 2015; Monti et al., 2017; Huang et al., 2019). Moreover, the convolution in the spectral domain is non-localized filtering (Defferrard et al., 2016). Chebyshev polynomial expansion is a method to solve the non-localization problem (Defferrard et al., 2016). On the other hand, there is no easy way to induce the weight sharing across different locations of the graph due to the difficulty of matching local neighborhoods in the spatial domain (Bruna et al., 2013). Nevertheless, Atwood & Towsley (2016) proposed a spatial filtering method that assumes information is transferred from a vertex to its adjacent vertex with a specific transition probability. The power of the transition probability matrix implies that farther adjacent vertices provide little information for the central vertex. Furthermore, Geodesic CNN (Masci et al., 2015), MoNet (Monti et al., 2017), and SplineCNN (Fey et al., 2018) deal with the weight sharing problem by designing local coordinate systems for the central vertex in a local patch. They apply a set of weighting functions to aggregate the characteristics at the adjacent vertices. Next, they calculate a weighted mean of these aggregates. However, these methods are informatically expensive and require pre-defined local coordinate systems. In addition, Neural3DMM (Bouritsas et al., 2019) introduces the spiral convolution operation by enforcing a local ordering of vertices through the spiral operator. An initial point for each spiral is a vertex with the shortest geodesic path to a fixed reference point on a template shape. The remaining vertices of the spiral are ordered in the clockwise or counterclockwise directions inductively. However, finding a reference point for an arbitrary shape is challenging. Moreover, the initial point is not unique once two adjacent vertices have the same shortest path to the reference point. FeaStNet (Verma et al., 2018) proposes a graphical neural network in which the neighborhood of each peak for the convolution operation is not preset but instead calculated dynamically. Tangent convolution is introduced in (Tatarchenko et al., 2018), where a small neighborhood around each vertex is used to reconstruct the local function upon which convolution is applied. Some generative models have also been tried on the mesh. Litany et al. (2018) perform shape completion via a graph autoencoder. MeshCNN (Hanocka et al., 2019b) utilizes the particular property of edge in a triangle mesh to extract edge features. Yang et al. (2021) apply continuous convolution on a geodesic region of mesh. Self-Supervised Learning Self-supervised learning is to define some tasks from the data itself, and those human-defined tasks are used to pre-train the model. It is used in computer vision with proxy tasks such as predicting order in time (Wei et al., 2018), finding missing pixels (Pathak et al., 2016), location of patches (Doersch et al., 2015), image orientations (Gidaris et al., 2018), human-made artifacts (Jenni & Favaro, 2018), clusters of images (Caron et al., 2018), camera locations (Agrawal et al., 2015), jaggle puzzle (Noroozi & Favaro, 2016), color of videos (Vondrick et al., 2018), and tracking of image patches(Wang & Gupta, 2015). 
These works demonstrate promising results in transferring visual features from proxy tasks to other tasks. Thus, defining proxy tasks that are related enough to the downstream task is quite important (Jenni & Favaro, 2018). On the other hand, supervisions, like density estimation or clustering, are not domain-specific (Caron et al., 2018). Deep clustering models(Aljalbout et al., 2018; Min et al., 2018; Yang et al., 2017; Hershey et al., 2016; Xie et al., 2016; Ghasedi Dizaji et al., 2017; Shaham et al., 2018; Yang et al., 2016; Hsu & Lin, 2018) come up to jointly train with a network-specific loss. There are many works exploring self-supervised learning on point clouds. They use multi-tasks learning (Hassani & Haley, 2019), reconstruction (Achlioptas et al., 2018; Yang et al., 2018;?) contrast learning (Zhang & Zhu, 2019), restoring point cloud (Shi et al., 2020), point cloud autoregression (Sun et al., 2019b), the orientation prediction (Poursaeed et al., 2020; Han et al., 2019), and approximating convex decomposition (Gadelha et al., 2020) to pre-train the model and achieve state-of-the-art results on point cloud classification and segmentation tasks. Recently, masked autoencoders are used for self-supervised learning on image classification tasks(He et al., 2022). Transformer Applications Transformer, which is proposed by Vaswani et al. (2017), has been widely used in natural language processing (NLP) and then computer vision. In NLP, large Transformer-based models are often pre-trained on large datasets and then fine-tuned for the downstream tasks, like BERT (Devlin et al., 2019) and GPT (Radford et al., 2018; 2019; Brown et al., 2020). In computer vision, applying Transformer on image processing experiences the local-to-global and low-to-high resolution process. Image Transformer (Parmar et al., 2018) applies self-attention to local neighborhoods. And this local attention replaces convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). Sparse Transformers (Child et al., 2019) use scalable approximations to global self-attention for images. Another way to apply attention to blocks of varying sizes (Weissenborn et al., 2019), in this particular case, along individual axes (Ho et al., 2019; Wang et al., 2020). Some models (Cordonnier et al., 2020; Dosovitskiy et al., 2021; Bello et al., 2019; Wu et al., 2020; Chen et al., 2020a) extract patches of size 2 × 2 or 7 × 7 from the input image then apply CNN and Transformer sequentially. These works make Transformer achieve state-of-the-art results on small and medium resolution images. Instead of just classification, Transformer is also used in video processing(Wang et al., 2018; Sun et al., 2019a), object detection(Hu et al., 2018; Carion et al., 2020), unsupervised object discovery(Locatello et al., 2020), and unified text-vision tasks(Chen et al., 2020c; Lu et al., 2019; Li et al., 2019). Recently, Liang et al. (2022) use Transformer as an autoencoder network for mesh reconstruction and self-supervised learning. 3 METHOD MGMA is a masked autoencoder that interprets the mesh as a graph, and each graph node is a face on the mesh. The features on the face nodes are randomly masked first and passed through multiple face graph attention layers. Then max-pooling is applied to obtain the global graph embedding, which is passed to a point cloud decoder for reconstruction pre-train tasks. Masking on face graph is achieved by randomly selecting nodes on the graph according to the masking ratio. 
After one node is selected as the masked node, a learnable masking embedding takes the place of the original embedding, which is adopted from (He et al., 2022; Devlin et al., 2019). Face graph attention layer is the core of our network, as shown in Figure 2. The layer takes a graph and the features on each node of the graph as input. For each node in the graph, the layer first gathers its neighbors according to an adjacency matrix which could be an n-ring neighbor adjacency matrix in our architecture. We denote r as the feature of the root node and n as the gathered features of the root node and its neighbors. Three linear layers fV , fQ, and fK take n, r, and n as input to compute V , Q, and K. In our work, we keep the output dimension of key, value, and query fixed to 64. FaceNodeEmbedding = softmax( QKT√ dk )V (1) After obtaining V , Q, and K, we use Equation 1to get the embedding of each face node. In Equation 1, dk stands for the dimensional size of K. Details of composing the layers into an encoder are in Section A.2. Reconstruction loss In the reconstruction loss function, we create a reconstruction decoder for this function. The input to this decoder is the graph embedding of the mesh. The expected output is the point cloud sampled from the mesh. Like the paper(Achlioptas et al., 2018), we use a similar network architecture fD for decoding a point cloud. So we choose the point cloud as the target for the decoder to generate. And the loss function is the Chamfer Distance (CD), as shown in Equation 2. LCD = 1 N N∑ n=1 min p̂∈ŝ ∥pn − p̂∥22 + 1 M M∑ m=1 min p∈s ∥p̂m − p∥22 (2) where s and ŝ are the ground truth and predicted point sets. M and N denote the number of points in the ground truth and predicted point sets. pn and p̂m are points in point set s and ŝ. 4 EXPERIMENTS AND RESULTS In this section, we introduce experiments to validate the effectiveness of our neural networks. First, we demonstrate the effectiveness of the encoder part of our networks on two supervised classification tasks. Then, we verify our work by pre-training the network for an unsupervised classification task. Finally, we conduct a semi-supervised experiment for part segmentation on 3D shapes. 4.1 SUPERVISED CLASSIFICATION we first verify that our network’s encoder could outperform other networks. By using the designed mesh graph attention encoder, we achieve state-of-the-art performance on SHREC11 and ModelNet40 when the mesh is the input data modality. SHREC11 is a dataset introduced in (Lian et al., 2011) that contains 30 classes, with 20 3D objects in each class. We follow the setup in which split 16 and 10 are the numbers of training 3D objects in each class, making split 10 a harder classification task than split 16. We use the meshes processed by, (Hanocka et al., 2019a) and each mesh contains 500 faces. Our results are reported in Table 1. We train our encoder 300 epochs with Adam optimizer, (Kingma & Ba, 2015) which is with β equal to 0.9 and 0.999, ϵ equal to 1−8, learning rate 0.0002 and weight decay equal to 0.0. We compare our mesh graph attention encoder against eight methods that also take meshes as the input to their networks. It turns out that our encoder is able to get 100% accuracy on both setups. Because SHREC11 is a relatively small dataset for supervised classification and some methods have reached 100% accuracy, we further validate our mesh graph attention encoder on ModelNet40 (Wu et al., 2015). 
4 EXPERIMENTS AND RESULTS In this section, we introduce experiments to validate the effectiveness of our neural networks. First, we demonstrate the effectiveness of the encoder part of our networks on two supervised classification tasks. Then, we verify our work by pre-training the network for an unsupervised classification task. Finally, we conduct a semi-supervised experiment for part segmentation on 3D shapes. 4.1 SUPERVISED CLASSIFICATION We first verify that our network's encoder can outperform other networks. Using the designed mesh graph attention encoder, we achieve state-of-the-art performance on SHREC11 and ModelNet40 when the mesh is the input data modality. SHREC11 is a dataset introduced in (Lian et al., 2011) that contains 30 classes, with 20 3D objects in each class. We follow the standard setup in which 16 or 10 training 3D objects are used per class (split 16 and split 10), making split 10 a harder classification task than split 16. We use the meshes processed by (Hanocka et al., 2019a), where each mesh contains 500 faces. Our results are reported in Table 1. We train our encoder for 300 epochs with the Adam optimizer (Kingma & Ba, 2015), with β1 = 0.9, β2 = 0.999, ϵ = 10^-8, learning rate 0.0002, and weight decay 0.0. We compare our mesh graph attention encoder against eight methods that also take meshes as input to their networks. Our encoder reaches 100% accuracy on both setups. Because SHREC11 is a relatively small dataset for supervised classification and some methods have already reached 100% accuracy, we further validate our mesh graph attention encoder on ModelNet40 (Wu et al., 2015). ModelNet40 is a dataset that contains 40 classes, with 9840 meshes for training and 2468 meshes for testing. Because the meshes in ModelNet40 have different numbers of faces, to fit the meshes onto the GPU and to improve GPU utilization we follow the method in (Huang et al., 2018) to first make each mesh watertight and then simplify it to 2048 faces. We train our encoder for 300 epochs with the same optimizer settings as for SHREC11. The learning rate is decayed by a multiplicative factor of 0.1 at steps 30 and 60. Our method achieves 92.95% test accuracy on ModelNet40. The results are reported in Table 2, where we compare our encoder with nine other methods. Our results are on par with state-of-the-art classification on ModelNet40. These experiments validate that our encoder achieves state-of-the-art performance on 3D shape classification tasks. The next experiments validate the model's performance on unsupervised tasks. 4.2 UNSUPERVISED CLASSIFICATION We pre-train the model on all the provided training data in ModelNet40, keep the pre-trained model's weights, and use it for the classification task without any fine-tuning for the downstream task. After obtaining the graph embedding, we use a linear SVM as the classifier for unsupervised classification on ModelNet40. Our unsupervised learning process is as follows. We first pre-train the masked autoencoder on the training data with the same hyper-parameter settings as in Section 4.1. After pre-training, we pick the model with the lowest Chamfer Distance on the provided test data; because the data used for pre-training contain no label information, we do not consider computing the test data's Chamfer Distance to be information leakage. We use the best model to extract global embeddings, vectors of dimension 1024, from the training and test data. Once we obtain the global embeddings, we train linear SVMs on the ModelNet40 training data's global embeddings. We use 5-fold cross-validation to compute the average validation accuracy on splits of the training data, and we perform a logarithmic search on the SVM regularization parameter C from 1 to 1000 with 10 steps. We then pick the SVM model with the best average validation accuracy to compute the test accuracy. In Section A.1, we visualize graph embeddings using t-SNE (Van der Maaten & Hinton, 2008). As shown in Table 3, our method performs best among mesh-based neural networks on unsupervised learning on ModelNet40. There are two reasons our method outperforms other mesh-based methods. The first is that our encoder utilizes an attention mechanism to pick important nodes while ignoring noisy information by assigning lower weights to noisy neighbors. The second can be attributed to the masking mechanism: it provides more data augmentation to our model and forces the model to focus less on the details of the shapes and more on the general information in the graph. The three methods (Han et al., 2019; Chen et al., 2021; Poursaeed et al., 2020) that outperform ours are point-cloud-based methods. A possible reason is that data augmentation, like rotation (Han et al., 2019), was not considered when designing our framework; adding such design components will be explored in future work.
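The SVM evaluation protocol just described can be sketched compactly with scikit-learn; the use of LinearSVC/GridSearchCV and the array names are illustrative assumptions, since the paper does not state the tooling.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def svm_eval(train_emb, train_y, test_emb, test_y):
    """train_emb: (num_train, 1024) global graph embeddings, train_y: labels;
    test_emb: (num_test, 1024), test_y: labels."""
    # Logarithmic search over C in [1, 1000] with 10 steps, scored by
    # 5-fold cross-validation on the training embeddings.
    grid = GridSearchCV(LinearSVC(), {"C": np.logspace(0, 3, 10)}, cv=5)
    grid.fit(train_emb, train_y)
    # The SVM with the best average validation accuracy is used to
    # compute the final test accuracy.
    return grid.best_estimator_.score(test_emb, test_y)
```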
In Figure 3, we show the reconstruction results on ModelNet40 test data. To some extent, the autoencoder ignores the input mesh's detailed features while preserving its overall structure. Detailed features, like an airplane's engine, a chair's arm, and the leg style of a table, are ignored during reconstruction. Ignoring those detailed features means that the encoder encodes the information needed to decode an average shape of the class but forgets the details. For reconstruction tasks, this is not desirable, but for classification, this process is akin to cleaning redundant information from the input shape. More reconstruction visualization results are shown in Figure 10. 4.3 PART SEGMENTATION Part segmentation is a fine-grained point-wise classification task that aims to predict each point's part category label in a given shape. In our work, we need to predict the part category label for each face in a mesh. We evaluate the learned features on the ShapeNetPart dataset (Yi et al., 2016), which contains 16,881 objects from 16 categories (12,149 for training, 2,874 for testing, and 1,858 for validation). Each object consists of 2 to 6 parts, with a total of 50 distinct parts among all categories. We use the mean Intersection-over-Union (mIoU) as the metric, calculated by averaging the IoUs of the different parts occurring in one shape. For the segmentation results, we follow the protocol from (Hassani & Haley, 2019); the results are shown in Table 4. In the original dataset, only point clouds and their corresponding point-wise labels are provided. To get ground truth for meshes, we first align each mesh with the point cloud by sampling points on the mesh and aligning the centers of the sampled point cloud with the provided point cloud. After the alignment, we sample points uniformly on each face of the mesh and compute the nearest point in the ground-truth point cloud for each sampled point. The face's label is then determined by a majority vote over the sampled points' labels. After this processing, we follow (Zhao et al., 2019) and randomly use 5% and 1% of the ShapeNetPart training data to evaluate the part segmentation task in a semi-supervised setting. We use the same pre-trained model to extract the face features of the sampled training data, along with validation and test samples, without any fine-tuning. Following (Hassani & Haley, 2019), we then train a 4-layer MLP [2048, 4096, 1024, 50] on the sampled training sets and evaluate it on all test data. The input feature to the MLP is the concatenation of face node embeddings and global graph embeddings, which gives the input features a dimension of 2048. We train the model with the Adam optimizer with a fixed learning rate of 0.002. This training takes 30 epochs and converges very fast. Because the features are easy for the MLP to distinguish, the entire process takes about 15 minutes, including testing after each epoch's training. During testing, we project the labels computed on the mesh's faces back to the provided point clouds according to the distance between the points and the faces. The results in Table 4 suggest that our method performs on par with the point cloud baselines on the ShapeNetPart semi-supervised segmentation task. In Figure 3, we show a visualization of our semi-supervised segmentation; more segmentation visualization results are shown in Figure 9.
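The face-labeling procedure used above to build mesh ground truth (sample points on each face, find the nearest ground-truth point, then take a majority vote) can be sketched as follows. The per-face sampling and the SciPy KD-tree are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_faces(faces_pts, gt_pts, gt_labels):
    """Assign a part label to each mesh face by majority vote.

    faces_pts: list of (S, 3) arrays, points sampled uniformly on each face
    gt_pts:    (P, 3) ground-truth point cloud (assumed already aligned
               with the mesh by matching point-cloud centers)
    gt_labels: (P,) per-point part labels, integers in [0, 49]
    """
    tree = cKDTree(gt_pts)
    face_labels = []
    for pts in faces_pts:
        _, nn_idx = tree.query(pts)          # nearest ground-truth point
        votes = gt_labels[nn_idx]
        # Majority vote over the sampled points' labels.
        face_labels.append(np.bincount(votes).argmax())
    return np.array(face_labels)
```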
5 DISCUSSION 5.1 IS A 0 MASKING RATIO THE BEST CHOICE FOR EVALUATING MGMA? In He et al. (2022), the masking ratio at test time is fixed at 0, under the assumption that providing as much information as possible to the trained masked autoencoder is the best choice. We explore the effect of test masking ratios on the unsupervised classification task: in our experiments, the test masking ratio is not fixed but is also varied when evaluating the pre-trained model. In Figure 4 (a), we fix the test masking ratio to 0.0 and vary the training masking ratio from 0.1 to 0.9, which demonstrates that varying the training masking ratio changes performance on the unsupervised learning task. In Figure 4 (b) and (c), we vary the masking ratio not only during training but also during validation and testing. It turns out that the maximum test accuracy is obtained when the training masking ratio is 0.6 and the test masking ratio is 0.1 or 0.3. This result suggests that 0 is not the only reasonable test masking ratio for evaluating a model trained with masking. For convenience, we use the 2D coordinate (a, b) to denote the situation where the training masking ratio is a and the test masking ratio is b. In Figure 5, we investigate why the best test accuracy happens at (0.6, 0.1) and (0.6, 0.3). We compute the difference between validation accuracy and test accuracy, which is usually taken as a sign of overfitting or underfitting. It turns out that in most cases our model overfits the task, but the maximum-test-accuracy points happen to be points with less overfitting. Another point that exhibits this property is (0.7, 0.7) in the difference map, but at that point more information about the mesh is lost. In total, there are three regions on the heat map in Figure 5 exhibiting the less-overfitting property; the last one is at (0.2, 0.6), but there the test masking ratio is so high that the model, while not overfitting, also extracts less useful information. Even though 0.75 is the best masking choice in MaskMAE, our 3D mesh dataset differs from image datasets: in 3D space, the optimal masking ratio becomes lower, which suggests that a face participating in mesh classification plays a more important role than an individual pixel in an image. Also, the point at position (0.5, 0.5) makes the model overfit the most. There are two possible reasons. First, training with a masking ratio of 0.5 gives the model the most freedom, making validation easier and testing harder. Second, having the same masking ratio at training and test time could make the model rely on finding masking information in the mesh. An opposite example is the point (0.6, 0.1): the model is trained at a masking ratio of 0.6 but tested at 0.1. Here, the masking still helps purge redundant information unrelated to classification, but the difference between training and testing also forces the model to discard masking-specific information and instead find common details. For more accuracy curves under different training and test masking ratios, see Section A.3.
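The train/test masking-ratio sweep behind Figures 4 and 5 amounts to a simple grid evaluation. The sketch below is runnable but uses placeholder stand-ins for the pre-training and SVM-evaluation steps of Section 4.2; these helpers are hypothetical, not the authors' code.

```python
import numpy as np

# Placeholder stand-ins for the Section 4.2 steps (hypothetical); in the
# real experiment these would pre-train MGMA and evaluate the linear SVM.
def pretrain(train_ratio):
    return {"train_ratio": train_ratio}

def svm_test_accuracy(model, test_ratio):
    return np.random.rand()  # replace with the real evaluation

train_ratios = [round(0.1 * k, 1) for k in range(1, 10)]  # 0.1 .. 0.9
test_ratios = [round(0.1 * k, 1) for k in range(0, 10)]   # 0.0 .. 0.9
acc = np.zeros((len(train_ratios), len(test_ratios)))

for i, tr in enumerate(train_ratios):
    model = pretrain(tr)                 # one pre-training run per train ratio
    for j, te in enumerate(test_ratios):
        # Coordinate (tr, te): train masking ratio tr, test masking ratio te.
        acc[i, j] = svm_test_accuracy(model, te)

i, j = np.unravel_index(acc.argmax(), acc.shape)
print("best (train, test) masking ratios:", train_ratios[i], test_ratios[j])
```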
6 CONCLUSION We propose a self-supervised mesh encoding approach that learns face-node and global shape features on meshes using a masked graph autoencoder pre-trained with a point cloud reconstruction loss. We thoroughly evaluated our model on mesh classification and segmentation benchmarks. The results suggest that the learned face-level and global features outperform prior state-of-the-art models in self-supervised representation learning. For instance, in the ModelNet40 shape classification task, our model achieved a state-of-the-art (among self-supervised mesh encoders) accuracy of 89.8%. We also find that different combinations of test and training masking ratios in MGMA provide varying information to downstream tasks. In the ShapeNetPart segmentation task, our model achieved a mIoU of 78.5, which outperforms the state-of-the-art mesh encoders. We hope our work can provide a new direction in mesh deep learning analysis and self-supervised learning on mesh data. A APPENDIX A.1 T-SNE VISUALIZATION We visualize the graph embeddings obtained by fixing the test masking ratio to 0 using t-SNE (Van der Maaten & Hinton, 2008). We observe that plant and chair are two classes that cluster close together but are easy to distinguish; the reason could be that both are tall cuboids, but chairs have a more regular appearance than plants. The piano and the range hood are a similar case: they have a similar outline but differ in detail, since the mesh of a range hood usually has a hole inside its body. Another source of confusion for the model is the nightstand and the dresser, two genuinely similar objects. The t-SNE plots at ratios 0.3 and 0.6 are quite similar in clustering, while the plot at a ratio of 0.9 begins to confuse objects like desks and pianos (note the bed category moving from the corner to the center). A.2 NETWORK ARCHITECTURE The overall architecture of our network is shown in Figure 7. It has a heavier encoder than decoder, following the design logic of MaskMAE, since after pre-training we no longer need the decoder. We did not use batch normalization in the decoder in order to follow (Achlioptas et al., 2018), and the decoder is not the main focus of our paper. The input features to the graph are computed using the descriptors defined in (Singh et al., 2021b), which are 320-dimensional vectors for the face nodes. For the first two mesh graph attention blocks, we use 1-ring neighbors for the neighbor lookup; for the third mesh graph attention block, we use 2-ring neighbors (a sketch of deriving the n-ring adjacency from the 1-ring adjacency appears after Section A.5). A.3 MASKING RATIO ANALYSIS In Figure 8, we plot the accuracy curves under different training and test masking ratios. Three patterns of accuracy curve are found when the test masking ratio is fixed. The first happens at test masking ratios of 0.0, 0.1, and 0.2: the accuracy goes up and then down. The second is at test masking ratios of 0.3, 0.4, 0.5, and 0.6: the accuracy goes up, down, and up again. The last happens at test masking ratios of 0.7, 0.8, and 0.9: the accuracy goes up. The reason is straightforward for the first and third patterns. For the first pattern, the models are trained with low masking ratios; as the training masking ratio increases, the models focus on extracting information beyond the masking itself, which explains the initial rise of the curve, and when the ratio becomes too high there is not enough information, so the curve begins to drop. The third pattern is caused by test masking ratios being so high that models trained with low masking ratios cannot efficiently capture the information of the test meshes; only models trained under high masking ratios can. The second pattern is generated where the first and third patterns merge. One pattern of accuracy curve is found when the training masking ratio is fixed.
[Figure 8: accuracy curves for every combination of training masking ratio (0.1–0.9) and test masking ratio (0.0–0.9).] A.4 SEGMENTATION VISUALIZATION RESULTS More segmentation results are shown in Figure 9. A.5 RECONSTRUCTION VISUALIZATION RESULTS More reconstruction results are shown in Figure 10.
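As referenced in Section A.2, the 2-ring neighbor lookup can be derived from the 1-ring face adjacency. Below is a minimal NumPy sketch under the assumption that two faces are adjacent when they share an edge; the function and variable names are illustrative, not from the authors' code.

```python
import numpy as np

def n_ring_adjacency(adj_1ring: np.ndarray, n: int) -> np.ndarray:
    """Derive an n-ring neighbor adjacency matrix from 1-ring face adjacency.

    adj_1ring: (F, F) boolean matrix, True where two faces share an edge
    n:         ring count (1 for the first two attention blocks,
               2 for the third, as described in Section A.2)
    """
    reach = adj_1ring.copy()
    frontier = adj_1ring.copy()
    for _ in range(n - 1):
        # Expand reachability by one more hop over the face graph.
        hop = frontier.astype(np.int64) @ adj_1ring.astype(np.int64)
        frontier = hop > 0
        reach |= frontier
    # Include the root node itself, since it is gathered with its neighbors.
    np.fill_diagonal(reach, True)
    return reach
```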
1. What is the focus and contribution of the paper regarding mesh representation learning? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of its self-supervised nature and use of graph attention layer? 3. Do you have any concerns regarding the training and evaluation process, such as the selection of the best model and the involvement of test data? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the presentation of results, such as the order of tasks in figures and tables, and the lack of baselines in Table 4? 6. Is there anything unclear or confusing in the paper, such as the statement about finding masking information or the setting of 1-ring and 2-ring neighbors?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors propose a self-supervised method to learn mesh representations. The proposed method uses a graph attention layer to build the model and a graph masking strategy to train it. The learned global feature is trained with the task of reconstructing point clouds. The trained model is evaluated on shape classification and segmentation tasks on the SHREC11 and ModelNet40 datasets. Strengths And Weaknesses Pros -This paper is well written and easy to follow. -The proposed representation learning is self-supervised without extra annotation effort. It also takes advantage of the graph attention layer and graph masking to learn effective representations. -Experimental results show that the proposed method achieves comparable performance against existing methods on the shape classification and segmentation tasks. The analysis of the effect of masking ratios at the training and evaluation stages reveals that different ratios lead to performance variation. Cons -The technical contribution of this work is limited given the similar work on graph attention layers for point cloud/mesh processing and masking strategies in image processing. The study of the effect of the training and evaluation ratios is more an empirical study than a contribution, as no guidance is provided on how to select the optimal values without trying all possible values. In Section 4.2, the best model is picked according to the lowest Chamfer Distance on the provided test data. Although the authors argue there is no label leakage, the involvement of test data in the selection of models is not acceptable. What is the average performance of a randomly picked model? -The authors argue that one reason their method outperforms others in Table 3 is the attention mechanism. However, evidence of the learned attention is expected to be provided to support the statement. -In the text, the classification task is put before the task of segmentation, which is different from the order in Figures 3, 9, and 10. The authors are advised to make the order consistent. -In Table 4, it is unclear if the Multi-Task model is the one from the reference Hassani & Haley, 2019. Furthermore, more baselines on this task are expected to be added to Table 4. -In Figure 4, both (b) and (c) are about the train and test masking ratios. -It is unclear what the authors intend to state at the end of Section 5.1: 'Second, having the same masking ratio could make the model rely on finding masking information from the mesh'. It is unclear how finding masking information can help the task of classification or segmentation. -The networks for classification and segmentation are not shown in either the main text or the appendix. It is strange to me that the authors set 1-ring neighbors for the neighbor lookup in the first two mesh graph attention blocks and 2-ring neighbors for the last layer. The authors may explain the reason for this setting. -In Section 4.1, 'other night methods' should be 'other nine methods'. The format of the references in Table 3 is incorrect. Clarity, Quality, Novelty And Reproducibility The paper is easy to follow. It combines existing techniques and conducts some further experimental analysis on the setting of masking ratios. However, the contribution is limited.
ICLR
Title MGMA: Mesh Graph Masked Autoencoders for Self-supervised Learning on 3D Shape Abstract We introduce a self-supervised learning model to extract face-node and global graph embeddings on meshes. We define a model with graph masking on a mesh graph composed of faces and pre-train it on self-supervised tasks. We evaluate our pre-trained model on shape classification and segmentation benchmarks. The results suggest that our model outperforms prior state-of-the-art mesh encoders: in the ModelNet40 classification task, it achieves an accuracy of 89.8%, and in the ShapeNet segmentation task, it achieves a mean Intersection-over-Union (mIoU) of 78.5. Further, we explore and explain the correlation between test and training masking ratios in Mesh Graph Masked Autoencoders (MGMA), and we find that the best performance is obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. Our work may open up new opportunities to address label scarcity and improve the learning power in geometric deep learning research. 1 INTRODUCTION Mesh is a data format widely used in computer graphics and is used more and more frequently in computer vision tasks as additional supervision or inference targets. It provides an accurate, efficient, and irregular representation of three-dimensional shapes. These properties make it a popular format for capturing continuous underlying surfaces. Many commonly used datasets, such as ModelNet (Wu et al., 2015), ShapeNet (Chang et al., 2015), ScanNet (Dai et al., 2017), and Pix3D (Sun et al., 2018), utilize meshes as the core or intermediate agent. A number of 3D data formats can be derived from the mesh structure, such as voxel grids, point clouds, and implicit surfaces. Researchers customize a series of methods to analyze those regular data formats using deep learning, such as using 3D convolutions to parse 3D voxel grids (Wu et al., 2016), symmetric functions (Qi et al., 2017a) to process point clouds, and signed distance fields to represent implicit surfaces (Cruz et al., 2021; Park et al., 2019). The mesh representation itself provides excellent quality and computational efficiency while preserving sharp shape features. Deep learning with data formats derived from meshes has gained more and more success in 3D shape analysis, while analyzing the original mesh format with deep learning approaches is still an open problem, so studies on developing deep learning methods for mesh data attract considerable interest. Traditional approaches treat a mesh as a graph with vertices as nodes (Hanocka et al., 2019b; Verma et al., 2018) and develop methods akin to CNNs, containing convolution and pooling operations, to learn shared filters that extract features from edges in meshes. However, such approaches ignore the rich manifold structure meshes can represent, such as topology and the Riemannian metric. On the other hand, most current mesh-based networks validate themselves on small or synthetic datasets. The dearth of studies demonstrating the effectiveness of meshes on large datasets limits the development of deep learning applications on meshes. Moreover, the compact and efficient nature of the mesh representation should be well utilized in ongoing geometric deep learning research. A powerful tool to analyze 3D meshes would benefit computer graphics and computer vision researchers. There are significant challenges in developing mesh-based geometric deep learning methods.
The first challenge is passing a mesh, an irregular data format, forward through a neural network. In our work, we treat the mesh as a graph whose nodes are its faces. The recent success of graph processing provides us with models for handling graph data; the mesh is thus another data format that a graph model naturally processes. Further, we design an attention mechanism alongside graph convolution on meshes to leverage its excellent feature extraction ability. Meanwhile, because of the high cost and high variability associated with manual data labeling, there is more and more unlabeled 3D data. Traditional supervised studies do not consider unlabeled data, which sacrifices a huge amount of untapped information. Therefore, unsupervised learning attracts more attention and has become an important concept for extracting information from unlabeled data. Reviewing the trend and development of artificial intelligence, self-supervised training on large datasets to produce pre-trained models for downstream tasks is becoming a predominant approach for processing and extracting important features from millions or billions of samples (Chen et al., 2020b;a; Dosovitskiy et al., 2021; Brown et al., 2020). Training an autoencoder with masking on the input data (Devlin et al., 2019) has proved to be an effective method for image classification (He et al., 2022). In this paper, benefiting from the mesh data representation, we propose to apply graph masking and point cloud reconstruction to support our self-supervised learning architecture and advance 3D deep learning research. We present a mesh-based framework, Mesh Graph Masked Autoencoder (MGMA), which is pre-trained by self-supervised analysis of the mesh data, and we apply the pre-trained model to large-scale 3D shape datasets. Our network is designed to be suitable for different kinds of mesh representations, increasing flexibility and supporting a variety of available data. MGMA exhibits state-of-the-art performance on supervised tasks. Furthermore, it can perform unsupervised and semi-supervised classification and segmentation tasks. We show in Figure 1 that a mesh can be considered as a graph with faces as nodes and pre-trained to obtain a model applicable to multiple recognition tasks. To demonstrate the effectiveness of our method, we perform a variety of experiments and show state-of-the-art performance among mesh-based shape feature extractors. The key contributions of our work are as follows: 1. We introduce a mesh graph autoencoder and train it with graph masking. 2. With our novel MGMA encoder, our self-supervised learning model incorporates unlabeled data into the training stage and enhances 3D data learning power. 3. We comprehensively evaluate our model under various learning benchmarks on SHREC11, ModelNet40 supervised and unsupervised classification, and ShapeNetPart semi-supervised segmentation tasks, and show that our model achieves state-of-the-art results w.r.t. prior mesh-based neural network models. 4. We explore and explain the correlation between test and training masking ratios in MGMA, and we find that the best performance is obtained when mesh graph masked autoencoders are trained and evaluated under different masking ratios. This insight may guide future self-supervised learning algorithm development. 2 RELATED WORK Deep Learning on Meshes Treating a polygon mesh as a graph allows graph-based methods to be applied to it accordingly.
There are two existing categories of graph methods: spectral methods (Bruna et al., 2013; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Levie et al., 2019) and spatial methods (Micheli, 2009; Atwood & Towsley, 2016; Niepert et al., 2016; Gilmer et al., 2017; Fey et al., 2018; Masci et al., 2015; Monti et al., 2017; Huang et al., 2019). Convolution in the spectral domain amounts to non-localized filtering (Defferrard et al., 2016); Chebyshev polynomial expansion is one method to solve this non-localization problem (Defferrard et al., 2016). On the other hand, there is no easy way to induce weight sharing across different locations of a graph, due to the difficulty of matching local neighborhoods in the spatial domain (Bruna et al., 2013). Nevertheless, Atwood & Towsley (2016) proposed a spatial filtering method that assumes information is transferred from a vertex to its adjacent vertices with a specific transition probability. The power of the transition probability matrix implies that farther adjacent vertices provide little information about the central vertex. Furthermore, Geodesic CNN (Masci et al., 2015), MoNet (Monti et al., 2017), and SplineCNN (Fey et al., 2018) deal with the weight sharing problem by designing local coordinate systems for the central vertex of a local patch: they apply a set of weighting functions to aggregate the features at the adjacent vertices and then compute a weighted mean of these aggregates. However, these methods are computationally expensive and require pre-defined local coordinate systems. In addition, Neural3DMM (Bouritsas et al., 2019) introduces the spiral convolution operation by enforcing a local ordering of vertices through the spiral operator. The initial point of each spiral is the vertex with the shortest geodesic path to a fixed reference point on a template shape, and the remaining vertices of the spiral are ordered in the clockwise or counterclockwise direction inductively. However, finding a reference point for an arbitrary shape is challenging; moreover, the initial point is not unique once two adjacent vertices have the same shortest path to the reference point. FeaStNet (Verma et al., 2018) proposes a graph neural network in which the neighborhood of each vertex for the convolution operation is not preset but calculated dynamically. Tangent convolution is introduced in (Tatarchenko et al., 2018), where a small neighborhood around each vertex is used to reconstruct the local function upon which convolution is applied. Some generative models have also been tried on meshes; Litany et al. (2018) perform shape completion via a graph autoencoder. MeshCNN (Hanocka et al., 2019b) utilizes particular properties of edges in a triangle mesh to extract edge features. Yang et al. (2021) apply continuous convolution to geodesic regions of a mesh. Self-Supervised Learning Self-supervised learning defines tasks from the data itself, and these human-defined tasks are used to pre-train the model. It is used in computer vision with proxy tasks such as predicting temporal order (Wei et al., 2018), finding missing pixels (Pathak et al., 2016), the location of patches (Doersch et al., 2015), image orientations (Gidaris et al., 2018), human-made artifacts (Jenni & Favaro, 2018), clusters of images (Caron et al., 2018), camera locations (Agrawal et al., 2015), jigsaw puzzles (Noroozi & Favaro, 2016), the color of videos (Vondrick et al., 2018), and tracking of image patches (Wang & Gupta, 2015).
1. What is the focus of the paper regarding mesh data processing? 2. What are the strengths of the proposed masked autoencoder architecture for 3D meshes? 3. What are the weaknesses of the paper, particularly regarding its novelty and experimental scope? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor fixes or typos in the review that should be addressed?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a masked autoencoder architecture for 3D meshes. The input mesh is processed by considering the faces as nodes of a connected, non-oriented graph. Then some nodes of the resulting graph are masked and passed to attention layers; a final max-pool produces the graph embedding, which can be used for different applications. The method is tested on classification and part segmentation tasks, with some qualitative results on shape reconstruction, showing results on par with SoTA. Strengths And Weaknesses STRENGTHS S1) RESEARCH DIRECTION: Looking for adaptations of masking and attention mechanisms to mesh data is a compelling and quite active research field in the Geometric Deep Learning community. S2) MASK DISCUSSION: The paper discusses the performance variation at different portions of masking. Such analysis is useful for future works, pointing out that there is room for more sophisticated masking policies. WEAKNESSES W1) NOVELTY: The proposed method does not introduce any particular methodological novelty. Attention mechanisms have already been applied to 3D shapes (e.g., "Shape registration in the time of transformers", Trappolini et al., 2021), as well as masking techniques on meshes (Liang et al., 2022). On graphs, some attempts can be mentioned as well (e.g., "MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs", Tan et al., 2022; "Graph Masked Autoencoders with Transformers", Zhang et al., 2022). Further references can be found in a recent survey ("A Survey on Masked Autoencoder for Self-supervised Learning in Vision and Beyond", Zhang et al., 2022). Similarly, several references can be found in "Transformers in 3D Point Clouds: A Survey", Lu et al., 2022. I am particularly concerned by (Liang et al., 2022), since it is mentioned in the related work without providing its positioning w.r.t. the paper's contribution. I think a proper discussion of the methodological novelty should be provided. W2) RELATED WORKS AND COMPARISONS: I think there are significant works that are not discussed and against which a comparison would be relevant. DiffusionNet (Sharp et al., 2020) and DeltaConv (Wiersma et al., 2022) are both SoTA networks for meshes and point clouds that show results on the same datasets used in this paper (also, DeltaConv has the best results on ModelNet40). I also suggest modifying both the "Self-Supervised Learning" and "Transformer Applications" paragraphs to be more focused on 3D data and, in particular, meshes. W3) EXPERIMENTS: The architecture is quantitatively tested on only two datasets, one of which previous methods already saturate. The applicability of the method to more challenging and advanced tasks (e.g., protein segmentation, shape matching, non-rigid object segmentation/classification) is not clear. I think this significantly limits the impact of the work and does not clarify the real application scenarios of the method. Finally, while the results are promising, in general they do not fully reach the state of the art. Clarity, Quality, Novelty And Reproducibility The novelty of the method is among my main concerns. Similar ideas have already been proposed in the same domain. I have no particular doubts about the method's reproducibility. The paper is overall clear, while the introduction contains vague statements which may convey a wrong message: "mesh, an irregular data format" -> "irregular" is unclear, since meshes enjoy more structure than a point cloud.
"Traditional studies do not consider unlabeled data" -> what does "traditional" mean exactly? MINOR FIXES I have also found some typos: "neighing nodes' -> Neighbourhood nodes? "Equation1to" -> Missing a space "we first verify" -> Missing capital letter
ICLR
Title A TWO-STAGE FRAMEWORK FOR MATHEMATICAL EXPRESSION RECOGNITION Abstract Although mathematical expression (ME) recognition has achieved great progress, the development of ME recognition in real scenes is still unsatisfactory. Inspired by recent work on neural networks, this paper proposes a novel two-stage approach that takes a printed mathematical expression image as input and generates a LaTeX sequence as output. In the first stage, the method locates and recognizes the math symbols of the input image with an object detection algorithm. In the second stage, it translates math symbols with position information into LaTeX sequences using a seq2seq model equipped with an attention mechanism. In particular, the detection of mathematical symbols and the structural analysis of mathematical formulas are carried out separately in two steps, which effectively improves the recognition accuracy and enhances the generalization ability. The experiments demonstrate that the two-stage method significantly outperforms the end-to-end method. In particular, the ExpRate (expression recognition rate) of our model is 74.1%, 20.3 percentage points higher than that of the end-to-end model on test data that does not come from the same source as the training data. 1 INTRODUCTION Mathematical expressions (MEs) play an essential role in math, physics, and many other fields. Recognizing mathematical expressions is receiving increasing attention for applications in the digitization and retrieval of printed documents. The process of recognizing mathematical expressions converts them into LaTeX strings and includes three stages: symbol segmentation, symbol recognition, and structural analysis. ME recognition is usually divided into the handwritten and printed domains. In the domain of printed MEs, researchers face three challenges (Anderson (1967); Belaid & Haton (1984)): complicated two-dimensional structures, various styles of images in printed input, and strong dependency on contextual information. Three major problems are involved in ME recognition (Zanibbi et al. (2012); Mouchère et al. (2016)): symbol segmentation, symbol recognition, and structural analysis. These problems can be solved sequentially or globally. Deng et al. (2016a) proposed an end-to-end formula recognition method for images generated directly from LaTeX code. In their method, a CNN is applied to extract visual features from the input images, and every row is then encoded using a recurrent neural network (RNN). These encoded features are then used by an RNN decoder with a visual attention mechanism to produce the final outputs. Their model achieved 77.46% accuracy on test data that had the same distribution as the training data. For convenience, we define homologous and non-homologous test data here: test data is called homologous if it comes from the same source as the training data; otherwise, it is called non-homologous. The results demonstrated that the model performs well on homologous test data. However, in practice we found that the model performs poorly on non-homologous test data; that is, the generalization ability of the above model is very weak. In real scenes, the images may be of great variety, with a wide range of backgrounds, as shown in Figure 1. It is impossible to predict all input cases, and the test data may diverge from what the system has seen before. Therefore, the method above is not appropriate for recognizing images from real scenes.
It is therefore necessary to design a model with strong generalization ability to recognize real ME images. To improve generalization, we decouple the feature extraction process from the translation process. In the feature extraction process, we use YOLOv3 (Redmon & Farhadi, 2018) to locate and classify the symbols in an image. The class and location information of each symbol is then vectorized and used as input for the seq2seq model, which translates it into LaTeX strings. The two-stage approach has several benefits. Firstly, changes in the style of the input images have no influence on the encoder-decoder model, and YOLOv3 has good generalization ability (Redmon et al., 2015; Redmon & Farhadi, 2018), so the two-stage method generalizes better than the end-to-end method (Deng et al., 2016b). Secondly, the feature vectors composed of position and classification information are much more concise than the feature vectors extracted directly by convolutional layers (Deng et al., 2016b), which makes them easier to learn from and yields a higher recognition rate.

The main contributions of this paper can be summarized as follows: (1) A two-stage method for MEs recognition is proposed that decouples the feature extraction process from the translation process, which improves generalization ability and achieves better accuracy. (2) By concatenating position information and classification information into feature vectors, we successfully translate symbols with position information into LaTeX strings with a seq2seq model. To the best of our knowledge, we are the first to use the seq2seq model to solve the structural analysis problem, and we achieve satisfactory results, which may accelerate progress in machine recognition of other two-dimensional languages. (3) We propose a method to automatically generate ME images together with the position and classification information of the symbols, which avoids expensive manual annotation.

2 RELATED WORK

Detection frameworks based on state-of-the-art convolutional neural networks (CNNs) can be divided into two categories: one-stage methods and two-stage methods. Typical two-stage methods include R-CNN (Girshick et al. (2014)), Fast R-CNN (Uijlings et al. (2013)), Faster R-CNN (Ren et al. (2017)) and R-FCN (Dai et al. (2016)), which first utilize region proposal generation algorithms to generate potential symbol regions and then perform classification on the proposed regions. These methods improve accuracy but yield slow processing speed. In contrast, single-stage methods like SSD (Liu et al. (2016)) and YOLO (Redmon et al. (2015)) apply predefined sliding default boxes of different scales on one or multiple feature maps, thus keeping a balance between speed and accuracy. For structural analysis, some papers prefer approaches based on predefined grammars. For example, Lavirotte (1998) investigated graph grammar and graph rewriting as a solution to recognize two-dimensional mathematical notation. Chan & Yeung (2001) developed a system based on definite clause grammars (DCG), which incorporated an error detection and correction mechanism into a parser. Huang (2007) proposed a structural analysis approach based on the Attribute String Grammar and the Baseline Tree Transformation approaches. Álvaro et al. (2014) introduced hidden Markov models to recognize mathematical symbols and a stochastic context-free grammar to model the relationships between these symbols. Yamamoto et al. (2006)
proposed a new two-dimensional structure model for mathematical expressions based on the new concept of a Hidden Writing Area (HWA). Maclean & Labahn (2015) presented a system that captured all recognizable interpretations of the input and organized them in parse trees using a Bayesian scoring model. Álvaro et al. (2016) described a statistical grammar-based framework that deals with structured problems via two-dimensional grammars. Some papers introduced end-to-end frameworks (Tao et al. (2013); Cireşan et al. (2010); Deng et al. (2016a)) based entirely on neural networks, which do not require a predefined math grammar. Zhang et al. (2017b) proposed a model named Watch, Attend and Parse (WAP), which has two components: a watcher and a parser. In Zhang et al. (2017a), the authors improved the CNN encoder by employing a novel architecture called densely connected convolutional networks (DenseNet) (Gao et al. (2016)).

3 NETWORK ARCHITECTURE

In this section, we give a detailed introduction to the two-stage framework proposed in this paper for printed mathematical expression recognition. In the first stage, the main task is to locate and recognize the math symbols of the input images with YOLOv3. In the second stage, the task is to translate the vectors containing symbol categories and positional information into LaTeX strings. We elaborate on the classic attention mechanism based on the encoder-decoder framework and explain the parsing process. As illustrated in Fig. 2, the detector is YOLOv3, which generates a series of symbol sequences with location information from the whole ME image. This information, including the symbols' locations and classes, is then processed and converted into vectors that serve as input to the seq2seq model. Finally, the output of the decoder is a LaTeX string.

3.1 DETECTION: YOLOV3

Symbol detection in formula images is more complex than other natural-scene object detection because of the smaller contours and various shapes of the symbols. There are many small symbols, like '·', '−', and subscript and superscript symbols, which may be missed during symbol detection. Besides, some symbols are very similar, like the digit '1' and the letter 'l', which can cause classification errors. Detection errors seriously affect the parsing in the next stage and thus reduce the final formula recognition accuracy. Among the many object detection algorithms, YOLOv3 has proved to be the most effective at detecting small objects. YOLOv3 integrates advanced concepts from other object detection algorithms, such as feature pyramid networks (Lin et al., 2016) and the anchor box prediction mechanism (Redmon & Farhadi, 2017). It uses the entire image as the input of the network and Darknet-53 without a fully connected layer to extract features, and finally predicts the bounding box of each object and its category directly at the output layer. When predicting boxes, it extracts features at three different scales and merges up-sampled feature maps with earlier feature maps, thus achieving better performance on small-object detection. We adjust the sizes of the anchor boxes to reduce missed detections. In particular, anchors like (10,10), (6,40) and (40,6) are added to detect points and horizontal or vertical lines. For similar symbols like the digit '1' and the letter 'l', it is almost impossible for the model to distinguish them. Such errors might be corrected if extra rules are added as a postprocessing step, as sketched below.
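As an illustration of such a rule-based postprocessing step, the sketch below disambiguates '1' versus 'l' from the classes of neighbouring detections. This is only a minimal sketch of the idea: the `Box` record, the rule set and the neighbourhood heuristic are our own illustrative assumptions, not part of the paper's system.

```python
from dataclasses import dataclass, replace

@dataclass
class Box:
    # Hypothetical detection record: class label, top-left corner, size.
    label: str
    x: float
    y: float
    w: float
    h: float

def disambiguate_one_vs_ell(symbols: list[Box]) -> list[Box]:
    """Toy rule: a '1'/'l' flanked by letters is probably 'l';
    flanked by digits it is probably '1'."""
    letters = set("abcdefghijklmnopqrstuvwxyz")
    digits = set("0123456789")
    out = list(symbols)
    for i, s in enumerate(out):
        if s.label not in {"1", "l"}:
            continue
        left = out[i - 1].label if i > 0 else ""
        right = out[i + 1].label if i + 1 < len(out) else ""
        if left in letters and right in letters:
            out[i] = replace(s, label="l")
        elif left in digits and right in digits:
            out[i] = replace(s, label="1")
    return out
```

Rules of this kind act only on the detector's symbol sequence, so they slot in naturally between the two stages without retraining either model.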
3.2 PARSE: ENCODER-DECODER

Many grammar-based parsing algorithms have been proposed for structural analysis and have performed well in several systems (Álvaro et al. (2014); Awal et al. (2014); Álvaro et al. (2016)). However, grammar-based methods require prior knowledge to define an ME grammar, and their complexity increases exponentially with the size of the predefined grammar. The encoder-decoder framework was first presented in Cho et al. (2014), where it was used to translate one language into another. Structural analysis parses symbols with position information into a LaTeX sequence, which is similar to translation; the only difference between them is that the former is two-dimensional and the latter is one-dimensional. By concatenating position information and classification information into input vectors, we successfully employ an encoder-decoder model to perform structural analysis of MEs.

3.2.1 ENCODER-DECODER

The LSTM is an improved version of the simple RNN, so we employ an LSTM as the encoder to alleviate the vanishing and exploding gradient problems (Bengio et al. (2002)). In the encoding part, the input at each time step is a symbol with position information, represented by the feature vector (x, y, w, h, o-h), where (x, y, w, h) are the coordinates of the top-left corner, the width and the height of the rectangle that contains the symbol, and o-h is a one-hot vector that distinguishes the symbol from the others. Similarly, we employ an LSTM as the decoder. In the decoding stage, we aim to generate the most likely LaTeX string given the input feature vectors:

$$\hat{y} = \arg\max_{y} \log P(y \mid x) \qquad (1)$$

The output sequence $y$ is closely related to the semantic vector $C$ generated by the encoder, and each $y_t$ is also affected by the outputs generated at earlier time steps, which can be described by the following formula:

$$P(y \mid C) = \prod_{t=1}^{T} p\left(y_t \mid \{y_1, y_2, \ldots, y_{t-1}\},\, C\right) \qquad (2)$$

In addition, it is natural to adopt the ensemble method (Dietterich (2000)) to improve the performance of the beam search algorithm.

3.2.2 ATTENTION

Although the encoder-decoder model has been widely applied to many problems, its biggest limitation is that the only link between encoding and decoding is a fixed-length semantic vector $C$. This semantic vector may not fully represent the information of the whole sequence, and the information carried by the first inputs is diluted by the later inputs. The accuracy of the model also suffers from this inadequate representation of the input sequence. To solve this problem, Bahdanau et al. (2014) proposed an attention model that generates an "attention range" when producing an output, indicating which parts of the input sequence should be attended to for the next output; the next output is then generated according to this region of interest. We therefore introduce an attention mechanism between the encoder and the decoder. In standard attention (Bahdanau et al. (2014)), we compute a context vector candidate $\hat{c}_t$ as:

$$\hat{c}_t = \sum_{j=1}^{T_x} \alpha_{tj} h_j \qquad (3)$$

where $h_j$ is the $j$-th hidden state (annotation vector) of the encoder and $\alpha_{tj}$ is the correlation coefficient. We use a neural network to approximate the attention distribution $\alpha_{tj}$:

$$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T_x} \exp(e_{tk})}, \qquad e_{tj} = v_{att}^{T} \tanh(W_{att} s_t + U_{att} h_j) \qquad (4)$$

where $e_{tj}$ denotes the energy of annotation vector $h_j$ at time step $t$, conditioned on the current decoder LSTM hidden state $s_t$.
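To make Eqs. (3)-(4) concrete, here is a minimal PyTorch sketch of one additive-attention step, assuming a single decoder state and a batch of encoder annotations (in the paper, the annotations would come from the bi-directional LSTM run over the (x, y, w, h, one-hot) symbol vectors). Tensor shapes, module names and dimensions are our own illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: e_tj = v^T tanh(W s_t + U h_j), as in Eq. (4)."""
    def __init__(self, dec_dim: int, enc_dim: int, att_dim: int):
        super().__init__()
        self.W = nn.Linear(dec_dim, att_dim, bias=False)  # W_att
        self.U = nn.Linear(enc_dim, att_dim, bias=False)  # U_att
        self.v = nn.Linear(att_dim, 1, bias=False)        # v_att

    def forward(self, s_t: torch.Tensor, h: torch.Tensor):
        # s_t: (batch, dec_dim), current decoder hidden state
        # h:   (batch, T_x, enc_dim), encoder annotations h_j
        energy = self.v(torch.tanh(self.W(s_t).unsqueeze(1) + self.U(h)))
        alpha = torch.softmax(energy.squeeze(-1), dim=-1)  # alpha_tj, Eq. (4)
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)     # c_t, Eq. (3)
        return context, alpha

# Toy usage: 9 detected symbols encoded into 256-d annotations.
att = AdditiveAttention(dec_dim=512, enc_dim=256, att_dim=256)
ctx, alpha = att(torch.randn(1, 512), torch.randn(1, 9, 256))
print(ctx.shape, alpha.shape)  # torch.Size([1, 256]) torch.Size([1, 9])
```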
4 EXPERIMENTS

To validate the effectiveness of the proposed method for printed mathematical expression recognition, we design a set of experiments to answer the following questions:
Q1: Is the improved YOLOv3 effective for locating and recognizing mathematical symbols?
Q2: How does the encoder-decoder analyze the 2D structure of MEs?
Q3: Does the proposed approach outperform the state of the art?

4.1 METRICS

Metric of the detection model: For detection tasks, IoU (Intersection-over-Union) is the standard performance metric. In this paper, detections are considered true or false positives based on IoU: a detection is considered correct if the IoU exceeds 50% and the classification is right.

Metric of the translation model: For the translation model, we use accuracy and Word Error Rate (WER) (Klakow & Peters, 2002) as performance metrics. Accuracy is measured by the expression recognition rate (ExpRate), i.e., the percentage of predicted mathematical expressions exactly matching the ground truth, which gives a useful global performance metric. A code sketch of both metrics is given at the end of this section.

4.2 DATA

Data with classification and position information is expensive to annotate and rare. To address this problem, we developed a tool that provides detailed ground-truth annotations, which are cheap and more precise than manual annotations. After filtering some inappropriate equations from IM2LATEX-100K (Deng et al., 2016b), a data set containing 81214 equations is obtained and used to generate mathematical expression images. Besides, 6000 background images are collected and split into two parts: one part contains 4000 images, which are used as backgrounds for the training set, the validation set and the first test set; the other part contains 2000 images, which are used as backgrounds for the second test set. The two test sets are generated from the same LaTeX equations but with different backgrounds. The first test set, called the homologous test set, has the same distribution as the training set. The second test set, called the non-homologous test set, does not have the same distribution as the training set and is used to evaluate the generalization ability of the models. For details on data generation, see Appendix A.

4.3 DETECTION MODEL EXPERIMENTS

We train the model using minibatch SGD on four GPUs, with 0.9 momentum, 0.0005 weight decay and batch size 64. We first train the model with a learning rate of 0.0005 for 20000 iterations, and then continue training for 4000 iterations at 0.0001 and 1000 iterations at 0.00001. The anchors we use are (10,10), (6,40), (40,40), (40,6), (80,80), (12,100), (120,120), (100,12) and (228,228). In particular, the anchor (10,10) is designed to detect small symbols like '·' and '.', and the anchors (6,40), (40,6), (12,100) and (100,12) detect horizontal or vertical lines, like '-', '—' or '|'. Figure 3 gives an example of the YOLOv3 detections. Testing on 5000 images of the homologous test set, there are 4822 images in which all the symbols are correctly located and recognized. Besides, location errors occur in 125 images and classification errors occur in 62 images, among which 9 images contain both location and classification errors. Testing on 5000 images of the non-homologous test set, there are 4813 images in which all the symbols are correctly located and recognized. In addition, location errors and classification errors occur in 126 and 72 images, respectively; 11 images contain both location and classification errors.
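To make the metrics of Section 4.1 concrete, the following sketch computes ExpRate and a token-level WER over predicted and ground-truth LaTeX token sequences. The tokenization and the edit-distance implementation are our own illustrative assumptions; the paper does not specify them.

```python
def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Levenshtein distance between two token sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

def exprate_and_wer(refs: list[list[str]], hyps: list[list[str]]):
    """ExpRate: fraction of predictions exactly matching the ground truth.
    WER: total token edit distance divided by total reference length."""
    exact = sum(r == h for r, h in zip(refs, hyps))
    errors = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    tokens = sum(len(r) for r in refs)
    return exact / len(refs), errors / tokens

# Toy usage with pre-tokenized LaTeX sequences.
refs = [["\\frac", "{", "m", "c", "^", "2", "}", "{", "k", "}"]]
hyps = [["\\frac", "{", "m", "c", "^", "2", "}", "{", "h", "}"]]
print(exprate_and_wer(refs, hyps))  # (0.0, 0.1)
```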
Table 1 shows the testing results on both the homologous and the non-homologous test sets, which are almost identical. Although the detection model has never 'seen' the background images of the non-homologous test set, it still achieves good performance. This demonstrates that the detection model has good generalization ability, which is very important in practical applications.

4.4 TRANSLATION MODEL EXPERIMENTS

We regard the conversion of symbol sequences with location information into LaTeX sequences as a translation process and solve this problem with the encoder-decoder model. As far as we know, we are the first to use the encoder-decoder model to solve this two-dimensional translation problem. As shown in Section 4.2, we have 81214 different LaTeX math equations: 71214 expressions for training, 5000 for validation and 5000 for testing. To train the encoder-decoder model, we run our YOLOv3 model on the training, validation and test datasets and store the position and classification information of the symbols in every image. After filtering the wrong cases, we obtained 69405 training expressions, 4826 validation expressions and 4822 test expressions. The encoder is a bi-directional LSTM. The hidden state of the decoder is of size 512 and that of the encoder LSTM is of size 256, the same as the configuration of the RNN layers in Im2LaTex (Deng et al., 2016b). The model was trained using SGD with batch size 64 for 120 epochs. The initial learning rate is 0.001 and is reduced to 0.0001 after 100 epochs. The input symbols are ordered by the upper-left corners of their boxes, from left to right and top to bottom. For example, the input order of the symbols is -, m, k, c, 2, =, √, 3, T for the expression $\frac{mc^2}{k} = \sqrt{3T}$.

Since we have only 69405 mathematical expressions to train the encoder-decoder model, avoiding overfitting is a great challenge. To address this problem, we use extensive data augmentation (a sketch of the ordering and augmentation steps follows at the end of this subsection). Firstly, we replace symbols with other symbols to obtain more mathematical expressions. For example, we replace the 'a', 'b', 'c' and '2' in $a^2 + b^2 = c^2$ with four other randomly chosen symbols 'e', 'g', 'l' and '4', thus obtaining a new equation $e^4 + g^4 = l^4$. For the position information, we introduce random scaling and translations. In each training epoch, up to 80% of the 69405 mathematical expressions are augmented by the strategy above. Data augmentation plays an important role in training the encoder-decoder model: the experiment shows that the augmentation strategy above improves ExpRate by 12.8%. Why does the encoder-decoder model benefit so much from the augmentation strategy? One reason may be that the data is not enough to train a good seq2seq model and the feature vectors (position + one-hot vectors) are too concise, containing no noise compared to the feature vectors extracted by the convolution layers in Im2LaTex (Deng et al., 2016b). Table 2 shows the performance of the standard attention model. On the homologous test set, the ExpRate is 78.0% and the WER is 4.4%. The result on the non-homologous test set is almost the same as that on the homologous test set. This is understandable because the input of the encoder-decoder model is the position and classification information of the symbols, not the image; obviously, changes in image style have little influence on the encoder-decoder model.
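Below is a minimal sketch of the input-ordering and augmentation steps described above. The tuple layout, the symbol pool and the jitter magnitudes are our own illustrative assumptions; the paper gives only the qualitative description.

```python
import random

# Each detected symbol: (x, y, w, h, label), with (x, y) the top-left corner.
Symbol = tuple[float, float, float, float, str]

def order_symbols(symbols: list[Symbol]) -> list[Symbol]:
    """Order by upper-left corner: left to right, ties broken top to bottom."""
    return sorted(symbols, key=lambda s: (s[0], s[1]))

def augment(symbols: list[Symbol], latex: list[str]):
    """Consistently rename letter/digit symbols in both the boxes and the
    target LaTeX tokens, then apply a random global scale and translation."""
    pool = list("abcdefghijklmnopqrstuvwxyz0123456789")
    present = sorted({s[4] for s in symbols if s[4] in pool})
    mapping = dict(zip(present, random.sample(pool, len(present))))
    scale = random.uniform(0.8, 1.2)          # illustrative jitter ranges
    dx, dy = random.uniform(-20, 20), random.uniform(-20, 20)
    new_syms = [(x * scale + dx, y * scale + dy, w * scale, h * scale,
                 mapping.get(c, c)) for (x, y, w, h, c) in symbols]
    new_latex = [mapping.get(tok, tok) for tok in latex]
    return new_syms, new_latex
```

Renaming symbols in the boxes and the target sequence with the same mapping keeps each augmented pair consistent, which is what lets the model see new "equations" without new renders.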
We inspect the alignment between the symbols (with position information) in a source sentence and the tokens in the generated LaTeX by visualizing the annotation weights $\alpha_{tj}$ from Eq. (4), as in Fig. 4. Each row of a matrix in each plot shows the weights associated with the annotations, so we can see which positions in the source sentence were most important when generating the target LaTeX. From Fig. 4 we can conclude that the alignment between the symbols with position information and the target LaTeX is largely monotonic: in general, the weights along the diagonal of each matrix are close to 1. However, there are also many non-trivial, non-monotonic alignments. The alignments clearly depend on the spatial relationships between the mathematical symbols, which can be horizontal, vertical, subscript, superscript or inside. Fig. 4(a) shows a strictly monotonic alignment since all the symbols are in a horizontal relationship. Vertical, subscript, superscript or inside relationships are usually ordered differently between the input symbols and the generated LaTeX. When a target LaTeX string contains several different spatial relationships, as in $\frac{mc^2}{k}$ (vertical and superscript), the weights appear somewhat disordered. In Fig. 4(d), the symbol 'k' of the input sequence comes before the symbol 'c', but the encoder-decoder model was able to correctly align 'c' with the numerator and 'k' with the denominator, which shows that the model successfully captured the position information of the symbols. Figs. 4(c) and (d) demonstrate that the encoder-decoder model can handle complicated two-dimensional structures.

4.5 EVALUATION AS A WHOLE

The current state-of-the-art OCR-based mathematical expression recognition system is Im2LaTex (Deng et al., 2016b), an end-to-end recognition system that combines symbol recognition and structural analysis. In this part we run experiments comparing our model with Im2LaTex. To make the comparison fair, we downloaded the code they released on GitHub and trained it on the same data set. In Table 3, we can observe that our model achieves an ExpRate of 74.3% with a WER of only 0.048 on the homologous test set, which is slightly better than Im2LaTex. Table 4 shows the results on the non-homologous test set: the ExpRate of our model is 74.1%, 20.3 percentage points higher than Im2LaTex. The ≤1% error percentage (78.21%) and the ≤2% error percentage (81.5%) of our model are also much higher than those of Im2LaTex. By these indicators, our system outperforms Im2LaTex on the homologous test set and is evidently better than Im2LaTex on the non-homologous test set. Our model therefore generalizes better and is superior to Im2LaTex for recognizing MEs with real backgrounds.

5 CONCLUSIONS

In this study, we introduce a two-stage framework to recognize mathematical expressions. It is the first work that employs YOLOv3 to detect mathematical symbols and uses an attention-based seq2seq model to perform structural analysis. We demonstrate through experimental results that the novel two-stage framework has better generalization ability and performs better than the state-of-the-art methods in real scenarios. In future work, we plan to further improve the accuracy of the parser by adding more data and introducing more advanced mechanisms, thereby improving the accuracy of the whole mathematical formula recognition system.

A DATA

A.1 BACKGROUND IMAGES
Figure 5: Background images ((a) bg1, (b) bg2, (c) bg6)

To favour variety, 6,000 background images were collected from printed articles, books, examination papers and so on: we photographed these materials and cropped only the non-text areas of the photos as background images. Figure 5 shows some examples of the background images.

A.2 DATA GENERATION

There are two kinds of mathematical symbols: single-component symbols (like 'a', 'b', 'c', etc.) and multi-component symbols (like 'i', 'j', '=', etc.). To obtain the position information, the multi-component symbols are rendered in specific colors. The single-component symbols are then segmented by connected components and the multi-component symbols by their colors. After obtaining the position information, the symbols are cropped and then classified by a neural network. Eventually, we obtain a dataset of 81214 images with the corresponding location and category information of the symbols. Finally, the images are binarized and blended into the background images using Poisson image editing (Pérez et al., 2003), as sketched below. Figure 6 gives an example of the data generation process.
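The following OpenCV sketch illustrates the segmentation-and-blending pipeline described above, assuming a rendered formula image in which the multi-component symbols were drawn in known colors. The file names, the color table and the thresholds are our own illustrative assumptions.

```python
import cv2
import numpy as np

formula = cv2.imread("formula.png")        # rendered ME (hypothetical file)
background = cv2.imread("background.png")  # cropped non-text photo region

# 1) Recover each multi-component symbol by matching its rendering color.
color_table = {"=": (0, 0, 255), "i": (0, 255, 0)}  # BGR, illustrative
boxes = []
for label, bgr in color_table.items():
    mask = cv2.inRange(formula, np.array(bgr), np.array(bgr))
    if mask.any():
        x, y, w, h = cv2.boundingRect(mask)
        boxes.append((x, y, w, h, label))

# 2) Single-component symbols: binarize and take connected components.
gray = cv2.cvtColor(formula, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                      # component 0 is the background
    x, y, w, h, _ = stats[i]
    boxes.append((x, y, w, h, None))       # class assigned later by a classifier

# 3) Blend the binarized formula into the background with Poisson editing.
center = (background.shape[1] // 2, background.shape[0] // 2)
src = cv2.cvtColor(cv2.bitwise_not(binary), cv2.COLOR_GRAY2BGR)  # dark symbols
mask = np.full(binary.shape, 255, dtype=np.uint8)
composite = cv2.seamlessClone(src, background, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("sample.png", composite)
```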
1. What is the main contribution of the paper regarding converting images of math expressions into LaTeX?
2. What are the strengths of the proposed two-stage approach compared to end-to-end models?
3. Do you have any concerns about the applicability and interest of the paper for the ICLR community?
4. Are there any specific improvements or modifications that can be made to the proposed model or experimental design?
5. How does the model perform on the original IM2LATEX-100K problem?
6. Can you provide examples of scenarios where the proposed model outperforms Deng's model, and vice versa?
7. Did you consider using a soft encoding of character identity instead of a one-hot representation?
Review
This paper considers the problem of converting an image of a math expression into LaTeX. The authors note that while the model proposed in Deng et al. works well on the IM2LATEX-100K dataset, it doesn't generalize well to equations in real-world settings, such as a photograph or a scan of an equation. They propose an approach that breaks the problem into two steps. In the first step they detect all the characters in the image, identifying the character type and bounding box for each. In the second step they use an encoder/decoder (LSTM/LSTM with attention) model to translate this sequence of character encodings into a LaTeX sequence. They create a new dataset of LaTeX equations rendered on backgrounds sampled from real photographs of books and papers, and split this into a training set and a "homologous" test set. They find the performance of their two-stage model is a bit better than Deng's model on this test set. They then create a "non-homologous" test set, in which the test equations are rendered on a new set of backgrounds, unseen in training. On this test set, the model in the paper performs essentially the same as on the homologous dataset, while the Deng model performs substantially worse. The conclusion is that the two-stage approach creates a model that is much more robust to the appearance of the equation.

I think the main takeaway from this paper is that there can be disadvantages to an end-to-end approach. To achieve their improvement, the authors used their insight that there is an intermediate representation that summarizes all relevant information (the sequence of characters and positions), together with the ability to generate a new training set automatically to learn this intermediate representation. I think this is an interesting case study in applied machine learning, but I don't think it will be of enough general use or interest to the ICLR community to merit acceptance.

As an applications paper, I think there are several aspects that can be improved. Here are some specific questions and comments that may help a further iteration of this paper:
- You have examples of ME images from the real world, but you don't have any examples of your artificial "real world" equations, overlaid on your sampled backgrounds. Those would be helpful. Are any other modifications made to the equation to simulate a real-life picture or scan, such as color or darkness distortions, angles, etc.?
- How does your proposed model perform on the original IM2LATEX-100K problem?
- How well does the encoder-decoder model do on a gold encoding of the input? It would be nice to separate the errors into translation errors vs. object detection errors.
- Can you give examples of scenarios where your model got things right and Deng's model did not, and vice versa? Is there any interpretation of why each model does better or worse on various examples? I imagine the model may have more difficulty with equations where one needs to refer to characters on the left of the image quite late in the LaTeX expression, for example fractions with long numerator expressions and cases environments.
- Did you try or consider using a soft encoding of the character identity, instead of a one-hot? Perhaps there are context clues that the encoder/decoder model could use to disambiguate between a 1 and an l, for example.
ICLR
1. What is the focus of the paper regarding mathematical expression recognition?
2. What are the strengths and weaknesses of the proposed two-stage pipeline?
3. How does the reviewer assess the novelty and significance of the proposed approach compared to prior works?
4. What are some concerns regarding the comparison with other methods in the field?
5. Are there any questions regarding the clarity, quality, and reproducibility of the paper's content?
Review
In this paper the authors propose a two-stage pipeline that aims to solve mathematical expression recognition. The approach uses the following stages: a detection stage based on YOLOv3, and a sequence-to-sequence stage. The authors compare their method against the Im2LaTeX approach (2016), which is an end-to-end pipeline, and show a significant improvement over that approach. However, this problem has been a standard task, addressed both for handwritten math expressions (the CROHME challenge of ICDAR) and for typeset formula detection and recognition. There has been much progress through these challenges, with various teams competing. A variety of approaches have been tried for this task, and unfortunately the present work has neither compared against nor evaluated on these approaches. The Im2LaTeX benchmark is quite old, and there have been numerous more recent works, some cited by the authors and more available from the challenges. The methods presented are also not novel: using YOLOv3 for detection and sequence-to-sequence models for parsing expressions are more or less standard approaches. Hence, the proposed work does not add a significant insight toward solving the problem.
ICLR
Title A TWO-STAGE FRAMEWORK FOR MATHEMATICAL EXPRESSION RECOGNITION Abstract Although mathematical expressions (MEs) recognition have achieved great progress, the development of MEs recognition in real scenes is still unsatisfactory. Inspired by the recent work of neutral network, this paper proposes a novel two-stage approach which takes a printed mathematical expression image as input and generates LaTeX sequence as output. In the first stage, this method locates and recognizes the math symbols of input image by object detection algorithm. In the second stage, it translates math symbols with position information into LaTeX sequences by seq2seq model equipped with attention mechanism. In particular, the detection of mathematical symbols and the structural analysis of mathematical formulas are carried out separately in two steps, which effectively improves the recognition accuracy and enhances the generalization ability. The experiment demonstrates that the two-stage method significantly outperforms the end-to-end method. Especially, the ExpRate(expression recognition rate) of our model is 74.1%, 20.3 percentage points higher than that of the end-to-end model on the test data that doesn’t come from the same source as training data. 1 INTRODUCTION Mathematical expressions (MEs) play an essential role in math, physics and many other fields. Recognizing mathematical expressions is receiving increasing attentions for application in digitization and retrieval of printed documents. The process of recognizing mathematical expressions is to convert mathematical expressions into LaTeX strings, which includes three stages: symbol segmentation, symbol recognition and structural analysis. We usually divide recognition of MEs into handwritten and printed domains. In the domain of printed MEs, researchers face three challenges(Anderson (1967); Belaid & Haton (1984)): the complicated two-dimensional structures, various styles of images in printed input and strong dependency on contextual information. Three major problems are involved in MEs recognition (Zanibbi et al. (2012); Mouchre et al. (2016)): symbol segmentation, symbol recognition, structural analysis. These problems can be solved sequentially or globally. Deng et al. (2016a) proposed an end-to-end formula recognition method for images generated directly from LaTeX code. For their method, a CNN is applied to extract visual features from the input images and then every row is encoded using a recurrent neural network (RNN). These encoded features are then used by an RNN decoder with a visual attention mechanism to produce final outputs. Their model achieved 77.46% accuracy on test data which had the same distribution as the training data. For convenience, we define homologous test data and non-homologous test data here. It is called homologous test data if the test data comes from the same source as the training data. Otherwise, It is called non-homologous test data. The result demonstrated that the model had good performance on homologous test data. However, in practice we found that the model had poor performance on non-homologous test data, that is, the generalization ability of the above model is very weak. In real scenes, the images may be of great variety, like a wide range of backgrounds, as is shown in Figure 1. It is impossible to predict all input cases and the test data may diverge from what the system has seen before. Therefore, the method above is not appropriate to be applied to recognize images from real scenes. 
It seems a necessary task to design a model that has strong generalization ability to recognize real MEs images. To improve the generalization ability, we decouple the feature extraction process and the translation process. In the feature extraction process, we use YOLOv3(Redmon & Farhadi, 2018) to locate and classify the symbols of images. Then the class and location information of each symbol is vectorized, which is used as input for the seq2seq model and translated into LaTeX strings. The two-stage approach has several benefits. Firstly, changes in the input images styles would have no influence on the encoder-decoder model, and YOLOv3 has good generalization ability(Redmon et al., 2015; Redmon & Farhadi, 2018), so the two-stage method would have better generalization ability than the end-to-end method(Deng et al., 2016b). Secondly, the feature vectors composed by position and classification information are much more concise than the feature vectors extracted directly by the convolutional layers(Deng et al., 2016b), which are easier to learn and get higher recognition rate. The main contributions of this paper can be summarized as: (1) A two-stage method for MEs recognition is proposed to decouple the feature extraction process and the translation process, which has better generalization ability and achieve better accuracy. (2) By concatenating position information and classification information into feature vectors, we successfully translate symbols with position information into LaTeX strings by the seq2seq model. To the best of our knowledge, we use the seq2seq model to solve structural analysis problem for the first time and achieve satisfactory results, which may accelerate progress in machine recognition of other two-dimensional languages. (3) We propose a method to automatically generate MEs images with position and classification information of the symbols, which avoid expensive manual annotation. 2 RELATED WORK The detection framework based on state-of-the-art convolutional neural network (CNN) can be divided into two categories: one-stage method and two-stage method. The typical two-stage methods include R-CNN(Girshick et al. (2014)), Fast R-CNN(Uijlings et al. (2013)), Faster RCNN (Ren et al. (2017)) and R-FCN(Dai et al. (2016)), which firstly utilize the region proposal generation algorithms to generate potential symbol regions and then perform classification on the proposed regions. These methods improve the accuracy, but yield slow processing speed. On the contrary, single-stage methods like SSD(Liu et al. (2016)) and YOLO(Redmon et al. (2015)), apply predefined sliding default boxes of different scales on one or multiple feature maps, thus keep balance between speed and accuracy. Aiming to structural analysis, some papers prefer approaches based on predefined grammars to solve the problem. For example, Lavirotte (1998) investigated graph grammar and graph rewriting as a solution to recognize two dimensional mathematical notations. Chan & Yeung (2001) developed a system based on definite clause grammars(DCG), which incorporated an error detection and correction mechanism into a parser. Huang (2007) proposed a structural analysis approach based on the Attribute String Grammar and the Baseline Tree Transformation approaches. lvaro et al. (2014) introduced hidden Markov models to recognize mathematical symbols and stochastic context-free grammar to model the relationship between these symbols. Yamamoto et al. (2006) et al. 
proposed a new two-dimensional structure model for mathematical expressions based on the concept of a Hidden Writing Area (HWA). Maclean & Labahn (2015) presented a system that captured all recognizable interpretations of the input and organized them in parse trees with a Bayesian scoring model. Álvaro et al. (2016) described a statistical, grammar-based framework that deals with structured problems using two-dimensional grammars. Some papers introduced end-to-end frameworks (Tao et al. (2013); Cireşan et al. (2010); Deng et al. (2016a)) based entirely on neural networks, which do not require a predefined math grammar. Zhang et al. (2017b) proposed a model named Watch, Attend and Parse (WAP), which has two components: a watcher and a parser. In Zhang et al. (2017a), the authors improved the CNN encoder by employing a novel architecture called densely connected convolutional networks (DenseNet) (Gao et al. (2016)).

3 NETWORK ARCHITECTURE

In this section, we give a detailed introduction to the two-stage framework proposed in this paper for printed mathematical expression recognition. In the first stage, the main task is to locate and recognize the math symbols of the input images with YOLOv3. In the second stage, the task is to translate the vectors containing symbol categories and positional information into LaTeX strings. We elaborate on the classic attention mechanism based on the encoder-decoder framework and explain the parsing process. As illustrated in Fig. 2, the detector is YOLOv3, which generates a series of symbol sequences with location information from whole ME images. This information, including symbol locations and classes, is then processed and converted into vectors that serve as input to the seq2seq model. Finally, the output of the decoder is a LaTeX string.

3.1 DETECTION: YOLOV3

Symbol detection in formula images is more complex than object detection in natural scenes due to the symbols' smaller contours and varied shapes. There are many small symbols, like '·', '−', and subscript and superscript symbols, which may be lost during detection. Besides, some symbols are very similar, like the digit '1' and the letter 'l', which may cause classification errors. Detection errors seriously affect the parsing in the next stage and thus reduce the final formula recognition accuracy. Among many object detection algorithms, YOLOv3 has proved to be highly effective at detecting small objects. YOLOv3 integrates advanced concepts from other object detection algorithms, such as feature pyramid networks (Lin et al., 2016) and the anchor box prediction mechanism (Redmon & Farhadi, 2017). It uses the entire image as input to the network, extracts features with darknet-53 without fully connected layers, and finally predicts the bounding box of each object and its category directly at the output layer. When predicting boxes, it extracts features from three different scales and merges up-sampled feature maps with earlier feature maps, thus achieving better performance on small objects. We adjust the sizes of the anchor boxes to reduce missed detections. In particular, anchors like (10,10), (6,40) and (40,6) are added to detect points and horizontal or vertical lines. Similar symbols like the digit '1' and the letter 'l' are almost impossible for the model to distinguish; such errors might be corrected if extra rules are added as a post-processing step.
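To make this post-processing idea concrete, the following is a minimal sketch of one possible contextual disambiguation rule. The rule and the disambiguate function are hypothetical illustrations of what such a step could look like, not the rules used in this work.

```python
# Hypothetical post-processing sketch: disambiguate '1' vs 'l' by context.
# A detected symbol is a (label, x, y, w, h) tuple.

def disambiguate(symbols):
    """Relabel '1'/'l' based on whether nearby symbols look alphabetic or numeric."""
    fixed = []
    for i, (label, x, y, w, h) in enumerate(symbols):
        if label in ("1", "l"):
            # Consider symbols lying roughly on the same baseline.
            neighbors = [s[0] for j, s in enumerate(symbols)
                         if j != i and abs(s[2] - y) < h]
            if any(n.isalpha() for n in neighbors):
                label = "l"   # alphabetic context: prefer the letter
            elif any(n.isdigit() for n in neighbors):
                label = "1"   # numeric context: prefer the digit
        fixed.append((label, x, y, w, h))
    return fixed
```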
3.2 PARSING: ENCODER-DECODER

Many grammar-based parsing algorithms have been proposed for structural analysis and perform well in several systems (Álvaro et al. (2014); Awal et al. (2014); Álvaro et al. (2016)). However, grammar-based methods require prior knowledge to define an ME grammar, and their complexity increases exponentially with the size of the predefined grammar. The encoder-decoder framework was first presented by Cho et al. (2014) for translating one language into another. Structural analysis parses symbols with position information into a LaTeX sequence, which is similar to translation; the only difference is that the former is two-dimensional while the latter is one-dimensional. By creatively concatenating position information and classification information into input vectors, we successfully employ the encoder-decoder model to perform structural analysis of MEs.

3.2.1 ENCODER-DECODER

The LSTM is an improved version of the simple RNN that alleviates the vanishing and exploding gradient problems (Bengio et al. (2002)), so we employ a bi-directional LSTM as the encoder and an LSTM as the decoder. In the encoding part, the input at each time step is a symbol with position information, represented by the feature vector (x, y, w, h, o-h), where (x, y, w, h) are the coordinates of the top-left corner, width and height of the rectangle containing the symbol, and o-h is a one-hot vector distinguishing the symbol from others. In the decoding stage, we aim to generate the most likely LaTeX string given the input feature vectors:

ŷ = argmax_y log P(y | x) (1)

The output sequence y is closely related to the semantic vector C generated by the encoder, and each y_t is also affected by the outputs generated at previous time steps, which can be described by the following formula:

P(y | C) = ∏_{t=1}^{T} p(y_t | {y_1, y_2, ..., y_{t−1}}, C) (2)

In addition, it is intuitive to adopt the ensemble method (Dietterich (2000)) to improve the performance of the beam search algorithm.

3.2.2 ATTENTION

Although the encoder-decoder model has been widely applied to many problems, its biggest limitation is that the only link between encoding and decoding is a fixed-length semantic vector C. This semantic vector may not fully represent the information of the whole sequence, and the information carried by early inputs is diluted by later inputs. Inadequate input sequence information also hurts the accuracy of the model. To solve this problem, Bahdanau et al. (2014) proposed the attention model, which generates an "attention range" when producing each output, indicating which parts of the input sequence should be attended to next, and then generates the next output according to the region of interest. We introduce such an attention mechanism between the encoder and decoder. In standard attention (Bahdanau et al. (2014)), we compute a context vector candidate ĉ_i as:

ĉ_i = ∑_{j=1}^{T_x} α_{ij} h_j (3)

where h_j is the hidden state of the encoder and α_{ij} is the correlation coefficient. We use a neural network to approximate the attention distribution:

α_{ti} = softmax(e_{ti}) = exp(e_{ti}) / ∑_{k=1}^{L} exp(e_{tk}), where e_{ti} = v_att^⊤ tanh(W_att h_t + U_att a_i) (4)

where e_{ti} denotes the energy of annotation vector a_i at time step t conditioned on the current LSTM hidden state prediction h_t, and v_att, W_att and U_att are learnable parameters of the attention network.
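To make Eq. (3)-(4) concrete, here is a minimal PyTorch-style sketch of the additive (Bahdanau) attention described above; module and tensor names are illustrative, not the exact implementation used in this work.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention as in Eq. (3)-(4): e = v^T tanh(W h + U a)."""

    def __init__(self, dec_dim, enc_dim, att_dim):
        super().__init__()
        self.W = nn.Linear(dec_dim, att_dim, bias=False)  # W_att
        self.U = nn.Linear(enc_dim, att_dim, bias=False)  # U_att
        self.v = nn.Linear(att_dim, 1, bias=False)        # v_att

    def forward(self, h_t, annotations):
        # h_t: (batch, dec_dim) current decoder hidden state
        # annotations: (batch, L, enc_dim) encoder outputs a_1..a_L
        e = self.v(torch.tanh(self.W(h_t).unsqueeze(1) + self.U(annotations)))
        alpha = torch.softmax(e.squeeze(-1), dim=1)       # attention weights
        context = (alpha.unsqueeze(-1) * annotations).sum(dim=1)  # Eq. (3)
        return context, alpha
```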
4 EXPERIMENTS

To validate the effectiveness of the proposed method for printed mathematical expression recognition, we design a set of experiments to answer the following questions: Q1 Is the improved YOLOv3 effective for locating and recognizing mathematical symbols? Q2 How does the encoder-decoder analyze the 2D structure of MEs? Q3 Does the proposed approach outperform the state of the art?

4.1 METRICS

Metric of the detection model: For detection tasks, IoU (Intersection-over-Union) is a standard metric to measure performance. In this paper, detections are counted as true or false positives based on IoU: a detection is considered correct if the IoU exceeds 50% and the classification is right. Metric of the translation model: For the translation model, we use accuracy and Word Error Rate (WER) (Klakow & Peters (2002)) as metrics. Accuracy is calculated as the expression recognition rate (ExpRate), i.e., the percentage of predicted mathematical expressions exactly matching the ground truth, which gives a useful global performance metric.

4.2 DATA

Data with classification and position information is expensive to annotate and rare. To address this problem, we develop a tool that provides detailed ground-truth annotations, which are cheap and more precise than manual annotations. After filtering some inappropriate equations from IM2LATEX-100K (Deng et al., 2016b), a data set containing 81214 equations is obtained to generate mathematical expression images. Besides, 6000 background images are collected and split into two parts: one part contains 4000 images, used as backgrounds for the training set, validation set and first test set; the other part contains 2000 images, used as backgrounds for the second test set. The two test sets are generated from the same LaTeX equations but with different backgrounds. The first test set, called the homologous test set, has the same distribution as the training set. The second test set, called the non-homologous test set, does not have the same distribution as the training set and is used to evaluate the generalization ability of the models. For details on data generation, see Appendix A.

4.3 DETECTION MODEL EXPERIMENTS

We train the model using minibatch SGD on four GPUs with 0.9 momentum, 0.0005 weight decay, and batch size 64. We first train the model with a learning rate of 0.0005 for 20000 iterations, then continue training for 4000 iterations at 0.0001 and 1000 iterations at 0.00001. The anchors we use are (10,10), (6,40), (40,40), (40,6), (80,80), (12,100), (120,120), (100,12) and (228,228). In particular, anchor (10,10) is designed to detect small symbols like '·' and '.', and anchors (6,40), (40,6), (12,100) and (100,12) detect horizontal or vertical lines, like '-', '—' or '|'. Figure 3 gives an example of YOLOv3 output. Testing on 5000 images of the homologous test set, all symbols are correctly located and recognized in 4822 images; location errors occur in 125 images and classification errors in 62 images, among which 9 images contain both kinds of errors. Testing on 5000 images of the non-homologous test set, all symbols are correctly located and recognized in 4813 images; location errors and classification errors occur in 126 and 72 images, respectively, with 11 images containing both.
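For reference, a minimal sketch of the IoU criterion used above to score detections, assuming axis-aligned boxes in the same (x, y, w, h) convention as the feature vectors of Section 3.2.1:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x, y, w, h) boxes, (x, y) = top-left."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))  # overlap width
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))  # overlap height
    inter = ix * iy
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

# A detection counts as correct if iou(pred, gt) > 0.5 and the class matches.
```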
Table 1 shows the testing results on both the homologous and non-homologous test sets, which are almost identical. Although the detection model has never 'seen' the background images of the non-homologous test set, it still achieves good performance. This demonstrates that the detection model has good generalization ability, which is very important in practical applications.

4.4 TRANSLATION MODEL EXPERIMENTS

We regard the conversion of symbol sequences with location information into LaTeX sequences as a translation process and solve this problem with the encoder-decoder model. As far as we know, we are the first to use the encoder-decoder model to solve this two-dimensional translation problem. As shown in Section 4.2, we have 81214 different LaTeX math equations: 71214 expressions for training, 5000 for validation and 5000 for testing. To train the encoder-decoder model, we run our YOLOv3 model on the training, validation and test datasets and store the position and classification information of the symbols in every image. After filtering the wrong cases, we obtain 69405 training expressions, 4826 validation expressions and 4822 test expressions. The encoder is a bi-directional LSTM. The hidden state of the decoder is of size 512 and that of the encoder LSTM of size 256, the same as the configuration of the RNN layers in Im2LaTex (Deng et al., 2016b). The model is trained using SGD with batch size 64 for 120 epochs. The initial learning rate is 0.001, reduced to 0.0001 after 100 epochs. The input symbols are ordered by the upper-left corners of their boxes, from left to right and top to bottom. For example, the input order of the symbols is -, m, k, c, 2, =, √, 3, T for the expression $\frac{mc^2}{k} = \sqrt{3T}$.

Since we have only 69405 mathematical expressions to train the encoder-decoder model, avoiding overfitting is a great challenge. To address this problem, we use extensive data augmentation. Firstly, we replace symbols with other symbols to obtain more mathematical expressions. For example, we replace the 'a', 'b', 'c' and '2' in $a^2 + b^2 = c^2$ with four other randomly chosen symbols 'e', 'g', 'l' and '4', obtaining the new equation $e^4 + g^4 = l^4$. For position information, we introduce random scaling and translations. In each training epoch, up to 80% of the 69405 mathematical expressions are augmented by the strategy above. Data augmentation plays an important role in training the encoder-decoder model: experiments show that the augmentation strategy improves ExpRate by 12.8%. Why does the encoder-decoder model benefit so much from this augmentation strategy? One reason may be that the data is otherwise not enough to train a good seq2seq model, and the feature vectors (position + one-hot vectors) are very concise, containing no noise compared with the feature vectors extracted by the convolutional layers in Im2LaTex (Deng et al., 2016b).

Table 2 shows the performance of the standard attention model. On the homologous test set, the ExpRate is 78.0% and the WER is 4.4%. The result on the non-homologous test set is almost the same. This is understandable because the input of the encoder-decoder model is the position and classification information of symbols, not the image itself; obviously, changes in image style have little influence on the encoder-decoder model.
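For concreteness, a minimal sketch of the input ordering and augmentation described above; the symbol pools and jitter ranges are illustrative assumptions, not the exact values used in this work.

```python
import random

LETTERS = list("abcdefghijklmnopqrstuvwxyz")
DIGITS = list("0123456789")

def order_symbols(symbols):
    """Order (label, x, y, w, h) symbols by the upper-left corner of their
    boxes, approximating the left-to-right, top-to-bottom rule of Sec. 4.4."""
    return sorted(symbols, key=lambda s: (s[1], s[2]))

def augment(symbols, p=0.8, jitter=0.05):
    """Consistently substitute letters/digits and jitter positions. The same
    substitution map must also be applied to the target LaTeX (not shown)."""
    if random.random() > p:        # augment up to a fraction p of expressions
        return symbols
    mapping = {}
    scale = random.uniform(1 - jitter, 1 + jitter)
    dx, dy = random.uniform(-5, 5), random.uniform(-5, 5)
    out = []
    for label, x, y, w, h in symbols:
        if label not in mapping:
            pool = LETTERS if label in LETTERS else DIGITS if label in DIGITS else None
            mapping[label] = random.choice(pool) if pool else label
        out.append((mapping[label], x * scale + dx, y * scale + dy,
                    w * scale, h * scale))
    return out
```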
We inspect the alignment between the symbols (with position information) in a source sentence and the words in the generated LaTeX by visualizing the annotation weights α_{ij} from Eq. (4), as in Fig. 4. Each row of a matrix in each plot indicates the weights associated with the annotations, showing which positions in the source sentence were more important when generating the target LaTeX. From Fig. 4 we conclude that the alignment between the symbols with position information and the target LaTeX is largely monotonic: in general, weights along the diagonal of each matrix are close to 1. However, there are also many non-trivial, non-monotonic alignments. The alignments clearly depend on the spatial relationships between the mathematical symbols, which may be horizontal, vertical, subscript, superscript or inside. Fig. 4(a) shows a strictly monotonic alignment since all the symbols are in horizontal relationships. Vertical, subscript, superscript and inside relationships are usually ordered differently between the input symbols and the generated LaTeX. When a target LaTeX contains several different spatial relationships, as in $\frac{mc^2}{k}$ (vertical and superscript), the weights appear somewhat disordered. In Fig. 4(d), the symbol 'k' of the input sequence comes before the symbol 'c', but the encoder-decoder model was able to correctly align 'c' with the numerator and 'k' with the denominator, showing that the model successfully captured the position information of the symbols. Fig. 4(c) and (d) demonstrate that the encoder-decoder model can handle complicated two-dimensional structures.

4.5 EVALUATION AS A WHOLE

The current state-of-the-art OCR-based mathematical expression recognition system is Im2LaTex (Deng et al., 2016b), an end-to-end recognition system that combines symbol recognition and structural analysis. In this part we run experiments comparing our model to Im2LaTex. To make a fair comparison, we downloaded the code they published on GitHub and trained it with the same data set. In Table 3, we observe that our model achieves an ExpRate of 74.3% with a WER of only 0.048 on the homologous test set, slightly better than Im2LaTex. Table 4 shows the results on the non-homologous test set: the ExpRate of our model is 74.1%, 20.3 percentage points higher than that of Im2LaTex. The ≤1% error percentage (78.21%) and ≤2% error percentage (81.5%) of our model are also much higher than those of Im2LaTex. From these indicators, our system outperforms Im2LaTex on the homologous test set and is evidently better on the non-homologous test set. Our model therefore generalizes better and is superior to Im2LaTex for recognizing MEs with real backgrounds.

5 CONCLUSIONS

In this study, we introduce a two-stage framework to recognize mathematical expressions. It is the first work that employs YOLOv3 to detect mathematical symbols and uses an attention-based seq2seq model to perform structural analysis. We demonstrate through experimental results that the novel two-stage framework has better generalization ability and performs better than the state-of-the-art method in real scenarios. In future work, we plan to further improve the accuracy of the parser by adding more data and introducing more advanced mechanisms, thereby improving the accuracy of the whole mathematical formula recognition system.

A DATA

A.1 BACKGROUND IMAGES
Figure 5: Background images. To favour variety, 6000 background images are collected from printed articles, books, examination papers and so on. We collected many printed articles, books and examination papers, took photos of them, and cropped only the non-text areas of the photos as background images. Figure 5 shows some examples of the background images.

A.2 DATA GENERATION

There are two kinds of mathematical symbols: single-component symbols (like 'a', 'b', 'c') and multi-component symbols (like 'i', 'j', '='). To get the position information, the multi-component symbols are rendered in specific colors. The single-component symbols are then segmented by connected components and the multi-component symbols by color. After obtaining the position information, the symbols are cropped and classified by a neural network. Eventually, we obtain a dataset with 81214 images and the corresponding location and category information of the symbols. Finally, the images are binarized and blended into the background images using Poisson image editing (Pérez et al., 2003). Figure 6 gives an example of the data generation process.
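As a rough illustration of the connected-component segmentation step above, the following sketch assumes OpenCV; the thresholding choice and noise filter are illustrative assumptions.

```python
import cv2

def symbol_boxes(image_path):
    """Segment single-component symbols via connected components on a
    binarized rendering; returns an (x, y, w, h) box per component."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, n):          # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 2:               # drop isolated noise pixels
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```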
1. What is the focus of the paper in terms of mathematical expression recognition? 2. What are the strengths of the proposed approach, particularly in its two-stage framework? 3. Do you have any concerns or suggestions regarding the paper's content, formulas, or references?
Review
Review This paper proposes to recognise a mathematical expression using a two-stage framework, including object detection by YOLOv3 and encoder-decoder based translation. The paper is written well and easy to read. In the experiments, the recognition and translation methods both work well on homologous and non-homologous test data. In particular, the proposed method improved the performance over the state-of-the-art method Im2LaTex. Additionally, the authors visualised the sample alignments of the encoder-decoder model, which is helpful for understanding the method. A few comments are as follows: 1) On Page 1, the first and second paragraphs both contain "symbol segmentation, symbol recognition and structural analysis". It looks like this framework is repeated again and again. 2) In formula (4), the reviewer did not see the explanation of "v_{att}^T" and "u_T". 3) Some references have only author and title information, without conference/journal information.
ICLR
Title Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning

Abstract Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from the previous tasks. To avoid the performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity, making it infeasible to meet the accuracy requirements of real-world applications. Specifically, all prior CL methods discard samples that overflow the EM and can never retrieve them for subsequent training steps, incurring a loss of information that exacerbates catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate forgetting by allowing CL to efficiently migrate samples between memory and storage without being hindered by the slow access speed of the storage. We call it Carousel Memory (CarM). As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.

1 INTRODUCTION

With the rising demand for realistic on-device machine learning, recent years have witnessed a novel learning paradigm, namely continual learning (CL), for training neural networks (NN) with a stream of non-i.i.d. data. In such a paradigm, the neural network is incrementally trained as new tasks (e.g., sets of classes) are inserted (Rebuffi et al., 2017). The NN model is expected to continuously learn new knowledge from new tasks over time while retaining previously learned knowledge, which is a closer representation of how intelligent systems operate in the real world. In this learning setup, knowledge should be acquired from new data not only in a timely manner but also in a computationally efficient manner. In this regard, CL is suitable for learning on mobile and IoT devices (Hayes et al., 2020; Wang et al., 2019).

However, CL faces a significant challenge in the notorious catastrophic forgetting problem: knowledge learned in the past fades away as the NN model continues to learn new tasks (McCloskey & Neal, 1989). Among many prior approaches to addressing this issue, episodic memory (EM) is one of the most effective (Buzzega et al., 2020; Chaudhry et al., 2019a;b; Lopez-Paz & Ranzato, 2017; Prabhu et al., 2020). EM is an in-memory buffer that stores old samples and replays them periodically while training models with new samples. EM needs a sufficiently large capacity to achieve a desired accuracy, and the required capacity may vary significantly since incoming data may contain a varying number of tasks and classes at different time slots and geolocations (Bang et al., 2021).
However, in practice, the size of EM is often quite small, bounded by limited on-device memory capacity. The limited EM size makes it difficult to store a large number of samples or scale up to a large number of tasks, preventing CL models from achieving high accuracy as training moves forward. To address the forgetting problem, we introduce a hierarchical EM method, which significantly enhances the effectiveness of episodic memory. Our method is motivated by the fact that modern mobile and IoT devices are commonly equipped with a deep memory hierarchy including small memory with fast access (50–150 ns) and large storage with slow access (25–250 µs), the latter typically orders of magnitude larger than the memory. Given these different hardware characteristics, the memory is an ideal place for accessing samples at high speed during training, promising short training time. In contrast, the storage is an ideal place for storing a significantly larger number of old samples and using them to greatly improve model accuracy. The design goal of our scheme, Carousel Memory or CarM, is to combine the best of both worlds: improve the effective episodic memory capacity by leveraging on-device storage without significantly prolonging the training time of traditional memory-based CL approaches. CarM stores as many observed samples as possible so long as it does not exceed a given storage capacity (rather than discarding those that overflow EM, as done in existing methods) and updates the in-memory EM while the model is still learning with samples already in EM.

One key research question is how to manage samples across EM and storage for both system efficiency and model accuracy. Here we propose hierarchical memory-aware data swapping, an online process that dynamically replaces a subset of in-memory samples used for model training with other samples stored in storage, with an optimization goal in two aspects. (1) System efficiency. Prior single-level memory-only training approaches promise timely model updates even in the face of real-time data that arrives with high throughput. Therefore, drawing old samples from slow storage must not incur significant I/O overhead that affects overall system efficiency, especially on mobile and IoT devices. (2) Model accuracy. CarM significantly increases the effective EM size, mitigating forgetting by preventing important information from being lost due to limited memory capacity. As a result, we expect our approach also improves model accuracy by exploiting the data samples preserved in the storage more effectively for training. To fulfill these competing goals, we design CarM from two different perspectives: the swapping mechanism (Section 3.1) and the swapping policy (Section 3.2). The swapping mechanism of CarM ensures that the slow speed of accessing the storage does not become a bottleneck of continual model training by carefully hiding sample-swapping latency through asynchrony. Moreover, we propose various swapping policies to decide which samples to swap and when, and incorporate them into a single component, namely the gate function. The gate function allows for swapping fewer samples, enabling CarM to operate with the low-I/O-bandwidth storage that is common in mobile and IoT devices. One major benefit of CarM is that it is largely complementary to existing episodic memory-based CL methods.
By exploiting the memory hierarchy, we show that CarM helps boost the accuracy of many existing methods, by up to 28.4% for Rainbow Memory (RM) (Bang et al., 2021) on the Tiny-ImageNet dataset (Section 4.1), and even allows them to retain their accuracy with much smaller EM sizes. With CarM as a strong baseline for episodic memory-based CL methods, some well-known algorithmic optimizations may need to be revisited to ensure that they are not actually at odds with data swapping. For example, we observe that iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020), which strongly depend on knowledge distillation for old tasks, can deliver higher accuracy with CarM by limiting the weight coefficient on the distillation loss to a small value when calculating the training loss. With CarM, such a weight coefficient need not be high or managed in a complicated way as done in prior work, because we can now leverage a large amount of data in storage (with ground truth labels) to facilitate training.

2 RELATED WORK

Class incremental learning. The performance of CL algorithms heavily depends on scenarios and setups, as summarized by Van de Ven et al. (van de Ven & Tolias, 2018). Among them, we are particularly interested in class-incremental learning (CIL), where task IDs are not given during inference (Gepperth & Hammer, 2016). Many prior proposals are broadly divided into two categories: rehearsal-based and regularization-based. In rehearsal-based approaches, episodic memory stores a few samples of old tasks to replay in the future (Bang et al., 2021; Castro et al., 2018; Chaudhry et al., 2018; Rebuffi et al., 2017). On the contrary, regularization-based approaches exploit the information of old tasks implicitly retained in the model parameters, without storing samples representing old tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Liu et al., 2018; Li & Hoiem, 2017; Lee et al., 2017; Mallya et al., 2018). As rehearsal-based approaches have generally shown better performance in CIL (Prabhu et al., 2020), we aim to alleviate their current drawback (i.e., limited memory space) by incorporating data management across the memory-storage hierarchy. The CIL setup usually assumes that the tasks contain disjoint sets of classes (Rebuffi et al., 2017; Castro et al., 2018; Gepperth & Hammer, 2016). More recent studies introduce methods to learn from a blurry stream of tasks, where some samples across tasks overlap in terms of class ID (Aljundi et al., 2019; Prabhu et al., 2020). Moreover, prior works can be classified as either offline (Wu et al., 2019; Rebuffi et al., 2017; Chaudhry et al., 2018; Castro et al., 2018), which allows a buffer to store incoming samples for the current task, or online (Fini et al., 2020; Aljundi et al., 2019; Jin et al., 2020), which has no such buffer; a few works consider both (Prabhu et al., 2020). Both online and offline methods can take advantage of CarM, as our work focuses on improving episodic memory with a storage device.

Episodic memory management. There are numerous episodic memory management strategies proposed in the literature (Parisi et al., 2018), such as herding selection (Welling, 2009), discriminative sampling (Liu et al., 2020), entropy-based sampling (Chaudhry et al., 2018) and diversity-based sampling (Kang et al., 2020; Bang et al., 2021). A number of works have been proposed to compose the episodic memory of representative and discriminative samples. Liu et al.
propose a strategy to store samples representing the mean and boundary of each class distribution (Liu et al., 2020). Borsos et al. propose a coreset generation method using cardinality-constrained bi-level optimization (Borsos et al., 2020). Cong et al. propose a GAN-based memory that perturbs the styles of remembered samples for incremental learning (Cong et al., 2020). Bang et al. propose a strategy to promote the diversity of samples in the episodic memory (Bang et al., 2021). These recent works improve the quality of the samples stored in the memory at the expense of excessive computation (Borsos et al., 2020) or the difficulty of training a generative model for perturbation (Cong et al., 2020). Interestingly, most of these strategies show marginal accuracy improvements over uniform random sampling despite their computational complexity (Chaudhry et al., 2018; Castro et al., 2018; Rebuffi et al., 2017). Other than sampling, there are works that generate samples of past tasks (Shin et al., 2017; Seff et al., 2017; Wu et al., 2018; Hu et al., 2019). Unlike these works, which address sampling efficiency, we focus on a systemically efficient method to manage samples across the system memory hierarchy.

Memory over-commitment in NN training. Prior work studies using storage or slow memory (e.g., host memory) as an extension of fast memory (e.g., GPU memory) to increase memory capacity for NN training (Rhu et al., 2016; Wang et al., 2018; Hildebrand et al., 2020; Huang et al., 2020; Peng et al., 2020; Jin et al., 2018; Ren et al., 2021). However, most of these works target optimizing conventional offline learning scenarios by swapping optimizer states, activations, or model weights between the fast memory and slow memory (or storage), whereas we focus on swapping samples between episodic memory and storage to tackle the forgetting problem in the context of continual learning. In a more general context, memory-storage caching has been studied to reduce memory and energy consumption for various applications (Ananthanarayanan et al., 2012; Oh et al., 2012; Zaharia et al., 2010), which is orthogonal to our work.

3 PROPOSED METHOD: CAROUSEL MEMORY

We describe how data swapping in CarM extends the current workflow of episodic memory (EM) in Figure 1. For ease of illustration, we assume that the input stream data is organized as consecutive tasks, but CL learners do not necessarily rely on boundaries between tasks to perform training and update EM. There are three common stages involved in traditional EM methods, which proceed in order: data incoming, training, and EM updating. This workflow corresponds to many existing methods including TinyER (Chaudhry et al., 2019b), CBRS (Chrysakis & Moens, 2020), iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020). We then add two additional key stages for data swapping: storage updating and storage sample retrieving.

• Data incoming I : The episodic memory maintains a subset of samples from previous tasks {T1, . . . , Ti−1}. When samples for a new task Ti arrive, they are first enqueued into a stream buffer and later exercised for training. Different CL algorithms require different amounts of samples to be enqueued for training. Task-level learning relies on task boundaries, waiting until all of Ti's samples appear (Rebuffi et al., 2017; Wu et al., 2019).
On the contrary, batch-level learning initiates the training stage as soon as a batch of Ti's samples of a pre-defined size is available (Chaudhry et al., 2019b; Chrysakis & Moens, 2020).

• Training T : The training stage combines old samples in EM with new samples in the stream buffer to compose a training bundle. The CL learner organizes the bundle into one or more mini-batches, where each mini-batch is a mixture of old and new samples. The mini-batch size and the ratio between the two types of samples within a mini-batch are configured by the learning algorithm. Typically, several mini-batches are constructed in task-level learning. Learners may go over multiple passes of a bundle, trading off computation cost for accuracy.

• EM updating M : Once the training stage is completed, samples in the stream buffer are no longer new and represent a past experience, requiring EM to be updated with these samples. EM may have enough space to store all of them, but if it does not, the CL method applies a sampling strategy like reservoir sampling (Vitter, 1985) or greedy-balancing sampling (Prabhu et al., 2020) to select a subset of samples from EM as well as from the stream buffer to keep in EM. All prior works "discard" the samples that are not chosen to be kept in EM.

• Storage updating S : CarM flushes the stream data onto the storage before cleaning up the stream buffer for the next data incoming step. No loss of information occurs if the free space available in the storage is large enough for the stream data. However, if the storage is filled due to lack of capacity, we must evict victim samples from the storage. In this case, we randomly choose samples to evict for each class while keeping the in-storage data class-balanced.

• Storage sample retrieving R : With the large number of samples maintained in the storage, data swapping replaces in-memory samples with in-storage samples during training to exercise the abundant information preserved in the storage regarding past experiences. CarM collects various useful signals for each in-memory sample used in the training stage and determines whether to replace that sample or not. This decision is made by our generic gating function, which selects a subset of the samples for replacement at effectively little runtime cost.

Since old samples for training are drawn directly from EM and a large pool of samples is always kept in the storage E_S, the training phase is forced to select samples within a boundary restricted by the size of E_M. Continual learning with data swapping, which optimizes model parameters θ for old and new samples x (and corresponding labels y), can hence be formulated as:

argmin_θ Σ_{task_id=1}^{i} E_{(x,y)∼E_S∪T_i} [ L(f(x, θ), y) ], where (x, y) ∈ E_M. (1)

3.1 MINIMIZING DELAY TO CONTINUAL MODEL TRAINING

The primary objective in our proposed design is hiding the performance interference caused by data swapping so that CarM incurs low latency during training. To that end, we propose two techniques that encompass the in-storage sample retrieval and EM updating stages.

Figure 2: Training time reduction with asynchronous sample retrieval.

Asynchronous sample retrieval. Similar to conventional learning practice, CarM maintains fetch workers performing data decoding and augmentation.
As shown in Figure 1, CarM has an additional swap worker dedicated to deciding which in-memory samples to evict and issuing I/O requests to bring new samples from storage into memory. In the CL workflow, the data retrieval stage R depends on the training stage T, since training triggers the replacement of an in-memory sample when it is used as training input. To illustrate, we assume that the system has a single fetch worker that pre-processes the training input bundle and creates N mini-batches from the bundle; this pre-processing is incurred every time a sample is fetched for training. The swap worker identifies samples in EM to be replaced after training mini-batch i (T_i^b) and then issues I/O requests to retrieve other samples from storage (R_i^b). If we want the next mini-batch training to exercise an EM completely refreshed with the replaced samples, executions of T^b and R^b must by definition be serialized, giving the sequence T_1^b → R_1^b → T_2^b → R_2^b → T_3^b → R_3^b, as shown in Figure 2 (Sync). However, committing to such strictly serialized execution slows down training significantly: e.g., the second mini-batch training T_2^b can start only after finishing T_1^b → R_1^b, which takes much longer than T_1^b alone with no retrieval of storage data, as in the traditional EM design. To prevent this performance degradation, CarM adopts asynchronous sample retrieval, which runs the retrieval step in parallel with subsequent training steps. With the asynchronous method, we keep the minimum possible dependency, as shown in Figure 2 (Async), with an arbitrary R_i^b not necessarily happening before T_{i+1}^b. Admittedly, this design choice delays applying in-storage samples to EM, making it possible for the next training steps to access some samples undergoing replacement. However, we found that such accesses do not occur frequently, and the delay does not nullify the benefit that comes from data swapping. In addition, when the swap worker retrieves in-storage samples and writes them to memory, it may interfere with fetch workers that attempt to read samples from memory for pre-processing. To mitigate such interference, CarM can opt for EM partitioning to parallelize read/write operations over independent partitions. With EM partitioning, only operations that access the same partition need coordination, achieving concurrency against operations that access other partitions.
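To illustrate this asynchrony, here is a minimal single-process sketch of a swap worker using Python threads and a queue; names such as swap_queue, em, and load_from_storage are illustrative assumptions, and the actual implementation details appear in Appendix A.1.

```python
import threading
import queue

EM_SIZE = 1000
em = [None] * EM_SIZE            # episodic memory slots
em_lock = threading.Lock()
swap_queue = queue.Queue()       # (em_index, storage_path) swap requests

def load_from_storage(path):
    """Placeholder for the slow storage read and decode; assumed given."""
    with open(path, "rb") as f:
        return f.read()

def swap_worker():
    """Refresh EM slots in the background so training never blocks on I/O."""
    while True:
        idx, path = swap_queue.get()       # issued after T_i^b, not awaited
        sample = load_from_storage(path)   # slow I/O overlaps with T_{i+1}^b
        with em_lock:                      # short, per-slot critical section
            em[idx] = sample
        swap_queue.task_done()

threading.Thread(target=swap_worker, daemon=True).start()
# Training loop (elsewhere): after each mini-batch, enqueue replacements via
# swap_queue.put((victim_index, storage_path)) and continue training.
```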
3.2 DATA SWAPPING POLICY BY A GATE FUNCTION

The gate function in Figure 1 is a core component of CarM for adjusting I/O traffic. The gate, guided by its decision logic, allows us to select a certain portion of the EM samples drawn in the training stage to swap out. Having this control knob is of great practical importance, as the maximum sustainable I/O traffic differs considerably among devices due to the different characteristics of their storage media (e.g., a high-bandwidth flash drive vs. a low-bandwidth magnetic drive). At the same time, the gate must remain effective under such partial data swapping in terms of accuracy in subsequent training steps. To facilitate this, we propose a sample scoring method that ranks the samples in the same mini-batch so that the training algorithm can decide at which point along the continuum of ranks to separate the samples to swap from the samples to keep in memory.

Score-based replacement. The score quantifies the relative importance of keeping a trained sample in EM with respect to the other samples in the same mini-batch. Intuitively, a higher score means that the sample ranks higher, so it is better "not" to replace it if we need to reduce I/O traffic, and vice versa. To this end, we define the gate function σ_i for the i-th sample x_i as σ_i = 1(s(x_i) > τ), where s(x_i) is a scoring function and τ is a scoring threshold, with both s(x_i) and τ between 0 and 1. The threshold is determined by the proportion of samples that we want to replace from EM with samples in storage, with computational efficiency in mind. It allows data swapping to match the I/O bandwidth available on the storage medium, preventing the system from over-subscribing the bandwidth (leading to I/O back-pressure and increased queueing time) or under-subscribing it (leaving storage data exploited less opportunistically).

Policies. We design several swapping policies driven by the sample scoring method, in the context of CL with data swapping, for the first time. Specifically, we propose the following three policies: (1) Random selects random samples to replace from EM. Its scoring function assigns 0 to a τ proportion of samples randomly selected from a mini-batch and 1 to the other samples. (2) Entropy collects two useful signals produced during training for each sample: prediction correctness and the associated entropy. This policy prefers to replace samples that are correctly predicted, because such samples may not be very beneficial for improving the model in the near future. Furthermore, within this group, a sample with a lower entropy value than the others is predicted with relatively stronger confidence, making it a better candidate for replacement. By contrast, for samples that are incorrectly predicted, this policy prefers "not" to replace samples that exhibit lower entropy, i.e., incorrect predictions with stronger confidence, since they may take longer to be predicted correctly. Thus, the scoring function s(x_i) for a model f(·) is defined as:

s(x_i) = (1/U) [ g(x_i) H(f(x_i)) + (1 − g(x_i)) (U − H(f(x_i))) ], (2)

where g(x_i) = 1(f(x_i) = y_i), H(·) is the entropy function, and U is the maximum entropy value. (3) Dynamic combines Random and Entropy, performing the first half of the training passes over a given bundle with Random and the second half with Entropy. This policy is motivated by curriculum learning (Bengio et al., 2009), which gradually focuses training on harder samples as time elapses. It is indeed possible to come up with a number of replacement policies, of which this paper introduces a few concrete examples; designing the gate logic with more effective replacement policies is a promising research direction that we want to explore further in CarM.
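As a concrete reference for the Entropy policy, here is a minimal PyTorch-style sketch of the scoring in Eq. (2) and the resulting gate; names are illustrative, and the threshold τ is realized here by taking the lowest-scoring fraction of the mini-batch.

```python
import torch
import torch.nn.functional as F

def entropy_scores(logits, labels):
    """Eq. (2): s = (1/U) * [g*H + (1-g)*(U-H)], computed per sample."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    H = -(probs * log_probs).sum(dim=1)                 # per-sample entropy
    U = torch.log(torch.tensor(float(logits.size(1))))  # max entropy, log(C)
    g = (logits.argmax(dim=1) == labels).float()        # prediction correct?
    return (g * H + (1.0 - g) * (U - H)) / U

def gate(logits, labels, swap_ratio=0.5):
    """Return indices of mini-batch samples to swap out (the lowest scores)."""
    s = entropy_scores(logits, labels)
    k = int(swap_ratio * s.numel())                     # e.g., CarM-50
    return torch.argsort(s)[:k]                         # low score -> swap
```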
4 EXPERIMENTS

As CarM is broadly applicable to a variety of EM-based CL methods, we compare performance with and without CarM for each method in its own setup. We select seven methods, as shown in Table 1, to cover several aspects discussed in Section 3, such as the bundle boundary of learning (i.e., task-level vs. batch-level) and the number of passes taken per bundle. We discuss detailed reproducible settings in Section 5. For evaluation, we implement CarM in PyTorch 1.7.1 as a working prototype.

Datasets and metrics. Datasets include the CIFAR subset (CIFAR10 (C10) and CIFAR100 (C100)), the ImageNet subset (ImageNet-100 (I100), Mini-ImageNet (100 classes) (MI100), and Tiny-ImageNet (200 classes) (TI200)), and ImageNet-1000. We use two popular metrics, the final accuracy and the final forgetting (Chaudhry et al., 2018) averaged over classes, to reflect the performance of continual learning. Except for ImageNet-1000, which represents significantly large-scale training, the results are averaged over five runs, with each method assigning an equal number of classes to each task. We also measure training speed, from the time the training stage receives a bundle to the time it completes training on the bundle.

Baselines and architectures. On top of each CL method, we vary the amount of data swapping to study the effectiveness of CarM in detail. Unless otherwise stated, CarM-N means that our swap worker is configured to replace N% of EM samples drawn by the training stage. All experiments are based on either ResNet or DenseNet neural networks, all using the SGD optimizer as suggested by the original works, and use the entropy-based data swapping policy (i.e., Entropy) by default.

4.1 RESULTS

We compare the existing methods with two CarM versions: CarM-50, which performs partial swapping for half of the data, and CarM-100, which performs full swapping. Table 1 presents the performance in terms of top-1 accuracy (Acc.) and forgetting score (Fgt.), except for ER, iCaRL, and BiC, which measure top-5 accuracy on the ImageNet subset as done in the original works. First, CarM-100 improves accuracy remarkably over almost all of the methods under consideration, advancing the state-of-the-art performance on the CIFAR and ImageNet datasets. The results clearly show the effectiveness of using a large-capacity storage device to allow CL to exploit abundant information from previous tasks. Among the seven methods, CarM-100 delivers relatively larger accuracy gains for BiC (Wu et al., 2019), GDumb (Prabhu et al., 2020), DER++ (Buzzega et al., 2020), and RM (Bang et al., 2021), which take multiple passes over each training input. We believe that as long as old samples in EM are exercised more frequently for a new training bundle (i.e., new samples plus old samples), data swapping can subsequently bring in more diverse samples from storage to take advantage of. Regardless, although TinyER (Chaudhry et al., 2019b) is designed to take a single pass over new samples and thus exercises EM less aggressively, when combined with our techniques it improves accuracy by 7.72%, 11.79%, and 3.89% for CIFAR-100, Mini-ImageNet, and ImageNet-1000, respectively.

In comparison to CarM-100, CarM-50 obtains slightly lower accuracy across the models. We argue that such a small sacrifice in accuracy is worthwhile when storage I/O bandwidth is the primary constraint. In CarM-50, with 50% lower I/O traffic caused by data swapping, the accuracy compared to CarM-100 diminishes only by 1%, 0.6%, and 0.7% on average for the CIFAR subset, ImageNet subset, and ImageNet-1000, respectively, providing the ability to trade off a small accuracy loss for a substantial I/O traffic reduction. Similarly to accuracy, our data swapping approaches considerably reduce forgetting scores for the majority of the original methods. Perhaps the one method that shows less promising results in Table 1 is iCaRL (Rebuffi et al., 2017), where CarM occasionally makes the accuracy worse. From the in-depth investigation of iCaRL in Appendix A.4.1, we observe that using data swapping and knowledge distillation at the same time cannot deliver great accuracy.
That is, since knowledge distillation may not be fully compatible with data swapping, we revisit the distillation-based CL methods (i.e., iCaRL, BiC, and DER++) in detail when they are used along with data swapping.

Knowledge distillation on CarM. Note that the ways to distill the knowledge of old data in iCaRL, BiC, and DER++ are all different (see Appendix A.4.1). Briefly speaking, in calculating the loss for old data, iCaRL uses only soft labels obtained from an old classifier, whereas BiC and DER++ use both hard labels (i.e., ground truth) and soft labels. To investigate the effect of using these two types of loss, we first modify the loss function of iCaRL to resemble that of BiC, i.e., α × (soft-label loss) + (1 − α) × (hard-label loss), and then show accuracy over varying α values for all three distillation-based methods in Figure 3. For each method, we also include the accuracy when α increases incrementally over time, as done in BiC. The results show that the distillation-based methods with CarM significantly improve accuracy when α is very small. For iCaRL, compared to α = 1.0 (i.e., no hard-label loss, as in the original iCaRL), we obtain 5.4 points higher accuracy when α = 0.0 (i.e., no distillation) and 5.7 points higher accuracy when α = 0.1, which is the best result. Similarly, for BiC and DER++ with CarM, we find that the coefficient α applied to the soft-label loss need not be high (as in iCaRL) or managed in a complicated way (as in BiC) to achieve higher accuracy. Please refer to Figure 6 for the CIFAR subset results. Our best interpretation of the reason behind this is as follows. The key assumption of knowledge distillation is that once the model is trained with a new task, the newly learned knowledge generalizes the task well and can be effectively transferred to subsequent task training. However, if the model is not sufficiently generalized for old tasks, using distillation losses extensively might be adverse: data swapping attempts to correct decision boundaries, driven by abundant in-storage samples, to further generalize old tasks, but it is interfered with by the knowledge distilled from the old models.

Comparison of data swapping policies. We compare the performance of the three data swapping policies proposed in Section 3.2 under CarM-50. As shown in Table 2, both Entropy and Dynamic outperform Random by 0.16% on average for the ImageNet subset (see Table 10 for the CIFAR subset). We highlight that our major contribution to the gating mechanism is computational efficiency while matching the I/O bandwidth available on the storage medium, and the primary objective of exploring data swapping policies is to establish a good baseline for the gating mechanism. In this regard, we find that all three policies can serve as good baselines.

Impact on training speed. The delay optimization techniques in Section 3.1 are intended to incur an insignificant delay on training. To confirm this, we examine how the training speed of CarM-50 changes over the original memory-only methods, measured as the percentage increase in wall-clock time (i.e., actual time taken) with asynchronous (Async) vs. synchronous sample retrieval (Sync). To consider the most challenging scenario, we feed data into the stream buffer at a rate sufficient to keep training always busy with new mini-batches. As shown in Table 3, regardless of the EM method, the asynchronous version of CarM does not dramatically affect training speed on either the CIFAR or ImageNet subset.
By contrast, the synchronous version slows down training by up to 71.6% for the CIFAR subset and 62.0% for the ImageNet subset. Regardless of the version in use, in-memory samples undergoing data swapping are rarely drawn in the subsequent training steps, since the episodic memory is typically much larger than a training batch. Therefore, no difference in accuracy is observed between the two versions.

4.2 ABLATION STUDY

We present an ablation study using four methods (TinyER, BiC, DER++, and RM) that represent the state of the art in each type of EM methodology, using the CIFAR subset.

Size of EM. To confirm performance benefits across different memory sizes, we empirically evaluate CarM-50 over varying EM sizes and show the average accuracy in Figure 4(a). In all cases, CarM-50 outperforms the existing methods, with BiC, DER++, and RM showing relatively higher accuracy increases. Moreover, we observe that data swapping delivers better accuracy than conventional memory-only approaches using much smaller memory. For example, CarM-50 with DER++ at EM size 300 shows higher accuracy than pure DER++ at EM size 1000, and CarM-50 with TinyER at EM size 300 shows higher accuracy than pure TinyER at EM size 500. It turns out, therefore, that data swapping can help reduce the EM size without hurting the accuracy of existing methods, for both multi-pass and single-pass methods.

Data swapping ratio. We present results with different swapping ratios to show that our gate model indeed brings meaningful benefits under different I/O bandwidths. To that end, Figure 4(b) shows the change in accuracy when our gating policy decreases the swapping ratio down to 20% (CarM-20) or increases it up to 80% (CarM-80). Obviously, at the high swapping ratio of CarM-80, the accuracy across the four EM methods gets very close to the accuracy obtained with full swapping. A surprising result is that even at CarM-20, with 20% data swapping, the accuracy is comparable to that with higher swapping ratios. The results indicate that our method would be effective even when applied to a system with low-bandwidth storage.

Size of storage. As local storage cannot store all the past data, the system must discard some old samples once the storage is fully occupied. Figure 4(c) shows the accuracy degradation of CarM-50 when storage capacity is limited to 1.5–10× the EM size. The results show that data swapping improves performance over traditional approaches even when the storage capacity is only 50% larger than the EM size.

Large number of tasks. One pressing issue in CL is learning a large number of tasks, as this requires retaining knowledge learned in the remote past. To evaluate this aspect, we split CIFAR-100 (100 classes) into 50 tasks and run the four methods. As Figure 4(d) shows, CarM significantly outperforms the baselines, showing the potential for long-term continual model training.

5 CONCLUSION

We alleviate catastrophic forgetting by integrating traditional episodic memory-based continual learning methods with device-internal data storage, in a scheme named CarM. We design data swapping strategies to improve model accuracy by dynamically utilizing a large amount of past data available in the storage. Our swapping mechanism addresses the cumbersome performance hurdle incurred by slow storage access, and hence continual model training is not dramatically affected by data transfers between memory and storage.
We show the effectiveness of CarM using seven well-known methods on standard datasets, over varying memory sizes, storage sizes, and data swapping ratios.

REPRODUCIBILITY STATEMENT

We take the reproducibility of this research very seriously. The appendix hence includes the detailed information necessary for reproducing all the experiments performed in this work, as follows: • Appendix A.1 describes the implementation details of building CarM. • Appendix A.2 specifies dataset information used in the experiments (e.g., the number of tasks and the number of classes per task). • Appendix A.3 provides experimental details (e.g., metrics and hyper-parameters). • Appendix A.3.4 presents detailed specifications of the machines (e.g., GPU model) used in the experiments. Our source code is available at https://anonymous.4open.science/r/CarM, where we include running environments and configuration files for all the experiments, making it possible to reproduce the results reported in this paper with minimal effort.

ETHICS STATEMENT

All continual learning (CL) methods, including the proposed one, adapt and extend an already trained AI model to recognize streamed data better. CL methods will expedite the deployment of AI systems that help humans, through their versatility in adapting to new environments outside the factory or research lab. However, as all CL methods may suffer from adversarially streamed data as well as data bias, which may cause ethnic or gender bias issues, the proposed method is no exception. Although the proposed method has no intention of allowing such problematic cases, it may be exposed to such threats. Relentless efforts should be made to develop mechanisms that prevent such usage in order to make continuously updating machine learning models safer and more enjoyable for humans to use.

A APPENDIX

A.1 IMPLEMENTATION DETAILS

First, we describe implementation details of the two components of the proposed method: the swap worker and the episodic memory. Then, we describe details of the PyTorch integration of our implementation for ease of use.

Swap worker. CarM implements the swap worker through multiprocessing (pyt) in the Python standard library so that data swapping runs in parallel with PyTorch's default fetch workers dedicated to data decoding and augmentation. The swap worker uses asyncio (asy) to asynchronously load samples from storage to memory, effectively overlapping high-latency I/O operations with other CarM-related operations, such as image decoding, sample replacement on EM, and entropy calculation. The swap worker issues multiple data swapping requests without spinning on or being blocked by I/O. As a result, it is sufficient to have only one swap worker for CarM in the system.

Episodic memory. There are various ways to implement an EM shared between the fetch workers and the swap worker. The current system favors flexibility over performance, so we opt for implementing EM as a shared object provided by Manager (man) in the Python standard library (multiprocessing.managers), which is based on message passing with server-client semantics. In terms of flexibility, the Manager does not require the clients (i.e., fetch workers and swap workers) to define the exact data layout in the EM address space or to coordinate potential memory resizing to accommodate raw samples of different sizes (e.g., image resolutions). Hence, it is sufficient for the client workers to perform reads and writes on EM using indexes into the EM samples.
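As a rough illustration of this Manager-based design (a sketch under our own naming, not CarM's actual code), the shared EM can be exposed to worker processes as follows:

```python
from multiprocessing import Manager, Process

def swap_worker(em):
    """Replace an EM slot by index; the Manager proxy mediates access via
    message passing, so workers need not know the memory layout."""
    em[0] = b"new-sample-bytes"   # e.g., bytes retrieved from storage

if __name__ == "__main__":
    with Manager() as manager:
        em = manager.list([None] * 1000)  # shared episodic memory, index-based
        p = Process(target=swap_worker, args=(em,))
        p.start()
        p.join()
        print(em[0])  # b'new-sample-bytes'
```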
An alternative, higher-performance implementation would use multiprocessing.shared_memory, which enables direct reads and writes on EM by exposing a common region of memory to the processes. Despite its good performance, this method is less flexible, as all processes must know the exact data layout of the designated EM address range at runtime, thus requiring additional coordination for sample lookups and EM resizing. As our system evolves, we ultimately want to combine the best of both methods to deliver both flexibility and performance.

A.2 DATASETS
Each baseline is evaluated on the dataset used in its original work. The first rows of Table 4 and Table 5 show the datasets used in the CIFAR subset and the ImageNet subset, respectively, for all baselines. ImageNet-100 is an ImageNet ILSVRC2012 subset used in iCaRL, which contains images at the same resolution as those in the original ImageNet ILSVRC2012. The other datasets in the ImageNet subset have smaller image resolutions than the original (e.g., 64×64 for Tiny-ImageNet and 84×84 for Mini-ImageNet). In addition, we trained all baselines on ImageNet-1000 to verify the effectiveness of CarM on a large-scale dataset. We note that only ER, iCaRL, and BiC have been compared on the ImageNet-1000 dataset in the literature (Wu et al., 2019). Datasets are split as done in the original works. The second and third rows of Table 4 and Table 5 show detailed information on the splitting strategy. For all baselines, the ImageNet-1000 dataset is split into 10 tasks, each with 100 classes. Note that all datasets are non-blurry, meaning that each task consists of its own set of classes, and samples belonging to a previous task never appear in later tasks. Since the experimental results are highly sensitive to the class order of the continuous tasks to train, we follow the same class order used in the original works.

A.3 EXPERIMENTAL DETAILS
We present the effectiveness of the proposed CarM using seven CL methods in their own setups. This section discusses the detailed settings for each method so that the results are reproducible with our source code. We first describe the metrics used for the evaluations.

A.3.1 METRICS
Final accuracy. Final accuracy is the average accuracy over all classes observed after training on the last task is done.
Final forgetting. Forgetting indicates how much each task has been forgotten while training new tasks (Chaudhry et al., 2018). Forgetting for a task is calculated by comparing the best accuracy observed over task insertions to the final accuracy of the task when training is over. Final forgetting is the average forgetting across all tasks when training is over.

A.3.2 BASELINE DETAILS
• ER (Ratcliff, 1990) combines all samples in the current stream buffer and the current EM, and passes them to the model as a training set, i.e., a training bundle. No algorithmic optimization is applied to the model itself. We manage EM as a ring buffer that assigns EM space equally over all classes observed so far. We use the same hyper-parameters and loss function (binary cross-entropy loss) as used in iCaRL.
• iCaRL (Rebuffi et al., 2017) uses three algorithmic optimizations: distillation loss, herding, and nearest-mean-of-exemplars classification. To transfer the information of old tasks, iCaRL leverages a distillation loss using logits obtained from the most recently trained model for old classes: this loss information is treated as the ground truth for the old classes.
Herding is iCaRL's own EM management method, which populates EM with the samples whose feature vectors are closest to the per-class average feature vector over all stream data. iCaRL allocates EM space equally over all observed classes.
• TinyER (Chaudhry et al., 2019b) explores four EM management strategies, named reservoir, ring buffer, k-means, and mean of features. We adopt the reservoir strategy in the experiments because it shows the highest overall performance in the original paper (a minimal sketch of reservoir sampling appears at the end of this subsection). Similar to ER, TinyER retrieves old samples from EM without other optimizations on the model itself. TinyER is a batch-level learning method and focuses on an extremely online setup that takes a single pass over every streamed batch.
• BiC (Wu et al., 2019) runs bias correction on the last layer of the neural network, structured as a fully connected layer, to mitigate the data imbalance problem between old and new samples. The data imbalance is an inherent problem due to the limited size of EM, and it gets worse as the number of consecutive classes to train grows. Similar to iCaRL, BiC opts for a distillation loss, but its overall loss function is a mixture of distillation loss and cross-entropy loss, the latter calculated directly from some reserved samples for old classes.
• GDumb (Prabhu et al., 2020) is a simple rehearsal-based method that uses only the memory to train the model. Memory management is done via greedy balanced sampling, where GDumb tries to keep the classes balanced by evicting data of the majority class from EM. Unlike other methods, the model is trained from scratch for inference and then discarded every time the memory is updated. GDumb uses a cosine-annealing learning-rate scheduler and cross-entropy loss for gradient descent.
• DER++ (Buzzega et al., 2020) is a rehearsal-based method with knowledge distillation. Unlike other methods, this approach retains logits (along with samples) in EM for knowledge distillation. For knowledge distillation, DER++ computes the Euclidean distance between the logits stored in EM and the logits generated by the current network. To enable data swapping for DER++, we store the logits in the storage along with the samples.
• RM (Bang et al., 2021) uses the same backbone as GDumb, but improves the memory update policy and training method over GDumb. For memory management, RM calculates the uncertainty of each sample and tries to fill the memory with samples spanning a wide spectrum, from robust samples with low uncertainty to fragile samples with high uncertainty, while keeping the classes balanced. In addition, data augmentation (DA) is proposed to advance the original RM implementation. We use RM without DA to apply data swapping in our work, but we include some results of RM with DA in Section A.4.4.
Reproduction. We use the numbers reported in the original paper for DER++ on Tiny-ImageNet (Buzzega et al., 2020). For iCaRL, we believe we faithfully implement its details but could not reach the accuracy reported in the paper. As far as we know, there is no PyTorch source code that reproduces iCaRL on both the CIFAR-100 and ImageNet-100 datasets. In our implementation of iCaRL, we refer to a PyTorch version written by the PodNet authors (Douillard et al., 2020), as it achieves the most comparable results. We use the results obtained from the referred version rather than the reported results because, compared to the reported accuracy, the obtained accuracy is nearly the same for CIFAR-100 and higher for ImageNet-100.
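For reference, here is the reservoir-sampling sketch mentioned in the TinyER entry above. Reservoir sampling (Vitter, 1985) keeps every streamed sample in the buffer with equal probability; the function name and buffer representation are illustrative assumptions, not CarM's API.

```python
# Minimal sketch of reservoir sampling (Vitter, 1985), the EM update strategy
# adopted for TinyER. Every streamed sample is retained with equal probability.
import random

def reservoir_update(em, sample, n_seen, capacity):
    """em: list of stored samples; n_seen: number of samples streamed before
    this one; capacity: maximum EM size. Returns the updated buffer."""
    if len(em) < capacity:
        em.append(sample)              # buffer not yet full: always keep
    else:
        j = random.randint(0, n_seen)  # uniform over n_seen + 1 slots
        if j < capacity:
            em[j] = sample             # evict a uniformly chosen resident sample
    return em
```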
A.3.3 HYPER-PARAMETERS
We follow the hyper-parameters presented in the original works; we did not perform hyper-parameter search for the baselines. Table 6, Table 7, and Table 8 present all the details on the hyper-parameters. Although DER++ updates EM at the batch level and does not consider task boundaries, for datasets larger than MNIST the original paper chooses to take multiple passes per bundle. We therefore deem DER++ a task-level learning method as long as we use CIFAR-100 and Tiny-ImageNet as its training datasets. Here, TI and CI denote task-incremental learning and class-incremental learning, respectively. TI is an easy, simplified scenario, where the task ID is given at both training and inference; the model only has to classify the input among the classes that belong to the provided task ID. On the contrary, CI is the setting where the task ID is unknown during inference, which is a more realistic case than TI.

A.3.4 DETAILED SPECIFICATION OF MACHINES
Our experiments are performed on machines with the HW specifications presented in Table 9. These machines are also used to measure the impact of data swapping on training speed.

A.4 ADDITIONAL RESULTS
A.4.1 DISTILLATION ANALYSIS
Effectiveness of features of iCaRL on CarM. We explore iCaRL by measuring accuracy for all 32 possible combinations of its algorithmic features, i.e., knowledge distillation (D), herding (H), and nearest-mean-of-exemplars (N), along with our CarM-100 (F) or CarM-50 (P). In Figure 5, we show eight combinations that are sufficient to support three interesting findings. First, data swapping without distillation (orange bars) outperforms the other combinations, including pure iCaRL (blue and green bars). Second, for combinations with distillation, applying data swapping does not deliver great accuracy (D/H/N vs. the other two in blue bars). Finally, data swapping does not seem to necessitate sophisticated algorithmic features (F&H vs. D/H/N), suggesting a potential for model simplification around episodic memory.
Knowledge distillation on CarM. Figure 6 shows accuracy for the CIFAR subset while varying the α value in α × soft label loss + (1 − α) × hard label loss for iCaRL, BiC, and DER++. We can draw the same conclusions as discussed in the 'Knowledge distillation on CarM' paragraph of Section 4.1. Below, we describe how each distillation-based method can be transformed into the presented model for loss calculation. The original loss function of iCaRL (Rebuffi et al., 2017) is defined as:

$$\mathcal{L}_{\mathrm{icarl}}(x_i) = -\Big[\sum_{y=s}^{t}\big\{\delta_{y=y_i}\log g_y(x_i) + \delta_{y\neq y_i}\log\big(1-g_y(x_i)\big)\big\} + \sum_{y=1}^{s-1}\big\{q_i^y\log g_y(x_i) + \big(1-q_i^y\big)\log\big(1-g_y(x_i)\big)\big\}\Big] \tag{3}$$

where $q_i^y$ is the output of the old model, $g_y(x_i)$ is the output of the current model, $\{1, 2, \ldots, s-1\}$ is the set of old classes, and $\{s, \ldots, t\}$ is the set of new classes. For distillation, iCaRL uses a soft target from the previous model for the old classes over the entire current training set. As a result, training the current model relies heavily on the performance of the previous model. In particular, when data belonging to old classes is replayed, since the loss target is only the soft output from the previous model, a similar soft output from the old model is repeatedly distilled without the correct hard label. Due to such aggressive distillation, iCaRL cannot take advantage of CarM's data swapping, which enables replaying and training on abundant old data, as the distillation hinders positive decision-boundary corrections.
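For concreteness, below is a minimal PyTorch sketch of the loss in Eq. (3), assuming sigmoid outputs and per-class binary cross-entropy as in iCaRL; the function name and tensor layout are illustrative assumptions rather than CarM's code.

```python
# Minimal PyTorch sketch of the iCaRL-style loss in Eq. (3): binary
# cross-entropy with one-hot targets for new classes and soft targets
# (sigmoid outputs of the previous model) for old classes.
import torch
import torch.nn.functional as F

def icarl_loss(logits, labels, old_logits, num_old_classes):
    """logits: (B, t) current model outputs; old_logits: (B, t) previous model
    outputs; labels: (B,) integer class labels; the first num_old_classes
    columns correspond to the old classes {1, ..., s-1}."""
    targets = F.one_hot(labels, num_classes=logits.size(1)).float()
    with torch.no_grad():
        soft = torch.sigmoid(old_logits[:, :num_old_classes])  # q_i^y
    targets[:, :num_old_classes] = soft  # distill old classes, hard-label new
    return F.binary_cross_entropy_with_logits(logits, targets)
```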
In other words, samples wrongly predicted by the old model will continue to be predicted wrongly in the future even if CarM replays them several times. BiC and DER++ also use a distillation loss; however, unlike iCaRL, they provide a loss term whose target for old classes is the ground truth, i.e., the correct hard label. As a result, BiC and DER++ can reach higher accuracies with CarM. To produce Figure 3 and Figure 6, we modified the loss function of iCaRL by adding another binary cross-entropy term that uses the ground truth as its target, referred to as the hard label loss:

$$\mathcal{L}_{\mathrm{modified}}(x_i) = \alpha\,\mathcal{L}_{\mathrm{icarl}}(x_i) - (1-\alpha)\sum_{y=1}^{t}\big\{\delta_{y=y_i}\log g_y(x_i) + \delta_{y\neq y_i}\log\big(1-g_y(x_i)\big)\big\} \tag{4}$$

Since BiC and DER++ already have their own hard label loss, we did not modify their loss functions. Note that when α is set to 1.0 in BiC, the model is unable to train on any new data, which is an unrealistic situation, so we excluded the result for α = 1.0.

A.4.2 INCREMENTAL ACCURACY OF TABLE 1 IN THE MAIN PAPER
Incremental accuracy. We here report incremental accuracy as an additional performance metric. Incremental accuracy is the set of average accuracies over the classes observed so far, measured after training each task. Figures 11 and 12 show the incremental accuracy of Table 1 in the main paper. We also mark the accuracy reported in the original papers for iCaRL on CIFAR-100, iCaRL on ImageNet-100, BiC on ImageNet-100, and DER++ on Tiny-ImageNet. In general, the more tasks (classes) arrive, the larger the accuracy gap between the original method and CarM. This implies that running on CarM could better mitigate catastrophic forgetting in long-term training.

A.4.3 ABLATION STUDY ON ER, ICARL, AND GDUMB
We report the results of an ablation study on ER, iCaRL, and GDumb, which were not presented in the main paper. Figure 7 shows accuracy over varying EM sizes, Figure 8 shows accuracy over varying swapping ratios, Figure 9 shows accuracy over varying storage capacities, and Figure 10 shows accuracy when learning 50 tasks. In general, we found observations similar to those discussed in Section 4.2.
[Figure 8: Accuracy over varying data swapping ratios.]
[Figure 9: Accuracy over varying storage capacities.]

A.4.4 RESULTS OF RM WITH DATA AUGMENTATION
We implement RM with data augmentation and show the results in Table 11 using the CIFAR-10 dataset. Both CarM-100 and CarM-50 improve accuracy significantly over the baseline method.

A.4.5 CARM ON EMBEDDED DEVICE
We evaluate CarM on an NVIDIA Jetson TX2 to show its efficacy when running on a representative embedded AI computing device. Table 12 shows all baselines with CarM-50 and CarM-100 on the CIFAR subset. We see accuracy improvements with CarM similar to those observed in the main paper.

A.5 DISCUSSIONS
We have taken early steps towards leveraging both memory and storage to overcome the forgetting problem in CL while preserving the same training efficiency, which we find to be effective for the hardware we tested. However, as the characteristics of memory and storage may vary significantly, the storage access latency may still become a significant bottleneck unless carefully exploited.
Ideally, given the specs of a hardware configuration (e.g., computation, memory, and available I/O bandwidth), the swapping mechanism could choose an optimal policy that increases the effective memory capacity without adding latency. We leave this as an area of future work, which would make CarM more robust and resilient to variations across hardware settings.
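As a starting point for such a policy, a back-of-the-envelope calculation can already bound the swapping ratio by the available I/O bandwidth. The sketch below is purely illustrative: all parameter values and names are assumptions, not measurements from CarM.

```python
# Hypothetical bandwidth-aware gate: choose the largest swapping ratio whose
# storage traffic fits the I/O bandwidth budget. All values are illustrative.
def max_swap_ratio(batch_size, sample_bytes, steps_per_sec, bw_bytes_per_sec):
    traffic_at_full_swap = batch_size * sample_bytes * steps_per_sec  # CarM-100
    return min(1.0, bw_bytes_per_sec / traffic_at_full_swap)

# e.g., 128 samples/step at 3 KiB each, 10 steps/s, on a 2 MiB/s storage budget
ratio = max_swap_ratio(128, 3 * 1024, 10, 2 * 1024 * 1024)
print(f"sustainable swap ratio: {ratio:.2f}")  # ~0.53, i.e., roughly CarM-50
```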
1. What is the focus of the paper regarding continual learning?
2. What are the strengths of the proposed approach, particularly in terms of its clarity and experimental evaluation?
3. What are the weaknesses of the paper, especially regarding its contributions and novelty?
4. How does the reviewer assess the significance of the two main ideas presented in the paper?
5. Are there any questions or suggestions from the reviewer regarding the swapping policies and their performance?
Summary Of The Paper
This paper proposes the use of external storage in addition to the episodic memory in continual learning. The authors describe a procedure for executing training and data transfer between the RAM and the external storage asynchronously, and three swapping policies to decide which training instances to transfer from the RAM to the storage.
Review
The strengths of this paper are really obvious to anyone who reads it. The text is clear and well-structured; the main ideas presented in the paper can be understood effortlessly; and the experimental evaluation is quite thorough. (One minor issue with the text: there seems to be a broken citation in the 3rd line of page 4.) However, I am not convinced that this paper offers a significant contribution worthy of a publication. The two main ideas of the paper are i) the asynchronous swapping mechanism, and ii) three different swapping policies. Asynchronous programming is not something novel or particularly complicated, and, at least to me, it seems obvious that if you'd like to involve an external storage in replay-based continual learning, you would not execute training steps and I/O sequentially. It was also obvious that storing more data would lead to higher accuracy and less forgetting. Moreover, the obvious way to execute the swapping is at random. The authors do propose an entropy-based approach (which can be considered a minor contribution) and a combination of the random and entropy-based swapping. Still, examining Table 2, there does not appear to be a significant performance gap between the random approach and the other two.
Questions:
I expected DER++ and RM to perform significantly better on the ImageNet subset and ImageNet-1000 benchmarks. Do you have any idea why they perform so badly? Did you try swapping out the instances with the smallest loss? (This idea came to mind when I was reading the paper.)
By contrast, the synchronous version slows down training time up to 71.6% for CIFAR subset and 62.0% for ImageNet subset. Regardless of the version in use, in-memory samples undergoing data swapping are rarely drawn in the subsequent training steps since the size of an episodic memory size is typically much larger than the size of a training batch. Therefore, no difference in accuracy is observed between the two version. 4.2 ABLATION STUDY We present an ablation study using four methods (TinyER, BiC, DER++, and RM) that represent the state-of-the-art in each type of EM methodologies, using the CIFAR subset. Size of EM. To confirm performance benefits over using different memory sizes, we empirically evaluate CarM-50 over varying EM sizes and show the average accuracy in Figure 4(a). In all cases, CarM-50 outperforms the existing methods, with BiC, DER++, and RM having relatively higher accuracy increases. Moreover, we observe that data swapping delivers better accuracy over conventional memory-only approaches using much smaller memory. For example, CarM-50 with DER++ on EM size 300 shows higher accuracy than pure DER++ on EM size 1000, and CarM-50 with TinyER on EM size 300 shows higher accuracy than pure TinyEM on EM size 500. Therefore, it turns out that data swapping could help reduce the EM size without hurting the accuracy of existing methods in both the multi-pass method and single-pass method. Data swapping ratio. We present results with different swapping ratios to show that our gate model indeed brings out meaningful benefits over using different I/O bandwidths. To that end, Figure 4(b) shows the change in accuracy when our gating policy decreases the swapping ratio down to 20% (CarM-20) or increases it up to 80% (CarM-80). Obviously, at CarM-80 in high swapping ratio, the accuracy across the four EM methods gets very close to the accuracy obtained in full swapping. A surprising result is that even at CarM-20 in 20% data swapping, the accuracy is very comparable to when we allow higher data swapping ratios. The results indicate that our method would be effective even when applied to the system with low-bandwidth storage. Size of storage. As local storage cannot store all the past data, the system must discard some old samples once the storage is fully occupied. Figure 4(c) shows accuracy degradation in CarM-50 when storage capacity is limited to 1.5–10× of the EM size. The results show that data swapping improves performance over traditional approaches even with using 50% larger capacity for the storage. Large number of tasks. One pressing issue on CL is learning a large number of tasks as it is required to keep the knowledge learned in the remote past. To evaluate this aspect, we split CIFAR-100 (100 classes) into 50 tasks and run with the four methods. As Figure 4(d) shows, CarM significantly outperforms the baselines, showing the potential for long-term continual model training. 5 CONCLUSION We alleviate catastrophic forgetting by integrating traditional episodic memory-based continual learning methods with device-internal data storage, named CarM. We design data swapping strategies to improve model accuracy by dynamically utilizing a large amount of the past data available in the storage. Our swapping mechanism addresses the cumbersome performance hurdle incurred by slow storage access, and hence continual model training is not dramatically affected by data transfers between memory and storage. 
We show the effectiveness of CarM using seven well-known methods on standard datasets, over varying memory sizes, storage sizes, and data swapping ratios. REPRODUCIBILITY STATEMENT We take the reproducibility of the research very seriously. Appendix hence includes detailed information necessary for reproducing all the experiments performed in this work, as follows: • Appendix A.1 describes the implementation details of building CarM. • Appendix A.2 specifies dataset information used in the experiments (e.g., the number of tasks and the number of classes per task). • Appendix A.3 provides experimental details (e.g., metrics and hyper-parameters). • Appendix A.3.4 presents detailed specification of machines (e.g., GPU model) used in the experiments. Our source code is available at https://anonymous.4open.science/r/CarM, where we include running environments and configuration files for all the experiments that make it possible to reproduce the results reported in this paper with minimal effort. ETHICS STATEMENT All continual learning (CL) methods including the proposed one would adapt and extend the already trained AI model to recognize better with the streamed data. The CL methods will expedite the deployment of AI systems to help humans by its versatility of adapting to a new environment out of the factory or research labs. As all CL methods, however, would suffer from adversarial streamed data as well as data bias, which may cause ethnic, gender or biased gender issues, the proposed method would not be an exception. Although the proposed method has no intention to allow such problematic cases, the method may be exposed to such threats. Relentless efforts should be made to develop mechanisms to prevent such usage cases in order to make the continuously updating machine learning models safer and enjoyable to be used by humans. A APPENDIX A.1 IMPLEMENTATION DETAILS First, we describe implementation details about the two components of the proposed method: swap worker and episodic memory. Then, we describe the details about PyTorch integration of our implementation for ease of use. Swap worker. CarM implements the swap worker through multiprocessing (pyt) in popular Python standard library so that data swapping is running in parallel with PyTorch’s default fetch workers dedicated to data decoding and augmentation. The swap worker uses asyncio (asy) to asynchronously load samples from storage to memory, effectively overlapping high-latency I/O operations with other CarM-related operations, such as image decoding, sample replacement on EM, and entropy calculation. The swap worker issues multiple data swapping requests without spinning on or being blocked by I/O. As a result, it is sufficient to have only one swap worker for CarM in the system. Episodic memory. There are various ways to implement EM to be shared between fetch workers and the swap worker. The current system favors flexibility over performance, so we opt for implementing EM as a shared object provided by Manager (man) in the Python standard library (multiprocessing.managers), which is based on message passing in the server-client semantics. In terms of flexibility, the Manager does not require the clients (i.e., fetch workers and swap workers) to define the exact data layout in the EM address space or coordinate for potential memory resizing to accommodate raw samples of different sizes (e.g., image resolutions). Hence, it is sufficient for the client workers to perform reads and writes on EM using indexes on the EM samples. 
An alternative yet obviously higher-performance implementation would be using multiprocessing.shared_memory (sha), which enables direct reads and writes on EM by exposing a common region of memory to the processes. Despite good performance, this method is less flexible as all processes should be aware of the data layout in a designated EM address range precisely at runtime, thus requiring additional coordination for sample lookups and EM resizing. As our system evolves, we ultimately want to combine the best of both methods to promise both flexibility and performance. A.2 DATASETS Each baseline is evaluated on its own dataset used in the original work. The first rows of Table 4 and Table 5 show datasets used in the CIFAR subset and the ImageNet subset, respectively, for all baselines. ImageNet-100 is a ImageNet ILSVRC2012 subset used in iCaRL, which contains images in the same resolution as those in the original ImageNet ILSVRC2012. Other datasets used as the ImageNet subset have smaller image resolution than the original one (e.g., 64×64 for Tiny-ImageNet, 84×84 for Mini-ImageNet). In addition, we trained all baselines on ImageNet-1000 to verify the effectiveness of CarM on a large-scale dataset. We note that only ER, iCaRL, and BiC have been compared using the ImageNet-1000 dataset in the literature (Wu et al., 2019). Datasets are split as done in the original work. The second and third rows of Table 4 and Table 5 show the detailed information on the splitting strategy. For all baselines, the ImageNet-1000 dataset is split into 10 tasks, each with 100 classes. Note that all datasets are non-blurry, meaning that each task consists of its own set of classes and samples belonging to a previous task never appear in the next tasks. Since the experimental results are highly sensitive to the class order in the continuous tasks to train, we follow the same class order used in the original works. A.3 EXPERIMENTAL DETAILS We present the effectiveness of the proposed CarM using seven CL methods of their own setups. This section discusses detailed settings for each method so that the results are reproducible by our source code. We first describe the metrics used for the evaluations. A.3.1 METRICS Final accuracy. Final accuracy is an average accuracy over all classes observed after the last task training is done. Final forgetting. Forgetting indicates how much each task has been forgotten while training new tasks (Chaudhry et al., 2018). Forgetting for a task is calculated by comparing the best accuracy observed over task insertions to the final accuracy of the task when training is over. Final forgetting is an average forgetting across all tasks when training is over. A.3.2 BASELINE DETAILS • ER (Ratcliff, 1990) combines all samples in the current stream buffer and the current EM, and passes them over to the model as a training set, i.e., training bundle. There is no algorithmic optimization applied to the model itself. We manage EM as a ring buffer that assigns EM space equally over all classes observed so far. We use the same hyper-parameters and loss function (binary cross-entropy loss) as used in iCaRL. • iCaRL (Rebuffi et al., 2017) uses three algorithmic optimizations: distillation loss, herding, and nearest-mean-of-exemplar classification. To transfer the information of old tasks, iCaRL leverages the distillation loss using logits obtained from the most recently trained model for old classes: this loss information is considered as the ground truth for old classes. 
Herding is iCaRL's own EM management method, which populates the samples whose feature vectors are closest to the average feature vector over all stream data for each class. iCaRL allocates EM space equally over all observed classes.
• TinyER (Chaudhry et al., 2019b) explores four EM management strategies, named reservoir, ring buffer, k-means, and mean of features. We adopt reservoir in the experiments because it shows the overall highest performance in the original paper. Similar to ER, TinyER retrieves old samples from EM without other optimizations on the model itself. TinyER performs batch-level learning and focuses on an extremely online setup that takes a single pass over every streamed batch.
• BiC (Wu et al., 2019) runs bias correction on the last layer of the neural network, structured as a fully connected layer, to mitigate the data imbalance problem between old samples and new samples. The data imbalance is an inherent problem due to the limited size of EM, and it gets worse as we have a larger number of consecutive classes to train. Similar to iCaRL, BiC opts for a distillation loss, but its overall loss function is a mixture of the distillation loss and a cross-entropy loss that is directly calculated from some reserved samples for old classes.
• GDumb (Prabhu et al., 2020) is a simple rehearsal-based method that uses only the memory to train the model. Memory management is done via greedy balanced sampling, where GDumb tries to keep each class balanced by evicting samples of the majority class from EM. Unlike other methods, the model is trained from scratch for inference and then discarded every time the memory is updated. GDumb uses a cosine annealing learning-rate scheduler and the cross-entropy loss for gradient descent.
• DER++ (Buzzega et al., 2020) is a rehearsal-based method with knowledge distillation. Unlike other methods, this approach retains logits (along with samples) in EM for knowledge distillation. For knowledge distillation, DER++ computes the Euclidean distance between the logits stored in EM and the logits generated by the current network. To enable data swapping on DER++, we store the logits in the storage along with the samples.
• RM (Bang et al., 2021) uses the same backbone as GDumb, but it improves on GDumb's memory update policy and training method. For memory management, RM calculates the uncertainty of each sample and tries to fill the memory with samples spanning a wide spectrum, ranging from robust samples with low uncertainty to fragile samples with high uncertainty, while keeping the classes balanced. In addition, data augmentation (DA) is proposed to advance the original RM implementation. We use RM without DA when applying data swapping in our work, but we include some results of RM with DA in Section A.4.4.

Reproduction. We use the reported numbers from the original paper for DER++ on Tiny-ImageNet (Buzzega et al., 2020). For iCaRL, we believe we faithfully implement its details, but we could not reach the accuracy reported in the paper. As far as we know, there is no PyTorch source code that reproduces iCaRL on both the CIFAR-100 and ImageNet-100 datasets. In our implementation of iCaRL, we refer to a PyTorch version written by the PodNet authors (Douillard et al., 2020), as they achieve the most comparable results. We use the results obtained from the referred version rather than the reported results because, compared to the reported accuracy, the obtained accuracy is nearly the same for CIFAR-100 and higher for ImageNet-100.
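As promised in A.3.1, the sketch below shows one way to compute both metrics from a matrix of per-task accuracies; the bookkeeping matrix acc and the helper name are our own illustration, not part of the released code.

```python
import numpy as np

def final_metrics(acc):
    """acc[i][j]: accuracy on task j evaluated right after training task i (j <= i)."""
    acc = np.asarray(acc, dtype=float)      # shape (T, T); entries with j <= i are valid
    T = acc.shape[0]
    final_acc = acc[T - 1].mean()           # average accuracy after the last task
    # Forgetting of task j: best accuracy observed before the final task,
    # minus the accuracy of task j once all training is over.
    forget = [acc[j:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]
    final_forget = float(np.mean(forget)) if forget else 0.0
    return final_acc, final_forget
```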
A.3.3 HYPER-PARAMETERS
We follow the hyper-parameters presented in the original works; we did not perform a hyper-parameter search for the baselines. Table 6, Table 7, and Table 8 present all the details on the hyper-parameters. Although DER++ updates EM at the batch level and does not consider task boundaries, for datasets larger than MNIST the original paper chooses to take multiple passes per bundle. We therefore deem DER++ a task-level learning method as long as we use CIFAR-100 and Tiny-ImageNet as its training datasets. Here, TI and CI denote task-incremental learning and class-incremental learning, respectively. TI is an easy and simplified scenario, where the task ID is given at both training and inference; in the TI setting, the model only has to classify the input among the classes that belong to the provided task ID. On the contrary, CI is the setting where the task ID is unknown during inference, which is a more realistic case than TI.

A.3.4 DETAILED SPECIFICATION OF MACHINES
Our experiments are performed on machines with the HW specifications presented in Table 9. These machines are also used in measuring the impact of data swapping on training speed.

A.4 ADDITIONAL RESULTS

A.4.1 DISTILLATION ANALYSIS
Effectiveness of features of iCaRL on CarM. We explore iCaRL by measuring accuracy for all possible 32 combinations of its algorithmic features, i.e., knowledge distillation (D), herding (H), and nearest-mean-of-exemplars (N), along with our CarM-100 (F) or CarM-50 (P). In Figure 5, we show eight combinations that are sufficient to support three interesting findings. First, data swapping without distillation (orange bars) outperforms the other combinations, including pure iCaRL (blue and green bars). Second, for combinations with distillation, applying data swapping does not deliver great accuracy (D/H/N vs. the other two blue bars). Finally, data swapping does not seem to necessitate sophisticated algorithmic features (F&H vs. D/H/N), suggesting a potential for simplifying models built around episodic memory.

Knowledge distillation on CarM. Figure 6 shows accuracy for the CIFAR subset while varying α in α × (soft label loss) + (1 − α) × (hard label loss) for iCaRL, BiC, and DER++. We can draw the same conclusions as discussed in the 'Knowledge distillation on CarM' paragraph of Section 4.1. Below, we describe how each distillation-based method can be transformed into the presented model for loss calculation. The original loss function of iCaRL (Rebuffi et al., 2017) is defined as:

L_{icarl}(x_i) = -\Big[ \sum_{y=s}^{t} \big\{ \delta_{y=y_i} \log g_y(x_i) + \delta_{y \neq y_i} \log(1 - g_y(x_i)) \big\} + \sum_{y=1}^{s-1} \big\{ q_i^y \log g_y(x_i) + (1 - q_i^y) \log(1 - g_y(x_i)) \big\} \Big]   (3)

where q_i^y is the output of the old model, g_y(x_i) is the output of the current model, {1, 2, ..., s-1} is the set of old classes, and {s, ..., t} is the set of new classes. For distillation, iCaRL uses a soft target from the previous model for the old classes of the entire current training set. As a result, training the current model heavily relies on the performance of the previous model. In particular, when data belonging to old classes is replayed, since the target of the loss is only the soft output from the previous model, a similar soft output from the old model is likely to be distilled repeatedly, without the correct hard label. Due to such aggressive distillation, iCaRL cannot take advantage of CarM's data swapping, which enables replaying and training on abundant old data, and positive decision-boundary corrections are hindered.
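For reference, a minimal PyTorch-style sketch of Eq. (3) follows; the variable names and the sigmoid-on-logits convention are our assumptions rather than the released implementation. The modification of Eq. (4) below then simply mixes this loss with a hard-label BCE over all classes, weighted by α.

```python
import torch
import torch.nn.functional as F

def icarl_loss(logits, targets, old_logits, num_old):
    """Eq. (3): per-class BCE with hard one-hot targets for new classes
    (y >= num_old) and soft targets from the frozen previous model for
    old classes (y < num_old)."""
    target_probs = F.one_hot(targets, logits.size(1)).float()
    # Old classes: replace the one-hot target with the old model's soft output.
    target_probs[:, :num_old] = torch.sigmoid(old_logits[:, :num_old])
    return F.binary_cross_entropy_with_logits(logits, target_probs)
```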
In other words, samples wrongly predicted by the old model will be predicted wrongly in the future as well, even if they are replayed several times by CarM. BiC and DER++ also use a distillation loss; however, unlike iCaRL, they provide a loss term whose target for old classes is the ground truth, i.e., the correct hard label. As a result, BiC and DER++ can reach higher accuracies with CarM. To evaluate Figure 3 and Figure 6, we modified the loss function of iCaRL by adding another binary cross-entropy term that uses the ground truth as its target, which we refer to as the hard label loss, as follows:

L_{modified}(x_i) = α L_{icarl}(x_i) - (1 - α) \sum_{y=1}^{t} \big\{ \delta_{y=y_i} \log g_y(x_i) + \delta_{y \neq y_i} \log(1 - g_y(x_i)) \big\}   (4)

Since BiC and DER++ already have their own hard label loss, we did not modify their loss functions. Note that when α is set to 1.0, BiC is unable to train on any new data, which is an unrealistic situation, so we excluded the result for α = 1.0.

A.4.2 INCREMENTAL ACCURACY OF TABLE 1 IN THE MAIN PAPER
Incremental accuracy. We here report incremental accuracy as an additional performance metric. Incremental accuracy is the series of average accuracies over the classes observed so far, measured after training each task. Figure 11 and Figure 12 show the incremental accuracy of Table 1 in the main paper. We also mark the accuracy from the original papers for iCaRL on CIFAR-100, iCaRL on ImageNet-100, BiC on ImageNet-100, and DER++ on Tiny-ImageNet. In general, as more tasks (classes) arrive, the accuracy gap between the original methods and CarM grows. This implies that running on CarM can better mitigate catastrophic forgetting in long-term training.

A.4.3 ABLATION STUDY ON ER, ICARL, AND GDUMB
We report the results of an ablation study on ER, iCaRL, and GDumb, which were not presented in the main paper. Figure 7 shows accuracy over varying EM sizes, Figure 8 shows accuracy over varying swapping ratios, Figure 9 shows accuracy over varying storage capacities, and Figure 10 shows accuracy when learning 50 tasks. In general, we find observations similar to those discussed in Section 4.2.
[Figure 8: Accuracy (%) of ER, iCaRL, and GDumb over varying data swapping ratios (Original, 20%, 50%, 80%, 100%).]
[Figure 9: Accuracy (%) of ER, iCaRL, and GDumb over varying storage capacities (Original, 1.5X, 2X, 5X, 10X, All).]

A.4.4 RESULTS OF RM WITH DATA AUGMENTATION
We implement RM with data augmentation and show the results in Table 11 using the CIFAR-10 dataset. Both CarM-100 and CarM-50 improve accuracy significantly over the baseline method.

A.4.5 CARM ON EMBEDDED DEVICE
We evaluate CarM on an NVIDIA Jetson TX2 to show its efficacy when running on a representative embedded AI computing device. Table 12 shows all baselines with CarM-50 and CarM-100 on the CIFAR subset. We see accuracy improvements with CarM similar to those observed in the main paper.

A.5 DISCUSSIONS
We have taken early steps towards leveraging both memory and storage to overcome the forgetting problem in CL while preserving the same training efficiency, which we find to be effective for the hardware we tested. However, as the characteristics of memory and storage may vary significantly, the storage access latency may still become a significant bottleneck unless carefully exploited.
Ideally, given the specs of a hardware configuration (e.g., computation, memory, and available I/O bandwidth), the swapping mechanism could decide an optimal policy that increases the effective memory capacity without adding latency. We leave this as an area of future work, which would make CarM more robust and resilient to variations across hardware settings.
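As a crude illustration of what such a hardware-aware policy could compute, consider the back-of-envelope estimate below; the numbers and the helper are hypothetical and ours, not a mechanism from the paper.

```python
def sustainable_swap_ratio(io_mb_per_s, step_time_s, batch_size, sample_kb):
    """Upper bound on the fraction of a mini-batch that storage can refresh
    per training step without creating I/O back-pressure."""
    bytes_per_step = io_mb_per_s * 1e6 * step_time_s   # bytes readable per step
    bytes_per_batch = batch_size * sample_kb * 1e3     # bytes needed for full swapping
    return min(1.0, bytes_per_step / bytes_per_batch)

# Hypothetical device: a 90 MB/s eMMC drive, 50 ms per training step,
# batches of 128 JPEGs of roughly 30 KB each:
# 90e6 * 0.05 = 4.5 MB readable vs. 3.84 MB per batch -> ratio capped at 1.0,
# i.e., even full swapping (CarM-100) would be sustainable on this device.
```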
1. What is the focus of the paper regarding continual learning? 2. What are the strengths of the proposed CarM approach, particularly its novelty and practicality? 3. What are the weaknesses of the paper, especially regarding the comments in the Abstract and Introduction sections? 4. Do you have any concerns about the experimental results and their comparability to other approaches? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Review
Summary Of The Paper
This paper introduces a new episodic memory (EM) management technique, CarM, for continual learning. Unlike previous approaches, the authors propose a hierarchical memory architecture that consists of EM and external storage. At training time, the main model that learns tasks from stream data utilizes the EM to prevent catastrophic forgetting, and the samples to be swapped are chosen using a scoring function from the main model. In addition to proposing a new memory architecture, the authors adopt an asynchronous sample retrieval technique that can reduce training time compared to the synchronous version. In the experiments, the baseline methods equipped with CarM achieve much higher accuracy than their original versions, and in the ablation study section, the authors show the effectiveness of each component of CarM.

Review
Pros:
• Unlike previous approaches, the setting for CarM is more practical, and proposing the hierarchical memory architecture is novel.
• The data swapping policy using entropy and accuracy is quite impressive. This technique can also be used for measuring the importance of image samples in continual learning.
• Using CarM can increase the overall performance of baseline methods.

Cons:
Before talking about the cons, I respectfully disagree with some claims the authors make in the Abstract and Introduction sections. First, the authors say that EM is usually an in-memory buffer stored in RAM. However, in modern mobile devices, the memory size (e.g., RAM) is at most 4GB, while image resolutions are excessively high, so it is hard to store almost 20,000 images in RAM. Second, the authors say that swapping samples between RAM and storage does not incur significant I/O overhead that affects overall system efficiency. However, in operating systems, the more frequently the swap area of storage is used, the more system efficiency tends to decrease; to avoid this problem, most systems adopt a large amount of RAM. Therefore, if CarM always tries to swap samples between RAM and storage, this approach is highly similar to frequent use of the swap area in an operating system. Now, I'll specify the cons of this paper.
• The overall experimental results are quite impressive. However, I suspect the total memory size of CarM is much larger than that of the original baselines, so it is hard to know whether the increase in accuracy is due to the larger memory or the effectiveness of the algorithm. I think comparing CarM with baselines that use a large memory would be a fairer approach. For example, the accuracy of 'all' for DER++, BiC, and RM in Figure 4(c) is the same as in Table 1, which means DER++, BiC, and RM use all the training data, while the original versions only use a small subset of the training data.
• Deciding which samples to swap using the scoring function is a kind of memory retrieval technique. Therefore, it would be great to compare the proposed method to other retrieval methods (e.g., MIR [1]).

[1] Online Continual Learning with Maximally Interfered Retrieval, Aljundi et al., NeurIPS 2019
ICLR
Title
Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning

Abstract
Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from previous tasks. To avoid the performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity, making it infeasible to meet the accuracy requirements of real-world applications. Specifically, all prior CL methods discard samples overflowed from the EM and can never retrieve them back for subsequent training steps, incurring a loss of information that exacerbates catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAM but also in internal storage devices, which offer significantly larger capacity than RAM. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate forgetting by allowing CL to efficiently migrate samples between memory and storage without being hindered by the slow access speed of the storage. We call it Carousel Memory (CarM). As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.

1 INTRODUCTION
With the rising demand for realistic on-device machine learning, recent years have witnessed a novel learning paradigm, namely continual learning (CL), for training neural networks (NN) with a stream of non-i.i.d. data. In such a paradigm, the neural network is incrementally trained as new tasks (e.g., sets of classes) are inserted (Rebuffi et al., 2017). The NN model is expected to continuously learn new knowledge from new tasks over time while retaining previously learned knowledge, which is a closer representation of how intelligent systems operate in the real world. In this learning setup, knowledge should be acquired from new data not only in a timely manner but also in a computationally efficient one. In this regard, CL is suitable for learning on mobile and IoT devices (Hayes et al., 2020; Wang et al., 2019). However, CL faces significant challenges from the notorious catastrophic forgetting problem: knowledge learned in the past fades away as the NN model continues to learn new tasks (McCloskey & Neal, 1989). Among many prior approaches to addressing this issue, episodic memory (EM) is one of the most effective (Buzzega et al., 2020; Chaudhry et al., 2019a;b; Lopez-Paz & Ranzato, 2017; Prabhu et al., 2020). EM is an in-memory buffer that stores old samples and replays them periodically while training models with new samples. EM needs a sufficiently large capacity to achieve a desired accuracy, and the capacity needed may vary significantly since incoming data may contain a varying number of tasks and classes at different time slots and geolocations (Bang et al., 2021).
However, in practice, the size of EM is often quite small, bounded by limited on-device memory capacity. The limited EM size makes it difficult to store a large number of samples or scale up to a large number of tasks, preventing CL models from achieving high accuracy as training moves forward. To address the forgetting problem, we introduce a hierarchical EM method, which significantly enhances the effectiveness of episodic memory. Our method is motivated by the fact that modern mobile and IoT devices are commonly equipped with a deep memory hierarchy, including small memory with fast access (50–150 ns) and large storage with slow access (25–250 µs), where the storage is typically orders of magnitude larger than the memory. Given these different hardware characteristics, the memory is an ideal place to access samples at high speed during training, promising short training time. In contrast, the storage is an ideal place to store a significantly larger number of old samples and use them to greatly improve model accuracy. The design goal of our scheme, Carousel Memory or CarM, is to combine the best of both worlds: improving episodic memory capacity by leveraging on-device storage, but without significantly prolonging the training of traditional memory-based CL approaches. CarM stores as many observed samples as possible so long as it does not exceed a given storage capacity (rather than discarding those overflowed from EM as done in existing methods) and updates the in-memory EM while the model is still learning with samples already in EM.

One key research question is how to manage samples across EM and storage for both system efficiency and model accuracy. Here we propose hierarchical memory-aware data swapping, an online process that dynamically replaces a subset of in-memory samples used for model training with other samples stored in storage, with optimization goals in two aspects. (1) System efficiency. Prior single-level, memory-only training approaches promise timely model updates even in the face of real-time data that arrives with high throughput. Therefore, drawing old samples from slow storage must not incur significant I/O overhead that affects overall system efficiency, especially on mobile and IoT devices. (2) Model accuracy. CarM significantly increases the effective EM size, mitigating forgetting by preventing important information from being lost to overflow under limited memory capacity. As a result, we expect our approach to also improve model accuracy by more effectively exploiting the data samples preserved in storage for training.

To fulfill these competing goals, we design CarM from two different perspectives: the swapping mechanism (Section 3.1) and the swapping policy (Section 3.2). The swapping mechanism of CarM ensures that the slow speed of accessing the storage does not become a bottleneck of continual model training by carefully hiding sample swapping latency through asynchrony. Moreover, we propose various swapping policies to decide which samples to swap and when, and incorporate them into a single component, namely the gate function. The gate function allows for swapping fewer samples, enabling CarM to march along with the low-I/O-bandwidth storage that is common in mobile and IoT devices. One major benefit of CarM is that it is largely complementary to existing episodic memory-based CL methods.
By exploiting the memory hierarchy, we show that CarM helps boost the accuracy of many existing methods, by up to 28.4% for Rainbow Memory (RM) (Bang et al., 2021) on the Tiny-ImageNet dataset (Section 4.1), and even allows them to retain their accuracy with much smaller EM sizes. With CarM as a strong baseline for episodic memory-based CL methods, some well-known algorithmic optimizations may need to be revisited to ensure that they are not actually at odds with data swapping. For example, we observe that iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020), which strongly depend on knowledge distillation for old tasks, can deliver higher accuracy with CarM by limiting the weight coefficient on the distillation loss to a small value when calculating the training loss. With CarM, such a weight coefficient indeed need not be high or managed in a complicated manner as done in prior work, because we can now leverage a large amount of data in storage (with ground truth) to facilitate training performance.

2 RELATED WORK
Class incremental learning. The performance of CL algorithms heavily depends on scenarios and setups, as summarized by van de Ven et al. (van de Ven & Tolias, 2018). Among them, we are particularly interested in class-incremental learning (CIL), where task IDs are not given during inference (Gepperth & Hammer, 2016). Many prior proposals are broadly divided into two categories: rehearsal-based and regularization-based. In rehearsal-based approaches, episodic memory stores a few samples of old tasks to replay in the future (Bang et al., 2021; Castro et al., 2018; Chaudhry et al., 2018; Rebuffi et al., 2017). On the contrary, regularization-based approaches exploit the information of old tasks implicitly retained in the model parameters, without storing samples representing old tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Liu et al., 2018; Li & Hoiem, 2017; Lee et al., 2017; Mallya et al., 2018). As rehearsal-based approaches have generally shown better performance in CIL (Prabhu et al., 2020), we aim to alleviate the current drawbacks of these approaches (i.e., limited memory space) by incorporating data management across the memory-storage hierarchy. The CIL setup usually assumes that the tasks contain disjoint sets of classes (Rebuffi et al., 2017; Castro et al., 2018; Gepperth & Hammer, 2016). More recent studies introduce methods to learn from a blurry stream of tasks, where some samples across the tasks overlap in terms of class ID (Aljundi et al., 2019; Prabhu et al., 2020). Moreover, prior works can be classified as either offline (Wu et al., 2019; Rebuffi et al., 2017; Chaudhry et al., 2018; Castro et al., 2018), which allows a buffer to store incoming samples for the current task, or online (Fini et al., 2020; Aljundi et al., 2019; Jin et al., 2020), which has no such buffer; a few works consider both (Prabhu et al., 2020). Both online and offline methods can take advantage of CarM, as our work focuses on improving episodic memory with a storage device.

Episodic memory management. There are numerous episodic memory management strategies proposed in the literature (Parisi et al., 2018), such as herding selection (Welling, 2009), discriminative sampling (Liu et al., 2020), entropy-based sampling (Chaudhry et al., 2018), and diversity-based sampling (Kang et al., 2020; Bang et al., 2021). A number of works have been proposed to compose the episodic memory with representative and discriminative samples. Liu et al.
propose a strategy to store samples representing the mean and boundary of each class distribution (Liu et al., 2020). Borsos et al. propose a coreset generation method using cardinality-constrained bi-level optimization (Borsos et al., 2020). Cong et al. propose a GAN-based memory that aims to perturb the styles of remembered samples for incremental learning (Cong et al., 2020). Bang et al. propose a strategy to promote the diversity of samples in the episodic memory (Bang et al., 2021). These recent works improve the quality of the samples stored in the memory at the expense of excessive computation (Borsos et al., 2020) or the difficulty involved in training a generation model for perturbation (Cong et al., 2020; Borsos et al., 2020). Interestingly, most of these strategies show marginal accuracy improvements over uniform random sampling despite their computational complexity (Chaudhry et al., 2018; Castro et al., 2018; Rebuffi et al., 2017). Other than sampling, there are works that generate samples of past tasks (Shin et al., 2017; Seff et al., 2017; Wu et al., 2018; Hu et al., 2019). Unlike these works, which address sampling efficiency, we focus on a system-level method to efficiently manage samples across the memory hierarchy.

Memory over-commitment in NN training. Prior work studies using storage or slow memory (e.g., host memory) as an extension of fast memory (e.g., GPU memory) to increase memory capacity for NN training (Rhu et al., 2016; Wang et al., 2018; Hildebrand et al., 2020; Huang et al., 2020; Peng et al., 2020; Jin et al., 2018; Ren et al., 2021). However, most of these works target optimizing conventional offline learning scenarios by swapping optimizer states, activations, or model weights between fast memory and slow memory (or storage), whereas we focus on swapping samples between episodic memory and storage to tackle the forgetting problem in the context of continual learning. In a more general context, memory-storage caching has been studied to reduce memory and energy consumption for various applications (Ananthanarayanan et al., 2012; Oh et al., 2012; Zaharia et al., 2010), which is orthogonal to our work.

3 PROPOSED METHOD: CAROUSEL MEMORY
We describe how data swapping in CarM extends the current workflow of episodic memory (EM) in Figure 1. For ease of illustration, we assume that the input stream data is organized by consecutive tasks, but CL learners do not necessarily rely on boundaries between tasks to perform training and update EM. There are three common stages involved in traditional EM methods, which proceed in order: data incoming, training, and EM updating. This workflow corresponds to many existing methods, including TinyER (Chaudhry et al., 2019b), CBRS (Chrysakis & Moens, 2020), iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020). We then add two additional key stages for data swapping: storage updating and storage sample retrieving.

• Data incoming (I): The episodic memory maintains a subset of samples from previous tasks {T_1, ..., T_{i-1}}. When samples for a new task T_i arrive, they are first enqueued into a stream buffer and later exercised for training. Different CL algorithms require different amounts of samples to be enqueued for training. Task-level learning relies on task boundaries, waiting until all of T_i's samples appear (Rebuffi et al., 2017; Wu et al., 2019).
On the contrary, batch-level learning initiates the training stage as soon as a batch of T_i's samples of a pre-defined size is available (Chaudhry et al., 2019b; Chrysakis & Moens, 2020).
• Training (T): The training combines old samples in EM with new samples in the stream buffer to compose a training bundle. The CL learner organizes the bundle into one or more mini-batches, where each mini-batch is a mixture of old and new samples. The mini-batch size and the ratio between the two types of samples within a mini-batch are configured by the learning algorithm. Typically, several mini-batches are constructed in task-level learning. Learners may go over multiple passes given a bundle, trading off computation cost for accuracy.
• EM updating (M): Once the training stage is completed, samples in the stream buffer are no longer new and represent a past experience, requiring EM to be updated with these samples. EM may have enough space to store all of them, but if it does not, the CL method applies a sampling strategy like reservoir sampling (Vitter, 1985) or greedy-balancing sampling (Prabhu et al., 2020) to select a subset of samples from EM as well as from the stream buffer to keep in EM. All prior works "discard" the samples that are not chosen to be kept in EM.
• Storage updating (S): CarM flushes the stream data onto the storage before cleaning up the stream buffer for the next data incoming step. No loss of information occurs if the free space available in the storage is large enough for the stream data. However, if the storage is filled due to lack of capacity, we end up having victim samples to remove from the storage. In this case, we randomly choose samples to evict for each class while keeping the in-storage data class-balanced.
• Storage sample retrieving (R): With the large number of samples maintained in the storage, data swapping replaces in-memory samples with in-storage samples during training to exercise the abundant information preserved in the storage regarding past experiences. CarM collects various useful signals for each in-memory sample used in the training stage and determines whether to replace that sample or not. This decision is made by our generic gating function, which selects a subset of the samples for replacement at effectively little runtime cost.

Since old samples for training are drawn directly from EM while a large pool of samples is always kept in the storage ES, the training phase is forced to select samples within a boundary restricted by the size of EM. Continual learning with data swapping, which optimizes model parameters θ for old and new samples x (and corresponding labels y), can hence be formulated as follows:

\arg\min_{\theta} \sum_{\text{task id}=1}^{i} \mathbb{E}_{(x,y) \sim ES \cup T_i} \left[ \mathcal{L}(f(x, \theta), y) \right], where (x, y) \in EM.   (1)

3.1 MINIMIZING DELAY TO CONTINUAL MODEL TRAINING
The primary objective in our proposed design is hiding the performance interference caused by data swapping so that CarM incurs low latency during training. To that end, we propose two techniques that encompass the in-storage sample retrieval and EM updating stages.

[Figure 2: Training time reduction with async sample retrieval; the Sync timeline serializes each T^b_i with R^b_i, whereas the Async timeline overlaps retrievals with subsequent training steps.]

Asynchronous sample retrieval. Similar to conventional learning practice, CarM maintains fetch workers performing data decoding and augmentation.
As shown in Figure 1, CarM has an additional swap worker dedicated to deciding which in-memory samples to evict and issuing I/O requests to bring new samples from storage into memory. In the CL workflow, the data retrieval stage R depends on the training stage T, since training triggers the replacement of an in-memory sample when it is used as training input. To illustrate, we assume that the system has a single fetch worker that pre-processes the training input bundle and creates N mini-batches from it; this pre-processing is incurred every time a sample is fetched for training. The swap worker identifies samples in EM to be replaced from the training of mini-batch i (T^b_i) and then issues I/O requests to retrieve other samples from storage (R^b_i). If we want to allow the next mini-batch training to exercise an EM completely refreshed with the replaced samples, executions of T^b and R^b by definition must be serialized such that we have the sequence T^b_1 → R^b_1 → T^b_2 → R^b_2 → T^b_3 → R^b_3, as shown in Figure 2 (Sync). However, committing to such strictly serialized executions slows down training significantly: e.g., the second mini-batch training T^b_2 can start only after finishing T^b_1 → R^b_1, which takes much longer than T^b_1 alone with no retrieval of storage data, as done in the traditional EM design. To prevent this performance degradation, CarM adopts asynchronous sample retrieval, which runs the retrieval step in parallel with the subsequent training steps. With the asynchronous method, we keep the minimum possible dependency, as shown in Figure 2 (Async), with an arbitrary R^b_i not necessarily happening before T^b_{i+1}. Admittedly, this design choice delays the application of in-storage samples to EM, making it possible for the next training steps to access some samples undergoing replacement. However, we find that such accesses occur infrequently, and the delay does not nullify the benefit that comes from data swapping. In addition, when the swap worker retrieves in-storage samples and writes them to memory, it may interfere with fetch workers that attempt to read samples from memory for pre-processing. To mitigate such interference, CarM can opt for EM partitioning to parallelize read/write operations over independent partitions. With EM partitioning, only those operations that access the same partition need coordination, achieving concurrency against operations that access other partitions.
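To make the Async ordering concrete, the sketch below services each R^b_i on a background worker while training proceeds to T^b_{i+1}; retrieve_fn and train_step_fn are assumed callbacks of our own, not CarM's actual API.

```python
import queue
import threading

def train_with_async_swap(minibatches, em, train_step_fn, retrieve_fn):
    """Run T^b_i back to back; each R^b_i overlaps with later training steps."""
    requests = queue.Queue()

    def retrieval_loop():
        while True:
            req = requests.get()
            if req is None:                # sentinel: training is finished
                return
            slots, paths = req
            retrieve_fn(em, slots, paths)  # slow storage I/O happens here

    worker = threading.Thread(target=retrieval_loop, daemon=True)
    worker.start()
    for batch in minibatches:
        # train_step_fn returns which EM slots to replace and where the
        # replacement samples live in storage (i.e., the R^b_i request).
        slots, paths = train_step_fn(batch)
        requests.put((slots, paths))       # enqueue R^b_i; do not wait for it
    requests.put(None)
    worker.join()                          # drain outstanding retrievals at the end
```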
Intuitively, a higher score means that the sample has a higher rank, so it is better "not" to replace it if we need to reduce I/O traffic, and vice versa. To this end, we define the gate function σ_i for the i-th sample x_i as σ_i = 1(s(x_i) > τ), where s(x_i) is a scoring function and τ is a scoring threshold, with both s(x_i) and τ between 0 and 1. The threshold is determined by the proportion of the samples that we want to replace in EM with samples from storage, with computational efficiency in mind. It allows data swapping to match the I/O bandwidth available on the storage medium, preventing the system from over-subscribing the bandwidth, which leads to I/O back-pressure and increased queueing time, or under-subscribing the bandwidth, which leaves storage data exploited less opportunistically.

Policies. We design several swapping policies driven by the sample scoring method, in the context of CL with data swapping for the first time. Specifically, we propose the following three policies:
(1) Random selects random samples to replace from EM. Its scoring function assigns 0 to the τ proportion of samples randomly selected from a mini-batch while assigning 1 to the other samples.
(2) Entropy collects two useful signals produced during training for each sample: prediction correctness and the associated entropy. This policy prefers to replace samples that are correctly predicted, because these samples may not be very beneficial for improving the model in the near future. Furthermore, within this group of samples, if a sample has a lower entropy value than the others, its prediction confidence is relatively stronger, making it a better candidate for replacement. By contrast, for samples that are incorrectly predicted, this policy prefers "not" to replace those that exhibit lower entropy, i.e., incorrect predictions with stronger confidence, since they may take longer to be predicted correctly. Thus, the scoring function s(x_i) with a model f(·) is defined as:

s(x_i) = \frac{1}{U} \left[ g(x_i) \, H(f(x_i)) + (1 - g(x_i)) \, (U - H(f(x_i))) \right],   (2)

where g(x_i) = 1(f(x_i) = y_i), H(·) is an entropy function, and U is the maximum entropy value.
(3) Dynamic combines Random and Entropy: it performs the first half of the training passes on a bundle with Random and the second half with Entropy. This policy is motivated by curriculum learning (Bengio et al., 2009), which gradually focuses on training harder samples as time elapses.
It is indeed possible to come up with a number of other replacement policies, of which this paper introduces a few concrete examples. Regardless, designing the gate logic with more effective replacement policies is a promising research direction that we want to further explore in CarM.

4 EXPERIMENTS
As CarM is broadly applicable to a variety of EM-based CL methods, we compare performance with and without CarM in each method's own setup. We select seven methods, as shown in Table 1, to cover several aspects discussed in Section 3, such as the bundle boundary of learning (i.e., task-level vs. batch-level) and the number of passes taken per bundle. We discuss detailed reproducible settings in Section 5. For evaluation, we implement CarM in PyTorch 1.7.1 as a working prototype.

Datasets and metrics. Datasets include the CIFAR subset (CIFAR10 (C10) and CIFAR100 (C100)), the ImageNet subset (ImageNet-100 (I100), Mini-ImageNet (100 classes, MI100), and Tiny-ImageNet (200 classes, TI200)), and ImageNet-1000.
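Before moving on, the Entropy policy of Section 3.2 can be summarized in code. The sketch below is our own PyTorch-style illustration of Eq. (2) and the τ-gate; the function names and the sort-based thresholding are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def entropy_scores(logits, targets):
    """Eq. (2): correct-and-confident samples score low (replace first);
    confidently wrong samples score high (keep in EM)."""
    probs = F.softmax(logits, dim=1)
    H = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # entropy H(f(x_i))
    U = math.log(logits.size(1))                            # maximum entropy value
    g = (logits.argmax(dim=1) == targets).float()           # g(x_i): 1 if correct
    return (g * H + (1.0 - g) * (U - H)) / U

def keep_mask(scores, swap_ratio):
    """sigma_i = 1(s(x_i) > tau), with tau chosen so that the swap_ratio
    fraction of the mini-batch with the lowest scores is swapped out."""
    k = int(swap_ratio * scores.numel())
    if k == 0:
        return torch.ones_like(scores, dtype=torch.bool)
    tau = scores.sort().values[k - 1]
    return scores > tau                                     # True: keep in EM
```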
We use two popular metrics, the final accuracy and the final forgetting (Chaudhry et al., 2018), averaged over classes, to reflect the performance of continual learning. Except for ImageNet-1000, which represents significantly large-scale training, the results are averaged over five runs, and each method assigns an equal number of classes to each task. We also measure training speed, from the time the training stage receives a bundle to the time it completes training on the bundle.

Baselines and architectures. On top of each CL method, we vary the amount of data swapping to study the effectiveness of CarM in detail. Unless otherwise stated, CarM-N means that our swap worker is configured to replace N% of the EM samples drawn by the training stage. All experiments are based on either ResNet or DenseNet neural networks, all using the SGD optimizer as suggested by the original works, and use the entropy-based data swapping policy (i.e., Entropy) by default.

4.1 RESULTS
We compare existing methods with two CarM versions: CarM-50, which performs partial swapping for half of the data, and CarM-100, which performs full swapping. Table 1 presents the performance in terms of top-1 accuracy (Acc.) and the forgetting score (Fgt.), except for ER, iCaRL, and BiC, which measure top-5 accuracy on the ImageNet subset as done in the original works. First, CarM-100 improves accuracy remarkably over almost all of the methods under consideration, advancing the state-of-the-art performance on the CIFAR and ImageNet datasets. The results clearly show the effectiveness of using a large-capacity storage device to let CL exploit abundant information about previous tasks. Among the seven methods, CarM-100 delivers relatively larger accuracy gains for BiC (Wu et al., 2019), GDumb (Prabhu et al., 2020), DER++ (Buzzega et al., 2020), and RM (Bang et al., 2021), which take multiple passes over each training input. We believe that the more frequently old samples in EM are exercised for a new training bundle (i.e., new samples plus old samples), the more diverse the samples that data swapping can subsequently bring in from storage to take advantage of. Regardless, although TinyER (Chaudhry et al., 2019b) is designed to take a single pass over new samples and thus exercises EM less aggressively, when applied with our techniques it improves accuracy by 7.72%, 11.79%, and 3.89% for CIFAR-100, Mini-ImageNet, and ImageNet-1000, respectively.

In comparison to CarM-100, CarM-50 obtains slightly lower accuracy across the models. We argue that such a small sacrifice in accuracy is indeed worthwhile when storage I/O bandwidth is the primary constraint. In CarM-50, with 50% lower I/O traffic caused by data swapping, the accuracy compared to CarM-100 diminishes only by 1%, 0.6%, and 0.7% on average for the CIFAR subset, ImageNet subset, and ImageNet-1000, respectively, providing an ability to trade off a small accuracy loss for a substantial I/O traffic reduction. Similarly to the accuracy, our data swapping approaches considerably reduce forgetting scores for the majority of the original methods. Perhaps the one method that shows less promising results in Table 1 is iCaRL (Rebuffi et al., 2017), where CarM occasionally makes the accuracy worse. From the in-depth investigation of iCaRL in Appendix A.4.1, we observe that using data swapping and knowledge distillation at the same time cannot deliver great accuracy.
That is, as knowledge distillation may not be very compatible with data swapping, we revisit in detail the distillation-based CL methods (i.e., iCaRL, BiC, and DER++) when they are used along with data swapping.

Knowledge distillation on CarM. Note that the ways to distill the knowledge of old data in iCaRL, BiC, and DER++ are all different (see Appendix A.4.1). Briefly speaking, in calculating the loss for old data, iCaRL uses only soft labels obtained from an old classifier, whereas BiC and DER++ use both hard labels (i.e., ground truth) and soft labels. To investigate the effect of using these two types of loss, we first modify the loss function of iCaRL similarly to that of BiC, i.e., α × (soft label loss) + (1 − α) × (hard label loss), and then show accuracy over varying α values for all three distillation-based methods in Figure 3. For each method, we also include the accuracy when α increases incrementally over time, as done in BiC. The results show that distillation-based methods with CarM significantly improve accuracy when α is very small. For iCaRL, compared to α = 1.0 (i.e., no hard label loss, as in iCaRL), we obtain 5.4% higher accuracy when α = 0.0 (i.e., no distillation) and 5.7% higher accuracy when α = 0.1, which is the best result. Similarly, for BiC and DER++ with CarM, we find that the coefficient α applied to the soft label loss need not be high (as in iCaRL) or managed in a complicated manner (as in BiC) to achieve higher accuracy. Please refer to Figure 6 for the CIFAR subset results. Our best interpretation of the reason behind this is as follows. The key assumption of knowledge distillation is that once the model is trained with a new task, the newly learned knowledge generalizes the task well and can be effectively transferred to subsequent task training. However, if the model is not sufficiently generalized for old tasks, using distillation losses extensively might be adverse: data swapping attempts to correct decision boundaries, driven by abundant in-storage samples, to further generalize old tasks, but it is interfered with by the knowledge distilled from the old models.

Comparison of data swapping policies. We compare the performance of the three data swapping policies proposed in Section 3.2 under CarM-50. As shown in Table 2, both Entropy and Dynamic outperform Random by 0.16% on average for the ImageNet subset (see Table 10 for the CIFAR subset). We highlight that the major contribution of our gating mechanism is computational efficiency while matching the I/O bandwidth available on the storage medium, and the primary objective of exploring data swapping policies is to establish a good baseline for the gating mechanism. In this regard, we find that all three policies can serve as good baselines.

Impact on training speed. The delay optimization techniques in Section 3.1 are intended to incur an insignificant delay in training. To confirm this, we examine how training speed under CarM-50 changes relative to the original memory-only methods, measured as the percentage increase in wall-clock time (i.e., actual time taken) with asynchronous (Async) vs. synchronous (Sync) sample retrieval. To consider the most challenging scenario, we feed data into the stream buffer at a rate sufficient to keep training always busy with new mini-batches. As shown in Table 3, regardless of the EM method, the asynchronous version of CarM does not dramatically affect training speed for either the CIFAR or the ImageNet subset.
By contrast, the synchronous version slows down training by up to 71.6% for the CIFAR subset and 62.0% for the ImageNet subset. Regardless of the version in use, in-memory samples undergoing data swapping are rarely drawn in the subsequent training steps, since the episodic memory is typically much larger than a training batch. Therefore, no difference in accuracy is observed between the two versions.

4.2 ABLATION STUDY
We present an ablation study using four methods (TinyER, BiC, DER++, and RM) that represent the state of the art in each type of EM methodology, using the CIFAR subset.

Size of EM. To confirm the performance benefits across different memory sizes, we empirically evaluate CarM-50 over varying EM sizes and show the average accuracy in Figure 4(a). In all cases, CarM-50 outperforms the existing methods, with BiC, DER++, and RM showing relatively higher accuracy increases. Moreover, we observe that data swapping delivers better accuracy than conventional memory-only approaches while using much smaller memory. For example, CarM-50 with DER++ at EM size 300 shows higher accuracy than pure DER++ at EM size 1000, and CarM-50 with TinyER at EM size 300 shows higher accuracy than pure TinyER at EM size 500. Therefore, it turns out that data swapping can help reduce the EM size without hurting the accuracy of existing methods, for both multi-pass and single-pass methods.

Data swapping ratio. We present results with different swapping ratios to show that our gate model indeed brings meaningful benefits across different I/O bandwidths. To that end, Figure 4(b) shows the change in accuracy when our gating policy decreases the swapping ratio down to 20% (CarM-20) or increases it up to 80% (CarM-80). Obviously, at the high swapping ratio of CarM-80, the accuracy across the four EM methods gets very close to the accuracy obtained with full swapping. A surprising result is that even at CarM-20, with only 20% data swapping, the accuracy is very comparable to that obtained when we allow higher data swapping ratios. The results indicate that our method would be effective even when applied to systems with low-bandwidth storage.

Size of storage. As local storage cannot store all the past data, the system must discard some old samples once the storage is fully occupied. Figure 4(c) shows the accuracy degradation of CarM-50 when storage capacity is limited to 1.5–10× the EM size. The results show that data swapping improves performance over traditional approaches even when the storage capacity is only 50% larger than the EM size.

Large number of tasks. One pressing issue in CL is learning a large number of tasks, as this requires keeping the knowledge learned in the remote past. To evaluate this aspect, we split CIFAR-100 (100 classes) into 50 tasks and run the four methods. As Figure 4(d) shows, CarM significantly outperforms the baselines, showing its potential for long-term continual model training.

5 CONCLUSION
We alleviate catastrophic forgetting by integrating traditional episodic memory-based continual learning methods with device-internal data storage in a scheme named CarM. We design data swapping strategies to improve model accuracy by dynamically utilizing a large amount of past data available in the storage. Our swapping mechanism addresses the cumbersome performance hurdle incurred by slow storage access, so continual model training is not dramatically affected by data transfers between memory and storage.
Ideally, given the specs of a hardware configuration (e.g., computation, memory, and available I/O bandwidth), the swapping mechanism could decide an optimal policy to increase the memory capacity without adding additional latency. We leave this as an area of future work, which would make CarM more robust and resilient to variations in different hardware settings.
1. What is the focus of the paper regarding continual learning?
2. What is the intuition behind the proposed approach, and how does it differ from previous methods?
3. What are the strengths and weaknesses of the introduced Episodic Storage, and how does it impact performance?
4. How do data swapping policies contribute to the overall outcome, and are they significant?
5. Are there any concerns or limitations regarding the assumption on the large memory, and how might it affect the results?
6. Is there anything else that could be added or improved upon in the paper?
Summary Of The Paper
The paper proposes a new strategy for constructing the replay buffer during continual learning. Viewing the memory of a continual learner as split into a large-and-slow tier and a small-and-fast tier, the authors introduce Episodic Storage, which memorizes a large number of past tasks' instances and thus differs from the Episodic Memory that standard CL methods use. The paper introduces swapping rules to use the two tiers together.

Review
The initial intuition in the paper about the different roles of memories for continual learning is reasonable. However, the paper consists of rather naive, technical modifications built on a strong assumption of large memory. Since the model keeps 1.5x-10x (or even all) instances of the past tasks compared to the baselines, strong performance is to be expected. The data swapping policies are not significant: in most cases, the performance of Entropy and Dynamic is very close to that of Random, with only a marginal gap. It would be hard to claim that the gains of Entropy and Dynamic are statistically significant (e.g., under a p-test). [Minor] A citation is missing on page 4, line 3.
ICLR
Title Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning

Abstract
Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from the previous tasks. To avoid the performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity, making it infeasible to meet the accuracy requirements of real-world applications. Specifically, all prior CL methods discard samples overflowed from the EM and can never retrieve them for subsequent training steps, incurring a loss of information that exacerbates catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate the forgetting by allowing CL to efficiently migrate samples between memory and storage without being hindered by the slow access speed of the storage. We call it Carousel Memory (CarM). As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.

1 INTRODUCTION
With the rising demand for realistic on-device machine learning, recent years have witnessed a novel learning paradigm, namely continual learning (CL), for training neural networks (NN) with a stream of non-i.i.d. data. In such a paradigm, the neural network is incrementally learned with insertions of new tasks (e.g., a set of classes) (Rebuffi et al., 2017). The NN model is expected to continuously learn new knowledge from new tasks over time while retaining previously learned knowledge, which is a closer representation of how intelligent systems operate in the real world. In this learning setup, knowledge should be acquired from new data not only in a timely manner but also in a computationally efficient manner. In this regard, CL is suitable for learning on mobile and IoT devices (Hayes et al., 2020; Wang et al., 2019). However, CL faces significant challenges from the notorious catastrophic forgetting problem: knowledge learned in the past fades away as the NN model continues to learn new tasks (McCloskey & Cohen, 1989). Among many prior approaches to addressing this issue, episodic memory (EM) is one of the most effective (Buzzega et al., 2020; Chaudhry et al., 2019a;b; Lopez-Paz & Ranzato, 2017; Prabhu et al., 2020). EM is an in-memory buffer that stores old samples and replays them periodically while training models with new samples. EM needs a sufficiently large capacity to achieve a desired accuracy, and the capacity needed may vary significantly, since incoming data may contain varying numbers of tasks and classes at different time slots and geolocations (Bang et al., 2021).
However, in practice, the size of EM is often quite small, bounded by limited on-device memory capacity. The limited EM size makes it difficult to store a large number of samples or scale up to a large number of tasks, preventing CL models from achieving high accuracy as training moves forward. To address the forgetting problem, we introduce a hierarchical EM method, which significantly enhances the effectiveness of episodic memory. Our method is motivated by the fact that modern mobile and IoT devices are commonly equipped with a deep memory hierarchy including small memory with fast access (50–150 ns) and large storage with slow access (25–250 µs), where the storage is typically orders of magnitude larger than the memory. Given these different hardware characteristics, the memory is an ideal place to access samples at high speed during training, promising short training time. In contrast, the storage is an ideal place to store a significantly larger number of old samples and use them to greatly improve model accuracy. The design goal of our scheme, Carousel Memory or CarM, is to combine the best of both worlds: improving the effective episodic memory capacity by leveraging on-device storage, but without significantly slowing down traditional memory-based CL approaches. CarM stores as many observed samples as possible so long as it does not exceed a given storage capacity (rather than discarding those that overflow from EM, as done in existing methods) and updates the in-memory EM while the model is still learning with samples already in EM. One key research question is how to manage samples across EM and storage for both system efficiency and model accuracy. Here we propose hierarchical memory-aware data swapping, an online process that dynamically replaces a subset of in-memory samples used for model training with other samples stored in storage, with an optimization goal in two aspects. (1) System efficiency. Prior single-level memory-only training approaches promise timely model updates even in the face of real-time data that arrives with high throughput. Therefore, we expect that drawing old samples from slow storage should not incur significant I/O overhead that affects the overall system efficiency, especially on mobile and IoT devices. (2) Model accuracy. CarM significantly increases the effective EM size, mitigating forgetting issues by preventing important information from being lost due to limited memory capacity. As a result, we expect our approach to also improve model accuracy by exploiting the data samples preserved in storage more effectively for training. To fulfill these competing goals, we design CarM from two different perspectives: the swapping mechanism (Section 3.1) and the swapping policy (Section 3.2). The swapping mechanism of CarM ensures that the slow speed of accessing the storage does not become a bottleneck of continual model training by carefully hiding sample swapping latency through asynchrony. Moreover, we propose various swapping policies to decide which samples to swap and when, and incorporate them into a single component, namely, a gate function. The gate function allows for swapping fewer samples, enabling CarM to march on with low-I/O-bandwidth storage, which is common for mobile and IoT devices. One major benefit of CarM is that it is largely complementary to existing episodic memory-based CL methods.
By exploiting the memory hierarchy, we show that CarM helps boost the accuracy of many existing methods, by up to 28.4% for Rainbow Memory (RM) (Bang et al., 2021) on the Tiny-ImageNet dataset (Section 4.1), and even allows them to retain their accuracy with much smaller EM sizes. With CarM as a strong baseline for episodic memory-based CL methods, some well-known algorithmic optimizations may need to be revisited to ensure that they are not actually at odds with data swapping. For example, we observe that iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020), which strongly depend on knowledge distillation for old tasks, can deliver higher accuracy with CarM by limiting the weight coefficient on the distillation loss to a small value when calculating the training loss. With CarM, such a weight coefficient indeed need not be high or managed in a complicated manner as done in prior work, because we can now leverage a large amount of data in storage (with ground truth) to facilitate training.

2 RELATED WORK
Class incremental learning. The performance of CL algorithms heavily depends on scenarios and setups, as summarized by van de Ven & Tolias (2018). Among them, we are particularly interested in class-incremental learning (CIL), where task IDs are not given during inference (Gepperth & Hammer, 2016). Many prior proposals are broadly divided into two categories, rehearsal-based and regularization-based. In rehearsal-based approaches, episodic memory stores a few samples of old tasks to replay in the future (Bang et al., 2021; Castro et al., 2018; Chaudhry et al., 2018; Rebuffi et al., 2017). On the contrary, regularization-based approaches exploit the information of old tasks implicitly retained in the model parameters, without storing samples representing old tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Liu et al., 2018; Li & Hoiem, 2017; Lee et al., 2017; Mallya et al., 2018). As rehearsal-based approaches have generally shown better performance in CIL (Prabhu et al., 2020), we aim to alleviate the current drawbacks of these approaches (i.e., limited memory space) by incorporating data management across the memory-storage hierarchy. The CIL setup usually assumes that the tasks contain disjoint sets of classes (Rebuffi et al., 2017; Castro et al., 2018; Gepperth & Hammer, 2016). More recent studies introduce methods to learn from a blurry stream of tasks, where some samples across the tasks overlap in terms of class ID (Aljundi et al., 2019; Prabhu et al., 2020). Moreover, prior works can be classified as either offline (Wu et al., 2019; Rebuffi et al., 2017; Chaudhry et al., 2018; Castro et al., 2018), which allows a buffer to store incoming samples for the current task, or online (Fini et al., 2020; Aljundi et al., 2019; Jin et al., 2020), which has no such buffer; a few works consider both (Prabhu et al., 2020). Both online and offline methods can take advantage of CarM, as our work focuses on improving episodic memory with a storage device.
Episodic memory management. There are numerous episodic memory management strategies proposed in the literature (Parisi et al., 2018), such as herding selection (Welling, 2009), discriminative sampling (Liu et al., 2020), entropy-based sampling (Chaudhry et al., 2018), and diversity-based sampling (Kang et al., 2020; Bang et al., 2021). A number of works have been proposed to compose the episodic memory with representative and discriminative samples.
Liu et al. propose a strategy to store samples representing the mean and boundary of each class distribution (Liu et al., 2020). Borsos et al. propose a coreset generation method using cardinality-constrained bi-level optimization (Borsos et al., 2020). Cong et al. propose a GAN-based memory that perturbs the styles of remembered samples for incremental learning (Cong et al., 2020). Bang et al. propose a strategy to promote the diversity of samples in the episodic memory (Bang et al., 2021). These recent works improve the quality of the samples stored in the memory at the expense of excessive computation (Borsos et al., 2020) or the difficulty involved in training a generation model for perturbation (Cong et al., 2020; Borsos et al., 2020). Interestingly, most of these strategies show marginal accuracy improvements over uniform random sampling despite the computational complexity (Chaudhry et al., 2018; Castro et al., 2018; Rebuffi et al., 2017). Other than sampling, there are works that generate samples of past tasks (Shin et al., 2017; Seff et al., 2017; Wu et al., 2018; Hu et al., 2019). Unlike these works addressing sampling efficiency, we focus on a systemically efficient method to manage samples across the system memory hierarchy.
Memory over-commitment in NN training. Prior work studies using storage or slow memory (e.g., host memory) as an extension of fast memory (e.g., GPU memory) to increase memory capacity for NN training (Rhu et al., 2016; Wang et al., 2018; Hildebrand et al., 2020; Huang et al., 2020; Peng et al., 2020; Jin et al., 2018; Ren et al., 2021). However, most of these works target optimizing conventional offline learning scenarios by swapping optimizer states, activations, or model weights between the fast memory and slow memory (or storage), whereas we focus on swapping samples between episodic memory and storage to tackle the forgetting problem in the context of continual learning. In a more general context, memory-storage caching has been studied to reduce memory and energy consumption for various applications (Ananthanarayanan et al., 2012; Oh et al., 2012; Zaharia et al., 2010), which is orthogonal to our work.

3 PROPOSED METHOD: CAROUSEL MEMORY
We describe how data swapping in CarM extends the current workflow of episodic memory (EM) in Figure 1. For ease of illustration, we assume that the input stream data is organized by consecutive tasks, but CL learners do not necessarily rely on boundaries between tasks to perform training and update EM. There are three common stages involved in traditional EM methods, which proceed in order: data incoming, training, and EM updating. This workflow corresponds to many existing methods including TinyER (Chaudhry et al., 2019b), CBRS (Chrysakis & Moens, 2020), iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019), and DER++ (Buzzega et al., 2020). We then add two additional key stages for data swapping: storage updating and storage sample retrieving.
• Data incoming I : The episodic memory maintains a subset of samples from previous tasks {T_1, ..., T_{i-1}}. When samples for a new task T_i arrive, they are first enqueued into a stream buffer and later exercised for training. Different CL algorithms require different amounts of samples to be enqueued for training. The task-level learning relies on task boundaries, waiting until all of T_i's samples appear (Rebuffi et al., 2017; Wu et al., 2019).
On the contrary, the batch-level learning initiates the training stage as soon as a batch of T_i's samples of a pre-defined size is available (Chaudhry et al., 2019b; Chrysakis & Moens, 2020).
• Training T : The training stage combines old samples in EM with new samples in the stream buffer to compose a training bundle. The CL learner organizes the bundle into one or more mini-batches, where each mini-batch is a mixture of old and new samples. The mini-batch size and the ratio between the two types of samples within a mini-batch are configured by the learning algorithm. Typically, several mini-batches are constructed in task-level learning. Learners may go over multiple passes given a bundle, trading off computation cost for accuracy.
• EM updating M : Once the training stage is completed, samples in the stream buffer are no longer new and represent a past experience, requiring EM to be updated with these samples. EM may have enough space to store all of them, but if it does not, the CL method applies a sampling strategy like reservoir sampling (Vitter, 1985) or greedy balanced sampling (Prabhu et al., 2020) to select a subset of samples from EM as well as from the stream buffer to keep in EM. All prior works "discard" the samples that are not chosen to be kept in EM.
• Storage updating S : CarM flushes the stream data onto the storage before cleaning up the stream buffer for the next data incoming step. No loss of information occurs if the free space available in the storage is large enough for the stream data. However, if the storage is filled due to lack of capacity, we end up having victim samples to remove from the storage. In this case, we randomly choose samples to evict for each class while keeping the in-storage data class-balanced.
• Storage sample retrieving R : With the large number of samples maintained in the storage, data swapping replaces in-memory samples with in-storage samples during training to exercise the abundant information preserved in the storage regarding past experiences. CarM collects various useful signals for each in-memory sample used in the training stage and determines whether to replace that sample or not. This decision is made by our generic gating function, which selects a subset of the samples for replacement with effectively little runtime cost.
Since old samples for training are drawn directly from EM and a large pool of samples is always kept in the storage E_S, the training phase is forced to have a boundary of sample selection restricted by the size of E_M. Continual learning with data swapping, which optimizes model parameters θ for old and new samples x (and corresponding labels y), can hence be formulated as follows:

\[
\operatorname*{arg\,min}_{\theta} \sum_{\mathrm{task\ id}=1}^{i} \mathbb{E}_{(x,y)\sim E_S \cup T_i}\big[\mathcal{L}(f(x,\theta),\, y)\big], \quad \text{where } (x,y) \in E_M. \tag{1}
\]

3.1 MINIMIZING DELAY TO CONTINUAL MODEL TRAINING
The primary objective in our proposed design is hiding the performance interference caused by data swapping so that CarM incurs low latency during training. To that end, we propose two techniques that encompass the in-storage sample retrieval and EM updating stages.
Figure 2: Training time reduction with asynchronous sample retrieval.
Asynchronous sample retrieval. Similar to conventional learning practice, CarM maintains fetch workers performing data decoding and augmentation.
As shown in Figure 1, CarM has an additional swap worker dedicated to deciding which in-memory samples to evict and issuing I/O requests to bring new samples from storage into memory. In the CL workflow, the data retrieval stage R depends on the training stage T, since training triggers the replacement of an in-memory sample when it is used as training input. To illustrate, we assume that the system has a single fetch worker to pre-process the training input bundle and create N mini-batches from the bundle; this pre-processing is incurred every time a sample is fetched for training. The swap worker identifies samples in EM to be replaced from mini-batch i training ($T^b_i$) and then issues I/O requests to retrieve other samples from storage ($R^b_i$). If we want to allow the next mini-batch training to exercise EM completely refreshed with the replaced samples, executions of $T^b$ and $R^b$ by definition must be serialized such that we have the sequence $T^b_1 \rightarrow R^b_1 \rightarrow T^b_2 \rightarrow R^b_2 \rightarrow T^b_3 \rightarrow R^b_3$, as shown in Figure 2 (Sync). However, committing to such strictly serialized executions slows down training speed significantly; e.g., the second mini-batch training $T^b_2$ can start only after finishing $T^b_1 \rightarrow R^b_1$, which takes a much longer time than $T^b_1$ alone with no retrieval of storage data, as in the traditional EM design. To prevent this performance degradation, CarM adopts asynchronous sample retrieval, which runs the retrieval step in parallel with the subsequent training steps. With the asynchronous method, we keep the minimum possible dependency, as shown in Figure 2 (Async), with an arbitrary $R^b_i$ not necessarily happening before $T^b_{i+1}$. Naturally, this design choice poses a delay on applying in-storage samples to EM, making it possible for the next training steps to access some samples undergoing replacement. However, we found that such accesses do not occur frequently, and the delay does not nullify the benefit that comes from data swapping. In addition, when the swap worker retrieves in-storage samples and writes them to memory, it may interfere with fetch workers that attempt to read samples from memory for pre-processing. To mitigate such interference, CarM can opt for EM partitioning to parallelize read/write operations over independent partitions. With EM partitioning, only those operations that access the same partition need coordination, achieving concurrency against operations that access other partitions.

3.2 DATA SWAPPING POLICY BY A GATE FUNCTION
The gate function in Figure 1 is a core component of CarM for adjusting I/O traffic. The gate, as guided by its decision logic, allows us to select a certain portion of samples to swap out from those EM samples drawn in the training stage. Having this control knob is of great practical importance, as the maximum sustainable I/O traffic differs considerably among devices due to their in-use storage mediums with different characteristics (e.g., a high-bandwidth flash drive vs a low-bandwidth magnetic drive). At the same time, the gate is required to remain effective under such partial data swapping, in terms of accuracy in the subsequent training steps. To facilitate this, we propose a sample scoring method that ranks the samples in the same mini-batch so that the training algorithm can decide at which point along the continuum of the ranks to separate the samples to swap from the samples to keep further in memory.
Score-based replacement. The score quantifies the relative importance of a trained sample to keep in EM with respect to other samples in the same mini-batch.
Intuitively, a higher score means that the sample has a higher rank, so it is better "not" to replace it if we need to reduce I/O traffic, and vice versa. To this end, we define the gate function $\sigma_i$ for the $i$th sample $x_i$ as $\sigma_i = \mathbb{1}(s(x_i) > \tau)$, where $s(x_i)$ is a scoring function and $\tau$ is a scoring threshold, with both $s(x_i)$ and $\tau$ between 0 and 1. The threshold is determined by the proportion of samples that we want to replace from the EM with samples in storage, with consideration of computational efficiency. It allows data swapping to match the I/O bandwidth available on the storage medium, and prevents the system from over-subscribing the bandwidth, which leads to I/O back-pressure and increased queueing time, or under-subscribing the bandwidth, which leaves storage data exploited less opportunistically.
Policies. We design several swapping policies driven by the sample scoring method in the context of CL with data swapping for the first time. Specifically, we propose the following three policies: (1) Random selects random samples to replace from EM. Its scoring function assigns 0 to the τ proportion of the samples randomly selected from a mini-batch while assigning 1 to the other samples. (2) Entropy collects two useful signals for each sample produced during training: prediction correctness and the associated entropy. This policy prefers to replace the samples that are correctly predicted, because these samples may not be very beneficial for improving the model in the near future. Furthermore, within this group of samples, if a specific sample has a lower entropy value than the others, the prediction confidence is relatively stronger, making it a better candidate for replacement. By contrast, for the samples that are incorrectly predicted, this policy prefers to "not" replace the samples that exhibit lower entropy, i.e., incorrect predictions with stronger confidence, since they may take longer to be predicted correctly. Thus, the scoring function $s(x_i)$ with a model $f(\cdot)$ is defined as:

\[
s(x_i) = \frac{1}{U}\Big[\, g(x_i)\, H(f(x_i)) + \big(1 - g(x_i)\big)\big(U - H(f(x_i))\big) \Big], \tag{2}
\]

where $g(x_i) = \mathbb{1}(f(x_i) = y_i)$, $H(\cdot)$ is an entropy function, and $U$ is the maximum entropy value. (3) Dynamic combines Random and Entropy, performing the first half of the training passes given a bundle with Random and the next half of the passes with Entropy. This policy is motivated by curriculum learning (Bengio et al., 2009), which gradually focuses on training harder samples as time elapses. It is indeed possible to come up with a number of replacement policies, of which this paper introduces a few concrete examples; a minimal code sketch of the entropy-based gate is given below. Regardless, designing the gate logic with more effective replacement policies is a promising research direction that we want to further explore in CarM.

4 EXPERIMENTS
As CarM is broadly applicable to a variety of EM-based CL methods, we compare the performance with and without CarM under each method's own setup. We select seven methods, as shown in Table 1, to cover several aspects discussed in Section 3, such as the bundle boundary of learning (i.e., task-level vs batch-level) and the number of passes taken per bundle. We discuss detailed reproducible settings in Section 5. For evaluation, we implement CarM in PyTorch 1.7.1 as a working prototype.
Datasets and metrics. Datasets include the CIFAR subset (CIFAR10 (C10) and CIFAR100 (C100)), the ImageNet subset (ImageNet-100 (I100), Mini-ImageNet (100 classes) (MI100), and Tiny-ImageNet (200 classes) (TI200)), and ImageNet-1000.
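As referenced in Section 3.2 above, the following is a minimal PyTorch sketch of the entropy-based gate (Eq. 2). It is an illustration under our own assumptions, not code from the CarM prototype: the function names are ours, per-sample logits and integer labels are assumed to be available during training, and we take $U = \ln C$ for $C$ classes since the text leaves the entropy base unspecified.

```python
import torch
import torch.nn.functional as F

def entropy_gate_scores(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-sample scores s(x_i) following Eq. (2); higher = better to keep in EM."""
    probs = F.softmax(logits, dim=1)
    # H(f(x_i)): Shannon entropy of each prediction, in nats.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))  # U = ln(C)
    correct = (logits.argmax(dim=1) == labels).float()            # g(x_i)
    # Correct predictions: confident (low-entropy) ones get low scores (replace).
    # Incorrect predictions: confidently wrong (low-entropy) ones get high scores (keep).
    return (correct * entropy + (1.0 - correct) * (max_entropy - entropy)) / max_entropy

def select_swap_victims(scores: torch.Tensor, swap_ratio: float) -> torch.Tensor:
    """Indices of the swap_ratio lowest-scoring samples, i.e., those with sigma_i = 0."""
    k = int(swap_ratio * scores.numel())
    return torch.topk(scores, k, largest=False).indices
```

For a mini-batch already consumed by the training step, `select_swap_victims(scores, 0.5)` would mimic the CarM-50 setting, with the swap worker then fetching replacements from storage for the returned slots.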
We use two popular metrics, the final accuracy and the final forgetting (Chaudhry et al., 2018) averaged over classes, to reflect the performance of continual learning. Except for ImageNet-1000, which represents significantly large-scale training, the results are averaged over five runs, and each method assigns an equal number of classes to each task. We also measure training speed, from the time the training stage receives a bundle to the time it completes training on the bundle.
Baselines and architectures. On top of each CL method, we vary the amount of data swapping to study the effectiveness of CarM in detail. Unless otherwise stated, CarM-N means that our swap worker is configured to replace N% of the EM samples drawn by the training stage. All experiments are based on either ResNet or DenseNet neural networks, all using the SGD optimizer as suggested by the original works, and use the entropy-based data swapping policy (i.e., Entropy) by default.

4.1 RESULTS
We compare existing methods with two CarM versions: CarM-50, which performs partial swapping for half of the data, and CarM-100, which performs full swapping. Table 1 presents the performance in terms of top-1 accuracy (Acc.) and forgetting score (Fgt.), except for ER, iCaRL, and BiC, which measure top-5 accuracy for the ImageNet subset as done in the original works. First, CarM-100 improves the accuracy remarkably over almost all of the methods under consideration, advancing the state-of-the-art performance for the CIFAR and ImageNet datasets. The results clearly show the effectiveness of using a large-capacity storage device to let CL exploit abundant information from the previous tasks. Among the seven methods, CarM-100 delivers relatively larger accuracy gains for BiC (Wu et al., 2019), GDumb (Prabhu et al., 2020), DER++ (Buzzega et al., 2020), and RM (Bang et al., 2021), which take multiple passes over each training input. We believe that the more frequently old samples in EM are exercised for a new training bundle (i.e., new samples plus old samples), the more diverse the samples that data swapping can subsequently bring in from storage. Regardless, although TinyER (Chaudhry et al., 2019b) is designed to take a single pass over new samples and thus exercises EM less aggressively, when applied with our techniques it improves the accuracy by 7.72%, 11.79%, and 3.89% for CIFAR-100, Mini-ImageNet, and ImageNet-1000, respectively. In comparison to CarM-100, CarM-50 obtains slightly lower accuracy across the models. We argue that such a small sacrifice in accuracy is indeed worthwhile when storage I/O bandwidth is the primary constraint. In CarM-50, with 50% lower I/O traffic caused by data swapping, the accuracy as compared to CarM-100 diminishes only by 1%, 0.6%, and 0.7% on average for the CIFAR subset, ImageNet subset, and ImageNet-1000, respectively, providing the ability to trade off a small accuracy loss for a substantial I/O traffic reduction. Similarly to the accuracy, our data swapping approaches considerably reduce forgetting scores over the majority of the original methods. Perhaps the one method that shows less promising results in Table 1 is iCaRL (Rebuffi et al., 2017), where CarM occasionally makes the accuracy worse. From the in-depth investigation of iCaRL in Appendix A.4.1, we observe that using data swapping and knowledge distillation at the same time does not deliver great accuracy.
That is, as knowledge distillation may not be very compatible with data swapping, we revisit in detail the distillation-based CL methods (i.e., iCaRL, BiC, and DER++) when they are used along with data swapping.
Knowledge distillation on CarM. Note that the ways to distill the knowledge of old data in iCaRL, BiC, and DER++ are all different (see Appendix A.4.1). Briefly speaking, in calculating the loss for old data, iCaRL uses only soft labels obtained from an old classifier, whereas BiC and DER++ use both hard labels (i.e., ground truth) and soft labels. To investigate the effect of using these two types of loss, we first modify the loss function of iCaRL similarly to that of BiC, i.e., α × soft label loss + (1 − α) × hard label loss, and then show accuracy over varying α values for all three distillation-based methods in Figure 3. For each method, we also include the accuracy when α increases incrementally over time, as done in BiC. The results show that distillation-based methods with CarM significantly improve accuracy when α is very small. For iCaRL, compared to α = 1.0 (i.e., no hard label loss, as in iCaRL), we obtain 5.4 points higher accuracy when α = 0.0 (i.e., no distillation) and 5.7 points higher accuracy when α = 0.1, which is the best result. Similarly, for BiC and DER++ with CarM, we find that the coefficient α applied to the soft label loss need not be high (as in iCaRL) or managed in a complicated manner (as in BiC) to achieve higher accuracy. Please refer to Figure 6 for the CIFAR subset results. Our best interpretation of the reason behind this is as follows. The key assumption of knowledge distillation is that once the model is trained with a new task, the newly learned knowledge is supposed to generalize the task well and can be effectively transferred to subsequent task training. However, if the model is not sufficiently generalized for old tasks, using distillation losses extensively might be adverse: data swapping attempts to correct decision boundaries, driven by abundant in-storage samples, to further generalize old tasks, but it is interfered with by the knowledge distilled from the old models.
Comparison of data swapping policies. We compare the performance of the three data swapping policies proposed in Section 3.2 under CarM-50. As shown in Table 2, both Entropy and Dynamic outperform Random by 0.16% on average for the ImageNet subset (see Table 10 for the CIFAR subset). We highlight that our major contribution with the gating mechanism is computational efficiency while matching the I/O bandwidth available on the storage medium, and the primary objective of exploring data swapping policies is to establish a good baseline for the gating mechanism. In this regard, we found that all three policies can serve as good baselines.
Impact on training speed. The delay optimization techniques in Section 3.1 are intended to incur an insignificant delay on training. To confirm this, we examine how training speed in CarM-50 changes over the original memory-only methods, measured as the percentage increase in wall-clock time (i.e., actual time taken) with asynchronous (Async) vs synchronous (Sync) sample retrieval. To consider the most challenging scenario, we feed data into the stream buffer at a rate high enough to keep training always busy with new mini-batches. As shown in Table 3, regardless of the EM method, the asynchronous version of CarM does not dramatically affect training speed for either the CIFAR or the ImageNet subset.
By contrast, the synchronous version slows down training time by up to 71.6% for the CIFAR subset and 62.0% for the ImageNet subset. Regardless of the version in use, in-memory samples undergoing data swapping are rarely drawn in the subsequent training steps, since the episodic memory size is typically much larger than the size of a training batch. Therefore, no difference in accuracy is observed between the two versions.

4.2 ABLATION STUDY
We present an ablation study using four methods (TinyER, BiC, DER++, and RM) that represent the state of the art in each type of EM methodology, using the CIFAR subset.
Size of EM. To confirm the performance benefits under different memory sizes, we empirically evaluate CarM-50 over varying EM sizes and show the average accuracy in Figure 4(a). In all cases, CarM-50 outperforms the existing methods, with BiC, DER++, and RM having relatively higher accuracy increases. Moreover, we observe that data swapping delivers better accuracy than conventional memory-only approaches using much smaller memory. For example, CarM-50 with DER++ on EM size 300 shows higher accuracy than pure DER++ on EM size 1000, and CarM-50 with TinyER on EM size 300 shows higher accuracy than pure TinyER on EM size 500. Therefore, it turns out that data swapping can help reduce the EM size without hurting the accuracy of existing methods, for both multi-pass and single-pass methods.
Data swapping ratio. We present results with different swapping ratios to show that our gate model indeed brings meaningful benefits under different I/O bandwidths. To that end, Figure 4(b) shows the change in accuracy when our gating policy decreases the swapping ratio down to 20% (CarM-20) or increases it up to 80% (CarM-80). Obviously, at CarM-80 with its high swapping ratio, the accuracy across the four EM methods gets very close to the accuracy obtained with full swapping. A surprising result is that even at CarM-20, with only 20% data swapping, the accuracy is very comparable to that obtained when we allow higher data swapping ratios. The results indicate that our method would be effective even when applied to systems with low-bandwidth storage.
Size of storage. As local storage cannot store all the past data, the system must discard some old samples once the storage is fully occupied. Figure 4(c) shows the accuracy degradation in CarM-50 when storage capacity is limited to 1.5–10× the EM size. The results show that data swapping improves performance over traditional approaches even when the storage capacity is only 50% larger than the EM size.
Large number of tasks. One pressing issue in CL is learning a large number of tasks, as it is required to keep the knowledge learned in the remote past. To evaluate this aspect, we split CIFAR-100 (100 classes) into 50 tasks and run the four methods. As Figure 4(d) shows, CarM significantly outperforms the baselines, showing the potential for long-term continual model training.

5 CONCLUSION
We alleviate catastrophic forgetting by integrating traditional episodic memory-based continual learning methods with device-internal data storage, a scheme named CarM. We design data swapping strategies to improve model accuracy by dynamically utilizing a large amount of the past data available in the storage. Our swapping mechanism addresses the cumbersome performance hurdle incurred by slow storage access, and hence continual model training is not dramatically affected by data transfers between memory and storage.
We show the effectiveness of CarM using seven well-known methods on standard datasets, over varying memory sizes, storage sizes, and data swapping ratios.

REPRODUCIBILITY STATEMENT
We take the reproducibility of the research very seriously. The appendix hence includes detailed information necessary for reproducing all the experiments performed in this work, as follows:
• Appendix A.1 describes the implementation details of building CarM.
• Appendix A.2 specifies the dataset information used in the experiments (e.g., the number of tasks and the number of classes per task).
• Appendix A.3 provides experimental details (e.g., metrics and hyper-parameters).
• Appendix A.3.4 presents the detailed specification of the machines (e.g., GPU model) used in the experiments.
Our source code is available at https://anonymous.4open.science/r/CarM, where we include running environments and configuration files for all the experiments, making it possible to reproduce the results reported in this paper with minimal effort.

ETHICS STATEMENT
All continual learning (CL) methods, including the proposed one, adapt and extend an already trained AI model to better recognize streamed data. CL methods will expedite the deployment of AI systems that help humans, owing to their versatility in adapting to new environments outside the factory or research lab. However, as all CL methods would suffer from adversarial streamed data as well as data bias, which may cause ethnic or gender bias issues, the proposed method would not be an exception. Although the proposed method has no intention of allowing such problematic cases, the method may be exposed to such threats. Relentless efforts should be made to develop mechanisms that prevent such usage in order to make continuously updating machine learning models safer and more enjoyable for humans to use.

A APPENDIX
A.1 IMPLEMENTATION DETAILS
First, we describe implementation details about the two components of the proposed method: the swap worker and the episodic memory. Then, we describe the details of the PyTorch integration of our implementation for ease of use.
Swap worker. CarM implements the swap worker through multiprocessing (pyt) in the Python standard library so that data swapping runs in parallel with PyTorch's default fetch workers dedicated to data decoding and augmentation. The swap worker uses asyncio (asy) to asynchronously load samples from storage to memory, effectively overlapping high-latency I/O operations with other CarM-related operations, such as image decoding, sample replacement in EM, and entropy calculation. The swap worker issues multiple data swapping requests without spinning on or being blocked by I/O. As a result, it is sufficient to have only one swap worker for CarM in the system.
Episodic memory. There are various ways to implement EM to be shared between the fetch workers and the swap worker. The current system favors flexibility over performance, so we opt for implementing EM as a shared object provided by Manager (man) in the Python standard library (multiprocessing.managers), which is based on message passing with server-client semantics. In terms of flexibility, the Manager does not require the clients (i.e., fetch workers and the swap worker) to define the exact data layout in the EM address space or to coordinate for potential memory resizing to accommodate raw samples of different sizes (e.g., image resolutions). Hence, it is sufficient for the client workers to perform reads and writes on EM using indexes on the EM samples; a minimal sketch of this setup follows.
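Below is a minimal, illustrative sketch of the setup just described: a Manager-backed EM shared by index, with a swap worker that overlaps storage I/O via asyncio. All names (e.g., swap_worker, swap_one) are ours, and the real CarM implementation is more involved; this only mirrors the structure described above.

```python
import asyncio
from multiprocessing import Manager, Process, Queue

def load_sample_from_storage(path: str) -> bytes:
    # Stand-in for reading (and, in real code, decoding) a stored sample.
    with open(path, "rb") as f:
        return f.read()

async def swap_one(em, slot: int, path: str) -> None:
    # Retrieve one in-storage sample without blocking other requests.
    loop = asyncio.get_running_loop()
    sample = await loop.run_in_executor(None, load_sample_from_storage, path)
    em[slot] = sample  # replace the victim slot in EM, addressed by index only

async def drain(em, requests) -> None:
    # Issue swap requests concurrently instead of serializing T^b_i -> R^b_i;
    # training proceeds while this I/O is in flight.
    loop = asyncio.get_running_loop()
    pending = set()
    while True:
        slot, path = await loop.run_in_executor(None, requests.get)
        task = asyncio.create_task(swap_one(em, slot, path))
        pending.add(task)                      # hold a reference until done
        task.add_done_callback(pending.discard)

def swap_worker(em, requests) -> None:
    asyncio.run(drain(em, requests))

if __name__ == "__main__":
    manager = Manager()
    em = manager.list([None] * 1000)  # EM shared by index, layout-agnostic
    requests = Queue()                # (victim_slot, storage_path) pairs from the gate
    Process(target=swap_worker, args=(em, requests), daemon=True).start()
    # ... the training loop would run here, with the gate enqueueing victims.
```

In this sketch, fetch workers keep reading em[i] for pre-processing while the swap worker overwrites victim slots, matching the asynchronous retrieval of Section 3.1.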
An alternative, obviously higher-performance implementation would use multiprocessing.shared_memory (sha), which enables direct reads and writes on EM by exposing a common region of memory to the processes. Despite its good performance, this method is less flexible, as all processes must know the precise data layout in a designated EM address range at runtime, thus requiring additional coordination for sample lookups and EM resizing. As our system evolves, we ultimately want to combine the best of both methods to achieve both flexibility and performance.

A.2 DATASETS
Each baseline is evaluated on its own dataset used in the original work. The first rows of Table 4 and Table 5 show the datasets used in the CIFAR subset and the ImageNet subset, respectively, for all baselines. ImageNet-100 is an ImageNet ILSVRC2012 subset used in iCaRL, which contains images at the same resolution as those in the original ImageNet ILSVRC2012. The other datasets used in the ImageNet subset have smaller image resolutions than the original (e.g., 64×64 for Tiny-ImageNet, 84×84 for Mini-ImageNet). In addition, we trained all baselines on ImageNet-1000 to verify the effectiveness of CarM on a large-scale dataset. We note that only ER, iCaRL, and BiC have been compared on the ImageNet-1000 dataset in the literature (Wu et al., 2019). Datasets are split as done in the original work. The second and third rows of Table 4 and Table 5 show the detailed information on the splitting strategy. For all baselines, the ImageNet-1000 dataset is split into 10 tasks, each with 100 classes. Note that all datasets are non-blurry, meaning that each task consists of its own set of classes, and samples belonging to a previous task never appear in the next tasks. Since the experimental results are highly sensitive to the class order across the continuous tasks to train, we follow the same class order used in the original works.

A.3 EXPERIMENTAL DETAILS
We present the effectiveness of the proposed CarM using seven CL methods under their own setups. This section discusses the detailed settings for each method so that the results are reproducible with our source code. We first describe the metrics used for the evaluations.
A.3.1 METRICS
Final accuracy. Final accuracy is the average accuracy over all classes observed after the last task training is done.
Final forgetting. Forgetting indicates how much each task has been forgotten while training new tasks (Chaudhry et al., 2018). Forgetting for a task is calculated by comparing the best accuracy observed over task insertions to the final accuracy of the task when training is over. Final forgetting is the average forgetting across all tasks when training is over.
A.3.2 BASELINE DETAILS
• ER (Ratcliff, 1990) combines all samples in the current stream buffer and the current EM, and passes them over to the model as a training set, i.e., a training bundle. There is no algorithmic optimization applied to the model itself. We manage EM as a ring buffer that assigns EM space equally over all classes observed so far. We use the same hyper-parameters and loss function (binary cross-entropy loss) as used in iCaRL.
• iCaRL (Rebuffi et al., 2017) uses three algorithmic optimizations: distillation loss, herding, and nearest-mean-of-exemplars classification. To transfer the information of old tasks, iCaRL leverages the distillation loss using logits obtained from the most recently trained model for old classes: this loss information is considered as the ground truth for old classes.
Herding is its own EM management method, which populates EM with the samples whose feature vectors are the closest to the average feature vector over all stream data for each class. iCaRL allocates EM space equally over all observed classes.
• TinyER (Chaudhry et al., 2019b) explores four EM management strategies named reservoir, ring buffer, k-means, and mean of features. We adopt reservoir in the experiments because it shows the highest overall performance in the original paper (a sketch of reservoir sampling is given at the end of this subsection). Similar to ER, TinyER retrieves old samples from EM without other optimizations on the model itself. TinyER performs batch-level learning and focuses on an extremely online setup that takes a single pass over every streamed batch.
• BiC (Wu et al., 2019) runs bias correction on the last layer of the neural network, structured as a fully connected layer, to mitigate the data imbalance problem between old samples and new samples. The data imbalance is an inherent problem due to the limited size of EM, and it gets worse as we have a larger number of consecutive classes to train. Similar to iCaRL, BiC opts for a distillation loss, but its entire loss function is a mixture of the distillation loss and a cross-entropy loss that is directly calculated from some reserved samples for old classes.
• GDumb (Prabhu et al., 2020) is a simple rehearsal-based method that uses only the memory to train the model. The memory management is done via greedy balanced sampling, where GDumb tries to keep each class balanced by evicting data categorized into the majority class out of EM. Unlike other methods, the model is trained from scratch for inference and then discarded every time the memory is updated. GDumb uses a cosine annealing learning-rate scheduler and cross-entropy loss for gradient descent.
• DER++ (Buzzega et al., 2020) is a rehearsal-based method with knowledge distillation. Unlike other methods, this approach retains logits (along with samples) in EM for knowledge distillation. For knowledge distillation, DER++ calculates the Euclidean distance between the logits stored in EM and the logits generated by the current network. To enable data swapping on DER++, we store the logits in the storage along with the samples.
• RM (Bang et al., 2021) uses the same backbone as GDumb, but it improves the memory update policy and training method over GDumb. For memory management, RM calculates the uncertainty of each sample and tries to fill the memory with samples from a wide spectrum that ranges from robust samples with low uncertainty to fragile samples with high uncertainty, while keeping the classes balanced. In addition, data augmentation (DA) is proposed to advance the original RM implementation. We use RM without DA to apply data swapping in our work, but we include some results of RM with DA in Section A.4.4.
Reproduction. We use the reported numbers from the original paper for DER++ on Tiny-ImageNet (Buzzega et al., 2020). For iCaRL, we believe we faithfully implement its details, but we could not reach the accuracy reported in the paper. As far as we know, there is no PyTorch source code that reproduces iCaRL on both the CIFAR-100 and ImageNet-100 datasets. In our implementation of iCaRL, we refer to a PyTorch version written by the PodNet authors (Douillard et al., 2020), as they achieve the most comparable results. We use the results obtained from the referred version rather than the reported results, because compared to the reported accuracy, the obtained accuracy is nearly the same for CIFAR-100 and higher for ImageNet-100.
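For reference, here is a minimal sketch of the reservoir sampling adopted for TinyER above. This is the textbook algorithm (Vitter, 1985) in plain Python, not code taken from any baseline; the function name and signature are ours.

```python
import random

def reservoir_update(em: list, capacity: int, sample, n_seen: int) -> int:
    """Classic reservoir sampling (Vitter, 1985).

    After n_seen stream samples, each one remains in EM with
    probability capacity / n_seen. Returns the updated n_seen.
    """
    n_seen += 1
    if len(em) < capacity:
        em.append(sample)             # EM not yet full: always keep
    else:
        j = random.randrange(n_seen)  # uniform draw from [0, n_seen)
        if j < capacity:
            em[j] = sample            # evict a uniformly random slot
    return n_seen
```

GDumb's greedy balanced sampling differs in that it deterministically evicts from whichever class currently occupies the most EM slots.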
A.3.3 HYPER-PARAMETERS
We follow the hyper-parameters presented in the original works; we did not perform a hyper-parameter search for the baselines. Table 6, Table 7, and Table 8 present all the details on the hyper-parameters. Although DER++ updates EM at the batch level and does not consider task boundaries, for datasets larger than MNIST the original paper chooses to take multiple passes per bundle. So, we deem DER++ a task-level learning method as long as we use CIFAR-100 and Tiny-ImageNet as its training datasets. Here, TI and CI denote task-incremental learning and class-incremental learning, respectively. TI is an easy and simplified scenario, where the task ID is given at both training and inference. In the TI setting, the model can classify the input among the classes that belong to the provided task ID. On the contrary, CI is the setting where the task ID is unknown during inference, which is a more realistic case than TI.
A.3.4 DETAILED SPECIFICATION OF MACHINES
Our experiments are performed on machines with the HW specifications presented in Table 9. These machines are also used in measuring the impact of data swapping on training speed.

A.4 ADDITIONAL RESULTS
A.4.1 DISTILLATION ANALYSIS
Effectiveness of features of iCaRL on CarM. We explore iCaRL by measuring accuracy for all 32 possible combinations based on its algorithmic features, i.e., knowledge distillation (D), herding (H), and nearest-mean-of-exemplars (N), along with our CarM-100 (F) or CarM-50 (P). In Figure 5, we show eight combinations that are sufficient to support three interesting findings. First, data swapping without distillation (orange bars) outperforms the other combinations, including pure iCaRL (blue and green bars). Second, for combinations with distillation, applying data swapping does not deliver great accuracy (D/H/N vs the other two in blue bars). Finally, data swapping does not seem to necessitate sophisticated algorithmic features (F&H vs D/H/N), suggesting a potential for model simplification with episodic memory.
Knowledge distillation on CarM. Figure 6 shows accuracy for the CIFAR subset while varying α values in α × soft label loss + (1 − α) × hard label loss for iCaRL, BiC, and DER++. We can draw the same conclusions as discussed in the 'Knowledge distillation on CarM' paragraph of Section 4.1. Below, we describe how each distillation-based method can be transformed into the presented model for loss calculation. The original loss function of iCaRL (Rebuffi et al., 2017) is defined as:

\[
L_{\mathrm{icarl}}(x_i) = -\Big[ \sum_{y=s}^{t} \big\{ \delta_{y=y_i} \log g_y(x_i) + \delta_{y \neq y_i} \log(1 - g_y(x_i)) \big\} + \sum_{y=1}^{s-1} \big\{ q_i^y \log g_y(x_i) + (1 - q_i^y) \log(1 - g_y(x_i)) \big\} \Big] \tag{3}
\]

where $q_i^y$ is the output of the old model, $g_y(x_i)$ is the output of the current model, $\{1, 2, \dots, s-1\}$ is the set of old classes, and $\{s, \dots, t\}$ is the set of new classes. For distillation, it uses a soft target from the previous model for the old classes of the entire current training set. As a result, training the current model heavily relies on the performance of the previous model. In particular, when data belonging to old classes is replayed, since the loss target is only the soft output from the previous model, similar soft outputs from the old model are likely to be repeatedly distilled without the correct hard label. Due to such aggressive distillation, iCaRL cannot take advantage of CarM's data swapping, which enables replaying and training on abundant old data, and positive decision boundary corrections are hindered.
That is, the samples wrongly predicted by the old model will be predicted wrongly in the future as well, even if they are replayed several times by CarM. BiC and DER++ also use a distillation loss; however, unlike iCaRL, they provide a loss term whose target for old classes is the ground truth, i.e., the correct hard label. As a result, BiC and DER++ achieve higher accuracies with CarM. To produce Figure 3 and Figure 6, we modified the loss function of iCaRL by adding another binary cross-entropy term that uses the ground truth as its target, referred to as the hard-label loss (a code sketch of this mixed loss is given at the end of Section A.5):

\[
L_{\mathrm{modified}}(x_i) = \alpha L_{\mathrm{icarl}}(x_i) - (1-\alpha) \sum_{y=1}^{t} \big\{ \delta_{y=y_i} \log g_y(x_i) + \delta_{y \neq y_i} \log(1 - g_y(x_i)) \big\} \tag{4}
\]

Since BiC and DER++ already have their own hard-label loss, we did not modify their loss functions. Note that when $\alpha$ is set to 1.0 in BiC, it is unable to train on any new data, which is an unrealistic situation, so we excluded the result for $\alpha = 1.0$.
A.4.2 INCREMENTAL ACCURACY OF TABLE 1 IN THE MAIN PAPER
Incremental accuracy. We here report incremental accuracy as an additional performance metric. Incremental accuracy is the sequence of average accuracies over the classes observed so far, measured after training each task. Figure 11 and Figure 12 show the incremental accuracy of Table 1 in the main paper. We also mark the accuracy from the original papers for iCaRL on CIFAR-100, iCaRL on ImageNet-100, BiC on ImageNet-100, and DER++ on Tiny-ImageNet. In general, the more tasks (classes) arrive, the larger the accuracy gap between the original method and CarM. This implies that running on CarM better mitigates catastrophic forgetting during long-term training.
A.4.3 ABLATION STUDY ON ER, ICARL, AND GDUMB
We report the results of an ablation study on ER, iCaRL, and GDumb, which were not presented in the main paper. Figure 7 shows accuracy over varying EM sizes, Figure 8 shows accuracy over varying swapping ratios, Figure 9 shows accuracy over varying storage capacities, and Figure 10 shows accuracy when learning 50 tasks. In general, we found observations similar to those discussed in Section 4.2.
Figure 8: Accuracy over varying data swapping ratios (ER, iCaRL, GDumb; swapping ratios Original, 20%, 50%, 80%, 100%).
Figure 9: Accuracy over varying storage capacities (ER, iCaRL, GDumb; capacities Original, 1.5X, 2X, 5X, 10X, All).
A.4.4 RESULTS OF RM WITH DATA AUGMENTATION
We implement RM with data augmentation and show the results in Table 11 using the CIFAR-10 dataset. Both CarM and CarM-50 improve accuracy significantly over the baseline method.
A.4.5 CARM ON EMBEDDED DEVICE
We evaluate CarM on an NVIDIA Jetson TX2 to show its efficacy on a representative embedded AI computing device. Table 12 shows all baselines with CarM-50 and CarM-100 on the CIFAR subset. We see accuracy improvements with CarM similar to those observed in the main paper.
A.5 DISCUSSIONS
We have taken early steps towards leveraging both memory and storage to overcome the forgetting problem in CL while preserving the same training efficiency, which we find to be effective for the hardware we tested. However, as the characteristics of memory and storage may vary significantly, the storage access latency may still become a significant bottleneck unless carefully exploited.
Ideally, given the specs of a hardware configuration (e.g., computation, memory, and available I/O bandwidth), the swapping mechanism could decide an optimal policy that increases the effective memory capacity without adding latency. We leave this as an area of future work, which would make CarM more robust and resilient to variations across different hardware settings.
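Returning briefly to the distillation analysis of Section A.4.1, the following is a minimal PyTorch sketch of the α-mixed loss in Eq. (4). The function name, tensor layout, and the use of sigmoid outputs are our assumptions for illustration, not the exact implementation evaluated above.

```python
import torch
import torch.nn.functional as F

def mixed_icarl_loss(logits, old_logits, targets, n_old_classes, alpha):
    """Sketch of the alpha-mixed loss of Eq. (4): alpha * iCaRL-style
    distillation/BCE loss + (1 - alpha) * hard-label BCE over all classes.

    logits:      current model outputs, shape (B, C)
    old_logits:  previous model outputs, shape (B, C); only the first
                 n_old_classes columns are used as soft targets
    targets:     integer class labels, shape (B,)
    """
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).float()

    # iCaRL term (Eq. 3): soft targets for old classes, hard for new ones
    soft = torch.sigmoid(old_logits[:, :n_old_classes])
    icarl_targets = torch.cat([soft, onehot[:, n_old_classes:]], dim=1)
    l_icarl = F.binary_cross_entropy_with_logits(logits, icarl_targets)

    # hard-label term: ground-truth one-hot targets for all classes
    l_hard = F.binary_cross_entropy_with_logits(logits, onehot)

    return alpha * l_icarl + (1.0 - alpha) * l_hard
```

Setting alpha = 1.0 recovers the original iCaRL loss of Eq. (3), while smaller alpha injects the correct hard label, which is what allowed BiC- and DER++-style losses to benefit from data swapping.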
1. What if you use a bigger memory buffer instead of relying on storage?
2. How does the size and variability of the buffer impact performance?
3. What is the actual size of the storage memory used in the experiments?
4. Why do random, entropy, and dynamic policies have similar results?
5. Does the swapping policy improve performance because of the size and variability of the buffer, or is there another reason?
6. Why doesn't data swapping improve iCaRL performance as much as it does in other methods?
7. Why is having hard labels important when using iCaRL + CarM?
8. Does the coefficient of soft labels have to be high?
9. Can you provide more insights into why old models might be enforcing wrong labels and how having a signal on the true label can help avoid confirmation bias?
10. Are there any errors in the figures (e.g., inconsistent x-axis labels)?
Summary Of The Paper
The paper proposes Carousel Memory (CarM), a new design for episodic memory (EM) in continual learning (CL) systems based on replay or rehearsal of previously observed samples. EM buffers are stored in high-speed memory for fast retrieval but are usually limited in size and in the number of samples they can store. The paper proposes to exploit the abundance of internal storage in some devices to alleviate forgetting, applying asynchronous strategies for swapping samples between memory and storage without necessarily slowing training despite the difference in access speed. In particular, they propose a data swapping mechanism, based on a gating function, to dynamically replace a subset of in-memory samples used for model training with other samples in the storage.
Review
The paper is well written and easy to follow. Background and literature review are on point and up to date. The experiments are thorough and well-detailed.
Questions
I guess my main question is: what if you use a bigger memory buffer? You observed that "data swapping delivers better accuracy over conventional memory-only approaches using much smaller memory". I think you should dig a bit more into this. I understand that the point of this paper is showing how to use storage instead of RAM to achieve this, but what happens if you try larger EM sizes, such as the total one (memory + storage) that you use when you train with CarM? I guess that would also give you some information on the importance of your swapping policy.
Actually, it is not really clear what the actual size of the storage memory is. For instance, looking at plot 4c, what is the meaning of "original"? How many samples are you storing? And how much storage memory?
All policies (random, entropy, dynamic) have similar results. Entropy looks slightly better than the others, but I don't think the difference is very significant. The fact that the random policy works so well makes me think that your method improves performance because of the size and variability of your buffer. For instance, what if you don't use CarM and you use the entropy policy to decide which samples stay in the buffer?
Data swapping does not improve iCaRL performance as it does for other methods. The authors explain this behaviour by studying different ways of computing the loss on old data. In the original paper, iCaRL uses only soft labels obtained from an old classifier. DER++ uses both hard labels (i.e., ground truth) and soft labels, weighting the loss as a linear/convex combination of the hard- and soft-label terms. Having a convex combination of losses with soft and hard labels generally helps in BiC and DER++. Modifying iCaRL in the same way suggests that having hard labels is important when using iCaRL + CarM. In general, the coefficient of soft labels does not have to be high. Is that because old models are enforcing wrong labels and you always need a signal on the true label to avoid confirmation bias? Can you give more insights on this?
Figure 11's x-axis has 10 classes for some methods and 100 for others, and the caption does not explain why. Is it an error? Same for Figure 12.
ICLR
Title
Improving Model Consistency of Decentralized Federated Learning via Sharpness Aware Minimization and Multiple Gossip Approaches
Abstract
To mitigate privacy leakage and reduce the communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server, and each client communicates only with its neighbors in a decentralized communication network. However, existing DFL algorithms tend to feature high inconsistency among local models, which results in severe distribution shifts across clients and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or with sparse connectivity of the communication topology. To alleviate this challenge, we propose two DFL algorithms, named DFedSAM and DFedSAM-MGS, to improve performance. Specifically, DFedSAM leverages gradient perturbation to generate locally flat models via Sharpness Aware Minimization (SAM), which searches for model parameters with uniformly low loss values. In addition, DFedSAM-MGS further boosts DFedSAM by adopting the technique of Multiple Gossip Steps (MGS) for better model consistency, which accelerates the aggregation of local flat models and better balances communication complexity and learning performance. From a theoretical perspective, we present improved convergence rates $\mathcal{O}\big(\frac{1}{T}+\frac{1}{T^2(1-\lambda)^2}\big)$ and $\mathcal{O}\big(\frac{1}{T}+\frac{\lambda^Q+1}{T^2(1-\lambda^Q)^2}\big)$ in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where $1-\lambda$ is the spectral gap of the gossip matrix $\mathbf{W}$ and $Q$ is the number of gossip steps in MGS. Meanwhile, we empirically confirm that our methods achieve competitive performance compared with CFL baselines and outperform existing DFL baselines.
1 INTRODUCTION
Figure 1: Communication topologies: (a) ring, (b) exponential, (c) grid, (d) fully-connected. Figure 2: Loss surfaces of FedAvg (b) and DFedAvg (c).
Federated learning (FL) (Mcmahan et al., 2017; Li et al., 2020b) allows distributed clients to collaboratively train a shared model under the orchestration of the cloud without transmitting local data. However, almost all FL paradigms employ a central server to communicate with clients, which faces several critical challenges, such as limited computational resources, high communication bandwidth cost, and privacy leakage (Kairouz et al., 2021). Compared to the centralized FL (CFL) framework, decentralized FL (DFL, see Figure 1), in which the clients only communicate with their neighbors without a central server, offers a communication advantage and further preserves data privacy (Kairouz et al., 2021; Wang et al., 2021). However, DFL suffers from bottlenecks such as severe inconsistency of local models, due to heterogeneous data and the locality of model aggregation caused by the network connectivity of the communication topology. This inconsistency results in severe over-fitting of local models and degradation of model performance. Therefore, the global/consensus model may deliver inferior performance compared with CFL, especially on heterogeneous data or in the face of sparse connectivity of communication networks. A similar performance pattern of DFL has also been demonstrated by Sun et al. (2022).
To explore the reasons behind this phenomenon, we present the loss landscapes (Li et al., 2018) of FedAvg (Mcmahan et al., 2017) and decentralized FedAvg (DFedAvg, Sun et al. (2022)) on Fashion-MNIST (Xiao et al., 2017) under the same setting in Figure 2 (a) and (b). It is clearly seen that the DFL method has a sharper landscape than the CFL method.
Motivation. Most FL algorithms face the over-fitting issue of local models on heterogeneous data. Many recent works (Sahu et al., 2018; Li et al., 2020c; Karimireddy et al., 2020; Yang et al., 2021; Acar et al., 2021; Wang et al., 2022) focus on CFL and mitigate this issue with various effective solutions. In DFL, this issue can be exacerbated due to the sharp loss landscape caused by the inconsistency of local models (see Figure 2 (a) and (b)). Therefore, the performance of decentralized schemes is usually worse than that of centralized schemes under the same setting (Sun et al., 2022). Consequently, an important research question is: can we design a DFL algorithm that mitigates the inconsistency among local models and achieves performance similar to its centralized counterpart?
To address this question, we propose two DFL algorithms: DFedSAM and DFedSAM-MGS. Specifically, DFedSAM overcomes the local over-fitting issue via gradient perturbation with SAM (Foret et al., 2021) in each client to generate locally flat models. Since each client aggregates the flat models of its neighbors, a potentially flat aggregated model can be generated, which yields high generalization ability. To further boost the performance of DFedSAM, DFedSAM-MGS integrates multiple gossip steps (MGS) (Ye et al., 2020; Ye & Zhang, 2021; Li et al., 2020a) to accelerate the aggregation of local flat models by increasing the number of gossip steps of local communication. It realizes a better trade-off between communication complexity and learning performance by bridging the gap between CFL and DFL, since DFL can be roughly regarded as CFL with a sufficiently large number of gossip steps (see Section 5.4).
Theoretically, we present the convergence rates of our algorithms in the stochastic non-convex setting. We show that the bound becomes looser when the connectivity of the communication topology λ is sufficiently sparse, or the data homogeneity β is sufficiently large, whereas as the consensus/gossip steps Q in MGS increase, it becomes tighter as the impact of the communication topology is alleviated (see Section 4). The theoretical results directly explain why applying SAM and MGS in DFL ensures better performance for various types of communication network topology. Empirically, we conduct extensive experiments on the CIFAR-10 and CIFAR-100 datasets in both the identical data distribution (IID) and non-IID settings. The experimental results confirm that our algorithms achieve competitive performance compared to CFL baselines and outperform DFL baselines (see Section 5.2).
Contribution. Our main contributions can be summarized as follows:
• We propose two DFL algorithms, DFedSAM and DFedSAM-MGS. DFedSAM alleviates the inconsistency of local models by finding locally flat models, while DFedSAM-MGS achieves better consistency on top of DFedSAM via aggregation acceleration and strikes a better trade-off between communication and generalization.
• We present the convergence rates $\mathcal{O}\big(\frac{1}{T}+\frac{1}{T^2(1-\lambda)^2}\big)$ and $\mathcal{O}\big(\frac{1}{T}+\frac{\lambda^Q+1}{T^2(1-\lambda^Q)^2}\big)$ for DFedSAM and DFedSAM-MGS in the non-convex setting, respectively, and show that our algorithms achieve a linear speedup in convergence.
• We conduct extensive experiments to verify the efficacy of our proposed DFedSAM and DFedSAM-MGS, which achieve competitive performance compared with both CFL and DFL baselines.
2 RELATED WORK
Decentralized Federated Learning (DFL). In DFL, in contrast to CFL, clients communicate only with their neighbors over various communication networks without a central server, which offers a communication advantage and preserves data privacy. Lalitha et al. (2018; 2019) take a Bayesian-like approach by introducing a belief over the model parameter space of the clients in a fully decentralized FL framework. Roy et al. (2019) propose BrainTorrent, the first server-less, peer-to-peer approach to FL, and apply it to a medical application in a highly dynamic peer-to-peer FL environment. Sun et al. (2022) apply multiple local iterations of SGD and a quantization method to further reduce the communication cost, and provide convergence results in various convexity settings. Dai et al. (2022) develop a decentralized sparse training technique to further save communication and computation cost.
Sharpness Aware Minimization (SAM). SAM (Foret et al., 2021) is an effective optimizer for training deep learning models, which leverages the flatness geometry of the loss landscape to improve model generalization. Recently, Andriushchenko & Flammarion (2022) study the properties of SAM and provide convergence results of SAM for non-convex objectives. As a powerful optimizer, SAM and its variants have been applied to various machine learning (ML) tasks (Zhao et al., 2022; Kwon et al., 2021; Du et al., 2021; Liu et al., 2022; Abbas et al., 2022). Specifically, Qu et al. (2022) and Caldarola et al. (2022) integrate SAM to improve generalization, thereby mitigating the distribution shift problem and achieving new SOTA performance for CFL. However, to the best of our knowledge, no effort has been devoted to the empirical performance and theoretical analysis of SAM in the DFL setting.
Multiple Gossip Steps (MGS). The advantage of increasing the number of local communications within a network topology is investigated in Ye et al. (2020), where FastMix is proposed with multi-consensus and gradient tracking, establishing optimal computational complexity and near-optimal communication complexity. DeEPCA (Ye & Zhang, 2021) integrates FastMix into a decentralized PCA algorithm to accelerate the training process. DeLi-CoCo (Hashemi et al., 2022) performs multiple compression gossip steps in each iteration for fast convergence with arbitrary communication compression. Network-DANE (Li et al., 2020a) uses multiple gossip steps and generalizes DANE to decentralized scenarios. In general, by increasing the number of gossip steps, local clients approach a better consensus model, towards the performance of CFL. Thus, the use of MGS can also potentially mitigate model inconsistency in the DFL setting.
The work most related to this paper is DFedAvg and DFedAvg with momentum (DFedAvgM) (Sun et al., 2022), which leverage multiple local iterations with the SGD optimizer and significantly improve the performance of the classic decentralized parallel SGD method D-PSGD (Lian et al., 2017).
However, DFL may still suffer from inferior performance due to the severe model inconsistency among clients. Another related work is FedSAM (Qu et al., 2022), which integrates the SAM optimizer into CFL to enhance the flatness of local models and achieves new SOTA performance for CFL. On top of the aforementioned studies, we are the first to extend the SAM optimizer to the DFL setting and simultaneously provide its convergence guarantee in the non-convex setting. Furthermore, we bridge the gap between CFL and DFL by adopting MGS in DFedSAM-MGS, which largely mitigates model inconsistency in DFL.
3 METHODOLOGY
In this section, we address the model inconsistency issue in the DFL setting. Below, we first introduce the problem setup in DFL and then describe the proposed DFedSAM and DFedSAM-MGS in detail.
3.1 PROBLEM SETUP
In this work, we are interested in solving the following finite-sum stochastic non-convex minimization problem in the DFL setting:

\[
\min_{x\in\mathbb{R}^d} f(x) := \frac{1}{m}\sum_{i=1}^m f_i(x), \qquad f_i(x)=\mathbb{E}_{\xi\sim\mathcal{D}_i}F_i(x;\xi), \tag{1}
\]

where $\mathcal{D}_i$ denotes the data distribution of the $i$-th client, which is heterogeneous across clients, $m$ is the number of clients, and $F_i(x;\xi)$ is the local objective function associated with the training data samples $\xi$. Problem (1) is known as empirical risk minimization (ERM) and models many applications in ML. As shown in Figure 1(b), we model the communication network between clients in the decentralized network topology as an undirected connected graph $\mathcal{G}=(\mathcal{N},\mathcal{V},\mathbf{W})$, where $\mathcal{N}:=\{1,2,\dots,m\}$ represents the set of clients and $\mathcal{V}\subseteq\mathcal{N}\times\mathcal{N}$ represents the set of communication channels, each connecting two distinct clients. Furthermore, we emphasize that there is no central server in the decentralized setting, and all clients communicate only with their neighbors through the communication channels $\mathcal{V}$. In addition, we assume Problem (1) is well-defined and denote $f^*$ as the minimal value of $f$, i.e., $f(x)\ge f(x^*)=f^*$ for all $x\in\mathbb{R}^d$.
3.2 DFEDSAM AND DFEDSAM-MGS ALGORITHMS
Instead of searching for a solution via SGD (Bottou, 2010; Bottou et al., 2018), SAM (Foret et al., 2021) seeks a solution in a flat region by adding a small perturbation $\delta$ to the model, i.e., $x+\delta$, yielding more robust performance. As shown in Figure 2, decentralized schemes have a sharper landscape with poorer generalization ability than centralized schemes; however, studies focusing on this issue remain unexplored. In this paper, we extend the SAM optimizer to DFL to address this issue, yielding DFedSAM, whose local loss function is defined as:

\[
f_i(x) = \mathbb{E}_{\xi\sim\mathcal{D}_i}\max_{\|\delta_i\|_2^2\le\rho} F_i(y^{t,k}(i)+\delta_i;\xi_i), \quad i\in\mathcal{N}, \tag{2}
\]

where $y^{t,k}(i)+\delta_i$ is the perturbed model, $\rho$ is a predefined constant controlling the radius of the perturbation, and $\|\cdot\|_2^2$ is the squared $\ell_2$-norm, written as $\|\cdot\|^2$ in the rest of the paper. Similar to CFL methods, in DFL, DFedSAM allows clients to update their local model parameters through multiple local iterations before communication is performed. Specifically, for each client $i\in\{1,2,\dots,m\}$ and each local iteration $k\in\{0,1,\dots,K-1\}$ in each communication round $t\in\{0,1,\dots,T-1\}$, the $k$-th inner iteration of client $i$ is performed as:

\[
y^{t,k+1}(i) = y^{t,k}(i) - \eta\,\tilde{g}^{t,k}(i), \tag{3}
\]

where $\tilde{g}^{t,k}(i)=\nabla F_i(y^{t,k}+\delta(y^{t,k});\xi)$ and $\delta(y^{t,k})=\rho\, g^{t,k}/\|g^{t,k}\|_2$; following Foret et al. (2021), this expression for $\delta$ comes from a first-order Taylor expansion around $y^{t,k}$ for a small value of $\rho$. After $K$ inner iterations in each client, the parameters are updated as $z^t(i)\leftarrow y^{t,K}(i)$ and sent to the neighbors $l\in\mathcal{N}(i)$.
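Before turning to the aggregation step, the per-client inner iteration of Eqs. (2)-(3) can be sketched in PyTorch as follows. The function name and its signature are ours, and we assume every model parameter receives a gradient; this is a minimal sketch, not the authors' exact implementation.

```python
import torch

def dfedsam_inner_step(model, loss_fn, batch, rho, lr):
    """One inner iteration of Eqs. (2)-(3): compute g at y^{t,k}, perturb
    the weights by rho * g / ||g||_2, take the gradient at the perturbed
    point, restore the weights, then apply a plain SGD step."""
    x, y = batch

    # first pass: gradient g^{t,k} at the current point
    model.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    scale = rho / (torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12)

    with torch.no_grad():               # perturb: w <- w + rho * g / ||g||_2
        for p, g in zip(model.parameters(), grads):
            p.add_(g, alpha=scale.item())

    # second pass: perturbed gradient \tilde{g}^{t,k}
    model.zero_grad()
    loss_fn(model(x), y).backward()

    with torch.no_grad():               # restore, then descend with \tilde{g}
        for p, g in zip(model.parameters(), grads):
            p.sub_(g, alpha=scale.item())
        for p in model.parameters():
            p.sub_(p.grad, alpha=lr)
```

Note that each inner iteration requires two forward-backward passes, roughly doubling local computation relative to plain SGD; this is the usual cost of SAM-style optimizers.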
Then each client averages its parameters with those of its neighbors:

\[
x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l). \tag{4}
\]

On the other hand, we use the multiple gossip steps (MGS) technique (Ye et al., 2020; Ye & Zhang, 2021; Hashemi et al., 2022) to achieve better consistency among local models on top of DFedSAM, yielding DFedSAM-MGS, which further boosts performance. DFedSAM-MGS provides a balance between communication cost and generalization ability in the DFL setting. Specifically, the procedure of MGS at the $q$-th step ($q\in\{0,1,\dots,Q-1\}$) can be viewed as two sub-steps, exchanging messages and a local gossip update:

\[
x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l), \qquad z^{t,q+1}(i) = x^{t,q+1}(i). \tag{5}
\]

At the end of MGS, $x^{t+1}(i)=x^{t,Q}(i)$. Both DFedSAM and DFedSAM-MGS are summarized in Algorithm 1 (see Appendix C). DFedSAM trades local computation for communication overhead via multiple local iterations, with local communication performed only once per round, whereas DFedSAM-MGS performs multiple local communications with a larger $Q$ to better synchronize the local clients. Therefore, DFedSAM-MGS can be viewed as a compromise between DFL and CFL. Compared with the existing SOTA DFL methods DFedAvg and DFedAvgM (Sun et al., 2022), the benefits of DFedSAM and DFedSAM-MGS are three-fold: (i) SAM is introduced to alleviate the local over-fitting issue caused by the inconsistency of local models, by seeking a flat model at each client in DFL, which also contributes to making the consensus model flat; (ii) MGS in DFedSAM-MGS further accelerates the aggregation of local flat models for better consistency among local models on top of DFedSAM, and properly balances communication complexity and learning performance; (iii) furthermore, we present a theory unifying the impact of the gradient perturbation $\rho$ in SAM, the number of local communications $Q$ in MGS, and the network topology $\lambda$, along with the data homogeneity $\beta$, upon the convergence rate in Section 4.
4 CONVERGENCE ANALYSIS
In this section, we show the convergence results of DFedSAM and DFedSAM-MGS in the general non-convex FL setting; the detailed proofs are presented in Appendix E. Below, we first give several useful and necessary notations and assumptions.
Definition 1 (The gossip/mixing matrix). (Sun et al., 2022, Definition 1) The gossip matrix $\mathbf{W}=[w_{i,j}]\in[0,1]^{m\times m}$ is assumed to have the following properties: (i) (Graph) If $i\ne j$ and $(i,j)\notin\mathcal{V}$, then $w_{i,j}=0$; otherwise, $w_{i,j}>0$; (ii) (Symmetry) $\mathbf{W}=\mathbf{W}^\top$; (iii) (Null space property) $\mathrm{null}\{\mathbf{I}-\mathbf{W}\}=\mathrm{span}\{\mathbf{1}\}$; (iv) (Spectral property) $\mathbf{I}\succeq\mathbf{W}\succ-\mathbf{I}$. Under these properties, the eigenvalues of $\mathbf{W}$ satisfy $1=|\lambda_1(\mathbf{W})|>|\lambda_2(\mathbf{W})|\ge\dots\ge|\lambda_m(\mathbf{W})|$. Furthermore, $\lambda:=\max\{|\lambda_2(\mathbf{W})|,|\lambda_m(\mathbf{W})|\}$, and $1-\lambda\in(0,1]$ is denoted as the spectral gap of $\mathbf{W}$.
Definition 2 (Homogeneity parameter). (Li et al., 2020a, Definition 2) For any $i\in\{1,2,\dots,m\}$ and the parameter $x\in\mathbb{R}^d$, the homogeneity parameter $\beta$ is defined as:

\[
\beta := \max_{1\le i\le m}\beta_i, \quad \text{with } \beta_i := \sup_{x\in\mathbb{R}^d}\|\nabla f_i(x)-\nabla f(x)\|.
\]

Assumption 1 (Lipschitz smoothness). The function $f_i$ is differentiable and $\nabla f_i$ is $L$-Lipschitz continuous, $\forall i\in\{1,2,\dots,m\}$, i.e., $\|\nabla f_i(x)-\nabla f_i(y)\|\le L\|x-y\|$ for all $x,y\in\mathbb{R}^d$.
Assumption 2 (Bounded variance). The gradient of the function $f_i$ has $\sigma_l$-bounded variance, i.e., $\mathbb{E}_{\xi_i}\|\nabla F_i(x;\xi_i)-\nabla f_i(x)\|^2\le\sigma_l^2$, $\forall i\in\{1,2,\dots,m\}$; the global variance is also bounded, i.e., $\frac{1}{m}\sum_{i=1}^m\|\nabla f_i(x)-\nabla f(x)\|^2\le\sigma_g^2$ for all $x\in\mathbb{R}^d$.
It is not hard to verify that $\sigma_g$ is smaller than the homogeneity parameter $\beta$, i.e., $\sigma_g^2\le\beta^2$.
Assumption 3 (Bounded gradient). For any $i\in\{1,2,\dots,m\}$ and $x\in\mathbb{R}^d$, we have $\|\nabla f_i(x)\|\le B$.
Note that the above assumptions are mild and commonly used in characterizing the convergence rate of FL (Sun et al., 2022; Ghadimi & Lan, 2013; Yang et al., 2021; Bottou et al., 2018; Yu et al., 2019; Reddi et al., 2021). Different from classic decentralized parallel SGD methods such as D-PSGD (Lian et al., 2017), the technical difficulty here is that $z^t(i)-x^t(i)$ fails to be an unbiased estimate of the gradient $\nabla f_i(x^t(i))$ after multiple local iterations, so handling the multiple local iterations is non-trivial. Furthermore, the various communication topologies in DFL are quite different from the setting of SAM in CFL (Qu et al., 2022). Below, we adopt the averaged parameter $\bar{x}^t=\frac{1}{m}\sum_{i=1}^m x^t(i)$ of all clients as the approximate solution of Problem (1).
Theorem 4.1 Let Assumptions 1, 2 and 3 hold, and let the parameters $\{x^t(i)\}_{t\ge0}$ be generated by Algorithm 1. Assume the learning rate of SAM in each client satisfies $0<\eta\le\frac{1}{10KL}$. Let $\bar{x}^t=\frac{1}{m}\sum_{i=1}^m x^t(i)$ and denote by $\Phi(\lambda,m,Q)$ the metric related to the spectral gap, the number of clients, and the number of gossip steps:

\[
\Phi(\lambda,m,Q) = \frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}. \tag{6}
\]

Then the gradient estimate of DFedSAM or DFedSAM-MGS for solving Problem (1) satisfies:

\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2[f(\bar{x}^1)-f^*]}{T(\eta K - 32\eta^3K^2L^2)} + \alpha(K,\rho,\eta) + \Phi(\lambda,m,Q)\,\beta(K,\rho,\eta,\lambda), \tag{7}
\]

where $T$ is the number of communication rounds and the constants are given as

\[
\alpha(K,\rho,\eta) = \frac{\eta L^2K^2}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{2K\rho^2}{2K-1}\Big),
\]
\[
\beta(K,\rho,\eta,\lambda) = \frac{64\eta^5K^3L^4}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2 + \frac{\rho^2}{\eta^2(2K-1)}\Big).
\]

With Theorem 4.1, we state the following convergence rates for DFedSAM and DFedSAM-MGS.
Corollary 4.1.1 Let the local learning rate satisfy $\eta=\mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with the perturbation parameter $\rho=\mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate of DFedSAM satisfies:

\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).
\]

Remark 1 DFedSAM achieves a linear speedup in the general non-convex setting as long as $T\ge K$, which is significantly better than state-of-the-art (SOTA) bounds such as $\mathcal{O}\big(\frac{1}{\sqrt{T}}+\frac{\sigma_g^2}{\sqrt{T}}+\frac{\sigma_g^2+B^2}{(1-\lambda)^2T^{3/2}}\big)$ in Sun et al. (2022). Note that the bound becomes tighter as $\lambda$ decreases; it is dominated by the $\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}$ term when $\lambda\le 1-\frac{K^{1/4}}{T^{3/2}}$, whereas it degrades as $\beta$ increases.
Corollary 4.1.2 Let $Q>1$, let $T$ be large enough, and let $\eta=\mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with perturbation amplitude $\rho=\mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate of DFedSAM-MGS satisfies:

\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{K^{3/2}L^4}{T^2} + \Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).
\]

Remark 2 The impact of the network topology ($1-\lambda$) can be alleviated as $Q$ increases: when the number of clients $m$ is large enough, the term $\frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}}$ of $\Phi(\lambda,m,Q)$ can be neglected, and the term $\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ approaches 1. This means that, using the proposed $Q$-step gossip procedure, model consistency among clients is improved, and DFL over various communication topologies can be roughly viewed as CFL.
Thus, the negative effect of the gradient variances $\sigma_l^2$ and $\beta^2$ can be weakened, especially on sparse network topologies where $\lambda$ is close to 1. In practice, a suitable choice of $Q>1$ makes it possible to achieve a communication-accuracy trade-off in the DFL setting.
5 EXPERIMENTS
In this section, we evaluate the efficacy of our algorithms against six baselines from the CFL and DFL settings. In addition, we conduct several experiments to verify the impact of the communication network topology discussed in Section 4. Furthermore, several ablation studies are conducted.
5.1 EXPERIMENT SETUP
Dataset and Data Partition. The efficacy of the proposed DFedSAM and DFedSAM-MGS is evaluated on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) in both IID and non-IID settings. Specifically, the Dirichlet partition (Hsu et al., 2019) is used to simulate non-IID data across federated clients, where the local data of each client is obtained by splitting the total dataset according to label ratios sampled from the Dirichlet distribution Dir(α), with parameters α = 0.3 and α = 0.6 (a minimal sketch of this partition is given at the end of this subsection).
Baselines. The compared baselines cover several SOTA methods in both the CFL and DFL settings. Specifically, centralized baselines include FedAvg (Mcmahan et al., 2017) and FedSAM (Qu et al., 2022). For the decentralized setting, D-PSGD (Lian et al., 2017), DFedAvg and DFedAvgM (Sun et al., 2022), along with DisPFL (Dai et al., 2022), are used for comparison.
Implementation Details. The total number of clients is set to 100, among which 10% of clients participate in communication each round. Specifically, all clients perform the local iteration step for decentralized methods, while only the participating clients perform local updates for centralized methods. We initialize the local learning rate to 0.1 with a decay rate of 0.998 per communication round for all experiments. For the CIFAR-10 and CIFAR-100 datasets, VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) are adopted as the respective backbones on each client. The number of communication rounds is set to 1000 in the experiments comparing all baselines and studying topology-aware performance. In addition, all ablation studies are conducted on the CIFAR-10 dataset with the number of communication rounds set to 500.
Communication Configurations. For a fair comparison between the decentralized and centralized settings, we apply a dynamic, time-varying connection topology for decentralized methods to ensure that in each round the number of connections is no more than that with a central server. Specifically, the number of clients communicating with their neighbors is controlled to keep the communication volume consistent with centralized methods. Following earlier works, communication complexity is measured by the number of local communications. More details of the experimental setup are presented in Appendix B due to space limitations.
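As referenced above, here is a minimal sketch of one common implementation of the Dir(α) partition. The function name and the per-class splitting strategy are our assumptions, since implementations of Hsu et al. (2019) vary in detail.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Hypothetical sketch of a Dir(alpha) label partition in the spirit of
    Hsu et al. (2019): for each class, split its sample indices across the
    clients with proportions drawn from a Dirichlet distribution."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # cumulative proportions give the split points into the index array
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices  # list of index lists, one per client
```

Smaller α concentrates each class on fewer clients, i.e., Dir(0.3) yields more heterogeneous local distributions than Dir(0.6).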
5.2 PERFORMANCE EVALUATION
Performance compared with baselines. In Table 1 and Figure 3, we evaluate DFedSAM and DFedSAM-MGS (Q = 4) with ρ = 0.01 on the CIFAR-10 and CIFAR-100 datasets in both settings, against all baselines from CFL and DFL. These results clearly show that our proposed algorithms outperform the other decentralized methods on both datasets, and that DFedSAM-MGS outperforms the DFL baselines while roughly matching the performance of the SOTA centralized baseline FedSAM on CIFAR-10 and CIFAR-100, respectively. Specifically, the training and testing accuracies are presented in Table 1 to show the generalization performance. The performance improvement over all other baselines is more evident on CIFAR-10 at the same communication round. For instance, the gap between training accuracy and test accuracy on CIFAR-10 in the IID setting is 14.14% for DFedSAM, 13.22% for DFedSAM-MGS, 15.29% for FedAvg, and 15% for FedSAM. This means our algorithms achieve generalization comparable to the centralized baselines.
Impact of non-IID levels (β). Table 1 shows that our algorithms are robust to different participation cases. The heterogeneity of the local data distribution is varied from IID to Dirichlet 0.6 and Dirichlet 0.3, which makes training the global/consensus model more difficult. For instance, on CIFAR-10, as the non-IID level increases, DFedSAM-MGS achieves better generalization than FedSAM, as the gaps between training and test accuracy for DFedSAM-MGS, {15.27%, 14.51%, 13.22%}, are lower than those for FedSAM, {17.26%, 14.85%, 15%}. Similarly, the gaps for DFedSAM, {17.37%, 15.06%, 14.10%}, are lower than those for FedAvg, {17.60%, 15.82%, 15.27%}. These observations confirm that our algorithms are more robust than the baselines under various degrees of data heterogeneity.
5.3 TOPOLOGY-AWARE PERFORMANCE
We verify the influence of various communication topologies and gossip averaging steps on DFedSAM and DFedSAM-MGS. Different from the comparison of CFL and DFL in Section 5.2, here we only need to verify the key properties of the DFL methods. Thus, the communication type is set to "Complete", i.e., each client communicates with all of its neighbors in the same communication round. The degree of connectivity sparsity λ satisfies: ring > grid > exponential > fully-connected in DFL. From Table 2, our algorithms are clearly superior to all decentralized baselines across the various communication networks, which coincides with our theoretical findings. Specifically, compared with DFedAvgM, DFedSAM and DFedSAM-MGS significantly improve performance on the ring topology, by 0.64% and 8.0%, respectively. Meanwhile, the performance of DFedSAM-MGS across the various topologies is always better than that of the other methods. This observation confirms that multiple gossip steps can alleviate the impact of the network topology even with a small Q = 4. Therefore, our algorithms achieve better generalization and model consistency over various communication topologies.
5.4 ABLATION STUDY
Below, we verify the influence of each component and hyper-parameter in DFedSAM with Q = 1. All ablation studies are conducted with the "exponential" topology, except the study of Q, which uses three topologies; the communication type is "Complete", as in Section 5.3.
Consensus/gossip steps Q. In Figure 4, we investigate the balance between learning performance and communication complexity in three network topologies. We choose multiple steps Q = {1, 2, 3, 4} and study the different balance points under different steps in the three network topologies in Figure 4 (a), (b) and (c). As the number of local communications increases, model performance improves, but the communication complexity increases too. The balance point clearly differs across topologies but exhibits the same tendency, and a relatively larger Q brings better performance for a given communication complexity. Therefore, we select Q = 4 in DFedSAM-MGS under 1000 communication rounds for a better balance. The gossip-averaging simulation after this paragraph illustrates the consensus effect of increasing Q.
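As referenced above, the consensus effect of increasing Q can be illustrated with a toy simulation; the ring mixing matrix and the random "models" below are our own illustrative choices.

```python
import numpy as np

def ring_mixing_matrix(m):
    """Symmetric, doubly stochastic gossip matrix for a ring of m clients:
    each client averages itself and its two neighbors with weight 1/3."""
    W = np.zeros((m, m))
    for i in range(m):
        W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1.0 / 3.0
    return W

m = 16                                    # small ring so mixing is visible
W = ring_mixing_matrix(m)
lam = sorted(np.abs(np.linalg.eigvalsh(W)))[-2]   # max nontrivial |eigenvalue|
Z = np.random.randn(m, 10)                # toy local models, one row per client
dev0 = np.linalg.norm(Z - Z.mean(axis=0))
for Q in (1, 2, 3, 4):
    X = np.linalg.matrix_power(W, Q) @ Z  # Q gossip steps in one round
    dev = np.linalg.norm(X - X.mean(axis=0))
    print(f"Q={Q}: 1-lambda={1 - lam:.3f}, consensus deviation ratio={dev / dev0:.3f}")
```

Since W is doubly stochastic, Q gossip steps contract the deviation from the average by roughly a factor of λ^Q, which is exactly the λ^Q dependence appearing in Φ(λ, m, Q).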
Local iteration steps K. Large local iteration steps K can aid convergence, with theoretical guarantees, in prior DFL work (Sun et al., 2022). To investigate the acceleration in T obtained by adopting larger local iteration steps K, we fix the total batch size and change the number of local training epochs. As shown in Figure 5 (a), our algorithms converge faster with a larger number of local iteration steps K, consistent with the theoretical results in Section 4.
Number of participating clients m. As shown in Figure 5 (b), we compare the performance for different numbers of participating clients m = {50, 100, 150} with the same hyper-parameters. Compared with the larger m = 150, the smaller m = {50, 100} achieve better convergence and test accuracy, since each client holds more local data, which indirectly improves the generalization of the local models and thereby the performance of the consensus model.
Perturbation radius ρ. The perturbation radius ρ affects performance because the added perturbation accumulates as the number of communication rounds T increases; it involves a trade-off between test accuracy and generalization. To select a proper value for our algorithms, we conduct experiments with perturbation radii from the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0} in Figure 5 (c). With ρ = 0.01, we achieve better convergence and performance. Meanwhile, ρ = O(1/√T) yields a linear speedup in convergence (see Section 4).
The effectiveness of SAM and MGS. To validate the effectiveness of SAM and MGS, respectively, we compare DFedAvg, DFedSAM, and DFedSAM-MGS under the same setting. From Table 3, DFedSAM achieves a performance improvement and better generalization than DFedAvg once the SAM optimizer is adopted, and DFedSAM-MGS further boosts performance over DFedSAM, as MGS makes the models more consistent among clients and accelerates convergence.
6 CONCLUSIONS AND FUTURE WORK
In this paper, we focus on the model inconsistency challenge caused by heterogeneous data and the network connectivity of the communication topology in DFL, and we tackle this challenge from the perspective of model generalization. We propose two DFL frameworks, DFedSAM and DFedSAM-MGS, with better model consistency among clients. DFedSAM adopts SAM to obtain a flat model in each client, thereby improving generalization through a flat consensus/global model, while DFedSAM-MGS further improves model consistency on top of DFedSAM by accelerating the aggregation of local flat models, reaching a better trade-off between learning performance and communication complexity. As theoretical findings, we confirm a linear speedup and unify the impacts of the gradient perturbation in SAM, the number of local communications in MGS, and the network topology, along with the data homogeneity, upon the convergence rate in DFL. Furthermore, empirical results verify the superiority of our approaches. For future work, we will continue working towards understanding the effect of SAM and MGS for more desirable generalization in DFL.
B MORE DETAILS ON ALGORITHM IMPLEMENTATION
B.1 DATASETS AND BACKBONES.
CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) are labeled subsets of the 80 Million Tiny Images dataset. They share the same 60,000 input images; CIFAR-100 has a finer labeling, with 100 unique labels, compared to the 10 unique labels of CIFAR-10. VGG-11 is used as the backbone for CIFAR-10, and ResNet-18 is chosen for CIFAR-100, where the batch-norm layers are replaced by group-norm layers due to the detrimental effect of batch norm (a minimal sketch of this replacement is given below).
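A minimal sketch of the batch-norm-to-group-norm replacement mentioned above; the number of groups is our illustrative choice, as the paper does not specify it.

```python
import torch.nn as nn
from torchvision.models import resnet18

def bn_to_gn(module, num_groups=2):
    """Recursively replace every BatchNorm2d with a GroupNorm over the same
    number of channels; num_groups = 2 is our illustrative choice."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)
    return module

model = bn_to_gn(resnet18(num_classes=100))  # ResNet-18 backbone for CIFAR-100
```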
B.2 MORE DETAILS ABOUT BASELINES.
FedAvg is the classic FL method that uses vanilla weighted averaging to train a global model in parallel with a central server. FedSAM applies SAM as the local optimizer to improve model generalization. For decentralized schemes, D-PSGD is a classic decentralized parallel SGD method for reaching a consensus model¹, DFedAvg is the decentralized version of FedAvg, and DFedAvgM uses SGD with momentum on top of DFedAvg to train models on each client, performing multiple local training steps before each communication. Furthermore, DisPFL is a novel personalized FL framework with a decentralized communication protocol that uses a decentralized sparse training technique; for a fair comparison, we report the global accuracy of DisPFL.
¹In this work, we focus on decentralized FL, which refers to local training with multiple local iterations, whereas decentralized learning/training focuses on one-step local training. For instance, D-PSGD (Lian et al., 2017) is a decentralized training algorithm that uses one-step SGD to train local models in each communication round.
B.3 HYPERPARAMETERS.
The total number of clients is set to 100, and each client is restricted to at most 10 neighbors in the decentralized setting. For the centralized setting, the client sample ratio is set to 0.1. The local learning rate is set to 0.1, decayed by 0.998 after each communication round, for all experiments, and the global learning rate is set to 1.0 for centralized methods. The batch size is fixed to 128 for all experiments. We run 1000 global communication rounds for CIFAR-10 and CIFAR-100. The SGD optimizer with weight decay 0.0005 is used for all baselines except FedSAM. The perturbation radius ρ = 0.01 for our algorithms (DFedSAM and DFedSAM-MGS; DFedSAM corresponds to Q = 1) is selected via grid search over the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}, and the value of ρ in FedSAM follows Qu et al. (2022). Following Sun et al. (2022), local optimization uses momentum 0.9 for DFedAvgM. For the local iterations K, the number of local training epochs in D-PSGD is set to 1, while that of all other methods is set to 5.
B.4 COMMUNICATION CONFIGURATIONS.
As noted in Dai et al. (2022), decentralized methods actually generate far more communication volume than centralized methods, because each client in the network topology needs to transmit its local information to its neighbors, whereas only the sampled clients upload their parameter updates to the central server in the centralized setting. Therefore, for a fair comparison, we use a dynamic, time-varying connection topology for the decentralized methods in Section 5.2: we restrict each client to communicating with at most 10 neighbors, randomly sampled without replacement from all clients, and only the 10 clients that are neighbors of each other perform one gossip step to exchange their local information in DFedSAM. In DFedSAM-MGS, the gossip step is performed Q times, so 10 × Q clients sampled without replacement perform gossip steps to exchange their local information.
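A minimal sketch of how such a dynamic, time-varying gossip matrix could be sampled each round; the uniform in-group mixing weights and the function name are our assumptions, as the paper does not spell out the exact weights.

```python
import numpy as np

def sample_dynamic_gossip_matrix(m, group_size=10, rng=None):
    """Hypothetical sketch of the time-varying topology of Section B.4:
    each round, sample `group_size` clients without replacement; within
    the sampled group every client averages uniformly over the group
    (one gossip step), while all remaining clients keep their own model."""
    rng = rng or np.random.default_rng()
    group = rng.choice(m, size=group_size, replace=False)
    W = np.eye(m)
    for i in group:
        for j in group:
            W[i, j] = 1.0 / group_size   # uniform in-group mixing weights
    return W, group

# one communication round with m = 100 clients; for DFedSAM-MGS, Q such
# matrices would be sampled so that 10 * Q clients communicate per round
W, group = sample_dynamic_gossip_matrix(100)
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
```

The resulting W is symmetric and doubly stochastic; note that a single sampled group does not mix all clients, which is why the topology must vary across rounds.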
C ALGORITHMS
Algorithm 1: DFedSAM and DFedSAM-MGS
Input: total number of clients $m$, total number of communication rounds $T$, number of gossip steps $Q$, learning rate $\eta$, and total number of local iterations $K$.
Output: the consensus model $x^T$ after the final communication of all clients with their neighbors.
1: Initialization: randomly initialize each client's model $x^0(i)$.
2: for $t = 0$ to $T-1$ do
3:   for node $i$ in parallel do
4:     set $y^{t,0}(i) \leftarrow x^t(i)$, $y^{t,-1}(i) = y^{t,0}(i)$
5:     for $k = 0$ to $K-1$ do
6:       sample a batch of local data $\xi_i$ and calculate the local gradient $g^{t,k}(i) = \nabla F_i(y^{t,k};\xi_i)$
7:       $\tilde{g}^{t,k}(i) = \nabla F_i(y^{t,k}+\delta(y^{t,k});\xi_i)$ with $\delta(y^{t,k}) = \rho\, g^{t,k}/\|g^{t,k}\|_2$
8:       $y^{t,k+1}(i) = y^{t,k}(i) - \eta\,\tilde{g}^{t,k}(i)$
9:     end for
10:    $z^t(i) \leftarrow y^{t,K}(i)$
11:    receive the neighbors' models $z^t(l)$ from the neighborhood set with adjacency/gossip matrix $\mathbf{W}$
12:    $x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l)$
13:    for $q = 0$ to $Q-1$ do
14:      $x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l)$, with $z^{t,0}(i)=z^t(i)$ (exchanging messages)
15:      $z^{t,q+1}(i) = x^{t,q+1}(i)$ (local gossip update)
16:    end for
17:    $x^{t+1}(i) = x^{t,Q}(i)$
18:  end for
19: end for
D PRELIMINARY LEMMAS
Lemma D.1 [Lemma 4, (Lian et al., 2017)] For any $t\in\mathbb{Z}^+$, the mixing matrix $\mathbf{W}\in\mathbb{R}^{m\times m}$ satisfies $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le\lambda^t$, where $\lambda:=\max\{|\lambda_2(\mathbf{W})|,|\lambda_m(\mathbf{W})|\}$ and, for a matrix $\mathbf{A}$, we denote its spectral norm by $\|\mathbf{A}\|_{\mathrm{op}}$. Furthermore, $\mathbf{1}:=[1,1,\dots,1]^\top\in\mathbb{R}^m$ and $\mathbf{P}:=\frac{\mathbf{1}\mathbf{1}^\top}{m}\in\mathbb{R}^{m\times m}$. In [Proposition 1, (Nedic & Ozdaglar, 2009)], it is also proved that $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le C\lambda^t$ for some $C>0$ that depends on the matrix.
Lemma D.2 [Lemma A.5, (Qu et al., 2022)] (Bounded global variance of $\|\nabla f_i(x+\delta_i)-\nabla f(x+\delta)\|^2$.) As an immediate implication of Assumptions 1 and 2, the variance of the local and global gradients with perturbation can be bounded as follows:
\[
\|\nabla f_i(x+\delta_i)-\nabla f(x+\delta)\|^2 \le 3\sigma_g^2 + 6L^2\rho^2.
\]
Lemma D.3 [Lemma B.1, (Qu et al., 2022)] (Bounded $E_\delta$ of DFedSAM.) For any learning rate satisfying $\eta\le\frac{1}{4KL}$, the updates of DFedSAM have bounded drift due to $\delta_{i,k}-\delta$:
\[
E_\delta = \frac{1}{m}\sum_{i=1}^m \mathbb{E}[\|\delta_{i,k}-\delta\|^2] \le 2K^2\beta^2\eta^2\rho^2,
\]
where $\delta=\rho\frac{\nabla F(x)}{\|\nabla F(x)\|}$ and $\delta_{i,k}=\rho\frac{\nabla F_i(y^{t,k};\xi)}{\|\nabla F_i(y^{t,k};\xi)\|}$.
E CONVERGENCE ANALYSIS FOR DFEDSAM AND DFEDSAM-MGS
In the following, we present the proofs of the convergence results for DFedSAM and DFedSAM-MGS, respectively. Note that the proof of Theorem 4.1 is given in the two sections E.2 and E.3, for $Q=1$ and $Q>1$, respectively.
E.1 PRELIMINARY LEMMAS
Lemma E.1 Assume that Assumptions 1 and 2 hold, and that $(y^{t,k}(i)+\delta_{i,k})_{t\ge0}$, $(x^{t,k}(i))_{t\ge0}$ are generated by DFedSAM for all $i\in\{1,2,\dots,m\}$. If the client updates of DFedSAM use any learning rate $\eta\le\frac{1}{10KL}$, then it follows that:
\[
\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}, \tag{8}
\]
where $0\le k\le K-1$.
Proof. For any local iteration $k\in\{0,1,\dots,K-1\}$ in any node $i$, it holds that
\[
\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 = \frac{1}{m}\sum_{i=1}^m \mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k} - \eta\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - x^t(i)\|^2 \le I + II,
\]
where we expand the argument as $y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i) + \delta_{i,k}-\delta_{i,k-1} - \eta\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1})-\nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1})-\nabla f_i(x^t) + \nabla f_i(x^t)-\nabla f(x^t) + \nabla f(x^t)\big)$ and
\[
I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\Big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + \mathbb{E}\|\delta_{i,k}-\delta_{i,k-1}\|^2\Big),
\]
\[
II = \frac{2K}{m}\sum_{i=1}^m \mathbb{E}\big\|\eta\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1})-\nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1})-\nabla f_i(x^t) + \nabla f_i(x^t)-\nabla f(x^t) + \nabla f(x^t)\big)\big\|^2.
\]
With Lemma D.3 and the assumptions, these terms are bounded as
\[
I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\Big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + 2K^2L^2\eta^2\rho^4\Big),
\]
\[
II = \frac{8K\eta^2}{m}\sum_{i=1}^m\Big(L^2\rho^2+\sigma_l^2+\sigma_g^2+\mathbb{E}\|\nabla f(x^t)\|^2\Big),
\]
where $\mathbb{E}\|\delta_{i,k-1}\|^2\le\rho^2$.
Thus, we obtain
\[
\mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \Big(1+\frac{1}{2K-1}\Big)\mathbb{E}\|(y^{t,k-1}(i)+\delta_{i,k-1})-x^t(i)\|^2 + \frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2,
\]
where $\mathbb{E}\|\nabla f(x^t)\|^2=\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2$, $f(x):=\frac{1}{m}\sum_{i=1}^m f_i(x)$, and $\nabla f_i(x^t):=\nabla f(x^t(i))$. Unrolling the recursion from $\tau=0$ to $k$ yields
\[
\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \frac{1}{m}\sum_{i=1}^m\sum_{\tau=1}^{K-1}\Big(1+\frac{1}{2K-1}\Big)^\tau\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \Big(1+\frac{1}{2K-1}\Big)\rho^2
\]
\[
\le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}.
\]
This completes the proof.
Lemma E.2 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge0}$ be generated by DFedSAM for all $i\in\{1,2,\dots,m\}$ with any learning rate $\eta>0$. Then we have the following bound:
\[
\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le \frac{C_2\eta^2}{(1-\lambda)^2},
\]
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}$.
Proof. Following [Lemma 4, (Sun et al., 2022)], we denote $\mathbf{Z}^t:=[z^t(1),z^t(2),\dots,z^t(m)]^\top\in\mathbb{R}^{m\times d}$. With this notation, we have
\[
\mathbf{X}^{t+1} = \mathbf{W}\mathbf{Z}^t = \mathbf{W}\mathbf{X}^t - \zeta^t, \tag{9}
\]
where $\zeta^t := \mathbf{W}\mathbf{X}^t - \mathbf{W}\mathbf{Z}^t$. The iteration (9) can be rewritten as
\[
\mathbf{X}^t = \mathbf{W}^t\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{t-1-j}\zeta^j. \tag{10}
\]
Obviously, it holds that
\[
\mathbf{W}\mathbf{P} = \mathbf{P}\mathbf{W} = \mathbf{P}. \tag{11}
\]
According to Lemma D.1, $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le\lambda^t$. Multiplying both sides of (10) by $\mathbf{P}$ and using (11), we get
\[
\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j, \tag{12}
\]
where we used the initialization $\mathbf{X}^0=\mathbf{0}$. Then we are led to
\[
\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{t-1-j})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{t-1-j}\|_{\mathrm{op}}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\zeta^j\|. \tag{13}
\]
With the Cauchy inequality,
\[
\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \mathbb{E}\Big(\sum_{j=0}^{t-1}\lambda^{\frac{t-1-j}{2}}\cdot\lambda^{\frac{t-1-j}{2}}\|\zeta^j\|\Big)^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\mathbb{E}\|\zeta^j\|^2\Big).
\]
Direct calculation gives
\[
\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}\|^2\cdot\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.
\]
With Lemma E.1 and Assumption 3, for any $j$,
\[
\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.
\]
Thus, we get
\[
\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \frac{m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2}{(1-\lambda)^2}.
\]
The fact that $\mathbf{X}^t-\mathbf{P}\mathbf{X}^t = [x^t(1)-\bar{x}^t,\; x^t(2)-\bar{x}^t,\;\dots,\; x^t(m)-\bar{x}^t]^\top$ then proves the result.
Lemma E.3 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge0}$ be generated by DFedSAM-MGS for all $i\in\{1,2,\dots,m\}$ with any learning rate $\eta>0$. Then we have the following bound:
\[
\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),
\]
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}$.
Proof. Following [Lemma 4, (Sun et al., 2022)] and Lemma E.2, we denote $\mathbf{Z}^t:=[z^t(1),z^t(2),\dots,z^t(m)]^\top\in\mathbb{R}^{m\times d}$. With this notation, we have
\[
\mathbf{X}^{t+1} = \mathbf{W}^Q\mathbf{Z}^t = \mathbf{W}^Q\mathbf{X}^t - \zeta^t, \tag{14}
\]
where $\zeta^t := \mathbf{W}^Q\mathbf{X}^t - \mathbf{W}^Q\mathbf{Z}^t$. The iteration (14) can be rewritten as
\[
\mathbf{X}^t = (\mathbf{W}^t)^Q\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{(t-1-j)Q}\zeta^j. \tag{15}
\]
Obviously, it holds that
\[
\mathbf{W}^Q\mathbf{P} = \mathbf{P}\mathbf{W}^Q = \mathbf{P}. \tag{16}
\]
According to Lemma D.1, $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le\lambda^t$. Multiplying both sides of (15) by $\mathbf{P}$ and using (16), we get
\[
\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j,
\]
where we used the initialization $\mathbf{X}^0=\mathbf{0}$. Then we are led to
\[
\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{Q(t-1-j)})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{Q(t-1-j)}\|_{\mathrm{op}}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{(t-1-j)(Q-1)}\|\,\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{t-1-j}-\mathbf{P}+\mathbf{P}\|^{Q-1}\|\zeta^j\|.
\]
With the Cauchy inequality,
\[
\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big)
\]
\[
\le \Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big) \le \mathbb{E}\|\zeta^j\|^2\Big(\frac{1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{1}{(1-\lambda^Q)^2}\Big).
\]
Direct calculation gives
\[
\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}^Q\|^2\cdot\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \|\mathbf{W}-\mathbf{P}+\mathbf{P}\|^{2Q}\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le (\|\mathbf{W}-\mathbf{P}\|^{2Q}+\|\mathbf{P}\|^{2Q})\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le (\lambda^Q+1)\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.
\]
With Lemma E.1 and Assumption 3, for any $j$,
\[
\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.
\]
Thus, we get
\[
\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le mC_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),
\]
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}$. The fact that $\mathbf{X}^t-\mathbf{P}\mathbf{X}^t=[x^t(1)-\bar{x}^t,\dots,x^t(m)-\bar{x}^t]^\top$ then proves the result.
E.2 PROOF OF THE CONVERGENCE RESULTS FOR DFEDSAM
Noting that $\mathbf{P}\mathbf{X}^{t+1}=\mathbf{P}\mathbf{W}\mathbf{Z}^t=\mathbf{P}\mathbf{Z}^t$, i.e., $\bar{x}^{t+1}=\bar{z}^t$, where $\mathbf{X}:=[x(1),x(2),\dots,x(m)]^\top\in\mathbb{R}^{m\times d}$ and $\mathbf{Z}:=[z(1),z(2),\dots,z(m)]^\top\in\mathbb{R}^{m\times d}$, we have
\[
\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t+\bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \tag{17}
\]
where $\bar{z}^t:=\frac{1}{m}\sum_{i=1}^m z^t(i)$ and $\bar{x}^t:=\frac{1}{m}\sum_{i=1}^m x^t(i)$. In each node,
\[
\bar{z}^t-\bar{x}^t = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(y^{t,k+1}(i)-y^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\tilde{g}^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\nabla F_i(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi)\big)}{m}. \tag{18}
\]
The Lipschitz continuity of $\nabla f$ gives:
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) + \mathbb{E}\langle\nabla f(\bar{x}^t),\bar{z}^t-\bar{x}^t\rangle + \frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2, \tag{19}
\]
where we used (17). Using (18),
\[
\mathbb{E}\langle K\nabla f(\bar{x}^t),(\bar{z}^t-\bar{x}^t)/K\rangle = \mathbb{E}\langle K\nabla f(\bar{x}^t),-\eta\nabla f(\bar{x}^t)+\eta\nabla f(\bar{x}^t)+(\bar{z}^t-\bar{x}^t)/K\rangle
\]
\[
= -\eta K\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \mathbb{E}\Big\langle K\nabla f(\bar{x}^t),\frac{\eta}{mK}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\nabla f(x^t(i))-\nabla F_i(y^{t,k}+\delta_{i,k};\xi)\big)\Big\rangle
\]
\[
\overset{a)}{\le} \eta\,\mathbb{E}\|\nabla f(\bar{x}^t)\|\cdot\Big\|\frac{L}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(x^t(i)-y^{t,k}-\delta_{i,k}\big)\Big\|
\]
\[
\overset{b)}{\le} \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K^2}{2K}\Big(2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\big)+\frac{2K\rho^2}{2K-1}\Big), \tag{20}
\]
where a) uses the Lipschitz continuity and b) uses Lemma E.1. Meanwhile, we get
\[
\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2 = \frac{L}{2}\mathbb{E}\|\bar{z}^t-\bar{x}^t\|^2 \le \frac{L}{2}\frac{1}{m}\sum_{i=1}^m\|y^{t,K}(i)-x^t(i)\|^2 \le \frac{L}{2}\mathbb{E}\Big\|-\eta\frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i(y^{t,k}+\delta_{i,k};\xi)}{m}\Big\|^2 \overset{a)}{\le} \frac{L}{2}\eta^2K^2B^2, \tag{21}
\]
where a) uses Assumption 3. Furthermore, (19) can be written as
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \tag{22}
\]
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. Thus, with Lemma E.2, we get
\[
\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 \le \frac{2L^2\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2}{m} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \overset{a)}{\le} \frac{2L^2C_2\eta^2}{(1-\lambda)^2} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \tag{23}
\]
where a) uses Lemma E.2 and $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. Therefore, (19) becomes
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 8\eta^3K^2L^2\Big(\frac{2L^2C_2\eta^2}{(1-\lambda)^2}+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\Big)
\]
\[
\le \mathbb{E}f(\bar{x}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + \frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}. \tag{24}
\]
Summing inequality (24) from $t=1$ to $T$, we get the claimed result:
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+\frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}}{\eta K-32\eta^3K^2L^2}.
\]
If we choose the learning rate $\eta=\mathcal{O}(1/L\sqrt{KT})$ with $\eta\le\frac{1}{10KL}$, and the number of communication rounds $T$ is large enough, we have
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K^{3/2}L^2\rho^4}{T}+\frac{K(L^4\rho^2+\sigma_g^2+\sigma_l^2)}{T}+\frac{L^2\rho^2}{T(1-\lambda)^2}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^2L^2\rho^4}{T^2(1-\lambda)^2}+\frac{K(L^2\rho^2+\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).
\]
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho=\mathcal{O}(\frac{1}{\sqrt{T}})$, we have:
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\sigma_g^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).
\]
Under Definition 2, we get
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\beta^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).
\]
This completes the proof.
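As a small sanity check on the compatibility of the two step-size conditions used above (the constant 100 below follows from the constant 10 in $\eta\le\frac{1}{10KL}$ and is ours to make the inequality explicit):

\[
\eta=\frac{1}{L\sqrt{KT}}\le\frac{1}{10KL}
\iff \sqrt{KT}\ge 10K
\iff T\ge 100K,
\]

so the choice $\eta=\mathcal{O}(1/L\sqrt{KT})$ is consistent with $\eta\le\frac{1}{10KL}$ whenever $T=\Omega(K)$, matching the linear-speedup condition $T\ge K$ of Remark 1 up to constants.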
E.3 PROOF OF THE CONVERGENCE RESULTS FOR DFEDSAM-MGS
With multiple gossip steps, $x^0$ and $z^0$ are written as $x$ and $z$, respectively; meanwhile, $\mathbf{Z}^{t,Q}=\mathbf{Z}^{t,0}\cdot\mathbf{W}^Q=\mathbf{Z}^t\cdot\mathbf{W}^Q$. Noting that $\mathbf{P}\mathbf{X}^{t+1}=\mathbf{P}\mathbf{W}^Q\mathbf{Z}^t=\mathbf{P}\mathbf{Z}^t$ ($Q>1$), i.e., $\bar{x}^{t+1}=\bar{z}^t$, where $\mathbf{X}:=[x(1),x(2),\dots,x(m)]^\top\in\mathbb{R}^{m\times d}$ and $\mathbf{Z}:=[z(1),z(2),\dots,z(m)]^\top\in\mathbb{R}^{m\times d}$, we have
\[
\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t+\bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \tag{25}
\]
where $\bar{z}^t:=\frac{1}{m}\sum_{i=1}^m z^t(i)$ and $\bar{x}^t:=\frac{1}{m}\sum_{i=1}^m x^t(i)$. In each node,
\[
\bar{z}^t-\bar{x}^t = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(y^{t,k+1}(i)-y^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\tilde{g}^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\nabla F_i(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi)\big)}{m}. \tag{26}
\]
The Lipschitz continuity of $\nabla f$ gives:
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) + \mathbb{E}\langle\nabla f(\bar{x}^t),\bar{z}^t-\bar{x}^t\rangle + \frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2, \tag{27}
\]
where we used (25). Using (26), and by the same steps as in (20),
\[
\mathbb{E}\langle K\nabla f(\bar{x}^t),(\bar{z}^t-\bar{x}^t)/K\rangle \le \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K^2}{2K}\Big(2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\big)+\frac{2K\rho^2}{2K-1}\Big), \tag{28}
\]
using the Lipschitz continuity and Lemma E.1; and, as in (21),
\[
\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2 \le \frac{L}{2}\eta^2K^2B^2 \tag{29}
\]
by Assumption 3. Furthermore, (27) can be written as
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \tag{30}
\]
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. Thus, with Lemma E.3, we get
\[
\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 \le \frac{2L^2\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2}{m} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \overset{a)}{\le} 2L^2C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big) + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \tag{31}
\]
where a) uses Lemma E.3 and $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. Therefore, (27) becomes
\[
\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 8\eta^3K^2L^2\Big(2L^2C_2\eta^2\big(\tfrac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\tfrac{\lambda^Q+1}{(1-\lambda^Q)^2}\big)+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\Big)
\]
\[
\le \mathbb{E}f(\bar{x}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 16C_2\eta^5K^2L^4\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big). \tag{32}
\]
Summing inequality (32) from $t=1$ to $T$, we get:
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+16C_2\eta^5K^2L^4\big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\big)}{\eta K-32\eta^3K^2L^2}.
\]
If we choose the learning rate $\eta=\mathcal{O}(1/L\sqrt{KT})$ with $\eta\le\frac{1}{10KL}$, and the number of communication rounds $T$ is large enough, then with Definition 2 and $\Phi(\lambda,m,Q)=\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ being the key topology-related quantity in the convergence bound (involving the spectral gap, the number of clients, and the number of gossip steps), we have
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K^{3/2}L^2\rho^4}{T}+\frac{K(L^4\rho^2+\beta^2+\sigma_l^2)}{T}+\Phi(\lambda,m,Q)\Big(\frac{L^2\rho^2}{T}+\frac{K^2L^2\rho^4}{T^2}+\frac{K(L^2\rho^2+\beta^2+\sigma_l^2+B^2)}{T^2}\Big)\Big).
\]
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho=\mathcal{O}(\frac{1}{\sqrt{T}})$, we have:
\[
\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\beta^2+\sigma_l^2)}{T}+\frac{K^{3/2}L^4}{T^2}+\Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).
\]
This completes the proof.
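To see how the metric $\Phi(\lambda,m,Q)$ from Theorem 4.1 behaves numerically, a tiny Python helper follows; the values of λ and m below are our illustrative choices.

```python
def phi(lam, m, Q):
    """Phi(lambda, m, Q) from Theorem 4.1:
    (lam^Q + 1) / ((1 - lam)^2 * m^(2(Q-1))) + (lam^Q + 1) / (1 - lam^Q)^2."""
    return (lam**Q + 1) / ((1 - lam) ** 2 * m ** (2 * (Q - 1))) \
         + (lam**Q + 1) / (1 - lam**Q) ** 2

# a sparse topology (lambda close to 1) with m = 100 clients
for Q in (1, 2, 3, 4, 10):
    print(Q, phi(lam=0.99, m=100, Q=Q))
```

The first term vanishes quickly in m for Q > 1, and the second term decreases toward 1 as Q grows, matching the claim of Remark 2 that MGS suppresses the topology dependence of the bound.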
1. What is the focus and contribution of the paper on decentralized federated learning?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical soundness and experimental results?
3. What are the weaknesses of the paper, especially regarding its novelty and originality?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper proposes a new algorithm, DFedSAM, within the decentralized federated learning framework, where communication between clients is performed only within local neighborhoods. To tackle the sharper loss landscape induced by decentralization, the algorithm adopts a sharpness aware minimization (SAM) strategy, adding a perturbation to the iterate before gradient evaluation. A multi-gossip-step variant is also provided to improve model consistency. Extensive experiments are included to show the effectiveness of the method.
Strengths And Weaknesses
Strengths:
- Theoretical soundness
- Extensive experiments
Weaknesses:
- Limited novelty
Clarity, Quality, Novelty And Reproducibility
The paper is fairly easy to read through; the novelty is limited, as the algorithm is an application of the existing SAM method in the decentralized federated learning setting.
ICLR
Title
Improving Model Consistency of Decentralized Federated Learning via Sharpness Aware Minimization and Multiple Gossip Approaches

Abstract
To mitigate privacy leakage and reduce the communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server and each client only communicates with its neighbors in a decentralized communication network. However, existing DFL algorithms tend to feature high inconsistency among local models, which results in severe distribution shifts across clients and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or with sparse connectivity of the communication topology. To alleviate this challenge, we propose two DFL algorithms named DFedSAM and DFedSAM-MGS to improve the performance. Specifically, DFedSAM leverages gradient perturbation to generate locally flat models via Sharpness Aware Minimization (SAM), which searches for model parameters with uniformly low loss values. In addition, DFedSAM-MGS further boosts DFedSAM by adopting the technique of Multiple Gossip Steps (MGS) for better model consistency, which accelerates the aggregation of local flat models and better balances communication complexity and learning performance. From the theoretical perspective, we present improved convergence rates $\mathcal{O}\big(\frac{1}{T} + \frac{1}{T^2(1-\lambda)^2}\big)$ and $\mathcal{O}\big(\frac{1}{T} + \frac{\lambda^Q+1}{T^2(1-\lambda^Q)^2}\big)$ in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where $1-\lambda$ is the spectral gap of the gossip matrix $\mathbf{W}$ and $Q$ is the number of gossip steps in MGS. Meanwhile, we empirically confirm that our methods can achieve competitive performance compared with CFL baselines and outperform existing DFL baselines.

1 INTRODUCTION
[Figure 1: decentralized communication topologies — (a) Ring, (b) Exponential, (c) Grid, (d) Fully-connected. Figure 2: loss surfaces of FedAvg and DFedAvg.]
Federated learning (FL) (Mcmahan et al., 2017; Li et al., 2020b) allows distributed clients to collaboratively train a shared model under the orchestration of the cloud without transmitting local data. However, almost all FL paradigms employ a central server to communicate with clients, which faces several critical challenges, such as limited computational resources, high communication bandwidth cost, and privacy leakage (Kairouz et al., 2021). Compared to the centralized FL (CFL) framework, decentralized FL (DFL, see Figure 1), in which the clients only communicate with their neighbors without a central server, offers a communication advantage and further preserves data privacy (Kairouz et al., 2021; Wang et al., 2021). However, DFL suffers from bottlenecks such as severe inconsistency of local models due to heterogeneous data and the locality of model aggregation caused by the network connectivity of the communication topology. This inconsistency results in severe over-fitting in local models and model performance degradation. Therefore, the global/consensus model may deliver inferior performance compared with CFL, especially on heterogeneous data or in the face of sparse connectivity of communication networks. A similar performance pattern of DFL has also been demonstrated by Sun et al. (2022).
To explore the reasons behind this phenomenon, we present the structure of the loss landscapes (Li et al., 2018) for FedAvg (Mcmahan et al., 2017) and decentralized FedAvg (DFedAvg, Sun et al. (2022)) on Fashion-MNIST (Xiao et al., 2017) with the same setting in Figure 2 (a) and (b). It is clearly seen that the DFL method has a sharper loss landscape than the CFL method.
Motivation. Most FL algorithms face the over-fitting issue of local models on heterogeneous data. Many recent works (Sahu et al., 2018; Li et al., 2020c; Karimireddy et al., 2020; Yang et al., 2021; Acar et al., 2021; Wang et al., 2022) focus on CFL and mitigate this issue with various effective solutions. In DFL, this issue can be exacerbated due to the sharp loss landscape caused by the inconsistency of local models (see Figure 2 (a) and (b)). Therefore, the performance of decentralized schemes is usually worse than that of centralized schemes with the same setting (Sun et al., 2022). Consequently, an important research question is: can we design a DFL algorithm that mitigates inconsistency among local models and achieves performance similar to its centralized counterpart?
To address this question, we propose two DFL algorithms: DFedSAM and DFedSAM-MGS. Specifically, DFedSAM overcomes the local over-fitting issue via gradient perturbation with SAM (Foret et al., 2021) in each client to generate locally flat models. Since each client aggregates the flat models of its neighbors, a potentially flat aggregated model can be generated, which results in high generalization ability. To further boost the performance of DFedSAM, DFedSAM-MGS integrates multiple gossip steps (MGS) (Ye et al., 2020; Ye & Zhang, 2021; Li et al., 2020a) to accelerate the aggregation of local flat models by increasing the number of gossip steps of local communications. It realizes a better trade-off between communication complexity and learning performance by bridging the gap between CFL and DFL, since DFL can be roughly regarded as CFL with a sufficiently large number of gossip steps (see Section 5.4). Theoretically, we present the convergence rates for our algorithms in the stochastic non-convex setting. We show that the bound becomes looser when the connectivity of the communication topology λ is sufficiently sparse or the data homogeneity β is sufficiently large, while as the consensus/gossip steps Q in MGS increase, it becomes tighter as the impact of the communication topology is alleviated (see Section 4). The theoretical results directly explain why the application of SAM and MGS in DFL can ensure better performance with various types of communication network topologies. Empirically, we conduct extensive experiments on the CIFAR-10 and CIFAR-100 datasets in both the independent and identically distributed (IID) and non-IID settings. The experimental results confirm that our algorithms achieve competitive performance compared to CFL baselines and outperform DFL baselines (see Section 5.2).
Contribution. Our main contributions can be summarized as three-fold:
• We propose two DFL algorithms, DFedSAM and DFedSAM-MGS. DFedSAM alleviates the inconsistency of local models by obtaining locally flat models, while DFedSAM-MGS achieves better consistency on top of DFedSAM via aggregation acceleration and offers a better trade-off between communication and generalization.
• We present the convergence rates $\mathcal{O}\big(\frac{1}{T} + \frac{1}{T^2(1-\lambda)^2}\big)$ and $\mathcal{O}\big(\frac{1}{T} + \frac{\lambda^Q+1}{T^2(1-\lambda^Q)^2}\big)$ for DFedSAM and DFedSAM-MGS in the non-convex setting, respectively, and show that our algorithms can achieve a linear speedup for convergence.
• We conduct extensive experiments to verify the efficacy of our proposed DFedSAM and DFedSAM-MGS, which achieve competitive performance compared with both CFL and DFL baselines.

2 RELATED WORK
Decentralized Federated Learning (DFL). In DFL, in comparison to CFL, clients only communicate with their neighbors in various communication networks without a central server, which offers a communication advantage and preserves data privacy. Lalitha et al. (2018; 2019) take a Bayesian-like approach by introducing a belief over the model parameter space of the clients in a fully DFL framework. Roy et al. (2019) propose BrainTorrent, the first server-less, peer-to-peer approach to FL, and apply it to a medical application in a highly dynamic peer-to-peer FL environment. Sun et al. (2022) apply multiple local iterations with SGD and a quantization method to further reduce the communication cost, and provide convergence results in various convexity settings. Dai et al. (2022) develop a decentralized sparse training technique to further save communication and computation costs.
Sharpness Aware Minimization (SAM). SAM (Foret et al., 2021) is an effective optimizer for training deep learning models, which leverages the flatness geometry of the loss landscape to improve model generalization ability. Recently, Andriushchenko & Flammarion (2022) studied the properties of SAM and provided convergence results of SAM for non-convex objectives. As a powerful optimizer, SAM and its variants have been applied to various machine learning (ML) tasks (Zhao et al., 2022; Kwon et al., 2021; Du et al., 2021; Liu et al., 2022; Abbas et al., 2022). Specifically, Qu et al. (2022) and Caldarola et al. (2022) integrate SAM to improve generalization, thus mitigating the distribution shift problem and achieving new SOTA performance for CFL. However, to the best of our knowledge, no efforts have been devoted to the empirical performance and theoretical analysis of SAM in the DFL setting.
Multiple Gossip Steps (MGS). The advantage of increasing the number of local communications within a network topology is investigated in Ye et al. (2020), in which FastMix is proposed with multi-consensus and gradient tracking, establishing the optimal computational complexity and a near-optimal communication complexity. DeEPCA (Ye & Zhang, 2021) integrates FastMix into a decentralized PCA algorithm to accelerate the training process. DeLi-CoCo (Hashemi et al., 2022) performs multiple compression gossip steps in each iteration for fast convergence with arbitrary communication compression. Network-DANE (Li et al., 2020a) uses multiple gossip steps and generalizes DANE to decentralized scenarios. In general, by increasing the number of gossip steps, local clients can approach a better consensus model toward the performance in CFL. Thus, the use of MGS can also potentially mitigate the model inconsistency in the DFL setting.
The work most related to this paper is DFedAvg and DFedAvg with momentum (DFedAvgM) in Sun et al. (2022), which leverage multiple local iterations with the SGD optimizer and significantly improve the performance of the classic decentralized parallel SGD method D-PSGD (Lian et al., 2017).
However, DFL may suffer from inferior performance due to the severe model inconsistency issue among the clients. Another related work is FedSAM (Qu et al., 2022), which integrates the SAM optimizer into CFL to enhance the flatness of local models and achieves new SOTA performance for CFL. On top of the aforementioned studies, we are the first to extend the SAM optimizer to the DFL setting and simultaneously provide its convergence guarantee in the non-convex setting. Furthermore, we bridge the gap between CFL and DFL by adopting MGS in DFedSAM-MGS, which largely mitigates the model inconsistency in DFL.

3 METHODOLOGY
In this section, we address this issue in the DFL setting. Below, we first formalize the problem setup in DFL and then describe the proposed DFedSAM and DFedSAM-MGS in detail.

3.1 PROBLEM SETUP
In this work, we are interested in solving the following finite-sum stochastic non-convex minimization problem in the DFL setting:
$$\min_{\mathbf{x}\in\mathbb{R}^d} f(\mathbf{x}) := \frac{1}{m}\sum_{i=1}^m f_i(\mathbf{x}), \qquad f_i(\mathbf{x}) = \mathbb{E}_{\xi\sim\mathcal{D}_i} F_i(\mathbf{x};\xi), \tag{1}$$
where $\mathcal{D}_i$ denotes the data distribution in the $i$-th client, which is heterogeneous across clients, $m$ is the number of clients, and $F_i(\mathbf{x};\xi)$ is the local objective function associated with the training data samples $\xi$. Problem (1) is known as empirical risk minimization (ERM) and models many applications in ML. As shown in Figure 1(b), we model the communication network in the decentralized topology between clients as an undirected connected graph $\mathcal{G} = (\mathcal{N}, \mathcal{V}, \mathbf{W})$, where $\mathcal{N} := \{1, 2, \ldots, m\}$ represents the set of clients and $\mathcal{V} \subseteq \mathcal{N}\times\mathcal{N}$ represents the set of communication channels, each connecting two distinct clients. Furthermore, we emphasize that there is no central server in the decentralized setting and all clients only communicate with their neighbors via the communication channels $\mathcal{V}$. In addition, we assume Problem (1) is well-defined and denote $f^*$ as the minimal value of $f$, i.e., $f(\mathbf{x}) \ge f(\mathbf{x}^*) = f^*$ for all $\mathbf{x}\in\mathbb{R}^d$.

3.2 DFEDSAM AND DFEDSAM-MGS ALGORITHMS
Instead of searching for a solution via SGD (Bottou, 2010; Bottou et al., 2018), SAM (Foret et al., 2021) seeks a solution in a flat region by adding a small perturbation to the model, i.e., $\mathbf{x}+\delta$, for more robust performance. As shown in Figure 2, decentralized schemes have a sharper landscape with poorer generalization ability than centralized schemes; however, studies focusing on this issue remain unexplored. In this paper, we extend the SAM optimizer to DFL to investigate this issue, dubbed DFedSAM, whose local loss function is defined as:
$$f_i(\mathbf{x}) = \mathbb{E}_{\xi\sim\mathcal{D}_i}\max_{\|\delta_i\|_2^2\le\rho} F_i(\mathbf{y}^{t,k}(i)+\delta_i;\xi_i), \quad i\in\mathcal{N}, \tag{2}$$
where $\mathbf{y}^{t,k}(i)+\delta_i$ is viewed as the perturbed model, $\rho$ is a predefined constant controlling the radius of the perturbation, and $\|\cdot\|_2^2$ is the squared $l_2$-norm, simplified to $\|\cdot\|_2$ in the rest of the paper. Similar to CFL methods, in DFL, DFedSAM allows clients to update the local model parameters with multiple local iterations before communication is performed. Specifically, for each client $i\in\{1,2,\ldots,m\}$ and each local iteration $k\in\{0,1,\ldots,K-1\}$ in each communication round $t\in\{0,1,\ldots,T-1\}$, the $k$-th inner iteration in client $i$ is performed as:
$$\mathbf{y}^{t,k+1}(i) = \mathbf{y}^{t,k}(i) - \eta\tilde{\mathbf{g}}^{t,k}(i), \tag{3}$$
where $\tilde{\mathbf{g}}^{t,k}(i) = \nabla F_i(\mathbf{y}^{t,k}+\delta(\mathbf{y}^{t,k});\xi)$ and $\delta(\mathbf{y}^{t,k}) = \rho\,\mathbf{g}^{t,k}/\|\mathbf{g}^{t,k}\|_2$. Following Foret et al. (2021), this perturbation is obtained from a first-order Taylor expansion around $\mathbf{y}^{t,k}$ for a small value of $\rho$. After $K$ inner iterations in each client, the parameters are updated as $\mathbf{z}^t(i)\leftarrow\mathbf{y}^{t,K}(i)$ and sent to the neighbors $l\in\mathcal{N}(i)$ after local updates; a minimal sketch of this local update is given below.
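For concreteness, here is a minimal PyTorch-style sketch of the inner iteration in Eq. (3). It is our illustration rather than the authors' released code; the function name, the data loader, and the cycling logic are hypothetical conveniences.

```python
import torch

def dfedsam_local_update(model, loss_fn, loader, K, eta, rho):
    """Run K SAM inner iterations (Eq. 3) on one client; returns z^t(i)."""
    params = [p for p in model.parameters() if p.requires_grad]
    data_iter = iter(loader)
    for _ in range(K):
        try:
            x, y = next(data_iter)
        except StopIteration:          # cycle the loader if K > its length
            data_iter = iter(loader)
            x, y = next(data_iter)
        # g^{t,k} = grad F_i(y^{t,k}; xi)
        grads = torch.autograd.grad(loss_fn(model(x), y), params)
        grad_norm = torch.sqrt(sum((g * g).sum() for g in grads)) + 1e-12
        deltas = [rho * g / grad_norm for g in grads]   # delta = rho * g / ||g||_2
        with torch.no_grad():
            for p, d in zip(params, deltas):
                p.add_(d)              # ascend to the perturbed point y^{t,k} + delta
        # \tilde{g}^{t,k} = grad F_i(y^{t,k} + delta; xi)
        sam_grads = torch.autograd.grad(loss_fn(model(x), y), params)
        with torch.no_grad():
            for p, d, g in zip(params, deltas, sam_grads):
                p.sub_(d)              # undo the perturbation
                p.sub_(eta * g)        # descent step of Eq. (3)
    return model                        # z^t(i) <- y^{t,K}(i), sent to neighbors
```

Note the two gradient evaluations per step: one at the current point (to form the perturbation) and one at the perturbed point (to take the descent step), which is the standard SAM cost of roughly twice a plain SGD step.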
Then each client averages its parameters with the information of its neighbors:
$$\mathbf{x}^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\,\mathbf{z}^t(l). \tag{4}$$
On the other hand, we use the multiple gossip steps (MGS) technique (Ye et al., 2020; Ye & Zhang, 2021; Hashemi et al., 2022) to achieve better consistency among local models on top of DFedSAM, dubbed DFedSAM-MGS, thereby further boosting the performance. DFedSAM-MGS provides a balance between communication cost and generalization ability in the DFL setting. Specifically, the procedure of MGS at the $q$-th step ($q\in\{0,1,\ldots,Q-1\}$) can be viewed as two sub-steps, exchanging messages and a local gossip update:
$$\mathbf{x}^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\,\mathbf{z}^{t,q}(l), \quad\text{and}\quad \mathbf{z}^{t,q+1}(i) = \mathbf{x}^{t,q+1}(i). \tag{5}$$
At the end of MGS, $\mathbf{x}^{t+1}(i) = \mathbf{x}^{t,Q}(i)$. Both DFedSAM and DFedSAM-MGS are summarized in Algorithm 1 (see Appendix C); a small numeric sketch of the gossip mixing appears at the end of this subsection. It is clearly seen that DFedSAM trades off local computation complexity against communication overhead via multiple local iterations, with local communication performed only once per round, whereas DFedSAM-MGS performs multiple local communications with a larger $Q$ to better synchronize all local clients. Therefore, DFedSAM-MGS can be viewed as a compromise between DFL and CFL. Compared with the existing SOTA DFL methods DFedAvg and DFedAvgM (Sun et al., 2022), the benefits of DFedSAM and DFedSAM-MGS are three-fold: (i) SAM is introduced to alleviate the local over-fitting issue caused by the inconsistency of local models by seeking a flat model at each client in DFL, which also helps make the consensus model flat; (ii) MGS in DFedSAM-MGS is used to further accelerate the aggregation of local flat models for better consistency among local models on top of DFedSAM and properly balances communication complexity and learning performance; (iii) furthermore, we present a theory unifying the impacts of the gradient perturbation $\rho$ in SAM, the number of local communications $Q$ in MGS, and the network topology $\lambda$, along with the data homogeneity $\beta$, on the convergence rate in Section 4.

4 CONVERGENCE ANALYSIS
In this section, we show the convergence results of DFedSAM and DFedSAM-MGS for the general non-convex FL setting; the detailed proof is presented in Appendix E. Below, we first give several useful and necessary notations and assumptions.
Definition 1 (The gossip/mixing matrix). (Sun et al., 2022, Definition 1) The gossip matrix $\mathbf{W} = [w_{i,j}]\in[0,1]^{m\times m}$ is assumed to have the following properties: (i) (Graph) If $i\ne j$ and $(i,j)\notin\mathcal{V}$, then $w_{i,j}=0$; otherwise, $w_{i,j}>0$; (ii) (Symmetry) $\mathbf{W}=\mathbf{W}^\top$; (iii) (Null space property) $\mathrm{null}\{\mathbf{I}-\mathbf{W}\} = \mathrm{span}\{\mathbf{1}\}$; (iv) (Spectral property) $\mathbf{I}\succeq\mathbf{W}\succ-\mathbf{I}$. Under these properties, the eigenvalues of $\mathbf{W}$ satisfy $1 = |\lambda_1(\mathbf{W})| > |\lambda_2(\mathbf{W})| \ge \cdots \ge |\lambda_m(\mathbf{W})|$. Furthermore, $\lambda := \max\{|\lambda_2(\mathbf{W})|, |\lambda_m(\mathbf{W})|\}$, and $1-\lambda\in(0,1]$ is denoted as the spectral gap of $\mathbf{W}$.
Definition 2 (Homogeneity parameter). (Li et al., 2020a, Definition 2) For any $i\in\{1,2,\ldots,m\}$ and the parameter $\mathbf{x}\in\mathbb{R}^d$, the homogeneity parameter $\beta$ is defined as:
$$\beta := \max_{1\le i\le m}\beta_i, \quad\text{with}\quad \beta_i := \sup_{\mathbf{x}\in\mathbb{R}^d}\|\nabla f_i(\mathbf{x})-\nabla f(\mathbf{x})\|.$$
Assumption 1 (Lipschitz smoothness). The function $f_i$ is differentiable and $\nabla f_i$ is $L$-Lipschitz continuous, $\forall i\in\{1,2,\ldots,m\}$, i.e., $\|\nabla f_i(\mathbf{x})-\nabla f_i(\mathbf{y})\|\le L\|\mathbf{x}-\mathbf{y}\|$ for all $\mathbf{x},\mathbf{y}\in\mathbb{R}^d$.
Assumption 2 (Bounded variance). The gradient of the function $f_i$ has $\sigma_l$-bounded variance, i.e., $\mathbb{E}_{\xi_i}\|\nabla F_i(\mathbf{x};\xi_i)-\nabla f_i(\mathbf{x})\|^2\le\sigma_l^2$, $\forall i\in\{1,2,\ldots,m\}$; the global variance is also bounded, i.e., $\frac{1}{m}\sum_{i=1}^m\|\nabla f_i(\mathbf{x})-\nabla f(\mathbf{x})\|^2\le\sigma_g^2$ for all $\mathbf{x}\in\mathbb{R}^d$.
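To make Definition 1 and the mixing step in Eqs. (4)–(5) concrete, the sketch below (our illustration with made-up sizes, not tied to the paper's code) builds a symmetric, doubly stochastic gossip matrix for a ring of m clients, computes λ and the spectral gap, and shows how Q gossip steps drive the stacked client parameters toward consensus.

```python
import numpy as np

def ring_gossip_matrix(m):
    """Symmetric doubly stochastic W for a ring: self + two neighbors, weight 1/3."""
    W = np.zeros((m, m))
    for i in range(m):
        for j in (i - 1, i, i + 1):
            W[i, j % m] = 1.0 / 3.0
    return W

def gossip_lambda(W):
    """lambda := max{|lambda_2(W)|, |lambda_m(W)|}; the spectral gap is 1 - lambda."""
    eigs = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return eigs[1]

def multi_gossip(Z, W, Q):
    """Eq. (5): Q rounds of x <- W z, z <- x on stacked parameters Z (m x d)."""
    for _ in range(Q):
        Z = W @ Z
    return Z

m, d = 16, 4
W = ring_gossip_matrix(m)
lam = gossip_lambda(W)
Z = np.random.default_rng(0).normal(size=(m, d))
consensus = Z.mean(axis=0)
print(f"lambda = {lam:.3f}, spectral gap = {1 - lam:.3f}")
for Q in (1, 2, 4, 8):
    err = np.linalg.norm(multi_gossip(Z, W, Q) - consensus)
    print(f"Q = {Q}: consensus error = {err:.3f}")
```

Since W^Q Z converges to the all-average P Z as Q grows, the printed consensus error shrinks with Q, which is exactly the mechanism DFedSAM-MGS exploits.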
It is not hard to verify that $\sigma_g$ in Assumption 2 is smaller than the homogeneity parameter $\beta$, i.e., $\sigma_g^2\le\beta^2$.
Assumption 3 (Bounded gradient). For any $i\in\{1,2,\ldots,m\}$ and $\mathbf{x}\in\mathbb{R}^d$, we have $\|\nabla f_i(\mathbf{x})\|\le B$.
Note that the above assumptions are mild and commonly used in characterizing the convergence rate of FL (Sun et al., 2022; Ghadimi & Lan, 2013; Yang et al., 2021; Bottou et al., 2018; Yu et al., 2019; Reddi et al., 2021). Different from classic decentralized parallel SGD methods such as D-PSGD (Lian et al., 2017), the technical difficulty is that $\mathbf{z}^t(i)-\mathbf{x}^t(i)$ fails to be an unbiased estimate of the gradient $\nabla f_i(\mathbf{x}^t(i))$ after multiple local iterations, so merging the multiple local iterations is non-trivial. Furthermore, the various communication topologies in DFL are quite different from the setting of SAM in CFL (Qu et al., 2022). Below, we adopt the averaged parameter $\bar{\mathbf{x}}^t = \frac{1}{m}\sum_{i=1}^m\mathbf{x}^t(i)$ of all clients as the approximate solution of Problem (1).
Theorem 4.1 Let Assumptions 1, 2 and 3 hold, and let the parameters $\{\mathbf{x}^t(i)\}_{t\ge0}$ be generated via Algorithm 1. Meanwhile, assume the learning rate of SAM in each client satisfies $0<\eta\le\frac{1}{10KL}$. Let $\bar{\mathbf{x}}^t = \frac{1}{m}\sum_{i=1}^m\mathbf{x}^t(i)$ and denote by $\Phi(\lambda,m,Q)$ the metric related to the spectral gap, the number of clients, and the multiple gossip steps:
$$\Phi(\lambda,m,Q) = \frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}. \tag{6}$$
Then we have the following bound on the gradient for DFedSAM and DFedSAM-MGS when solving Problem (1):
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 \le \frac{2[f(\bar{\mathbf{x}}^1)-f^*]}{T(\eta K-32\eta^3K^2L^2)} + \alpha(K,\rho,\eta) + \Phi(\lambda,m,Q)\,\beta(K,\rho,\eta,\lambda), \tag{7}$$
where $T$ is the number of communication rounds and the constants are given as
$$\alpha(K,\rho,\eta) = \frac{\eta L^2K^2}{\eta K-32\eta^3K^2L^2}\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{2K\rho^2}{2K-1}\Big),$$
$$\beta(K,\rho,\eta,\lambda) = \frac{64\eta^5K^3L^4}{\eta K-32\eta^3K^2L^2}\Big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2 + \frac{\rho^2}{\eta^2(2K-1)}\Big).$$
With Theorem 4.1, we state the following convergence rates for DFedSAM and DFedSAM-MGS.
Corollary 4.1.1 Let the local learning rate satisfy $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with the perturbation parameter $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate for DFedSAM satisfies:
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{\mathbf{x}}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
Remark 1 DFedSAM achieves a linear speedup in the general non-convex setting as long as $T\ge K$, which is significantly better than state-of-the-art (SOTA) bounds such as $\mathcal{O}\big(\frac{1}{\sqrt{T}}+\frac{\sigma_g^2}{\sqrt{T}}+\frac{\sigma_g^2+B^2}{(1-\lambda)^2T^{3/2}}\big)$ in (Sun et al., 2022). Note that the bound becomes tighter as $\lambda$ decreases; it is dominated by the $\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}$ term when $\lambda\le 1-\frac{K^{1/4}}{T^{3/2}}$, whereas it degrades as $\beta$ increases.
Corollary 4.1.2 Let $Q>1$, let $T$ be large enough, and let $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with perturbation amplitude $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate for DFedSAM-MGS satisfies:
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{\mathbf{x}}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{K^{3/2}L^4}{T^2} + \Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).$$
Remark 2 The impact of the network topology $(1-\lambda)$ can be alleviated as $Q$ increases: when the number of clients $m$ is large enough, the term $\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}$ of $\Phi(\lambda,m,Q)$ can be neglected, and the term $\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ is close to 1. This means that with the proposed $Q$-step gossip procedure, model consistency among clients is improved, and thus DFL over various communication topologies can roughly be viewed as CFL (see the numeric sketch of $\Phi$ below).
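To make the role of Φ in Theorem 4.1 concrete, the small sketch below (our illustration; the values of λ and m are made up) evaluates Eq. (6) for increasing Q.

```python
def phi(lam, m, Q):
    """Phi(lambda, m, Q) from Eq. (6)."""
    return (lam ** Q + 1) / ((1 - lam) ** 2 * m ** (2 * (Q - 1))) \
         + (lam ** Q + 1) / ((1 - lam ** Q) ** 2)

lam, m = 0.99, 100  # a sparse, ring-like topology with many clients
for Q in (1, 2, 4, 8):
    print(f"Q = {Q}: Phi = {phi(lam, m, Q):.4g}")
# The first term vanishes with m^{2(Q-1)}; the second tends to
# (lam^Q + 1) / (1 - lam^Q)^2, which keeps shrinking as Q grows.
```

For these values, Φ drops from roughly 4e4 at Q = 1 to a few hundred at Q = 8, so the topology-dependent factor collapses by orders of magnitude as Q grows, consistent with Remark 2.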
Thus, the negative effect of the gradient variances $\sigma_l^2$ and $\beta^2$ can be reduced, especially on a sparse network topology where $\lambda$ is close to 1. In practice, a suitable number of gossip steps $Q>1$ makes it possible to achieve a communication-accuracy trade-off in the DFL setting.

5 EXPERIMENTS
In this section, we evaluate the efficacy of our algorithms compared to six baselines from the CFL and DFL settings. In addition, we conduct several experiments to verify the impact of the communication network topology discussed in Section 4. Furthermore, several ablation studies are conducted.

5.1 EXPERIMENT SETUP
Dataset and Data Partition. The efficacy of the proposed DFedSAM and DFedSAM-MGS is evaluated on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) in both IID and non-IID settings. Specifically, the Dirichlet partition (Hsu et al., 2019) is used for simulating non-IID data across federated clients, where the local data of each client is obtained by splitting the total dataset through sampling the label ratios from the Dirichlet distribution Dir(α) with parameters α = 0.3 and α = 0.6.
Baselines. The compared baselines cover several SOTA methods in both the CFL and DFL settings. Specifically, centralized baselines include FedAvg (Mcmahan et al., 2017) and FedSAM (Qu et al., 2022). For the decentralized setting, D-PSGD (Lian et al., 2017), DFedAvg and DFedAvgM (Sun et al., 2022), along with DisPFL (Dai et al., 2022), are used for comparison.
Implementation Details. The total number of clients is set to 100, among which 10% of clients participate in communication. Specifically, all clients perform the local iteration steps for decentralized methods, while only participating clients perform local updates for centralized methods. We initialize the local learning rate as 0.1 with a decay rate of 0.998 per communication round for all experiments. For the CIFAR-10 and CIFAR-100 datasets, VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) are adopted as the backbones in each client, respectively. The number of communication rounds is set to 1000 in the experiments comparing with all baselines and studying topology-aware performance. In addition, all ablation studies are conducted on the CIFAR-10 dataset with the number of communication rounds set to 500.
Communication Configurations. For a fair comparison between the decentralized and centralized settings, we apply a dynamic time-varying connection topology for decentralized methods to ensure that, in each round, the number of connections is no more than that with a central server. Specifically, the number of clients communicating with their neighbors can be controlled to keep the communication volume consistent with centralized methods. Following earlier works, the communication complexity is measured by the number of local communications. More details of the experimental setup are presented in Appendix B due to space limitations.

5.2 PERFORMANCE EVALUATION
Performance compared with baselines. In Table 1 and Figure 3, we evaluate DFedSAM and DFedSAM-MGS (Q = 4) with ρ = 0.01 on the CIFAR-10 and CIFAR-100 datasets in both settings against all baselines from CFL and DFL. From these results, it is clearly seen that our proposed algorithms outperform the other decentralized methods on these two datasets, and DFedSAM-MGS outperforms or roughly matches the performance of the SOTA centralized baseline FedSAM on CIFAR-10 and CIFAR-100, respectively. Specifically, the training accuracy and testing accuracy are presented in Table 1 to show the generalization performance.
We can see that the performance improvement is more obvious than for all other baselines on CIFAR-10 with the same number of communication rounds. For instance, the gap between training accuracy and test accuracy on CIFAR-10 in the IID setting is 14.14% for DFedSAM, 13.22% for DFedSAM-MGS, 15.29% for FedAvg, and 15% for FedSAM. This means our algorithms achieve generalization comparable to centralized baselines.
Impact of non-IID levels (β). In Table 1, we can see that our algorithms are robust to different participation cases. The heterogeneous data distribution of local clients is set to various levels, from IID to Dirichlet 0.6 and Dirichlet 0.3, which makes the training of the global/consensus model more difficult. For instance, on CIFAR-10, as the non-IID level increases, DFedSAM-MGS achieves better generalization than FedSAM, since the differences between training and test accuracy in DFedSAM-MGS {15.27%, 14.51%, 13.22%} are lower than those in FedSAM {17.26%, 14.85%, 15%}. Similarly, the differences in DFedSAM {17.37%, 15.06%, 14.10%} are lower than those in FedAvg {17.60%, 15.82%, 15.27%}. These observations confirm that our algorithms are more robust than the baselines under various degrees of data heterogeneity.

5.3 TOPOLOGY-AWARE PERFORMANCE
We verify the influence of various communication topologies and gossip averaging steps in DFedSAM and DFedSAM-MGS. Unlike the comparison between CFL and DFL in Section 5.2, we only need to verify the key properties of the DFL methods in this section. Thus, the communication type is set to "Complete", i.e., each client can communicate with its neighbors in the same communication round. The degree of sparse connectivity λ satisfies: ring > grid > exponential > fully-connected in DFL. From Table 2, our algorithms are clearly superior to all decentralized baselines over various communication networks, which coincides with our theoretical findings. Specifically, compared with DFedAvgM, DFedSAM and DFedSAM-MGS significantly improve the performance in the ring topology by 0.64% and 8.0%, respectively. Meanwhile, the performance of DFedSAM-MGS in various topologies is always better than that of the other methods. This observation confirms that multiple gossip steps can alleviate the impact of the network topology with a small Q = 4. Therefore, our algorithms achieve better generalization and model consistency over various communication topologies.

5.4 ABLATION STUDY
Below, we verify the influence of each component and hyper-parameter in DFedSAM, where Q = 1. All ablation studies are conducted in the "exponential" topology except the study on Q, which uses three topologies; the communication type is the same as in Section 5.3: "Complete".
Consensus/gossip steps Q. In Figure 4, we investigate where the balance between learning performance and communication complexity lies in three network topologies. We choose four numbers of gossip steps Q = {1, 2, 3, 4} and study the different balance points under the different steps in the three network topologies in Figure 4 (a), (b) and (c). As the number of local communications increases, model performance improves, but the communication complexity increases too. It is clearly seen that the balance point differs across topologies but shows the same tendency, and a relatively larger Q can bring better performance for a given communication complexity. Therefore, we select Q = 4 in DFedSAM-MGS under 1000 communication rounds for a better balance.
Local iteration steps K.
Large local iteration steps K can help convergence, as shown in previous DFL work (Sun et al., 2022) with theoretical guarantees. To investigate the acceleration in T obtained by adopting larger local iteration steps K, we fix the total batch size and change the number of local training epochs. As shown in Figure 5 (a), our algorithms accelerate convergence, in line with the theoretical results (see Section 4), as larger local iteration steps K are adopted.
Number of participating clients m. As shown in Figure 5 (b), we compare the performance for different numbers of participating clients m = {50, 100, 150} with the same hyper-parameters. Compared with the larger m = 150, the smaller m = {50, 100} achieve better convergence and test accuracy as the amount of local data per client increases, which indirectly makes the local models generalize better, thereby improving the performance of the consensus model.
Perturbation radius ρ. The perturbation radius ρ has an impact on performance, as the added perturbation accumulates as the communication round T increases. It is a trade-off between test accuracy and generalization. To select a proper value for our algorithms, we conduct experiments on various perturbation radii from the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0} in Figure 5 (c). With ρ = 0.01, we achieve better convergence and performance. Meanwhile, ρ = O(1/√T) can yield a linear speedup in convergence (see Section 4).
The effectiveness of SAM and MGS. To validate the effectiveness of SAM and MGS, respectively, we compare four algorithms — FedSAM, DFedAvg, DFedSAM, and DFedSAM-MGS — with the same setting. From Table 3, DFedSAM achieves a performance improvement and better generalization compared with DFedAvg, as the SAM optimizer is adopted. DFedSAM-MGS further boosts the performance compared with FedSAM, as MGS also makes the models consistent among clients and accelerates the convergence rate.

6 CONCLUSIONS AND FUTURE WORK
In this paper, we focus on the model inconsistency challenge caused by heterogeneous data and the network connectivity of the communication topology in DFL, and we overcome this challenge from the perspective of model generalization. We propose two DFL frameworks, DFedSAM and DFedSAM-MGS, with better model consistency among clients. DFedSAM adopts SAM to obtain a flat model in each client, thereby improving generalization via a flat consensus/global model. Meanwhile, DFedSAM-MGS further improves model consistency on top of DFedSAM by accelerating the aggregation of local flat models, reaching a better trade-off between learning performance and communication complexity. As theoretical findings, we confirm a linear speedup and unify the impacts of the gradient perturbation in SAM, the number of local communications in MGS, and the network topology, along with data homogeneity, on the convergence rate in DFL. Furthermore, empirical results verify the superiority of our approaches. For future work, we will continue toward understanding the effect of SAM and MGS for more desirable generalization in DFL.

B MORE DETAILS ON ALGORITHM IMPLEMENTATION
B.1 DATASETS AND BACKBONES
CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) are labeled subsets of the 80 Million Tiny Images dataset. They both share the same 60,000 input images. CIFAR-100 has finer labeling, with 100 unique labels, in comparison to CIFAR-10, which has 10 unique labels.
VGG-11 is used as the backbone for CIFAR-10, and ResNet-18 is chosen for CIFAR-100, where the batch-norm layers are replaced by group-norm layers due to the detrimental effect of batch-norm.

B.2 MORE DETAILS ABOUT BASELINES
FedAvg is the classic FL method that uses vanilla weighted averaging to train a global model in parallel with a central server. FedSAM applies SAM as the local optimizer to improve the model generalization performance. For decentralized schemes, D-PSGD is a classic decentralized parallel SGD method for reaching a consensus model¹, DFedAvg is the decentralized FedAvg, and DFedAvgM uses SGD with momentum based on DFedAvg to train models on each client and performs multiple local training steps before each communication. Furthermore, DisPFL is a novel personalized FL framework with a decentralized communication protocol that uses a decentralized sparse training technique; for a fair comparison, we therefore report the global accuracy of DisPFL.
¹In this work, we focus on decentralized FL, which refers to local training with multiple local iterations, whereas decentralized learning/training focuses on one-step local training. For instance, D-PSGD (Lian et al., 2017) is a decentralized training algorithm, which uses one-step SGD to train local models in each communication round.

B.3 HYPERPARAMETERS
The total number of clients is set to 100, and each client is restricted to at most 10 neighbors in the decentralized setting. For the centralized setting, the client sampling ratio is set to 0.1. The local learning rate is set to 0.1, decayed by 0.998 after each communication round for all experiments, and the global learning rate is set to 1.0 for centralized methods. The batch size is fixed to 128 for all experiments. We run 1000 global communication rounds for CIFAR-10 and CIFAR-100. The SGD optimizer is used with a weight decay of 0.0005 for all baselines except FedSAM. The perturbation hyper-parameter is ρ = 0.01 for our algorithms (DFedSAM and DFedSAM-MGS with Q = 1), selected via grid search over the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}, while the value of ρ in FedSAM follows (Qu et al., 2022). Following (Sun et al., 2022), the local optimization of DFedAvgM uses momentum 0.9. For the local iterations K, the number of training epochs in D-PSGD is set to 1, while for all other methods it is set to 5.

B.4 COMMUNICATION CONFIGURATIONS
As noted in (Dai et al., 2022), decentralized methods actually generate far more communication volume than centralized methods, because each client in the network topology needs to transmit its local information to its neighbors, whereas only the sampled clients upload their parameter updates to a central server in the centralized setting. Therefore, for a fair comparison, we use a dynamic time-varying connection topology for decentralized methods in Section 5.2: we restrict each client to communicate with at most 10 neighbors, which are randomly sampled without replacement from all clients, and only the 10 clients that are neighbors of one another perform one gossip step to exchange their local information in DFedSAM. In DFedSAM-MGS, the gossip step is performed Q times, and 10×Q clients sampled without replacement perform one gossip step each to exchange their local information. A minimal sketch of this per-round sampling is given below.
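The snippet below is our illustration of the per-round sampling just described (the function name and group size handling are hypothetical): a group of 10 clients is drawn without replacement and fully mixed among itself, giving a symmetric, doubly stochastic per-round W whose communication volume matches 10% centralized participation.

```python
import numpy as np

def sample_round_mixing(m, group_size=10, rng=None):
    """One round of the dynamic time-varying topology described in B.4."""
    if rng is None:
        rng = np.random.default_rng()
    group = rng.choice(m, size=group_size, replace=False)
    W = np.eye(m)  # clients outside the group keep their own model this round
    for i in group:
        W[i, :] = 0.0
        W[i, group] = 1.0 / group_size  # complete mixing inside the sampled group
    return group, W

rng = np.random.default_rng(0)
group, W = sample_round_mixing(m=100, rng=rng)
# sanity checks: W is symmetric and doubly stochastic, as Definition 1 requires
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
print("clients gossiping this round:", np.sort(group))
```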
C ALGORITHMS

Algorithm 1: DFedSAM and DFedSAM-MGS
Input: total number of clients $m$, total number of communication rounds $T$, number of gossip steps per round $Q$, learning rate $\eta$, and total number of local iterations $K$.
Output: the consensus model $\bar{\mathbf{x}}^T$ after the final communication of all clients with their neighbors.
1: Initialization: randomly initialize each client's model $\mathbf{x}^0(i)$.
2: for $t = 0$ to $T-1$ do
3:   for node $i$ in parallel do
4:     set $\mathbf{y}^{t,0}(i)\leftarrow\mathbf{x}^t(i)$, $\mathbf{y}^{t,-1}(i) = \mathbf{y}^{t,0}(i)$
5:     for $k = 0$ to $K-1$ do
6:       sample a batch of local data $\xi_i$ and compute the local gradient $\mathbf{g}^{t,k}(i) = \nabla F_i(\mathbf{y}^{t,k};\xi_i)$
7:       $\tilde{\mathbf{g}}^{t,k}(i) = \nabla F_i(\mathbf{y}^{t,k}+\delta(\mathbf{y}^{t,k});\xi_i)$ with $\delta(\mathbf{y}^{t,k}) = \rho\,\mathbf{g}^{t,k}/\|\mathbf{g}^{t,k}\|_2$
8:       $\mathbf{y}^{t,k+1}(i) = \mathbf{y}^{t,k}(i) - \eta\tilde{\mathbf{g}}^{t,k}(i)$
9:     end for
10:    $\mathbf{z}^t(i)\leftarrow\mathbf{y}^{t,K}(i)$
11:    receive the neighbors' models $\mathbf{z}^t(l)$ from the neighborhood set with adjacency matrix $\mathbf{W}$
12:    $\mathbf{x}^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\mathbf{z}^t(l)$
13:    for $q = 0$ to $Q-1$ do
14:      $\mathbf{x}^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\mathbf{z}^{t,q}(l)$ with $\mathbf{z}^{t,0}(i) = \mathbf{z}^t(i)$ (exchanging messages)
15:      $\mathbf{z}^{t,q+1}(i) = \mathbf{x}^{t,q+1}(i)$ (local gossip update)
16:    end for
17:    $\mathbf{x}^{t+1}(i) = \mathbf{x}^{t,Q}(i)$
18:  end for
19: end for

D PRELIMINARY LEMMAS

Lemma D.1 [Lemma 4, (Lian et al., 2017)] For any $t\in\mathbb{Z}^+$, the mixing matrix $\mathbf{W}\in\mathbb{R}^{m\times m}$ satisfies $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le\lambda^t$, where $\lambda := \max\{|\lambda_2(\mathbf{W})|,|\lambda_m(\mathbf{W})|\}$ and, for a matrix $\mathbf{A}$, we denote its spectral norm by $\|\mathbf{A}\|_{\mathrm{op}}$. Furthermore, $\mathbf{1} := [1,1,\ldots,1]^\top\in\mathbb{R}^m$ and $\mathbf{P} := \frac{\mathbf{1}\mathbf{1}^\top}{m}\in\mathbb{R}^{m\times m}$. In [Proposition 1, (Nedic & Ozdaglar, 2009)], it is also proved that $\|\mathbf{W}^t-\mathbf{P}\|_{\mathrm{op}}\le C\lambda^t$ for some $C>0$ that depends on the matrix.
Lemma D.2 [Lemma A.5, (Qu et al., 2022)] (Bounded global variance of $\|\nabla f_i(\mathbf{x}+\delta_i)-\nabla f(\mathbf{x}+\delta)\|^2$.) As an immediate implication of Assumptions 1 and 2, the variance of local and global gradients with perturbation can be bounded as follows:
$$\|\nabla f_i(\mathbf{x}+\delta_i)-\nabla f(\mathbf{x}+\delta)\|^2 \le 3\sigma_g^2 + 6L^2\rho^2.$$
Lemma D.3 [Lemma B.1, (Qu et al., 2022)] (Bounded $\mathbb{E}_\delta$ of DFedSAM.) The updates of DFedSAM for any learning rate satisfying $\eta\le\frac{1}{4KL}$ have drift due to $\delta_{i,k}-\delta$:
$$\mathbb{E}_\delta = \frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|\delta_{i,k}-\delta\|^2] \le 2K^2\beta^2\eta^2\rho^2,$$
where $\delta = \rho\frac{\nabla F(\mathbf{x})}{\|\nabla F(\mathbf{x})\|}$ and $\delta_{i,k} = \rho\frac{\nabla F_i(\mathbf{y}^{t,k};\xi)}{\|\nabla F_i(\mathbf{y}^{t,k};\xi)\|}$.

E CONVERGENCE ANALYSIS FOR DFEDSAM AND DFEDSAM-MGS
In the following, we present the proofs of the convergence results for DFedSAM and DFedSAM-MGS, respectively. The proof of Theorem 4.1 is given in Sections E.2 and E.3, corresponding to $Q=1$ and $Q>1$, respectively.

E.1 PRELIMINARY LEMMAS
Lemma E.1 Assume that Assumptions 1 and 2 hold, and that $(\mathbf{y}^{t,k}(i)+\delta_{i,k})_{t\ge0}$ and $(\mathbf{x}^{t,k}(i))_{t\ge0}$ are generated by DFedSAM for all $i\in\{1,2,\ldots,m\}$. For any learning rate $\eta\le\frac{1}{10KL}$, the client update of DFedSAM satisfies:
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|(\mathbf{y}^{t,k}(i)+\delta_{i,k})-\mathbf{x}^t(i)\|^2 \le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}, \tag{8}$$
where $0\le k\le K-1$.
Proof. For any local iteration $k\in\{0,1,\ldots,K-1\}$ in any node $i$, it holds that
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|(\mathbf{y}^{t,k}(i)+\delta_{i,k})-\mathbf{x}^t(i)\|^2 = \frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\mathbf{y}^{t,k-1}(i)+\delta_{i,k}-\eta\nabla F_i(\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1})-\mathbf{x}^t(i)\|^2$$
$$= \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big\|\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1}-\mathbf{x}^t(i)+\delta_{i,k}-\delta_{i,k-1}-\eta\big(\nabla F_i(\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1})-\nabla F_i(\mathbf{y}^{t,k-1})+\nabla F_i(\mathbf{y}^{t,k-1})-\nabla f_i(\mathbf{x}^t)+\nabla f_i(\mathbf{x}^t)-\nabla f(\mathbf{x}^t)+\nabla f(\mathbf{x}^t)\big)\big\|^2 \le I + II,$$
where
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\big(\mathbb{E}\|\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1}-\mathbf{x}^t(i)\|^2 + \mathbb{E}\|\delta_{i,k}-\delta_{i,k-1}\|^2\big)$$
and
$$II = \frac{2K}{m}\sum_{i=1}^m\mathbb{E}\big\|{-\eta}\big(\nabla F_i(\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1})-\nabla F_i(\mathbf{y}^{t,k-1})+\nabla F_i(\mathbf{y}^{t,k-1})-\nabla f_i(\mathbf{x}^t)+\nabla f_i(\mathbf{x}^t)-\nabla f(\mathbf{x}^t)+\nabla f(\mathbf{x}^t)\big)\big\|^2.$$
With Lemma D.3 and the assumptions, these terms are bounded as
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\big(\mathbb{E}\|\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1}-\mathbf{x}^t(i)\|^2 + 2K^2L^2\eta^2\rho^4\big)$$
and
$$II = \frac{8K\eta^2}{m}\sum_{i=1}^m\big(L^2\rho^2+\sigma_l^2+\sigma_g^2+\mathbb{E}\|\nabla f(\mathbf{x}^t)\|^2\big),$$
where $\mathbb{E}\|\delta_{i,k-1}\|^2\le\rho^2$.
Thus, we can obtain
$$\mathbb{E}\|(\mathbf{y}^{t,k}(i)+\delta_{i,k})-\mathbf{x}^t(i)\|^2 \le \Big(1+\frac{1}{2K-1}\Big)\mathbb{E}\|(\mathbf{y}^{t,k-1}(i)+\delta_{i,k-1})-\mathbf{x}^t(i)\|^2 + \frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2,$$
where $\mathbb{E}\|\nabla f(\mathbf{x}^t)\|^2 = \frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2$, $f(\mathbf{x}) := \frac{1}{m}\sum_{i=1}^m f_i(\mathbf{x})$, and $\nabla f_i(\mathbf{x}^t) := \nabla f(\mathbf{x}^t(i))$. The recursion from $\tau=0$ to $k$ yields
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|(\mathbf{y}^{t,k}(i)+\delta_{i,k})-\mathbf{x}^t(i)\|^2 \le \frac{1}{m}\sum_{i=1}^m\sum_{\tau=1}^{K-1}\Big(1+\frac{1}{2K-1}\Big)^\tau\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2\Big) + \Big(1+\frac{1}{2K-1}\Big)\rho^2$$
$$\le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}.$$
This completes the proof.
Lemma E.2 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{\mathbf{x}^t(i)\}_{t\ge0}$ be generated by DFedSAM for all $i\in\{1,2,\ldots,m\}$ and any learning rate $\eta>0$. Then we have the following bound:
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|\mathbf{x}^{t,k}(i)-\bar{\mathbf{x}}^t\|^2] \le \frac{C_2\eta^2}{(1-\lambda)^2},$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$.
Proof. Following [Lemma 4, (Sun et al., 2022)], we denote $\mathbf{Z}^t := [\mathbf{z}^t(1),\mathbf{z}^t(2),\ldots,\mathbf{z}^t(m)]^\top\in\mathbb{R}^{m\times d}$. With this notation,
$$\mathbf{X}^{t+1} = \mathbf{W}\mathbf{Z}^t = \mathbf{W}\mathbf{X}^t - \zeta^t, \tag{9}$$
where $\zeta^t := \mathbf{W}\mathbf{X}^t-\mathbf{W}\mathbf{Z}^t$. The iteration (9) can be rewritten as
$$\mathbf{X}^t = \mathbf{W}^t\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{t-1-j}\zeta^j. \tag{10}$$
Obviously, it follows that
$$\mathbf{W}\mathbf{P} = \mathbf{P}\mathbf{W} = \mathbf{P}. \tag{11}$$
According to Lemma D.1, it holds that $\|\mathbf{W}^t-\mathbf{P}\|\le\lambda^t$. Multiplying both sides of (10) by $\mathbf{P}$ and using (11), we get
$$\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j, \tag{12}$$
where we used the initialization $\mathbf{X}^0=\mathbf{0}$. Then we are led to
$$\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{t-1-j})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{t-1-j}\|_{\mathrm{op}}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\zeta^j\|. \tag{13}$$
With the Cauchy inequality,
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \mathbb{E}\Big(\sum_{j=0}^{t-1}\lambda^{\frac{t-1-j}{2}}\cdot\lambda^{\frac{t-1-j}{2}}\|\zeta^j\|\Big)^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\mathbb{E}\|\zeta^j\|^2\Big).$$
Direct calculation gives us
$$\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}\|^2\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.$$
With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \frac{m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2}{(1-\lambda)^2}.$$
The fact that $\mathbf{X}^t-\mathbf{P}\mathbf{X}^t = [\mathbf{x}^t(1)-\bar{\mathbf{x}}^t,\ \mathbf{x}^t(2)-\bar{\mathbf{x}}^t,\ \ldots,\ \mathbf{x}^t(m)-\bar{\mathbf{x}}^t]^\top$ then proves the result.
Lemma E.3 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{\mathbf{x}^t(i)\}_{t\ge0}$ be generated by DFedSAM-MGS for all $i\in\{1,2,\ldots,m\}$ and any learning rate $\eta>0$. Then we have the following bound:
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|\mathbf{x}^{t,k}(i)-\bar{\mathbf{x}}^t\|^2] \le C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$.
Proof. Following [Lemma 4, (Sun et al., 2022)] and Lemma E.2, we denote $\mathbf{Z}^t := [\mathbf{z}^t(1),\mathbf{z}^t(2),\ldots,\mathbf{z}^t(m)]^\top\in\mathbb{R}^{m\times d}$. With this notation,
$$\mathbf{X}^{t+1} = \mathbf{W}^Q\mathbf{Z}^t = \mathbf{W}^Q\mathbf{X}^t - \zeta^t, \tag{14}$$
where $\zeta^t := \mathbf{W}^Q\mathbf{X}^t-\mathbf{W}^Q\mathbf{Z}^t$. The iteration (14) can be rewritten as
$$\mathbf{X}^t = (\mathbf{W}^t)^Q\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{(t-1-j)Q}\zeta^j. \tag{15}$$
Obviously, it follows that
$$\mathbf{W}^Q\mathbf{P} = \mathbf{P}\mathbf{W}^Q = \mathbf{P}. \tag{16}$$
According to Lemma D.1, it holds that $\|\mathbf{W}^t-\mathbf{P}\|\le\lambda^t$. Multiplying both sides of (15) by $\mathbf{P}$ and using (16), we get
$$\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j,$$
where we used the initialization $\mathbf{X}^0=\mathbf{0}$. Then we are led to
$$\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{Q(t-1-j)})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{Q(t-1-j)}\|_{\mathrm{op}}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{(t-1-j)(Q-1)}\|\,\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{t-1-j}-\mathbf{P}+\mathbf{P}\|^{Q-1}\|\zeta^j\|.$$
With the Cauchy inequality,
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\frac{1}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\frac{1}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big)$$
$$\le \Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\frac{\lambda^{t-1-j}}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\frac{\lambda^{t-1-j}}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big) \le \mathbb{E}\|\zeta^j\|^2\Big(\frac{1}{(1-\lambda)^2 m^{2(Q-1)}}+\frac{1}{(1-\lambda^Q)^2}\Big).$$
Direct calculation gives us
$$\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}^Q\|^2\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \|\mathbf{W}-\mathbf{P}+\mathbf{P}\|^{2Q}\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \big(\|\mathbf{W}-\mathbf{P}\|^{2Q}+\|\mathbf{P}\|^{2Q}\big)\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le (\lambda^Q+1)\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.$$
With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le mC_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. The fact that $\mathbf{X}^t-\mathbf{P}\mathbf{X}^t = [\mathbf{x}^t(1)-\bar{\mathbf{x}}^t,\ \ldots,\ \mathbf{x}^t(m)-\bar{\mathbf{x}}^t]^\top$ then proves the result.

E.2 PROOF OF CONVERGENCE RESULTS FOR DFEDSAM
Noting that $\mathbf{P}\mathbf{X}^{t+1} = \mathbf{P}\mathbf{W}\mathbf{Z}^t = \mathbf{P}\mathbf{Z}^t$, i.e., $\bar{\mathbf{x}}^{t+1} = \bar{\mathbf{z}}^t$, where $\mathbf{X} := [\mathbf{x}(1),\mathbf{x}(2),\ldots,\mathbf{x}(m)]^\top\in\mathbb{R}^{m\times d}$ and $\mathbf{Z} := [\mathbf{z}(1),\mathbf{z}(2),\ldots,\mathbf{z}(m)]^\top\in\mathbb{R}^{m\times d}$, we have
$$\bar{\mathbf{x}}^{t+1}-\bar{\mathbf{x}}^t = \bar{\mathbf{x}}^{t+1}-\bar{\mathbf{z}}^t+\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t = \bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t, \tag{17}$$
where $\bar{\mathbf{z}}^t := \frac{1}{m}\sum_{i=1}^m\mathbf{z}^t(i)$ and $\bar{\mathbf{x}}^t := \frac{1}{m}\sum_{i=1}^m\mathbf{x}^t(i)$. In each node,
$$\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t = \frac{1}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\mathbf{y}^{t,k+1}(i)-\mathbf{y}^{t,k}(i)\big) = \frac{1}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big({-\eta}\tilde{\mathbf{g}}^{t,k}(i)\big) = -\frac{\eta}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i\big(\mathbf{y}^{t,k}+\rho\nabla F_i(\mathbf{y}^{t,k};\xi)/\|\nabla F_i(\mathbf{y}^{t,k};\xi)\|_2;\,\xi\big). \tag{18}$$
By the Lipschitz continuity of $\nabla f$:
$$\mathbb{E}f(\bar{\mathbf{x}}^{t+1}) \le \mathbb{E}f(\bar{\mathbf{x}}^t) + \mathbb{E}\langle\nabla f(\bar{\mathbf{x}}^t),\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t\rangle + \frac{L}{2}\mathbb{E}\|\bar{\mathbf{x}}^{t+1}-\bar{\mathbf{x}}^t\|^2, \tag{19}$$
where we used (17). Using (18), the inner-product term is bounded as
$$\begin{aligned}
\mathbb{E}\langle K\nabla f(\bar{\mathbf{x}}^t),(\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t)/K\rangle &= \mathbb{E}\langle K\nabla f(\bar{\mathbf{x}}^t),-\eta\nabla f(\bar{\mathbf{x}}^t)+\eta\nabla f(\bar{\mathbf{x}}^t)+(\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t)/K\rangle \\
&= -\eta K\,\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 + \mathbb{E}\Big\langle K\nabla f(\bar{\mathbf{x}}^t),\frac{\eta}{mK}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\nabla f(\mathbf{x}^t(i))-\nabla F_i(\mathbf{y}^{t,k}+\delta_{i,k};\xi)\big)\Big\rangle \\
&\overset{a)}{\le} \eta\,\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|\cdot\Big\|\frac{L}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\mathbf{x}^t(i)-\mathbf{y}^{t,k}-\delta_{i,k}\big)\Big\| \\
&\overset{b)}{\le} \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 + \frac{\eta L^2K^2}{2K}\Big(2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2\Big)+\frac{2K\rho^2}{2K-1}\Big),
\end{aligned} \tag{20}$$
where a) uses the Lipschitz continuity and b) uses Lemma E.1. Meanwhile,
$$\frac{L}{2}\mathbb{E}\|\bar{\mathbf{x}}^{t+1}-\bar{\mathbf{x}}^t\|^2 = \frac{L}{2}\mathbb{E}\|\bar{\mathbf{z}}^t-\bar{\mathbf{x}}^t\|^2 \le \frac{L}{2}\cdot\frac{1}{m}\sum_{i=1}^m\|\mathbf{y}^{t,K}(i)-\mathbf{x}^t(i)\|^2 \le \frac{L}{2}\mathbb{E}\Big\|{-\frac{\eta}{m}}\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i(\mathbf{y}^{t,k}+\delta_{i,k};\xi)\Big\|^2 \overset{a)}{\le} \frac{L}{2}\eta^2K^2B^2, \tag{21}$$
where a) uses Assumption 3. Thus (19) can be rewritten as
$$\mathbb{E}f(\bar{\mathbf{x}}^{t+1}) \le \mathbb{E}f(\bar{\mathbf{x}}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \tag{22}$$
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. With Lemma E.2, we get
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(\mathbf{x}^t(i))\|^2 \le \frac{2L^2}{m}\sum_{i=1}^m\|\mathbf{x}^t(i)-\bar{\mathbf{x}}^t\|^2 + 2\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 \overset{a)}{\le} \frac{2L^2C_2\eta^2}{(1-\lambda)^2} + 2\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2, \tag{23}$$
where a) uses Lemma E.2 and $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. Therefore, (19) becomes
$$\mathbb{E}f(\bar{\mathbf{x}}^{t+1}) \le \mathbb{E}f(\bar{\mathbf{x}}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 8\eta^3K^2L^2\Big(\frac{2L^2C_2\eta^2}{(1-\lambda)^2}+2\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2\Big) \le \mathbb{E}f(\bar{\mathbf{x}}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 + \frac{\eta L^2KC_1}{2} + \frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}. \tag{24}$$
Summing inequality (24) from $t=1$ to $T$, we obtain
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 \le \frac{2f(\bar{\mathbf{x}}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+\frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}}{\eta K-32\eta^3K^2L^2}.$$
If we choose the learning rate $\eta=\mathcal{O}(1/L\sqrt{KT})$ with $\eta\le\frac{1}{10KL}$ and the number of communication rounds $T$ is large enough, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{\mathbf{x}}^1)-f^*}{\sqrt{KT}}+\frac{K^{3/2}L^2\rho^4}{T}+\frac{K(L^4\rho^2+\sigma_g^2+\sigma_l^2)}{T}+\frac{L^2\rho^2}{T(1-\lambda)^2}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^2L^2\rho^4}{T^2(1-\lambda)^2}+\frac{K(L^2\rho^2+\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho=\mathcal{O}(\frac{1}{\sqrt{T}})$, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{\mathbf{x}}^1)-f^*}{\sqrt{KT}}+\frac{K(\sigma_g^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
Under Definition 2, replacing $\sigma_g^2$ by $\beta^2$, we get
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{\mathbf{x}}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{\mathbf{x}}^1)-f^*}{\sqrt{KT}}+\frac{K(\beta^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
This completes the proof.
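As a brief sanity check added for readability (it is implicit in the original argument), the step-size condition guarantees that the denominator $\eta K - 32\eta^3K^2L^2$ appearing above is positive:

```latex
% Added verification: with eta <= 1/(10KL), the denominator
% eta*K - 32*eta^3*K^2*L^2 used in the bounds stays positive.
\[
\eta \le \frac{1}{10KL}
\;\Longrightarrow\;
32\eta^{2}KL^{2} \le \frac{32}{100K} \le \frac{8}{25}
\;\Longrightarrow\;
\eta K - 32\eta^{3}K^{2}L^{2}
= \eta K\left(1 - 32\eta^{2}KL^{2}\right)
\ge \frac{17}{25}\,\eta K > 0 .
\]
```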
1. What is the focus and contribution of the paper on decentralized learning?
2. What are the strengths of the proposed approach, particularly in terms of its application of SAM?
3. What are the weaknesses of the paper, especially regarding its connection to prior works?
4. Do you have any concerns about the scalability of the algorithm in relation to the number of clients?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes to apply SAM in the decentralized learning scenario to alleviate the distribution shift, termed DFedSAM. Convergence results are provided for smooth non-convex objectives under a bounded gradient assumption. Numerical experiments are conducted on several datasets.
Strengths And Weaknesses
Strengths:
The ablation study provides a detailed discussion of multiple cases in the experimental setups.
Convergence results are provided.
The paper is generally well written and easy to follow.
Weaknesses:
My main concern is that this paper can be viewed as an extension of "Generalized Federated Learning via Sharpness Aware Minimization": the authors changed federated learning into decentralized learning and added multiple gossip steps from another paper.
How does the algorithm scale with the number of clients? Could you please provide a figure for small and large numbers of clients to see whether the relationship between training performance and client count is linear or not.
Typo: "commincation found" in figures --> "communication round".
Clarity, Quality, Novelty And Reproducibility
The paper strikes a very good balance between motivation, background, and a clear explanation of the method, with all the hard math in the appendix. The idea is easy to understand and should lead to improvements in decentralized learning.
Question: Figure 2: Are these two figures based on a real dataset?
ICLR
Title Improving Model Consistency of Decentralized Federated Learning via Sharpness Aware Minimization and Multiple Gossip Approaches Abstract To mitigate the privacy leakages and reduce the communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server and each client only communicates with its neighbors in the decentralized communication network. However, existing DFL algorithms tend to feature high inconsistency among local models, which results in severe distribution shifts across clients and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or with sparse connectivity of communication topology. To alleviate this challenge, we propose two DFL algorithms named DFedSAM and DFedSAM-MGS to improve the performance. Specifically, DFedSAM leverages gradient perturbation to generate local flatness models via Sharpness Aware Minimization (SAM), which searches for model parameters with uniformly low loss function values. In addition, DFedSAM-MGS further boosts DFedSAM by adopting the technique of Multiple Gossip Steps (MGS) for a better model consistency, which accelerates the aggregation of local flatness models and better balances the communication complexity and learning performance. In the theoretical perspective, we present the improved convergence rates O ( 1 T + 1 T 2(1−λ)2 ) and O ( 1 T + λ+1 T 2(1−λQ)2 ) in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where 1−λ is the spectral gap of the gossip matrix W and Q is the gossip steps in MGS. Meanwhile, we empirically confirm that our methods can achieve competitive performance compared with CFL baselines and outperform existing DFL baselines. N/A ( 1 T + 1 T 2(1−λ)2 ) and O ( 1 T + λQ+1 T 2(1−λQ)2 ) in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where 1−λ is the spectral gap of the gossip matrix W and Q is the gossip steps in MGS. Meanwhile, we empirically confirm that our methods can achieve competitive performance compared with CFL baselines and outperform existing DFL baselines. 1 INTRODUCTION … … … … (a) Ring (b) Exponential (c) Grid (d) Fully-connected (b) Loss surface of FedAvg (c) Loss surface of DFedAvg Federated learning (FL) (Mcmahan et al., 2017; Li et al., 2020b) allows distributed clients to collaboratively train a shared model under the orchestration of the cloud without transmitting local data. However, almost all FL paradigms employ a central server to communicate with clients, which faces several critical challenges, such as computational resources limitation, high communication bandwidth cost, and privacy leakage (Kairouz et al., 2021). Compared to the centralized FL (CFL) framework, decentralized FL (DFL, see Figure 1), in which the clients only communicate with their neighbors without a central server, offers communication advantage and further preserves the data privacy (Kairouz et al., 2021; Wang et al., 2021). However, DFL suffers from bottlenecks such as severe inconsistency of local models due to heterogeneous data and model aggregation locality caused by the network connectivity of communication topology. This inconsistency results in severe over-fitting in local models and model performance degradation. Therefore, the global/consensus model may bring inferior performance compared with CFL, especially on heterogeneous data or in face of the sparse connectivity of communication net- 1 works. Similar performance pattern of DFL has also been demonstrated by Sun et al. (2022). 
To explore the reasons behind this phenomenon, we present the structure of the loss landscapes (Li et al., 2018) for FedAvg (Mcmahan et al., 2017) and decentralized FedAvg (DFedAvg, Sun et al. (2022)) on Fashion-MNIST (Xiao et al., 2017) with the same setting in Figure 2 (a) and (b). It is clearly seen that DFL method has a sharper landscape than CFL method. Motivation. Most FL algorithms face the over-fitting issue of local models on heterogeneous data. Many recent works (Sahu et al., 2018; Li et al., 2020c; Karimireddy et al., 2020; Yang et al., 2021; Acar et al., 2021; Wang et al., 2022) focus on the CFL and mitigate this issue with various effective solutions. In DFL, this issue can be exacerbated due to sharp loss landscape caused by the inconsistency of local models (see Figure 2 (a) and (b)). Therefore, the performance of decentralized schemes is usually worse than that of centralized schemes with the same setting (Sun et al., 2022). Consequently, an important research question is: can we design a DFL algorithm that can mitigate inconsistency among local models and achieve the similar performance to its centralized counterpart? To address this question, we propose two DFL algorithms: DFedSAM and DFedSAM-MGS. Specifically, DFedSAM overcomes local model over-fitting issue via gradient perturbation with SAM (Foret et al., 2021) in each client to generate local flatness models. Since each client aggregates the flatness models from its neighbors, a potential flat aggregated model can be generated, which results in high generalization ability. To further boost the performance of DFedSAM, DFedSAMMGS integrates multiple gossip steps (MGS) (Ye et al., 2020; Ye & Zhang, 2021; Li et al., 2020a) to accelerate the aggregation of local flatness models by increasing the number of gossip steps of local communications. It realizes a better trade-off between communication complexity and learning performance by bridging the gap between CFL and DFL, since DFL can be roughly regarded as CFL with a sufficiently large number of gossip steps (see Section 5.4). Theoretically, we present the convergence rates for our algorithms in the stochastic non-convex setting. We show that the bound can be looser when the connectivity of the communication topology λ is sufficiently sparse, or the data homogeneity β is sufficiently large, while as the consensus/gossip steps Q in MGS increase, it is tighter as the impact of communication topology can be alleviated (see Section 4). The theoretical results directly explain why the application of SAM and MGS in DFL can ensure better performance with various types of communication network topology. Empirically, we conduct extensive experiments on CIFAR-10 and CIFAR-100 datasets in both the identical data distribution (IID) and non-IID settings. The experimental results confirm that our algorithms achieve competitive performance compared to CFL baselines and outperform DFL baselines (see Section 5.2). Contribution. Our main contributions can be summarized as three-fold: • We propose two DFL algorithms DFedSAM and DFedSAM-MGS. DFedSAM alleviates the inconsistency of local models through getting local flatness models, while DFedSAMMGS achieves a better consistency based on DFedSAM via the aggregation acceleration and has a better trade-off between communication and generalization. 
• We present the convergence rates $\mathcal{O}\big(\frac{1}{T}+\frac{1}{T^2(1-\lambda)^2}\big)$ and $\mathcal{O}\big(\frac{1}{T}+\frac{\lambda^Q+1}{T^2(1-\lambda^Q)^2}\big)$ for DFedSAM and DFedSAM-MGS in the non-convex setting, respectively, and show that our algorithms can achieve a linear speedup for convergence.

• We conduct extensive experiments to verify the efficacy of our proposed DFedSAM and DFedSAM-MGS, which achieve competitive performance compared with both CFL and DFL baselines.

2 RELATED WORK

Decentralized Federated Learning (DFL). In DFL, in comparison to CFL, clients only communicate with their neighbors over various communication networks without a central server, which offers a communication advantage and preserves data privacy. Lalitha et al. (2018; 2019) take a Bayesian-like approach by introducing a belief over the model parameter space of the clients in a fully DFL framework. Roy et al. (2019) propose BrainTorrent, the first server-less, peer-to-peer approach to FL, and apply it to a medical application in a highly dynamic peer-to-peer FL environment. Sun et al. (2022) apply multiple local iterations with SGD and a quantization method to further reduce the communication cost, and provide convergence results in various convexity settings. Dai et al. (2022) develop a decentralized sparse training technique to further save communication and computation cost.

Sharpness Aware Minimization (SAM). SAM (Foret et al., 2021) is an effective optimizer for training deep learning models, which leverages the flatness geometry of the loss landscape to improve model generalization ability. Recently, Andriushchenko & Flammarion (2022) study the properties of SAM and provide convergence results of SAM for non-convex objectives. As a powerful optimizer, SAM and its variants have been applied to various machine learning (ML) tasks (Zhao et al., 2022; Kwon et al., 2021; Du et al., 2021; Liu et al., 2022; Abbas et al., 2022). Specifically, Qu et al. (2022) and Caldarola et al. (2022) integrate SAM to improve generalization, thereby mitigating the distribution shift problem and achieving new SOTA performance for CFL. However, to the best of our knowledge, no efforts have been devoted to the empirical performance and theoretical analysis of SAM in the DFL setting.

Multiple Gossip Steps (MGS). The advantage of increasing the number of local communications within a network topology is investigated in Ye et al. (2020), which proposes FastMix with multi-consensus and gradient tracking and establishes the optimal computational complexity and a near-optimal communication complexity. DeEPCA (Ye & Zhang, 2021) integrates FastMix into a decentralized PCA algorithm to accelerate the training process. DeLi-CoCo (Hashemi et al., 2022) performs multiple compression gossip steps in each iteration for fast convergence with arbitrary communication compression. Network-DANE (Li et al., 2020a) uses multiple gossip steps and generalizes DANE to decentralized scenarios. In general, by increasing the number of gossip steps, local clients can approach a better consensus model and move toward the performance of CFL. Thus, the use of MGS can also potentially mitigate model inconsistency in the DFL setting.

The work most related to this paper is DFedAvg and DFedAvg with momentum (DFedAvgM) in Sun et al. (2022), which leverage multiple local iterations with the SGD optimizer and significantly improve the performance of the classic decentralized parallel SGD method D-PSGD (Lian et al., 2017).
However, DFL may suffer from inferior performance due to the severe model inconsistency issue among clients. Another related work is FedSAM (Qu et al., 2022), which integrates the SAM optimizer into CFL to enhance the flatness of local models and achieves new SOTA performance for CFL. On top of the aforementioned studies, we are the first to extend the SAM optimizer to the DFL setting and simultaneously provide its convergence guarantee in the non-convex setting. Furthermore, we bridge the gap between CFL and DFL by adopting MGS in DFedSAM-MGS, which largely mitigates the model inconsistency in DFL.

3 METHODOLOGY

In this section, we address the model inconsistency issue in the DFL setting. Below, we first introduce the problem setup in DFL and then describe the proposed DFedSAM and DFedSAM-MGS in detail.

3.1 PROBLEM SETUP

In this work, we are interested in solving the following finite-sum stochastic non-convex minimization problem in the DFL setting:

$$\min_{x\in\mathbb{R}^d} f(x) := \frac{1}{m}\sum_{i=1}^{m} f_i(x), \qquad f_i(x) = \mathbb{E}_{\xi\sim\mathcal{D}_i} F_i(x;\xi), \qquad (1)$$

where $\mathcal{D}_i$ denotes the data distribution in the $i$-th client, which is heterogeneous across clients, $m$ is the number of clients, and $F_i(x;\xi)$ is the local objective function associated with the training data samples $\xi$. Problem (1) is known as empirical risk minimization (ERM) and models many applications in ML. As shown in Figure 1(b), we model the communication network between clients in the decentralized topology as an undirected connected graph $G = (\mathcal{N}, \mathcal{V}, W)$, where $\mathcal{N} := \{1, 2, \ldots, m\}$ represents the set of clients and $\mathcal{V} \subseteq \mathcal{N}\times\mathcal{N}$ represents the set of communication channels, each connecting two distinct clients. Furthermore, we emphasize that there is no central server in the decentralized setting, and all clients only communicate with their neighbors through the communication channels $\mathcal{V}$. In addition, we assume Problem (1) is well-defined and denote by $f^*$ the minimal value of $f$, i.e., $f(x) \ge f(x^*) = f^*$ for all $x \in \mathbb{R}^d$.

3.2 DFEDSAM AND DFEDSAM-MGS ALGORITHMS

Instead of searching for a solution via SGD (Bottou, 2010; Bottou et al., 2018), SAM (Foret et al., 2021) seeks a solution in a flat region by adding a small perturbation to the model, i.e., $x+\delta$, yielding more robust performance. As shown in Figure 2, decentralized schemes have a sharper landscape with poorer generalization ability than centralized schemes. However, this issue remains unexplored in the decentralized setting. In this paper, we extend the SAM optimizer to DFL to investigate this issue, dubbed DFedSAM, whose local loss function is defined as:

$$f_i(x) = \mathbb{E}_{\xi\sim\mathcal{D}_i} \max_{\|\delta_i\|_2 \le \rho} F_i\big(y^{t,k}(i) + \delta_i; \xi_i\big), \quad i \in \mathcal{N}, \qquad (2)$$

where $y^{t,k}(i)+\delta_i$ is the perturbed model, $\rho$ is a predefined constant controlling the radius of the perturbation, and $\|\cdot\|_2$ denotes the $l_2$-norm. Similar to CFL methods, in DFL, DFedSAM allows clients to update the local model parameters with multiple local iterations before communication is performed. Specifically, for each client $i \in \{1, 2, \ldots, m\}$ and each local iteration $k \in \{0, 1, \ldots, K-1\}$ in each communication round $t \in \{0, 1, \ldots, T-1\}$, the $k$-th inner iteration in client $i$ is performed as:

$$y^{t,k+1}(i) = y^{t,k}(i) - \eta\,\tilde{g}^{t,k}(i), \qquad (3)$$

where $\tilde{g}^{t,k}(i) = \nabla F_i(y^{t,k}+\delta(y^{t,k});\xi)$ and $\delta(y^{t,k}) = \rho\, g^{t,k}/\|g^{t,k}\|_2$. Following Foret et al. (2021), the perturbation $\delta(y^{t,k})$ is obtained via a first-order Taylor expansion around $y^{t,k}$ for a small value of $\rho$. After $K$ inner iterations in each client, the parameters are updated as $z^t(i) \leftarrow y^{t,K}(i)$ and sent to its neighbors $l \in \mathcal{N}(i)$ after the local updates; a minimal sketch of this inner loop is given below.
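For concreteness, the following is a minimal PyTorch-style sketch of one inner iteration of Eq. (3); the function name, the generic `model`/`loss_fn` interface, and the small epsilon added to the gradient norm are our illustrative assumptions, not the authors' released implementation:

```python
import torch

def dfedsam_local_step(model, loss_fn, batch, eta=0.1, rho=0.01):
    """One inner iteration of Eq. (3): perturb to y + delta(y), then descend."""
    inputs, targets = batch
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    # g^{t,k}: gradient at the current point y^{t,k}
    grads = [p.grad.detach().clone() for p in model.parameters()]
    g_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / g_norm)       # delta(y^{t,k}) = rho * g / ||g||_2
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    # p.grad now holds the perturbed gradient \tilde{g}^{t,k}
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / g_norm)       # undo the perturbation
            p.sub_(eta * p.grad)           # y^{t,k+1} = y^{t,k} - eta * \tilde{g}^{t,k}
    model.zero_grad()
```

Each local step thus costs two forward-backward passes, which is the usual overhead of SAM relative to SGD.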
Then each client averages its parameters with the information from its neighbors:

$$x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l). \qquad (4)$$

On the other hand, we use the multiple gossip steps (MGS) technique (Ye et al., 2020; Ye & Zhang, 2021; Hashemi et al., 2022) to achieve better consistency among local models on top of DFedSAM, dubbed DFedSAM-MGS, thereby further boosting performance. DFedSAM-MGS provides a balance between communication cost and generalization ability in the DFL setting. Specifically, the procedure of MGS at the $q$-th step ($q \in \{0, 1, \ldots, Q-1\}$) can be viewed as two steps, exchanging messages and a local gossip update, as follows:

$$x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l), \quad \text{and} \quad z^{t,q+1}(i) = x^{t,q+1}(i). \qquad (5)$$

At the end of MGS, $x^{t+1}(i) = x^{t,Q}(i)$. Both DFedSAM and DFedSAM-MGS are summarized in Algorithm 1 (see Appendix C). DFedSAM trades local computation for communication overhead via multiple local iterations, with local communication performed only once per round, whereas DFedSAM-MGS performs multiple local communications with a larger $Q$ to better synchronize all local clients. Therefore, DFedSAM-MGS can be viewed as a compromise between DFL and CFL.

Compared with the existing SOTA DFL methods DFedAvg and DFedAvgM (Sun et al., 2022), the benefits of DFedSAM and DFedSAM-MGS are three-fold: (i) SAM is introduced to alleviate the local over-fitting issue caused by the inconsistency of local models by seeking a flatness model at each client in DFL, which also contributes to making the consensus model flat; (ii) MGS in DFedSAM-MGS is used to further accelerate the aggregation of local flatness models for better consistency among local models on top of DFedSAM, and properly balances communication complexity and learning performance; (iii) furthermore, we present a theory unifying the impacts of the gradient perturbation $\rho$ in SAM, the number of local communications $Q$ in MGS, the network topology $\lambda$, and the data homogeneity $\beta$ on the convergence rate in Section 4.

4 CONVERGENCE ANALYSIS

In this section, we show the convergence results of DFedSAM and DFedSAM-MGS in the general non-convex FL setting; the detailed proof is presented in Appendix E. Below, we first give several useful and necessary notations and assumptions.

Definition 1 (The gossip/mixing matrix). (Sun et al., 2022, Definition 1) The gossip matrix $W = [w_{i,j}] \in [0,1]^{m\times m}$ is assumed to have the following properties: (i) (Graph) If $i \ne j$ and $(i,j) \notin \mathcal{V}$, then $w_{i,j} = 0$; otherwise, $w_{i,j} > 0$; (ii) (Symmetry) $W = W^\top$; (iii) (Null space property) $\mathrm{null}\{I-W\} = \mathrm{span}\{\mathbf{1}\}$; (iv) (Spectral property) $I \succeq W \succ -I$. Under these properties, the eigenvalues of $W$ satisfy $1 = |\lambda_1(W)| > |\lambda_2(W)| \ge \cdots \ge |\lambda_m(W)|$. Furthermore, $\lambda := \max\{|\lambda_2(W)|, |\lambda_m(W)|\}$, and $1-\lambda \in (0,1]$ is denoted as the spectral gap of $W$.

Definition 2 (Homogeneity parameter). (Li et al., 2020a, Definition 2) For any $i \in \{1, 2, \ldots, m\}$ and parameter $x \in \mathbb{R}^d$, the homogeneity parameter $\beta$ is defined as:

$$\beta := \max_{1\le i\le m} \beta_i, \quad \text{with} \quad \beta_i := \sup_{x\in\mathbb{R}^d} \|\nabla f_i(x) - \nabla f(x)\|.$$

Assumption 1 (Lipschitz smoothness). The function $f_i$ is differentiable and $\nabla f_i$ is $L$-Lipschitz continuous, $\forall i \in \{1, 2, \ldots, m\}$, i.e., $\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x-y\|$ for all $x, y \in \mathbb{R}^d$.

Assumption 2 (Bounded variance). The stochastic gradient of each function $f_i$ has $\sigma_l$-bounded variance, i.e., $\mathbb{E}_{\xi_i}\|\nabla F_i(x;\xi_i) - \nabla f_i(x)\|^2 \le \sigma_l^2$, $\forall i \in \{1, 2, \ldots, m\}$; the global variance is also bounded, i.e., $\frac{1}{m}\sum_{i=1}^m \|\nabla f_i(x) - \nabla f(x)\|^2 \le \sigma_g^2$ for all $x \in \mathbb{R}^d$.
It is not hard to verify that $\sigma_g$ is smaller than the homogeneity parameter $\beta$, i.e., $\sigma_g^2 \le \beta^2$.

Assumption 3 (Bounded gradient). For any $i \in \{1, 2, \ldots, m\}$ and $x \in \mathbb{R}^d$, we have $\|\nabla f_i(x)\| \le B$.

Note that the above assumptions are mild and commonly used in characterizing the convergence rate of FL (Sun et al., 2022; Ghadimi & Lan, 2013; Yang et al., 2021; Bottou et al., 2018; Yu et al., 2019; Reddi et al., 2021). Different from classic decentralized parallel SGD methods such as D-PSGD (Lian et al., 2017), the technical difficulty here is that $z^t(i) - x^t(i)$ fails to be an unbiased estimate of the gradient $\nabla f_i(x^t(i))$ after multiple local iterations, so merging the multiple local iterations is non-trivial. Furthermore, the various communication network topologies in DFL are quite different from the setting of SAM in CFL (Qu et al., 2022). Below, we adopt the averaged parameter $\bar{x}^t = \frac{1}{m}\sum_{i=1}^m x^t(i)$ of all clients as the approximate solution of Problem (1).

Theorem 4.1 Let Assumptions 1, 2 and 3 hold, and let the parameters $\{x^t(i)\}_{t\ge 0}$ be generated by Algorithm 1. Meanwhile, assume the learning rate of SAM in each client satisfies $0 < \eta \le \frac{1}{10KL}$. Let $\bar{x}^t = \frac{1}{m}\sum_{i=1}^m x^t(i)$ and denote by $\Phi(\lambda, m, Q)$ the metric related to the spectral gap, the number of clients, and the multiple gossip steps:

$$\Phi(\lambda, m, Q) = \frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}. \qquad (6)$$

Then, the iterates of DFedSAM and DFedSAM-MGS for solving Problem (1) satisfy:

$$\min_{1\le t\le T} \mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2[f(\bar{x}^1)-f^*]}{T(\eta K - 32\eta^3K^2L^2)} + \alpha(K,\rho,\eta) + \Phi(\lambda,m,Q)\,\beta(K,\rho,\eta,\lambda), \qquad (7)$$

where $T$ is the number of communication rounds and the constants are given as

$$\alpha(K,\rho,\eta) = \frac{\eta L^2K^2}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{2K\rho^2}{2K-1}\Big),$$

$$\beta(K,\rho,\eta,\lambda) = \frac{64\eta^5K^3L^4}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2 + \frac{\rho^2}{\eta^2(2K-1)}\Big).$$

With Theorem 4.1, we state the following convergence rates for DFedSAM and DFedSAM-MGS.

Corollary 4.1.1 Let the local learning rate satisfy $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the same assumptions as in Theorem 4.1 and setting the perturbation parameter $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate for DFedSAM satisfies:

$$\min_{1\le t\le T} \mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$

Remark 1 DFedSAM achieves a linear speedup in the general non-convex setting as long as $T \ge K$, which is significantly better than state-of-the-art (SOTA) bounds such as $\mathcal{O}\big(\frac{1}{\sqrt{T}} + \frac{\sigma_g^2}{\sqrt{T}} + \frac{\sigma_g^2+B^2}{(1-\lambda)^2T^{3/2}}\big)$ in Sun et al. (2022). Note that the bound becomes tighter as $\lambda$ decreases; it is dominated by the $\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}$ term when $\lambda \le 1 - \frac{K^{1/4}}{T^{3/2}}$, whereas the bound degrades as $\beta$ increases.

Corollary 4.1.2 Let $Q > 1$, let $T$ be large enough, and let $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the same assumptions as in Theorem 4.1 and with perturbation amplitude $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, the convergence rate for DFedSAM-MGS satisfies:

$$\min_{1\le t\le T} \mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{K^{3/2}L^4}{T^2} + \Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).$$

Remark 2 The impact of the network topology ($1-\lambda$) can be alleviated as $Q$ increases: when the number of clients $m$ is large enough, the term $\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}$ of $\Phi(\lambda,m,Q)$ can be neglected, and the term $\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ is close to 1. This means that, by using the proposed $Q$-step gossip procedure, model consistency among clients can be improved, and thus DFL under various communication topologies can be roughly viewed as CFL.
Thus, the negative effect of the gradient variances $\sigma_l^2$ and $\beta^2$ can be reduced, especially on sparse network topologies where $\lambda$ is close to 1. In practice, a suitable number of gossip steps $Q > 1$ makes it possible to achieve a communication-accuracy trade-off in the DFL setting.

5 EXPERIMENTS

In this section, we evaluate the efficacy of our algorithms compared to six baselines from the CFL and DFL settings. In addition, we conduct several experiments to verify the impact of the communication network topology discussed in Section 4. Furthermore, several ablation studies are conducted.

5.1 EXPERIMENT SETUP

Dataset and Data Partition. The efficacy of the proposed DFedSAM and DFedSAM-MGS is evaluated on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) in both IID and non-IID settings. Specifically, the Dirichlet partition (Hsu et al., 2019) is used to simulate non-IID distributions across federated clients, where the local data of each client is obtained by splitting the total dataset according to per-class label ratios sampled from the Dirichlet distribution Dir(α), with α = 0.3 and α = 0.6.

Baselines. The compared baselines cover several SOTA methods in both the CFL and DFL settings. Specifically, centralized baselines include FedAvg (Mcmahan et al., 2017) and FedSAM (Qu et al., 2022). For the decentralized setting, D-PSGD (Lian et al., 2017), DFedAvg and DFedAvgM (Sun et al., 2022), along with DisPFL (Dai et al., 2022), are used for comparison.

Implementation Details. The total number of clients is set to 100, among which 10% of clients participate in communication. Specifically, all clients perform the local iteration step for decentralized methods, while only the participating clients perform local updates for centralized methods. We initialize the local learning rate to 0.1 with a decay rate of 0.998 per communication round for all experiments. For the CIFAR-10 and CIFAR-100 datasets, VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) are adopted as the backbones in each client, respectively. The number of communication rounds is set to 1000 in the experiments comparing with all baselines and studying topology-aware performance. In addition, all ablation studies are conducted on the CIFAR-10 dataset with the number of communication rounds set to 500.

Communication Configurations. For a fair comparison between the decentralized and centralized settings, we apply a dynamic, time-varying connection topology for decentralized methods to ensure that, in each round, the number of connections is no more than that in the central-server setting. Specifically, the number of clients communicating with their neighbors is controlled to keep the communication volume consistent with centralized methods. Following earlier works, communication complexity is measured by the number of local communications. More details on the experimental setup are presented in Appendix B due to space limitations.

5.2 PERFORMANCE EVALUATION

Performance with compared baselines. In Table 1 and Figure 3, we evaluate DFedSAM and DFedSAM-MGS (Q = 4) with ρ = 0.01 on the CIFAR-10 and CIFAR-100 datasets in both settings against all baselines from CFL and DFL. From these results, it is clear that our proposed algorithms outperform the other decentralized methods on these two datasets, and that DFedSAM-MGS roughly matches the performance of the SOTA centralized baseline FedSAM on CIFAR-10 and CIFAR-100. Specifically, the training and testing accuracies are presented in Table 1 to show generalization performance.
We can see that the performance improvement over all other baselines is most evident on CIFAR-10 with the same number of communication rounds. For instance, the gap between training accuracy and test accuracy on CIFAR-10 in the IID setting is 14.14% for DFedSAM, 13.22% for DFedSAM-MGS, 15.29% for FedAvg, and 15% for FedSAM. This means our algorithms achieve generalization comparable to the centralized baselines.

Impact of non-IID levels (β). Table 1 shows that our algorithms are robust to different participation cases. The heterogeneity of local client data is set to various levels — IID, Dirichlet 0.6, and Dirichlet 0.3 — where higher heterogeneity makes the training of the global/consensus model more difficult. For instance, on CIFAR-10, as the non-IID level increases, DFedSAM-MGS achieves better generalization than FedSAM, since the gaps between training and test accuracy for DFedSAM-MGS {15.27%, 14.51%, 13.22%} are lower than those for FedSAM {17.26%, 14.85%, 15%}. Similarly, the gaps for DFedSAM {17.37%, 15.06%, 14.10%} are lower than those for FedAvg {17.60%, 15.82%, 15.27%}. These observations confirm that our algorithms are more robust than the baselines under various degrees of data heterogeneity.

5.3 TOPOLOGY-AWARE PERFORMANCE

We verify the influence of various communication topologies and gossip averaging steps on DFedSAM and DFedSAM-MGS. Different from the comparison of CFL and DFL in Section 5.2, here we only need to verify the key properties of the DFL methods. Thus, the communication type is set to "Complete", i.e., each client can communicate with its neighbors in the same communication round. The degree of sparse connectivity λ in DFL satisfies: ring > grid > exponential > fully-connected. From Table 2, our algorithms are clearly superior to all decentralized baselines on various communication networks, which coincides with our theoretical findings. Specifically, compared with DFedAvgM, DFedSAM and DFedSAM-MGS significantly improve performance on the ring topology by 0.64% and 8.0%, respectively. Meanwhile, the performance of DFedSAM-MGS in various topologies is always better than that of the other methods. This observation confirms that multiple gossip steps can alleviate the impact of the network topology even with a small Q = 4. Therefore, our algorithms achieve better generalization and model consistency under various communication topologies.

5.4 ABLATION STUDY

Below, we verify the influence of each component and hyper-parameter in DFedSAM with Q = 1. All ablation studies are conducted on the "exponential" topology, except the study of Q, which uses three topologies; the communication type is "Complete", the same as in Section 5.3.

Consensus/gossip steps Q. In Figure 4, we investigate the balance between learning performance and communication complexity on three network topologies. We choose multiple steps Q = {1, 2, 3, 4} and study the different balance points under the different steps on three network topologies in Figure 4 (a), (b) and (c). As the number of local communications increases, model performance improves, but the communication complexity increases as well. The balance point differs across topologies but shows the same tendency, and a relatively larger Q brings better performance for a given communication complexity. Therefore, we select Q = 4 in DFedSAM-MGS under 1000 communication rounds for a better balance.

Local iteration steps K.
Large local iteration steps K can aid convergence, as shown with theoretical guarantees in previous DFL work (Sun et al., 2022). To investigate the acceleration in T obtained by adopting larger local iteration steps K, we fix the total batch size and vary the number of local training epochs. As shown in Figure 5 (a), our algorithms accelerate convergence when larger local iteration steps K are adopted, consistent with the theoretical results (see Section 4).

Number of participating clients m. As shown in Figure 5 (b), we compare the performance for different numbers of participating clients m = {50, 100, 150} with the same hyper-parameters. Compared with the larger m = 150, the smaller m = {50, 100} achieve better convergence and test accuracy, since each client holds more local data, which indirectly makes the local models generalize better and thereby improves the performance of the consensus model.

Perturbation radius ρ. The perturbation radius ρ impacts performance because the added perturbation accumulates as the number of communication rounds T increases; it poses a trade-off between test accuracy and generalization. To select a proper value for our algorithms, we conduct experiments with various perturbation radii from the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0} in Figure 5 (c). With ρ = 0.01, we achieve better convergence and performance. Meanwhile, ρ = O(1/√T) yields a linear speedup in convergence (see Corollary 4.1.1).

The effectiveness of SAM and MGS. To validate the effectiveness of SAM and MGS, we compare DFedAvg, DFedSAM, and DFedSAM-MGS under the same setting. From Table 3, DFedSAM achieves a performance improvement and better generalization than DFedAvg once the SAM optimizer is adopted. DFedSAM-MGS further boosts performance over DFedSAM, as MGS makes the models more consistent among clients and accelerates convergence.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we focus on the model inconsistency challenge caused by heterogeneous data and the network connectivity of the communication topology in DFL, and we overcome this challenge from the perspective of model generalization. We propose two DFL frameworks, DFedSAM and DFedSAM-MGS, with better model consistency among clients. DFedSAM adopts SAM to obtain a flatness model in each client, thereby improving generalization by producing a consensus/global flatness model. Meanwhile, DFedSAM-MGS further improves model consistency on top of DFedSAM by accelerating the aggregation of local flatness models and reaching a better trade-off between learning performance and communication complexity. On the theoretical side, we confirm a linear speedup and unify the impacts of the gradient perturbation in SAM, the local communications in MGS, and the network topology, along with data homogeneity, on the convergence rate in DFL. Furthermore, empirical results verify the superiority of our approaches. For future work, we will continue working toward understanding the effect of SAM and MGS for more desirable generalization in DFL.

B MORE DETAILS ON ALGORITHM IMPLEMENTATION

B.1 DATASETS AND BACKBONES.

CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) are labeled subsets of the 80 Million Tiny Images dataset. They share the same 60,000 input images. CIFAR-100 has finer labeling, with 100 unique labels, in comparison to CIFAR-10, which has 10 unique labels.
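As a concrete reference for the Dir(α) partition described in Section 5.1 and used on these datasets, the following is a minimal NumPy sketch (the function name and exact index bookkeeping are illustrative assumptions, not the authors' code); smaller α yields more skewed, i.e., more non-IID, splits:

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.3, seed=0):
    """Assign sample indices to clients by drawing per-class client
    proportions from Dir(alpha), as in Hsu et al. (2019)."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))  # class-c split ratios
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

With α = 0.3, most of each class is concentrated on a few clients, while α = 0.6 gives a milder skew, matching the two non-IID levels used in the experiments.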
VGG-11 is used as the backbone for CIFAR-10, and ResNet-18 is chosen for CIFAR-100, where the batch-norm layers are replaced by group-norm layers due to the detrimental effect of batch-norm in this setting.

B.2 MORE DETAILS ABOUT BASELINES.

FedAvg is the classic FL method that uses vanilla weighted averaging to train a global model in parallel with a central server. FedSAM applies SAM as the local optimizer to improve model generalization. For decentralized schemes, D-PSGD is a classic decentralized parallel SGD method for reaching a consensus model¹, DFedAvg is the decentralized FedAvg, and DFedAvgM uses SGD with momentum based on DFedAvg to train models on each client, performing multiple local training steps before each communication. Furthermore, DisPFL is a novel personalized FL framework with a decentralized communication protocol, which uses a decentralized sparse training technique; thus, for a fair comparison, we report the global accuracy of DisPFL.

¹In this work, we focus on decentralized FL, which refers to local training with multiple local iterations, whereas decentralized learning/training focuses on one-step local training. For instance, D-PSGD (Lian et al., 2017) is a decentralized training algorithm, which uses one-step SGD to train local models in each communication round.

B.3 HYPERPARAMETERS.

The total number of clients is set to 100, and the number of connections per client is restricted to at most 10 neighbors in the decentralized setting. For the centralized setting, the client sample ratio is set to 0.1. The local learning rate is set to 0.1, decayed by 0.998 after each communication round for all experiments, and the global learning rate is set to 1.0 for centralized methods. The batch size is fixed to 128 for all experiments. We run 1000 global communication rounds for CIFAR-10 and CIFAR-100. The SGD optimizer is used with weight decay 0.0005 for all baselines except FedSAM. The perturbation radius ρ = 0.01 for our algorithms (DFedSAM and DFedSAM-MGS with Q = 1) is chosen via grid search over the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}, and the value of ρ for FedSAM follows Qu et al. (2022). Following Sun et al. (2022), local optimization for DFedAvgM uses momentum 0.9. For the local iterations K, the number of training epochs in D-PSGD is set to 1, while that for all other methods is set to 5.

B.4 COMMUNICATION CONFIGURATIONS.

As noted in Dai et al. (2022), decentralized methods actually generate far more communication volume than centralized methods, because each client in the network topology needs to transmit its local information to its neighbors, whereas only the sampled clients upload their parameter updates to a central server in the centralized setting. Therefore, for a fair comparison, we use a dynamic, time-varying connection topology for the decentralized methods in Section 5.2: we restrict each client to communicate with at most 10 neighbors, which are randomly sampled without replacement from all clients, and only the 10 clients that are neighbors of each other perform one gossip step to exchange their local information in DFedSAM. In DFedSAM-MGS, the gossip step is performed Q times, so 10 × Q clients sampled without replacement perform gossip steps to exchange their local information.
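To make the gossip exchange above concrete, here is a minimal sketch of the Q-step averaging of Eq. (5) over stacked, flattened model parameters (the matrix W is assumed symmetric and doubly stochastic as in Definition 1; the sketch ignores the sampling of active clients for brevity):

```python
import numpy as np

def multiple_gossip_steps(Z, W, Q=4):
    """Eq. (5): Q rounds of neighbor averaging.
    Z: (m, d) array of local models z^t(i); W: (m, m) gossip matrix."""
    X = Z.copy()
    for _ in range(Q):
        X = W @ X          # x^{t,q+1}(i) = sum_l w_{i,l} z^{t,q}(l)
    return X               # x^{t+1}(i) = x^{t,Q}(i)
```

Setting Q = 1 recovers the single averaging step of DFedSAM in Eq. (4).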
C ALGORITHMS

Algorithm 1: DFedSAM and DFedSAM-MGS

Input: total number of clients $m$, total number of communication rounds $T$, number of consensus steps per gradient iteration $Q$, learning rate $\eta$, and total number of local iterations $K$.
Output: the consensus model $x^T$ after the final communication of all clients with their neighbors.

Initialization: randomly initialize each client's model $x^0(i)$.
for $t = 0$ to $T-1$ do
  for node $i$ in parallel do
    Set $y^{t,0}(i) \leftarrow x^t(i)$, $y^{t,-1}(i) = y^{t,0}(i)$
    for $k = 0$ to $K-1$ do
      Sample a batch of local data $\xi_i$ and calculate the local gradient $g^{t,k}(i) = \nabla F_i(y^{t,k};\xi_i)$
      $\tilde{g}^{t,k}(i) = \nabla F_i(y^{t,k}+\delta(y^{t,k});\xi_i)$ with $\delta(y^{t,k}) = \rho\, g^{t,k}/\|g^{t,k}\|_2$
      $y^{t,k+1}(i) = y^{t,k}(i) - \eta\,\tilde{g}^{t,k}(i)$
    end for
    $z^t(i) \leftarrow y^{t,K}(i)$
    Receive neighbors' models $z^t(l)$ from the neighborhood set $\mathcal{N}(i)$ with adjacency matrix $W$
    $x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l)$
    for $q = 0$ to $Q-1$ do
      $x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l)$ (with $z^{t,0}(i) = z^t(i)$)  (exchanging messages)
      $z^{t,q+1}(i) = x^{t,q+1}(i)$  (local gossip update)
    end for
    $x^{t+1}(i) = x^{t,Q}(i)$
  end for
end for

D PRELIMINARY LEMMAS

Lemma D.1 [Lemma 4, (Sun et al., 2022)] For any $t \in \mathbb{Z}^+$, the mixing matrix $W \in \mathbb{R}^{m\times m}$ satisfies $\|W^t - P\|_{op} \le \lambda^t$, where $\lambda := \max\{|\lambda_2(W)|, |\lambda_m(W)|\}$ and, for a matrix $A$, $\|A\|_{op}$ denotes its spectral norm. Here, $\mathbf{1} := [1, 1, \ldots, 1]^\top \in \mathbb{R}^m$ and $P := \frac{\mathbf{1}\mathbf{1}^\top}{m} \in \mathbb{R}^{m\times m}$. In [Proposition 1, (Nedic & Ozdaglar, 2009)], it is also proved that $\|W^t - P\|_{op} \le C\lambda^t$ for some $C > 0$ that depends on the matrix.

Lemma D.2 [Lemma A.5, (Qu et al., 2022)] (Bounded global variance of $\|\nabla f_i(x+\delta_i) - \nabla f(x+\delta)\|^2$.) As an immediate implication of Assumptions 1 and 2, the variance of local and global gradients with perturbation can be bounded as follows:
$$\|\nabla f_i(x+\delta_i) - \nabla f(x+\delta)\|^2 \le 3\sigma_g^2 + 6L^2\rho^2.$$

Lemma D.3 [Lemma B.1, (Qu et al., 2022)] (Bounded $\mathbb{E}_\delta$ of DFedSAM.) For any learning rate satisfying $\eta \le \frac{1}{4KL}$, the updates of DFedSAM have drift due to $\delta_{i,k} - \delta$:
$$\mathbb{E}_\delta = \frac{1}{m}\sum_{i=1}^m \mathbb{E}[\|\delta_{i,k}-\delta\|^2] \le 2K^2\beta^2\eta^2\rho^2,$$
where $\delta = \rho \frac{\nabla F(x)}{\|\nabla F(x)\|}$ and $\delta_{i,k} = \rho \frac{\nabla F_i(y^{t,k},\xi)}{\|\nabla F_i(y^{t,k},\xi)\|}$.

E CONVERGENCE ANALYSIS FOR DFEDSAM AND DFEDSAM-MGS

In the following, we present the proofs of the convergence results for DFedSAM and DFedSAM-MGS. The proof of Theorem 4.1 is given in Sections E.2 and E.3, corresponding to $Q = 1$ and $Q > 1$, respectively.

E.1 PRELIMINARY LEMMAS

Lemma E.1 Assume that Assumptions 1 and 2 hold, and that $(y^{t,k}(i)+\delta_{i,k})_{t\ge 0}$, $(x^{t,k}(i))_{t\ge 0}$ are generated by DFedSAM for all $i \in \{1, 2, \ldots, m\}$. For any learning rate $\eta \le \frac{1}{10KL}$, the client update of DFedSAM satisfies:
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k}) - x^t(i)\|^2 \le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}, \qquad (8)$$
where $0 \le k \le K-1$.

Proof. For any local iteration $k \in \{0, 1, \ldots, K-1\}$ in any node $i$, it holds that
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k}) - x^t(i)\|^2 = \frac{1}{m}\sum_{i=1}^m \mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k} - \eta\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - x^t(i)\|^2$$
$$= \frac{1}{m}\sum_{i=1}^m \mathbb{E}\big\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i) + \delta_{i,k}-\delta_{i,k-1} - \eta\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - \nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1}) - \nabla f_i(x^t) + \nabla f_i(x^t) - \nabla f(x^t) + \nabla f(x^t)\big)\big\|^2 \le I + II,$$
where
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m \big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + \mathbb{E}\|\delta_{i,k}-\delta_{i,k-1}\|^2\big)$$
and
$$II = \frac{2K}{m}\sum_{i=1}^m \mathbb{E}\big\|-\eta\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - \nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1}) - \nabla f_i(x^t) + \nabla f_i(x^t) - \nabla f(x^t) + \nabla f(x^t)\big)\big\|^2.$$
With Lemma D.3 and the assumptions, these terms are bounded as
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m \big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + 2K^2L^2\eta^2\rho^4\big),$$
$$II = \frac{8K\eta^2}{m}\sum_{i=1}^m \big(L^2\rho^2+\sigma_l^2+\sigma_g^2+\mathbb{E}\|\nabla f(x^t)\|^2\big),$$
where $\mathbb{E}\|\delta_{i,k-1}\|^2 \le \rho^2$.
Thus, we can obtain
$$\mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \Big(1+\frac{1}{2K-1}\Big)\mathbb{E}\|(y^{t,k-1}(i)+\delta_{i,k-1})-x^t(i)\|^2 + \frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2,$$
where $\mathbb{E}\|\nabla f(x^t)\|^2 = \frac{1}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2$, $f(x) := \frac{1}{m}\sum_{i=1}^m f_i(x)$, and $\nabla f_i(x^t) := \nabla f(x^t(i))$. Unrolling the recursion from $\tau = 0$ to $k$ yields
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \frac{1}{m}\sum_{i=1}^m \sum_{\tau=1}^{K-1}\Big(1+\frac{1}{2K-1}\Big)^\tau\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \Big(1+\frac{1}{2K-1}\Big)\rho^2$$
$$\le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}.$$
This completes the proof.

Lemma E.2 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge 0}$ be generated by DFedSAM for all $i \in \{1, 2, \ldots, m\}$ with any learning rate $\eta > 0$. Then the following bound holds:
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le \frac{C_2\eta^2}{(1-\lambda)^2},$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}$.

Proof. Following [Lemma 4, (Sun et al., 2022)], we denote $Z^t := [z^t(1), z^t(2), \ldots, z^t(m)]^\top \in \mathbb{R}^{m\times d}$. With this notation, we have
$$X^{t+1} = WZ^t = WX^t - \zeta^t, \qquad (9)$$
where $\zeta^t := WX^t - WZ^t$. The iteration (9) can be rewritten as
$$X^t = W^tX^0 - \sum_{j=0}^{t-1} W^{t-1-j}\zeta^j. \qquad (10)$$
Obviously, it follows that
$$WP = PW = P. \qquad (11)$$
According to Lemma D.1, it holds that $\|W^t-P\| \le \lambda^t$. Multiplying both sides of (10) by $P$ and using (11), we get
$$PX^t = PX^0 - \sum_{j=0}^{t-1} P\zeta^j = -\sum_{j=0}^{t-1} P\zeta^j, \qquad (12)$$
where we used the initialization $X^0 = 0$. Then we are led to
$$\|X^t-PX^t\| = \Big\|\sum_{j=0}^{t-1}(P-W^{t-1-j})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|P-W^{t-1-j}\|_{op}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\zeta^j\|. \qquad (13)$$
With the Cauchy inequality,
$$\mathbb{E}\|X^t-PX^t\|^2 \le \mathbb{E}\Big(\sum_{j=0}^{t-1}\lambda^{\frac{t-1-j}{2}}\cdot\lambda^{\frac{t-1-j}{2}}\|\zeta^j\|\Big)^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\mathbb{E}\|\zeta^j\|^2\Big).$$
Direct calculation gives us $\mathbb{E}\|\zeta^j\|^2 \le \|W\|^2\cdot\mathbb{E}\|X^j-Z^j\|^2 \le \mathbb{E}\|X^j-Z^j\|^2$. With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|X^j-Z^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|X^t-PX^t\|^2 \le \frac{m\,C_2\,\eta^2}{(1-\lambda)^2}.$$
The fact that $X^t-PX^t = [x^t(1)-\bar{x}^t,\; x^t(2)-\bar{x}^t,\;\ldots,\; x^t(m)-\bar{x}^t]^\top$ then proves the result.

Lemma E.3 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge 0}$ be generated by DFedSAM-MGS for all $i \in \{1, 2, \ldots, m\}$ with any learning rate $\eta > 0$. Then the following bound holds:
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2$ is as in Lemma E.2.

Proof. Following [Lemma 4, (Sun et al., 2022)] and Lemma E.2, we denote $Z^t := [z^t(1), z^t(2), \ldots, z^t(m)]^\top \in \mathbb{R}^{m\times d}$. With this notation, we have
$$X^{t+1} = W^QZ^t = W^QX^t - \zeta^t, \qquad (14)$$
where $\zeta^t := W^QX^t - W^QZ^t$. The iteration (14) can be rewritten as
$$X^t = W^{tQ}X^0 - \sum_{j=0}^{t-1} W^{(t-1-j)Q}\zeta^j. \qquad (15)$$
Obviously, it follows that
$$W^QP = PW^Q = P. \qquad (16)$$
According to Lemma D.1, it holds that $\|W^t-P\| \le \lambda^t$. Multiplying both sides of (15) by $P$ and using (16), we get
$$PX^t = PX^0 - \sum_{j=0}^{t-1} P\zeta^j = -\sum_{j=0}^{t-1} P\zeta^j,$$
where we used the initialization $X^0 = 0$. Then we are led to
$$\|X^t-PX^t\| = \Big\|\sum_{j=0}^{t-1}(P-W^{Q(t-1-j)})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|P-W^{Q(t-1-j)}\|_{op}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|W^{(t-1-j)(Q-1)}\|\,\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|W^{t-1-j}-P+P\|^{Q-1}\|\zeta^j\|.$$
With the Cauchy inequality,
$$\mathbb{E}\|X^t-PX^t\|^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big)$$
$$\le \Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big) \le \mathbb{E}\|\zeta^j\|^2\Big(\frac{1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{1}{(1-\lambda^Q)^2}\Big).$$
Direct calculation gives us
$$\mathbb{E}\|\zeta^j\|^2 \le \|W^Q\|^2\cdot\mathbb{E}\|X^j-Z^j\|^2 \le \|W-P+P\|^{2Q}\,\mathbb{E}\|X^j-Z^j\|^2 \le (\|W-P\|^{2Q}+\|P\|^{2Q})\,\mathbb{E}\|X^j-Z^j\|^2 \le (\lambda^Q+1)\,\mathbb{E}\|X^j-Z^j\|^2.$$
With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|X^j-Z^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|X^t-PX^t\|^2 \le m\,C_2\,\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2\big) + \frac{2K\rho^2}{\eta^2(2K-1)}$. The fact that $X^t-PX^t = [x^t(1)-\bar{x}^t,\ldots,x^t(m)-\bar{x}^t]^\top$ then proves the result.

E.2 PROOF OF CONVERGENCE RESULTS FOR DFEDSAM

Noting that $PX^{t+1} = PWZ^t = PZ^t$, we also have $\bar{x}^{t+1} = \bar{z}^t$, where $X := [x(1), x(2), \ldots, x(m)]^\top \in \mathbb{R}^{m\times d}$ and $Z := [z(1), z(2), \ldots, z(m)]^\top \in \mathbb{R}^{m\times d}$. Thus we have
$$\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t + \bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \qquad (17)$$
where $\bar{z}^t := \frac{1}{m}\sum_{i=1}^m z^t(i)$ and $\bar{x}^t := \frac{1}{m}\sum_{i=1}^m x^t(i)$. In each node,
$$\bar{z}^t-\bar{x}^t = \frac{1}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(y^{t,k+1}(i)-y^{t,k}(i)\big) = \frac{1}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\,\tilde{g}^{t,k}(i)\big) = -\frac{\eta}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i\big(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi\big). \qquad (18)$$
By the Lipschitz continuity of $\nabla f$,
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) + \mathbb{E}\langle\nabla f(\bar{x}^t), \bar{z}^t-\bar{x}^t\rangle + \frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2, \qquad (19)$$
where we used (17). Using (18),
$$\mathbb{E}\langle K\nabla f(\bar{x}^t), (\bar{z}^t-\bar{x}^t)/K\rangle = \mathbb{E}\langle K\nabla f(\bar{x}^t), -\eta\nabla f(\bar{x}^t)+\eta\nabla f(\bar{x}^t)+(\bar{z}^t-\bar{x}^t)/K\rangle$$
$$= -\eta K\,\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \mathbb{E}\Big\langle K\nabla f(\bar{x}^t), \frac{\eta}{mK}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\nabla f(x^t(i))-\nabla F_i(y^{t,k}+\delta_{i,k};\xi)\big)\Big\rangle$$
$$\overset{a)}{\le} -\eta K\,\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \eta\,\mathbb{E}\|\nabla f(\bar{x}^t)\|\cdot\Big\|\frac{L}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(x^t(i)-y^{t,k}-\delta_{i,k}\big)\Big\|$$
$$\overset{b)}{\le} -\frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K^2}{2K}\Big(2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\big)+\frac{2K\rho^2}{2K-1}\Big), \qquad (20)$$
where a) uses the Lipschitz continuity and b) uses Lemma E.1. Meanwhile, we can get
$$\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2 = \frac{L}{2}\mathbb{E}\|\bar{z}^t-\bar{x}^t\|^2 \le \frac{L}{2}\,\frac{1}{m}\sum_{i=1}^m\|y^{t,K}(i)-x^t(i)\|^2 \le \frac{L}{2}\mathbb{E}\Big\|-\frac{\eta}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i(y^{t,k}+\delta_{i,k};\xi)\Big\|^2 \overset{a)}{\le} \frac{L}{2}\eta^2K^2B^2, \qquad (21)$$
where a) uses Assumption 3. Furthermore, (19) can be written as
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \qquad (22)$$
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. Thus, with Lemma E.2, we can get
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 \le \frac{2L^2}{m}\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2 + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \overset{a)}{\le} \frac{2L^2C_2\eta^2}{(1-\lambda)^2} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \qquad (23)$$
where a) uses Lemma E.2 and $C_2$ is as in Lemma E.2. Therefore, (19) becomes
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 8\eta^3K^2L^2\Big(\frac{2L^2C_2\eta^2}{(1-\lambda)^2}+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\Big)$$
$$\le \mathbb{E}f(\bar{x}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + \frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}. \qquad (24)$$
Summing (24) from $t = 1$ to $T$, we obtain the claimed result:
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+\frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}}{\eta K-32\eta^3K^2L^2}.$$
If we choose the learning rate $\eta = \mathcal{O}(1/L\sqrt{KT})$ with $\eta \le \frac{1}{10KL}$ and the number of communication rounds $T$ is large enough, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K^{3/2}L^2\rho^4}{T} + \frac{K(L^4\rho^2+\sigma_g^2+\sigma_l^2)}{T} + \frac{L^2\rho^2}{T(1-\lambda)^2} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^2L^2\rho^4}{T^2(1-\lambda)^2} + \frac{K(L^2\rho^2+\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, we then have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\sigma_g^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
Under Definition 2, we can get
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
This completes the proof.
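As a numerical sanity check on the mixing bound used throughout these proofs (Lemma D.1), one can verify $\|W^t - P\|_{op} \le \lambda^t$ for a simple ring topology; the uniform 1/3 ring weights below are an illustrative choice satisfying Definition 1:

```python
import numpy as np

m = 16
# Ring gossip matrix: each client averages itself and its two neighbors.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1.0 / 3.0

P = np.ones((m, m)) / m                    # P = 11^T / m
eigvals = np.linalg.eigvalsh(W)            # W symmetric => real eigenvalues
lam = np.sort(np.abs(eigvals))[-2]         # lambda = max(|lambda_2|, |lambda_m|)
for t in (1, 5, 20):
    op_norm = np.linalg.norm(np.linalg.matrix_power(W, t) - P, ord=2)
    assert op_norm <= lam ** t + 1e-9      # Lemma D.1 (equality for symmetric W)
    print(t, op_norm, lam ** t)
```

For this ring, λ ≈ 0.949, so the consensus error contracts slowly, consistent with the weaker performance of sparse topologies observed in Table 2.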
E.3 PROOF OF CONVERGENCE RESULTS FOR DFEDSAM-MGS

With multiple gossip steps, we write $x^0$ and $z^0$ as $x$ and $z$, respectively. Meanwhile, $Z^{t,Q} = Z^{t,0}\cdot W^Q = Z^t\cdot W^Q$. Noting that $PX^{t+1} = PW^QZ^t = PZ^t$ ($Q > 1$), we again have $\bar{x}^{t+1} = \bar{z}^t$, where $X := [x(1), \ldots, x(m)]^\top \in \mathbb{R}^{m\times d}$ and $Z := [z(1), \ldots, z(m)]^\top \in \mathbb{R}^{m\times d}$. Thus,
$$\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t + \bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \qquad (25)$$
and, in each node,
$$\bar{z}^t-\bar{x}^t = -\frac{\eta}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i\big(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi\big). \qquad (26)$$
The descent step then proceeds exactly as in Section E.2: the smoothness of $f$ gives the analogue of (19), the inner-product term is bounded as in (20) via Lemma E.1, and the quadratic term is bounded as in (21) via Assumption 3, which together yield the analogue of (22):
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \qquad (30)$$
where $C_1$ is as in (22). Thus, with Lemma E.3, we can get
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 \le \frac{2L^2}{m}\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2 + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \overset{a)}{\le} 2L^2C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big) + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \qquad (31)$$
where a) uses Lemma E.3 and $C_2$ is as in Lemma E.2. Therefore,
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 16C_2\eta^5K^2L^4\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big). \qquad (32)$$
Summing (32) from $t = 1$ to $T$ yields
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+16C_2\eta^5K^2L^4\big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\big)}{\eta K-32\eta^3K^2L^2}.$$
If we choose the learning rate $\eta = \mathcal{O}(1/L\sqrt{KT})$ with $\eta \le \frac{1}{10KL}$, take $T$ large enough, and apply Definition 2, then, with $\Phi(\lambda, m, Q) = \frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ the key topology-related factor (in the spectral gap, the number of clients, and the gossip steps) of the convergence bound, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K^{3/2}L^2\rho^4}{T} + \frac{K(L^4\rho^2+\beta^2+\sigma_l^2)}{T} + \Phi(\lambda,m,Q)\Big(\frac{L^2\rho^2}{T}+\frac{K^2L^2\rho^4}{T^2}+\frac{K(L^2\rho^2+\beta^2+\sigma_l^2+B^2)}{T^2}\Big)\Big).$$
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho = \mathcal{O}(\frac{1}{\sqrt{T}})$, we then have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{K^{3/2}L^4}{T^2} + \Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).$$
This completes the proof.
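To illustrate Remark 2 numerically, the short sketch below evaluates $\Phi(\lambda, m, Q)$ from Eq. (6); the values λ = 0.95 and m = 100 are illustrative choices mimicking a sparse topology with many clients:

```python
def Phi(lam, m, Q):
    """Topology/consensus factor from Eq. (6)."""
    return (lam ** Q + 1) / ((1 - lam) ** 2 * m ** (2 * (Q - 1))) \
         + (lam ** Q + 1) / (1 - lam ** Q) ** 2

lam, m = 0.95, 100
for Q in (1, 2, 4, 8):
    print(Q, Phi(lam, m, Q))
```

With these values, Φ drops from about 1.6 × 10³ at Q = 1 to roughly 53 at Q = 4, i.e., a few gossip steps already suppress most of the topology-induced term in Corollary 4.1.2.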
Reviewer Questions

1. What is the main contribution of the paper regarding sharpness-aware minimization in decentralized federated learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis?
3. Do you have any concerns regarding the presentation and relevance of the theory presented in Section 4?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor typos or unclear sections in the paper that could be improved?
Summary Of The Paper

Sharpness-aware minimization (SAM) has recently been shown to improve various aspects of deep learning. In this paper, the authors show empirically that SAM also helps to improve the performance in decentralized federated learning. A main contribution of this paper is a theoretical analysis of their federated learning algorithms. However, this theory appears disconnected from the rest of the paper and it is unclear why it matters in practice (see weaknesses).

Strengths And Weaknesses

The main weakness of the paper is the theory presented in Section 4. A lot of work is spent to derive a complicated and intimidating bound, and I am not convinced that this bound is important or matters in practice. For example, it is unclear if it matches any situations observed in practice. To improve the paper, some experiments could be added which demonstrate that this theory is correct or useful, as it is near impossible to confirm the math due to its complexity and presentation. Just notice that for the considered deep learning applications, Assumption 1 and Assumption 3 may not hold. Perhaps the authors could provide some "sanity checks" and show what happens on a simple federated linear regression problem. Does the theory work in that setting? Do the bounds / analysis simplify? If the theory in Section 4 could explain some aspects of why SAM improves decentralized federated learning or give some guidance on the choice of parameters, I would be convinced. But as of now, I don't see the point of including it.

Minor typos (no influence on my rating):

• decebtralized -> decentralized
• Eq. 2: SAM uses the norm ||.||, not the squared norm. ||.||^2 is not a norm, so this sentence is technically wrong. Anyway, why write it with the squared norm if it is simplified later?
• Commincation -> Communication
• typology -> topology
• desira ble -> desirable
• How are the weights W chosen? Perhaps a sentence can be added in the paragraph after Eq. (1) on how W is picked in practice.
• Section 3.2 is not very clearly written. In particular, does the neighbourhood N include the node itself, i.e., does Eq. (4) also include the weights of node i? In Eq. 5, why does it suddenly switch to "q" from "k"? The difference between Q and K needs to be explained; I suspect that the averaging happens every few inner iterations here? However, after reading Algorithm 1 in the appendix, these things became clear.
• Figure 5 (c) is very hard to read, especially for color-blind people. Perhaps the best result can be highlighted with a different line style or bold?

Clarity, Quality, Novelty And Reproducibility

Clarity/Quality: see strengths/weaknesses. Novelty: The paper seems to be the first one to apply SAM to decentralized federated learning. However, the application is straightforward, and applying an existing algorithm to an existing problem domain is of rather limited novelty. Reproducibility: the results seem possible to reproduce if the code is provided.
Title Improving Model Consistency of Decentralized Federated Learning via Sharpness Aware Minimization and Multiple Gossip Approaches Abstract To mitigate the privacy leakages and reduce the communication burden of Federated Learning (FL), decentralized FL (DFL) discards the central server and each client only communicates with its neighbors in the decentralized communication network. However, existing DFL algorithms tend to feature high inconsistency among local models, which results in severe distribution shifts across clients and inferior performance compared with centralized FL (CFL), especially on heterogeneous data or with sparse connectivity of communication topology. To alleviate this challenge, we propose two DFL algorithms named DFedSAM and DFedSAM-MGS to improve the performance. Specifically, DFedSAM leverages gradient perturbation to generate local flatness models via Sharpness Aware Minimization (SAM), which searches for model parameters with uniformly low loss function values. In addition, DFedSAM-MGS further boosts DFedSAM by adopting the technique of Multiple Gossip Steps (MGS) for a better model consistency, which accelerates the aggregation of local flatness models and better balances the communication complexity and learning performance. In the theoretical perspective, we present the improved convergence rates O ( 1 T + 1 T 2(1−λ)2 ) and O ( 1 T + λ+1 T 2(1−λQ)2 ) in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where 1−λ is the spectral gap of the gossip matrix W and Q is the gossip steps in MGS. Meanwhile, we empirically confirm that our methods can achieve competitive performance compared with CFL baselines and outperform existing DFL baselines. N/A ( 1 T + 1 T 2(1−λ)2 ) and O ( 1 T + λQ+1 T 2(1−λQ)2 ) in the stochastic non-convex setting for DFedSAM and DFedSAM-MGS, respectively, where 1−λ is the spectral gap of the gossip matrix W and Q is the gossip steps in MGS. Meanwhile, we empirically confirm that our methods can achieve competitive performance compared with CFL baselines and outperform existing DFL baselines. 1 INTRODUCTION … … … … (a) Ring (b) Exponential (c) Grid (d) Fully-connected (b) Loss surface of FedAvg (c) Loss surface of DFedAvg Federated learning (FL) (Mcmahan et al., 2017; Li et al., 2020b) allows distributed clients to collaboratively train a shared model under the orchestration of the cloud without transmitting local data. However, almost all FL paradigms employ a central server to communicate with clients, which faces several critical challenges, such as computational resources limitation, high communication bandwidth cost, and privacy leakage (Kairouz et al., 2021). Compared to the centralized FL (CFL) framework, decentralized FL (DFL, see Figure 1), in which the clients only communicate with their neighbors without a central server, offers communication advantage and further preserves the data privacy (Kairouz et al., 2021; Wang et al., 2021). However, DFL suffers from bottlenecks such as severe inconsistency of local models due to heterogeneous data and model aggregation locality caused by the network connectivity of communication topology. This inconsistency results in severe over-fitting in local models and model performance degradation. Therefore, the global/consensus model may bring inferior performance compared with CFL, especially on heterogeneous data or in face of the sparse connectivity of communication net- 1 works. Similar performance pattern of DFL has also been demonstrated by Sun et al. (2022). 
To explore the reasons behind this phenomenon, we present the structure of the loss landscapes (Li et al., 2018) for FedAvg (Mcmahan et al., 2017) and decentralized FedAvg (DFedAvg, Sun et al. (2022)) on Fashion-MNIST (Xiao et al., 2017) with the same setting in Figure 2 (a) and (b). It is clearly seen that DFL method has a sharper landscape than CFL method. Motivation. Most FL algorithms face the over-fitting issue of local models on heterogeneous data. Many recent works (Sahu et al., 2018; Li et al., 2020c; Karimireddy et al., 2020; Yang et al., 2021; Acar et al., 2021; Wang et al., 2022) focus on the CFL and mitigate this issue with various effective solutions. In DFL, this issue can be exacerbated due to sharp loss landscape caused by the inconsistency of local models (see Figure 2 (a) and (b)). Therefore, the performance of decentralized schemes is usually worse than that of centralized schemes with the same setting (Sun et al., 2022). Consequently, an important research question is: can we design a DFL algorithm that can mitigate inconsistency among local models and achieve the similar performance to its centralized counterpart? To address this question, we propose two DFL algorithms: DFedSAM and DFedSAM-MGS. Specifically, DFedSAM overcomes local model over-fitting issue via gradient perturbation with SAM (Foret et al., 2021) in each client to generate local flatness models. Since each client aggregates the flatness models from its neighbors, a potential flat aggregated model can be generated, which results in high generalization ability. To further boost the performance of DFedSAM, DFedSAMMGS integrates multiple gossip steps (MGS) (Ye et al., 2020; Ye & Zhang, 2021; Li et al., 2020a) to accelerate the aggregation of local flatness models by increasing the number of gossip steps of local communications. It realizes a better trade-off between communication complexity and learning performance by bridging the gap between CFL and DFL, since DFL can be roughly regarded as CFL with a sufficiently large number of gossip steps (see Section 5.4). Theoretically, we present the convergence rates for our algorithms in the stochastic non-convex setting. We show that the bound can be looser when the connectivity of the communication topology λ is sufficiently sparse, or the data homogeneity β is sufficiently large, while as the consensus/gossip steps Q in MGS increase, it is tighter as the impact of communication topology can be alleviated (see Section 4). The theoretical results directly explain why the application of SAM and MGS in DFL can ensure better performance with various types of communication network topology. Empirically, we conduct extensive experiments on CIFAR-10 and CIFAR-100 datasets in both the identical data distribution (IID) and non-IID settings. The experimental results confirm that our algorithms achieve competitive performance compared to CFL baselines and outperform DFL baselines (see Section 5.2). Contribution. Our main contributions can be summarized as three-fold: • We propose two DFL algorithms DFedSAM and DFedSAM-MGS. DFedSAM alleviates the inconsistency of local models through getting local flatness models, while DFedSAMMGS achieves a better consistency based on DFedSAM via the aggregation acceleration and has a better trade-off between communication and generalization. 
• We present the convergence ratesO ( 1 T + 1 T 2(1−λ)2 ) andO ( 1 T + λQ+1 T 2(1−λQ)2 ) for DFedSAM and DFedSAM-MGS in the non-convex settings, respectively, and show that our algorithms can achieve the linear speedup for convergence. • We conduct extensive experiments to verity the efficacy of our proposed DFedSAM and DFedSAM-MGS, which can achieve competitive performance compared with both CFL and DFL baselines. 2 RELATED WORK Decentralized Federated Learning (DFL). In DFL, clients only communicate with their neighbors in various communication networks without a central server in comparison to CFL, which offers communication advantage and preserves the data privacy. Lalitha et al. (2018; 2019) take a Bayesian-like approach by introducing a belief over the model parameter space of the clients in a fully DFL framework. Roy et al. (2019) propose the first server-less, peer-to-peer approach BrainTorrent to FL and apply it on medical application in a highly dynamic peer-to-peer FL environment. Sun et al. (2022) apply the multiple local iteration with SGD and quantization method to further reduce the communication cost, and provide the convergence results in various convexity setting. Dai et al. (2022) develop a decentralized sparse training technique to further save the communication and computation cost. Sharpness Aware Minimization (SAM). SAM (Foret et al., 2021) is an effective optimizer for training deep learning models, which leverages the flatness geometry of the loss landscape to improve model generalization ability. Recently, Andriushchenko & Flammarion (2022) study the properties of SAM and provide convergence results of SAM for non-convex objectives. As a powerful optimizer, SAM and its variants have been applied to various machine learning (ML) tasks (Zhao et al., 2022; Kwon et al., 2021; Du et al., 2021; Liu et al., 2022; Abbas et al., 2022). Specifically, Qu et al. (2022) and Caldarola et al. (2022) integrate SAM to improve the generalization, and thus mitigate the distribution shift problem and achieve a new SOTA performance for CFL. However, to the best of our knowledge, no efforts have been devoted to the empirical performance and theoretical analysis of SAM in the DFL setting. Multiple Gossip Steps (MGS). The advantage of increasing the times of local communications within a network topology is investigated in Ye et al. (2020), in which FastMix is proposed with multi-consensus and gradient tracking and they establish the optimal computational complexity and a near optimal communication complexity. DeEPCA (Ye & Zhang, 2021) integrates FastMix into a decebtralized PCA algorithm to accelerate the training process. DeLi-CoCo (Hashemi et al., 2022) performs multiple compression gossip steps in each iteration for fast convergence with arbitrary communication compression. Network-DANE (Li et al., 2020a) uses multiple gossip steps and generalizes DANE to decentralized scenarios. In general, by increasing the number of gossip steps, local clients can approach to a better consensus model towards the performance in CFL. Thus, the use of MGS can also potentially mitigate the model inconsistency in the DFL setting. The work most related to this paper is DFedAvg and DFedAvg with momentum (DFedAvgM) in Sun et al. (2022), which leverages multiple local iterations with the SGD optimizer and significantly improve the performance of classic decentralized parallel SGD method D-PSGD (Lian et al., 2017). 
However, DFL may suffers from inferior performance due to the severe model inconsistency issue among the clients. Another related work is FedSAM (Qu et al., 2022), which integrates SAM optimizer into CFL to enhance the flatness of local model and achieves new SOTA performance for CFL. On top of the aforementioned studies, we are the first to extend the SAM optimizer to the DFL setting and simultaneously provide its convergence guarantee in the nonconvex setting. Furthermore, we bride the gap of CFL and DFL via adopting MGS in DFedSAM-MGS, which largely mitigates the model inconsistency in DFL. 3 METHODOLOGY In this section, we try to solve this issue in the DFL setting. Below, we first initialize the problem setup in DFL and then describe the proposed DFedSAM and DFedSAM-MGS in detail. 3.1 PROBLEM SETUP In this work, we are interested in solving the following finite-sum stochastic non-convex minimization problem in the DFL setting: min x∈Rd f(x) := 1 m m∑ i=1 fi(x), fi(x) = Eξ∼DiFi(x; ξ), (1) where Di denotes the data distribution in the i-th client, which is heterogeneous across clients, m is the number of clients, and Fi(x; ξ) is the local objective function associated with the training data samples ξ. Problem (1) is known as empirical risk minimization (ERM) and models many applications in ML. As shown in Figure 1(b), we model the communication network in the decentralized network topology between clients as an undirected connected graph G = (N ,V,W ), where N := {1, 2, . . . ,m} represents the set of clients, and V ⊆ N × N represents the set of communication channels, each connecting two distinct clients. Furthermore, we emphasis that there is no central server in the decentralized setting and all clients only communicate with their neighbors with respect to the communication channels V . In addition, we assume Problem (1) is well-defined and denote f∗ as the minimal value of f , i.e., f(x) ≥ f(x∗) = f∗ for all x ∈ Rd. 3.2 DFEDSAM AND DFEDSAM-MG ALGORITHMS Instead of searching for a solution via SGD (Bottou, 2010; Bottou et al., 2018), SAM (Foret et al., 2021) aims to seek a solution in a flatness region through adding a small perturbation to the models, i.e., x+δ with more robust performance. As shown in Figure 2, decentralized schemes has a sharper landscape with poorer generalization ability than centralized schemes. However, the study focus on this issue remains unexplored. In this paper, we extend to SAM optimizer into DFL for investigating this issue, dubbed DFedSAM, whose local loss function is defined as: fi(x) = Eξ∼Di max ∥δi∥22≤ρ Fi(y t,k(i) + δi; ξi), i ∈ N (2) where yt,k(i) + δi is viewed as the perturbed model, ρ is a predefined constant controlling the radius of the perturbation and ∥ · ∥22 is a l2-norm, which can be simplified to ∥ · ∥2 in the rest. Similar with CFL methods, in DFL, DFedSAM allows that clients can update the local model parameters with multiple local iterates before communication are performed. Specifically, for each client i ∈ {1, 2, ...,m}, each local iteration k ∈ {0, 1, ...,K − 1} in each communication round t ∈ {0, 1, ..., T − 1}, the k-th inner iteration in client i is performed as: yt,k+1(i) = yt,k(i)− ηg̃t,k(i), (3) where g̃t,k(i) = ∇Fi(yt,k + δ(yt,k); ξ) and δ(yt,k) = ρgt,k/ ∥∥gt,k∥∥ 2 . Following by (Foret et al., 2021), using first order Taylor expansion around yt,k for a small value of ρ. After K inner iterations in each client, parameters are updated as zt(i) ← yt,K(i) and sent to its neighbors l ∈ N (i) after local updates. 
Then each client averages its parameters with the information of its neighbors:
$$x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l). \tag{4}$$
On the other hand, we use the multiple gossip steps (MGS) technique (Ye et al., 2020; Ye & Zhang, 2021; Hashemi et al., 2022) to achieve better consistency among local models on top of DFedSAM, dubbed DFedSAM-MGS, thereby further boosting the performance. DFedSAM-MGS provides a balance between communication cost and generalization ability in the DFL setting. Specifically, the procedure of MGS at the $q$-th step ($q\in\{0,1,\dots,Q-1\}$) can be viewed as two sub-steps, exchanging messages and a local gossip update:
$$x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l), \qquad z^{t,q+1}(i) = x^{t,q+1}(i). \tag{5}$$
At the end of MGS, $x^{t+1}(i) = x^{t,Q}(i)$. Both DFedSAM and DFedSAM-MGS are summarized in Algorithm 1 (see Appendix C).

DFedSAM trades local computation for communication overhead through multiple local iterations, with local communication performed only once per round, whereas DFedSAM-MGS performs multiple local communications with a larger $Q$ to better synchronize all local clients. Therefore, DFedSAM-MGS can be viewed as a compromise between DFL and CFL.

Compared with the existing SOTA DFL methods DFedAvg and DFedAvgM (Sun et al., 2022), the benefits of DFedSAM and DFedSAM-MGS are three-fold: (i) SAM is introduced to alleviate the local over-fitting caused by the inconsistency of local models, by seeking a flat model at each client in DFL, which also contributes to a flatter consensus model; (ii) MGS in DFedSAM-MGS further accelerates the aggregation of local flat models for better consistency among local models and properly balances communication complexity and learning performance; (iii) we present a theory unifying the impacts of the gradient perturbation $\rho$ in SAM, the number of local communications $Q$ in MGS, and the network topology $\lambda$, along with the data homogeneity $\beta$, on the convergence rate in Section 4.
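To make the update rules in Eqs. (3)-(5) concrete, below is a minimal numerical sketch of one DFedSAM / DFedSAM-MGS round on toy quadratic client objectives. The client losses, the ring topology, and all constants are illustrative assumptions rather than the paper's experimental setup; setting Q = 1 recovers plain DFedSAM.

```python
# A minimal sketch of DFedSAM(-MGS), assuming toy quadratic client losses
# 0.5 * ||A_i x - b_i||^2 and a ring gossip topology (both illustrative).
import numpy as np

m, d, K, Q, eta, rho = 8, 5, 5, 4, 0.01, 0.01
rng = np.random.default_rng(0)
A = [rng.standard_normal((d, d)) for _ in range(m)]
b = [rng.standard_normal(d) for _ in range(m)]

def grad(i, x):
    # gradient of client i's loss 0.5 * ||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

# symmetric, doubly stochastic gossip matrix W for a ring topology
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

X = np.zeros((m, d))                       # row i holds x^t(i)
for t in range(100):
    Z = np.zeros_like(X)
    for i in range(m):
        y = X[i].copy()
        for k in range(K):                 # K local SAM steps, Eq. (3)
            g = grad(i, y)
            delta = rho * g / (np.linalg.norm(g) + 1e-12)
            y -= eta * grad(i, y + delta)  # gradient at the perturbed model
        Z[i] = y                           # z^t(i) <- y^{t,K}(i)
    for q in range(Q):                     # Q gossip steps, Eqs. (4)-(5)
        Z = W @ Z                          # Q = 1 is plain DFedSAM
    X = Z                                  # x^{t+1}(i)

print("consensus residual:", np.linalg.norm(X - X.mean(0)))
```

Note how a larger Q simply applies the mixing matrix more times per round, which drives the consensus residual down faster at the price of more communication, exactly the trade-off discussed above.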
4 CONVERGENCE ANALYSIS

In this section, we show the convergence results of DFedSAM and DFedSAM-MGS for the general non-convex FL setting; the detailed proofs are presented in Appendix E. Below, we first give several useful and necessary notations and assumptions.

Definition 1 (The gossip/mixing matrix). (Sun et al., 2022, Definition 1) The gossip matrix $\mathbf{W} = [w_{i,j}] \in [0,1]^{m\times m}$ is assumed to have the following properties: (i) (Graph) If $i\ne j$ and $(i,j)\notin\mathcal{V}$, then $w_{i,j}=0$; otherwise, $w_{i,j}>0$; (ii) (Symmetry) $\mathbf{W} = \mathbf{W}^\top$; (iii) (Null space property) $\mathrm{null}\{\mathbf{I}-\mathbf{W}\} = \mathrm{span}\{\mathbf{1}\}$; (iv) (Spectral property) $\mathbf{I} \succeq \mathbf{W} \succ -\mathbf{I}$. Under these properties, the eigenvalues of $\mathbf{W}$ satisfy $1 = |\lambda_1(\mathbf{W})| > |\lambda_2(\mathbf{W})| \ge \cdots \ge |\lambda_m(\mathbf{W})|$. Furthermore, $\lambda := \max\{|\lambda_2(\mathbf{W})|, |\lambda_m(\mathbf{W})|\}$, and $1-\lambda \in (0,1]$ is denoted as the spectral gap of $\mathbf{W}$.

Definition 2 (Homogeneity parameter). (Li et al., 2020a, Definition 2) For any $i\in\{1,2,\dots,m\}$ and parameter $x\in\mathbb{R}^d$, the homogeneity parameter $\beta$ is defined as
$$\beta := \max_{1\le i\le m}\beta_i, \quad \text{with} \quad \beta_i := \sup_{x\in\mathbb{R}^d}\|\nabla f_i(x)-\nabla f(x)\|.$$

Assumption 1 (Lipschitz smoothness). The function $f_i$ is differentiable and $\nabla f_i$ is $L$-Lipschitz continuous, $\forall i\in\{1,2,\dots,m\}$, i.e., $\|\nabla f_i(x)-\nabla f_i(y)\| \le L\|x-y\|$ for all $x,y\in\mathbb{R}^d$.

Assumption 2 (Bounded variance). The stochastic gradient of each $f_i$ has $\sigma_l$-bounded variance, i.e., $\mathbb{E}_{\xi_i}\|\nabla F_i(y;\xi_i)-\nabla f_i(y)\|^2 \le \sigma_l^2$, $\forall i\in\{1,2,\dots,m\}$; the global variance is also bounded, i.e., $\frac{1}{m}\sum_{i=1}^m \|\nabla f_i(x)-\nabla f(x)\|^2 \le \sigma_g^2$ for all $x\in\mathbb{R}^d$.

It is not hard to verify that $\sigma_g$ is bounded by the homogeneity parameter $\beta$, i.e., $\sigma_g^2 \le \beta^2$.

Assumption 3 (Bounded gradient). For any $i\in\{1,2,\dots,m\}$ and $x\in\mathbb{R}^d$, we have $\|\nabla f_i(x)\| \le B$.

Note that the above-mentioned assumptions are mild and commonly used in characterizing the convergence rate of FL (Sun et al., 2022; Ghadimi & Lan, 2013; Yang et al., 2021; Bottou et al., 2018; Yu et al., 2019; Reddi et al., 2021). Different from classic decentralized parallel SGD methods such as D-PSGD (Lian et al., 2017), the technical difficulty here is that $z^t(i)-x^t(i)$ fails to be an unbiased estimate of the gradient $\nabla f_i(x^t(i))$ after multiple local iterations, so merging the multiple local iterations is non-trivial. Furthermore, the various communication network topologies in DFL are quite different from the setting of SAM in CFL (Qu et al., 2022). Below, we adopt the averaged parameter $\bar{x}^t = \frac{1}{m}\sum_{i=1}^m x^t(i)$ of all clients as the approximate solution of Problem (1).

Theorem 4.1 Let Assumptions 1, 2 and 3 hold, and let the parameters $\{x^t(i)\}_{t\ge 0}$ be generated via Algorithm 1. Assume the learning rate of SAM in each client satisfies $0 < \eta \le \frac{1}{10KL}$. Let $\bar{x}^t = \frac{1}{m}\sum_{i=1}^m x^t(i)$ and denote by $\Phi(\lambda, m, Q)$ the quantity relating the spectral gap, the number of clients, and the number of gossip steps:
$$\Phi(\lambda,m,Q) = \frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}} + \frac{\lambda^Q+1}{(1-\lambda^Q)^2}. \tag{6}$$
Then we have the following gradient estimate for DFedSAM ($Q=1$) and DFedSAM-MGS ($Q>1$) on Problem (1):
$$\min_{1\le t\le T} \mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2[f(\bar{x}^1)-f^*]}{T(\eta K - 32\eta^3 K^2 L^2)} + \alpha(K,\rho,\eta) + \Phi(\lambda,m,Q)\,\beta(K,\rho,\eta,\lambda), \tag{7}$$
where $T$ is the number of communication rounds and the constants are given as
$$\alpha(K,\rho,\eta) = \frac{\eta L^2 K^2}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{2K\rho^2}{2K-1}\Big),$$
$$\beta(K,\rho,\eta,\lambda) = \frac{64\eta^5K^3L^4}{\eta K - 32\eta^3K^2L^2}\Big(\frac{4K^3L^2\rho^4}{2K-1} + 8K(L^2\rho^2+\sigma_g^2+\sigma_l^2) + 8KB^2 + \frac{\rho^2}{\eta^2(2K-1)}\Big).$$

With Theorem 4.1, we state the following convergence rates for DFedSAM and DFedSAM-MGS.

Corollary 4.1.1 Let the local learning rate satisfy $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with the perturbation parameter set to $\rho = \mathcal{O}(1/\sqrt{T})$, the convergence rate of DFedSAM satisfies:
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{KB^2}{T^2(1-\lambda)^2} + \frac{K^{3/2}L^4}{T^2} + \frac{L^2}{T^2(1-\lambda)^2} + \frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$

Remark 1 DFedSAM achieves a linear speedup in the general non-convex setting as long as $T \ge K$, which is significantly better than the state-of-the-art (SOTA) bounds such as $\mathcal{O}\big(\frac{1}{\sqrt T} + \frac{\sigma_g^2}{\sqrt T} + \frac{\sigma_g^2+B^2}{(1-\lambda)^2 T^{3/2}}\big)$ in Sun et al. (2022). Note that the bound becomes tighter as $\lambda$ decreases; it is dominated by the $\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}$ terms when $\lambda \le 1 - \frac{K^{1/4}}{T^{3/2}}$, whereas it degrades as $\beta$ increases.

Corollary 4.1.2 Let $Q>1$, let $T$ be large enough, and let $\eta = \mathcal{O}(1/L\sqrt{KT})$. Under the assumptions of Theorem 4.1 and with perturbation amplitude $\rho = \mathcal{O}(1/\sqrt{T})$, the convergence rate of DFedSAM-MGS satisfies:
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}} + \frac{K(\beta^2+\sigma_l^2)}{T} + \frac{K^{3/2}L^4}{T^2} + \Phi(\lambda,m,Q)\,\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).$$

Remark 2 The impact of the network topology ($1-\lambda$) can be alleviated as $Q$ increases: when the number of clients $m$ is large enough, the term $\frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}}$ of $\Phi(\lambda,m,Q)$ can be neglected, and the term $\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$ is close to 1. This means that with the proposed $Q$-step gossip procedure, model consistency among clients can be improved, so that DFL over various communication topologies can be roughly viewed as CFL. Thus, the negative effect of the gradient variances $\sigma_l^2$ and $\beta^2$ can be reduced, especially on sparse network topologies where $\lambda$ is close to 1. In practice, a suitable number of steps $Q>1$ makes it possible to achieve a communication-accuracy trade-off in the DFL setting.
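As a quick numeric illustration of Definition 1, Eq. (6), and Remark 2, the sketch below computes $\lambda$ for an assumed ring gossip matrix, checks the contraction $\|\mathbf{W}^t-\mathbf{P}\|_2 \le \lambda^t$ used in the analysis (Lemma D.1), and evaluates $\Phi(\lambda,m,Q)$ for several $Q$; the topology and sizes are illustrative assumptions.

```python
# Illustrating Definition 1 and Eq. (6) on an assumed ring gossip matrix.
import numpy as np

m = 20
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

eigs = np.sort(np.abs(np.linalg.eigvalsh(W)))
lam = eigs[-2]                    # max{|lambda_2|, |lambda_m|}; eigs[-1] = 1
print(f"lambda = {lam:.4f}, spectral gap 1 - lambda = {1 - lam:.4f}")

P = np.ones((m, m)) / m           # P = 11^T / m
for t in (1, 10, 50):             # contraction ||W^t - P||_2 <= lambda^t
    opnorm = np.linalg.norm(np.linalg.matrix_power(W, t) - P, 2)
    print(f"t={t}: ||W^t - P|| = {opnorm:.4f} <= lambda^t = {lam**t:.4f}")

def phi(lam, m, Q):               # Eq. (6)
    return (lam**Q + 1) / ((1 - lam)**2 * m**(2 * (Q - 1))) \
         + (lam**Q + 1) / (1 - lam**Q)**2

for Q in (1, 2, 4, 8):
    print(f"Q={Q}: Phi = {phi(lam, m, Q):.2f}")
```

Running this shows $\Phi$ shrinking rapidly with $Q$ on the sparse ring, which is exactly the mechanism Remark 2 attributes to MGS.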
5 EXPERIMENTS

In this section, we evaluate the efficacy of our algorithms against six baselines from the CFL and DFL settings. In addition, we conduct several experiments to verify the impact of the communication network topology analyzed in Section 4, along with several ablation studies.

5.1 EXPERIMENT SETUP

Dataset and Data Partition. The efficacy of the proposed DFedSAM and DFedSAM-MGS is evaluated on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) in both IID and non-IID settings. Specifically, the Dirichlet partition (Hsu et al., 2019) is used to simulate non-IID data across federated clients: the local data of each client is obtained by splitting the total dataset according to label ratios sampled from the Dirichlet distribution Dir(α), with parameters α = 0.3 and α = 0.6.

Baselines. The compared baselines cover several SOTA methods in both the CFL and DFL settings. Centralized baselines include FedAvg (McMahan et al., 2017) and FedSAM (Qu et al., 2022). For the decentralized setting, D-PSGD (Lian et al., 2017), DFedAvg and DFedAvgM (Sun et al., 2022), along with DisPFL (Dai et al., 2022), are used for comparison.

Implementation Details. The total number of clients is set to 100, of which 10% participate in communication each round. Specifically, all clients perform local updates in the decentralized methods, whereas only the participating clients perform local updates in the centralized methods. We initialize the local learning rate to 0.1 with a decay rate of 0.998 per communication round for all experiments. For the CIFAR-10 and CIFAR-100 datasets, VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) are adopted as the backbones in each client, respectively. The number of communication rounds is set to 1000 in the experiments comparing with all baselines and studying topology-aware performance. In addition, all ablation studies are conducted on CIFAR-10 with the number of communication rounds set to 500.

Communication Configurations. For a fair comparison between the decentralized and centralized settings, we apply a dynamic, time-varying connection topology for the decentralized methods to ensure that, in each round, the number of connections is no more than that with a central server. Specifically, the number of clients communicating with their neighbors is controlled to keep the communication volume consistent with the centralized methods. Following earlier works, communication complexity is measured by the number of local communications. Further details of the experimental setup are presented in Appendix B due to space limitations.

5.2 PERFORMANCE EVALUATION

Performance with compared baselines. In Table 1 and Figure 3, we evaluate DFedSAM and DFedSAM-MGS (Q = 4) with ρ = 0.01 on the CIFAR-10 and CIFAR-100 datasets in both settings against all baselines from CFL and DFL. These results show that our proposed algorithms outperform all other decentralized methods on these two datasets, and that DFedSAM-MGS roughly matches the performance of the SOTA centralized baseline FedSAM on CIFAR-10 and CIFAR-100. Specifically, the training and test accuracies are presented in Table 1 to show the generalization performance.
We can see that the improvement in generalization over all other baselines is more evident on CIFAR-10 at the same communication round. For instance, the gap between training accuracy and test accuracy on CIFAR-10 in the IID setting is 14.14% for DFedSAM, 13.22% for DFedSAM-MGS, 15.29% for FedAvg, and 15% for FedSAM. This means our algorithms achieve generalization comparable to the centralized baselines.

Impact of non-IID levels. Table 1 shows that our algorithms are robust to different levels of data heterogeneity. The local data distributions are set to various heterogeneity levels, from IID to Dirichlet 0.6 and Dirichlet 0.3; higher heterogeneity makes training the global/consensus model more difficult. For instance, on CIFAR-10, as the non-IID level increases, DFedSAM-MGS achieves better generalization than FedSAM, as the gaps between training and test accuracy for DFedSAM-MGS {15.27%, 14.51%, 13.22%} are lower than those for FedSAM {17.26%, 14.85%, 15%}. Similarly, the gaps for DFedSAM {17.37%, 15.06%, 14.10%} are lower than those for FedAvg {17.60%, 15.82%, 15.27%}. These observations confirm that our algorithms are more robust than the baselines under various degrees of data heterogeneity.

5.3 TOPOLOGY-AWARE PERFORMANCE

We verify the influence of various communication topologies and gossip averaging steps on DFedSAM and DFedSAM-MGS. Different from the comparison between CFL and DFL in Section 5.2, here we only verify the key properties of the DFL methods. Thus, the communication type is set to "Complete", i.e., each client can communicate with its neighbors in the same communication round. The sparsity of connectivity λ is ordered as: ring > grid > exponential > fully-connected in DFL. From Table 2, our algorithms are clearly superior to all decentralized baselines across the various communication networks, which is consistent with our theoretical findings. Specifically, compared with DFedAvgM, DFedSAM and DFedSAM-MGS significantly improve performance in the ring topology by 0.64% and 8.0%, respectively. Meanwhile, the performance of DFedSAM-MGS across topologies is consistently better than that of the other methods. This observation confirms that multiple gossip steps can alleviate the impact of the network topology even with a small Q = 4. Therefore, our algorithms achieve better generalization and model consistency across various communication topologies.

5.4 ABLATION STUDY

Below, we verify the influence of each component and hyper-parameter in DFedSAM with Q = 1. All ablation studies are conducted on the "exponential" topology except the study of Q, which covers three topologies; the communication type is "Complete", as in Section 5.3.

Consensus/gossip steps Q. In Figure 4, we investigate the balance between learning performance and communication complexity in three network topologies. We choose multiple steps Q = {1, 2, 3, 4} and study the balance points under the different steps in the three network topologies in Figure 4 (a), (b), and (c). As the number of local communications increases, model performance improves, but the communication complexity increases as well. The balance point differs across topologies but shows the same tendency, and a relatively larger Q brings better performance for a given communication complexity. Therefore, we select Q = 4 in DFedSAM-MGS under 1000 communication rounds for a better balance.

Local iteration steps K.
A larger number of local iteration steps K can aid convergence, as shown with theoretical guarantees in previous DFL work (Sun et al., 2022). To investigate the acceleration in T obtained by adopting a larger K, we fix the total batch size and vary the number of local training epochs. As shown in Figure 5 (a), our algorithms accelerate convergence when a larger K is adopted, consistent with the theoretical results (see Section 4).

Number of participating clients m. As shown in Figure 5 (b), we compare the performance for different numbers of participating clients m = {50, 100, 150} with the same hyper-parameters. Compared with the larger m = 150, the smaller m = {50, 100} achieves better convergence and test accuracy, since each client holds more local data; this indirectly makes the local models generalize better, thereby improving the performance of the consensus model.

Perturbation radius ρ. The perturbation radius ρ affects performance because the added perturbation accumulates as the number of communication rounds T increases; it induces a trade-off between test accuracy and generalization. To select a proper value for our algorithms, we conduct experiments with various perturbation radii from the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0} in Figure 5 (c). With ρ = 0.01, we achieve better convergence and performance. Meanwhile, ρ = O(1/√T) enables a linear speedup in convergence (see Section 4).

The effectiveness of SAM and MGS. To validate the effectiveness of SAM and MGS, we compare four algorithms — FedSAM, DFedAvg, DFedSAM, and DFedSAM-MGS — under the same setting. From Table 3, DFedSAM achieves a performance improvement and better generalization than DFedAvg when the SAM optimizer is adopted. DFedSAM-MGS further boosts performance compared with FedSAM, as MGS also makes the models consistent among clients and accelerates convergence.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we focus on the model inconsistency challenge caused by heterogeneous data and the network connectivity of the communication topology in DFL, and we address this challenge from the perspective of model generalization. We propose two DFL frameworks, DFedSAM and DFedSAM-MGS, with better model consistency among clients. DFedSAM adopts SAM to obtain a flat model at each client, thereby improving generalization via a consensus/global flat model. Meanwhile, DFedSAM-MGS further improves model consistency on top of DFedSAM by accelerating the aggregation of local flat models, reaching a better trade-off between learning performance and communication complexity. On the theoretical side, we confirm a linear speedup and unify the impacts of the gradient perturbation in SAM, the local communications in MGS, and the network topology, along with data homogeneity, on the convergence rate in DFL. Furthermore, empirical results verify the superiority of our approaches. For future work, we will continue towards understanding the effect of SAM and MGS for more desirable generalization in DFL.

B MORE DETAILS ON ALGORITHM IMPLEMENTATION

B.1 DATASETS AND BACKBONES

CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) are labeled subsets of the 80 Million Tiny Images dataset, each consisting of 60,000 input images. CIFAR-100 has finer labeling, with 100 unique labels, in comparison to the 10 unique labels of CIFAR-10.
VGG-11 is used as the backbone for CIFAR-10, and ResNet-18 is chosen for CIFAR-100, where the batch-norm layers are replaced by group-norm layers due to the detrimental effect of batch normalization.

B.2 MORE DETAILS ABOUT BASELINES

FedAvg is the classic FL method that trains a global model in parallel with a central server via vanilla weighted averaging. FedSAM applies SAM as the local optimizer to improve model generalization. For the decentralized schemes, D-PSGD is a classic decentralized parallel SGD method for reaching a consensus model¹; DFedAvg is the decentralized FedAvg; and DFedAvgM trains models on each client using SGD with momentum on top of DFedAvg, performing multiple local training steps before each communication. Furthermore, DisPFL is a novel personalized FL framework in a decentralized communication protocol that uses a decentralized sparse training technique; for a fair comparison, we report the global accuracy of DisPFL.

¹In this work, we focus on decentralized FL, which refers to local training with multiple local iterations, whereas decentralized learning/training focuses on one-step local training. For instance, D-PSGD (Lian et al., 2017) is a decentralized training algorithm that uses one-step SGD to train local models in each communication round.

B.3 HYPERPARAMETERS

The total number of clients is set to 100, and each client is restricted to at most 10 neighbors in the decentralized setting. For the centralized setting, the client sampling ratio is set to 0.1. The local learning rate is set to 0.1, decayed by 0.998 after each communication round, for all experiments, and the global learning rate is set to 1.0 for the centralized methods. The batch size is fixed to 128 for all experiments. We run 1000 global communication rounds for CIFAR-10 and CIFAR-100. The SGD optimizer is used with weight decay 0.0005 for all baselines except FedSAM. The perturbation radius ρ = 0.01 is selected for our algorithms (DFedSAM and DFedSAM-MGS with Q = 1) via grid search over the set {0.01, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}, and the value of ρ in FedSAM follows Qu et al. (2022). Following Sun et al. (2022), local optimization uses momentum 0.9 for DFedAvgM. For the local iterations K, the number of local training epochs in D-PSGD is set to 1, while that for all other methods is set to 5.

B.4 COMMUNICATION CONFIGURATIONS

As noted in Dai et al. (2022), decentralized methods actually generate far more communication volume than centralized methods, because each client in the network topology transmits its local information to its neighbors, whereas in the centralized setting only the sampled clients upload their parameter updates to a central server. Therefore, for a fair comparison, we use a dynamic, time-varying connection topology for the decentralized methods in Section 5.2: we restrict each client to communicate with at most 10 neighbors, randomly sampled without replacement from all clients, and only the 10 clients who are neighbors of each other perform one gossip step to exchange their local information in DFedSAM. In DFedSAM-MGS, the gossip step is performed Q times: 10×Q clients sampled without replacement each perform one gossip step to exchange their local information.
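To make the non-IID data generation described in Section 5.1 and above concrete, here is a minimal sketch of a Dir(α) label partition in the spirit of Hsu et al. (2019); the function and variable names are illustrative and not taken from the authors' code.

```python
# A minimal sketch of a Dirichlet label partition, assuming CIFAR-10-like
# labels; one Dir(alpha) draw per class decides how that class's samples
# are shared among the m clients. Smaller alpha means more heterogeneity.
import numpy as np

def dirichlet_partition(labels, m, alpha, seed=0):
    """Split sample indices among m clients with label ratios ~ Dir(alpha)."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(m)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(m))       # per-class shares
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            clients[i].extend(part.tolist())
    return clients

labels = np.random.default_rng(1).integers(0, 10, size=50_000)
parts = dirichlet_partition(labels, m=100, alpha=0.3)
print("first five client sizes:", [len(p) for p in parts[:5]])
```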
C ALGORITHMS

D PRELIMINARY LEMMAS

Lemma D.1 [Lemma 4, Lian et al. (2017)] For any $t\in\mathbb{Z}^+$, the mixing matrix $\mathbf{W}\in\mathbb{R}^{m\times m}$ satisfies $\|\mathbf{W}^t - \mathbf{P}\|_{op} \le \lambda^t$, where $\lambda := \max\{|\lambda_2(\mathbf{W})|, |\lambda_m(\mathbf{W})|\}$ and, for a matrix $\mathbf{A}$, we denote its spectral norm by $\|\mathbf{A}\|_{op}$. Furthermore, $\mathbf{1} := [1,1,\dots,1]^\top \in \mathbb{R}^m$ and $\mathbf{P} := \frac{\mathbf{1}\mathbf{1}^\top}{m} \in \mathbb{R}^{m\times m}$. In [Proposition 1, Nedic & Ozdaglar (2009)] it is also proved that $\|\mathbf{W}^t-\mathbf{P}\|_{op} \le C\lambda^t$ for some $C>0$ that depends on the matrix.

Lemma D.2 [Lemma A.5, Qu et al. (2022)] (Bounded global variance of $\|\nabla f_i(x+\delta_i)-\nabla f(x+\delta)\|^2$.) As an immediate implication of Assumptions 1 and 2, the variance between local and global gradients with perturbation can be bounded as follows:
$$\|\nabla f_i(x+\delta_i) - \nabla f(x+\delta)\|^2 \le 3\sigma_g^2 + 6L^2\rho^2.$$

Lemma D.3 [Lemma B.1, Qu et al. (2022)] (Bounded $\mathbb{E}_\delta$ of DFedSAM.) For any learning rate satisfying $\eta \le \frac{1}{4KL}$, the updates of DFedSAM have drift due to $\delta_{i,k}-\delta$:
$$\mathbb{E}_\delta = \frac{1}{m}\sum_{i=1}^m \mathbb{E}[\|\delta_{i,k}-\delta\|^2] \le 2K^2\beta^2\eta^2\rho^2,$$
where $\delta = \rho\frac{\nabla F(x)}{\|\nabla F(x)\|}$ and $\delta_{i,k} = \rho\frac{\nabla F_i(y^{t,k};\xi)}{\|\nabla F_i(y^{t,k};\xi)\|}$.

Algorithm 1: DFedSAM and DFedSAM-MGS
Input: total number of clients $m$, total number of communication rounds $T$, number of consensus (gossip) steps per iteration $Q$, learning rate $\eta$, and total number of local iterations $K$.
Output: the consensus model $x^T$ after the final communication of all clients with their neighbors.
1: Initialization: randomly initialize each client's model $x^0(i)$.
2: for $t = 0$ to $T-1$ do
3:   for node $i$ in parallel do
4:     Set $y^{t,0}(i) \leftarrow x^t(i)$, $y^{t,-1}(i) = y^{t,0}(i)$
5:     for $k = 0$ to $K-1$ do
6:       Sample a batch of local data $\xi_i$ and compute the local gradient $g^{t,k}(i) = \nabla F_i(y^{t,k};\xi_i)$
7:       $\tilde{g}^{t,k}(i) = \nabla F_i(y^{t,k}+\delta(y^{t,k});\xi_i)$ with $\delta(y^{t,k}) = \rho\, g^{t,k}/\|g^{t,k}\|_2$
8:       $y^{t,k+1}(i) = y^{t,k}(i) - \eta\,\tilde{g}^{t,k}(i)$
9:     end for
10:    $z^t(i) \leftarrow y^{t,K}(i)$
11:    Receive neighbors' models $z^t(l)$ from the neighborhood set $\mathcal{N}(i)$ with adjacency matrix $\mathbf{W}$.
12:    $x^{t+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^t(l)$
13:    for $q = 0$ to $Q-1$ do
14:      $x^{t,q+1}(i) = \sum_{l\in\mathcal{N}(i)} w_{i,l}\, z^{t,q}(l)$ (with $z^{t,0}(i) = z^t(i)$) (exchange messages)
15:      $z^{t,q+1}(i) = x^{t,q+1}(i)$ (local gossip update)
16:    end for
17:    $x^{t+1}(i) = x^{t,Q}(i)$
18:  end for
19: end for

E CONVERGENCE ANALYSIS FOR DFEDSAM AND DFEDSAM-MGS

In the following, we present the proofs of the convergence results for DFedSAM and DFedSAM-MGS, respectively. The proof of Theorem 4.1 is given in Sections E.2 and E.3, for $Q=1$ and $Q>1$, respectively.

E.1 PRELIMINARY LEMMAS

Lemma E.1 Assume that Assumptions 1 and 2 hold, and that $(y^{t,k}(i)+\delta_{i,k})_{t\ge0}$ and $(x^{t,k}(i))_{t\ge0}$ are generated by DFedSAM for all $i\in\{1,2,\dots,m\}$. For any learning rate $\eta \le \frac{1}{10KL}$, it follows that
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k}) - x^t(i)\|^2 \le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \frac{2K\rho^2}{2K-1}, \tag{8}$$
where $0\le k\le K-1$.

Proof. For any local iteration $k\in\{0,1,\dots,K-1\}$ of any node $i$, it holds that
$$\frac{1}{m}\sum_{i=1}^m \mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 = \frac{1}{m}\sum_{i=1}^m \mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k} - \eta\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - x^t(i)\|^2$$
$$= \frac{1}{m}\sum_{i=1}^m \mathbb{E}\big\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i) + \delta_{i,k}-\delta_{i,k-1} - \eta\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - \nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1}) - \nabla f_i(x^t) + \nabla f_i(x^t) - \nabla f(x^t) + \nabla f(x^t)\big)\big\|^2 \le I + II,$$
where
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + \mathbb{E}\|\delta_{i,k}-\delta_{i,k-1}\|^2\big)$$
and
$$II = \frac{2K}{m}\sum_{i=1}^m \mathbb{E}\big\|{-\eta}\big(\nabla F_i(y^{t,k-1}(i)+\delta_{i,k-1}) - \nabla F_i(y^{t,k-1}) + \nabla F_i(y^{t,k-1}) - \nabla f_i(x^t) + \nabla f_i(x^t) - \nabla f(x^t) + \nabla f(x^t)\big)\big\|^2.$$
With Lemma D.3 and the assumptions, these terms are bounded as
$$I = \Big(1+\frac{1}{2K-1}\Big)\frac{1}{m}\sum_{i=1}^m\big(\mathbb{E}\|y^{t,k-1}(i)+\delta_{i,k-1}-x^t(i)\|^2 + 2K^2L^2\eta^2\rho^4\big)$$
and
$$II = \frac{8K\eta^2}{m}\sum_{i=1}^m\big(L^2\rho^2+\sigma_l^2+\sigma_g^2+\mathbb{E}\|\nabla f(x^t)\|^2\big),$$
where $\mathbb{E}\|\delta_{i,k-1}\|^2 \le \rho^2$.
Thus, we obtain
$$\mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \Big(1+\frac{1}{2K-1}\Big)\mathbb{E}\|(y^{t,k-1}(i)+\delta_{i,k-1})-x^t(i)\|^2 + \frac{4K^3L^2\eta^2\rho^4}{2K-1} + 8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2) + \frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2,$$
where $\mathbb{E}\|\nabla f(x^t)\|^2 = \frac{1}{m}\sum_{i=1}^m \mathbb{E}\|\nabla f(x^t(i))\|^2$, $f(x) := \frac{1}{m}\sum_{i=1}^m f_i(x)$, and $\nabla f_i(x^t) := \nabla f(x^t(i))$. Unrolling the recursion from $\tau=0$ to $k$ yields
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|(y^{t,k}(i)+\delta_{i,k})-x^t(i)\|^2 \le \frac{1}{m}\sum_{i=1}^m\sum_{\tau=1}^{K-1}\Big(1+\frac{1}{2K-1}\Big)^\tau\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\Big) + \Big(1+\frac{1}{2K-1}\Big)\rho^2$$
$$\le 2K\Big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\Big)+\frac{2K\rho^2}{2K-1}.$$
This completes the proof.

Lemma E.2 Assume that Assumption 3 holds and that the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge0}$ be generated by DFedSAM for all $i\in\{1,2,\dots,m\}$ and any learning rate $\eta>0$. Then the following bound holds:
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le \frac{C_2\eta^2}{(1-\lambda)^2},$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$.

Proof. Following [Lemma 4, Sun et al. (2022)], we denote $\mathbf{Z}^t := [z^t(1), z^t(2), \dots, z^t(m)]^\top \in \mathbb{R}^{m\times d}$. With this notation, we have
$$\mathbf{X}^{t+1} = \mathbf{W}\mathbf{Z}^t = \mathbf{W}\mathbf{X}^t - \zeta^t, \tag{9}$$
where $\zeta^t := \mathbf{W}\mathbf{X}^t - \mathbf{W}\mathbf{Z}^t$. The iteration equation (9) can be rewritten as
$$\mathbf{X}^t = \mathbf{W}^t\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{t-1-j}\zeta^j. \tag{10}$$
Obviously, it follows that
$$\mathbf{W}\mathbf{P} = \mathbf{P}\mathbf{W} = \mathbf{P}. \tag{11}$$
According to Lemma D.1, it holds that $\|\mathbf{W}^t-\mathbf{P}\| \le \lambda^t$. Multiplying both sides of equation (10) by $\mathbf{P}$ and using equation (11), we get
$$\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j, \tag{12}$$
where we used the initialization $\mathbf{X}^0 = \mathbf{0}$. Then, we are led to
$$\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{t-1-j})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{t-1-j}\|_{op}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\zeta^j\|. \tag{13}$$
By the Cauchy inequality,
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \mathbb{E}\Big(\sum_{j=0}^{t-1}\lambda^{\frac{t-1-j}{2}}\cdot\lambda^{\frac{t-1-j}{2}}\|\zeta^j\|\Big)^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\mathbb{E}\|\zeta^j\|^2\Big).$$
Direct calculation gives
$$\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}\|^2\cdot\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.$$
With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \frac{m\,C_2\,\eta^2}{(1-\lambda)^2}.$$
The fact that
$$\mathbf{X}^t-\mathbf{P}\mathbf{X}^t = \begin{pmatrix} x^t(1)-\bar{x}^t \\ x^t(2)-\bar{x}^t \\ \vdots \\ x^t(m)-\bar{x}^t \end{pmatrix}$$
then proves the result.

Lemma E.3 Assume that Assumption 3 holds and the number of local iterations $K$ is large enough. Let $\{x^t(i)\}_{t\ge0}$ be generated by DFedSAM-MGS for all $i\in\{1,2,\dots,m\}$ and any learning rate $\eta>0$. Then the following bound holds:
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}[\|x^{t,k}(i)-\bar{x}^t\|^2] \le C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2 m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$.

Proof. Following [Lemma 4, Sun et al. (2022)] and Lemma E.2, we denote $\mathbf{Z}^t := [z^t(1),\dots,z^t(m)]^\top \in \mathbb{R}^{m\times d}$. With this notation, we have
$$\mathbf{X}^{t+1} = \mathbf{W}^Q\mathbf{Z}^t = \mathbf{W}^Q\mathbf{X}^t - \zeta^t, \tag{14}$$
where $\zeta^t := \mathbf{W}^Q\mathbf{X}^t - \mathbf{W}^Q\mathbf{Z}^t$. The iteration equation (14) can be rewritten as
$$\mathbf{X}^t = (\mathbf{W}^t)^Q\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{W}^{(t-1-j)Q}\zeta^j. \tag{15}$$
Obviously, it follows that
$$\mathbf{W}^Q\mathbf{P} = \mathbf{P}\mathbf{W}^Q = \mathbf{P}. \tag{16}$$
According to Lemma D.1, it holds that $\|\mathbf{W}^t-\mathbf{P}\| \le \lambda^t$. Multiplying both sides of equation (15) by $\mathbf{P}$ and using equation (16), we get
$$\mathbf{P}\mathbf{X}^t = \mathbf{P}\mathbf{X}^0 - \sum_{j=0}^{t-1}\mathbf{P}\zeta^j = -\sum_{j=0}^{t-1}\mathbf{P}\zeta^j,$$
where we used the initialization $\mathbf{X}^0 = \mathbf{0}$. Then, we are led to
$$\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\| = \Big\|\sum_{j=0}^{t-1}(\mathbf{P}-\mathbf{W}^{Q(t-1-j)})\zeta^j\Big\| \le \sum_{j=0}^{t-1}\|\mathbf{P}-\mathbf{W}^{Q(t-1-j)}\|_{op}\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{(t-1-j)(Q-1)}\|\|\zeta^j\| \le \sum_{j=0}^{t-1}\lambda^{t-1-j}\|\mathbf{W}^{t-1-j}-\mathbf{P}+\mathbf{P}\|^{Q-1}\|\zeta^j\|.$$
By the Cauchy inequality,
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le \Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\lambda^{t-1-j}\big(\lambda^{(Q-1)(t-1-j)}+\tfrac{1}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big)$$
$$\le \Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\Big)\Big(\sum_{j=0}^{t-1}\big(\lambda^{Q(t-1-j)}+\tfrac{\lambda^{t-1-j}}{m^{Q-1}}\big)\mathbb{E}\|\zeta^j\|^2\Big) \le \mathbb{E}\|\zeta^j\|^2\Big(\frac{1}{(1-\lambda)^2 m^{2(Q-1)}}+\frac{1}{(1-\lambda^Q)^2}\Big).$$
Direct calculation gives
$$\mathbb{E}\|\zeta^j\|^2 \le \|\mathbf{W}^Q\|^2\cdot\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le \|\mathbf{W}-\mathbf{P}+\mathbf{P}\|^{2Q}\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le (\|\mathbf{W}-\mathbf{P}\|^{2Q}+\|\mathbf{P}\|^{2Q})\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le (\lambda^Q+1)\,\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2.$$
With Lemma E.1 and Assumption 3, for any $j$,
$$\mathbb{E}\|\mathbf{X}^j-\mathbf{Z}^j\|^2 \le m\Big(2K\big(\tfrac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\tfrac{2K\rho^2}{\eta^2(2K-1)}\Big)\eta^2.$$
Thus, we get
$$\mathbb{E}\|\mathbf{X}^t-\mathbf{P}\mathbf{X}^t\|^2 \le m\,C_2\,\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big),$$
where $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. The fact that
$$\mathbf{X}^t-\mathbf{P}\mathbf{X}^t = \begin{pmatrix} x^t(1)-\bar{x}^t \\ x^t(2)-\bar{x}^t \\ \vdots \\ x^t(m)-\bar{x}^t \end{pmatrix}$$
then proves the result.

E.2 PROOF OF CONVERGENCE RESULTS FOR DFEDSAM

Noting that $\mathbf{P}\mathbf{X}^{t+1} = \mathbf{P}\mathbf{W}\mathbf{Z}^t = \mathbf{P}\mathbf{Z}^t$, we also have $\bar{x}^{t+1} = \bar{z}^t$, where $\mathbf{X} := [x(1),\dots,x(m)]^\top \in \mathbb{R}^{m\times d}$ and $\mathbf{Z} := [z(1),\dots,z(m)]^\top \in \mathbb{R}^{m\times d}$. Thus we have
$$\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t+\bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \tag{17}$$
where $\bar{z}^t := \frac{1}{m}\sum_{i=1}^m z^t(i)$ and $\bar{x}^t := \frac{1}{m}\sum_{i=1}^m x^t(i)$. In each node,
$$\bar{z}^t-\bar{x}^t = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(y^{t,k+1}(i)-y^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\tilde{g}^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\nabla F_i(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi)\big)}{m}. \tag{18}$$
The Lipschitz continuity of $\nabla f$ gives
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) + \mathbb{E}\langle\nabla f(\bar{x}^t),\, \bar{z}^t-\bar{x}^t\rangle + \frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2, \tag{19}$$
where we used (17). Using (18):
$$\mathbb{E}\langle K\nabla f(\bar{x}^t), (\bar{z}^t-\bar{x}^t)/K\rangle = \mathbb{E}\langle K\nabla f(\bar{x}^t), -\eta\nabla f(\bar{x}^t)+\eta\nabla f(\bar{x}^t)+(\bar{z}^t-\bar{x}^t)/K\rangle$$
$$= -\eta K\,\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \mathbb{E}\Big\langle K\nabla f(\bar{x}^t), \frac{\eta}{mK}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\nabla f(x^t(i))-\nabla F_i(y^{t,k}+\delta_{i,k};\xi)\big)\Big\rangle$$
$$\overset{a)}{\le} \eta\,\mathbb{E}\|\nabla f(\bar{x}^t)\|\cdot\Big\|\frac{L}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}(x^t(i)-y^{t,k}-\delta_{i,k})\Big\|$$
$$\overset{b)}{\le} \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K^2}{2K}\Big(2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\big)+\frac{2K\rho^2}{2K-1}\Big), \tag{20}$$
where a) uses the Lipschitz continuity and b) uses Lemma E.1. Meanwhile, we can get
$$\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2 = \frac{L}{2}\mathbb{E}\|\bar{z}^t-\bar{x}^t\|^2 \le \frac{L}{2}\frac{1}{m}\sum_{i=1}^m\|y^{t,K}(i)-x^t(i)\|^2 \le \frac{L}{2}\mathbb{E}\Big\|{-\eta}\frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i(y^{t,k}+\delta_{i,k};\xi)}{m}\Big\|^2 \overset{a)}{\le} \frac{L}{2}\eta^2K^2B^2, \tag{21}$$
where a) uses Assumption 3. Furthermore, (19) can be written as
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2K}{2}C_1 + \frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 + \frac{L}{2}\eta^2K^2B^2, \tag{22}$$
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. Thus, with Lemma E.2, we get
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2 \le \frac{2L^2\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2}{m} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \overset{a)}{\le} \frac{2L^2C_2\eta^2}{(1-\lambda)^2} + 2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \tag{23}$$
where a) uses Lemma E.2 and $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. Therefore, (19) becomes
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t) - \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + 8\eta^3K^2L^2\Big(\frac{2L^2C_2\eta^2}{(1-\lambda)^2}+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\Big)$$
$$\le \mathbb{E}f(\bar{x}^t) + \Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 + \frac{\eta L^2KC_1}{2} + \frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}. \tag{24}$$
Summing inequality (24) from $t=1$ to $T$, we obtain
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 \le \frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)} + \frac{\frac{\eta L^2KC_1}{2}+\frac{16C_2\eta^5K^2L^4}{(1-\lambda)^2}}{\eta K-32\eta^3K^2L^2}.$$
If we choose the learning rate $\eta = \mathcal{O}(1/L\sqrt{KT})$ with $\eta \le \frac{1}{10KL}$, and the number of communication rounds $T$ is large enough, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K^{3/2}L^2\rho^4}{T}+\frac{K(L^4\rho^2+\sigma_g^2+\sigma_l^2)}{T}+\frac{L^2\rho^2}{T(1-\lambda)^2}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^2L^2\rho^4}{T^2(1-\lambda)^2}+\frac{K(L^2\rho^2+\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho = \mathcal{O}(1/\sqrt{T})$, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\sigma_g^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\sigma_g^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
Under Definition 2, we get
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2 = \mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\beta^2+\sigma_l^2)}{T}+\frac{KB^2}{T^2(1-\lambda)^2}+\frac{K^{3/2}L^4}{T^2}+\frac{L^2}{T^2(1-\lambda)^2}+\frac{K(\beta^2+\sigma_l^2)}{T^2(1-\lambda)^2}\Big).$$
This completes the proof.
E.3 PROOF OF CONVERGENCE RESULTS FOR DFEDSAM-MGS

With multiple gossip steps, we write $x^0$ and $z^0$ as $x$ and $z$, respectively; meanwhile, $\mathbf{Z}^{t,Q} = \mathbf{Z}^{t,0}\mathbf{W}^Q = \mathbf{Z}^t\mathbf{W}^Q$. Noting that $\mathbf{P}\mathbf{X}^{t+1} = \mathbf{P}\mathbf{W}^Q\mathbf{Z}^t = \mathbf{P}\mathbf{Z}^t$ ($Q>1$), we also have $\bar{x}^{t+1}=\bar{z}^t$, where $\mathbf{X} := [x(1),\dots,x(m)]^\top\in\mathbb{R}^{m\times d}$ and $\mathbf{Z} := [z(1),\dots,z(m)]^\top\in\mathbb{R}^{m\times d}$. Thus we have
$$\bar{x}^{t+1}-\bar{x}^t = \bar{x}^{t+1}-\bar{z}^t+\bar{z}^t-\bar{x}^t = \bar{z}^t-\bar{x}^t, \tag{25}$$
where $\bar{z}^t := \frac{1}{m}\sum_{i=1}^m z^t(i)$ and $\bar{x}^t := \frac{1}{m}\sum_{i=1}^m x^t(i)$. In each node,
$$\bar{z}^t-\bar{x}^t = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(y^{t,k+1}(i)-y^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\tilde{g}^{t,k}(i)\big)}{m} = \frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\big(-\eta\nabla F_i(y^{t,k}+\rho\nabla F_i(y^{t,k};\xi)/\|\nabla F_i(y^{t,k};\xi)\|_2;\xi)\big)}{m}. \tag{26}$$
The Lipschitz continuity of $\nabla f$ gives
$$\mathbb{E}f(\bar{x}^{t+1}) \le \mathbb{E}f(\bar{x}^t)+\mathbb{E}\langle\nabla f(\bar{x}^t),\bar{z}^t-\bar{x}^t\rangle+\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2, \tag{27}$$
where we used (25). Using (26):
$$\mathbb{E}\langle K\nabla f(\bar{x}^t),(\bar{z}^t-\bar{x}^t)/K\rangle = \mathbb{E}\langle K\nabla f(\bar{x}^t), -\eta\nabla f(\bar{x}^t)+\eta\nabla f(\bar{x}^t)+(\bar{z}^t-\bar{x}^t)/K\rangle$$
$$= -\eta K\,\mathbb{E}\|\nabla f(\bar{x}^t)\|^2+\mathbb{E}\Big\langle K\nabla f(\bar{x}^t),\frac{\eta}{mK}\sum_{i=1}^m\sum_{k=0}^{K-1}\big(\nabla f(x^t(i))-\nabla F_i(y^{t,k}+\delta_{i,k};\xi)\big)\Big\rangle$$
$$\overset{a)}{\le}\eta\,\mathbb{E}\|\nabla f(\bar{x}^t)\|\cdot\Big\|\frac{L}{m}\sum_{i=1}^m\sum_{k=0}^{K-1}(x^t(i)-y^{t,k}-\delta_{i,k})\Big\|$$
$$\overset{b)}{\le} \frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2+\frac{\eta L^2K^2}{2K}\Big(2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)+\frac{8K\eta^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\big)+\frac{2K\rho^2}{2K-1}\Big), \tag{28}$$
where a) uses the Lipschitz continuity and b) uses Lemma E.1. Meanwhile, we can get
$$\frac{L}{2}\mathbb{E}\|\bar{x}^{t+1}-\bar{x}^t\|^2=\frac{L}{2}\mathbb{E}\|\bar{z}^t-\bar{x}^t\|^2\le\frac{L}{2}\frac{1}{m}\sum_{i=1}^m\|y^{t,K}(i)-x^t(i)\|^2\le\frac{L}{2}\mathbb{E}\Big\|{-\eta}\frac{\sum_{i=1}^m\sum_{k=0}^{K-1}\nabla F_i(y^{t,k}+\delta_{i,k};\xi)}{m}\Big\|^2\overset{a)}{\le}\frac{L}{2}\eta^2K^2B^2, \tag{29}$$
where a) uses Assumption 3. Furthermore, (27) can be written as
$$\mathbb{E}f(\bar{x}^{t+1})\le\mathbb{E}f(\bar{x}^t)-\frac{\eta K}{2}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2+\frac{\eta L^2K}{2}C_1+\frac{8\eta^3K^2L^2}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2+\frac{L}{2}\eta^2K^2B^2, \tag{30}$$
where $C_1 = 2K\big(\frac{4K^3L^2\eta^2\rho^4}{2K-1}+8K\eta^2(L^2\rho^2+\sigma_g^2+\sigma_l^2)\big)+\frac{2K\rho^2}{2K-1}$. Thus, with Lemma E.3, we get
$$\frac{1}{m}\sum_{i=1}^m\mathbb{E}\|\nabla f(x^t(i))\|^2\le\frac{2L^2\sum_{i=1}^m\|x^t(i)-\bar{x}^t\|^2}{m}+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\overset{a)}{\le}2L^2C_2\eta^2\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big)+2\mathbb{E}\|\nabla f(\bar{x}^t)\|^2, \tag{31}$$
where a) uses Lemma E.3 and $C_2 = 2K\big(\frac{4K^3L^2\rho^4}{2K-1}+8K(L^2\rho^2+\sigma_g^2+\sigma_l^2)+8KB^2\big)+\frac{2K\rho^2}{\eta^2(2K-1)}$. Therefore, (27) becomes
$$\mathbb{E}f(\bar{x}^{t+1})\le\mathbb{E}f(\bar{x}^t)+\Big(16\eta^3K^2L^2-\frac{\eta K}{2}\Big)\mathbb{E}\|\nabla f(\bar{x}^t)\|^2+\frac{\eta L^2KC_1}{2}+16C_2\eta^5K^2L^4\Big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\Big). \tag{32}$$
Summing inequality (32) from $t=1$ to $T$, we obtain
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2\le\frac{2f(\bar{x}^1)-2f^*}{T(\eta K-32\eta^3K^2L^2)}+\frac{\frac{\eta L^2KC_1}{2}+16C_2\eta^5K^2L^4\big(\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}\big)}{\eta K-32\eta^3K^2L^2}.$$
If we choose the learning rate $\eta=\mathcal{O}(1/L\sqrt{KT})$ with $\eta\le\frac{1}{10KL}$, and the number of communication rounds $T$ is large enough, then with Definition 2 and $\Phi(\lambda,m,Q)=\frac{\lambda^Q+1}{(1-\lambda)^2m^{2(Q-1)}}+\frac{\lambda^Q+1}{(1-\lambda^Q)^2}$, the key quantity tying the spectral gap, the number of clients, and the number of gossip steps to the convergence bound, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2=\mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K^{3/2}L^2\rho^4}{T}+\frac{K(L^4\rho^2+\beta^2+\sigma_l^2)}{T}+\Phi(\lambda,m,Q)\Big(\frac{L^2\rho^2}{T}+\frac{K^2L^2\rho^4}{T^2}+\frac{K(L^2\rho^2+\beta^2+\sigma_l^2+B^2)}{T^2}\Big)\Big).$$
When the perturbation amplitude $\rho$ is proportional to the learning rate, e.g., $\rho=\mathcal{O}(1/\sqrt{T})$, we have
$$\min_{1\le t\le T}\mathbb{E}\|\nabla f(\bar{x}^t)\|^2=\mathcal{O}\Big(\frac{f(\bar{x}^1)-f^*}{\sqrt{KT}}+\frac{K(\beta^2+\sigma_l^2)}{T}+\frac{K^{3/2}L^4}{T^2}+\Phi(\lambda,m,Q)\frac{L^2+K(\beta^2+\sigma_l^2+B^2)}{T^2}\Big).$$
This completes the proof.
1. What is the focus of the paper, and what are the contributions of the proposed approach?
2. What are the strengths and weaknesses of the paper regarding its assumptions, theoretical analysis, and comparison with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns raised by the reviewer that the author should address in future work?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper combines sharpness-aware minimization (SAM) with decentralized SGD. It establishes the convergence rate and demonstrates that the combination can improve generalization performance. However, the novelty is incremental, some assumptions are too strong, and it misses some important literature.

Strengths And Weaknesses
Pros:
- The proposed algorithm could improve the generalization performance of decentralized federated SGD.
- This paper provided the theoretical convergence rate.
Cons:
- The novelty is incremental. It is not surprising that this combination could improve the generalization performance. In addition, it is not difficult to combine existing theoretical analyses to establish the convergence rate of this algorithm.
- This paper assumes that the gradient is bounded. This is too strong; FedAvg and FedSAM do not need this assumption. With this assumption, the theoretical analysis becomes much easier.
- The first decentralized FedAvg is https://arxiv.org/abs/1910.09126. However, the authors totally ignore this important literature. This is NOT acceptable!
- Why does the MGS version even outperform FedSAM for CIFAR10? This should be discussed clearly.

Clarity, Quality, Novelty And Reproducibility
Clarity: Good.
Quality: Neutral.
Novelty: Incremental.
Reproducibility: No source code. It is unclear if it is reproducible.
ICLR
Title
Object-Contrastive Networks: Unsupervised Object Representations

Abstract
Discovering objects and their attributes is of great importance for autonomous agents to effectively operate in human environments. This task is particularly challenging due to the ubiquitousness of objects and all their nuances in perceptual and semantic detail. In this paper we present an unsupervised approach for learning disentangled representations of objects entirely from unlabeled monocular videos. These continuous representations are not biased by or limited to a discrete set of labels determined by human labelers. The proposed representation is trained with a metric learning loss, where nearest neighbors in embedding space are pulled together while being pushed against other objects. We show these unsupervised embeddings allow robots to discover object attributes that generalize to previously unseen environments. We quantitatively evaluate performance on a large-scale synthetic dataset with 12k object models, as well as on a real dataset collected by a robot, and show that our unsupervised object understanding generalizes to previously unseen objects. Specifically, we demonstrate the effectiveness of our approach on robotic manipulation tasks, such as pointing at and grasping of objects. An interesting and perhaps surprising finding in this approach is that, given a limited set of objects, object correspondences will naturally emerge when using metric learning without requiring explicit positive pairs. Videos of robotic experiments are available at sites.google.com/view/object-contrastive-networks

Figure 1: Object-Contrastive Networks (OCN): by attracting embedding nearest neighbors and repulsing others using metric learning, continuous object representations naturally emerge. In a video collected by a robot looking at a table from different viewpoints, objects are extracted from random pairs of frames. Given two lists of objects, each object is attracted to its closest neighbor while being pushed against all other objects. Noisy repulsion may occur when the same object across viewpoints is not matched against itself. However, the learning still converges towards disentangled and semantically meaningful object representations, which can be useful in autonomous robotics applications.
1 INTRODUCTION

The ability to autonomously train to recognize and differentiate previously unseen objects, as well as to infer general properties and attributes, is an important skill for robotic agents. Increased autonomy leads to robustness, one of the main challenges real-world robotics faces. It also makes scaling up data collection practical. Additionally, removing human supervision from the loop has the potential to enable learning richer and less biased continuous representations than those supervised by a limited set of discrete labels. Unbiased representations can prove useful in unknown future environments different from the ones seen during supervision, a typical challenge for robotics.

In this work we present an unsupervised method that learns representations that disentangle perceptual and semantic object attributes such as class, function, and color. We automatically acquire training data by capturing videos with a real robot; a robot base moves around a table to capture objects in various arrangements. Assuming a pre-existing objectness detector, we extract objects from random frames within the same scene containing the same objects, and let the metric learning system decide how to assign positive and negative pairs of embeddings. Representations that generalize across objects naturally emerge despite not being given ground-truth matches. Unlike previous methods, we abstain from employing additional self-supervisory training signals such as tracking or depth. The only inputs to the system are monocular videos. This simplifies data collection and allows our embedding to integrate into existing end-to-end learning pipelines. We demonstrate that a trained Object-Contrastive Network (OCN) embedding allows us to reliably identify object instances based on their visual features such as color and shape. Moreover, we show that objects are also organized along their semantic or functional properties. For example, a cup might not only be associated with other cups, but also with other containers like bowls or vases.

The key contributions of this work are: (1) an unsupervised algorithm for learning representations of objects (naturally encoding attributes like class, color, texture, and function) which generalize to previously unseen objects; (2) showing that monocular videos are sufficient to contrast similar and dissimilar object pairs naturally, without requiring explicit correspondences; (3) demonstrating the autonomy of the system by using a robot from data collection through to tasks such as pointing at and grasping objects similar to ones presented to it.

2 RELATED WORK

Object discovery from visual media. Identifying objects and their attributes has a long history in computer vision and robotics (Tuytelaars et al., 2009).
Traditionally, approaches focus on identifying regions in unlabeled images to locate and identify objects (Sivic et al., 2005; Russell et al., 2006; Arora et al., 2007; Fritz & Schiele, 2008; Kim et al., 2008). Discovering objects based on the notion of 'objectness' instead of specific categories enables more principled strategies for object recognition (Uijlings et al., 2013; Romea et al., 2011). Several methods address the challenge of discovering, tracking, and segmenting objects in videos based on supervised (Wang et al., 2014) or unsupervised (Kwak et al., 2015; Schulter et al., 2013; Haller & Leordeanu, 2017) techniques. The spatio-temporal signal present in videos can also reveal additional cues that help identify objects (Wang & Gupta, 2015; Jain et al., 2017). In the context of robotics, methods also exploit depth to discover objects and their properties (Mishra et al., 2012; Karpathy et al., 2013).

Many recent approaches exploit the effectiveness of convolutional deep neural networks to detect objects (Ren et al., 2015; Liu et al., 2016; Lin et al., 2017) and even provide pixel-precise segmentations (He et al., 2017). While the detection efficiency of these methods is unparalleled, they rely on supervised training procedures and therefore require large amounts of labeled data. Self-supervised methods for the discovery of object attributes mostly focus on learning representations by identifying features in multi-view imagery (DeTone et al., 2017; Lin et al., 2015) and videos (Wang & Gupta, 2015), or by stabilizing the training signal through domain randomization (Doersch et al., 2015; Zhang et al., 2018). Some methods not only operate on RGB images but also employ additional signals, such as depth (Florence et al., 2018; Pot et al., 2018) or egomotion (Agrawal et al., 2015), to self-supervise the learning process. It has been recognized that contrasting observations from multiple views can provide a view-invariant training signal that allows differentiating even subtle cues as relevant features, which can be leveraged for instance categorization and imitation learning tasks (Sermanet et al., 2018).

Unsupervised representation learning. Unlike supervised learning techniques, unsupervised methods focus on learning representations directly from data to enable image retrieval (Paulin et al., 2015), transfer learning (Zhang et al., 2017a), image denoising (Vincent et al., 2008), and other tasks (Dumoulin et al., 2016; Kumar et al., 2015). Using data from multiple modalities, such as imagery of multiple views (Sermanet et al., 2018), sound (Owens et al., 2016; Aytar et al., 2016), or other sensory inputs (Dehzangi et al., 2017), along with the often inherent spatio-temporal coherence (Doersch et al., 2015; Radford et al., 2015), can facilitate the unsupervised learning of representations and embeddings. For example, Zagoruyko & Komodakis (2015) explore multiple architectures to compare image patches, and Pathak et al. (2017b) exploit temporal coherence to learn object-centric features. Gao et al. (2016) rely on spatial proximity of detected objects to determine attraction in metric learning; OCN operates similarly, but does not require spatial proximity for positive matches. It does, however, take advantage of the likely presence of the same object in any pair of frames within a video. Zhang et al.
(2017b) also take a similar unsupervised metric learning approach for tracking specific faces, using tracking trajectories and heuristics to match trajectories and obtain richer positive matches. While our approach is simpler in that it does not require tracking or 3D matching, it could be augmented with such extra matching signals.

In robotics and other real-world scenarios, where agents are often only able to obtain sparse signals from their environment, self-learned embeddings can serve as an efficient representation to optimize learning objectives. Pathak et al. (2017a) introduce a curiosity-driven approach to obtain a reward signal from visual inputs; other methods use similar strategies to enable grasping (Pinto & Gupta, 2016) and manipulation tasks (Sermanet et al., 2018), or to be pose- and background-agnostic (Held et al., 2015). Mitash et al. (2017) jointly use 3D synthetic and real data to learn a representation for detecting objects and estimating their pose, even in cluttered configurations. Hickson et al. (2018) learn semantic classes of objects in videos by integrating clustering into a convolutional neural network.

3 UNSUPERVISED LEARNING OF OBJECT REPRESENTATIONS

We propose an unsupervised approach to the problem of object understanding for multiple reasons: (1) to make data collection simple and scalable; (2) to increase autonomy in robotics by continuously learning about new objects without assistance; (3) to discover continuous representations that are richer and more subtle than the discrete set of attributes that humans might provide as supervision, which may not match new future environments. All these objectives require a method that can learn about objects and differentiate them without supervision. To bootstrap our learning signal we leverage two assumptions: (1) we are provided with a general objectness model so that we can attend to individual objects in a scene; (2) during an observation sequence the same objects will be present in most frames (this can later be relaxed by using an approximate estimation of ego-motion). Given a video sequence around a scene containing multiple objects, we randomly select two frames I and Î in the sequence and detect the objects present in each image. Let us assume N and M objects are detected in images I and Î, respectively. Each of the n-th and m-th cropped object images is embedded in a low-dimensional space, organized by a metric learning objective. Unlike traditional methods, which rely on human-provided similarity labels to drive metric learning, we use a self-supervised approach to mine synthetic similarity labels.

3.1 OBJECTNESS DETECTION

To detect objects, we use Faster R-CNN (Ren et al., 2015) trained on the COCO object detection dataset (Lin et al., 2014). Faster R-CNN detects objects in two stages: first, generate class-agnostic bounding-box proposals for all objects present in an image (Fig. 1); second, associate the detected objects with class labels. We use OCN to discover object attributes and only rely on the first, objectness stage of Faster R-CNN to detect object candidates. Examples of detected objects are illustrated in Fig. 1.

3.2 METRIC LOSS FOR OBJECT ATTRIBUTE DISENTANGLEMENT

We denote a cropped object image by $x\in\mathcal{X}$ and compute its embedding via a convolutional neural network $f(x): \mathcal{X}\to\mathcal{K}$. Note that for simplicity we may omit $x$ from $f(x)$, while $f$ inherits all superscripts and subscripts. Let us consider a pair of images $I$ and $\hat{I}$ taken at random from the same contiguous observation sequence.
Let us also assume there are $N$ and $M$ objects detected in $I$ and $\hat{I}$, respectively. We denote the $n$-th and $m$-th objects in the images $I$ and $\hat{I}$ as $x_n^I$ and $x_m^{\hat{I}}$, respectively. We compute the distance matrix $D_{n,m} = \sqrt{(f_n^I - f_m^{\hat{I}})^2}$ for $n\in 1..N$, $m\in 1..M$. For every embedded anchor $f_n^I$, $n\in 1..N$, we select the embedding with minimum distance as its positive: $f_{n+}^{\hat{I}} = \arg\min_m(D_{n,m})$.

Given a batch of (anchor, positive) pairs $\{(x_i, x_i^+)\}_{i=1}^N$, the n-pairs loss is defined as follows (Sohn, 2016):
$$L_{N\text{-pair}}\big(\{(x_i,x_i^+)\}_{i=1}^N; f\big) = \frac{1}{N}\sum_{i=1}^N \log\Big(1+\sum_{i\ne j}\exp\big(f_i^\top f_j^+ - f_i^\top f_i^+\big)\Big).$$
The loss learns embeddings that identify ground-truth anchor-positive pairs among all other anchor-negative pairs in the same batch. It is formulated as a sum of softmax multi-class cross-entropy losses over a batch, encouraging the inner product of each anchor-positive pair $(f_i, f_i^+)$ to be larger than that of all anchor-negative pairs $(f_i, f_{j\ne i}^+)$. The final OCN training objective over an observation sequence is the sum of n-pairs losses over all pairs of individual frames:
$$L_{OCN} = L_{N\text{-pair}}\big(\{(x_n^I, x_{n+}^{\hat{I}})\}_{n=1}^N; f\big) + L_{N\text{-pair}}\big(\{(x_m^{\hat{I}}, x_{m+}^{I})\}_{m=1}^M; f\big).$$

3.3 ARCHITECTURE

OCN takes a standard ResNet50 architecture up to the global-pool layer and initializes it with ImageNet pre-trained weights. We then add three additional ResNet convolutional layers and a fully connected layer to produce the final embedding. The network is trained with the n-pairs metric learning loss, as discussed in Sec. 3.2. Our architecture is depicted in Fig. 1 and Fig. 2.

3.4 OBJECT-CENTRIC EMBEDDING SPACE

By using multiple views of the same scene and attending to individual objects, our architecture allows us to differentiate subtle variations of object attributes. Observing the same object across different views facilitates learning invariance to scene-specific properties such as scale, occlusion, lighting, and background, as each frame exhibits variations of these factors. The network minimizes the metric loss by representing object-centric attributes such as shape, function, texture, or color, as these are consistent for (anchor, positive) pairs and dissimilar for (anchor, negative) pairs.

3.5 WHY SHOULD THIS WORK?

One might expect this approach to only work if it is given a good enough initialization, so that matching the same object across multiple frames is more likely than random chance. While ImageNet pretraining certainly helps convergence, as shown in Table 1, it is not a requirement for learning meaningful representations, as shown in Sec. 8. When all weights are random and no labels are provided, what can drive the network to consistently converge to meaningful embeddings? We hypothesize that the co-occurrence of the following factors drives this convergence: (1) objects often remain visually similar to themselves across multiple viewpoints; (2) limiting the possible object matches within a scene increases the likelihood of a positive match; (3) the low dimensionality of the embedding space forces the model to generalize by sharing abstract features across objects; (4) the smoothness of embeddings learned with metric learning facilitates convergence when supervision signals are weak; and (5) occasional true-positive matches (even by chance) yield more coherent gradients than false-positive matches, which produce inconsistent gradients that dissipate as noise, leading over time to an acceleration of consistent gradients and a stronger initial supervision signal.
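To make the objective in Sec. 3.2 concrete, here is a minimal PyTorch sketch of the OCN loss: positives are mined as nearest neighbors across the two frames, and the n-pairs loss is applied in both directions. The tensor shapes and toy embeddings are illustrative assumptions; the equivalence used below is that the n-pairs loss equals cross-entropy over the inner-product logits with target j = i.

```python
# A minimal sketch of the OCN objective, assuming f_I (N, K) and f_Ihat
# (M, K) are embeddings of the objects cropped from frames I and Ihat.
import torch

def npairs_loss(anchors, positives):
    # logits[i, j] = f_i^T f_j^+ ; cross-entropy with target j = i is
    # equivalent to the log(1 + sum_{j != i} exp(...)) form in the text
    logits = anchors @ positives.t()
    targets = torch.arange(anchors.size(0))
    return torch.nn.functional.cross_entropy(logits, targets)

def ocn_loss(f_I, f_Ihat):
    D = torch.cdist(f_I, f_Ihat)           # distance matrix D_{n,m}
    pos_for_I = f_Ihat[D.argmin(dim=1)]    # nearest neighbor per anchor in I
    pos_for_Ihat = f_I[D.argmin(dim=0)]    # and reciprocally for Ihat
    return npairs_loss(f_I, pos_for_I) + npairs_loss(f_Ihat, pos_for_Ihat)

f_I, f_Ihat = torch.randn(7, 32), torch.randn(9, 32)  # toy embeddings
print(ocn_loss(f_I, f_Ihat))
```

Note that no ground-truth correspondences enter the computation: the argmin over the distance matrix is the only source of "labels", which is exactly the self-supervised mining described above.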
4 DATA COLLECTION, HYPERPARAMETERS, AND TRAINING

To evaluate the effectiveness of OCN embeddings, we generated two datasets of real and synthetic objects. For the (unlabeled) real data we arrange objects in table-top configurations and capture frames from continuous camera trajectories. The (labeled) synthetic data is generated from renderings of 3D objects in a similar configuration. Details about the datasets are reported in Table 4.

4.1 SYNTHETIC DATA GENERATION

To generate diverse object configurations we use 12 categories (airplane, car, chair, cup, bottle, bowl, guitar, keyboard, lamp, monitor, radio, vase) from ModelNet (Wu et al., 2015). The selected categories cover around 8k of the 12k models available in the entire dataset. ModelNet provides the object models in an 80-20 split for training and testing. We further split the testing data into models for test and validation, resulting in an 80-10-10 split for training, validation, and test. For validation purposes, we manually assign each model labels describing the semantic and functional properties of the object, including the labels 'class', 'has lid', 'has wheels', 'has buttons', 'has flat surface', 'has legs', 'is container', 'is sittable', and 'is device'. Fig. 9 shows an example scene.

We randomly define the number of objects (up to 20) in a scene and select half of the objects from two randomly selected categories. The other half is selected from the remaining object categories. We further randomly define the positions of the objects and vary their sizes, both such that they do not intersect. Additionally, each object is assigned one of eight predefined colors. We use this setup to generate 100K scenes for training and 50K scenes each for validation and testing. For each scene we generate a number (n = 10) of views and select random combinations of two views for detecting objects. In total we produce 400K views (200K pairs) for training and 50K views (25K pairs) each for validation and testing.

4.2 AUTOMATIC REAL DATA COLLECTION

Our real-object dataset consists of 187 unique object instances spread across six categories: 'balls', 'bottles & cans', 'bowls', 'cups & mugs', 'glasses', and 'plates'. Table 5 provides details about the number of objects in each category and how they are split between training, test, and validation. Note that we distinguish between the cups & mugs and glasses categories based on whether the object has a handle. Fig. 3 provides a snapshot of our entire object dataset. We automated the real-world data collection by using a mobile robot equipped with an HD camera (Fig. 8). In each run, we place about 10 objects on the table and then trigger the capture process by having the robot rotate around the table by 90 degrees (see Fig. 8). On average, 130 images are captured per run. We select random pairs of frames from each trajectory during training of the OCN. We performed 345, 109, and 122 runs of data collection for the training, test, and validation datasets, respectively. In total, 43,084 images were captured for OCN training, and 15,061 and 16,385 were used for test and validation, respectively.

4.3 TRAINING

An OCN is trained on two views of the same synthetic or real scene. We randomly pick two frames of a camera trajectory around the scene to ensure the same objects are present; the frames are selected based on their time stamps so that they are as far apart as possible. We set the n-pairs regularization to λ = 0.002. The distance matrix $D_{n,m}$ (Sec.
3.2) is constructed based on the individually detected objects for each of the two frames. The object detector was not specifically trained on any of our datasets. Furthermore, we only used scenes where at least 5 objects were detected in each frame. Operating on fewer objects results in a noisier training signal, as the n-pair loss cannot create enough meaningful (anchor, negative)-pairs for contrasting them with the (anchor, positive)-pair. As the number of detected objects per view varies, we reciprocally use both frames to find anchors and their corresponding positives, as discussed in Sec. 3.2. Across our experiments, the OCN training converged after 600k-1.2M iterations.

5 EXPERIMENTAL RESULTS

To evaluate the effectiveness of an OCN embedding as a representation for object attribute disentanglement, we performed experiments on a large-scale synthetic dataset and on two robotic tasks, pointing and grasping, in a real-world environment. Moreover, the experiments are designed to directly showcase the usefulness of OCN in real robotics applications.

5.1 ATTRIBUTE CLASSIFICATION

One way to evaluate the quality of unsupervised embeddings is to train attribute classifiers on top of the embedding using labeled data. Note, however, that this may not entirely reflect the quality of an embedding, because it only measures a small, discrete number of attributes, while an embedding may capture a larger number of more continuous, abstract concepts.

Classifiers: We consider two types of classifiers to be applied on top of existing embeddings in this experiment: linear and nearest-neighbor classifiers. The linear classifier consists of a single linear layer going from the embedding space to the 1-hot encoding of the target label for each attribute. It is trained with a range of learning rates, and the best model is retained for each attribute. The nearest-neighbor classifier consists of embedding an entire ‘training’ set and, for each embedding of the evaluation set, assigning to it the labels of the nearest sample from the training set. Nearest-neighbor classification is not a perfect approach, because it does not necessarily measure generalization as linear classification does, and results may vary significantly depending on how many nearest neighbors are available. It is also less subject to data imbalances. We still report this metric to get a sense of its performance, because in an unsupervised inference context the models might be used in a nearest-neighbor fashion (e.g., as in Sec. 5.3).

Baselines: We compare multiple baselines in Table 1 and Table 6. The ‘Softmax’ baseline refers to the model described in Fig. 2, i.e., the exact same architecture as for OCN, except that the model is trained with a supervised cross-entropy/softmax loss. The ‘ResNet50’ baseline refers to using the unmodified outputs of the ResNet50 model (He et al., 2016) (2048-dimensional vectors) as embeddings and training a nearest-neighbor classifier as defined above. We consider the ‘Softmax’ and ‘ResNet50’ baselines as the lower and upper error bounds for standard approaches to a classification task. The ‘OCN supervised’ baseline refers to the exact same OCN training described in Fig. 2, except that the positive matches are provided rather than discovered automatically. ‘OCN supervised’ represents the metric learning upper bound for classification. Finally, we indicate as a reference the error rates for random classification.
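As a concrete illustration of the nearest-neighbor classifier just described, the hedged sketch below embeds a labeled ‘training’ set and assigns each evaluation embedding the attribute label of its single closest training sample; names and dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def nn_classify(train_emb, train_labels, eval_emb):
    """Label each evaluation embedding with the attribute label of its
    nearest (L2) training embedding."""
    d2 = ((eval_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(axis=-1)
    return train_labels[d2.argmin(axis=1)]

# Toy usage: a binary attribute (e.g. 'has lid') over 32-dim embeddings.
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(100, 32))
train_labels = rng.integers(0, 2, size=100)
eval_emb, eval_labels = rng.normal(size=(20, 32)), rng.integers(0, 2, size=20)
error_rate = (nn_classify(train_emb, train_labels, eval_emb) != eval_labels).mean()
```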
Results: We quantitatively evaluate our unsupervised models against supervised baselines on the labeled synthetic datasets (train and test) introduced in Sec. 4. Note that there is no overlap in object instances between the training and the evaluation set. The first take-away is that unsupervised performance closely follows its supervised baseline when trained with metric learning. As expected, the cross-entropy/softmax approach performs best and establishes the error lower bound, while the ResNet50 baseline provides the upper-bound results. Note that the dataset is heavily imbalanced for the binary attributes reported in Table 1 and Table 6 and requires balancing for linear classification. In Fig. 4 and Sec. 9 and 11, we show qualitative results of nearest-neighbor objects discovered by OCN.

5.2 INSTANCE DETECTION AND TRACKING

An OCN embedding can be used to match instances of the same object across multiple views and over time. This is illustrated in Fig. 5, where objects of one view (anchors) are matched against the objects of another view. We can find the nearest neighbors (positives) in the scene through the OCN embedding space, as well as the closest matching objects with descending similarity (negatives). We report the quality of finding corresponding objects in Table 2 and differentiate between attribute errors, which indicate a mismatch of specific attributes (e.g., a blue cup is associated with a red cup), and object matching errors, which measure when objects are not of the same instance. An OCN embedding significantly improves detecting object instances across multiple views.

5.3 ROBOT EXPERIMENTS

Pointing: We evaluate the performance of OCN on a robotic pointing task (Fig. 6). The robot has to point to the object that it deems most similar to the object directly in front of it on the small table. The objects on the big table are randomly selected from each of the six object categories (Table 5). We consider two sets of these target objects. The quantitative experiment in Table 3 uses three query objects per category and is run three times for each combination of query and target objects (3 × 2 × 18 = 108 experiments performed). The full set of experiments for one of the three runs is illustrated in Fig. 15. Table 3 quantifies OCN performance on this experiment. We report on errors related to the ‘class’ and ‘container’ attributes (note that the other ten attributes described in Sec. 4.1 are not relevant to the real object dataset). While the trained OCN model performs well on most categories, it has particular difficulty with the object classes ‘cups & mugs’ and ‘glasses’. These categories are generally mistaken for the category ‘bowls’. As a result, the network performs much better on the attribute ‘container’, since the three categories ‘bowls’, ‘bottles & cans’, and ‘glasses’ all share the same attribute.

Grasping: We qualitatively evaluate the OCN performance on a grasping task in an environment unseen during training. First, a person holds and shows an object to the robot, then the robot picks up the most similar object from a set of objects on a table (see Fig. 7). In this experiment, we focus on evaluating OCN with objects that have either a similar shape or a similar color attribute. Using OCN, the robot can successfully identify and grasp the object that has the closest color and shape attributes to the query object. Note that the training data did not contain objects held by hand.
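The retrieval step underlying both the cross-view matching of Sec. 5.2 and the pointing/grasping selections of Sec. 5.3 reduces to ranking candidates by embedding distance. The short sketch below shows this ranking under assumed names, as an illustration rather than the authors' pipeline.

```python
import numpy as np

def rank_candidates(query_emb, candidate_embs):
    """Return candidate indices sorted by ascending L2 distance to the
    query embedding: index 0 is the best match (the 'positive')."""
    d = np.linalg.norm(candidate_embs - query_emb, axis=1)
    return np.argsort(d)

# Toy usage: point at the table object most similar to the query object.
rng = np.random.default_rng(2)
query = rng.normal(size=64)                 # embedding of the query object
table_objects = rng.normal(size=(6, 64))    # embeddings of detected objects
print("point at object", rank_candidates(query, table_objects)[0])
```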
6 CONCLUSION

We introduced a novel unsupervised representation learning algorithm that allows us to differentiate object attributes, such as color, shape, and function. An OCN embedding is learned by contrasting the features of objects captured from two frames of monocular camera trajectories around table-top indoor environments. We specifically attend to individual objects by detecting object bounding boxes and leverage a metric learning loss to disentangle subtle variations of object attributes. The resulting embedding space allows us to organize objects along multiple dimensions and serves as a representation for robotic learning. We show that an OCN embedding can be used in real robotic tasks such as grasping and pointing, where it is important to differentiate the visual and semantic attributes of individual object instances. Finally, we show that an OCN can be trained efficiently from RGB videos that are automatically obtained from a real robotic agent.

7 DATASET

8 RANDOM WEIGHTS

We find in Table 6 that models that are not pretrained with ImageNet supervision perform worse but still yield reasonable results. This indicates that the approach does not rely on a good initialization to bootstrap itself without labels. Even more surprisingly, when freezing the weights of the ResNet50 base of the model to its random initialization, results degrade but still remain far below chance, as well as below the ‘ResNet50 embeddings’ baseline. Obtaining reasonable results with random weights has already been observed in prior work such as Jarrett et al. (2009), Saxe et al. (2011), and Ulyanov et al. (2017).

9 ADDITIONAL QUALITATIVE RESULTS

10 ADDITIONAL ROBOTIC POINTING RESULTS

11 ADDITIONAL NEAREST NEIGHBOR RESULTS
1. What is the main contribution of the paper regarding unsupervised feature learning? 2. What are the strengths and weaknesses of the proposed approach compared to current unsupervised feature learning methods? 3. How does the reviewer assess the experimental setup and comparisons with other works? 4. Are there any missing baselines or experimental conditions that should be considered? 5. What are the limitations and potential issues with the proposed method, especially in real-world scenarios? 6. How does the reviewer evaluate the clarity and consistency of the paper's language and terminology?
Review
Review Summary: This paper aims to learn a feature representation from video sequences that capture a scene from different viewpoints. The proposed approach is tested on a table-top scenario with synthetic and real scenes. Pairs of frames from the captured video are selected, then a pre-trained object detector finds object proposal bounding boxes. The positive pairs are found using nearest-neighbor matching between cropped bounding boxes from two random frames, and finally a network is trained using an n-pair contrastive loss function, hence the name object-contrastive network.

Pros: Unsupervised feature learning is an interesting area in computer vision and ML, and this paper tries to tackle this problem for objects seen from different viewpoints.

Cons:

-Not enough technical novelty compared to current unsupervised feature learning approaches. The proposed approach uses two random frames from a sequence, uses nearest-neighbor matching based on some pre-trained network, and computes the n-pair contrastive loss of Sohn 2016 on top.

-The experimental setup for the real experiment is very simplistic, and objects with similar appearance and colors appear in both train and test sets, which is far from a random selection of object instances and categories into test and train (plates, bowls, and cups with similar colors and similar shapes). Why is the proposed method not trained and tested on tasks similar to [a]? There could be a similar setup: training on the videos of [a] and testing on an object detection task on videos of natural scenes (rather than a particular indoor table-top scenario). [a] is a relevant baseline which is missed.

[a] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.

Missing Baselines:

-Comparing the learned embedding feature against features from (a) ResNet50 pre-trained on ImageNet or (b) ResNet50 pre-trained on COCO, for both NN and linear setups, is missed. Only the ResNet50 embedding pre-trained on ImageNet is shown in Table 1.

-Comparing against previous self-supervised methods that use tracking is missed.

-Comparing against previous methods that learn embeddings based on delta time and/or camera location is missed.

Issues in experimental setups:

-Section 5.2, with the title “Instance Detection and Tracking”, only shows three qualitative examples of instance retrieval and ranking for a pair of views. There is no standard quantitative result for instance tracking in this section, such as accuracy of trajectories over time. Also, the details of the experimental setup for Table 2 are missing: number of instances, pairs, real or synthetic, etc.

-Object appearance is not necessarily similar across different views. In the current experimental setup (with less than 90 degrees of viewpoint difference) the appearance can be similar. It is not clear whether the proposed approach can work with more variation in camera viewpoint.

-There are many hand-designed assumptions in the experimental setup which make it unnatural in a real scenario. For instance, the number of objects in all frames is approximately equal, and all objects are visible in all frames. In a real scenario, objects can appear and disappear from the camera viewpoint based on the camera's field of view, which can cause drastic changes in the nearest-neighbor setup of the method formulation. What happens if, in the extreme case, there is no object in one of the frames when finding the pairs? Would it then match with some random patches?
-On page 5, Section 4.1, it is mentioned that “We randomly define the number of objects (up to 20) in a scene and select half of the objects from two randomly selected categories. The other half is selected from the remaining object categories.” What is the logic behind this choice? The reason for this setup is not explained in the paper.

-Throughout the paper the words “attribute”, “class”, “semantic”, and “label” are used in a manner that is confusing with respect to the current literature. For example, “…naturally encoding attributes like class, color, texture and function…” in the Introduction section. Class is not an object attribute.
ICLR
Title Object-Contrastive Networks: Unsupervised Object Representations

Abstract Discovering objects and their attributes is of great importance for autonomous agents to effectively operate in human environments. This task is particularly challenging due to the ubiquitousness of objects and all their nuances in perceptual and semantic detail. In this paper we present an unsupervised approach for learning disentangled representations of objects entirely from unlabeled monocular videos. These continuous representations are not biased by or limited to a discrete set of labels determined by human labelers. The proposed representation is trained with a metric learning loss, where nearest neighbors in embedding space are pulled together while being pushed against other objects. We show these unsupervised embeddings allow robots to discover object attributes that generalize to previously unseen environments. We quantitatively evaluate performance on a large-scale synthetic dataset with 12k object models, as well as on a real dataset collected by a robot, and show that our unsupervised object understanding generalizes to previously unseen objects. Specifically, we demonstrate the effectiveness of our approach on robotic manipulation tasks, such as pointing at and grasping objects. An interesting and perhaps surprising finding in this approach is that given a limited set of objects, object correspondences will naturally emerge when using metric learning without requiring explicit positive pairs. Videos of robotic experiments are available at sites.google.com/view/object-contrastive-networks

Figure 1: Object-Contrastive Networks (OCN): by attracting embedding nearest neighbors and repulsing others using metric learning, continuous object representations naturally emerge. In a video collected by a robot looking at a table from different viewpoints, objects are extracted from random pairs of frames. Given two lists of objects, each object is attracted to its closest neighbor while being pushed against all other objects. Noisy repulsion may occur when the same object across viewpoints is not matched against itself. However, the learning still converges towards disentangled and semantically meaningful object representations, which can be useful in autonomous robotics applications.

1 INTRODUCTION

The ability to autonomously train to recognize and differentiate previously unseen objects, as well as infer general properties and attributes, is an important skill for robotic agents. Increased autonomy leads to robustness, one of the main challenges real-world robotics faces. It also renders scaling up data collection practical. Additionally, removing human supervision from the loop has the potential to enable learning richer and less biased continuous representations than ones supervised by a limited set of discrete labels. Unbiased representations can prove useful in unknown future environments different from the ones seen during supervision, a typical challenge for robotics.

In this work we present an unsupervised method that learns representations that disentangle perceptual and semantic object attributes such as class, function, and color. We automatically acquire training data by capturing videos with a real robot; a robot base moves around a table to capture objects in various arrangements. Assuming a pre-existing objectness detector, we extract objects from random frames within the same scene containing the same objects, and let the metric learning system decide how to assign positive and negative pairs of embeddings. Representations that generalize across objects naturally emerge despite not being given ground-truth matches. Unlike previous methods, we abstain from employing additional self-supervisory training signals such as tracking or depth. The only inputs to the system are monocular videos. This simplifies data collection and allows our embedding to integrate into existing end-to-end learning pipelines. We demonstrate that a trained Object-Contrastive Network (OCN) embedding allows us to reliably identify object instances based on their visual features such as color and shape. Moreover, we show that objects are also organized along their semantic or functional properties. For example, a cup might not only be associated with other cups, but also with other containers like bowls or vases.

The key contributions of this work are: (1) an unsupervised algorithm for learning representations of objects (naturally encoding attributes like class, color, texture, and function) which generalize to previously unseen objects; (2) showing that monocular videos are sufficient to contrast similar and dissimilar object pairs naturally, without requiring explicit correspondences; (3) demonstrating the autonomy of the system, using a robot from data collection to tasks such as pointing at and grasping objects similar to ones presented to it.

2 RELATED WORK

Object discovery from visual media. Identifying objects and their attributes has a long history in computer vision and robotics (Tuytelaars et al., 2009).
Traditionally, approaches focus on identifying regions in unlabeled images to locate and identify objects (Sivic et al., 2005; Russell et al., 2006; Arora et al., 2007; Fritz & Schiele, 2008; Kim et al., 2008). Discovering objects based on the notion of ‘objectness’ instead of specific categories enables more principled strategies for object recognition (Uijlings et al., 2013; Romea et al., 2011). Several methods address the challenge of discovering, tracking, and segmenting objects in videos based on supervised (Wang et al., 2014) or unsupervised (Kwak et al., 2015; Schulter et al., 2013; Haller & Leordeanu, 2017) techniques. The spatio-temporal signal present in videos can also help to reveal additional cues that allow objects to be identified (Wang & Gupta, 2015; Jain et al., 2017). In the context of robotics, methods also focus on exploiting depth to discover objects and their properties (Mishra et al., 2012; Karpathy et al., 2013). Many recent approaches exploit the effectiveness of convolutional deep neural networks to detect objects (Ren et al., 2015; Liu et al., 2016; Lin et al., 2017) and to even provide pixel-precise segmentations (He et al., 2017). While the detection efficiency of these methods is unparalleled, they rely on supervised training procedures and therefore require large amounts of labeled data. Self-supervised methods for the discovery of object attributes mostly focus on learning representations by identifying features in multi-view imagery (DeTone et al., 2017; Lin et al., 2015) and videos (Wang & Gupta, 2015), or by stabilizing the training signal through domain randomization (Doersch et al., 2015; Zhang et al., 2018). Some methods not only operate on RGB images but also employ additional signals, such as depth (Florence et al., 2018; Pot et al., 2018) or egomotion (Agrawal et al., 2015), to self-supervise the learning process. It has been recognized that contrasting observations from multiple views can provide a view-invariant training signal, allowing even subtle cues to be differentiated as relevant features that can be leveraged for instance categorization and imitation learning tasks (Sermanet et al., 2018).

Unsupervised representation learning. Unlike supervised learning techniques, unsupervised methods focus on learning representations directly from data to enable image retrieval (Paulin et al., 2015), transfer learning (Zhang et al., 2017a), image denoising (Vincent et al., 2008), and other tasks (Dumoulin et al., 2016; Kumar et al., 2015). Using data from multiple modalities, such as imagery of multiple views (Sermanet et al., 2018), sound (Owens et al., 2016; Aytar et al., 2016), or other sensory inputs (Dehzangi et al., 2017), along with the often inherent spatio-temporal coherence (Doersch et al., 2015; Radford et al., 2015), can facilitate the unsupervised learning of representations and embeddings. For example, Zagoruyko & Komodakis (2015) explore multiple architectures to compare image patches, and Pathak et al. (2017b) exploit temporal coherence to learn object-centric features. Gao et al. (2016) rely on spatial proximity of detected objects to determine attraction in metric learning. OCN operates similarly, but does not require spatial proximity for positive matches; it does, however, take advantage of the likely presence of the same object in any pair of frames within a video. Zhang et al.
(2017b) also take a similar unsupervised metric learning approach for tracking specific faces, using tracking trajectories and heuristics for matching trajectories to obtain richer positive matches. While our approach is simpler in that it does not require tracking or 3D matching, it could be augmented with extra matching signals. In robotics and other real-world scenarios where agents are often only able to obtain sparse signals from their environment, self-learned embeddings can serve as an efficient representation to optimize learning objectives. Pathak et al. (2017a) introduce a curiosity-driven approach to obtain a reward signal from visual inputs; other methods use similar strategies to enable grasping (Pinto & Gupta, 2016) and manipulation tasks (Sermanet et al., 2018), or to be pose and background agnostic (Held et al., 2015). Mitash et al. (2017) jointly use 3D synthetic and real data to learn a representation to detect objects and estimate their pose, even for cluttered configurations. Hickson et al. (2018) learn semantic classes of objects in videos by integrating clustering into a convolutional neural network.

3 UNSUPERVISED LEARNING OF OBJECT REPRESENTATIONS

We propose an unsupervised approach to the problem of object understanding for multiple reasons: (1) to make data collection simple and scalable, (2) to increase autonomy in robotics by continuously learning about new objects without assistance, and (3) to discover continuous representations that are richer and more subtle than the discrete set of attributes that humans might provide as supervision, which may not match future new environments. All these objectives require a method that can learn about objects and differentiate them without supervision. To bootstrap our learning signal we leverage two assumptions: (1) we are provided with a general objectness model so that we can attend to individual objects in a scene, and (2) during an observation sequence the same objects will be present in most frames (this can later be relaxed by using an approximate estimation of ego-motion). Given a video sequence around a scene containing multiple objects, we randomly select two frames I and Î in the sequence and detect the objects present in each image. Let us assume N and M objects are detected in images I and Î, respectively. Each of the n-th and m-th cropped object images is embedded in a low-dimensional space, organized by a metric learning objective. Unlike traditional methods which rely on human-provided similarity labels to drive metric learning, we use a self-supervised approach to mine synthetic similarity labels.

3.1 OBJECTNESS DETECTION

To detect objects, we use Faster R-CNN (Ren et al., 2015) trained on the COCO object detection dataset (Lin et al., 2014). Faster R-CNN detects objects in two stages: it first generates class-agnostic bounding box proposals for all objects present in an image (Fig. 1), and second associates detected objects with class labels. We use OCN to discover object attributes, and only rely on the first objectness stage of Faster R-CNN to detect object candidates. Examples of detected objects are illustrated in Fig. 1.

3.2 METRIC LOSS FOR OBJECT ATTRIBUTE DISENTANGLEMENT

We denote a cropped object image by $x \in \mathcal{X}$ and compute its embedding via a convolutional neural network $f(x): \mathcal{X} \rightarrow \mathbb{R}^K$. Note that for simplicity we may omit x from f(x), while f inherits all superscripts and subscripts. Let us consider a pair of images I and Î that are taken at random from the same contiguous observation sequence.
Let us also assume there are N and M objects detected in I and Î, respectively. We denote the n-th and m-th objects in the images I and Î as $x_n^I$ and $x_m^{\hat{I}}$, respectively. We compute the distance matrix $D_{n,m} = \sqrt{(f_n^I - f_m^{\hat{I}})^2}$, $n \in 1..N$, $m \in 1..M$. For every embedded anchor $f_n^I$, $n \in 1..N$, we select the embedding $f_m^{\hat{I}}$ with minimum distance as its positive: $f_{n^+}^{\hat{I}} = \arg\min_m (D_{n,m})$. Given a batch of (anchor, positive) pairs $\{(x_i, x_i^+)\}_{i=1}^N$, the n-pair loss is defined as follows (Sohn, 2016):

$$\mathcal{L}_{N\text{-pair}}\big(\{(x_i, x_i^+)\}_{i=1}^N; f\big) = \frac{1}{N} \sum_{i=1}^{N} \log\Big(1 + \sum_{j \neq i} \exp\big(f_i^\top f_j^+ - f_i^\top f_i^+\big)\Big)$$

The loss learns embeddings that identify ground-truth anchor-positive pairs from all other anchor-negative pairs in the same batch. It is formulated as a sum of softmax multi-class cross-entropy losses over a batch, encouraging the inner product of each anchor-positive pair $(f_i, f_i^+)$ to be larger than that of all anchor-negative pairs $(f_i, f_{j \neq i}^+)$. The final OCN training objective over an observation sequence is the sum of n-pair losses over all pairs of individual frames:

$$\mathcal{L}_{OCN} = \mathcal{L}_{N\text{-pair}}\big(\{(x_n^I, x_{n^+}^{\hat{I}})\}_{n=1}^N; f\big) + \mathcal{L}_{N\text{-pair}}\big(\{(x_m^{\hat{I}}, x_{m^+}^{I})\}_{m=1}^M; f\big)$$

3.3 ARCHITECTURE

OCN takes a standard ResNet50 architecture up to the global pool layer and initializes it with ImageNet pre-trained weights. We then add three additional ResNet convolutional layers and a fully connected layer to produce the final embedding. The network is trained with the n-pair metric learning loss as discussed in Sec. 3.2. Our architecture is depicted in Fig. 1 and Fig. 2.

3.4 OBJECT-CENTRIC EMBEDDING SPACE

By using multiple views of the same scene and by attending to individual objects, our architecture allows us to differentiate subtle variations of object attributes. Observing the same object across different views facilitates learning invariance to scene-specific properties, such as scale, occlusion, lighting, and background, as each frame exhibits variations of these factors. The network solves the metric loss by representing object-centric attributes, such as shape, function, texture, or color, as these are consistent for (anchor, positive)-pairs and dissimilar for (anchor, negative)-pairs.

3.5 WHY SHOULD THIS WORK?

One might expect that this approach only works if it is given a good enough initialization, so that matching the same object across multiple frames is more likely than random chance. While ImageNet pretraining certainly helps convergence, as shown in Table 1, it is not a requirement to learn meaningful representations, as shown in Sec. 8. When all weights are random and no labels are provided, what can drive the network to consistently converge to meaningful embeddings? We estimate that the co-occurrence of the following hypotheses drives this convergence: (1) objects often remain visually similar to themselves across multiple viewpoints, (2) limiting the possible object matches within a scene increases the likelihood of a positive match, (3) the low dimensionality of the embedding space forces the model to generalize by sharing abstract features across objects, (4) the smoothness of embeddings learned with metric learning facilitates convergence when supervision signals are weak, and (5) occasional true-positive matches (even by chance) yield more coherent gradients than false-positive matches, which produce inconsistent gradients and dissipate as noise, leading over time to an amplification of consistent gradients and a stronger initial supervision signal.
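To make the mining-and-loss procedure of Sec. 3.2 concrete, the sketch below computes the cross-frame distance matrix, selects each anchor's nearest embedding in the other frame as its positive, and evaluates the n-pair loss. This is a minimal NumPy illustration written for this exposition, not the authors' implementation; all function and variable names are assumptions.

```python
import numpy as np

def mine_positives(f_I, f_I_hat):
    """For each anchor embedding in frame I, pick the nearest
    embedding in frame I_hat as its positive (Sec. 3.2)."""
    diff = f_I[:, None, :] - f_I_hat[None, :, :]   # (N, M, K)
    D = np.sqrt((diff ** 2).sum(axis=-1))          # distance matrix (N, M)
    return f_I_hat[D.argmin(axis=1)]               # positives, one per anchor

def n_pair_loss(anchors, positives):
    """n-pair loss (Sohn, 2016): push each anchor-positive inner
    product above all anchor-negative inner products in the batch."""
    logits = anchors @ positives.T                 # logits[i, j] = f_i . f_j+
    pos = np.diag(logits)                          # f_i . f_i+
    off = logits - pos[:, None]                    # f_i . f_j+ - f_i . f_i+
    np.fill_diagonal(off, -np.inf)                 # exclude the j == i terms
    return np.log1p(np.exp(off).sum(axis=1)).mean()

# Toy usage: N=4 and M=5 detected objects with K=8-dim embeddings.
rng = np.random.default_rng(0)
f_I, f_I_hat = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
loss_ocn = (n_pair_loss(f_I, mine_positives(f_I, f_I_hat)) +
            n_pair_loss(f_I_hat, mine_positives(f_I_hat, f_I)))
```

Both frames are used reciprocally to mine anchors and positives, matching the two terms of $\mathcal{L}_{OCN}$ above.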
4 DATA COLLECTION, HYPERPARAMETERS, AND TRAINING

To evaluate the effectiveness of OCN embeddings we generated two datasets of real and synthetic objects. For the (unlabeled) real data we arrange objects in table-top configurations and capture frames from continuous camera trajectories. The (labeled) synthetic data is generated from renderings of 3D objects in a similar configuration. Details about the datasets are reported in Table 4.

4.1 SYNTHETIC DATA GENERATION

To generate diverse object configurations we use 12 categories (airplane, car, chair, cup, bottle, bowl, guitar, keyboard, lamp, monitor, radio, vase) from ModelNet (Wu et al., 2015). The selected categories cover around 8k of the 12k models available in the entire dataset. ModelNet provides the object models in an 80-20 split for training and testing. We further split the testing data into models for test and validation, resulting in an 80-10-10 split for training, validation, and test. For validation purposes, we manually assign each model labels describing the semantic and functional properties of the object, including the labels ‘class’, ‘has lid’, ‘has wheels’, ‘has buttons’, ‘has flat surface’, ‘has legs’, ‘is container’, ‘is sittable’, and ‘is device’. Fig. 9 shows an example scene. We randomly define the number of objects (up to 20) in a scene and select half of the objects from two randomly selected categories; the other half is selected from the remaining object categories. We further randomly define the positions of the objects and vary their sizes, both such that the objects do not intersect. Additionally, each object is assigned one of eight predefined colors. We use this setup to generate 100K scenes for training and 50K scenes each for validation and testing. For each scene we generate n = 10 views and select random combinations of two views for detecting objects. In total we produce 400K views (200K pairs) for training and 50K views (25K pairs) each for validation and testing.

4.2 AUTOMATIC REAL DATA COLLECTION

Our real object dataset consists of 187 unique object instances spread across six categories: ‘balls’, ‘bottles & cans’, ‘bowls’, ‘cups & mugs’, ‘glasses’, and ‘plates’. Table 5 provides details about the number of objects in each category and how they are split between training, test, and validation. Note that we distinguish between the cups & mugs and glasses categories based on whether the object has a handle. Fig. 3 provides a snapshot of our entire object dataset. We automated the real-world data collection using a mobile robot equipped with an HD camera (Fig. 8). At each run, we place about 10 objects on the table and then trigger the capturing process by having the robot rotate around the table by 90 degrees (see Fig. 8). On average, 130 images are captured per run. We select random pairs of frames from each trajectory during training of the OCN. We performed 345, 109, and 122 runs of data collection for the training, test, and validation datasets, respectively. In total, 43,084 images were captured for OCN training, and 15,061 and 16,385 were used for test and validation, respectively.

4.3 TRAINING

An OCN is trained based on two views of the same synthetic or real scene. We randomly pick two frames of a camera trajectory around the scene to ensure the same objects are present; the frames are selected based on their time stamps so that they are as far apart as possible. We set the n-pairs regularization to λ = 0.002. The distance matrix $D_{n,m}$ (Sec.
3.2) is constructed based on the individually detected objects for each of the two frames. The object detector was not specifically trained on any of our datasets. Furthermore, we only used scenes where at least 5 objects were detected in each frame. Operating on fewer objects results in a noisier training signal, as the n-pair loss cannot create enough meaningful (anchor, negative)-pairs for contrasting them with the (anchor, positive)-pair. As the number of detected objects per view varies, we reciprocally use both frames to find anchors and their corresponding positives, as discussed in Sec. 3.2. Across our experiments, the OCN training converged after 600k-1.2M iterations.

5 EXPERIMENTAL RESULTS

To evaluate the effectiveness of an OCN embedding as a representation for object attribute disentanglement, we performed experiments on a large-scale synthetic dataset and on two robotic tasks, pointing and grasping, in a real-world environment. Moreover, the experiments are designed to directly showcase the usefulness of OCN in real robotics applications.

5.1 ATTRIBUTE CLASSIFICATION

One way to evaluate the quality of unsupervised embeddings is to train attribute classifiers on top of the embedding using labeled data. Note, however, that this may not entirely reflect the quality of an embedding, because it only measures a small, discrete number of attributes, while an embedding may capture a larger number of more continuous, abstract concepts.

Classifiers: We consider two types of classifiers to be applied on top of existing embeddings in this experiment: linear and nearest-neighbor classifiers. The linear classifier consists of a single linear layer going from the embedding space to the 1-hot encoding of the target label for each attribute. It is trained with a range of learning rates, and the best model is retained for each attribute. The nearest-neighbor classifier consists of embedding an entire ‘training’ set and, for each embedding of the evaluation set, assigning to it the labels of the nearest sample from the training set. Nearest-neighbor classification is not a perfect approach, because it does not necessarily measure generalization as linear classification does, and results may vary significantly depending on how many nearest neighbors are available. It is also less subject to data imbalances. We still report this metric to get a sense of its performance, because in an unsupervised inference context the models might be used in a nearest-neighbor fashion (e.g., as in Sec. 5.3).

Baselines: We compare multiple baselines in Table 1 and Table 6. The ‘Softmax’ baseline refers to the model described in Fig. 2, i.e., the exact same architecture as for OCN, except that the model is trained with a supervised cross-entropy/softmax loss. The ‘ResNet50’ baseline refers to using the unmodified outputs of the ResNet50 model (He et al., 2016) (2048-dimensional vectors) as embeddings and training a nearest-neighbor classifier as defined above. We consider the ‘Softmax’ and ‘ResNet50’ baselines as the lower and upper error bounds for standard approaches to a classification task. The ‘OCN supervised’ baseline refers to the exact same OCN training described in Fig. 2, except that the positive matches are provided rather than discovered automatically. ‘OCN supervised’ represents the metric learning upper bound for classification. Finally, we indicate as a reference the error rates for random classification.
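As a concrete illustration of the nearest-neighbor classifier just described, the hedged sketch below embeds a labeled ‘training’ set and assigns each evaluation embedding the attribute label of its single closest training sample; names and dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def nn_classify(train_emb, train_labels, eval_emb):
    """Label each evaluation embedding with the attribute label of its
    nearest (L2) training embedding."""
    d2 = ((eval_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(axis=-1)
    return train_labels[d2.argmin(axis=1)]

# Toy usage: a binary attribute (e.g. 'has lid') over 32-dim embeddings.
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(100, 32))
train_labels = rng.integers(0, 2, size=100)
eval_emb, eval_labels = rng.normal(size=(20, 32)), rng.integers(0, 2, size=20)
error_rate = (nn_classify(train_emb, train_labels, eval_emb) != eval_labels).mean()
```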
Results: We quantitatively evaluate our unsupervised models against supervised baselines on the labeled synthetic datasets (train and test) introduced in Sec. 4. Note that there is no overlap in object instances between the training and the evaluation set. The first take-away is that unsupervised performance closely follows its supervised baseline when trained with metric learning. As expected, the cross-entropy/softmax approach performs best and establishes the error lower bound, while the ResNet50 baseline provides the upper-bound results. Note that the dataset is heavily imbalanced for the binary attributes reported in Table 1 and Table 6 and requires balancing for linear classification. In Fig. 4 and Sec. 9 and 11, we show qualitative results of nearest-neighbor objects discovered by OCN.

5.2 INSTANCE DETECTION AND TRACKING

An OCN embedding can be used to match instances of the same object across multiple views and over time. This is illustrated in Fig. 5, where objects of one view (anchors) are matched against the objects of another view. We can find the nearest neighbors (positives) in the scene through the OCN embedding space, as well as the closest matching objects with descending similarity (negatives). We report the quality of finding corresponding objects in Table 2 and differentiate between attribute errors, which indicate a mismatch of specific attributes (e.g., a blue cup is associated with a red cup), and object matching errors, which measure when objects are not of the same instance. An OCN embedding significantly improves detecting object instances across multiple views.

5.3 ROBOT EXPERIMENTS

Pointing: We evaluate the performance of OCN on a robotic pointing task (Fig. 6). The robot has to point to the object that it deems most similar to the object directly in front of it on the small table. The objects on the big table are randomly selected from each of the six object categories (Table 5). We consider two sets of these target objects. The quantitative experiment in Table 3 uses three query objects per category and is run three times for each combination of query and target objects (3 × 2 × 18 = 108 experiments performed). The full set of experiments for one of the three runs is illustrated in Fig. 15. Table 3 quantifies OCN performance on this experiment. We report on errors related to the ‘class’ and ‘container’ attributes (note that the other ten attributes described in Sec. 4.1 are not relevant to the real object dataset). While the trained OCN model performs well on most categories, it has particular difficulty with the object classes ‘cups & mugs’ and ‘glasses’. These categories are generally mistaken for the category ‘bowls’. As a result, the network performs much better on the attribute ‘container’, since the three categories ‘bowls’, ‘bottles & cans’, and ‘glasses’ all share the same attribute.

Grasping: We qualitatively evaluate the OCN performance on a grasping task in an environment unseen during training. First, a person holds and shows an object to the robot, then the robot picks up the most similar object from a set of objects on a table (see Fig. 7). In this experiment, we focus on evaluating OCN with objects that have either a similar shape or a similar color attribute. Using OCN, the robot can successfully identify and grasp the object that has the closest color and shape attributes to the query object. Note that the training data did not contain objects held by hand.
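The retrieval step underlying both the cross-view matching of Sec. 5.2 and the pointing/grasping selections of Sec. 5.3 reduces to ranking candidates by embedding distance. The short sketch below shows this ranking under assumed names, as an illustration rather than the authors' pipeline.

```python
import numpy as np

def rank_candidates(query_emb, candidate_embs):
    """Return candidate indices sorted by ascending L2 distance to the
    query embedding: index 0 is the best match (the 'positive')."""
    d = np.linalg.norm(candidate_embs - query_emb, axis=1)
    return np.argsort(d)

# Toy usage: point at the table object most similar to the query object.
rng = np.random.default_rng(2)
query = rng.normal(size=64)                 # embedding of the query object
table_objects = rng.normal(size=(6, 64))    # embeddings of detected objects
print("point at object", rank_candidates(query, table_objects)[0])
```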
6 CONCLUSION

We introduced a novel unsupervised representation learning algorithm that allows us to differentiate object attributes, such as color, shape, and function. An OCN embedding is learned by contrasting the features of objects captured from two frames of monocular camera trajectories around table-top indoor environments. We specifically attend to individual objects by detecting object bounding boxes and leverage a metric learning loss to disentangle subtle variations of object attributes. The resulting embedding space allows us to organize objects along multiple dimensions and serves as a representation for robotic learning. We show that an OCN embedding can be used in real robotic tasks such as grasping and pointing, where it is important to differentiate the visual and semantic attributes of individual object instances. Finally, we show that an OCN can be trained efficiently from RGB videos that are automatically obtained from a real robotic agent.

7 DATASET

8 RANDOM WEIGHTS

We find in Table 6 that models that are not pretrained with ImageNet supervision perform worse but still yield reasonable results. This indicates that the approach does not rely on a good initialization to bootstrap itself without labels. Even more surprisingly, when freezing the weights of the ResNet50 base of the model to its random initialization, results degrade but still remain far below chance, as well as below the ‘ResNet50 embeddings’ baseline. Obtaining reasonable results with random weights has already been observed in prior work such as Jarrett et al. (2009), Saxe et al. (2011), and Ulyanov et al. (2017).

9 ADDITIONAL QUALITATIVE RESULTS

10 ADDITIONAL ROBOTIC POINTING RESULTS

11 ADDITIONAL NEAREST NEIGHBOR RESULTS
1. What is the main contribution of the paper on unsupervised representation learning for visual inputs? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its technical novelty and performance compared to baselines? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions or concerns regarding the proposed method's application to robotic tasks, such as grasping and pointing scenarios? 5. What are the limitations of the experimental results and suggestions for improvement?
Review
Review

In this paper, an unsupervised representation learning method for visual inputs is proposed. The proposed method incorporates a metric learning approach that pulls nearest-neighbor pairs of image patches close to each other in the embedding space while pushing apart other pairs. The train and test scenarios are captured in a table-top setting with multiple objects of different colors, such as cups and bowls. Standard datasets are not used for benchmarking.

Pros:
- Unsupervised feature learning is an active and interesting area of research.
- The proposed method and network architecture are simple.

Cons:

-The paper does not present technical novelty. The proposed network architecture is a ResNet50 that uses the object proposals obtained by Faster RCNN (Ren et al. 2015) and incorporates the n-pair loss function proposed by Sohn 2016.

-Technical details are missing.
>When using object proposals obtained by Faster RCNN (Ren et al. 2015) to generate pairs, do you use all of the object proposals, or do you select a subset of them by some threshold?
>How is the robotic control performed in the robot grasping and pointing tasks? It seems that the grasping and pointing tasks are not learned and are based on conventional motion planning and state estimation. These details are not included in the manuscript.
>Section 4.2 mentions that a mobile robot was needed for collecting the data. What kind of mobile robot was used? Why was a mobile robot needed? There are no details about the robot platform in the manuscript, and no visualization is provided for the robot setup.
>Why is a ResNet pretrained on ImageNet used throughout the experiments (and not one pretrained on COCO?), while the object proposals are obtained by Faster RCNN, which is pretrained on COCO? How would that affect the results?

-The paper uses imprecise, confusing, and sometimes incorrect phrases and wordings throughout the draft. Here are a few examples:
> It is unclear what exactly is meant by “object correspondences” that naturally emerge (e.g. in the last four lines of the abstract). Would a correct object correspondence refer to “similar instances”, “similar colors”, “similar object categories”, “similar object poses”, or “similar functionality”? For example, the first column of Fig. 1 shows an example of two cups with “similar green color”, “similar tall shapes”, “similar functionality”, and “similar background” that are considered as “negative pairs (with red arrows)”, while in the same Fig. 1, two cups with drastically different appearances, one of which is “yellow with handle” (second column) and the other “green without handle” (third column), are considered to be positive pairs (blue arrows). Similar confusing examples and arguments can repeatedly be found in the experiments and embedding visualizations: Fig. 4, Fig. 5, Fig. 10-Fig. 15.
> Throughout the draft and in the figures (e.g. Fig. 1) it is emphasized that the data collection is done by a “robot” or is “robotic”. Why was a robot needed to capture images or videos around a table? A hand-held moving camera could simply have been used. Capturing images and videos around a space is a very well-known and simple practice used in many previous works in computer vision and machine learning. It does not need a robot, and it is not robotic.
> The first sentence of the second paragraph of the introduction is not a technically correct statement based on the well-known computer vision literature. Specifically, “class” is not considered an “attribute”.
What does it mean to disentangle “perceptual” and “semantic” object attributes? This is a very vague statement. “Color” is neither a “semantic” attribute nor a “perceptual” attribute. Also, in Section 3.2 and Fig. 2, it is confusing to consider the “class” label an attribute. Please refer to the well-known prior work of “Farhadi A, Endres I, Hoiem D, Forsyth D. Describing objects by their attributes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on 2009 Jun 20 (pp. 1778-1785). IEEE.” for a clear and well-known explanation of object attributes vs. categories. If your definition of “attributes” is technically different from that of this aforementioned prior work, it should be clarified to avoid confusion.

-Unconvincing usefulness for robotics applications: The paper explains that the proposed method is effective and useful for robotic tasks; however, this claim is not backed up with convincing robotic tasks or scenarios. Also, the performance on the robotic grasping and pointing tasks is not compared with any baselines.
> While the usefulness of the method for improving the robotic grasping task is listed in the contributions (number 3 in the last paragraph of Section 1), there are only two qualitative grasping scenarios shown in the paper. It is not explained how the robot control is done. In both grasping scenarios the target object is the one in the middle. It seems that the grasping control policies are not learned and are motion-planned or scripted, and only the visual matching of the target object with the objects of the scene is done with the proposed method. For evaluating such a scenario no robot is needed, and only similarity scores could be reported. It would have been interesting to see how representation learning can be seamlessly incorporated into robotic grasping, which involves control, as a tangible contribution; however, the presented scenario does not contribute to that problem and only performs visual matching. No baseline comparison is provided here.
> The details of the robot control for the robot pointing scenario are also not provided. The objects are placed in two rows in all scenarios, and it looks like the actual pointing is done by motion planning after the visual matching step performed by the proposed method. The presented method tries to find the closest visual match to the target object amongst the objects on the table and does not integrate it with any policy learning for improving the “robotic pointing task”, so there is no robotic contribution here. No baseline is provided for performance comparison in this robotic task either.

- The experimental results are weak and unconvincing:
> In the experiments corresponding to Table 6 and Table 1, what is the performance of the ResNet50 embedding (linear)? Can you provide these results?
> Table 6 shows that the performance gain of the proposed method compared to the supervised and unsupervised baselines is marginal.
> What is the performance of a baseline network with a similar architecture that uses a “contrastive loss” (based on object pairs with similar attributes)? This baseline is missed.
> Qualitatively, the visualizations in Fig. 1, Fig. 4-5, and Fig. 10-15 show that the OCN embedding is noisy. In all these figures, there are many less similar instances that are retrieved at the top and many similar instances that are ranked down the list.
> The paper refers to ad-hoc experimental protocols, which makes the evaluations unclear and invalid: what does it mean to report “class” and “container” attributes in Table 3 and Section 5.3? Why are “bottles and cans” considered as one group while they were referred to as different objects in all previous explanations of the object types and categories used in training and testing? What is the difference between “cups and mugs” and “glasses” that are separated? How are “balls”, “bowls”, “plates”, etc. listed as *attributes*? Weren't these previously referred to as object categories?
> The paper has some inconsistent experimental setups. Why, in Fig. 16, were the same instances removed for the real objects and not for the synthetic objects?

-The presentation of the paper can be improved. Here are a few examples:
>Grammatical or unclear wordings and typos (e.g. the caption of Fig. 2, “attracting embedding nearest neighbors”, is not readable; repetitive words in the last line of Section 3; etc.)
>In Fig. 10-15, using t-SNE (Maaten LV, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research. 2008;9(Nov):2579-605.) instead of a sorted list would provide a much more expressive visualization of the effectiveness of the approach. It is highly recommended that t-SNE be used instead of sorted lists of image patches.
ICLR
Title Object-Contrastive Networks: Unsupervised Object Representations

Abstract Discovering objects and their attributes is of great importance for autonomous agents to effectively operate in human environments. This task is particularly challenging due to the ubiquitousness of objects and all their nuances in perceptual and semantic detail. In this paper we present an unsupervised approach for learning disentangled representations of objects entirely from unlabeled monocular videos. These continuous representations are not biased by or limited to a discrete set of labels determined by human labelers. The proposed representation is trained with a metric learning loss, where nearest neighbors in embedding space are pulled together while being pushed against other objects. We show these unsupervised embeddings allow robots to discover object attributes that generalize to previously unseen environments. We quantitatively evaluate performance on a large-scale synthetic dataset with 12k object models, as well as on a real dataset collected by a robot, and show that our unsupervised object understanding generalizes to previously unseen objects. Specifically, we demonstrate the effectiveness of our approach on robotic manipulation tasks, such as pointing at and grasping objects. An interesting and perhaps surprising finding in this approach is that given a limited set of objects, object correspondences will naturally emerge when using metric learning without requiring explicit positive pairs. Videos of robotic experiments are available at sites.google.com/view/object-contrastive-networks

Figure 1: Object-Contrastive Networks (OCN): by attracting embedding nearest neighbors and repulsing others using metric learning, continuous object representations naturally emerge. In a video collected by a robot looking at a table from different viewpoints, objects are extracted from random pairs of frames. Given two lists of objects, each object is attracted to its closest neighbor while being pushed against all other objects. Noisy repulsion may occur when the same object across viewpoints is not matched against itself. However, the learning still converges towards disentangled and semantically meaningful object representations, which can be useful in autonomous robotics applications.

1 INTRODUCTION

The ability to autonomously train to recognize and differentiate previously unseen objects, as well as infer general properties and attributes, is an important skill for robotic agents. Increased autonomy leads to robustness, one of the main challenges real-world robotics faces. It also renders scaling up data collection practical. Additionally, removing human supervision from the loop has the potential to enable learning richer and less biased continuous representations than ones supervised by a limited set of discrete labels. Unbiased representations can prove useful in unknown future environments different from the ones seen during supervision, a typical challenge for robotics.

In this work we present an unsupervised method that learns representations that disentangle perceptual and semantic object attributes such as class, function, and color. We automatically acquire training data by capturing videos with a real robot; a robot base moves around a table to capture objects in various arrangements. Assuming a pre-existing objectness detector, we extract objects from random frames within the same scene containing the same objects, and let the metric learning system decide how to assign positive and negative pairs of embeddings. Representations that generalize across objects naturally emerge despite not being given ground-truth matches. Unlike previous methods, we abstain from employing additional self-supervisory training signals such as tracking or depth. The only inputs to the system are monocular videos. This simplifies data collection and allows our embedding to integrate into existing end-to-end learning pipelines. We demonstrate that a trained Object-Contrastive Network (OCN) embedding allows us to reliably identify object instances based on their visual features such as color and shape. Moreover, we show that objects are also organized along their semantic or functional properties. For example, a cup might not only be associated with other cups, but also with other containers like bowls or vases.

The key contributions of this work are: (1) an unsupervised algorithm for learning representations of objects (naturally encoding attributes like class, color, texture, and function) which generalize to previously unseen objects; (2) showing that monocular videos are sufficient to contrast similar and dissimilar object pairs naturally, without requiring explicit correspondences; (3) demonstrating the autonomy of the system, using a robot from data collection to tasks such as pointing at and grasping objects similar to ones presented to it.

2 RELATED WORK

Object discovery from visual media. Identifying objects and their attributes has a long history in computer vision and robotics (Tuytelaars et al., 2009).
Traditionally, approaches focus on identifying regions in unlabeled images to locate and identify objects (Sivic et al., 2005; Russell et al., 2006; Arora et al., 2007; Fritz & Schiele, 2008; Kim et al., 2008). Discovering objects based on the notion of 'objectness' instead of specific categories enables more principled strategies for object recognition (Uijlings et al., 2013; Romea et al., 2011). Several methods address the challenge of discovering, tracking, and segmenting objects in videos based on supervised (Wang et al., 2014) or unsupervised (Kwak et al., 2015; Schulter et al., 2013; Haller & Leordeanu, 2017) techniques. The spatio-temporal signal present in videos can also reveal additional cues that help identify objects (Wang & Gupta, 2015; Jain et al., 2017). In the context of robotics, methods also focus on exploiting depth to discover objects and their properties (Mishra et al., 2012; Karpathy et al., 2013). Many recent approaches exploit the effectiveness of convolutional deep neural networks to detect objects (Ren et al., 2015; Liu et al., 2016; Lin et al., 2017) and even provide pixel-precise segmentations (He et al., 2017). While the detection efficiency of these methods is unparalleled, they rely on supervised training procedures and therefore require large amounts of labeled data. Self-supervised methods for the discovery of object attributes mostly focus on learning representations by identifying features in multi-view imagery (DeTone et al., 2017; Lin et al., 2015) and videos (Wang & Gupta, 2015), or by stabilizing the training signal through domain randomization (Doersch et al., 2015; Zhang et al., 2018). Some methods not only operate on RGB images but also employ additional signals, such as depth (Florence et al., 2018; Pot et al., 2018) or egomotion (Agrawal et al., 2015), to self-supervise the learning process. It has been recognized that contrasting observations from multiple views can provide a view-invariant training signal that identifies even subtle cues as relevant features, which can be leveraged for instance categorization and imitation learning tasks (Sermanet et al., 2018).
Unsupervised representation learning. Unlike supervised learning techniques, unsupervised methods focus on learning representations directly from data to enable image retrieval (Paulin et al., 2015), transfer learning (Zhang et al., 2017a), image denoising (Vincent et al., 2008), and other tasks (Dumoulin et al., 2016; Kumar et al., 2015). Using data from multiple modalities, such as imagery of multiple views (Sermanet et al., 2018), sound (Owens et al., 2016; Aytar et al., 2016), or other sensory inputs (Dehzangi et al., 2017), along with the often inherent spatio-temporal coherence (Doersch et al., 2015; Radford et al., 2015), can facilitate the unsupervised learning of representations and embeddings. For example, Zagoruyko & Komodakis (2015) explore multiple architectures to compare image patches, and Pathak et al. (2017b) exploit temporal coherence to learn object-centric features. Gao et al. (2016) rely on spatial proximity of detected objects to determine attraction in metric learning; OCN operates similarly but does not require spatial proximity for positive matches. It does, however, take advantage of the likely presence of the same object in any pair of frames within a video. Zhang et al.
(2017b) also take a similar unsupervised metric learning approach for tracking specific faces, using tracking trajectories and heuristics to match trajectories and thus obtain richer positive matches. While our approach is simpler in that it does not require tracking or 3D matching, it could be augmented with such extra matching signals. In robotics and other real-world scenarios where agents are often only able to obtain sparse signals from their environment, self-learned embeddings can serve as an efficient representation to optimize learning objectives. Pathak et al. (2017a) introduce a curiosity-driven approach to obtain a reward signal from visual inputs; other methods use similar strategies to enable grasping (Pinto & Gupta, 2016) and manipulation tasks (Sermanet et al., 2018), or to be pose and background agnostic (Held et al., 2015). Mitash et al. (2017) jointly use 3D synthetic and real data to learn a representation to detect objects and estimate their pose, even for cluttered configurations. Hickson et al. (2018) learn semantic classes of objects in videos by integrating clustering into a convolutional neural network.

3 UNSUPERVISED LEARNING OF OBJECT REPRESENTATIONS
We propose an unsupervised approach to the problem of object understanding for multiple reasons: (1) to make data collection simple and scalable, (2) to increase autonomy in robotics by continuously learning about new objects without assistance, and (3) to discover continuous representations that are richer and more subtle than the discrete set of attributes that humans might provide as supervision, which may not match future new environments. All these objectives require a method that can learn about objects and differentiate them without supervision. To bootstrap our learning signal we leverage two assumptions: (1) we are provided with a general objectness model so that we can attend to individual objects in a scene, and (2) during an observation sequence the same objects will be present in most frames (this can later be relaxed by using an approximate estimation of ego-motion). Given a video sequence around a scene containing multiple objects, we randomly select two frames I and Î in the sequence and detect the objects present in each image. Let us assume N and M objects are detected in images I and Î, respectively. Each cropped object image is embedded in a low-dimensional space, organized by a metric learning objective. Unlike traditional methods, which rely on human-provided similarity labels to drive metric learning, we use a self-supervised approach to mine synthetic similarity labels.

3.1 OBJECTNESS DETECTION
To detect objects, we use Faster-RCNN (Ren et al., 2015) trained on the COCO object detection dataset (Lin et al., 2014). Faster-RCNN detects objects in two stages: first, it generates class-agnostic bounding box proposals for all objects present in an image (Fig. 1); second, it associates detected objects with class labels. We use OCN to discover object attributes, and only rely on the first, objectness stage of Faster-RCNN to detect object candidates. Examples of detected objects are illustrated in Fig. 1.

3.2 METRIC LOSS FOR OBJECT ATTRIBUTE DISENTANGLEMENT
We denote a cropped object image by $x \in \mathcal{X}$ and compute its embedding via a convolutional neural network $f(x): \mathcal{X} \rightarrow \mathcal{K}$. Note that for simplicity we may omit $x$ from $f(x)$, while $f$ inherits all superscripts and subscripts. Let us consider a pair of images I and Î taken at random from the same contiguous observation sequence.
Let us also assume there are $N$ and $M$ objects detected in I and Î, respectively. We denote the $n$-th and $m$-th objects in the images I and Î as $x_n^I$ and $x_m^{\hat{I}}$, respectively. We compute the distance matrix $D_{n,m} = \sqrt{(f_n^I - f_m^{\hat{I}})^2}$, $n \in 1..N$, $m \in 1..M$. For every embedded anchor $f_n^I$, $n \in 1..N$, we select the embedding $f_m^{\hat{I}}$ with minimum distance as its positive: $f_{n+}^{\hat{I}} = \mathrm{argmin}_m(D_{n,m})$. Given a batch of (anchor, positive) pairs $\{(x_i, x_i^+)\}_{i=1}^N$, the n-pairs loss is defined as follows (Sohn, 2016):

$L_{N\text{-pair}}(\{(x_i, x_i^+)\}_{i=1}^N; f) = \frac{1}{N}\sum_{i=1}^N \log\Big(1 + \sum_{j \neq i} \exp(f_i^\top f_j^+ - f_i^\top f_i^+)\Big)$

The loss learns embeddings that identify ground-truth anchor-positive pairs from all other anchor-negative pairs in the same batch. It is formulated as a sum of softmax multi-class cross-entropy losses over a batch, encouraging the inner product of each anchor-positive pair $(f_i, f_i^+)$ to be larger than that of all anchor-negative pairs $(f_i, f_{j \neq i}^+)$. The final OCN training objective over an observation sequence is the sum of n-pairs losses over all pairs of individual frames:

$L_{OCN} = L_{N\text{-pair}}(\{(x_n^I, x_{n+}^{\hat{I}})\}_{n=1}^N; f) + L_{N\text{-pair}}(\{(x_m^{\hat{I}}, x_{m+}^{I})\}_{m=1}^M; f)$

3.3 ARCHITECTURE
OCN takes a standard ResNet50 architecture up to the global pooling layer and initializes it with ImageNet pre-trained weights. We then add three additional ResNet convolutional layers and a fully connected layer to produce the final embedding. The network is trained with the n-pairs metric learning loss as discussed in Sec. 3.2. Our architecture is depicted in Fig. 1 and Fig. 2.

3.4 OBJECT-CENTRIC EMBEDDING SPACE
By using multiple views of the same scene and by attending to individual objects, our architecture allows us to differentiate subtle variations of object attributes. Observing the same object across different views facilitates learning invariance to scene-specific properties, such as scale, occlusion, lighting, and background, as each frame exhibits variations of these factors. The network solves the metric loss by representing object-centric attributes, such as shape, function, texture, or color, as these are consistent for (anchor, positive)-pairs and dissimilar for (anchor, negative)-pairs.

3.5 WHY SHOULD THIS WORK?
One might expect that this approach can only work if it is given a good enough initialization, so that matching the same object across multiple frames is more likely than random chance. While ImageNet pretraining certainly helps convergence, as shown in Table 1, it is not a requirement to learn meaningful representations, as shown in Sec. 8. When all weights are random and no labels are provided, what can drive the network to consistently converge to meaningful embeddings? We posit that the co-occurrence of the following effects drives this convergence: (1) objects often remain visually similar to themselves across multiple viewpoints; (2) limiting the possible object matches within a scene increases the likelihood of a positive match; (3) the low dimensionality of the embedding space forces the model to generalize by sharing abstract features across objects; (4) the smoothness of embeddings learned with metric learning facilitates convergence when supervision signals are weak; and (5) occasional true-positive matches (even by chance) yield more coherent gradients than false-positive matches, which produce inconsistent gradients that dissipate as noise, leading over time to an acceleration of consistent gradients and a stronger initial supervision signal.
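To make the objective concrete, the following is a minimal PyTorch sketch of the positive mining and n-pairs loss of Sec. 3.2; the function names and the use of `torch.cdist` for the distance matrix are our own choices, not from the paper:

```python
import torch
import torch.nn.functional as F

def npairs_loss(anchors, positives):
    # anchors, positives: (N, K) embeddings; row i of `positives` is the
    # mined positive for row i of `anchors`. Row-wise softmax cross-entropy
    # over inner products equals log(1 + sum_{j!=i} exp(f_i.f_j+ - f_i.f_i+)).
    logits = anchors @ positives.t()                  # (N, N)
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)

def ocn_loss(f_I, f_J):
    # f_I: (N, K), f_J: (M, K) embeddings of objects cropped from two frames.
    D = torch.cdist(f_I, f_J)                         # distance matrix D_{n,m}
    loss_I = npairs_loss(f_I, f_J[D.argmin(dim=1)])   # anchors in I, positives in J
    loss_J = npairs_loss(f_J, f_I[D.argmin(dim=0)])   # and symmetrically
    return loss_I + loss_J
```

In practice one would also add the small n-pairs regularization with λ = 0.002 mentioned in Sec. 4.3 (an L2 penalty on embedding norms in the original n-pairs formulation) on top of this loss.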
4 DATA COLLECTION, HYPERPARAMETERS, AND TRAINING
To evaluate the effectiveness of OCN embeddings we generated two datasets of real and synthetic objects. For the (unlabeled) real data we arrange objects in table-top configurations and capture frames from continuous camera trajectories. The (labeled) synthetic data is generated from renderings of 3D objects in a similar configuration. Details about the datasets are reported in Table 4.

4.1 SYNTHETIC DATA GENERATION
To generate diverse object configurations we use 12 categories (airplane, car, chair, cup, bottle, bowl, guitar, keyboard, lamp, monitor, radio, vase) from ModelNet (Wu et al., 2015). The selected categories cover around 8k of the 12k models available in the entire dataset. ModelNet provides the object models in an 80-20 split for training and testing. We further split the testing data into models for test and validation, resulting in an 80-10-10 split for training, validation, and test. For validation purposes, we manually assign each model labels describing the semantic and functional properties of the object, including the labels 'class', 'has lid', 'has wheels', 'has buttons', 'has flat surface', 'has legs', 'is container', 'is sittable', and 'is device'. Fig. 9 shows an example scene. We randomly define the number of objects (up to 20) in a scene and select half of the objects from two randomly selected categories. The other half is selected from the remaining object categories. We further randomly define the positions of the objects and vary their sizes, both so that they do not intersect. Additionally, each object is assigned one of eight predefined colors. We use this setup to generate 100K scenes for training and 50K scenes each for validation and testing. For each scene we generate a number (n = 10) of views and select random combinations of two views for detecting objects. In total we produce 400K views (200K pairs) for training and 50K views (25K pairs) each for validation and testing.

4.2 AUTOMATIC REAL DATA COLLECTION
Our real object dataset consists of 187 unique object instances spread across six categories: 'balls', 'bottles & cans', 'bowls', 'cups & mugs', 'glasses', and 'plates'. Table 5 provides details about the number of objects in each category and how they are split between training, test, and validation. Note that we distinguish between the 'cups & mugs' and 'glasses' categories based on whether the object has a handle. Fig. 3 provides a snapshot of our entire object dataset. We automated the real-world data collection by using a mobile robot equipped with an HD camera (Fig. 8). At each run, we place about 10 objects on the table and then trigger the capturing process by having the robot rotate around the table by 90 degrees (see Fig. 8). On average, 130 images are captured per run. We select random pairs of frames from each trajectory during training of the OCN. We performed 345, 109, and 122 runs of data collection for the training, test, and validation datasets, respectively. In total, 43084 images were captured for OCN training, and 15061 and 16385 were used for test and validation, respectively.

4.3 TRAINING
An OCN is trained based on two views of the same synthetic or real scene. We randomly pick two frames of a camera trajectory around the scene to ensure the same objects are present; the frames are selected based on their time stamps so that they are as far apart as possible. We set the n-pairs regularization to λ = 0.002. The distance matrix $D_{n,m}$ (Sec. 3.2) is constructed based on the individually detected objects for each of the two frames.
The object detector was not specifically trained on any of our datasets. Furthermore, we only used scenes where at least 5 objects were detected in each frame. Operating on fewer objects results in a noisier training signal, as the n-pairs loss cannot create enough meaningful (anchor, negative)-pairs to contrast with the (anchor, positive)-pair. As the number of detected objects per view varies, we reciprocally use both frames to find anchors and their corresponding positives, as discussed in Sec. 3.2. Across our experiments, OCN training converged after 600k-1.2M iterations.

5 EXPERIMENTAL RESULTS
To evaluate the effectiveness of an OCN embedding as a representation for object attribute disentanglement, we performed experiments on a large-scale synthetic dataset and on two robotic tasks, pointing and grasping, in a real-world environment. Moreover, the experiments are designed to directly showcase the usefulness of OCN in real robotic applications.

5.1 ATTRIBUTES CLASSIFICATION
One way to evaluate the quality of unsupervised embeddings is to train attribute classifiers on top of the embedding using labeled data. Note, however, that this may not entirely reflect the quality of an embedding, because it measures only a small, discrete number of attributes, while an embedding may capture a larger number of continuous, abstract concepts.
Classifiers: We consider two types of classifiers to be applied on top of existing embeddings in this experiment: linear and nearest-neighbor classifiers. The linear classifier consists of a single linear layer going from embedding space to the 1-hot encoding of the target label for each attribute. It is trained with a range of learning rates, and the best model is retained for each attribute. The nearest-neighbor classifier consists of embedding an entire 'training' set and, for each embedding of the evaluation set, assigning to it the labels of the nearest sample from the training set. Nearest-neighbor classification is not a perfect approach because it does not necessarily measure generalization as linear classification does, and results may vary significantly depending on how many nearest neighbors are available. It is also less subject to data imbalances. We still report this metric to get a sense of its performance, because in an unsupervised inference context the models might be used in a nearest-neighbor fashion (e.g. as in Sec. 5.3).
Baselines: We compare multiple baselines in Table 1 and Table 6. The 'Softmax' baseline refers to the model described in Fig. 2, i.e. the exact same architecture as for OCN, except that the model is trained with a supervised cross-entropy/softmax loss. The 'ResNet50' baseline refers to using the unmodified outputs of the ResNet50 model (He et al., 2016) (2048-dimensional vectors) as embeddings and training a nearest-neighbor classifier as defined above. We consider the 'Softmax' and 'ResNet50' baselines as the lower and upper error bounds for standard approaches to a classification task. The 'OCN supervised' baseline refers to the exact same OCN training described in Fig. 2, except that the positive matches are provided rather than discovered automatically. 'OCN supervised' represents the metric learning upper bound for classification. Finally, we indicate as a reference the error rates for random classification.
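As a concrete illustration of the nearest-neighbor evaluation protocol above, here is a minimal numpy sketch (the function name is ours; for large sets the distance computation would be chunked):

```python
import numpy as np

def nearest_neighbor_classify(train_emb, train_labels, test_emb):
    # Assign to each test embedding the attribute label of the nearest
    # training embedding (squared Euclidean distance).
    d2 = ((test_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(-1)
    return train_labels[d2.argmin(axis=1)]

# error rate for one attribute:
# err = (nearest_neighbor_classify(E_tr, y_tr, E_te) != y_te).mean()
```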
Results: We quantitatively evaluate our unsupervised models against supervised baselines on the labeled synthetic datasets (train and test) introduced in Sec. 4. Note that there is no overlap in object instances between the training and the evaluation set. The first take-away is that unsupervised performance closely follows its supervised baseline when trained with metric learning. As expected, the cross-entropy/softmax approach performs best and establishes the error lower bound, while the ResNet50 baseline gives the upper-bound results. Note that the dataset is heavily imbalanced for the binary attributes reported in Table 1 and Table 6 and requires balancing for linear classification. In Fig. 4 and Secs. 9 and 11, we show qualitative results of nearest-neighbor objects discovered by OCN.

5.2 INSTANCE DETECTION AND TRACKING
An OCN embedding can be used to match instances of the same object across multiple views and over time. This is illustrated in Fig. 5, where objects of one view (anchors) are matched against the objects of another view. We can find the nearest neighbors (positives) in the scene through the OCN embedding space, as well as the closest matching objects with descending similarity (negatives). We report the quality of finding corresponding objects in Table 2 and differentiate between attribute errors, which indicate a mismatch of specific attributes (e.g. a blue cup is associated with a red cup), and object matching errors, which occur when matched objects are not of the same instance. An OCN embedding significantly improves detecting object instances across multiple views.

5.3 ROBOT EXPERIMENTS
Pointing: We evaluate the performance of OCN on a robotic pointing task (Fig. 6). The robot has to point to the object that it deems most similar to the object directly in front of it on the small table. The objects on the big table are randomly selected from each of the six object categories (Table 5). We consider two sets of these target objects. The quantitative experiment in Table 3 uses three query objects per category and is run three times for each combination of query and target objects (3 × 2 × 18 = 108 experiments performed). The full set of experiments for one of the three runs is illustrated in Fig. 15. Table 3 quantifies OCN performance on this experiment. We report errors related to the 'class' and 'container' attributes (note that the other attributes described in Sec. 4.1 are not relevant to the real object dataset). While the trained OCN model performs well on most categories, it has particular difficulty with the object classes 'cups & mugs' and 'glasses'. These categories are generally mistaken for the category 'bowls'. As a result, the network performs much better on the attribute 'container', since the three categories 'bowls', 'bottles & cans', and 'glasses' share the same attribute.
Grasping: We qualitatively evaluate OCN performance on a grasping task in an environment unseen during training. First, a person holds an object and shows it to the robot; then the robot picks up the most similar object from a set of objects on a table (see Fig. 7). In this experiment, we focus on evaluating OCN with objects that have either a similar shape or color attribute. Using OCN, the robot can successfully identify and grasp the object that has the closest color and shape attributes to the query object. Note that the training data did not contain objects held by a hand.
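The instance matching of Sec. 5.2 and the two error types of Table 2 can be sketched in a few lines; `ids_*` (instance identities) and `attrs_*` (attribute labels) are hypothetical ground-truth arrays for the evaluation, and the function name is ours:

```python
import numpy as np

def match_and_score(emb_a, ids_a, attrs_a, emb_b, ids_b, attrs_b):
    # Match each object crop in view A to its nearest neighbor in view B,
    # then report object matching errors (matched crops are not the same
    # instance) and attribute errors (e.g. a blue cup matched to a red cup).
    d2 = ((emb_a[:, None, :] - emb_b[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    object_error = (ids_b[nn] != ids_a).mean()
    attribute_error = (attrs_b[nn] != attrs_a).mean()
    return object_error, attribute_error
```

The pointing and grasping behaviors of Sec. 5.3 reduce to the same nearest-neighbor retrieval, with the query object's embedding compared against the embeddings of the candidate objects on the table.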
6 CONCLUSION
We introduced a novel unsupervised representation learning algorithm that allows us to differentiate object attributes such as color, shape, and function. An OCN embedding is learned by contrasting the features of objects captured from two frames of single-view camera trajectories of table-top indoor environments. We specifically attend to individual objects by detecting object bounding boxes and leverage a metric learning loss to disentangle subtle variations of object attributes. The resulting embedding space allows organizing objects along multiple dimensions and serves as a representation for robotic learning. We show that an OCN embedding can be used for real robotic tasks such as grasping and pointing, where it is important to differentiate visual and semantic attributes of individual object instances. Finally, we show that an OCN can be trained efficiently from RGB videos that are automatically obtained from a real robotic agent.

7 DATASET

8 RANDOM WEIGHTS
We find in Table 6 that models that are not pretrained with ImageNet supervision perform worse but still yield reasonable results. This indicates that the approach does not rely on a good initialization to bootstrap itself without labels. Even more surprisingly, when freezing the weights of the ResNet50 base of the model at their random initialization, results degrade but error rates still remain far below chance, as well as below the 'ResNet50 embeddings' baseline. Obtaining reasonable results with random weights has already been observed in prior work such as Jarrett et al. (2009), Saxe et al. (2011), and Ulyanov et al. (2017).

9 ADDITIONAL QUALITATIVE RESULTS

10 ADDITIONAL ROBOTIC POINTING RESULTS

11 ADDITIONAL NEAREST NEIGHBOR RESULTS
1. What is the main contribution of the paper regarding self-supervised learning? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the significance of the problem setup and the chosen approach? 4. What are some concerns regarding the experimental results and their implications? 5. Are there any related works that should be considered and discussed in the paper?
Review
Review
This paper explored self-supervised learning of object representations. The main idea is to encourage objects with similar features to get further 'attracted' to each other. The authors demonstrated that the system works on real objects with simple geometry. The problem of self-supervised learning from video data is an important one. This paper is making progress along this direction. The approach is new. The paper is in general well written and clear.
My main concern is that the problem setup is unnatural. I can imagine two types of approaches with video input: object-based or pixel-based. An object-based approach detects and tracks objects over time, and then learns object representations upon correspondence. Tracking objects is hard, but gives object correspondence as the basis for learning. A pixel-based approach does not detect objects, but learns dense feature representations for each pixel via signals such as optical flow. Here, training signals become noisier, but the algorithm no longer suffers from the errors that may arise in object detection. It can also generalize to objects that are hard to detect, such as soft bodies and liquids. The proposed system however lies in an uncanny valley: it performs object detection per frame (which is hard and noisy), but it then discards the temporal signals in the video and therefore loses object correspondence. To compensate for that, the authors proposed a sometimes incorrect heuristic (based on nearest neighbors) as supervision. The motivation behind such a design is unclear, and I'd love to hear the authors' feedback on it. The authors should also cite and discuss those pixel-based methods (see below).
The experimental results are neat, but not too impressive. The objects used are all rigid with simple geometry. As said before, such an approach would have a hard time generalizing to deformable objects and liquids. The results on the real robot are not very convincing, as the system doesn't really learn object representations that can be used for manipulation. For example, for the results in Fig 7, I assume the way of grasping is handcrafted, instead of learned by the network. Please let me know if I'm wrong.
In general, I find this paper interesting but not exciting, and it's unclear what we can really learn from it. I'm therefore on the border, leaning slightly toward rejection. I'm happy to adjust my rating based on the discussion and revision.
Related work
Self-supervised Visual Descriptor Learning for Dense Correspondence. ICRA 17.
Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. CoRL 18 (concurrent work, though on arXiv since June).
Unsupervised learning of object frames by dense equivariant image labelling. NIPS 17.
ICLR
Title SGD Learns One-Layer Networks in WGANs

Abstract Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.

1 INTRODUCTION
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a prominent framework for learning generative models of complex, real-world distributions given samples from these distributions. GANs and their variants have been successfully applied to numerous datasets and tasks, including image-to-image translation (Isola et al., 2017), image super-resolution (Ledig et al., 2017), domain adaptation (Tzeng et al., 2017), probabilistic inference (Dumoulin et al., 2016), compressed sensing (Bora et al., 2017), and many more. These advances owe in part to the success of Wasserstein GANs (WGANs) (Arjovsky et al., 2017; Gulrajani et al., 2017), which leverage the neural-net-induced integral probability metric to better measure the difference between a target and a generated distribution. Along with the afore-described empirical successes, there have been theoretical studies of the statistical properties of GANs; see e.g. (Zhang et al., 2018; Arora et al., 2017; 2018; Bai et al., 2018; Dumoulin et al., 2016) and their references. These works have shown that, with an appropriate design of the generator and discriminator, the global optimum of the WGAN objective identifies the target distribution with low sample complexity. On the algorithmic front, prior work has focused on the stability and convergence properties of gradient descent-ascent (GDA) and its variants in GAN training and in more general min-max optimization problems; see e.g. (Nagarajan & Kolter, 2017; Heusel et al., 2017; Mescheder et al., 2017; 2018; Daskalakis et al., 2017; Daskalakis & Panageas, 2018a;b; Gidel et al., 2019; Liang & Stokes, 2019; Mokhtari et al., 2019; Jin et al., 2019; Lin et al., 2019) and their references. It is known that, even in min-max optimization problems with convex-concave objectives, GDA may fail to compute the min-max solution and may even exhibit divergent behavior. Hence, these works have studied conditions under which GDA converges to a globally optimal solution under a convex-concave objective, or to different types of locally optimal solutions under nonconvex-concave or nonconvex-nonconcave objectives. They have also identified variants of GDA with better stability properties in both theory and practice, most notably those using negative momentum. In the context of GAN training, Feizi et al. (2017) show that for WGANs with a linear generator and quadratic discriminator, GDA succeeds in learning a Gaussian using polynomially many samples in the dimension. In the same vein, we are, to our knowledge, the first to study the global convergence properties of stochastic GDA in the GAN setting and to establish such guarantees for non-linear generators. In particular, we study the WGAN formulation for learning a single-layer generative model with some reasonable choices of activations, including tanh, sigmoid, and leaky ReLU.

Our contributions.
For a WGAN with a one-layer generator network using an activation from a large family of functions and a quadratic discriminator, we show that stochastic gradient descent-ascent learns the target distribution using polynomial time and samples, under the assumption that the target distribution is realizable in the architecture of the generator. This is achieved by (a) an analysis of the dynamics of stochastic gradient descent-ascent showing that it attains a global optimum of the minmax problem, and (b) an appropriate design of the discriminator ensuring a parametric $O(1/\sqrt{n})$ statistical rate (Zhang et al., 2018; Bai et al., 2018).

Related Work. We briefly review relevant results in GAN training and learning generative models:
- Optimization viewpoint. For standard GANs and WGANs with appropriate regularization, Nagarajan & Kolter (2017), Mescheder et al. (2017), and Heusel et al. (2017) establish sufficient conditions to achieve local convergence and stability properties for GAN training. If the Jacobian of the associated gradient vector field has only eigenvalues with negative real part at the equilibrium point, GAN training is verified to converge locally for small enough learning rates. A follow-up paper by Mescheder et al. (2018) shows the necessity of these conditions by identifying a prototypical counterexample that is not always locally convergent with gradient-descent-based GAN optimization. However, the lack of global convergence prevents the analysis from providing any guarantees of learning the real distribution. The work of Feizi et al. (2017) described above has similar goals as our paper, namely understanding the convergence properties of basic dynamics in simple WGAN formulations. However, they only consider linear generators, which restrict the WGAN model to learning a Gaussian. Our work goes a step further, considering WGANs whose generators are one-layer neural networks with a broad selection of activations. We show that with a proper gradient-based algorithm, we can still recover the ground-truth parameters of the underlying distribution. More broadly, WGANs typically result in nonconvex-nonconcave min-max optimization problems. In these problems, a global min-max solution may not exist, and there are various notions of local min-max solutions, namely local min-local max solutions (Daskalakis & Panageas, 2018b) and local min solutions of the max objective (Jin et al., 2019), the latter being guaranteed to exist under mild conditions. In fact, Lin et al. (2019) show that GDA is able to find stationary points of the max objective in nonconvex-concave problems. Given that GDA may not even converge for convex-concave objectives, another line of work has studied variants of GDA that exhibit global convergence to the min-max solution (Daskalakis et al., 2017; Daskalakis & Panageas, 2018a; Gidel et al., 2019; Liang & Stokes, 2019; Mokhtari et al., 2019), which is established for GDA variants that add negative momentum to the dynamics. While the convergence of GDA with negative momentum is shown in convex-concave settings, there is experimental evidence supporting that it improves GAN training (Daskalakis et al., 2017; Gidel et al., 2019).
- Statistical viewpoint. Several works have studied the issue of mode collapse. One might doubt the ability of GANs to actually learn the distribution versus just memorizing the training data (Arora et al., 2017; 2018; Dumoulin et al., 2016). Some corresponding cures have been proposed. For instance, Zhang et al. (2018) and Bai et al.
(2018) show that for specific generators combined with appropriate parametric discriminator designs, WGANs can attain parametric statistical rates, avoiding the exponential-in-dimension sample complexity (Liang, 2018; Bai et al., 2018; Feizi et al., 2017). Recent work of Wu et al. (2019) provides an algorithm to learn the distribution of a single-layer ReLU generator network. While our conclusion appears similar, our focus is very different. Our paper targets understanding when a WGAN formulation trained with stochastic GDA can learn in polynomial time and sample complexity. Their work instead relies on a specifically tailored algorithm for learning truncated normal distributions (Daskalakis et al., 2018).

2 PRELIMINARIES
We consider GAN formulations for learning a generator $G_A: \mathbb{R}^k \rightarrow \mathbb{R}^d$ of the form $z \mapsto x = \phi(Az)$, where $A$ is a $d \times k$ parameter matrix and $\phi$ some activation function. We consider discriminators $D_v: \mathbb{R}^d \rightarrow \mathbb{R}$ or $D_V: \mathbb{R}^d \rightarrow \mathbb{R}$ that are linear or quadratic forms, respectively, for the different purposes of learning the marginals or the joint distribution. We assume latent variables $z$ are sampled from the normal $N(0, I_{k \times k})$, where $I_{k \times k}$ denotes the identity matrix of size $k$. The real/target distribution outputs samples $x \sim \mathcal{D} = G_{A^*}(N(0, I_{k_0 \times k_0}))$ for some ground-truth parameters $A^*$, where $A^*$ is $d \times k_0$, and we take $k \geq k_0$ for enough expressivity, choosing $k = d$ when $k_0$ is unknown. The Wasserstein GAN under our choice of generator and discriminator is naturally formulated as:

$\min_{A \in \mathbb{R}^{d \times k}} \max_{v \in \mathbb{R}^d} \left\{ f(A, v) \equiv \mathbb{E}_{x \sim \mathcal{D}} D_v(x) - \mathbb{E}_{z \sim N(0, I_{k \times k})} D_v(G_A(z)) \right\}.$

(We will replace $v$ by $V \in \mathbb{R}^{d \times d}$ when necessary.) We use $a_i$ to denote the $i$-th row vector of $A$. We sometimes omit the 2-subscript, using $\|x\|$ to denote the 2-norm of a vector $x$ and $\|X\|$ to denote the spectral norm of a matrix $X$. $S^n \subset \mathbb{R}^{n \times n}$ represents the set of symmetric matrices of dimension $n \times n$. We use $Df(X_0)[B]$ to denote the directional derivative of a function $f$ at point $X_0$ in direction $B$: $Df(X_0)[B] = \lim_{t \to 0} \frac{f(X_0 + tB) - f(X_0)}{t}$.

3 WARM-UP: LEARNING THE MARGINAL DISTRIBUTIONS
As a warm-up, we ask whether a simple linear discriminator is sufficient for the purpose of learning the marginal distributions of all coordinates of $\mathcal{D}$. Notice that in our setting, the $i$-th output of the generator is $\phi(x)$ where $x \sim N(0, \|a_i\|^2)$, and is thus solely determined by $\|a_i\|^2$. With a linear discriminator $D_v(x) = v^\top x$, our minimax game becomes:

$\min_{A \in \mathbb{R}^{d \times k}} \max_{v \in \mathbb{R}^d} \left\{ f_1(A, v) \equiv \mathbb{E}_{x \sim \mathcal{D}}[v^\top x] - \mathbb{E}_{z \sim N(0, I_{k \times k})}[v^\top \phi(Az)] \right\}. \quad (1)$

Notice that when the activation $\phi$ is an odd function, such as the tanh activation, the symmetry of the Gaussian distribution ensures that $\mathbb{E}_{x \sim \mathcal{D}}[v^\top x] = 0$, hence the linear discriminator in $f_1$ reveals no information about $A^*$. Therefore, specifically for odd (or odd-plus-a-constant) activations, we instead use an adjusted rectified linear discriminator $D_v(x) \equiv v^\top R(x - C)$ to enforce some bias, where $C = \frac{1}{2}(\phi(x) + \phi(-x))$ for all $x$, and $R$ denotes the ReLU activation. Formally, we slightly modify our loss function as:

$\bar{f}_1(A, v) \equiv \mathbb{E}_{x \sim \mathcal{D}}[v^\top R(x - C)] - \mathbb{E}_{z \sim N(0, I_{k \times k})}[v^\top R(\phi(Az) - C)]. \quad (2)$

We will show that we can learn each marginal of $\mathcal{D}$ if the activation function $\phi$ satisfies the following.

Assumption 1. The activation function $\phi$ satisfies either one of the following: 1. $\phi$ is an odd function plus constant, and $\phi$ is monotone increasing; 2. the even component of $\phi$, i.e. $\frac{1}{2}(\phi(x) + \phi(-x))$, is positive and monotone increasing on $x \in [0, \infty)$.

Remark 1. All common activation functions, like (leaky) ReLU, tanh, or the sigmoid function, satisfy Assumption 1.

Lemma 1.
Suppose $A^* \neq 0$. Consider $f_1$ with an activation satisfying Assumption 1.2 and $\bar{f}_1$ with an activation satisfying Assumption 1.1. The stationary points of such $f_1$ and $\bar{f}_1$ yield parameters $A$ satisfying $\|a_i\| = \|a_i^*\|, \forall i \in [d]$.

To bound the capacity of the discriminator, similarly to the Lipschitz constraint in WGAN, we regularize the discriminator. For the regularized formulation we have:

Theorem 1. In the same setting as Lemma 1, alternating gradient descent-ascent with proper learning rates on $\min_A \max_v \{f_1(A, v) - \|v\|^2/2\}$, or respectively $\min_A \max_v \{\bar{f}_1(A, v) - \|v\|^2/2\}$, recovers $A$ such that $\|a_i\| = \|a_i^*\|, \forall i \in [d]$.

All the proofs of the paper can be found in the appendix. We show that all local min-max points, in the sense of (Jin et al., 2019), of the original problem are global min-max points and recover the correct norm of $a_i^*, \forall i$. Notice that for the source data distribution $x = (x_1, x_2, \cdots, x_d) \sim \mathcal{D}$ with activation $\phi$, the marginal distribution of each $x_i$ follows $\phi(N(0, \|a_i^*\|^2))$ and is determined by $\|a_i^*\|$. Therefore we have learned the marginal distribution for each entry $i$. It remains to learn the joint distribution.

4 LEARNING THE JOINT DISTRIBUTION
In the previous section, we utilized a (rectified) linear discriminator, such that each coordinate $v_i$ interacts with the $i$-th random variable. With the (rectified) linear discriminator, the WGAN learns the correct $\|a_i\|$ for all $i$. However, since there is no interaction between different coordinates of the random vector, we do not expect to learn the joint distribution with a linear discriminator. To proceed, a natural idea is to use a quadratic discriminator $D_V(x) := x^\top V x = \langle xx^\top, V \rangle$ to enforce component interactions. Similarly to the previous section, we study the regularized version:

$\min_{A \in \mathbb{R}^{d \times k}} \max_{V \in \mathbb{R}^{d \times d}} \left\{ f_2(A, V) - \frac{1}{2}\|V\|_F^2 \right\}, \quad (3)$

where $f_2(A, V) = \mathbb{E}_{x \sim \mathcal{D}} D_V(x) - \mathbb{E}_{z \sim N(0, I_{k \times k})} D_V(\phi(Az)) = \left\langle \mathbb{E}_{x \sim \mathcal{D}}[xx^\top] - \mathbb{E}_{z \sim N(0, I_{k \times k})}[\phi(Az)\phi(Az)^\top], V \right\rangle$.

By adding a regularizer on $V$ and explicitly maximizing over $V$:

$g(A) \equiv \max_V \left\{ f_2(A, V) - \frac{1}{2}\|V\|_F^2 \right\} = \frac{1}{2}\left\| \mathbb{E}_{x \sim \mathcal{D}}[xx^\top] - \mathbb{E}_{z \sim N(0, I_{k \times k})}[\phi(Az)\phi(Az)^\top] \right\|_F^2.$

In the next subsection, we first focus on analyzing the second-order stationary points of $g$; then we establish that gradient descent-ascent converges to second-order stationary points of $g$.

4.1 GLOBAL CONVERGENCE FOR OPTIMIZING THE GENERATING PARAMETERS
We first assume that both $A$ and $A^*$ have unit row vectors, and then extend to the general case, since we already know how to learn the row norms from Section 3. To explicitly compute $g(A)$, we rely on properties of Hermite polynomials. Since the normalized Hermite polynomials $\{h_i\}_{i=0}^\infty$ form an orthonormal basis of the function space, we rewrite the activation function as $\phi(x) = \sum_{i=0}^\infty \sigma_i h_i(x)$, where $\sigma_i$ is the $i$-th Hermite coefficient. We use the following claim:

Claim 1 ((Ge et al., 2017), Claim 4.2). Let $\phi$ be a function from $\mathbb{R}$ to $\mathbb{R}$ such that $\phi \in L^2(\mathbb{R}, e^{-x^2/2})$, and let its Hermite expansion be $\phi = \sum_i \sigma_i h_i$. Then, for any unit vectors $u, v \in \mathbb{R}^d$, we have $\mathbb{E}_{x \sim N(0, I_{d \times d})}\left[\phi(u^\top x)\phi(v^\top x)\right] = \sum_{i=0}^\infty \sigma_i^2 (u^\top v)^i$.

Therefore we can compute the value of $f_2$ explicitly using the Hermite expansion:

$f_2(A, V) = \left\langle \sum_{i=0}^\infty \sigma_i^2 \left( (A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i} \right), V \right\rangle.$

Here $X^{\circ i}$ is the Hadamard power operation, where $(X^{\circ i})_{jk} = (X_{jk})^i$. Therefore we have:

$g(A) = \frac{1}{2}\left\| \sum_{i=0}^\infty \sigma_i^2 \left( (A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i} \right) \right\|_F^2.$

We reparametrize with $Z = AA^\top$ and define $\tilde{g}(Z) = g(A)$ with individual component functions $\tilde{g}_{jk}(z) \equiv \frac{1}{2}\left( \sum_{i=0}^\infty \sigma_i^2 ((z_{jk}^*)^i - z^i) \right)^2$.
Here $z_{jk}^* = \langle a_j^*, a_k^* \rangle$ is the $(j,k)$-th component of the ground-truth covariance matrix $A^*(A^*)^\top$.

Assumption 2. The activation function $\phi$ is an odd function plus constant. In other words, its Hermite expansion $\phi = \sum_{i=0}^\infty \sigma_i h_i$ satisfies $\sigma_i = 0$ for even $i \geq 2$. Additionally, we assume $\sigma_1 \neq 0$.

Remark 2. Common activations like tanh and sigmoid satisfy Assumption 2.

Lemma 2. For activations including leaky ReLU and functions satisfying Assumption 2, $\tilde{g}(Z)$ has a unique stationary point, where $Z = A^*(A^*)^\top$.

Notice that $\tilde{g}(Z) = \sum_{jk} \tilde{g}_{jk}(z_{jk})$ is separable across the $z_{jk}$, where each $\tilde{g}_{jk}$ is a polynomial scalar function. Lemma 2 comes from the fact that the only zero of $\tilde{g}'_{jk}$ is $z_{jk} = z_{jk}^*$, for odd activations $\phi$ and leaky ReLU. We then migrate this good property to the original problem we want to solve:

Problem 1. We optimize over the function $g$ when $\|a_i^*\| = 1, \forall i$:
$\min_A g(A) = \frac{1}{2}\left\| \sum_{i=0}^\infty \sigma_i^2 \left( (A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i} \right) \right\|_F^2$ s.t. $a_i^\top a_i = 1, \forall i$.

Existing work (Journée et al., 2008) connects $\tilde{g}(Z)$ to the optimization over the factorized version $g(A)$ (with $g(A) \equiv \tilde{g}(AA^\top)$). Specifically, when $k = d$, all second-order stationary points of $g(A)$ are first-order stationary points of $\tilde{g}(Z)$. Though $\tilde{g}$ is not convex, we are able to show that its first-order stationary points are global optima when the generator is sufficiently expressive, i.e., $k \geq k_0$. In reality we do not know the latent dimension $k_0$, so we simply choose $k = d$. We make the following conclusion:

Theorem 2. For activations including leaky ReLU and functions satisfying Assumption 2, when $k = d$, all second-order KKT points of Problem 1 are its global minima. Therefore, alternating projected gradient descent-ascent on Eqn. (3) converges to $A$ with $AA^\top = A^*(A^*)^\top$.

The extension to non-unit row vectors is straightforward, and we defer the analysis to the Appendix.

5 FINITE SAMPLE ANALYSIS

Algorithm 1: Online stochastic gradient descent-ascent on WGAN
1: Input: $n$ training samples $x_1, x_2, \cdots, x_n$, where each $x_i \sim \phi(A^* z)$, $z \sim N(0, I_{k \times k})$; learning rate $\eta$ for the generating parameters; number of iterations $T$.
2: Randomly initialize the generating matrix $A^{(0)}$.
3: for $t = 1, 2, \cdots, T$ do
4: Generate $m$ latent variables $z_1^{(t)}, z_2^{(t)}, \cdots, z_m^{(t)} \sim N(0, I_{k \times k})$ for the generator. The empirical objective becomes
$\tilde{f}_{m,n}^{(t)}(A, V) = \left\langle \frac{1}{m} \sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n} \sum_{i=1}^n x_i x_i^\top, V \right\rangle - \frac{1}{2}\|V\|^2$
5: Gradient ascent on $V$ with optimal step size $\eta_V = 1$: $V^{(t)} \leftarrow V^{(t-1)} + \eta_V \nabla_V \tilde{f}_{m,n}^{(t)}(A^{(t-1)}, V^{(t-1)})$.
6: Sample noise $e$ uniformly from the unit sphere.
7: Projected gradient descent on $A$, with constraint set $\mathcal{C} = \{A \mid (AA^\top)_{ii} = (A^*(A^*)^\top)_{ii}\}$: $A^{(t)} \leftarrow \mathrm{Proj}_{\mathcal{C}}\left(A^{(t-1)} - \eta\left(\nabla_A \tilde{f}_{m,n}^{(t)}(A^{(t-1)}, V^{(t)}) + e\right)\right)$.
8: end for
9: Output: $A^{(T)}(A^{(T)})^\top$

In this section, we analyze Algorithm 1, i.e., gradient descent-ascent on the following:

$\tilde{f}_{m,n}^{(t)}(A, V) = \left\langle \frac{1}{m} \sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n} \sum_{i=1}^n x_i x_i^\top, V \right\rangle - \frac{1}{2}\|V\|^2. \quad (4)$

Notice that in each iteration, gradient ascent with step size 1 finds the optimal solution for $V$. By Danskin's theorem (Danskin, 2012), our min-max optimization is then essentially gradient descent over $\tilde{g}_{m,n}^{(t)}(A) \equiv \max_V \tilde{f}_{m,n}^{(t)}(A, V) = \frac{1}{2}\left\|\frac{1}{m}\sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_i x_i^\top\right\|_F^2$ with a batch of samples $\{z_i^{(t)}\}$, i.e., stochastic gradient descent on $f_n(A) \equiv \mathbb{E}_{z_i \sim N(0, I_{k \times k}), \forall i \in [m]}[\tilde{g}_{m,n}(A)]$.
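Following the Danskin observation, the inner maximization can be folded in, and each iteration reduces to a noisy projected SGD step on $\tilde{g}_{m,n}^{(t)}$. Below is a minimal PyTorch sketch of this reduction, assuming unit row norms for $A^*$ (so the projection onto $\mathcal{C}$ is row normalization) and substituting isotropic Gaussian noise for the unit-sphere perturbation; all names are our own:

```python
import torch

def train_wgan_generator(X, k, phi=torch.tanh, eta=1e-2, T=20000, m=512,
                         noise_scale=1e-3):
    """Sketch of Algorithm 1 after folding in the optimal discriminator:
    SGD on 0.5 * || (1/m) sum phi(Az)phi(Az)^T - (1/n) sum x x^T ||_F^2."""
    n, d = X.shape
    Xn = (X.T @ X) / n                        # observed second moment
    A = torch.randn(d, k)
    A /= A.norm(dim=1, keepdim=True)          # start on the constraint set
    A.requires_grad_(True)
    for _ in range(T):
        Z = torch.randn(m, k)                 # fresh latent mini-batch
        G = phi(Z @ A.T)                      # (m, d) generated samples
        M = (G.T @ G) / m                     # generated second moment
        loss = 0.5 * ((M - Xn) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            A -= eta * (A.grad + noise_scale * torch.randn_like(A))
            A /= A.norm(dim=1, keepdim=True)  # project rows back to unit norm
            A.grad = None
    return A.detach()
```

With tanh or leaky ReLU activations and large enough n = m, one would expect the output $AA^\top$ to approach $A^*(A^*)^\top$, in line with Theorem 3 below and the simulations in Section 6.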
Therefore, to bound the difference between $f_n(A)$ and the population risk $g(A)$, we analyze the sample complexity required on the observation side ($x_i \sim \mathcal{D}$, $i \in [n]$) and the mini-batch size required on the learning side ($\phi(Az_j)$, $z_j \sim N(0, I_{k \times k})$, $j \in [m]$). We will show that with large enough $n, m$, Algorithm 1, which optimizes the empirical risk, yields the ground-truth covariance matrix with high probability. Our proof sketch is roughly as follows:
1. With high probability, projected stochastic gradient descent finds a second-order stationary point $\hat{A}$ of $f_n(\cdot)$, as shown in Theorem 31 of (Ge et al., 2015).
2. For sufficiently large $m$, our empirical objective, though a biased estimator of the population risk $g(\cdot)$, achieves a good $\epsilon$-approximation to the population risk in both gradient and Hessian (Lemmas 4 & 5). Therefore $\hat{A}$ is also an $O(\epsilon)$-approximate second-order stationary point (SOSP) of the population risk $g(A)$.
3. We show that any $\epsilon$-SOSP $\hat{A}$ of $g(A)$ yields an $O(\epsilon)$-first-order stationary point (FOSP) $\hat{Z} \equiv \hat{A}\hat{A}^\top$ of the semi-definite program on $\tilde{g}(Z)$ (Lemma 6).
4. We show that any $O(\epsilon)$-FOSP of the function $\tilde{g}(Z)$ induces at most $O(\epsilon)$ absolute error compared to the ground-truth covariance matrix $Z^* = A^*(A^*)^\top$ (Lemma 7).

5.1 OBSERVATION SAMPLE COMPLEXITY
For simplicity, we assume the activation and its gradient are Lipschitz continuous, and let the Lipschitz constants be 1 w.l.o.g.:

Assumption 3. Assume the activation is 1-Lipschitz and 1-smooth.

To estimate the observation sample complexity, we bound the gradient and Hessian of the population risk and of the empirical risk on the observation samples:

$g(A) \equiv \frac{1}{2}\left\| \mathbb{E}_{x \sim \mathcal{D}}[xx^\top] - \mathbb{E}_{z \sim N(0, I_{k \times k})}[\phi(Az)\phi(Az)^\top] \right\|_F^2$, and $g_n(A) \equiv \frac{1}{2}\left\| \frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_{z \sim N(0, I_{k \times k})}[\phi(Az)\phi(Az)^\top] \right\|_F^2.$

Claim 2. $\nabla g(A) - \nabla g_n(A) = 2\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top\right]$, where $X = \mathbb{E}_{x \sim \mathcal{D}}[xx^\top]$ and $X_n = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top$. The directional derivative in an arbitrary direction $B$ is:

$D\nabla g(A)[B] - D\nabla g_n(A)[B] = 2\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X_n - X)(\phi'(Az) \circ (Bz))z^\top\right] + 2\mathbb{E}_z\left[\mathrm{diag}(\phi''(Az) \circ (Bz))(X_n - X)\phi(Az)z^\top\right].$

Lemma 3. Suppose the activation satisfies Assumption 3. Then $\Pr[\|X - X_n\| \leq \epsilon\|X\|] \geq 1 - \delta$ for $n \geq \tilde{\Theta}(d/\epsilon^2 \log^2(1/\delta))$. (Here $\tilde{\Theta}$ hides log factors of $d$ for simplicity.)

Lemma 4. Suppose the activation satisfies Assumptions 2 & 3. With $n \geq \tilde{\Theta}(d/\epsilon^2 \log^2(1/\delta))$ samples, $\|\nabla g(A) - \nabla g_n(A)\|_2 \leq O(\epsilon d \|A\|^2)$ with probability $1 - \delta$. Meanwhile, $\|D\nabla g(A)[B] - D\nabla g_n(A)[B]\|_2 \leq O(\epsilon d^{3/2}\|A\|^2\|B\|^2)$ with probability $1 - \delta$.

5.2 BOUNDING MINI-BATCH SIZE
Normally, for the empirical risk in supervised learning, the mini-batch size can be arbitrarily small, since the gradient estimator is unbiased. In the WGAN setting, however, notice that in each iteration we randomly sample a batch of latent variables $\{z_i\}_{i \in [m]}$ and obtain a gradient of $\tilde{g}_{m,n}(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \frac{1}{m}\sum_{j=1}^m \phi(Az_j)\phi(Az_j)^\top\right\|_F^2$ in Algorithm 1. Since the finite sum sits inside the Frobenius norm, the gradient on each mini-batch may no longer be an unbiased estimator of our target $g_n(A) = \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_z[\phi(Az)\phi(Az)^\top]\right\|_F^2$. In other words, we conduct stochastic gradient descent on the function $f(A) \equiv \mathbb{E}_z\, \tilde{g}_{m,n}(A)$. Therefore we just need to analyze the gradient error between this $f(A)$ and $g_n(A)$ (i.e., to show that $\tilde{g}_{m,n}$ is almost an unbiased estimator of $g_n$). Finally, with the concentration bounds derived in the last subsection, we get the error bound between $f(A)$ and $g(A)$.

Lemma 5. The empirical risk $\tilde{g}_{m,n}$ is almost an unbiased estimator of $g_n$. Specifically, the expected function $f(A) = \mathbb{E}_{z_i \sim N(0, I_{k \times k}), i \in [m]}[\tilde{g}_{m,n}]$ satisfies $\|\nabla f(A) - \nabla g_n(A)\| \leq O(\frac{1}{m}\|A\|^3 d^2)$.
For an arbitrary direction matrix $B$, $\|D\nabla f(A)[B] - D\nabla g_n(A)[B]\| \leq O(\frac{1}{m}\|B\|\|A\|^3 d^{5/2})$.

In summary, we derive concentration bounds over the observation samples and mini-batch sizes, and show that the function $f(A)$ that Algorithm 1 optimizes has gradient and Hessian close to those of the population risk $g(A)$. Therefore a second-order stationary point (SOSP) of $f(A)$ (which our algorithm is guaranteed to reach) is also an approximate SOSP of $g(A)$. Next we show that such a point also yields an approximate first-order stationary point of the reparametrized function $\tilde{g}(Z) \equiv g(A)$, $Z = AA^\top$.

5.3 RELATION ON APPROXIMATE OPTIMALITY
In this subsection, we establish the relationship between $\tilde{g}$ and $g$. We present the general form of our target Problem 1:

$\min_{A \in \mathbb{R}^{d \times k}} g(A) \equiv \tilde{g}(AA^\top)$ s.t. $\mathrm{Tr}(A^\top X_i A) = y_i$, $X_i \in S$, $y_i \in \mathbb{R}$, $i = 1, \cdots, n$. (5)

Similarly to the previous section, the stationarity properties might not be obvious for the original problem. Instead, we can look at the reparametrized version:

$\min_{Z \in S} \tilde{g}(Z)$ s.t. $\mathrm{Tr}(X_i Z) = y_i$, $X_i \in S$, $y_i \in \mathbb{R}$, $i = 1, \cdots, n$, and $Z \succeq 0$. (6)

Definition 1. A matrix $A \in \mathbb{R}^{d \times k}$ is called an $\epsilon$-approximate second-order stationary point ($\epsilon$-SOSP) of Eqn. (5) if there exists a vector $\lambda$ such that:
$\mathrm{Tr}(A^\top X_i A) = y_i$, $i \in [n]$;
$\|(\nabla_Z \tilde{g}(AA^\top) - \sum_{i=1}^n \lambda_i X_i)\tilde{a}_j\| \leq \epsilon\|\tilde{a}_j\|$, where $\{\tilde{a}_j\}_j$ span the column space of $A$;
$\mathrm{Tr}(B^\top D\nabla_A L(A, \lambda)[B]) \geq -\epsilon\|B\|^2$ for all $B$ s.t. $\mathrm{Tr}(B^\top X_i A) = 0$.
Here $L(A, \lambda)$ is the Lagrangian $\tilde{g}(AA^\top) - \sum_{i=1}^n \lambda_i(\mathrm{Tr}(A^\top X_i A) - y_i)$. Specifically, when $\epsilon = 0$ the above definition is exactly the second-order KKT condition for optimizing (5). Next we present the approximate first-order KKT condition for (6):

Definition 2. A symmetric matrix $Z \in S^n$ is an $\epsilon$-approximate first-order stationary point ($\epsilon$-FOSP) of (6) if and only if there exist a vector $\sigma \in \mathbb{R}^n$ and a symmetric matrix $S$ such that the following hold:
$\mathrm{Tr}(X_i Z) = y_i$, $i \in [n]$, and $Z \succeq 0$;
$S \succeq -\epsilon I$, and $\|S\tilde{a}_j\| \leq \epsilon\|\tilde{a}_j\|$, where $\{\tilde{a}_j\}_j$ span the column space of $Z$;
$S = \nabla_Z \tilde{g}(Z) - \sum_{i=1}^n \sigma_i X_i$.

Lemma 6. Let the latent dimension be $k = d$. An $\epsilon$-SOSP of (5) with $A$ and $\lambda$ induces an $\epsilon$-FOSP of (6) with $Z$, $\sigma$, and $S$ satisfying $Z = AA^\top$, $\sigma = \lambda$, and $S = \nabla_Z \tilde{g}(AA^\top) - \sum_i \lambda_i X_i$.

Now it remains to show that an $\epsilon$-FOSP of $\tilde{g}(Z)$ indeed yields a good approximation of the ground-truth parameter matrix.

Lemma 7. If $Z$ is an $\epsilon$-FOSP of (6), then $\|Z - Z^*\|_F \leq O(\epsilon)$. Here $Z^* = A^*(A^*)^\top$ is the optimal solution of (6).

Together with the previous arguments, we finally arrive at our main theorem connecting the recovery guarantee with the sample complexity and batch size:

Theorem 3. For arbitrary $\delta < 1$ and $\epsilon$, given a small enough learning rate $\eta < 1/\mathrm{poly}(d, 1/\epsilon, \log(1/\delta))$, sample size $n \geq \tilde{\Theta}(d^5/\epsilon^2 \log^2(1/\delta))$, batch size $m \geq O(d^5/\epsilon)$, and large enough $T = \mathrm{poly}(1/\eta, 1/\epsilon, d, \log(1/\delta))$, the output of Algorithm 1 satisfies $\|A^{(T)}(A^{(T)})^\top - Z^*\|_F \leq O(\epsilon)$ with probability $1 - \delta$, under Assumptions 2 & 3 and $k = d$. (The exact error bound uses the fact that, when the diagonal terms of $AA^\top$ are fixed, $\|A\|_2 = O(\sqrt{d})$.)

6 SIMULATIONS
In this section, we provide simple experimental results to validate the performance of stochastic gradient descent-ascent and to provide experimental support for our theory. We focus on Algorithm 1, which targets recovering the parameter matrix. We conduct a thorough empirical study of three joint factors that might affect performance: the number of observed samples (we set n = m, as in general GAN training algorithms), the choice of activation function $\phi$, and the output dimension $d$.
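As a complementary sanity check (our own, not an experiment from the paper), the Hermite identity of Claim 1 that underlies the landscape analysis can be verified numerically. The sketch below estimates the Hermite coefficients of $\phi = \tanh$ by Gauss-Hermite quadrature and compares the series $\sum_i \sigma_i^2 (u^\top v)^i$ with a Monte Carlo estimate of $\mathbb{E}[\phi(u^\top x)\phi(v^\top x)]$:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
phi = np.tanh
deg = 15

# sigma_i = E_{x~N(0,1)}[phi(x) h_i(x)], with h_i = He_i / sqrt(i!) the
# normalized probabilists' Hermite polynomials. Quadrature nodes/weights
# integrate against exp(-x^2/2), so divide by sqrt(2*pi) for an expectation.
xq, wq = He.hermegauss(60)
wq = wq / np.sqrt(2 * np.pi)
fact = np.cumprod([1.0] + list(range(1, deg + 1)))   # fact[i] = i!
sigma = np.array([(wq * phi(xq) * He.hermeval(xq, np.eye(deg + 1)[i])).sum()
                  / np.sqrt(fact[i]) for i in range(deg + 1)])

# Claim 1: E[phi(u^T x) phi(v^T x)] = sum_i sigma_i^2 (u^T v)^i for unit u, v.
d = 5
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v /= np.linalg.norm(v)
x = rng.normal(size=(200000, d))
mc = (phi(x @ u) * phi(x @ v)).mean()                # Monte Carlo estimate
series = (sigma ** 2 * (u @ v) ** np.arange(deg + 1)).sum()
print(mc, series)   # the two values should agree to a few decimals
```

For odd activations like tanh, the even-order coefficients vanish (up to numerical error), which is exactly the structure that Assumption 2 exploits.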
In Figure 1 we plot how the relative error of parameter estimation decreases as the sample complexity increases. We fix the hidden dimension to $k = 2$ and vary the output dimension over {3, 5, 7} and the sample complexity over {500, 1000, 2000, 5000, 10000}. Reported values are averaged over 20 runs, and we show the standard deviation as the corresponding colored shadow. Clearly, the recovery error decreases with higher sample complexity and smaller output dimension. To visually demonstrate the learning process, we also include a simple comparison of different $\phi$, i.e. leaky ReLU and tanh activations, for $k = 1$ and $d = 2$. We set the ground-truth covariance matrix to [1, 1; 1, 1], so a valid result should be [1, 1] or [-1, -1]. From Figure 2 we can see that for both leaky ReLU and tanh, stochastic gradient descent-ascent performs similarly, with exact recovery of the ground-truth parameters.

7 CONCLUSION
We analyze the convergence of stochastic gradient descent-ascent for a Wasserstein GAN learning a single-layer generator network. We show that the stochastic gradient descent-ascent algorithm attains the global min-max point and provably recovers the parameters of the network with $\epsilon$ absolute error, measured in Frobenius norm, from $\tilde{\Theta}(d^5/\epsilon^2)$ i.i.d. samples.

A OMITTED PROOFS FOR LEARNING THE DISTRIBUTION
A.1 STATIONARY POINT FOR MATCHING THE FIRST MOMENT
Proof of Lemma 1. To start with, we consider odd-plus-constant monotone increasing activations. Notice that by proposing a rectified linear discriminator, we have essentially modified the activation function as $\tilde{\phi} := R(\phi - C)$, where $C = \frac{1}{2}(\phi(x) + \phi(-x))$ is the constant bias term of $\phi$. Observe that we can rewrite the objective $\bar{f}_1$ for this case as follows:

$f_1(A, v) = \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} v^\top \tilde{\phi}(A^*z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} v^\top \tilde{\phi}(Az).$

Moreover, notice that $\tilde{\phi}$ is positive and increasing on its support, which is $[0, +\infty)$. Now let us consider the other case in our statement, where $\phi$ has a positive and monotone increasing even component on $[0, +\infty)$. In this case, let us take $\tilde{\phi}(x) = \phi(x) + \phi(-x)$ for $x \geq 0$, and $\tilde{\phi}(x) = 0$ otherwise. Because of the symmetry of the Gaussian distribution, we can rewrite the objective function for this case in the same form:

$f_1(A, v) = \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} v^\top \tilde{\phi}(A^*z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} v^\top \tilde{\phi}(Az).$

To conclude, in both cases the optimization objective can be written as above, where $\tilde{\phi}$ satisfies Assumption 1.2 and is only non-zero on $[0, +\infty)$. The stationary points of this objective satisfy:

$\nabla_v f_1(A, v) = \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}(A^*z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(Az) = 0$, and $\nabla_{a_j} f_1(A, v) = -\mathbb{E}_{z \sim N(0, I_{k \times k})} v_j \tilde{\phi}'(a_j^\top z) z = 0.$

We focus on the gradient over $v$. To achieve $\nabla_v f_1(A, v) = 0$, a stationary point satisfies $\forall j$, $\mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}((a_j^*)^\top z) = \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(a_j^\top z)$, i.e.

$\forall j, \quad \mathbb{E}_{x \sim N(0, \|a_j^*\|^2)} \tilde{\phi}(x) = \mathbb{E}_{x' \sim N(0, \|a_j\|^2)} \tilde{\phi}(x'). \quad (7)$

To recap, for activations $\phi$ that satisfy Assumption 1, in both cases we have written the necessary stationarity condition as Eqn. (7), where $\tilde{\phi}$ is defined differently for odd and non-odd activations, but in both cases is positive and monotone increasing on its support $[0, \infty)$. We then argue that the only solutions of Eqn. (7) satisfy $\|a_j\| = \|a_j^*\|, \forall j$. This follows directly from the following claim:

Claim 3. The function $h(\alpha) := \mathbb{E}_{x \sim N(0, \alpha^2)} f(x)$, $\alpha > 0$, is a monotone increasing function if $f$ is positive and monotone increasing on its support $[0, \infty)$.

We can see from Claim 3 that the LHS and RHS of Eqn. (7) are simply $h(\|a_j^*\|)$ and $h(\|a_j\|)$, respectively, for each $j$.
Since $h$ is a monotone increasing function, the unique solution of $h(\|a_j\|) = h(\|a_j^*\|)$ is to match the norms: $\|a_j\| = \|a_j^*\|, \forall j$.

Proof of Claim 3. Up to the positive normalizing constant of the Gaussian density (which does not affect monotonicity),

$h(\alpha) = \mathbb{E}_{x \sim N(0, \alpha^2)} f(x) = \int_0^\infty f(x)\, e^{-\frac{x^2}{2\alpha^2}}\, dx \overset{y := x/\alpha}{=} \int_0^\infty \alpha f(\alpha y)\, e^{-\frac{y^2}{2}}\, dy = \mathbb{E}_{y \sim N(0,1)}\, \alpha f(\alpha y).$

Notice $h'(\alpha) = \mathbb{E}_{x \sim N(0,1)}[\alpha x f'(\alpha x) + f(\alpha x)]$. Since $f$ and $f'$ are positive, $\alpha > 0$, and we only care about the support of $f$, where $x$ is also positive, $h'$ is always positive and $h$ is monotone increasing. To sum up, at a stationary point where $\nabla f_1(A, v) = 0$, we have $\forall i$, $\|a_i^*\| = \|a_i\|$.

A.2 PROOF OF THEOREM 1
Proof of Theorem 1. We take optimal gradient ascent steps with learning rate 1 on the discriminator side $v$; hence the function we actually optimize over becomes (using the notation for $\tilde{\phi}$ from Section A.1):

$h(A) = \max_v f_1(A, v) = \frac{1}{2}\left\| \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}(A^*z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(Az) \right\|^2.$

We just want to verify that there is no spurious local minimum of $h(A)$. Notice that there is no interaction between the row vectors of $A$. Therefore we instead look at each

$h_i := \frac{1}{2}\left( \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}((a_i^*)^\top z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(a_i^\top z) \right)^2$

for each $i$. Now

$\nabla h_i(a_i) = -\left( \mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}((a_i^*)^\top z) - \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(a_i^\top z) \right)\left( \mathbb{E}_{z \sim N(0, I_{k \times k})} z \tilde{\phi}'(a_i^\top z) \right).$

Due to the symmetry of the Gaussian, we may take $a_i = a e_1$, where $a = \|a_i\|$. It is easy to see that checking whether $\mathbb{E}_{z \sim N(0, I_{k \times k})} z \tilde{\phi}'(a_i^\top z) = 0$ is equivalent to checking whether $\mathbb{E}_{z_1 \sim N(0,1)} z_1 \tilde{\phi}'(a z_1) = 0$. Recall that $\tilde{\phi}$ is supported on $[0, +\infty)$ and is monotonically increasing on its support. Hence $\mathbb{E}_{z_1 \sim N(0,1)} z_1 \tilde{\phi}'(a z_1) \neq 0$ unless $a = 0$. Hence, suppose $\|a_i\| \neq 0, \forall i$. Then $\nabla_A h(A) = 0$ iff $h(A) = 0$, i.e. $\mathbb{E}_{z \sim N(0, I_{k_0 \times k_0})} \tilde{\phi}(A^*z) = \mathbb{E}_{z \sim N(0, I_{k \times k})} \tilde{\phi}(Az)$. Therefore all stationary points of $h(A)$ are global minima where this equality holds, and according to Lemma 1, this only happens when $\|a_i\| = \|a_i^*\|, \forall i \in [d]$.

A.3 STATIONARY POINTS FOR THE WGAN WITH QUADRATIC DISCRIMINATOR
Proof of Lemma 2. To study the stationary points of $\tilde{g}(Z) = \sum_{jk} \tilde{g}_{jk}(z_{jk})$, we look at an individual $\tilde{g}_{jk}(z) \equiv \frac{1}{2}(\sum_{i=0}^\infty \sigma_i^2((z_{jk}^*)^i - z^i))^2$. Notice that for odd-plus-constant activations, $\sigma_i$ is zero for even $i > 0$. Recall that our assumption in Lemma 2 also requires $\sigma_1 \neq 0$. Since the analysis is invariant to the position in the matrix $Z$, we simplify the notation and study the stationary points of $f(a) = \frac{1}{2}(\sum_{i \text{ odd}} \sigma_i^2(a^i - b^i))^2$ for some constant $b$ and coefficients $\sigma_i$ with $\sigma_1 \neq 0$ (the zero-order component has been cancelled out). We have

$f'(a) = \left(\sum_{i \text{ odd}} \sigma_i^2(a^i - b^i)\right)\left(\sum_{i \text{ odd}} i\sigma_i^2 a^{i-1}\right) = (a - b)\left(\sigma_1^2 + \sum_{i \geq 3 \text{ odd}} \sigma_i^2 \frac{a^i - b^i}{a - b}\right)\left(\sigma_1^2 + \sum_{i \geq 3 \text{ odd}} i\sigma_i^2 a^{i-1}\right) = (a - b)(\mathrm{I})(\mathrm{II}).$

Notice now that $f'(a) = 0 \Leftrightarrow a = b$. This is because the polynomial $f'(a)$ factorizes into $a - b$ and the two factors I and II, which are always positive. Here we use $\frac{a^i - b^i}{a - b}$ to denote $\sum_{j=0}^{i-1} a^j b^{i-1-j}$, which is always nonnegative: $a^i - b^i$ always shares the same sign as $a - b$ when $i$ is odd. Therefore $\mathrm{I} = \sigma_1^2 + \sum_{i \geq 3 \text{ odd}} \sigma_i^2 \frac{a^i - b^i}{a - b} > 0$ for all $a$. Meanwhile, since $a^{i-1}$ is always nonnegative for each odd $i$, $\mathrm{II} = \sigma_1^2 + \sum_{i \geq 3 \text{ odd}} i\sigma_i^2 a^{i-1}$ is also always positive for any $a$.

Next, for an activation like ReLU, the loss is $\tilde{g}_{jk}(z) = \frac{1}{2}(h(z) - h(z_{jk}^*))^2$, where $h(x) = \frac{1}{\pi}\left(\sqrt{1 - x^2} + (\pi - \cos^{-1}(x))x\right)$ (Daniely et al., 2016). Therefore $h'(-1) = 0$ for any $z_{jk}^*$. This fact prevents us from reaching the same conclusion for ReLU. However, for leaky ReLU with leakage coefficient $\alpha \in (0, 1)$, $\phi(x) = \max\{x, \alpha x\} = (1 - \alpha)\mathrm{ReLU}(x) + \alpha x$.
We have
$$\mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\left[\phi(a_i^\top z)\phi(a_j^\top z)\right] = (1-\alpha)^2\,\mathbb{E}_z\,\mathrm{ReLU}(a_i^\top z)\,\mathrm{ReLU}(a_j^\top z) + (1-\alpha)\alpha\,\mathbb{E}_z\,\mathrm{ReLU}(a_i^\top z)\, a_j^\top z + (1-\alpha)\alpha\,\mathbb{E}_z\, a_i^\top z\,\mathrm{ReLU}(a_j^\top z) + \alpha^2\,\mathbb{E}_z\, a_i^\top z\, a_j^\top z = (1-\alpha)^2 h(a_i^\top a_j) + \alpha\, a_i^\top a_j.$$
Therefore, for leaky ReLU, $\tilde g_{jk}(z) = \frac{1}{2}\left((1-\alpha)^2\left(h(z) - h(z_{jk}^*)\right) + \alpha\left(z - z_{jk}^*\right)\right)^2$, and $\tilde g_{jk}'(z) = \left((1-\alpha)^2(h(z) - h(z_{jk}^*)) + \alpha(z - z_{jk}^*)\right)\left((1-\alpha)^2 h'(z) + \alpha\right)$. Now with $\alpha > 0$, $(1-\alpha)^2 h'(z) + \alpha \ge \alpha$ for all $z$, and $\tilde g_{jk}'(z) = 0 \Leftrightarrow z = z_{jk}^*$.

To sum up, for odd activations and leaky ReLU, since each $\tilde g_{jk}(z)$ has the unique stationary point $z = z_{jk}^*$, the stationary point $Z$ of $\tilde g(Z) = \sum_{jk} \tilde g_{jk}$ also satisfies $Z = Z^* = A^*(A^*)^\top$.

Proof of Theorem 2. Instead of directly looking at the second-order stationary points of Problem 1, we look at the following problem on its reparametrized version:

Problem 2.
$$\min_Z\ \tilde g(Z) = \frac{1}{2}\left\|\sum_{i=0}^\infty \sigma_i^2\left((Z^*)^{\circ i} - Z^{\circ i}\right)\right\|_F^2 \quad\text{s.t.}\quad z_{ii} = 1, \forall i,\quad Z \succeq 0.$$
Here $Z^* = A^*(A^*)^\top$ and satisfies $z_{ii}^* = 1, \forall i$. Compared to the function $g$ in the original Problem 1, it satisfies $\tilde g(AA^\top) \equiv g(A)$. A matrix $Z$ is a first-order stationary point of Problem 2 if there exists a vector $\sigma$ such that
$$z_{ii} = 1, \quad Z \succeq 0, \quad S \succeq 0, \quad SZ = 0, \quad S = \nabla_Z \tilde g(Z) - \mathrm{diag}(\sigma).$$
Therefore, for a stationary point $Z$, since $Z^* = A^*(A^*)^\top \succeq 0$ and $S \succeq 0$, we have $\langle S, Z^* - Z\rangle = \langle S, Z^*\rangle \ge 0$. Meanwhile,
$$\langle Z^* - Z, S\rangle = \langle Z^* - Z, \nabla_Z \tilde g(Z) - \mathrm{diag}(\sigma)\rangle = \langle Z^* - Z, \nabla_Z \tilde g(Z)\rangle \quad (\mathrm{diag}(Z^* - Z) = 0)$$
$$= \sum_{i,j}(z_{ij}^* - z_{ij})\,\tilde g_{ij}'(z_{ij}) = \sum_{i,j}(z_{ij} - z_{ij}^*)\,P(z_{ij})\,(z_{ij}^* - z_{ij}) \quad\text{(see the proof of Lemma 2 for the value of } \tilde g'\text{)}$$
$$= -\sum_{ij}(z_{ij} - z_{ij}^*)^2\, P(z_{ij}) \le 0 \quad (P \text{ is always positive}).$$
Therefore $\langle S, Z^* - Z\rangle = 0$, and this only happens when $Z = Z^*$. Finally, from Journée et al. (2008) we know that any first-order stationary point of Problem 2 is a second-order stationary point of our original Problem 1. (Footnote 5: Throughout the analysis of low-rank optimization in Journée et al. (2008), the function $\tilde g(Z)$ is required to be convex. However, by carefully scrutinizing the proof, one can see that this condition is not needed to build the connection between first-order and second-order stationary points of $g(A)$ and $\tilde g(Z)$. For more cautious readers, we also show a relaxed version in the next section, where the equivalence of SOSPs of $g$ and FOSPs of $\tilde g$ is a special case.) Therefore we conclude that all second-order stationary points of Problem 1 are global minima $A$: $AA^\top = A^*(A^*)^\top$.

A.4 LANDSCAPE ANALYSIS FOR NON-UNIT GENERATING VECTORS

In the previous argument, we simply assumed that each generating vector $a_i$ has norm 1. This simplifies the computation but is not practical. Since we are able to estimate $\|a_i\|$ for all $i$ first, we can analyze the landscape of our loss function for a general matrix $A$. The main tool is the multiplication theorem of Hermite functions:
$$h_n^\alpha(x) := h_n(\alpha x) = \sum_{i=0}^{\lfloor n/2\rfloor} \alpha^{n-2i}(\alpha^2 - 1)^i \binom{n}{2i}\frac{(2i)!}{i!}\,2^{-i}\, h_{n-2i}(x).$$
For ease of notation, we denote the coefficients by $\eta_\alpha^{n,i} := \alpha^{n-2i}(\alpha^2-1)^i \binom{n}{2i}\frac{(2i)!}{i!}2^{-i}$. We extend the calculation of Hermite inner products to non-standard distributions.

Lemma 8. Let $(x, y)$ be normal variables following the joint distribution $\mathcal N(0, [[\alpha^2, \alpha\beta\rho]; [\alpha\beta\rho, \beta^2]])$. Then
$$\mathbb{E}[h_m(x)h_n(y)] = \begin{cases} \sum_{i=0}^{\lfloor l/2\rfloor} \eta_\alpha^{l,i}\,\eta_\beta^{l,i}\,\rho^{l-2i} & \text{if } m \equiv n \pmod 2, \\ 0 & \text{otherwise.} \end{cases} \tag{8}$$
Here $l = \min\{m, n\}$.

Proof. Denote the normalized variables $\hat x = x/\alpha$, $\hat y = y/\beta$, and let $l = \min\{m, n\}$.
$$\mathbb{E}[h_m(x)h_n(y)] = \mathbb{E}[h_m^\alpha(\hat x)h_n^\beta(\hat y)] = \sum_{i=0}^{\lfloor m/2\rfloor}\sum_{j=0}^{\lfloor n/2\rfloor} \eta_\alpha^{m,i}\eta_\beta^{n,j}\,\mathbb{E}[h_{m-2i}(\hat x)h_{n-2j}(\hat y)] = \sum_{i=0}^{\lfloor m/2\rfloor}\sum_{j=0}^{\lfloor n/2\rfloor} \eta_\alpha^{m,i}\eta_\beta^{n,j}\,\delta_{(m-2i),(n-2j)}\,\rho^{n-2j} \quad\text{(Lemma ??)}$$
$$= \begin{cases} \sum_{i=0}^{\lfloor l/2\rfloor} \eta_\alpha^{l,i}\,\eta_\beta^{l,i}\,\rho^{l-2i} & \text{if } m \equiv n \pmod 2, \\ 0 & \text{otherwise.} \end{cases}$$
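The $\eta$ coefficients above can be sanity-checked numerically. The sketch below is our own illustration, not code from the paper; note that with these coefficients the identity holds exactly for the unnormalized probabilists' polynomials $He_n$, while the normalized basis $h_n = He_n/\sqrt{n!}$ would carry additional factors $\sqrt{(n-2i)!/n!}$.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial.hermite_e import hermeval

# Numerical check (our own illustration) of the Hermite multiplication
# theorem: He_n(alpha * x) = sum_i eta(alpha, n, i) * He_{n-2i}(x).

def He(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)          # unnormalized probabilists' Hermite He_n

def eta(alpha, n, i):
    return (alpha ** (n - 2 * i) * (alpha ** 2 - 1) ** i
            * comb(n, 2 * i) * factorial(2 * i) / factorial(i) / 2 ** i)

alpha, n = 1.7, 6
x = np.linspace(-2.0, 2.0, 9)
lhs = He(n, alpha * x)
rhs = sum(eta(alpha, n, i) * He(n - 2 * i, x) for i in range(n // 2 + 1))
print(np.max(np.abs(lhs - rhs)))   # ~1e-9: agreement up to rounding error
```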
Now the population risk becomes
$$g(A) = \frac{1}{2}\left\|\mathbb{E}_{x\sim\mathcal D}[xx^\top] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2 = \frac{1}{2}\sum_{i,j\in[d]}\left(\mathbb{E}_{z\sim\mathcal N(0, I_{k_0\times k_0})}\phi((a_i^*)^\top z)\phi((a_j^*)^\top z) - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\phi(a_i^\top z)\phi(a_j^\top z)\right)^2 \equiv \frac{1}{2}\sum_{i,j}\tilde g_{ij}(z_{ij}).$$
To simplify notation, for a specific pair $(i, j)$ we write $\hat x = a_i^\top z/\alpha$ with $\alpha = \|a_i\|$, and $\hat y = a_j^\top z/\beta$ with $\beta = \|a_j\|$. Namely, $(\hat x, \hat y) \sim \mathcal N(0, [[1, \rho]; [\rho, 1]])$, where $\rho = \cos\angle(a_i, a_j)$. Again, recall $\phi(\alpha\hat x) = \sum_{i\text{ odd}} \sigma_i h_i(\alpha\hat x) = \sum_{i\text{ odd}} \sigma_i h_i^\alpha(\hat x)$.
$$\mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(\alpha\hat x)\phi(\beta\hat y)] = \mathbb{E}\left[\sum_{m\text{ odd}}\sigma_m h_m^\alpha(\hat x)\sum_{n\text{ odd}}\sigma_n h_n^\beta(\hat y)\right] = \sum_{m,n\text{ odd}}\sigma_m\sigma_n\,\mathbb{E}[h_m^\alpha(\hat x)h_n^\beta(\hat y)] = \sum_{m\text{ odd}}\sigma_m\sum_{n\le m\text{ odd}}\sigma_n\sum_{k=0}^{\lfloor n/2\rfloor}\eta_\alpha^{n,k}\eta_\beta^{n,k}\rho^{n-2k}.$$
Therefore we can write out explicitly the coefficient of each term $\rho^k$, $k$ odd, as
$$c_k = \sum_{n\ge k\text{ odd}} \sigma_n\,\eta_\alpha^{n,\frac{n-k}{2}}\,\eta_\beta^{n,\frac{n-k}{2}}\Big(\sum_{m\ge n}\sigma_m\Big).$$
We have $\tilde g_{ij}(z_{ij}) = \left(\sum_{k\text{ odd}} c_k z_{ij}^k - \sum_{k\text{ odd}} c_k (z_{ij}^*)^k\right)^2$. Now suppose all $\sigma_i$ have the same sign, and either $\|a_i\| \ge 1$ for all $i$ or $\|a_i\| \le 1$ for all $i$; then each coefficient $c_k \ge 0$. Therefore the only stationary point of $\tilde g(Z)$ is still $Z^*$.

B OMITTED PROOFS FOR SAMPLE COMPLEXITY

B.1 OMITTED PROOFS FOR RELATION ON APPROXIMATE STATIONARY POINTS

Proof of Lemma 6. We first review what we want to prove. For a matrix $A$ that is an $\epsilon$-approximate SOSP of Eqn. (5), we define $S_A = \nabla_Z \tilde g(AA^\top) - \sum_{i=1}^n \lambda_i X_i$. The conditions ensure that $A$, $\lambda$, $S_A$ satisfy
$$\begin{cases} \mathrm{Tr}(A^\top X_i A) = y_i, \\ \|S_A \tilde a_i\|_2 \le \epsilon\|\tilde a_i\|_2, \quad \{\tilde a_j\}_j \text{ span the column space of } A, \\ \mathrm{Tr}(B^\top D_A\nabla_A\mathcal L(A,\lambda)[B]) \ge -\epsilon\|B\|_F^2, \quad \forall B \text{ s.t. } \mathrm{Tr}(B^\top X_i A) = 0. \end{cases} \tag{9}$$
We want to show that $Z := AA^\top$, $\sigma := \lambda$, and $S := S_A$ satisfy the conditions for an $\epsilon$-FOSP of Eqn. (6). Going over the conditions, it is easy to see that all the others hold automatically, and it remains to show $S_A \succeq -\epsilon I$. Noting that $\nabla_A\mathcal L(A,\lambda) = 2S_A A$, one has
$$\frac{1}{2}\mathrm{Tr}(B^\top D_A\nabla_A\mathcal L(A,\lambda)[B]) = \mathrm{Tr}(B^\top S_A B) + \mathrm{Tr}(B^\top D_A\nabla_Z\tilde g(AA^\top)[B]\,A) - \sum_{i=1}^n D_A\lambda_i[B]\,\mathrm{Tr}(B^\top X_i A) \quad\text{(from Lemma 5 of Journée et al. (2008))}$$
$$= \mathrm{Tr}(B^\top S_A B) + \mathrm{Tr}(AB^\top D_A\nabla_Z\tilde g(AA^\top)[B]) \tag{10}$$
(from Eqn. (9) we have $\mathrm{Tr}(B^\top X_i A) = 0$).

Notice that $A \in \mathbb R^{d\times k}$ and we have chosen $k = d$ for simplicity. We first argue the case where $A$ is rank-deficient, i.e. $\mathrm{rank}(A) < k$. Then there exists some vector $v \in \mathbb R^k$ such that $Av = 0$. Now for any vector $b \in \mathbb R^d$, let $B = bv^\top$, so that $AB^\top = Avb^\top = 0$. From (10) we further have
$$\frac{1}{2}\mathrm{Tr}(B^\top D_A\nabla_A\mathcal L(A,\lambda)[B]) = \mathrm{Tr}(B^\top S_A B) + \mathrm{Tr}(AB^\top D_A\nabla_Z\tilde g(AA^\top)[B]) = \mathrm{Tr}(vb^\top S_A bv^\top) = \|v\|^2\, b^\top S_A b \ge -\tfrac{\epsilon}{2}\|B\|_F^2 = -\tfrac{\epsilon}{2}\|v\|^2\|b\|^2,$$
where the inequality follows from (9). Therefore $b^\top S_A b \ge -\frac{\epsilon}{2}\|b\|^2$ for any $b$, i.e. $S_A \succeq -\frac{\epsilon}{2} I_{d\times d}$. On the other hand, when $A$ is full rank, the column space of $A$ is the entire vector space $\mathbb R^d$, and therefore $S_A \succeq -\epsilon I_{d\times d}$ follows directly from the second line of the $\epsilon$-SOSP definition.

B.2 DETAILED CALCULATIONS

Recall the population risk $g(A) \equiv \frac{1}{2}\left\|\mathbb{E}_{x\sim\mathcal D}[xx^\top] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2$. Write the empirical risk on the observations as
$$g_n(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2.$$

Claim 4. $\nabla g(A) - \nabla g_n(A) = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top\right]$, where $X = \mathbb{E}_{x\sim\mathcal D}[xx^\top]$ and $X_n = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top$.

Proof.
$$\nabla g(A) - \nabla g_n(A) = \nabla(g(A) - g_n(A)) = \frac{1}{2}\nabla\left\langle X - X_n,\; X + X_n - 2\,\mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\rangle = \nabla\left\langle X_n - X,\; \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\rangle.$$
Now write $S(A) = \phi(Az)\phi(Az)^\top$. Then
$$[S(A+\Delta A) - S(A)]_{ij} = \phi(a_i^\top z + \Delta a_i^\top z)\phi(a_j^\top z + \Delta a_j^\top z) - \phi(a_i^\top z)\phi(a_j^\top z) = \phi'(a_i^\top z)\,\Delta a_i^\top z\,\phi(a_j^\top z) + \phi'(a_j^\top z)\,\Delta a_j^\top z\,\phi(a_i^\top z) + O(\|\Delta A\|^2).$$
Therefore
$$[S(A+\Delta A) - S(A)]_{i:} = \phi'(a_i^\top z)\,\Delta a_i^\top z\,\phi(Az)^\top + (\phi'(Az)\circ\Delta Az)^\top\phi(a_i^\top z) + O(\|\Delta A\|^2),$$
and hence
$$S(A+\Delta A) - S(A) = \mathrm{diag}(\phi'(Az))\,\Delta Az\,\phi(Az)^\top + \phi(Az)z^\top\Delta A^\top\mathrm{diag}(\phi'(Az)) + O(\|\Delta A\|^2). \tag{11}$$
Then
$$g(A+\Delta A) - g_n(A+\Delta A) - (g(A) - g_n(A)) = \langle X_n - X,\ \mathbb{E}_z[S(A+\Delta A) - S(A)]\rangle = \mathbb{E}_z\langle X_n - X,\ \mathrm{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top + \phi(Az)z^\top\Delta A^\top\mathrm{diag}(\phi'(Az))\rangle = 2\,\mathbb{E}_z\langle\mathrm{diag}(\phi'(Az))(X_n - X)\phi(Az)z^\top,\ \Delta A\rangle.$$
Finally we have $\nabla g(A) - \nabla g_n(A) = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X_n - X)\phi(Az)z^\top\right]$.

Claim 5. For an arbitrary matrix $B$, the directional derivative of $\nabla g(A) - \nabla g_n(A)$ in direction $B$ is
$$D_A\nabla g(A)[B] - D_A\nabla g_n(A)[B] = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X_n - X)\,\phi'(Az)\circ(Bz)\,z^\top\right] + 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top\right].$$

Proof.
$$(\nabla g - \nabla g_n)(A+tB) = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az + tBz))(X_n - X)\phi(Az + tBz)z^\top\right] = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az) + t(Bz)\circ\phi''(Az))(X_n - X)(\phi(Az) + t\phi'(Az)\circ(Bz))z^\top\right] + O(t^2).$$
Therefore
$$\lim_{t\to 0}\frac{(\nabla g - \nabla g_n)(A+tB) - (\nabla g - \nabla g_n)(A)}{t} = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X_n - X)\,\phi'(Az)\circ(Bz)\,z^\top\right] + 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top\right].$$

B.3 OMITTED PROOFS FOR OBSERVATION SAMPLE COMPLEXITY

Proof of Lemma 3. Each $x_i = \phi(Az_i)$, $z_i \sim \mathcal N(0, I_{k\times k})$. Each coordinate satisfies $|x_{i,j}| = |\phi(a_j^\top z_i)| \le |a_j^\top z_i|$ since $\phi$ is 1-Lipschitz. (Footnote 6: For simplicity, we analyze as if $\phi(0) = 0$ w.l.o.g. throughout this section, since the bias term is cancelled between the observation side $\phi(A^*z)$ and the learning side $\phi(Az)$.) Without loss of generality we assumed $\|a_j\| = 1$ for all $j$, so $a_j^\top z \sim \mathcal N(0, 1)$. For all $i \in [n]$, $j \in [d]$, $|x_{i,j}| \le \log(nd/\delta)$ with probability $1-\delta$. Then by a matrix concentration inequality (Vershynin (2010), Corollary 5.52), with probability $1-\delta$ we have $(1-\epsilon)X \preceq X_n \preceq (1+\epsilon)X$ if $n \ge \Omega(d/\epsilon^2\log^2(nd/\delta))$. Therefore setting $n = \tilde\Theta(d/\epsilon^2\log^2(1/\delta))$ suffices.

Proof of Lemma 4.
$$X_{ij} = \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\phi(a_i^\top z)\phi(a_j^\top z) = \begin{cases} 0 & i \ne j, \\ \mathbb{E}[\phi^2(a_i^\top z)] \le \frac{2}{\pi} & i = j. \end{cases}$$
Therefore $\|X\|_2 \le \frac{2}{\pi}$. Together with Lemma 3, $\|X - X_n\| \le \frac{2\epsilon}{\pi}$ with probability $1-\delta$. Recall
$$\nabla g(A) - \nabla g_n(A) = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top\right] := 2\,\mathbb{E}_z G(z),$$
where $G(z)$ is defined as $\mathrm{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top$. We have $\|G(z)\| \le \|A\|\|z\|^2\|X - X_n\|$, hence
$$\|\nabla g(A) - \nabla g_n(A)\|_2 = 2\|\mathbb{E}_z[G(z)]\| \le 2\,\mathbb{E}_z\|G(z)\| \le 2\,\mathbb{E}_z\|A\|\|z\|^2\|X - X_n\| \le 2\|A\|\,\frac{2\epsilon}{\pi}\,\mathbb{E}_z\|z\|^2 = \frac{4\epsilon d}{\pi}\|A\|.$$
For the directional derivative, we obtain the concentration bound in a similar way. Denote
$$D(z) = \mathrm{diag}(\phi'(Az))(X_n - X)\,\phi'(Az)\circ(Bz)\,z^\top + \mathrm{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top.$$
Then $\|D(z)\| \le \|X_n - X\|_2\|B\|\|z\|^2(1 + \|z\|\|A\|)$. Therefore $\|D_A\nabla g(A)[B] - D_A\nabla g_n(A)[B]\| \le O(\epsilon d^{3/2}\|A\|\|B\|)$ with probability $1-\delta$.

B.4 OMITTED PROOFS ON BOUNDING MINI-BATCH SIZE

Recall
$$\tilde g_{m,n}(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \frac{1}{m}\sum_{j=1}^m \phi(Az_j)\phi(Az_j)^\top\right\|_F^2.$$
Write $S_j(A) \equiv \phi(Az_j)\phi(Az_j)^\top$. Then we have
$$\tilde g_{m,n}(A) = \frac{1}{2}\left\langle X_n - \frac{1}{m}\sum_{j=1}^m S_j(A),\; X_n - \frac{1}{m}\sum_{j=1}^m S_j(A)\right\rangle = \frac{1}{2m^2}\sum_{i,j}\langle S_i(A), S_j(A)\rangle - \frac{1}{m}\sum_{j=1}^m\langle S_j(A), X_n\rangle + \frac{1}{2}\|X_n\|_F^2.$$
On the other hand, our target function is
$$g_n(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2 = \frac{1}{2}\|\mathbb{E}_S[S]\|_F^2 - \langle\mathbb{E}_S[S], X_n\rangle + \frac{1}{2}\|X_n\|_F^2.$$
Therefore $\mathbb{E}_S\tilde g_{m,n}(A) - g_n(A) = \frac{1}{2m}\left(\mathbb{E}_S\|S(A)\|_F^2 - \|\mathbb{E}_S S(A)\|_F^2\right)$.

Claim 6.
$$\nabla\mathbb{E}_S\tilde g_{m,n}(A) - \nabla g_n(A) = \frac{2}{m}\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))S(A)\phi(Az)z^\top - \mathrm{diag}(\phi'(Az))\,\mathbb{E}_S[S(A)]\,\phi(Az)z^\top\right].$$

Proof.
$$\langle\nabla\mathbb{E}_S\tilde g_{m,n} - \nabla g_n,\ \Delta A\rangle = \mathbb{E}_S\tilde g_{m,n}(A+\Delta A) - g_n(A+\Delta A) - (\mathbb{E}_S\tilde g_{m,n}(A) - g_n(A)) + O(\|\Delta A\|^2)$$
$$= \frac{1}{2m}\left(\mathbb{E}_S\|S(A+\Delta A)\|_F^2 - \mathbb{E}_S\|S(A)\|_F^2 - \|\mathbb{E}_S S(A+\Delta A)\|_F^2 + \|\mathbb{E}_S S(A)\|_F^2\right) + O(\|\Delta A\|^2)$$
$$= \frac{1}{m}\left(\mathbb{E}_S\langle S(A),\ S(A+\Delta A) - S(A)\rangle - \langle\mathbb{E}_S[S(A)],\ \mathbb{E}_S[S(A+\Delta A) - S(A)]\rangle\right) + O(\|\Delta A\|^2)$$
$$= \frac{1}{m}\left(\mathbb{E}_z\langle S(A),\ \mathrm{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top\rangle - \langle\mathbb{E}_S[S(A)],\ \mathbb{E}_z\,\mathrm{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top\rangle\right) + O(\|\Delta A\|^2) \quad\text{(from Eqn. (11) and symmetry of } S\text{)}$$
$$= \left\langle\frac{2}{m}\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))S(A)\phi(Az)z^\top - \mathrm{diag}(\phi'(Az))\,\mathbb{E}_S[S(A)]\,\phi(Az)z^\top\right],\ \Delta A\right\rangle + O(\|\Delta A\|^2).$$
Similarly to the derivation in the previous subsection, we again derive the bias in the directional derivative:

Claim 7.
For an arbitrary matrix direction $B$,
$$D_A\nabla\mathbb{E}_S\tilde g_{m,n}(A)[B] - D_A\nabla g_n(A)[B] = \frac{2}{m}\,\mathbb{E}_z\Big[\mathrm{diag}(\phi''(Az)\circ(Bz))\big(S(A) - \mathbb{E}_S S(A)\big)\phi(Az)z^\top + \mathrm{diag}(\phi'(Az))\big((\phi'(Az)\circ(Bz))\phi(Az)^\top - \mathbb{E}_z[(\phi'(Az)\circ(Bz))\phi(Az)^\top]\big)\phi(Az)z^\top + \mathrm{diag}(\phi'(Az))\big(\phi(Az)(\phi'(Az)\circ(Bz))^\top - \mathbb{E}_z[\phi(Az)(\phi'(Az)\circ(Bz))^\top]\big)\phi(Az)z^\top + \mathrm{diag}(\phi'(Az))\big(S(A) - \mathbb{E}_S S(A)\big)(\phi'(Az)\circ(Bz))z^\top\Big].$$

B.5 OMITTED PROOF OF THE MAIN THEOREM

Proof of Lemma 7. On one hand, suppose $Z$ satisfies the $\epsilon$-FOSP property of $\tilde g$ in (6), together with the matrix $S$ and vector $\sigma$. We have
$$\langle\nabla\tilde g(Z),\ Z - Z^*\rangle = \langle S,\ Z - Z^*\rangle \quad\text{(since } Z - Z^* \text{ has zero diagonal entries)}$$
$$\le \|P_T(S)\|_2\,\|P_{T^\circ}(Z - Z^*)\|_F \quad\text{(} T \text{ is the tangent cone of PSD matrices at } Z\text{)}$$
$$\le \|P_T(S)\|_2\,\|Z - Z^*\|_F = \max_j\{\tilde a_j^\top S\tilde a_j\}\,\|Z - Z^*\|_F \quad\text{(the } \tilde a_j \text{ form a basis of the column space of } Z\text{)}$$
$$\le \epsilon\,\|Z - Z^*\|_F \quad\text{(from the definition of } \epsilon\text{-FOSP).} \tag{12}$$
On the other hand, from the definition of $\tilde g$, we have
$$\langle Z - Z^*,\ \nabla\tilde g(Z)\rangle = \sum_{ij}(z_{ij} - z_{ij}^*)\,\tilde g_{ij}'(z_{ij}) = \sum_{ij}(z_{ij} - z_{ij}^*)^2\left(\sum_{k\text{ odd}}\sigma_k^2 P_k(z_{ij})\right)\left(\sum_{k\text{ odd}}\sigma_k^2\, k\, z_{ij}^{k-1}\right) \ge \|Z - Z^*\|_F^2\,\sigma_1^4. \tag{13}$$
Here the polynomial $P_k(z_{ij}) \equiv (z_{ij}^k - (z_{ij}^*)^k)/(z_{ij} - z_{ij}^*)$ is always positive for $z_{ij} \ne z_{ij}^*$ and odd $k$. Comparing (12) and (13), we get $\epsilon\|Z - Z^*\|_F \ge \|Z - Z^*\|_F^2\,\sigma_1^4$, i.e. $\|Z - Z^*\|_F \le O(\epsilon)$.

Proof of Theorem 3. From Theorem 31 of Ge et al. (2015), we know that for a small enough learning rate $\eta$ and arbitrarily small $\epsilon$, there exists a large enough $T$ such that Algorithm 1 generates an output $A^{(T)}$ that is sufficiently close to a second-order stationary point of $f$. Formally,
$$\begin{cases} \mathrm{Tr}((A^{(T)})^\top X_i A^{(T)}) = y_i, \\ \left\|\left(\nabla_A f(A^{(T)}) - \sum_{i=1}\lambda_i X_i A^{(T)}\right)_{:,j}\right\|_2 \le \epsilon\min_j\|A_{j,:}\|_2, \quad\forall j\in[k], \\ \mathrm{Tr}(B^\top D_A\nabla_A\mathcal L_f(A^{(T)},\lambda)[B]) \ge -\epsilon\|B\|_2^2, \quad\forall B \text{ s.t. } \mathrm{Tr}(B^\top X_i A) = 0, \end{cases}$$
where $\mathcal L_f(A, \lambda) = f(A) - \sum_{i=1}^d\lambda_i(\mathrm{Tr}(A^\top X_i A) - y_i)$. Let $\{\tilde a_i = A^{(T)}r_i\}_i^k$ form a basis of the column space of $A^{(T)}$. Then the second line is a sufficient condition for
$$\left\|\tilde a_j^\top\left(\nabla_A f(A^{(T)}) - \sum_{i=1}\lambda_i X_i A^{(T)}\right)r_j\right\|_2 \le \epsilon, \quad\forall j\in[k].$$
Now with the concentration bound from Lemma 5, suppose our batch size $m \ge O(d^5/\epsilon)$; then $\|\nabla_A g_n(A^{(T)}) - \nabla_A f(A^{(T)})\|_2 \le \epsilon$ and $\|D_A\nabla_A g_n(A^{(T)})[B] - D_A\nabla_A f(A^{(T)})[B]\|_2 \le \epsilon\|B\|_2$ for arbitrary $B$. Therefore we again get
$$\begin{cases} \mathrm{Tr}((A^{(T)})^\top X_i A^{(T)}) = y_i, \\ \left\|\tilde a_j^\top\left(\nabla_A g_n(A^{(T)}) - \sum_{i=1}\lambda_i X_i A^{(T)}\right)r_j\right\|_2 \le 2\epsilon, \quad\forall j\in[k], \\ \mathrm{Tr}(B^\top D_A\nabla_A\mathcal L_{g_m}(A^{(T)},\lambda)[B]) \ge -2\epsilon\|B\|_2^2, \quad\forall B \text{ s.t. } \mathrm{Tr}(B^\top X_i A) = 0. \end{cases}$$
Next we turn to the concentration bound from Lemma 4. When the sample size $n \ge O(d^5/\epsilon^2\log^2(1/\delta))$, we have $\|D_A\nabla_A g(A)[B] - D_A\nabla_A g_n(A)[B]\|_2 \le O(\epsilon\|B\|_2)$ and $\|\nabla g(A) - \nabla g_n(A)\|_2 \le O(\epsilon)$ with probability $1-\delta$. Therefore, similarly, $A^{(T)}$ is an $O(\epsilon)$-SOSP of $g(A) = \frac{1}{2}\left\|\sum_{i=0}^\infty\sigma_i^2\left((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i}\right)\right\|_F^2$. Now, with Lemma 6 connecting the approximate stationary points, $Z := A^{(T)}(A^{(T)})^\top$ is also an $\epsilon$-FOSP of $\tilde g(Z) = \frac{1}{2}\left\|\sum_{i=0}^\infty\sigma_i^2\left((Z^*)^{\circ i} - Z^{\circ i}\right)\right\|_F^2$. Finally, with Lemma 7, we get $\|Z - Z^*\|_F \le O(\epsilon)$.
1. What are the concerns regarding the choice of discriminator in the paper? 2. Why does the reviewer find the title of the paper misleading? 3. What are the strengths and weaknesses of the paper's analysis of simplified discriminators? 4. How does the reviewer assess the novelty and significance of the paper's contributions? 5. Are there any questions or points raised by the reviewer that require further clarification or revision from the authors?
Review
Review The authors provided a long response to justify their contributions, and I have read it thoroughly. Unfortunately, I find that the responses do not really address my concerns. My major concern is that I cannot understand how a quadratic discriminator can be treated as a WGAN. The authors replied that the regularization considered in the paper might be treated as a Lipschitz constraint for bounded data sets. However, the data sets cannot be bounded, because the paper considers a special case where the data are generated from a teacher network whose input is Gaussian noise. Moreover, the authors said that they would add an explanation of this important point in the revision, but I have not found any revision yet. My other concern is why the authors do not study a two-layer network discriminator. The authors replied that the choice of discriminator is designed in tandem with the choice of generator, and that using a standard two-layer ReLU network as the discriminator would hurt the sample complexity. I partly agree that it would be nice to design a better discriminator according to the choice of generator. However, it would be more convincing to show the convergence of WGANs if the authors considered a neural network discriminator rather than a quadratic discriminator, which is hardly ever used in GANs.
==================================================================================================
I found that this paper considerably overclaims its contribution, which is quite misleading. The title of this work is SGD LEARNS ONE-LAYER NETWORKS IN WGANS, and the authors claim that they analyze the convergence of stochastic gradient descent-ascent for Wasserstein GANs learning a single-layer generator network. But this paper actually only considers two kinds of simplified discriminators, a (rectified) linear discriminator and a quadratic discriminator, which are very different from the WGANs used in practice. The analysis of these two special cases is hard to extend to an analysis of WGANs and thus can hardly help explain why WGANs are successfully trained by SGD in practice. In Section 3, the authors consider the rectified linear discriminator, which is quite similar to a standard two-layer network with ReLU activation, but with the first layer fixed. The authors prove that the generator can learn the marginal distributions but may not learn the joint distribution. At the beginning of Section 4, the authors explain that this is because there is no interaction between different coordinates of the random vector. To learn the joint distribution, the authors extend the linear discriminator to a quadratic discriminator and regard this as a natural idea. For the rectified linear discriminator, the regularization of the discriminator is the norm of its output layer, which can be related to the Lipschitz constraint in WGANs. But for the quadratic discriminator, I cannot understand how this setting can be treated as a WGAN without further explanation from the authors. I also wonder why this work does not consider a standard two-layer network discriminator, which would likewise have interaction between different coordinates in the first layer.
ICLR
Title SGD Learns One-Layer Networks in WGANs Abstract Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. 1 INTRODUCTION Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a prominent framework for learning generative models of complex, real-world distributions given samples from these distributions. GANs and their variants have been successfully applied to numerous datasets and tasks, including image-to-image translation (Isola et al., 2017), image super-resolution (Ledig et al., 2017), domain adaptation (Tzeng et al., 2017), probabilistic inference (Dumoulin et al., 2016), compressed sensing (Bora et al., 2017) and many more. These advances owe in part to the success of Wasserstein GANs (WGANs) (Arjovsky et al., 2017; Gulrajani et al., 2017), which leverage the neural-net-induced integral probability metric to better measure the difference between a target and a generated distribution. Along with the aforementioned empirical successes, there have been theoretical studies of the statistical properties of GANs; see e.g. (Zhang et al., 2018; Arora et al., 2017; 2018; Bai et al., 2018; Dumoulin et al., 2016) and their references. These works have shown that, with an appropriate design of the generator and discriminator, the global optimum of the WGAN objective identifies the target distribution with low sample complexity. On the algorithmic front, prior work has focused on the stability and convergence properties of gradient descent-ascent (GDA) and its variants in GAN training and in more general min-max optimization problems; see e.g. (Nagarajan & Kolter, 2017; Heusel et al., 2017; Mescheder et al., 2017; 2018; Daskalakis et al., 2017; Daskalakis & Panageas, 2018a;b; Gidel et al., 2019; Liang & Stokes, 2019; Mokhtari et al., 2019; Jin et al., 2019; Lin et al., 2019) and their references. It is known that, even in min-max optimization problems with convex-concave objectives, GDA may fail to compute the min-max solution and may even exhibit divergent behavior. Hence, these works have studied conditions under which GDA converges to a globally optimal solution under a convex-concave objective, or to different types of locally optimal solutions under nonconvex-concave or nonconvex-nonconcave objectives. They have also identified variants of GDA with better stability properties in both theory and practice, most notably those using negative momentum. In the context of GAN training, Feizi et al. (2017) show that for WGANs with a linear generator and quadratic discriminator, GDA succeeds in learning a Gaussian using polynomially many samples in the dimension. In the same vein, we are the first, to our knowledge, to study the global convergence properties of stochastic GDA in the GAN setting, and to establish such guarantees for non-linear generators. In particular, we study the WGAN formulation for learning a single-layer generative model with some reasonable choices of activations including tanh, sigmoid and leaky ReLU. Our contributions.
For WGANs with a one-layer generator network using an activation from a large family of functions and a quadratic discriminator, we show that stochastic gradient descent-ascent learns a target distribution using polynomial time and samples, under the assumption that the target distribution is realizable in the architecture of the generator. This is achieved by (a) an analysis of the dynamics of stochastic gradient descent-ascent showing that it attains a global optimum of the min-max problem, and (b) an appropriate design of the discriminator ensuring a parametric $O(\frac{1}{\sqrt n})$ statistical rate (Zhang et al., 2018; Bai et al., 2018).

Related Work. We briefly review relevant results on GAN training and learning generative models:

- Optimization viewpoint. For standard GANs and WGANs with appropriate regularization, Nagarajan & Kolter (2017), Mescheder et al. (2017) and Heusel et al. (2017) establish sufficient conditions for local convergence and stability of GAN training. If the Jacobian of the associated gradient vector field has only eigenvalues with negative real part at the equilibrium point, GAN training is verified to converge locally for small enough learning rates. A follow-up paper by Mescheder et al. (2018) shows the necessity of these conditions by identifying a prototypical counterexample that is not always locally convergent under gradient-descent-based GAN optimization. However, the lack of global convergence prevents this analysis from providing any guarantee of learning the real distribution. The work of Feizi et al. (2017) described above has goals similar to our paper's, namely understanding the convergence properties of basic dynamics in simple WGAN formulations. However, they only consider linear generators, which restrict the WGAN model to learning a Gaussian. Our work goes a step further, considering WGANs whose generators are one-layer neural networks with a broad selection of activations. We show that with a proper gradient-based algorithm, we can still recover the ground-truth parameters of the underlying distribution. More broadly, WGANs typically result in nonconvex-nonconcave min-max optimization problems. In these problems, a global min-max solution may not exist, and there are various notions of local min-max solutions, namely local-min-local-max solutions (Daskalakis & Panageas, 2018b) and local min solutions of the max objective (Jin et al., 2019), the latter being guaranteed to exist under mild conditions. In fact, Lin et al. (2019) show that GDA is able to find stationary points of the max objective for nonconvex-concave objectives. Given that GDA may not even converge for convex-concave objectives, another line of work has studied variants of GDA that exhibit global convergence to the min-max solution (Daskalakis et al., 2017; Daskalakis & Panageas, 2018a; Gidel et al., 2019; Liang & Stokes, 2019; Mokhtari et al., 2019), established for GDA variants that add negative momentum to the dynamics. While convergence of GDA with negative momentum has only been shown in convex-concave settings, there is experimental evidence that it improves GAN training (Daskalakis et al., 2017; Gidel et al., 2019).

- Statistical viewpoint. Several works have studied the issue of mode collapse. One might doubt the ability of GANs to actually learn the distribution rather than just memorize the training data (Arora et al., 2017; 2018; Dumoulin et al., 2016). Some corresponding cures have been proposed. For instance, Zhang et al. (2018) and Bai et al. (2018)
show that for specific generators combined with appropriate parametric discriminator designs, WGANs can attain parametric statistical rates, avoiding a sample complexity that is exponential in the dimension (Liang, 2018; Bai et al., 2018; Feizi et al., 2017). Recent work of Wu et al. (2019) provides an algorithm to learn the distribution of a single-layer ReLU generator network. While their conclusion appears similar to ours, the focus is very different: our paper targets understanding when a WGAN formulation trained with stochastic GDA can learn in polynomial time and sample complexity, whereas their work relies on an algorithm specifically tailored to learning truncated normal distributions (Daskalakis et al., 2018).

2 PRELIMINARIES

We consider GAN formulations for learning a generator $G_A : \mathbb R^k \to \mathbb R^d$ of the form $z \mapsto x = \phi(Az)$, where $A$ is a $d\times k$ parameter matrix and $\phi$ some activation function. We consider discriminators $D_v : \mathbb R^d \to \mathbb R$ or $D_V : \mathbb R^d \to \mathbb R$ that are linear or quadratic forms, respectively, for the different purposes of learning the marginals or the joint distribution. We assume the latent variables $z$ are sampled from the normal $\mathcal N(0, I_{k\times k})$, where $I_{k\times k}$ denotes the identity matrix of size $k$. The real/target distribution outputs samples $x \sim \mathcal D = G_{A^*}(\mathcal N(0, I_{k_0\times k_0}))$ for some ground-truth parameters $A^*$, where $A^*$ is $d\times k_0$, and we take $k \ge k_0$ for enough expressivity, choosing $k = d$ when $k_0$ is unknown. The Wasserstein GAN under our choice of generator and discriminator is naturally formulated as
$$\min_{A\in\mathbb R^{d\times k}}\ \max_{v\in\mathbb R^d}\ \left\{ f(A, v) \equiv \mathbb{E}_{x\sim\mathcal D}\, D_v(x) - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\, D_v(G_A(z)) \right\}.$$
(Footnote 1: We will replace $v$ by $V \in \mathbb R^{d\times d}$ when necessary.) We use $a_i$ to denote the $i$-th row vector of $A$. We sometimes omit the subscript 2, using $\|x\|$ to denote the 2-norm of a vector $x$ and $\|X\|$ to denote the spectral norm of a matrix $X$. $\mathcal S^n \subset \mathbb R^{n\times n}$ denotes the set of all symmetric matrices of dimension $n\times n$. We use $Df(X_0)[B]$ to denote the directional derivative of a function $f$ at point $X_0$ in direction $B$: $Df(X_0)[B] = \lim_{t\to 0}\frac{f(X_0 + tB) - f(X_0)}{t}$.

3 WARM-UP: LEARNING THE MARGINAL DISTRIBUTIONS

As a warm-up, we ask whether a simple linear discriminator is sufficient for the purpose of learning the marginal distributions of all coordinates of $\mathcal D$. Notice that in our setting, the $i$-th output of the generator is $\phi(x)$ where $x \sim \mathcal N(0, \|a_i\|^2)$, and is thus determined solely by $\|a_i\|^2$. With a linear discriminator $D_v(x) = v^\top x$, our minimax game becomes
$$\min_{A\in\mathbb R^{d\times k}}\ \max_{v\in\mathbb R^d}\ \left\{ f_1(A, v) \equiv \mathbb{E}_{x\sim\mathcal D}\left[v^\top x\right] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\left[v^\top\phi(Az)\right] \right\}. \tag{1}$$
Notice that when the activation $\phi$ is an odd function, such as the tanh activation, the symmetry of the Gaussian distribution ensures $\mathbb{E}_{x\sim\mathcal D}[v^\top x] = 0$, so the linear discriminator in $f_1$ reveals no information about $A^*$. Therefore, specifically for odd (or odd-plus-constant) activations, we instead use an adjusted rectified linear discriminator $D_v(x) \equiv v^\top R(x - C)$ to enforce some bias, where $C = \frac{1}{2}(\phi(x) + \phi(-x))$ for all $x$, and $R$ denotes the ReLU activation. Formally, we slightly modify our loss function as
$$\bar f_1(A, v) \equiv \mathbb{E}_{x\sim\mathcal D}\left[v^\top R(x - C)\right] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\left[v^\top R(\phi(Az) - C)\right]. \tag{2}$$
We will show that we can learn each marginal of $\mathcal D$ if the activation function $\phi$ satisfies the following.

Assumption 1. The activation function $\phi$ satisfies either one of the following:
1. $\phi$ is an odd function plus constant, and $\phi$ is monotone increasing;
2. The even component of $\phi$, i.e. $\frac{1}{2}(\phi(x) + \phi(-x))$, is positive and monotone increasing on $x \in [0, \infty)$.

Remark 1. All common activation functions, such as (leaky) ReLU, tanh and the sigmoid function, satisfy Assumption 1.

Lemma 1.
Suppose $A^* \ne 0$. Consider $f_1$ with an activation satisfying Assumption 1.2, and $\bar f_1$ with an activation satisfying Assumption 1.1. The stationary points of such $f_1$ and $\bar f_1$ yield parameters $A$ satisfying $\|a_i\| = \|a_i^*\|, \forall i\in[d]$.

To bound the capacity of the discriminator, similarly to the Lipschitz constraint in WGAN, we regularize the discriminator. For the regularized formulation we have:

Theorem 1. In the same setting as Lemma 1, alternating gradient descent-ascent with proper learning rates on
$$\min_A\max_v\ \{f_1(A, v) - \|v\|^2/2\} \quad\text{or, respectively,}\quad \min_A\max_v\ \{\bar f_1(A, v) - \|v\|^2/2\}$$
recovers $A$ such that $\|a_i\| = \|a_i^*\|, \forall i\in[d]$.

All proofs of the paper can be found in the appendix. We show that all local min-max points, in the sense of (Jin et al., 2019), of the original problem are global min-max points and recover the correct norm of $a_i^*$ for all $i$. Notice that for the source data distribution $x = (x_1, x_2, \cdots, x_d) \sim \mathcal D$ with activation $\phi$, the marginal distribution of each $x_i$ follows $\phi(\mathcal N(0, \|a_i^*\|^2))$ and is determined by $\|a_i^*\|$. Therefore we have learned the marginal distribution of each entry $i$. It remains to learn the joint distribution.

4 LEARNING THE JOINT DISTRIBUTION

In the previous section, we utilized a (rectified) linear discriminator, in which each coordinate $v_i$ interacts with the $i$-th random variable. With the (rectified) linear discriminator, the WGAN learns the correct $\|a_i\|$ for all $i$. However, since there is no interaction between different coordinates of the random vector, we do not expect to learn the joint distribution with a linear discriminator. To proceed, a natural idea is to use a quadratic discriminator $D_V(x) := x^\top V x = \langle xx^\top, V\rangle$ to enforce component interactions. Similarly to the previous section, we study the regularized version
$$\min_{A\in\mathbb R^{d\times k}}\ \max_{V\in\mathbb R^{d\times d}}\ \left\{f_2(A, V) - \frac{1}{2}\|V\|_F^2\right\}, \tag{3}$$
where
$$f_2(A, V) = \mathbb{E}_{x\sim\mathcal D}\, D_V(x) - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}\, D_V(\phi(Az)) = \left\langle \mathbb{E}_{x\sim\mathcal D}[xx^\top] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top],\ V \right\rangle.$$
Adding a regularizer on $V$ and explicitly maximizing over $V$ gives
$$g(A) \equiv \max_V\left\{f_2(A, V) - \frac{1}{2}\|V\|_F^2\right\} = \frac{1}{2}\left\|\mathbb{E}_{x\sim\mathcal D}[xx^\top] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2.$$
In the next subsection, we first focus on analyzing the second-order stationary points of $g$; we then establish that gradient descent-ascent converges to second-order stationary points of $g$.

4.1 GLOBAL CONVERGENCE FOR OPTIMIZING THE GENERATING PARAMETERS

We first assume that both $A$ and $A^*$ have unit row vectors, and then extend to the general case, since we already know how to learn the row norms from Section 3. To explicitly compute $g(A)$, we rely on properties of Hermite polynomials. Since the normalized Hermite polynomials $\{h_i\}_{i=0}^\infty$ form an orthonormal basis of the functional space, we rewrite the activation function as $\phi(x) = \sum_{i=0}^\infty \sigma_i h_i$, where $\sigma_i$ is the $i$-th Hermite coefficient. We use the following claim:

Claim 1 ((Ge et al., 2017), Claim 4.2). Let $\phi$ be a function from $\mathbb R$ to $\mathbb R$ such that $\phi \in L^2(\mathbb R, e^{-x^2/2})$, and let its Hermite expansion be $\phi = \sum_{i=1}^\infty \sigma_i h_i$. Then, for any unit vectors $u, v \in \mathbb R^d$, we have
$$\mathbb{E}_{x\sim\mathcal N(0, I_{d\times d})}\left[\phi(u^\top x)\phi(v^\top x)\right] = \sum_{i=0}^\infty \sigma_i^2\,(u^\top v)^i.$$
Therefore we can compute the value of $f_2$ explicitly using the Hermite polynomial expansion:
$$f_2(A, V) = \left\langle \sum_{i=0}^\infty \sigma_i^2\left((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i}\right),\ V \right\rangle.$$
Here $X^{\circ i}$ is the Hadamard power operation, $(X^{\circ i})_{jk} = (X_{jk})^i$. Therefore we have
$$g(A) = \frac{1}{2}\left\|\sum_{i=0}^\infty \sigma_i^2\left((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i}\right)\right\|_F^2.$$
We reparametrize with $Z = AA^\top$ and define $\tilde g(Z) = g(A)$, with individual component functions $\tilde g_{jk}(z) \equiv \frac{1}{2}\left(\sum_{i=0}^\infty \sigma_i^2\left((z_{jk}^*)^i - z^i\right)\right)^2$.
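Before proceeding, Claim 1 can be sanity-checked numerically. The sketch below is our own illustration (not code from the paper): it estimates the left-hand side by Monte Carlo for $\phi = \tanh$ and compares against the Hermite series, computing $\sigma_i = \mathbb{E}[\phi(g)h_i(g)]$ by Gauss-Hermite quadrature in the normalized probabilists' basis $h_i = He_i/\sqrt{i!}$.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Check of Claim 1 for phi = tanh:  E[phi(u.x) phi(v.x)] = sum_i sigma_i^2 (u.v)^i.

rng = np.random.default_rng(0)
d, n_mc, n_terms = 5, 1_000_000, 25

def hermite_coeffs(phi, n_terms, n_quad=200):
    # sigma_i = E_{g~N(0,1)}[phi(g) h_i(g)] via Gauss-HermiteE quadrature,
    # whose weight function is exp(-x^2 / 2).
    nodes, weights = hermegauss(n_quad)
    w = weights / np.sqrt(2.0 * np.pi)
    sig = np.empty(n_terms)
    for i in range(n_terms):
        c = np.zeros(i + 1); c[i] = 1.0
        h_i = hermeval(nodes, c) / math.sqrt(math.factorial(i))
        sig[i] = np.sum(w * phi(nodes) * h_i)
    return sig

u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)
rho = u @ v

x = rng.standard_normal((n_mc, d))
mc = np.mean(np.tanh(x @ u) * np.tanh(x @ v))
sig = hermite_coeffs(np.tanh, n_terms)
series = np.sum(sig ** 2 * rho ** np.arange(n_terms))
print(f"Monte Carlo: {mc:.5f}   Hermite series: {series:.5f}")  # agree to ~1e-3
```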
In the above, $z_{jk}^* = \langle a_j^*, a_k^*\rangle$ denotes the $(j,k)$-th component of the ground-truth covariance matrix $A^*(A^*)^\top$.

Assumption 2. The activation function $\phi$ is an odd function plus constant. In other words, its Hermite expansion $\phi = \sum_{i=0}^\infty \sigma_i h_i$ satisfies $\sigma_i = 0$ for even $i \ge 2$. Additionally, we assume $\sigma_1 \ne 0$.

Remark 2. Common activations like tanh and sigmoid satisfy Assumption 2.

Lemma 2. For activations including leaky ReLU and functions satisfying Assumption 2, $\tilde g(Z)$ has a unique stationary point, namely $Z = A^*(A^*)^\top$.

Notice that $\tilde g(Z) = \sum_{jk}\tilde g_{jk}(z_{jk})$ is separable across the $z_{jk}$, where each $\tilde g_{jk}$ is a scalar polynomial function. Lemma 2 follows from the fact that the only zero of $\tilde g_{jk}'$ is $z_{jk} = z_{jk}^*$, for odd activations $\phi$ and for leaky ReLU. We then migrate this good property to the original problem we want to solve:

Problem 1. We optimize over the function $g$ when $\|a_i^*\| = 1, \forall i$:
$$\min_A\ g(A) = \frac{1}{2}\left\|\sum_{i=0}^\infty \sigma_i^2\left((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i}\right)\right\|_F^2 \quad\text{s.t.}\quad a_i^\top a_i = 1, \forall i.$$
Existing work (Journée et al., 2008) connects $\tilde g(Z)$ to the optimization over the factorized version $g(A)$ ($g(A) \equiv \tilde g(AA^\top)$). Specifically, when $k = d$, all second-order stationary points of $g(A)$ are first-order stationary points of $\tilde g(Z)$. Though $\tilde g$ is not convex, we are able to show that its first-order stationary points are global optima when the generator is sufficiently expressive, i.e. $k \ge k_0$. In reality we do not know the latent dimension $k_0$, so we simply choose $k = d$. We arrive at the following conclusion:

Theorem 2. For activations including leaky ReLU and functions satisfying Assumption 2, when $k = d$, all second-order KKT points of Problem 1 are global minima. Therefore alternating projected gradient descent-ascent on Eqn. (3) converges to $A$ with $AA^\top = A^*(A^*)^\top$.

The extension to non-unit vectors is straightforward, and we defer the analysis to the appendix.

5 FINITE SAMPLE ANALYSIS

Algorithm 1: Online stochastic gradient descent-ascent on WGAN
1: Input: $n$ training samples $x_1, x_2, \cdots, x_n$, where each $x_i \sim \phi(A^*z)$, $z\sim\mathcal N(0, I_{k\times k})$; learning rate $\eta$ for the generating parameters; number of iterations $T$.
2: Randomly initialize the generating matrix $A^{(0)}$.
3: for $t = 1, 2, \cdots, T$ do
4:   Generate $m$ latent variables $z_1^{(t)}, z_2^{(t)}, \cdots, z_m^{(t)} \sim \mathcal N(0, I_{k\times k})$ for the generator. The empirical function becomes
$$\tilde f_{m,n}^{(t)}(A, V) = \left\langle \frac{1}{m}\sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_i x_i^\top,\ V\right\rangle - \frac{1}{2}\|V\|^2.$$
5:   Gradient ascent on $V$ with optimal step size $\eta_V = 1$: $V^{(t)} \leftarrow V^{(t-1)} + \eta_V\,\nabla_V \tilde f_{m,n}^{(t)}(A^{(t-1)}, V^{(t-1)})$.
6:   Sample noise $e$ uniformly from the unit sphere.
7:   Projected gradient descent on $A$, with constraint set $\mathcal C = \{A \mid (AA^\top)_{ii} = (A^*(A^*)^\top)_{ii}\}$:
$$A^{(t)} \leftarrow \mathrm{Proj}_{\mathcal C}\left(A^{(t-1)} - \eta\left(\nabla_A \tilde f_{m,n}^{(t)}(A^{(t-1)}, V^{(t)}) + e\right)\right).$$
8: end for
9: Output: $A^{(T)}(A^{(T)})^\top$

In this section, we analyze Algorithm 1, i.e., gradient descent-ascent on
$$\tilde f_{m,n}^{(t)}(A, V) = \left\langle \frac{1}{m}\sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_i x_i^\top,\ V\right\rangle - \frac{1}{2}\|V\|^2. \tag{4}$$
Notice that in each iteration, gradient ascent with step size 1 finds the optimal solution for $V$. By Danskin's theorem (Danskin, 2012), our min-max optimization is essentially gradient descent over $\tilde g_{m,n}^{(t)}(A) \equiv \max_V \tilde f_{m,n}^{(t)}(A, V) = \frac{1}{2}\left\|\frac{1}{m}\sum_{i=1}^m \phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_i x_i^\top\right\|_F^2$ with a batch of samples $\{z_i^{(t)}\}$, i.e., stochastic gradient descent on $f_n(A) \equiv \mathbb{E}_{z_i\sim\mathcal N(0, I_{k\times k}),\,\forall i\in[m]}[\tilde g_{m,n}(A)]$.
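The following is a minimal runnable sketch of Algorithm 1 (all names and defaults are ours, not the paper's). The discriminator ascent is done in closed form: maximizing $\langle M, V\rangle - \|V\|_F^2/2$ over $V$ gives $V = M$, where $M$ is the gap between the two empirical second-moment matrices. The projection onto $\mathcal C$ simply rescales the rows of $A$; the target row norms are assumed to have been estimated already via the Section 3 warm-up.

```python
import numpy as np

# Minimal sketch of Algorithm 1 (our own illustrative code).

def train(x, k, row_norms, eta=1e-2, m=512, T=20_000, seed=0,
          phi=np.tanh, dphi=lambda t: 1.0 - np.tanh(t) ** 2):
    rng = np.random.default_rng(seed)
    n, d = x.shape
    Xn = x.T @ x / n                                    # observed second moment
    def proj(A):                                        # projection onto C
        return A * (row_norms / np.linalg.norm(A, axis=1))[:, None]
    A = proj(rng.standard_normal((d, k)))
    for _ in range(T):
        z = rng.standard_normal((m, k))
        G = phi(z @ A.T)                                # m x d generator batch
        V = G.T @ G / m - Xn                            # closed-form ascent on V
        # gradient of (1/2) || G^T G / m - Xn ||_F^2 with respect to A
        grad = 2.0 / m * (dphi(z @ A.T) * (G @ V)).T @ z
        e = rng.standard_normal((d, k))
        e /= np.linalg.norm(e)                          # noise from the unit sphere
        A = proj(A - eta * (grad + e))
    return A
```

Setting $\eta_V = 1$ makes the inner maximization exact in a single step, which is why $V$ never needs to be carried across iterations in this sketch.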
To bound the difference between $f_n(A)$ and the population risk $g(A)$, we analyze the sample complexity required on the observation side ($x_i\sim\mathcal D$, $i\in[n]$) and the mini-batch size required on the learning side ($\phi(Az_j)$, $z_j\sim\mathcal N(0, I_{k\times k})$, $j\in[m]$). We will show that with large enough $n$ and $m$, Algorithm 1, which optimizes the empirical risk, yields the ground-truth covariance matrix with high probability. Our proof sketch is roughly as follows:
1. With high probability, projected stochastic gradient descent finds an $\epsilon$-approximate second-order stationary point $\hat A$ of $f_n(\cdot)$, as shown in Theorem 31 of (Ge et al., 2015).
2. For sufficiently large $m$, our empirical objective, though a biased estimator of the population risk $g(\cdot)$, achieves a good $\epsilon$-approximation of the population risk in both gradient and Hessian (Lemmas 4 & 5). Therefore $\hat A$ is also an $O(\epsilon)$-approximate second-order stationary point (SOSP) of the population risk $g(A)$.
3. We show that any $\epsilon$-SOSP $\hat A$ of $g(A)$ yields an $O(\epsilon)$-first-order stationary point (FOSP) $\hat Z \equiv \hat A\hat A^\top$ of the semi-definite program on $\tilde g(Z)$ (Lemma 6).
4. We show that any $O(\epsilon)$-FOSP of the function $\tilde g(Z)$ induces at most $O(\epsilon)$ absolute error compared to the ground-truth covariance matrix $Z^* = A^*(A^*)^\top$ (Lemma 7).

5.1 OBSERVATION SAMPLE COMPLEXITY

For simplicity, we assume that the activation and its gradient are Lipschitz continuous, and we let the Lipschitz constants be 1 w.l.o.g.:

Assumption 3. The activation is 1-Lipschitz and 1-smooth.

To estimate the observation sample complexity, we bound the gradient and Hessian of the population risk and of the empirical risk over the observed samples:
$$g(A) \equiv \frac{1}{2}\left\|\mathbb{E}_{x\sim\mathcal D}[xx^\top] - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2, \quad\text{and}\quad g_n(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_{z\sim\mathcal N(0, I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|_F^2.$$

Claim 2. $\nabla g(A) - \nabla g_n(A) = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top\right]$, where $X = \mathbb{E}_{x\sim\mathcal D}[xx^\top]$ and $X_n = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top$. The directional derivative in an arbitrary direction $B$ is
$$D\nabla g(A)[B] - D\nabla g_n(A)[B] = 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi'(Az))(X_n - X)\,\phi'(Az)\circ(Bz)\,z^\top\right] + 2\,\mathbb{E}_z\left[\mathrm{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top\right].$$

Lemma 3. Suppose the activation satisfies Assumption 3. Then $\Pr[\|X - X_n\| \le \epsilon\|X\|] \ge 1-\delta$ for $n \ge \tilde\Theta(d/\epsilon^2\log^2(1/\delta))$. (See footnote 2.)

Lemma 4. Suppose the activation satisfies Assumptions 2 & 3. With $n \ge \tilde\Theta(d/\epsilon^2\log^2(1/\delta))$ samples, $\|\nabla g(A) - \nabla g_n(A)\|_2 \le O(\epsilon d\|A\|_2)$ with probability $1-\delta$. Meanwhile, $\|D\nabla g(A)[B] - D\nabla g_n(A)[B]\|_2 \le O(\epsilon d^{3/2}\|A\|_2\|B\|_2)$ with probability $1-\delta$.

5.2 BOUNDING MINI-BATCH SIZE

Normally, for the empirical risk in supervised learning, the mini-batch size can be arbitrarily small, since the estimator of the gradient is unbiased. In the WGAN setting, however, notice that in each iteration we randomly sample a batch of latent variables $\{z_i\}_{i\in[m]}$ and obtain a gradient of
$$\tilde g_{m,n}(A) \equiv \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \frac{1}{m}\sum_{j=1}^m \phi(Az_j)\phi(Az_j)^\top\right\|_F^2$$
in Algorithm 1. The finite sum sits inside the Frobenius norm, so the gradient on each mini-batch may no longer be an unbiased estimator of our target $g_n(A) = \frac{1}{2}\left\|\frac{1}{n}\sum_{i=1}^n x_i x_i^\top - \mathbb{E}_z[\phi(Az)\phi(Az)^\top]\right\|_F^2$. In other words, we conduct stochastic gradient descent on the function $f(A) \equiv \mathbb{E}_z\,\tilde g_{m,n}(A)$. Therefore we only need to analyze the gradient error between this $f(A)$ and $g_n(A)$ (i.e., $\tilde g_{m,n}$ is almost an unbiased estimator of $g_n$). Finally, with the concentration bounds derived in the last section, we obtain the error bound between $f(A)$ and $g(A)$.

Lemma 5. The empirical risk $\tilde g_{m,n}$ is almost an unbiased estimator of $g_n$. Specifically, the expected function $f(A) = \mathbb{E}_{z_i\sim\mathcal N(0, I_{k\times k}),\, i\in[m]}[\tilde g_{m,n}]$ satisfies
$$\|\nabla f(A) - \nabla g_n(A)\| \le O\left(\frac{1}{m}\|A\|^3 d^2\right).$$
(Footnote 2: $\tilde\Theta$ hides log factors of $d$ for simplicity.)

For an arbitrary direction matrix $B$,
$$\|D\nabla f(A)[B] - D\nabla g_n(A)[B]\| \le O\left(\frac{1}{m}\|B\|\|A\|^3 d^{5/2}\right).$$
In summary, we derive concentration bounds over the observed samples and the mini-batch size, and show that the function $f(A)$ that Algorithm 1 optimizes has gradient and Hessian close to those of the population risk $g(A)$. Therefore a second-order stationary point (SOSP) of $f(A)$ (which our algorithm is guaranteed to reach) is also an approximate SOSP of $g(A)$. Next we show that such a point also yields an approximate first-order stationary point of the reparametrized function $\tilde g(Z) \equiv g(A)$, $\forall Z = AA^\top$.

5.3 RELATION ON APPROXIMATE OPTIMALITY

In this section, we establish the relationship between $\tilde g$ and $g$. We present the general form of our target Problem 1:
$$\min_{A\in\mathbb R^{d\times k}}\ g(A) \equiv \tilde g(AA^\top) \quad\text{s.t.}\quad \mathrm{Tr}(A^\top X_i A) = y_i,\ X_i\in\mathcal S,\ y_i\in\mathbb R,\ i = 1,\cdots,n. \tag{5}$$
Similarly to the previous section, stationarity may not be apparent in the original problem. Instead, we can look at the reparametrized version:
$$\min_{Z\in\mathcal S}\ \tilde g(Z) \quad\text{s.t.}\quad \mathrm{Tr}(X_i Z) = y_i,\ X_i\in\mathcal S,\ y_i\in\mathbb R,\ i = 1,\cdots,n,\quad Z \succeq 0. \tag{6}$$

Definition 1. A matrix $A\in\mathbb R^{d\times k}$ is called an $\epsilon$-approximate second-order stationary point ($\epsilon$-SOSP) of Eqn. (5) if there exists a vector $\lambda$ such that
$$\begin{cases} \mathrm{Tr}(A^\top X_i A) = y_i,\ i\in[n], \\ \left\|\left(\nabla_Z\tilde g(AA^\top) - \sum_{i=1}^n\lambda_i X_i\right)\tilde a_j\right\| \le \epsilon\|\tilde a_j\|, \quad \{\tilde a_j\}_j \text{ span the column space of } A, \\ \mathrm{Tr}(B^\top D\nabla_A\mathcal L(A,\lambda)[B]) \ge -\epsilon\|B\|^2, \quad \forall B \text{ s.t. } \mathrm{Tr}(B^\top X_i A) = 0. \end{cases}$$
Here $\mathcal L(A,\lambda)$ is the Lagrangian $\tilde g(AA^\top) - \sum_{i=1}^n\lambda_i(\mathrm{Tr}(A^\top X_i A) - y_i)$. Specifically, when $\epsilon = 0$, the above definition is exactly the second-order KKT condition for optimizing (5). Next we present the approximate first-order KKT condition for (6):

Definition 2. A symmetric matrix $Z\in\mathcal S^n$ is an $\epsilon$-approximate first-order stationary point ($\epsilon$-FOSP) of (6) if and only if there exist a vector $\sigma\in\mathbb R^n$ and a symmetric matrix $S\in\mathcal S$ such that the following holds:
$$\begin{cases} \mathrm{Tr}(X_i Z) = y_i,\ i\in[n], \\ Z \succeq 0, \quad S \succeq -\epsilon I, \quad \|S\tilde a_j\| \le \epsilon\|\tilde a_j\|, \quad \{\tilde a_j\}_j \text{ span the column space of } Z, \\ S = \nabla_Z\tilde g(Z) - \sum_{i=1}^n\sigma_i X_i. \end{cases}$$

Lemma 6. Let the latent dimension be $k = d$. An $\epsilon$-SOSP of (5) with $A$ and $\lambda$ induces an $\epsilon$-FOSP of (6) with $Z$, $\sigma$ and $S$ satisfying $Z = AA^\top$, $\sigma = \lambda$ and $S = \nabla_Z\tilde g(AA^\top) - \sum_i\lambda_i X_i$.

It remains to show that an $\epsilon$-FOSP of $\tilde g(Z)$ indeed yields a good approximation of the ground-truth parameter matrix.

Lemma 7. If $Z$ is an $\epsilon$-FOSP of (6), then $\|Z - Z^*\|_F \le O(\epsilon)$. Here $Z^* = A^*(A^*)^\top$ is the optimal solution of (6).

Together with the previous arguments, we arrive at our main theorem connecting the recovery guarantee with the sample complexity and batch size (see footnote 3):

Theorem 3. For arbitrary $\delta < 1$ and $\epsilon$, given a small enough learning rate $\eta < 1/\mathrm{poly}(d, 1/\epsilon, \log(1/\delta))$, sample size $n \ge \tilde\Theta(d^5/\epsilon^2\log^2(1/\delta))$, batch size $m \ge O(d^5/\epsilon)$, and large enough $T = \mathrm{poly}(1/\eta, 1/\epsilon, d, \log(1/\delta))$, the output of Algorithm 1 satisfies $\|A^{(T)}(A^{(T)})^\top - Z^*\|_F \le O(\epsilon)$ with probability $1-\delta$, under Assumptions 2 & 3 and $k = d$.

(Footnote 3: The exact error bound comes from the fact that when the diagonal terms of $AA^\top$ are fixed, $\|A\|_2 = O(\sqrt d)$.)

6 SIMULATIONS

In this section, we provide simple experimental results to validate the performance of stochastic gradient descent-ascent and to support our theory. We focus on Algorithm 1, which aims to recover the parameter matrix. We conduct a thorough empirical study of three joint factors that might affect performance: the number of observed samples $m$ (we set $n = m$, as in general GAN training algorithms), the choice of activation function $\phi$, and the output dimension $d$.
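As a hedged illustration of this kind of experiment, the sketch below sets up the smallest recovery instance ($d = 2$, $k = 1$, ground truth $A^* = [1, 1]^\top$, so $A^*(A^*)^\top = [[1, 1], [1, 1]]$, and $A$ is identifiable only up to sign). It reuses the `train()` sketch given after Algorithm 1; both are our own code, not the paper's, and we pass the ground-truth row norms in place of the Section 3 estimate.

```python
import numpy as np

# Hedged reproduction sketch of the k = 1, d = 2 recovery experiment.

rng = np.random.default_rng(1)
d, k, n = 2, 1, 5000
A_star = np.ones((d, k))
x = np.tanh(rng.standard_normal((n, k)) @ A_star.T)        # observations

A_hat = train(x, k=k, row_norms=np.linalg.norm(A_star, axis=1),
              eta=5e-3, m=256, T=5_000, seed=1)
print(np.round(A_hat @ A_hat.T, 3))   # expect approximately [[1, 1], [1, 1]]
```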
1. What is the focus of the review, and what are the reviewer's concerns regarding the paper? 2. What are the strengths and weaknesses of the paper according to the reviewer? 3. Does the reviewer have any questions about the paper's content or conclusions? 4. How does the reviewer assess the novelty and significance of the paper's contributions? 5. Are there any limitations or potential biases in the reviewer's evaluation of the paper?
Review
Review I have read the authors' response. In the response the authors clarified the contributions of this paper. I agree with the authors that the analysis of gradient descent-ascent is a difficult problem, and the optimization results given in this paper are a contribution of importance. Because of this I have improved my score. However, I do not agree with the authors that studying quadratic discriminators instead of more complicated ones should be considered a contribution rather than a drawback. In my opinion, as long as the focus is on WGAN, results involving standard neural network discriminators remain more desirable than the results in this submission. For example, similar results for a neural network discriminator might be even more impactful, because the optimization problem is even more difficult. Therefore I still consider the simple discriminator and generator a weak point of this paper. ====================================================================================================== This paper studies the training of WGANs with stochastic gradient descent. The authors show that for a one-layer generator network and a quadratic discriminator, if the target distribution is modeled by a teacher network with the same architecture as the generator, then stochastic gradient descent-ascent can learn this target distribution in polynomial time. The authors also provide sample complexity results. The paper is well-written and the theoretical analysis seems to be valid and complete. However, I think the WGANs studied in this paper are simplified so much that the analysis can no longer capture the true nature of WGAN training. First, the paper only studies linear and quadratic discriminators. This is not very consistent with the original intuition of WGAN, which is to use the worst-case Lipschitz continuous neural network to approximate the supremum over all Lipschitz continuous functions in the definition of the Wasserstein distance. When the discriminator is as simple as a linear or quadratic function, there is pretty much no "Wasserstein" left in the optimization problem. Moreover, the claim that SGD learns one-layer networks can be very misleading. In fact, what is a "one-layer" neural network? - If the authors meant "two-layer network" or "single-hidden-layer network", then the claim is not true: as far as I can tell, the model $x = B \phi(A z)$ is much more difficult than the model $x = \phi(A z)$. The former is a standard single-hidden-layer network, which is non-convex, while the latter is essentially a linear model, especially when $\phi$ is known. - If the authors meant "a linear model with an elementwise monotonic transform", then I would suggest using a more appropriate name to avoid unnecessary confusion. As previously mentioned, the discriminators are too simple to approximate the Wasserstein distance, and therefore in general it should not be possible to guarantee recovery of the true data distribution. However, in this paper it is still shown that certain true distributions can be learned. This is due to the extremely simplified true model. In fact, even if the activation function $\phi$ is unknown, it seems that one can still learn $A^* (A^*)^\top$ well (for example, by Kendall's tau).
ICLR
Title SGD Learns One-Layer Networks in WGANs Abstract Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. 1 INTRODUCTION Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a prominent framework for learning generative models of complex, real-world distributions given samples from these distributions. GANs and their variants have been successfully applied to numerous datasets and tasks, including image-to-image translation (Isola et al., 2017), image super-resolution (Ledig et al., 2017), domain adaptation (Tzeng et al., 2017), probabilistic inference (Dumoulin et al., 2016), compressed sensing (Bora et al., 2017) and many more. These advances owe in part to the success of Wasserstein GANs (WGANs) (Arjovsky et al., 2017; Gulrajani et al., 2017), leveraging the neural net induced integral probability metric to better measure the difference between a target and a generated distribution. Along with the afore-described empirical successes, there have been theoretical studies of the statistical properties of GANs—see e.g. (Zhang et al., 2018; Arora et al., 2017; 2018; Bai et al., 2018; Dumoulin et al., 2016) and their references. These works have shown that, with an appropriate design of the generator and discriminator, the global optimum of the WGAN objective identifies the target distribution with low sample complexity. On the algorithmic front, prior work has focused on the stability and convergence properties of gradient descent-ascent (GDA) and its variants in GAN training and more general min-max optimization problems; see e.g. (Nagarajan & Kolter, 2017; Heusel et al., 2017; Mescheder et al., 2017; 2018; Daskalakis et al., 2017; Daskalakis & Panageas, 2018a;b; Gidel et al., 2019; Liang & Stokes, 2019; Mokhtari et al., 2019; Jin et al., 2019; Lin et al., 2019) and their references. It is known that, even in min-max optimization problems with convex-concave objectives, GDA may fail to compute the min-max solution and may even exhibit divergent behavior. Hence, these works have studied conditions under which GDA converges to a globally optimal solution under a convex-concave objective, or different types of locally optimal solutions under nonconvex-concave or nonconvex-nonconcave objectives. They have also identified variants of GDA with better stability properties in both theory and practice, most notably those using negative momentum. In the context of GAN training, Feizi et al. (2017) show that for WGANs with a linear generator and quadratic discriminator GDA succeeds in learning a Gaussian using polynomially many samples in the dimension. In the same vein, we are the first to our knowledge to study the global convergence properties of stochastic GDA in the GAN setting, and establishing such guarantees for non-linear generators. In particular, we study the WGAN formulation for learning a single-layer generative model with some reasonable choices of activations including tanh, sigmoid and leaky ReLU. Our contributions. 
For WGAN with a one-layer generator network using an activation from a large family of functions and a quadratic discriminator, we show that stochastic gradient descent-ascent learns a target distribution using polynomial time and samples, under the assumption that the target distribution is realizable in the architecture of the generator. This is achieved by a) analysis of the dynamics of stochastic gradient-descent to show it attains a global optimum of the minmax problem, and b) appropriate design of the discriminator to ensure a parametric O( 1√ n ) statistical rate (Zhang et al., 2018; Bai et al., 2018). Related Work. We briefly review relevant results in GAN training and learning generative models: - Optimization viewpoint. For standard GANs and WGANs with appropriate regularization, Nagarajan & Kolter (2017), Mescheder et al. (2017) and Heusel et al. (2017) establish sufficient conditions to achieve local convergence and stability properties for GAN training. At the equilibrium point, if the Jacobian of the associated gradient vector field has only eigenvalues with negative real-part at the equilibrium point, GAN training is verified to converge locally for small enough learning rates. A follow-up paper by (Mescheder et al., 2018) shows the necessity of these conditions by identifying a prototypical counterexample that is not always locally convergent with gradient descent based GAN optimization. However, the lack of global convergence prevents the analysis to provide any guarantees of learning the real distribution. The work of (Feizi et al., 2017) described above has similar goals as our paper, namely understanding the convergence properties of basic dynamics in simple WGAN formulations. However, they only consider linear generators, which restrict the WGAN model to learning a Gaussian. Our work goes a step further, considering WGANs whose generators are one-layer neural networks with a broad selection of activations. We show that with a proper gradient-based algorithm, we can still recover the ground truth parameters of the underlying distribution. More broadly, WGANs typically result in nonconvex-nonconcave min-max optimization problems. In these problems, a global min-max solution may not exist, and there are various notions of local min-max solutions, namely local min-local max solutions Daskalakis & Panageas (2018b), and local min solutions of the max objective Jin et al. (2019), the latter being guaranteed to exist under mild conditions. In fact, Lin et al. (2019) show that GDA is able to find stationary points of the max objective in nonconvex-concave objectives. Given that GDA may not even converge for convexconcave objectives, another line of work has studied variants of GDA that exhibit global convergence to the min-max solution Daskalakis et al. (2017); Daskalakis & Panageas (2018a); Gidel et al. (2019); Liang & Stokes (2019); Mokhtari et al. (2019), which is established for GDA variants that add negative momentum to the dynamics. While the convergence of GDA with negative momentum is shown in convex-concave settings, there is experimental evidence supporting that it improves GAN training (Daskalakis et al., 2017; Gidel et al., 2019). - Statistical viewpoint. Several works have studied the issue of mode collapse. One might doubt the ability of GANs to actually learn the distribution vs just memorize the training data (Arora et al., 2017; 2018; Dumoulin et al., 2016). Some corresponding cures have been proposed. For instance,Zhang et al. (2018); Bai et al. 
(2018) show for specific generators combined with appropriate parametric discriminator design, WGANs can attain parametric statistical rates, avoiding the exponential in dimension sample complexity (Liang, 2018; Bai et al., 2018; Feizi et al., 2017). Recent work of Wu et al. (2019) provides an algorithm to learn the distribution of a single-layer ReLU generator network. While our conclusion appears similar, our focus is very different. Our paper targets understanding when a WGAN formulation trained with stochastic GDA can learn in polynomial time and sample complexity. Their work instead relies on a specifically tailored algorithm for learning truncated normal distributions Daskalakis et al. (2018). 2 PRELIMINARIES We consider GAN formulations for learning a generatorGA : Rk → Rd of the form z 7→ x = φ(Az), where A is a d × k parameter matrix and φ some activation function. We consider discriminators Dv : Rd → R or DV : Rd → R that are linear or quadratic forms respectively for the different purposes of learning the marginals or the joint distribution. We assume latent variables z are sampled from the normal N (0, Ik×k), where Ik×k denotes the identity matrix of size k. The real/target distribution outputs samples x ∼ D = GA∗(N (0, Ik0×k0)), for some ground truth parameters A∗, where A∗ is d× k0, and we take k ≥ k0 for enough expressivity, taking k = d when k0 is unknown. The Wasserstain GAN under our choice of generator and discriminator is naturally formulated as: min A∈Rd×k max v∈Rd { f(A,v) ≡ Ex∼DDv(x)− Ez∼N (0,Ik×k)Dv(GA(z)) } .1 1We will replace v by V ∈ Rd×d when necessary. We use ai to denote the i-th row vector of A. We sometimes omit the 2 subscript, using ‖x‖ to denote the 2-norm of vector x, and ‖X‖ to denote the spectral norm of matrix X . Sn ⊂ Rn×n represents all the symmetric matrices of dimension n× n. We use Df(X0)[B] to denote the directional derivative of function f at point X0 with direction B: Df(X0)[B] = limt→0 f(X0+tB)−f(X0) t . 3 WARM-UP: LEARNING THE MARGINAL DISTRIBUTIONS As a warm-up, we ask whether a simple linear discriminator is sufficient for the purposes of learning the marginal distributions of all coordinates of D. Notice that in our setting, the i-th output of the generator is φ(x) where x ∼ N (0, ‖ai‖2), and is thus solely determined by ‖ai‖2. With a linear discriminator Dv(x) = v>x, our minimax game becomes: min A∈Rd×k max v∈Rd { f1(A,v) ≡ Ex∼D [ v>x ] − Ez∼N (0,Ik×k) [ v>φ(Az) ]} . (1) Notice that when the activation φ is an odd function, such as the tanh activation, the symmetric property of the Gaussian distribution ensures that Ex∼D[v>x] = 0, hence the linear discriminator in f1 reveals no information about A∗. Therefore specifically for odd activations (or odd plus a constant activations), we instead use an adjusted rectified linear discriminator Dv(x) ≡ v>R(x−C) to enforce some bias, where C = 12 (φ(x) + φ(−x)) for all x, and R denotes the ReLU activation. Formally, we slightly modify our loss function as: f̄1(A,v) ≡ Ex∼D [ v>R(x− C) ] − Ez∼N (0,Ik×k) [ v>R(φ(Az)− C) ] . (2) We will show that we can learn each marginal of D if the activation function φ satisfies the following. Assumption 1. The activation function φ satisfies either one of the following: 1. φ is an odd function plus constant, and φ is monotone increasing; 2. The even component of φ, i.e. 12 (φ(x)+φ(−x)), is positive and monotone increasing on x ∈ [0,∞). Remark 1. All common activation functions like (Leaky) ReLU, tanh or sigmoid function satisfy Assumption 1. Lemma 1. 
Suppose A∗ 6= 0. Consider f1 with activation that satisfies Assumption 1.2 and f̄1 with activation that satisfies Assumption 1.1. The stationary points of such f1 and f̄1 yield parameters A satisfying ‖ai‖ = ‖a∗i ‖,∀i ∈ [d]. To bound the capacity of the discriminator, similar to the Lipschitz constraint in WGAN, we regularize the discriminator. For the regularized formulation we have: Theorem 1. In the same setting as Lemma 1, alternating gradient descent-ascent with proper learning rates on min A max v {f1(A,v)− ‖v‖2/2} or respectively min A max v {f̄1(A,v)− ‖v‖2/2} recovers A such that ‖ai‖ = ‖a∗i ‖,∀i ∈ [d]. All the proofs of the paper can be found in the appendix. We show that all local min-max points in the sense of (Jin et al., 2019) of the original problem are global min-max points and recover the correct norm of a∗i ,∀i. Notice for the source data distribution x = (x1, x2, · · ·xd) ∼ D with activation φ, the marginal distribution of each xi follows φ(N (0, ‖a∗i ‖2)) and is determined by ‖a∗i ‖. Therefore we have learned the marginal distribution for each entry i. It remains to learn the joint distribution. 4 LEARNING THE JOINT DISTRIBUTION In the previous section, we utilize a (rectified) linear discriminator, such that each coordinate vi interacts with the i-th random variable. With the (rectified) linear discriminator, WGAN learns the correct ‖ai‖, for all i. However, since there’s no interaction between different coordinates of the random vector, we do not expect to learn the joint distribution with a linear discriminator. To proceed, a natural idea is to use a quadratic discriminator DV (x) := x>V x = 〈xx>, V 〉 to enforce component interactions. Similar to the previous section, we study the regularized version: min A∈Rd×k max V ∈Rd×d {f2(A, V )− 1 2 ‖V ‖2F }, (3) where f2(A, V ) = Ex∼DDV (x)− Ez∼N (0,Ik×k)DV (φ(Az)) = 〈 Ex∼D [ xx> ] − Ez∼N (0,Ik×k) [ φ(Az)φ(Az)> ] , V 〉 . By adding a regularizer on V and explicitly maximizing over V : g(A) ≡ max V { f2(A, V )− 1 2 ‖V ‖2F } = 1 2 ∥∥Ex∼D [xx>]− Ez∼N (0,Ik×k) [φ(Az)φ(Az)>]∥∥2F . In the next subsection, we first focus on analyzing the second-order stationary points of g, then we establish that gradient descent ascent converges to second-order stationary points of g . 4.1 GLOBAL CONVERGENCE FOR OPTIMIZING THE GENERATING PARAMETERS We first assume that both A and A∗ have unit row vectors, and then extend to general case since we already know how to learn the row norms from Section 3. To explicitly compute g(A), we rely on the property of Hermite polynomials. Since normalized Hermite polynomials {hi}∞i=0 forms an orthonomal basis in the functional space, we rewrite the activation function as φ(x) = ∑∞ i=0 σihi, where σi is the i-th Hermite coefficient. We use the following claim: Claim 1 ((Ge et al., 2017) Claim 4.2). Let φ be a function from R to R such that φ ∈ L2(R, e−x2/2), and let its Hermite expansion be φ = ∑∞ i=1 σihi. Then, for any unit vectors u,v ∈ Rd, we have that Ex∼N (0,Id×d) [ φ(u>x)φ(v>x) ] = ∞∑ i=0 σ2i (u >v)i. Therefore we could compute the value of f2 explicitly using the Hermite polynomial expansion: f2(A, V ) = 〈 ∞∑ i=0 σ2i ( (A∗(A∗)>)◦i − (AA>)◦i ) , V 〉 . Here X◦i is the Hadamard power operation where (X◦i)jk = (Xjk)i. Therefore we have: g(A) = 1 2 ∥∥∥∥∥ ∞∑ i=0 σ2i ( (A∗(A∗)>)◦i − (AA>)◦i )∥∥∥∥∥ 2 F We reparametrize with Z = AA> and define g̃(Z) = g(A) with individual component functions g̃jk(z) ≡ 12 ( ∑∞ i=0 σ 2 i ((z ∗ jk) i − zi))2. 
Accordingly, $z_{jk}^* = \langle a_j^*, a_k^*\rangle$ is the $(j,k)$-th component of the ground truth covariance matrix $A^*(A^*)^\top$. Assumption 2. The activation function $\phi$ is an odd function plus constant. In other words, its Hermite expansion $\phi = \sum_{i=0}^\infty \sigma_i h_i$ satisfies $\sigma_i = 0$ for even $i\ge 2$. Additionally we assume $\sigma_1\ne 0$. Remark 2. Common activations like tanh and sigmoid satisfy Assumption 2. Lemma 2. For activations including leaky ReLU and functions satisfying Assumption 2, $\tilde g(Z)$ has a unique stationary point where $Z = A^*(A^*)^\top$. Notice $\tilde g(Z) = \sum_{jk}\tilde g_{jk}(z_{jk})$ is separable across the $z_{jk}$, where each $\tilde g_{jk}$ is a polynomial scalar function. Lemma 2 comes from the fact that the only zero point of $\tilde g_{jk}'$ is $z_{jk} = z_{jk}^*$, for odd activations $\phi$ and for leaky ReLU. Then we migrate this good property to the original problem we want to solve: Problem 1. We optimize over the function $g$ when $\|a_i^*\| = 1$ for all $i$:
$$\min_A\ g(A) = \frac{1}{2}\Big\|\sum_{i=0}^\infty \sigma_i^2\big((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i}\big)\Big\|_F^2\quad\text{s.t. } a_i^\top a_i = 1,\ \forall i.$$
Existing work (Journée et al., 2008) connects $\tilde g(Z)$ to the optimization over the factorized version $g(A)$ (where $g(A)\equiv\tilde g(AA^\top)$). Specifically, when $k = d$, all second-order stationary points for $g(A)$ are first-order stationary points for $\tilde g(Z)$. Though $\tilde g$ is not convex, we are able to show that its first-order stationary points are global optima when the generator is sufficiently expressive, i.e., $k\ge k_0$. In reality we will not know the latent dimension $k_0$, therefore we just choose $k = d$ for simplicity. We make the following conclusion: Theorem 2. For activations including leaky ReLU and functions satisfying Assumption 2, when $k = d$, all second-order KKT points for Problem 1 are its global minimum. Therefore alternating projected gradient descent-ascent on Eqn. (3) converges to $A$ with $AA^\top = A^*(A^*)^\top$. The extension to non-unit vectors is straightforward, and we defer the analysis to the Appendix. 5 FINITE SAMPLE ANALYSIS
Algorithm 1 Online stochastic gradient descent-ascent on WGAN
1: Input: $n$ training samples $x_1, x_2, \dots, x_n$, where each $x_i\sim\phi(A^*z)$, $z\sim\mathcal{N}(0,I_{k\times k})$; learning rate $\eta$ for the generating parameters; number of iterations $T$.
2: Randomly initialize the generating matrix $A^{(0)}$.
3: for $t = 1, 2, \dots, T$ do
4: Generate $m$ latent variables $z_1^{(t)}, z_2^{(t)},\dots, z_m^{(t)}\sim\mathcal{N}(0,I_{k\times k})$ for the generator. The empirical function becomes
$$\tilde f^{(t)}_{m,n}(A, V) = \Big\langle\frac{1}{m}\sum_{i=1}^m\phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_ix_i^\top,\ V\Big\rangle - \frac{1}{2}\|V\|^2$$
5: Gradient ascent on $V$ with optimal step-size $\eta_V = 1$: $V^{(t)}\leftarrow V^{(t-1)} + \eta_V\nabla_V\tilde f^{(t)}_{m,n}(A^{(t-1)}, V^{(t-1)})$.
6: Sample noise $e$ uniformly from the unit sphere.
7: Projected gradient descent on $A$, with constraint set $\mathcal{C} = \{A\mid(AA^\top)_{ii} = (A^*(A^*)^\top)_{ii}\}$:
$$A^{(t)}\leftarrow\mathrm{Proj}_{\mathcal{C}}\big(A^{(t-1)} - \eta(\nabla_A\tilde f^{(t)}_{m,n}(A^{(t-1)}, V^{(t)}) + e)\big).$$
8: end for
9: Output: $A^{(T)}(A^{(T)})^\top$
In this section, we consider analyzing Algorithm 1, i.e., gradient descent-ascent on the following:
$$\tilde f^{(t)}_{m,n}(A, V) = \Big\langle\frac{1}{m}\sum_{i=1}^m\phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_ix_i^\top,\ V\Big\rangle - \frac{1}{2}\|V\|^2.\tag{4}$$
Notice that in each iteration, gradient ascent with step-size 1 finds the optimal solution for $V$. By Danskin's theorem (Danskin, 2012), our min-max optimization is essentially gradient descent over $\tilde g^{(t)}_{m,n}(A)\equiv\max_V\tilde f^{(t)}_{m,n}(A,V) = \frac{1}{2}\|\frac{1}{m}\sum_{i=1}^m\phi(Az_i^{(t)})\phi(Az_i^{(t)})^\top - \frac{1}{n}\sum_{i=1}^n x_ix_i^\top\|_F^2$ with a batch of samples $\{z_i^{(t)}\}$, i.e., stochastic gradient descent for $f_n(A)\equiv\mathbb{E}_{z_i\sim\mathcal{N}(0,I_{k\times k}),\forall i\in[m]}[\tilde g_{m,n}(A)]$.
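To make the descent-ascent loop concrete, here is a minimal NumPy sketch of Algorithm 1. This is an illustrative reconstruction, not the authors' code: the leaky-ReLU activation, widths, learning rate, and iteration count are placeholder choices; the one-step-optimal ascent update $V = \frac{1}{m}\sum_j S_j(A) - X_n$ is used in closed form; Gaussian perturbations stand in for the uniform-sphere noise $e$; and the helper names (`phi`, `dphi`, `project_rows`, `sgda_wgan`) are ours.

```python
import numpy as np

def phi(x, a=0.2):
    """Leaky-ReLU generator activation (a placeholder choice satisfying the assumptions)."""
    return np.where(x >= 0, x, a * x)

def dphi(x, a=0.2):
    return np.where(x >= 0, 1.0, a)

def project_rows(A, target_sq_norms):
    # projection onto C = {A : (A A^T)_ii = (A* A*^T)_ii}, implemented by row rescaling
    scale = np.sqrt(target_sq_norms / np.maximum((A * A).sum(axis=1), 1e-12))
    return A * scale[:, None]

def sgda_wgan(X, d, k, m=256, eta=1e-2, T=5000, noise=1e-3, seed=0):
    """One run of (a sketch of) Algorithm 1. X: n x d matrix of observations."""
    rng = np.random.default_rng(seed)
    Xn = X.T @ X / X.shape[0]                      # (1/n) sum_i x_i x_i^T
    row_norms = np.diag(Xn).copy()                 # crude stand-in for diag(A* A*^T)
    A = project_rows(rng.normal(size=(d, k)) / np.sqrt(k), row_norms)
    for _ in range(T):
        Z = rng.normal(size=(m, k))                # fresh latent batch z_1..z_m
        H = phi(Z @ A.T)                           # m x d generated samples phi(A z)
        V = H.T @ H / m - Xn                       # one ascent step with eta_V = 1 is optimal
        G = (2.0 / m) * ((dphi(Z @ A.T) * (H @ V)).T @ Z)   # grad_A <(1/m) sum_j S_j(A), V>
        A = A - eta * (G + noise * rng.normal(size=A.shape))  # Gaussian noise instead of sphere
        A = project_rows(A, row_norms)
    return A
```

The sample-size and batch-size analysis below controls how far this mini-batch objective can drift from the population risk.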
Therefore to bound the difference between fn(A) and the population risk g(A), we analyze the sample complexity required on the observation side (xi ∼ D, i ∈ [n]) and the mini-batch size required on the learning part (φ(Azj), zj ∼ N (0, Ik×k), j ∈ [m]). We will show that with large enough n,m, the algorithm specified in Algorithm 1 that optimizes over the empirical risk will yield the ground truth covariance matrix with high probability. Our proof sketch is roughly as follows: 1. With high probability, projected stochastic gradient descent finds a second order stationary point  of fn(·) as shown in Theorem 31 of (Ge et al., 2015). 2. For sufficiently large m, our empirical objective, though a biased estimator of the population risk g(·), achieves good -approximation to the population risk on both the gradient and Hessian (Lemmas 4&5). Therefore  is also an O( )-approximate second order stationary point (SOSP) for the population risk g(A). 3. We show that any -SOSP  for g(A) yields anO( )-first order stationary point (FOSP) Ẑ ≡ ÂÂ> for the semi-definite programming on g̃(Z) (Lemma 6). 4. We show that any O( )-FOSP of function g̃(Z) induces at most O( ) absolute error compared to the ground truth covariance matrix Z∗ = A∗(A∗)> (Lemma 7). 5.1 OBSERVATION SAMPLE COMPLEXITY For simplicity, we assume the activation and its gradient satisfy Lipschitz continuous, and let the Lipschitz constants be 1 w.l.o.g.: Assumption 3. Assume the activation is 1-Lipschitz and 1-smooth. To estimate observation sample complexity, we will bound the gradient and Hessian for the population risk and empirical risk on the observation samples: g(A) ≡ 1 2 ∥∥Ex∼D [xx>]− Ez∼N (0,Ik×k) [φ(Az)φ(Az)>]∥∥2F , and gn(A) ≡ 1 2 ∥∥∥∥∥ 1n n∑ i=1 xix > i − Ez∼N (0,Ik×k) [ φ(Az)φ(Az)> ]∥∥∥∥∥ 2 F . Claim 2. ∇g(A)−∇gn(A) = 2Ez [ diag(φ′(Az))(X −Xn)φ(Az)z> ] , where X = Ex∼D[xx>], and Xn = 1n ∑n i=1 xix > i . The directional derivative with arbitrary direction B is: D∇g(A)[B]−D∇gn(A)[B] = 2Ez [ diag(φ′(Az))(Xn −X)φ′(Az) ◦ (Bz)z> ] + 2Ez [ diag(φ′′(Az) ◦ (Bz))(Xn −X)φ(Az)z> ] Lemma 3. Suppose the activation satisfies Assumption 3. Pr[‖X − Xn‖ ≤ ‖X‖] ≥ 1 − δ, for n ≥ Θ̃(d/ 2 log2(1/δ))2. Lemma 4. Suppose the activation satisfies Assumption 2&3. With samples n ≥ Θ̃(d/ 2 log2(1/δ)), ‖∇g(A) − ∇gn(A)‖2 ≤ O( d‖A‖2) with probability 1 − δ. Meanwhile, ‖D∇g(A)[B] − D∇gn(A)[B]‖2 ≤ O( d3/2‖A‖2‖B‖2) with probability 1− δ. 5.2 BOUNDING MINI-BATCH SIZE Normally for empirical risk for supervised learning, the mini-batch size can be arbitrarily small since the estimator of the gradient is unbiased. However in the WGAN setting, notice for each iteration, we randomly sample a batch of random variables {zi}i∈[m], and obtain a gradient of g̃m,n(A) ≡ 12 ∥∥∥ 1n∑ni=1 xix>i − 1m∑mj=1 φ(Azj)φ(Azj)>∥∥∥2 F , in Algorithm 1. However, the finite sum is inside the Frobenius norm and the gradient on each mini-batch may no longer be an unbiased estimator for our target gn(A) = 12 ∥∥ 1 n ∑n i=1 xix > i − Ez [ φ(Az)φ(Az)> ]∥∥2 F . In other words, we conduct stochastic gradient descent over the function f(A) ≡ Ez g̃m,n(A). Therefore we just need to analyze the gradient error between this f(A) and gn(A) (i.e. g̃m,n is almost an unbiased estimator of gn). Finally with the concentration bound derived in last section, we get the error bound between f(A) and g(A). Lemma 5. The empirical risk g̃m,n is almost an unbiased estimator of gn. Specifically, the expected function f(A) = Ezi∼N (0,Ik×k),i∈[m][g̃m,n] satisfies: ‖∇f(A)−∇gn(A)‖ ≤ O( 1 m ‖A‖3d2). 
2Θ̃ hides log factors of d for simplicity. For arbitrary direction matrix B, ‖D∇f(A)[B]−D∇gn(A)[B]‖ ≤ O( 1 m ‖B‖‖A‖3d5/2). In summary, we conduct concentration bound over the observation samples and mini-batch sizes, and show the gradient of f(A) that Algorithm 1 is optimizing over has close gradient and Hessian with the population risk g(A). Therefore a second-order stationary point (SOSP) for f(A) (that our algorithm is guaranteed to achieve) is also an approximated SOSP for g(A). Next we show such a point also yield an approximated first-order stationary point of the reparametrized function g̃(Z) ≡ g(A),∀Z = AA>. 5.3 RELATION ON APPROXIMATE OPTIMALITY In this section, we establish the relationship between g̃ and g. We present the general form of our target Problem 1: minA∈Rd×k g(A) ≡ g̃(AA>) (5) s.t. Tr(A>XiA) = yi, Xi ∈ S, yi ∈ R, i = 1, · · · , n. Similar to the previous section, the stationary property might not be obvious on the original problem. Instead, we could look at the re-parametrized version as: minZ∈S g̃(Z) (6) s.t. Tr(XiZ) = yi, Xi ∈ S, yi ∈ R, i = 1, · · · , n, Z 0, Definition 1. A matrix A ∈ Rd×k is called an -approximate second-order stationary point ( -SOSP) of Eqn. (5) if there exists a vector λ such that: Tr(A >XiA) = yi, i ∈ [n] ‖(∇Z g̃(AA>)− ∑n i=1 λiXi)ãj‖ ≤ ‖ãj‖, {ãj}j span the column space of A Tr(B>D∇AL(A, λ)[B]) ≥ − ‖B‖2, ∀B s.t. Tr(B>XiA) = 0 Here L(A, λ) is the Lagrangian form g̃(AA>)− ∑n i=1 λi(Tr(A >XiA)− yi). Specifically, when = 0 the above definition is exactly the second-order KKT condition for optimizing (5). Next we present the approximate first-order KKT condition for (6): Definition 2. A symmetric matrix Z ∈ Sn is an -approximate first order stationary point of function (6) ( -FOSP) if and only if there exist a vector σ ∈ Rm and a symmetric matrix S ∈ S such that the following holds: Tr(XiZ) = yi, i ∈ [n] Z 0, S − I, ‖Sãj‖ ≤ ‖ãj‖, {ãj}j span the column space of Z S = ∇Z g̃(Z)− ∑n i=1 σiXi. Lemma 6. Let latent dimension k = d. For an -SOSP of function (5) with A and λ, it infers an -FOSP of function (6) with Z, σ and S that satisfies: Z = AA>, σ = λ and S = ∇Z g̃(AA>) −∑ i λiXi. Now it remains to show an -FOSP of g̃(Z) indeed yields a good approximation for the ground truth parameter matrix. Lemma 7. If Z is an -FOSP of function (6), then ‖Z −Z∗‖F ≤ O( ). Here Z∗ = A∗(A∗)> is the optimal solution for function (6). Together with the previous arguments, we finally achieve our main theorem on connecting the recovery guarantees with the sample complexity and batch size3: Theorem 3. For arbitrary δ < 1, , given small enough learning rate η < 1/poly(d, 1/ , log(1/δ)), let sample size n ≥ Θ̃(d5/ 2 log2(1/δ)), batch size m ≥ O(d5/ ), for large enough T=poly(1/η, 1/ , d, log(1/δ)), the output of Algorithm 1 satisfies ‖A(T )(A(T ))> − Z∗‖F ≤ O( ) with probability 1− δ, under Assumptions 2 & 3 and k = d. 3The exact error bound comes from the fact that when diagonal terms of AA> are fixed, ‖A‖2 = O( √ d). 6 SIMULATIONS In this section, we provide simple experimental results to validate the performance of stochastic gradient descent ascent and provide experimental support for our theory. We focus on Algorithm 1 that targets to recover the parameter matrix. We conduct a thorough empirical studies on three joint factors that might affect the performance: the number of observed samples m (we set n = m as in general GAN training algorithms), the different choices of activation function φ, and the output dimension d. 
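The following sketch (ours, not part of the paper) shows the shape of one such trial: teacher data $x = \phi(A^*z)$ with unit-norm rows, relative Frobenius recovery error, averaged over 20 seeds. It reuses the hypothetical `phi` and `sgda_wgan` helpers from the earlier sketch, and the exact hyperparameters are assumptions rather than the authors' settings.

```python
import numpy as np
# reuses the hypothetical helpers phi and sgda_wgan from the sketch in Section 5

def run_trial(d, n, k=2, seed=0):
    rng = np.random.default_rng(seed)
    A_star = rng.normal(size=(d, k))
    A_star /= np.linalg.norm(A_star, axis=1, keepdims=True)   # unit rows, as in Section 4
    X = phi(rng.normal(size=(n, k)) @ A_star.T)               # observations x = phi(A* z)
    A_hat = sgda_wgan(X, d=d, k=k, m=n, seed=seed)
    Z_star = A_star @ A_star.T
    return np.linalg.norm(A_hat @ A_hat.T - Z_star) / np.linalg.norm(Z_star)

for d in (3, 5, 7):
    errs = [run_trial(d, n=5000, seed=s) for s in range(20)]
    print(d, float(np.mean(errs)), float(np.std(errs)))       # mean/std over 20 runs
```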
In Figure 1 we plot the relative error for parameter estimation decrease over the increasing sample complexity. We fix the hidden dimension k = 2, and vary the output dimension over {3, 5, 7} and sample complexity over {500, 1000, 2000, 5000, 10000}. Reported values are averaged from 20 runs and we show the standard deviation with the corresponding colored shadow. Clearly the recovery error decreases with higher sample complexity and smaller output dimension. To visually demonstrate the learning process, we also include a simple comparison for different φ: i.e. leaky ReLU and tanh activations, when k = 1 and d = 2. We set the ground truth covariance matrix to be [1, 1; 1, 1], and therefore a valid result should be [1, 1] or [−1,−1]. From Figure 2 we could see that for both leaky ReLU and tanh, the stochastic gradient descent ascent performs similarly with exact recovery of the ground truth parameters. 7 CONCLUSION We analyze the convergence of stochastic gradient descent ascent for Wasserstein GAN on learning a single layer generator network. We show that stochastic gradient descent ascent algorithm attains the global min-max point, and provably recovers the parameters of the network with absolute error measured in Frobenius norm, from Θ̃(d5/ 2) i.i.d samples. A OMITTED PROOF FOR LEARNING THE DISTRIBUTION A.1 STATIONARY POINT FOR MATCHING FIRST MOMENT Proof of Lemma 1. To start with, we consider odd-plus-constant monotone increasing activations. Notice that by proposing a rectified linear discriminator, we have essentially modified the activation function as φ̃ := R(φ− C), where C = 12 (φ(x) + φ(−x)) is the constant bias term of φ. Observe that we can rewrite the objective f̄1 for this case as follows: f1(A,v) = Ez∼N (0,Ik0×k0 )v >φ̃(A∗z)− Ez∼N (0,Ik×k)v >φ̃(Az). Moreover, notice that φ̃ is positive and increasing on its support which is [0,+∞). Now let us consider the other case in our statement where φ has a positive and monotone increasing even component in [0,+∞). In this case, let us take: φ̃(x) = { φ(x) + φ(−x), x ≥ 0 0, o.w. Because of the symmetry of the Gaussian distribution, we can rewrite the objective function for this case as follows: f1(A,v) = Ez∼N (0,Ik0×k0 )v >φ̃(A∗z)− Ez∼N (0,Ik×k)v >φ̃(Az). Moreover, notice that φ̃ is positive and increasing on its support which is [0,+∞). To conclude, in both cases, the optimization objective can be written as follows, where φ̃ satisfies Assumption 1.2 and is only non-zero on [0,+∞). f1(A,v) = Ez∼N (0,Ik0×k0 )v >φ̃(A∗z)− Ez∼N (0,Ik×k)v >φ̃(Az). The stationary points of the above objective satisfy:{ ∇vf1(A,v) = Ez∼N (0,Ik0×k0 )φ̃(A ∗z)− Ez∼N (0,Ik×k)φ̃(Az) = 0, ∇ajf1(A,v) = −Ez∼N (0,Ik×k)vj φ̃′(a>j z)z = 0. We focus on the gradient over v. To achieve ∇vf1(A,v) = 0, the stationary point satisfies: ∀j,Ez∼N (0,Ik0×k0 )φ̃((a ∗ j ) >z) = Ez∼N (0,Ik×k)φ̃(a > j z), i.e. ∀j,Ex∼N (0,‖a∗j ‖2)φ̃(x) = Ex′∼N (0,‖aj‖2)φ̃(x ′). (7) To recap, for activations φ that follow Assumption 1, in both cases we have written the necessary condition on stationary point to be Eqn. (7), where φ̃ is defined differently for odd or non-odd activations, but in both cases it is positive and monotone increasing on its support [0,∞). We then argue the only solution for Eqn. (7) satisfies ‖aj‖ = ‖a∗j‖,∀j. This follows directly from the following claim: Claim 3. The function h(α) := Ex∼N (0,α2)f(x), α > 0 is a monotone increasing function if f is positive and monotone increasing on its support [0,∞). We could see from Claim 3 that the LHS and RHS of Eqn. 
(7) is simply h(‖aj‖) and h(‖a∗j‖) for each j. Now that h is an monotone increasing function, the unique solution for h(‖aj‖) = h(‖a∗j‖) is to match the norm: ‖aj‖ = ‖a∗j‖,∀j. Proof of Claim 3. h(α) = Ex∼N (0,α2)f(x) = ∫ ∞ 0 f(x)e− x2 2α2 dx y:=x/α = ∫ ∞ 0 αf(αy)e− y2 2 dy = Ey∼N (0,1)αf(αy). Notice h′(α) = Ex∼N (0,1)[αxf ′(αx) + f(αx)]. Since f , f ′, and α > 0, and we only care about the support of f where x is also positive, therefore h′ is always positive and h is monotone increasing. To sum up, at stationary point where ∇f1(A,v) = 0, we have ∀i, ‖a∗i ‖ = ‖ai‖. A.2 PROOF OF THEOREM 1 Proof of Theorem 1. We will take optimal gradient ascent steps with learning rate 1 on the discriminator side v, hence the function we will actually be optimizing over becomes (using the notation for φ̃ from section A.1): h(A) = max v f1(A,v) = 1 2 ∥∥∥Ez∼N (0,Ik0×k0 )φ̃(A∗z)− Ez∼N (0,Ik×k)φ̃(Az)∥∥∥2 . We just want to verify that there’s no spurious local minimum for h(A). Notice there’s no interaction between each row vector of A. Therefore we instead look at each hi := 1 2 ( Ez∼N (0,Ik0×k0 )φ̃((a ∗ i ) >z)− Ez∼N (0,Ik×k)φ̃(a>i z) )2 for each i. Now ∇hi(ai) = − ( Ez∼N (0,Ik0×k0 )φ̃((a ∗ i ) >z)− Ez∼N (0,Ik×k)φ̃(a>i z) ) (Ez∼N (0,Ik×k)zφ̃′(a>i z)). Due to the symmetry of the Gaussian, we take ai = ae1, where a = ‖ai‖. It is easy to see that checking whether Ez∼N (0,Ik×k)zφ̃′(a>i z) = 0 is equivalent to checking whether Ez1∼N (0,1)z1φ̃′(az1) = 0. Recall that φ̃ is supported on [0,+∞) and it is monotonically increasing on its support. Hence, Ez1∼N (0,1)z1φ̃′(az1) 6= 0 unless a = 0. Hence, suppose ‖ai‖ 6= 0,∀i. Then ∇Ah(A) = 0 iff h(A) = 0, i.e. Ez∼N (0,Ik0×k0 )φ̃(A ∗z) = Ez∼N (0,Ik×k)φ̃(Az). Therefore all stationary points of h(A) are global minima where Ez∼N (0,Ik0×k0 )φ̃(A ∗z) = Ez∼N (0,Ik×k)φ̃(Az) and according to Lemma 1, this only happens when ‖ai‖ = ‖a∗i ‖,∀i ∈ [d]. A.3 STATIONARY POINTS FOR WGAN WITH QUADRATIC DISCRIMINATOR Proof of Lemma 2. To study the stationary point for g̃(Z) = ∑ jk g̃jk(zjk), we look at individual g̃jk(z) ≡ 12 ( ∑∞ i=0 σ 2 i ((z ∗ jk) i − zi))2. Notice for odd-plus-constant activations, σi is zero for even i > 0. Recall our assumption in Lemma 2 also requires that σ1 6= 0. Since the analysis is invariance to the position of the matrix Z, we simplify the notation here and essentially want to study the stationary point for f(a) = 12 ( ∑ i odd σ 2 i (a i− bi))2 for some constant b and σi, where σ1 6= 04. f ′(a) = (∑ i odd σ2i (a i − bi) )(∑ i odd iσ2i a i−1 ) = (a− b) σ21 + ∑ i≥3 odd σ2i ai − bi a− b σ21 + ∑ i≥3 odd iσ2i a i−1 = (a− b)(I)(II). Notice now f ′(a) = 0⇔ a = b. This is because the polynomial f ′(a) is factorized to a− b and two factors I and II that are always positive. Notice here we use a i−bi a−b to denote ∑i j=0 a jbi−j , which is always nonnegative. This is simply because ai − bi always shares the same sign as a− b when i is odd. Therefore I=σ21 + ∑ i≥3 odd σ 2 i ai−bi a−b > 0,∀a. 4The zero component has been cancelled out. Meanwhile, since ai−1 is always nonnegative for each odd i, we have II= σ21 + ∑ i≥3 odd iσ 2 i a i−1 is also always positive for any a. Next, for activation like ReLU, loss g̃jk(z) = 12 (h(z)−h(z ∗ jk)) 2, where h(x) = 1π ( √ 1− x2 + (π− cos−1(x))x) (Daniely et al., 2016). Therefore h′(−1) = 0 for any z∗jk. This fact prevents us from getting the same conclusion for ReLU. However, for leaky ReLU with coefficient of leakage α ∈ (0, 1), φ(x) = max{x, αx} = (1 − α)ReLU(x) + αx. 
We have Ez∼N (0,Ik×k [ φ(a>i z)φ(a > j z) ] =(1− α)2EzReLU(a>i z)ReLU(a>j z) + (1− α)αEzReLU(a>i z)a>j z + (1− α)αEza>i zReLU(a>j z) + α2Eza>i za>j z =(1− α)2h(a>i aj) + αa>i aj Therefore for leaky ReLU g̃jk(z) = 12 ((1 − α) 2(h(z) − h(zjk∗)) + α(z − z∗jk))2, and g̃′jk(z) = ((1−α)2(h(z)−h(zjk∗))+α(z−z∗jk))((1−α)2h′(z)+α).Now with α > 0, (1−α)2h′(z)+α ≥ α for all z and g̃jk(z) = 0⇔ z = z∗jk. To sum up, for odd activations and leaky ReLU, since each g̃jk(z) only has stationary point of z = z∗jk, the stationary point Z of g̃(Z) = ∑ jk g̃jk also satisfy Z = Z ∗ = A∗(A∗)>. Proof of Theorem 2. Instead of directly looking at the second-order stationary point of Problem 1, we look at the following problem on its reparametrized version: Problem 2. minZ g̃(Z) = 12 ∥∥∥∥∥ ∞∑ i=0 σ2i ( (Z∗)◦i − Z◦i )∥∥∥∥∥ 2 F s.t. zii = 1,∀i. Z 0. Here Z∗ = A∗(A∗)> and satisfies z∗ii = 1,∀i. Compared to function g in the original problem 1, it satisfies that g̃(AA>) ≡ g(A). A matrix Z satisfies the first-order stationary point for Problem 2 if there exists a vector σ such that: zii = 1, Z 0, S 0, SZ = 0, S = ∇Zg(Z)− diag(σ). Therefore for a stationary point Z, since Z∗ = A∗(A∗)> 0, and S 0, we have 〈S,Z∗ − Z〉 = 〈S,Z∗〉 ≥ 0. Meanwhile, 〈Z∗ − Z, S〉 =〈Z∗ − Z,∇Zf(Z)− diag(σ)〉 =〈Z∗ − Z,∇Zf(Z)〉 (diag(Z∗ − Z) = 0) = ∑ i,j (z∗ij − zij)g′ij(zij) = ∑ i,j (zij − z∗ij)P (zij)(z∗ij − zij) (Refer to proof of Lemma 2 for the value of g′) =− ∑ ij (zij − z∗ij)2P (zij) ≤0 (P is always positive) Therefore 〈S,Z∗ − Z〉 = 0, and this only happens when Z = Z∗. Finally, from Journée et al. (2008) we know that any first-order stationary point for Problem 2 is a second-order stationary point for our original problem 1 5. Therefore we conclude that all second-order stationary point for Problem 1 are global minimum A: AA> = A∗(A∗)>. A.4 LANDSCAPE ANALYSIS FOR NON-UNIT GENERATING VECTORS In the previous argument, we simply assume that the norm of each generating vectors ai to be 1. This practice simplifies the computation but is not practical. Since we are able to estimate ‖ai‖ for all i first, we could analyze the landscape of our loss function for general matrix A. The main tool is to use the multiplication theorem of Hermite functions: hαn(x) := hn(αx) = bn2 c∑ i=0 αn−2i(α2 − 1)i ( n 2i ) (2i)! i! 2−ihn−2i(x). For the ease of notation, we denote the coefficient as ηn,iα := α n−2i(α2− 1)i ( n 2i ) (2i)! i! 2 −i. We extend the calculations for Hermite inner product for non-standard distributions. Lemma 8. Let (x, y) be normal variables that follow joint distributionN (0, [[α2, αβρ]; [αβρ, β2]]). Then, E[hm(x)hn(y)] = { ∑b l2 c i=0 η l,i α η l,i β ρ l−2i if m ≡ n (mod 2) 0 o.w. (8) Here l = min{m,n}. 5Throughout the analysis for low rank optimization in Journée et al. (2008), they require function g̃(Z) to be convex. However, by carefully scrutinizing the proof, one could see that this condition is not required in building the connection of first-order and second-order stationary points of g(A) and g̃(Z). For more cautious readers, we also show a relaxed version in the next section, where the equivalence of SOSP of g and FOSP of g̃ is a special case of it. Proof. Denote the normalized variables x̂ = x/α, ŷ = y/β. Let l = min{m,n}. E[hm(x)hn(y)] =E[hαm(x̂)hβn(ŷ)] = bm2 c∑ i=0 bn2 c∑ j=0 ηm,iα η n,j β E[hm−2i(x̂)hn−2j(ŷ)] = bm2 c∑ i=0 bn2 c∑ j=0 ηm,iα η n,j β δ(m−2i),(n−2j)ρ n−2j (Lemma ??) = { ∑b l2 c i=0 η l,i α η l,i β ρ l−2i if m ≡ n (mod 2) 0 o.w. . 
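As a quick sanity check of the Hermite computations above, the following Monte Carlo snippet (our addition, not part of the paper) verifies two easy instances of Lemma 8: the unit-variance case, where $\mathbb{E}[h_m(x)h_n(y)] = \delta_{mn}\rho^n$, and the scaled case $m = n = 1$, where the predicted value is $\alpha\beta\rho$.

```python
import math
import numpy as np

def h(n, x):
    """Normalized probabilists' Hermite polynomial h_n = He_n / sqrt(n!)."""
    He = [np.ones_like(x), x]
    for j in range(2, n + 1):
        He.append(x * He[-1] - (j - 1) * He[-2])   # He_j = x He_{j-1} - (j-1) He_{j-2}
    return He[n] / math.sqrt(math.factorial(n))

rng = np.random.default_rng(0)
rho, alpha, beta, N = 0.6, 1.3, 0.8, 1_000_000
g1 = rng.normal(size=N)
g2 = rho * g1 + math.sqrt(1 - rho**2) * rng.normal(size=N)

# unit-variance case: E[h_m(g1) h_n(g2)] = delta_{mn} rho^n
print(np.mean(h(3, g1) * h(3, g2)), rho**3)     # both approximately 0.216
print(np.mean(h(2, g1) * h(3, g2)))             # approximately 0 (opposite parity)
# scaled case, m = n = 1: Lemma 8 predicts eta_alpha^{1,0} eta_beta^{1,0} rho = alpha*beta*rho
print(np.mean(h(1, alpha * g1) * h(1, beta * g2)), alpha * beta * rho)
```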
Now the population risk becomes
$$g(A) = \frac{1}{2}\left\|\mathbb{E}_{x\sim\mathcal{D}}[xx^\top] - \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}[\phi(Az)\phi(Az)^\top]\right\|^2 = \frac{1}{2}\sum_{i,j\in[d]}\Big(\mathbb{E}_{z\sim\mathcal{N}(0,I_{k_0\times k_0})}\phi((a_i^*)^\top z)\phi((a_j^*)^\top z) - \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}\phi(a_i^\top z)\phi(a_j^\top z)\Big)^2 \equiv \frac{1}{2}\sum_{i,j}\tilde g_{ij}(z_{ij}).$$
To simplify the notation, for a specific $(i,j)$ pair we write $\hat x = a_i^\top z/\alpha$ with $\alpha = \|a_i\|$, and $\hat y = a_j^\top z/\beta$ with $\beta = \|a_j\|$. Namely we have $(\hat x,\hat y)\sim\mathcal{N}(0,[[1,\rho];[\rho,1]])$, where $\rho = \cos\langle a_i,a_j\rangle$. Again, recall $\phi(\alpha\hat x) = \sum_{i\ \mathrm{odd}}\sigma_i h_i(\alpha\hat x) = \sum_{i\ \mathrm{odd}}\sigma_i h_i^\alpha(\hat x)$. Then
$$\mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}[\phi(\alpha\hat x)\phi(\beta\hat y)] = \mathbb{E}\Big[\sum_{m\ \mathrm{odd}}\sigma_m h_m^\alpha(\hat x)\sum_{n\ \mathrm{odd}}\sigma_n h_n^\beta(\hat y)\Big] = \sum_{m,n\ \mathrm{odd}}\sigma_m\sigma_n\,\mathbb{E}[h_m^\alpha(\hat x)h_n^\beta(\hat y)] = \sum_{m\ \mathrm{odd}}\sigma_m\sum_{\substack{n\le m\\ n\ \mathrm{odd}}}\sigma_n\sum_{k=0}^{\lfloor n/2\rfloor}\eta_\alpha^{n,k}\eta_\beta^{n,k}\rho^{n-2k}.$$
Therefore we can write out explicitly the coefficient of each term $\rho^k$, $k$ odd, as $c_k = \sum_{n\ge k\ \mathrm{odd}}\sigma_n\,\eta_\alpha^{n,\frac{n-k}{2}}\eta_\beta^{n,\frac{n-k}{2}}\big(\sum_{m\ge n}\sigma_m\big)$, and we have $\tilde g_{ij}(z_{ij}) = \big(\sum_{k\ \mathrm{odd}}c_k z_{ij}^k - \sum_{k\ \mathrm{odd}}c_k(z_{ij}^*)^k\big)^2$. Now suppose all $\sigma_i$ have the same sign, and $\|a_i\|\ge 1$ for all $i$ or $\|a_i\|\le 1$ for all $i$; then each coefficient $c_k\ge 0$. Therefore the only stationary point of $g(Z)$ is still $Z^*$.

B OMITTED PROOFS FOR SAMPLE COMPLEXITY

B.1 OMITTED PROOFS FOR RELATION ON APPROXIMATE STATIONARY POINTS

Proof of Lemma 6. We first review what we want to prove. For a matrix $A$ that is an $\epsilon$-approximate SOSP for Eqn. (5), define $S_A = \nabla_Z\tilde g(AA^\top) - \sum_{i=1}^n\lambda_i X_i$. The conditions ensure that $A$, $\lambda$, $S_A$ satisfy
$$\operatorname{Tr}(A^\top X_i A) = y_i,\qquad \|S_A\tilde a_i\|_2\le\epsilon\|\tilde a_i\|_2\ \text{ where }\{\tilde a_j\}_j\text{ span the column space of }A,$$
$$\operatorname{Tr}(B^\top D_A\nabla_A L(A,\lambda)[B])\ge -\epsilon\|B\|_F^2\quad\forall B\ \text{s.t.}\ \operatorname{Tr}(B^\top X_i A) = 0.\tag{9}$$
We want to show that $Z := AA^\top$, $\sigma := \lambda$, and $S := S_A$ satisfy the conditions for an $\epsilon$-FOSP of Eqn. (6). Going over the conditions, it is easy to tell that all the other conditions apply automatically, and it remains to show $S_A\succeq -\epsilon I$. By noting that $\nabla_A L(A,\lambda) = 2S_A A$, one has
$$\tfrac{1}{2}\operatorname{Tr}(B^\top D_A\nabla_A L(A,\lambda)[B]) = \operatorname{Tr}(B^\top S_A B) + \operatorname{Tr}(B^\top D_A\nabla_Z\tilde g(AA^\top)[B]\,A) - \sum_{i=1}^n D_A\lambda_i[B]\operatorname{Tr}(B^\top X_i A)$$
(from Lemma 5 of Journée et al. (2008))
$$= \operatorname{Tr}(B^\top S_A B) + \operatorname{Tr}(AB^\top D_A\nabla_Z\tilde g(AA^\top)[B])\tag{10}$$
(from Eqn. (9) we have $\operatorname{Tr}(B^\top X_i A) = 0$). Notice that $A\in\mathbb{R}^{d\times k}$ and we have chosen $k = d$ for simplicity. We first argue the case where $A$ is rank-deficient, i.e. $\operatorname{rank}(A) < k$. Then there exists some vector $v\in\mathbb{R}^k$ such that $Av = 0$. For any vector $b\in\mathbb{R}^d$, let $B = bv^\top$, so that $AB^\top = Avb^\top = 0$. From (10) we further have
$$\tfrac{1}{2}\operatorname{Tr}(B^\top D_A\nabla_A L(A,\lambda)[B]) = \operatorname{Tr}(B^\top S_A B) + \operatorname{Tr}(AB^\top D_A\nabla_Z\tilde g(AA^\top)[B]) = \operatorname{Tr}(vb^\top S_A bv^\top) = \|v\|^2\,b^\top S_A b,$$
which by (9) is at least $-\tfrac{\epsilon}{2}\|B\|_F^2 = -\tfrac{\epsilon}{2}\|v\|^2\|b\|^2$. Therefore $b^\top S_A b\ge -\tfrac{\epsilon}{2}\|b\|^2$ for any $b$, i.e. $S_A\succeq -\tfrac{\epsilon}{2}I_{d\times d}$. On the other hand, when $A$ is full rank, the column space of $A$ is the entire $\mathbb{R}^d$ vector space, and $S_A\succeq -\epsilon I_{d\times d}$ follows directly from the second line of the $\epsilon$-SOSP definition.

B.2 DETAILED CALCULATIONS

Recall the population risk $g(A)\equiv\frac{1}{2}\|\mathbb{E}_{x\sim\mathcal{D}}[xx^\top] - \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}[\phi(Az)\phi(Az)^\top]\|_F^2$ and write the empirical risk on the observations as $g_n(A)\equiv\frac{1}{2}\|\frac{1}{n}\sum_{i=1}^n x_ix_i^\top - \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}[\phi(Az)\phi(Az)^\top]\|_F^2$.

Claim 4. $\nabla g(A) - \nabla g_n(A) = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top]$, where $X = \mathbb{E}_{x\sim\mathcal{D}}[xx^\top]$ and $X_n = \frac{1}{n}\sum_{i=1}^n x_ix_i^\top$.

Proof.
$$\nabla g(A) - \nabla g_n(A) = \nabla(g(A) - g_n(A)) = \tfrac{1}{2}\nabla\big\langle X - X_n,\ X + X_n - 2\mathbb{E}_z[\phi(Az)\phi(Az)^\top]\big\rangle = \nabla\big\langle X_n - X,\ \mathbb{E}_z[\phi(Az)\phi(Az)^\top]\big\rangle.$$
Now write $S(A) = \phi(Az)\phi(Az)^\top$. Entrywise,
$$[S(A+\Delta A) - S(A)]_{ij} = \phi(a_i^\top z + \Delta a_i^\top z)\phi(a_j^\top z + \Delta a_j^\top z) - \phi(a_i^\top z)\phi(a_j^\top z) = \phi'(a_i^\top z)\Delta a_i^\top z\,\phi(a_j^\top z) + \phi'(a_j^\top z)\Delta a_j^\top z\,\phi(a_i^\top z) + O(\|\Delta A\|^2),$$
so row-wise $[S(A+\Delta A) - S(A)]_{i:} = \phi'(a_i^\top z)\Delta a_i^\top z\,\phi(Az)^\top + (\phi'(Az)\circ\Delta Az)^\top\phi(a_i^\top z) + O(\|\Delta A\|^2)$, and therefore
$$S(A+\Delta A) - S(A) = \operatorname{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top + \phi(Az)z^\top\Delta A^\top\operatorname{diag}(\phi'(Az)).\tag{11}$$
And
$$g(A+\Delta A) - g_n(A+\Delta A) - (g(A) - g_n(A)) = \langle X_n - X,\ \mathbb{E}_z[S(A+\Delta A) - S(A)]\rangle = \mathbb{E}_z\langle X_n - X,\ \operatorname{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top + \phi(Az)z^\top\Delta A^\top\operatorname{diag}(\phi'(Az))\rangle = 2\,\mathbb{E}_z\langle\operatorname{diag}(\phi'(Az))(X_n - X)\phi(Az)z^\top,\ \Delta A\rangle.$$
Finally we have $\nabla g(A) - \nabla g_n(A) = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az))(X_n - X)\phi(Az)z^\top]$.

Claim 5. For an arbitrary matrix $B$, the directional derivative of $\nabla g(A) - \nabla g_n(A)$ in direction $B$ is
$$D_A\nabla g(A)[B] - D_A\nabla g_n(A)[B] = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az))(X_n - X)\,(\phi'(Az)\circ(Bz))\,z^\top] + 2\mathbb{E}_z[\operatorname{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top].$$

Proof. Writing $G(A) := \nabla g(A) - \nabla g_n(A)$,
$$G(A+tB) = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az+tBz))(X_n - X)\phi(Az+tBz)z^\top] = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az) + t(Bz)\circ\phi''(Az))(X_n - X)(\phi(Az) + t\,\phi'(Az)\circ(Bz))z^\top] + O(t^2).$$
Therefore
$$\lim_{t\to 0}\frac{G(A+tB) - G(A)}{t} = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az))(X_n - X)\,(\phi'(Az)\circ(Bz))\,z^\top] + 2\mathbb{E}_z[\operatorname{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top].$$

B.3 OMITTED PROOFS FOR OBSERVATION SAMPLE COMPLEXITY

Proof of Lemma 3. For each $x_i = \phi(Az_i)$ with $z_i\sim\mathcal{N}(0,I_{k\times k})$, each coordinate satisfies $|x_{i,j}| = |\phi(a_j^\top z_i)|\le|a_j^\top z_i|$ since $\phi$ is 1-Lipschitz (for simplicity we analyze as if $\phi(0) = 0$, w.l.o.g. throughout this section, since the bias term cancels between the observation side $\phi(A^*z)$ and the learning side $\phi(Az)$). Without loss of generality we assumed $\|a_j\| = 1$ for all $j$, therefore $a_j^\top z\sim\mathcal{N}(0,1)$. For all $i\in[n]$, $j\in[d]$, $|x_{i,j}|\le\log(nd/\delta)$ with probability $1-\delta$. Then by the matrix concentration inequality ((Vershynin, 2010), Corollary 5.52), we have with probability $1-\delta$: $(1-\epsilon)X\preceq X_n\preceq(1+\epsilon)X$ if $n\ge\Omega(d/\epsilon^2\log^2(nd/\delta))$. Therefore setting $n = \tilde\Theta(d/\epsilon^2\log^2(1/\delta))$ suffices.

Proof of Lemma 4.
$$X_{ij} = \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}\phi(a_i^\top z)\phi(a_j^\top z) = \begin{cases}0 & i\ne j\\ \mathbb{E}[\phi^2(a_i^\top z)]\le\frac{2}{\pi} & i = j.\end{cases}$$
Therefore $\|X\|_2\le\frac{2}{\pi}$. Together with Lemma 3, $\|X - X_n\|\le\frac{2\epsilon}{\pi}$ w.p. $1-\delta$. Recall $\nabla g(A) - \nabla g_n(A) = 2\mathbb{E}_z[\operatorname{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top] := 2\mathbb{E}_z G(z)$, where $G(z)$ is defined as $\operatorname{diag}(\phi'(Az))(X - X_n)\phi(Az)z^\top$. We have $\|G(z)\|\le\|A\|\|z\|^2\|X - X_n\|$, hence
$$\|\nabla g(A) - \nabla g_n(A)\|_2 = 2\|\mathbb{E}_z[G(z)]\|\le 2\mathbb{E}_z\|G(z)\|\le 2\mathbb{E}_z\|A\|\|z\|^2\|X - X_n\|\le 2\|A\|\cdot\frac{2\epsilon}{\pi}\cdot\mathbb{E}_z\|z\|^2 = \frac{4\epsilon d}{\pi}\|A\|.$$
For the directional derivative, we obtain the concentration bound in a similar way. Denote $D(z) = \operatorname{diag}(\phi'(Az))(X_n - X)(\phi'(Az)\circ(Bz))z^\top + \operatorname{diag}(\phi''(Az)\circ(Bz))(X_n - X)\phi(Az)z^\top$, so that $\|D(z)\|\le\|X_n - X\|_2\|B\|\|z\|^2(1 + \|z\|\|A\|)$. Therefore $\|D_A\nabla g(A)[B] - D_A\nabla g_n(A)[B]\|\le O(\epsilon d^{3/2}\|A\|\|B\|)$ with probability $1-\delta$.

B.4 OMITTED PROOFS ON BOUNDING MINI-BATCH SIZE

Recall $\tilde g_{m,n}(A)\equiv\frac{1}{2}\|\frac{1}{n}\sum_{i=1}^n x_ix_i^\top - \frac{1}{m}\sum_{j=1}^m\phi(Az_j)\phi(Az_j)^\top\|_F^2$, and write $S_j(A)\equiv\phi(Az_j)\phi(Az_j)^\top$. Then we have
$$\tilde g_{m,n}(A) = \frac{1}{2}\Big\langle X_n - \frac{1}{m}\sum_{j=1}^m S_j(A),\ X_n - \frac{1}{m}\sum_{j=1}^m S_j(A)\Big\rangle = \frac{1}{2m^2}\sum_{i,j}\langle S_i(A),S_j(A)\rangle - \frac{1}{m}\sum_{j=1}^m\langle S_j(A),X_n\rangle + \frac{1}{2}\|X_n\|_F^2.$$
On the other hand, our target function is
$$g_n(A)\equiv\frac{1}{2}\Big\|\frac{1}{n}\sum_{i=1}^n x_ix_i^\top - \mathbb{E}_{z\sim\mathcal{N}(0,I_{k\times k})}[\phi(Az)\phi(Az)^\top]\Big\|_F^2 = \frac{1}{2}\|\mathbb{E}_S[S]\|_F^2 - \langle\mathbb{E}_S[S],X_n\rangle + \frac{1}{2}\|X_n\|_F^2.$$
Therefore $\mathbb{E}_S\tilde g_{m,n}(A) - g_n(A) = \frac{1}{2m}(\mathbb{E}_S\|S(A)\|_F^2 - \|\mathbb{E}_S S(A)\|_F^2)$.

Claim 6. $\nabla\mathbb{E}_S\tilde g_{m,n}(A) - \nabla g_n(A) = \frac{2}{m}\mathbb{E}_z[\operatorname{diag}(\phi'(Az))S(A)\phi(Az)z^\top - \operatorname{diag}(\phi'(Az))\mathbb{E}_S[S(A)]\phi(Az)z^\top]$.

Proof.
$$\langle\nabla\mathbb{E}_S\tilde g_{m,n} - \nabla g_n,\ \Delta A\rangle = \mathbb{E}_S\tilde g_{m,n}(A+\Delta A) - g_n(A+\Delta A) - (\mathbb{E}_S\tilde g_{m,n}(A) - g_n(A)) + O(\|\Delta A\|^2)$$
$$= \frac{1}{2m}\big(\mathbb{E}_S\|S(A+\Delta A)\|_F^2 - \mathbb{E}_S\|S(A)\|_F^2 - \|\mathbb{E}_S S(A+\Delta A)\|_F^2 + \|\mathbb{E}_S S(A)\|_F^2\big) + O(\|\Delta A\|^2)$$
$$= \frac{1}{m}\big(\mathbb{E}_S\langle S(A),\ S(A+\Delta A) - S(A)\rangle - \langle\mathbb{E}_S[S(A)],\ \mathbb{E}_S[S(A+\Delta A) - S(A)]\rangle\big) + O(\|\Delta A\|^2)$$
$$= \frac{2}{m}\big(\mathbb{E}_z\langle S(A),\ \operatorname{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top\rangle - \langle\mathbb{E}_S[S(A)],\ \mathbb{E}_z\operatorname{diag}(\phi'(Az))\Delta Az\,\phi(Az)^\top\rangle\big) + O(\|\Delta A\|^2)$$
(from Eqn. (11) and the symmetry of $S$)
$$= \Big\langle\frac{2}{m}\mathbb{E}_z\big[\operatorname{diag}(\phi'(Az))S(A)\phi(Az)z^\top - \operatorname{diag}(\phi'(Az))\mathbb{E}_S[S(A)]\phi(Az)z^\top\big],\ \Delta A\Big\rangle + O(\|\Delta A\|^2).$$
Similarly to the derivation in the previous subsection, we again derive the bias in the directional derivative:

Claim 7. For an arbitrary matrix direction $B$,
$$D_A\nabla\mathbb{E}_S\tilde g_{m,n}(A)[B] - D_A\nabla g_n(A)[B] = \frac{2}{m}\mathbb{E}_z\Big[\operatorname{diag}(\phi''(Az)\circ(Bz))(S(A) - \mathbb{E}_S S(A))\phi(Az)z^\top$$
$$+ \operatorname{diag}(\phi'(Az))\big((\phi'(Az)\circ(Bz))\phi(Az)^\top - \mathbb{E}_z[(\phi'(Az)\circ(Bz))\phi(Az)^\top]\big)\phi(Az)z^\top$$
$$+ \operatorname{diag}(\phi'(Az))\big(\phi(Az)(\phi'(Az)\circ(Bz))^\top - \mathbb{E}_z[\phi(Az)(\phi'(Az)\circ(Bz))^\top]\big)\phi(Az)z^\top$$
$$+ \operatorname{diag}(\phi'(Az))(S(A) - \mathbb{E}_S S(A))(\phi'(Az)\circ(Bz))z^\top\Big].$$

B.5 OMITTED PROOF OF THE MAIN THEOREM

Proof of Lemma 7. On one hand, suppose $Z$ satisfies the $\epsilon$-FOSP property of $\tilde g$ in (6) along with the matrix $S$ and vector $\sigma$. Then
$$\langle\nabla\tilde g(Z),\ Z - Z^*\rangle = \langle S,\ Z - Z^*\rangle\quad(\text{since } Z - Z^* \text{ has zero diagonal entries})$$
$$\le\|P_T(S)\|_2\,\|P_{T^\circ}(Z - Z^*)\|_F\quad(T\text{ is the tangent cone of PSD matrices at }Z)$$
$$\le\|P_T(S)\|_2\,\|Z - Z^*\|_F = \max_j\{\tilde a_j^\top S\tilde a_j\}\,\|Z - Z^*\|_F\quad(\tilde a_j\text{ is the basis of the column space of }Z)$$
$$\le\epsilon\|Z - Z^*\|_F\tag{12}$$
(from the definition of $\epsilon$-FOSP). On the other hand, from the definition of $\tilde g$ we have
$$\langle Z - Z^*,\ \nabla\tilde g(Z)\rangle = \sum_{ij}(z_{ij} - z_{ij}^*)\,\tilde g_{ij}'(z_{ij}) = \sum_{ij}(z_{ij} - z_{ij}^*)^2\Big(\sum_{k\ \mathrm{odd}}\sigma_k^2 P_k(z_{ij})\Big)\Big(\sum_{k\ \mathrm{odd}}\sigma_k^2\,k\,z_{ij}^{k-1}\Big)\ge\|Z - Z^*\|_F^2\,\sigma_1^4.\tag{13}$$
Here the polynomial $P_k(z_{ij})\equiv(z_{ij}^k - (z_{ij}^*)^k)/(z_{ij} - z_{ij}^*)$ is always positive for $z_{ij}\ne z_{ij}^*$ and $k$ odd. Therefore, comparing (12) and (13), we have $\epsilon\|Z - Z^*\|_F\ge\|Z - Z^*\|_F^2\,\sigma_1^4$, i.e. $\|Z - Z^*\|_F\le O(\epsilon)$.

Proof of Theorem 3. From Theorem 31 of Ge et al. (2015), we know that for a small enough learning rate $\eta$ and arbitrarily small $\epsilon$, there exists a large enough $T$ such that Algorithm 1 generates an output $A^{(T)}$ that is sufficiently close to a second-order stationary point for $f$. Formally,
$$\operatorname{Tr}((A^{(T)})^\top X_i A^{(T)}) = y_i,\qquad\Big\|\Big(\nabla_A f(A^{(T)}) - \sum_{i=1}^d\lambda_i X_i A^{(T)}\Big)_{:,j}\Big\|_2\le\epsilon\min_j\|A_{j,:}\|_2\quad\forall j\in[k],$$
$$\operatorname{Tr}(B^\top D_A\nabla_A L_f(A^{(T)},\lambda)[B])\ge-\epsilon\|B\|_2^2\quad\forall B\ \text{s.t.}\ \operatorname{Tr}(B^\top X_i A) = 0,$$
where $L_f(A,\lambda) = f(A) - \sum_{i=1}^d\lambda_i(\operatorname{Tr}(A^\top X_i A) - y_i)$. Let $\{\tilde a_i = A^{(T)}r_i\}_{i=1}^k$ form the basis of the column vector space of $A^{(T)}$. Then the second line is a sufficient condition for the following:
$$\|\tilde a_j^\top(\nabla_A f(A^{(T)}) - \sum_i\lambda_i X_i A^{(T)})r_j\|_2\le\epsilon,\quad\forall j\in[k].$$
Now with the concentration bound from Lemma 5, suppose our batch size satisfies $m\ge O(d^5/\epsilon)$; then $\|\nabla_A g_n(A^{(T)}) - \nabla_A f(A^{(T)})\|_2\le\epsilon$ and $\|D_A\nabla_A g_n(A^{(T)})[B] - D_A\nabla_A f(A^{(T)})[B]\|_2\le\epsilon\|B\|_2$ for arbitrary $B$. Therefore again we get
$$\operatorname{Tr}((A^{(T)})^\top X_i A^{(T)}) = y_i,\qquad\|\tilde a_j^\top(\nabla_A g_n(A^{(T)}) - \sum_i\lambda_i X_i A^{(T)})r_j\|_2\le 2\epsilon\quad\forall j\in[k],$$
$$\operatorname{Tr}(B^\top D_A\nabla_A L_{g_n}(A^{(T)},\lambda)[B])\ge-2\epsilon\|B\|_2^2\quad\forall B\ \text{s.t.}\ \operatorname{Tr}(B^\top X_i A) = 0.$$
Next we turn to the concentration bound from Lemma 4. When the sample size satisfies $n\ge O(d^5/\epsilon^2\log^2(1/\delta))$, we have $\|D_A\nabla_A g(A)[B] - D_A\nabla_A g_n(A)[B]\|_2\le O(\epsilon\|B\|_2)$ and $\|\nabla g(A) - \nabla g_n(A)\|_2\le O(\epsilon)$ with probability $1-\delta$. Therefore, similarly, $A^{(T)}$ is an $O(\epsilon)$-SOSP for $g(A) = \frac{1}{2}\|\sum_{i=0}^\infty\sigma_i^2((A^*(A^*)^\top)^{\circ i} - (AA^\top)^{\circ i})\|_F^2$. Now with Lemma 6, which connects the approximate stationary points, $Z := A^{(T)}(A^{(T)})^\top$ is also an $\epsilon$-FOSP of $\tilde g(Z) = \frac{1}{2}\|\sum_{i=0}^\infty\sigma_i^2((Z^*)^{\circ i} - Z^{\circ i})\|_F^2$. Finally, with Lemma 7, we get $\|Z - Z^*\|_F\le O(\epsilon)$.
1. What is the main contribution of the paper regarding Stochastic Gradient Descent-Ascent and WGAN? 2. What are the limitations of the paper, particularly in the settings of the discriminator? 3. How does the reviewer assess the theoretical analysis and experimental results presented in the paper? 4. What additional information or explanations do you think the reviewer wants the authors to provide regarding error propagation and the learning process of the complementary discriminator?
Review
Review In this paper, the authors attempt to prove that stochastic gradient descent-ascent converges to a global solution of the min-max problem of WGAN, in the setting of a one-layer generator and a simple discriminator. They also show that a linear discriminator can be used to learn the marginal distribution of each coordinate, while a quadratic one can recover the joint distribution of every two coordinates. Since both the linear and the quadratic discriminator can be solved exactly in one gradient ascent step, the authors apply the standard analysis to characterize the behavior of the gradient descent step. Experiments are also carried out to support their theory that the WGAN can recover the distribution. However, the most significant drawback of this paper is that the discriminator settings are too simple, which leads to the following two problems: 1) Recovering the joint distributions of pairs of coordinates is still much weaker than the desired result of recovering the true distribution of the data. 2) The analysis of this paper cannot be extended to a more complex discriminator, since the training would then suffer from error propagation in the gradient ascent step instead of obtaining an exact solution in that step. Therefore, further explanation is needed to bound this error propagation and to clarify what a complementary, more complex discriminator would learn from the data distribution.
ICLR
Title Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions Abstract We show that the representation cost of fully connected neural networks with homogeneous nonlinearities which describes the implicit bias in function space of networks with L2-regularization or with losses such as the cross-entropy converges as the depth of the network goes to infinity to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the ‘true’ rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths which grows with the number of datapoints where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising. 1 INTRODUCTION There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes), whose implicit bias often feature a form sparsity that is absent from the kernel regime. Such active regimes have been observed for example in DNNs with small initialization (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Li et al., 2020; Jacot et al., 2022a), with L2regularization (Savarese et al., 2019; Ongie et al., 2020; Jacot et al., 2022b) or when trained on exponentially decaying losses (Gunasekar et al., 2018a;b; Soudry et al., 2018; Du et al., 2018; Ji & Telgarsky, 2018; Chizat & Bach, 2020; Ji & Telgarsky, 2020). In the latter two cases, the implicit bias is described by the representation cost: R(f) = min W:fW=f ∥W∥2 where f is a function that can be represented by the network and the minimization is over all parameters W that result in a network function fW equal to f , the parameters W form a vector and ∥W∥ is the L2-norm. The representation cost can in some cases be explicitly computed for linear networks. For diagonal linear networks, the representation cost of a linear function f(x) = wTx equals theLp normR(f) = L ∥w∥pp of the vector v for p = 2 L (Gunasekar et al., 2018a; Moroshko et al., 2020) where L is the depth of the network. For fully-connected linear networks, the representation cost of a linear function f(x) = Ax equals the Lp-Schatten norm (the Lp norm of the singular values) R(f) = L ∥A∥pp (Dai et al., 2021). A common thread between these examples is a bias towards some notion of sparsity: sparsity of the entries of the vector w in diagonal networks and sparsity of the singular values in fully connected networks. Furthermore, this bias becomes stronger with depth and in the infinite depth limit L→ ∞ the rescaled representation cost R(f)/L converges to the L0 norm ∥w∥0 (the number of non-zero entries in w) in the first case and to the rank Rank(A) in the second. 
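As a small numerical illustration of the linear-network fact quoted above (our addition, not part of the original text): for a depth-$L$ fully connected linear network, the rescaled cost is $R(f)/L = \sum_i s_i^{2/L}$ over the singular values $s_i$ of $A$, which tends to $\mathrm{Rank}(A)$ as $L\to\infty$, but only slowly when some singular values are tiny.

```python
import numpy as np

def rescaled_cost(singular_values, L):
    # R(f)/L for a depth-L fully connected linear network f(x) = Ax:
    # the Schatten-(2/L) quasi-norm sum_i s_i^(2/L), which tends to Rank(A)
    return float(np.sum(np.asarray(singular_values, dtype=float) ** (2.0 / L)))

s = [3.0, 0.5, 1e-6]   # singular values: numerically "rank 2 plus a tiny tail"
for L in (2, 4, 16, 64, 256):
    print(L, rescaled_cost(s, L))
# each nonzero singular value contributes s^(2/L) -> 1, but the tiny value 1e-6
# approaches 1 only at very large depth, previewing the depth/rank trade-off below
```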
For shallow ($L = 2$) nonlinear networks with a homogeneous activation, the representation cost also takes the form of an $L_1$ norm (Bach, 2017; Chizat & Bach, 2020; Ongie et al., 2020), leading to sparsity in the effective number of neurons in the hidden layer of the network. However, the representation cost of deeper networks does not resemble any typical norm ($L_p$ or not), though it still leads to some form of sparsity (Jacot et al., 2022b). Despite the absence of an explicit formula, we will show that the rescaled representation cost $R(f)/L$ converges to some notion of rank in nonlinear networks as $L\to\infty$, in analogy to infinite-depth linear networks.

CONTRIBUTIONS

We first introduce two notions of rank: the Jacobian rank $\mathrm{Rank}_J(f) = \max_x\mathrm{Rank}[Jf(x)]$ and the Bottleneck rank $\mathrm{Rank}_{BN}(f)$, which is the smallest integer $k$ such that $f$ can be factorized $f = h\circ g$ with inner dimension $k$. In general, $\mathrm{Rank}_J(f)\le\mathrm{Rank}_{BN}(f)$, but for functions of the form $f = \psi\circ A\circ\phi$ (for a linear map $A$ and two bijections $\psi$ and $\phi$), we have $\mathrm{Rank}_J(f) = \mathrm{Rank}_{BN}(f) = \mathrm{Rank}\,A$. These two notions of rank satisfy the properties (1) $\mathrm{Rank}\,f\in\mathbb{Z}$; (2) $\mathrm{Rank}(f\circ g)\le\min\{\mathrm{Rank}\,f,\mathrm{Rank}\,g\}$; (3) $\mathrm{Rank}(f+g)\le\mathrm{Rank}\,f + \mathrm{Rank}\,g$; (4) $\mathrm{Rank}(x\mapsto Ax+b) = \mathrm{Rank}\,A$.

We then show that in the infinite-depth limit $L\to\infty$ the rescaled representation cost of DNNs with a general homogeneous nonlinearity is sandwiched between the Jacobian and Bottleneck ranks:
$$\mathrm{Rank}_J(f)\le\lim_{L\to\infty}\frac{R(f)}{L}\le\mathrm{Rank}_{BN}(f).$$
Furthermore, $\lim_{L\to\infty}R(f)/L$ satisfies properties (2-4) above. We also conjecture that the limiting representation cost equals its upper bound $\mathrm{Rank}_{BN}(f)$.

We then study how this bias towards low-rank functions translates to finite but large depths. We first show that for large depths the rescaled norm of the parameters $\|\hat{\mathbf{W}}\|^2/L$ at any global minimum $\hat{\mathbf{W}}$ is upper bounded by $1 + C_N/L$ for a constant $C_N$ which depends on the training points. This implies that the resulting function has approximately rank 1 w.r.t. the Jacobian and Bottleneck ranks. This is however problematic if we are trying to fit a 'true function' $f^*$ whose 'true rank' $k = \mathrm{Rank}_{BN}f^*$ is larger than 1. Thankfully we show that if $k > 1$ the constant $C_N$ explodes as $N\to\infty$, so that the above bound ($\|\hat{\mathbf{W}}\|^2/L\le 1 + C_N/L$) is relevant only for very large depths when $N$ is large. We show another upper bound $\|\hat{\mathbf{W}}\|^2/L\le k + C/L$ with a constant $C$ independent of $N$, suggesting the existence of a range of intermediate depths where the network recovers the true rank $k$.

Finally, we discuss how rank recovery affects the topology of decision boundaries in classification and leads autoencoders to naturally be denoising, which we confirm with numerical experiments.

RELATED WORKS

The implicit bias of deep homogeneous networks has, to our knowledge, been much less studied than those of either linear networks or shallow nonlinear ones. (Ongie & Willett, 2022) study deep networks with only one nonlinear layer (all others being linear). Similarly, (Le & Jegelka, 2022) show a low-rank alignment phenomenon in a network whose last layers are linear. Closer to our setup is the analysis of the representation cost of deep homogeneous networks in (Jacot et al., 2022b), which gives two reformulations for the optimization in the definition of the representation cost, with some implications on the sparsity of the representations, though the infinite-depth limit is not studied.
A very similar analysis of the sparsity effect of large depth on the global minima of L2-regularized networks is given in Timor et al. (2022); however, they only show that the optimal weight matrices are almost rank 1 (and only on average), while we show low-rank properties of the learned function, as well as the existence of a layer with almost rank 1 hidden representations.

2 PRELIMINARIES
In this section, we define fully-connected DNNs and their representation cost.

FULLY CONNECTED DNNS
In this paper, we study fully connected DNNs with L + 1 layers numbered from 0 (input layer) to L (output layer). Each layer ℓ ∈ {0, ..., L} has n_ℓ neurons, with n_0 = d_in the input dimension and n_L = d_out the output dimension. The pre-activations α̃_ℓ(x) ∈ ℝ^{n_ℓ} and activations α_ℓ(x) ∈ ℝ^{n_ℓ} of the layers of the network are defined inductively as
α_0(x) = x,
α̃_ℓ(x) = W_ℓ α_{ℓ-1}(x) + b_ℓ,
α_ℓ(x) = σ(α̃_ℓ(x)),
for the n_ℓ × n_{ℓ-1} connection weight matrix W_ℓ, the n_ℓ-dimensional bias vector b_ℓ, and the nonlinearity σ : ℝ → ℝ applied entrywise to the vector α̃_ℓ(x). The parameters of the network are the collection of all connection weight matrices and bias vectors W = (W_1, b_1, ..., W_L, b_L).
We call the network function f_W : ℝ^{d_in} → ℝ^{d_out} the function that maps an input x to the pre-activations of the last layer α̃_L(x).
In this paper, we will focus on homogeneous nonlinearities σ, i.e. such that σ(λx) = λσ(x) for any λ ≥ 0 and x ∈ ℝ, such as the traditional ReLU σ(x) = max{0, x}. In our theoretical analysis we will assume that the nonlinearity is of the form
σ_a(x) = x if x ≥ 0, and σ_a(x) = ax otherwise,
for some a ∈ (−1, 1), since for a general homogeneous nonlinearity σ (which is not proportional to the identity function, the constant zero function, or the absolute value function) there are scalars a ∈ (−1, 1), b ∈ ℝ and c ∈ {+1, −1} such that σ(x) = c σ_a(bx); as a result, the global minima and representation cost are the same up to scaling.
Remark 1. By a simple generalization of the work of Arora et al. (2018), the set of functions that can be represented by networks (with any finite widths and depth) with such nonlinearities is the set of piecewise linear functions with a finite number of linear regions. In contrast, the three types of homogeneous nonlinearities we rule out (the identity, the constant, or the absolute value) lead to different sets of functions: the linear functions, the constant functions, or the piecewise linear functions f such that lim_{t→∞} ∥f(tx) − f(−tx)∥ is finite for all directions x ∈ ℝ^{d_in} (or possibly a subset of this class of functions). While some of the results of this paper could probably be generalized to the third case up to a few details, we rule it out for the sake of simplicity.
Remark 2. All of our results will be for sufficiently wide networks, i.e. for all widths n such that n_ℓ ≥ n*_ℓ for some minimal widths n*_ℓ. Moreover, these results are O(0) in the width, in the sense that above the threshold n*_ℓ the constants do not depend on the widths n_ℓ. When there are a finite number of datapoints N, it was shown by Jacot et al. (2022b) that a width of N(N + 1) is always sufficient, that is, we can always take n*_ℓ = N(N + 1) (though it is observed empirically that a much smaller width can be sufficient in some cases). When we are trying to fit a piecewise linear function over the whole input domain Ω, the width required depends on the number of linear regions (He et al., 2018).
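To fix ideas, the following minimal PyTorch sketch instantiates the architecture of this section. The class name HomogeneousMLP and the defaults are our own illustrative choices, not the paper's; the parameter norm includes biases, matching the convention that W collects all parameters.

import torch
import torch.nn as nn

class HomogeneousMLP(nn.Module):
    # Fully connected network with the homogeneous nonlinearity
    # sigma_a(x) = x if x >= 0 else a*x, i.e. a leaky ReLU with slope a in (-1, 1).
    def __init__(self, widths, a=0.2):
        super().__init__()
        # widths = [d_in, n_1, ..., n_{L-1}, d_out] defines a depth-L network.
        self.layers = nn.ModuleList(
            nn.Linear(widths[i], widths[i + 1]) for i in range(len(widths) - 1)
        )
        self.act = nn.LeakyReLU(negative_slope=a)

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = self.act(layer(x))
        return self.layers[-1](x)  # pre-activations of the last layer, as in the text

def squared_param_norm(model):
    # ||W||^2 over all weights and biases, the quantity minimized in the representation cost.
    return sum(p.pow(2).sum() for p in model.parameters())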
REPRESENTATION COST
The representation cost R(f; Ω, σ, L) is the squared norm of the optimal weights W which represent the function f restricted to Ω:
R(f; Ω, σ, L) = min_{W : f_W|Ω = f|Ω} ∥W∥²,
where the minimum is taken over all weights W of a depth-L network (with some finite widths n) such that f_W(x) = f(x) for all x ∈ Ω. If no such weights exist, we define R(f; Ω, σ, L) = ∞.
The representation cost describes the natural bias on the represented function f_W induced by adding L2-regularization on the weights W:
min_W C(f_W) + λ ∥W∥² = min_f C(f) + λ R(f; Ω, σ, L)
for any cost C (defined on functions f : Ω → ℝ^{d_out}), where the minimization on the right is over all functions f that can be represented by a depth-L network with nonlinearity σ. Therefore, if we can give a simple description of the representation cost of a function f, we can better understand what type of functions f are favored by a DNN with nonlinearity σ and depth L.
Remark 3. Note that the representation cost does not only play a role in the presence of L2-regularization; it also describes the implicit bias of networks trained on an exponentially decaying loss, such as the cross-entropy loss, as described in (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020).

3 INFINITELY DEEP NETWORKS
In this section, we first give 4 properties that a notion of rank on piecewise linear functions should satisfy and introduce two notions of rank that satisfy these properties. We then show that the infinite-depth limit L → ∞ of the rescaled representation cost R(f; Ω, σ_a, L)/L is sandwiched between the two notions of rank we introduced, and that this limit satisfies 3 of the 4 properties we introduced.

RANK OF PIECEWISE LINEAR FUNCTIONS
There is no single natural definition of rank for nonlinear functions, but we will provide two of them in this section and compare them. We focus on notions of rank for piecewise linear functions with a finite number of linear regions, since these are the functions that can be represented by DNNs with homogeneous nonlinearities (this is a corollary of Theorem 2.1 from Arora et al. (2018); for more details, see Appendix E.1). We call such functions finite piecewise linear functions (FPLF).
Let us first state a set of properties that any notion of rank on FPLFs should satisfy, inspired by the properties of the rank of linear maps:
1. The rank of a function is an integer: Rank(f) ∈ ℕ.
2. Rank(f ∘ g) ≤ min{Rank f, Rank g}.
3. Rank(f + g) ≤ Rank f + Rank g.
4. If f is affine (f(x) = Ax + b), then Rank f = Rank A.
Taking g = id or f = id in (2) implies Rank(f) ≤ min{d_in, d_out}. Properties (2) and (4) also imply that for any bijection ϕ on ℝ^d, Rank(ϕ) = Rank(ϕ⁻¹) = d. Note that these properties do not uniquely define a notion of rank. Indeed, we will now give two notions of rank which satisfy these properties but do not always match. However, any such notion of rank must agree on a large family of functions: Property 2 implies that Rank is invariant under pre- and post-composition with bijections (see Appendix A), which implies that the rank of functions of the form ψ ∘ f ∘ ϕ, for an affine function f(x) = Ax + b and two (piecewise linear) bijections ψ and ϕ, always equals Rank A.
The first notion of rank we consider is based on the rank of the Jacobian of the function:
Definition 1. The Jacobian rank of a FPLF f is Rank_J(f; Ω) = max_x Rank Jf(x), taking the max over points x where f is differentiable.
Note that since the Jacobian is constant over the linear regions of the FPLF f, we only need to take the maximum over the linear regions.
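Definition 1 can be checked numerically. The sketch below is our own illustration: it estimates Rank_J by sampling points from the domain, computing the Jacobian with automatic differentiation, and counting singular values above a relative tolerance; the tolerance sidesteps the numerical instability of exact matrix rank.

import torch

def jacobian_rank(f, xs, tol=1e-6):
    # Estimate Rank_J(f; Omega) = max_x Rank Jf(x) from sample points xs
    # (an iterable of input tensors). Since f is piecewise linear, each sample
    # almost surely lies inside a linear region where the Jacobian is exact.
    max_rank = 0
    for x in xs:
        J = torch.autograd.functional.jacobian(f, x)  # shape (d_out, d_in)
        s = torch.linalg.svdvals(J)
        if s.numel() == 0 or s.max() == 0:
            continue  # zero Jacobian: rank 0 at this point
        max_rank = max(max_rank, int((s > tol * s.max()).sum()))
    return max_rank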
As observed in (Feng et al., 2022), the Jacobian rank measures the intrinsic dimension of the output set f(Ω).
The second notion of rank is inspired by the fact that for a linear function f, the rank of f equals the minimal dimension k such that f can be written as the composition of two linear functions f = g ∘ h with inner dimension k. We define the bottleneck rank as:
Definition 2. The bottleneck rank Rank_BN(f; Ω) is the smallest integer k ∈ ℕ such that there is a factorization as the composition of two FPLFs, f|Ω = (g ∘ h)|Ω, with inner dimension k.
The following proposition relates these two notions of rank:
Proposition 1. Both Rank_J and Rank_BN satisfy properties 1-4 above. Furthermore:
• For any FPLF f and any set Ω, Rank_J(f; Ω) ≤ Rank_BN(f; Ω).
• There exists a FPLF f : ℝ² → ℝ² and a domain Ω such that Rank_J(f; Ω) = 1 and Rank_BN(f; Ω) = 2.

INFINITE-DEPTH REPRESENTATION COST
In the infinite-depth limit, the rescaled representation cost of DNNs, R^∞(f; Ω, σ_a) = lim_{L→∞} R(f; Ω, σ_a, L)/L, converges to a value 'sandwiched' between the above two notions of rank:
Theorem 1. For any bounded domain Ω and any FPLF f,
Rank_J(f; Ω) ≤ R^∞(f; Ω, σ_a) ≤ Rank_BN(f; Ω).
Furthermore, the limiting representation cost R^∞(f; Ω, σ_a) satisfies properties 2 to 4.
Proof. The lower bound follows from taking L → ∞ in Proposition 3 (see Section 4). The upper bound is constructive: a function f = h ∘ g can be represented as a network in three consecutive parts: a first part (of depth L_g) representing g, a final part (of depth L_h) representing h, and in the middle L − L_g − L_h identity layers on a k-dimensional space. The contribution to the norm of the parameters of the middle part is k(L − L_g − L_h), and it dominates as L → ∞, since the contributions of the first and final parts are finite.
Note that R^∞(f; Ω, σ_a) might satisfy property 1 as well; we were simply not able to prove it. Theorem 1 implies that for functions of the form f = ψ ∘ A ∘ ϕ for bijections ψ and ϕ, R^∞(f; Ω, σ_a) = Rank_J(f; Ω) = Rank_BN(f; Ω) = Rank A.
Remark 4. Motivated by some aspects of the proofs and a general intuition (which is described in Section 4), we conjecture that R^∞(f; Ω, σ_a) = Rank_BN(f; Ω). This would imply that the limiting representation cost does not depend on the choice of nonlinearity, as long as it is of the form σ_a (which we already proved is the case for functions of the form ψ ∘ A ∘ ϕ).
This result suggests that large-depth neural networks are biased towards functions which have a low Jacobian rank and (if our above-mentioned conjecture is true) a low Bottleneck rank, much like linear networks are biased towards low-rank linear maps. It also suggests that the rescaled norm of the parameters ∥W∥²/L is an approximate upper bound on the Jacobian rank (and, if our conjecture is true, on the Bottleneck rank too) of the function f_W. In the next section, we partly formalize these ideas.

4 RANK RECOVERY IN FINITE DEPTH NETWORKS
In this section, we study how the (approximate) rank of minimizer functions f_Ŵ (i.e. functions at a global minimum Ŵ) for the MSE loss
L_λ(W) = (1/N) Σ_{i=1}^N (f_W(x_i) − y_i)² + (λ/L) ∥W∥²,
with data sampled from a distribution with support Ω, is affected by the depth L. In particular, when the outputs are generated from a true function f* (i.e. y_i = f*(x_i)) with k = Rank_BN(f*; Ω), we study under which conditions the 'true rank' k is recovered.
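For reference, a direct transcription of this objective into a PyTorch training loop might look as follows; the optimizer choice and hyperparameters are illustrative assumptions, not taken from the paper.

import torch

def train_l2_regularized(model, X, Y, depth_L, lam=1e-3, lr=1e-3, steps=10000):
    # Minimize L_lambda(W) = (1/N) sum_i ||f_W(x_i) - y_i||^2 + (lam/L) ||W||^2.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mse = ((model(X) - Y) ** 2).sum(dim=1).mean()
        reg = sum(p.pow(2).sum() for p in model.parameters())
        (mse + (lam / depth_L) * reg).backward()
        opt.step()
    return model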
APPROXIMATE RANK 1 REGIME
One can build a function with BN-rank 1 that fits any training data (for example by first projecting the inputs to a line with no overlap and then mapping the points from the line to the outputs with a piecewise linear function). This implies the following bound:
Proposition 2. There is a constant C_N (which depends on the training data only) such that for any large enough L, at any global minimum Ŵ of the loss L_λ, the represented function f_Ŵ satisfies
(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ 1 + C_N/L.
Proof. We use the same construction as in the proof of Theorem 1 for any rank 1 function fitting the data.
This bound implies that the function f_Ŵ represented by the network at a global minimum is approximately rank 1 both w.r.t. the Jacobian and Bottleneck ranks, showing the bias towards low-rank functions even for finite (but possibly very large) depths.
Jacobian Rank: For any function f, the rescaled representation cost (1/L) R(f; Ω, σ_a, L) bounds the L_p-Schatten norm of the Jacobian (with p = 2/L) at any point:
Proposition 3. Let f be a FPLF; then at any differentiable point x, we have
∥Jf(x)∥_{2/L}^{2/L} := Σ_{k=1}^{Rank Jf(x)} s_k(Jf(x))^{2/L} ≤ (1/L) R(f; Ω, σ_a, L),
where s_k(Jf(x)) is the k-th singular value of the Jacobian Jf(x).
Together with Proposition 2, this implies that the second singular value of the Jacobian of any minimizer function must be exponentially small in L: s_2(Jf_Ŵ(x)) ≤ ((1 + C_N/L)/2)^{L/2}.
Bottleneck Rank: We can further prove the existence of a bottleneck in any minimizer network, i.e. a layer ℓ whose hidden representation is approximately rank 1:
Proposition 4. For any global minimum Ŵ of the L2-regularized loss L_λ with λ > 0 and any set of Ñ datapoints X̃ ∈ ℝ^{d_in × Ñ} (which do not have to be the training set X) with non-constant outputs, there is a layer ℓ_0 such that the first two singular values s_1, s_2 of the hidden representation Z_{ℓ_0} ∈ ℝ^{n_{ℓ_0} × Ñ} (whose columns are the activations α_{ℓ_0}(x_i) for all the inputs x_i in X̃) satisfy s_2/s_1 = O(L^{−1/4}).
The fact that the global minima of the loss are approximately rank 1 not only in the Jacobian but also in the Bottleneck sense further supports our conjecture that the limiting representation cost equals the Bottleneck rank, R^∞ = Rank_BN. Furthermore, it shows that the global minimum of the L2-regularized loss is biased towards low-rank functions for large depths, since it fits the data with (approximately) the smallest possible rank.

RANK RECOVERY FOR INTERMEDIATE DEPTHS
However, learning rank 1 functions is not always a good thing. Assume that we are trying to fit a 'true function' f* : Ω → ℝ^{d_out} with a certain rank k = Rank_BN(f*; Ω). If k > 1, the global minima of a large depth network will end up underestimating the true rank k. In contrast, in the linear setting, underestimating the true rank is almost never a problem: for example, in matrix completion one always wants to find a minimal rank solution (Candès & Recht, 2009; Arora et al., 2019). The difference is due to the fact that rank 1 nonlinear functions can fit any finite training set, which is not possible with rank 1 linear maps.
Thankfully, for large datasets it becomes more and more difficult to underestimate the rank, since for large N fitting the data with a rank 1 function requires large derivatives, which in turn implies a large parameter norm:
Theorem 2. Given a Jacobian-rank k true function f* : Ω → ℝ^{d_out} on a bounded domain Ω, for all ϵ there is a constant c_ϵ such that for any BN-rank 1 function f̂ : Ω → ℝ^{d_out} that fits f̂(x_i) = f*(x_i) on a dataset
x_1, ..., x_N sampled i.i.d. from a distribution p with support Ω, we have
(1/L) R(f̂; Ω, σ_a, L) > c_ϵ N^{(2/L)(1 − 1/k)}
with probability at least 1 − ϵ.
Proof. We show that there is a point x ∈ Ω with large derivative: ∥Jf̂(x)∥_op ≥ TSP(y_1, ..., y_N)/diam(x_1, ..., x_N), for the Traveling Salesman Problem TSP(y_1, ..., y_N), i.e. the length of the shortest path passing through every point y_1, ..., y_N, and the diameter diam(x_1, ..., x_N) of the points x_1, ..., x_N. This follows from the fact that the image of f̂ is a line going through all the y_i's, and if i and j are the first and last points visited, the image of the segment [x_i, x_j] is a line from y_i to y_j passing through all the y_k's. The diameter is bounded by diam Ω, while the TSP scales as N^{1 − 1/k} (Beardwood et al., 1959), since the y_i's are sampled from a k-dimensional distribution. The bound on the parameter norm then follows from Proposition 3.
This implies that the constant C_N in Proposition 2 explodes as the number of datapoints N increases, i.e. as N increases, larger and larger depths are required for the bound in Proposition 2 to be meaningful. In that case, a better upper bound on the norm of the parameters can be obtained, which implies that the functions f_Ŵ at global minima are approximately rank k or less (at least in the Jacobian sense, according to Proposition 3):
Proposition 5. Let the 'true function' f* : Ω → ℝ^{d_out} be piecewise linear with Rank_BN(f*) = k; then there is a constant C which depends on f* only such that any minimizer function f_Ŵ satisfies
(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ (1/L) R(f*; σ_a, Ω, L) ≤ k + C/L.
Theorem 2 and Proposition 5 imply that if the number of datapoints N is sufficiently large, namely N > ((k + C/L)/c_ϵ)^{kL/(2k−2)}, there are parameters W* that fit the true function f* with a smaller parameter norm than any choice of parameters W that fit the data with a rank 1 function. In that case, the global minima will not be rank 1 and might instead recover the true rank k. Another interpretation is that since the constant C does not depend on the number of training points N (in contrast to C_N), there is a range of depths (which grows as N → ∞) where the upper bound of Proposition 5 is below that of Proposition 2. We expect rank recovery to happen roughly in this range of depths: too small depths can lead to an overestimation of the rank¹, while too large depths can lead to an underestimation.
Remark 5. Note that in our experiments, we were not able to observe gradient descent converging to a solution that underestimates the true rank, even for very deep networks. This is probably due to gradient descent converging to one of the many local minima in the loss surface of very deep L2-regularized DNNs. Some recent theoretical results offer a possible explanation for why gradient descent naturally avoids rank 1 solutions: the proof of Proposition 2 shows that rank 1 fitting functions have exploding gradients as N → ∞, and such high-gradient functions are known (at the moment only for shallow networks with 1D inputs) to correspond to narrow minima (Mulayoff et al., 2021).
Some of our results can be applied to local minima Ŵ with a small norm: Proposition 3 implies that the Jacobian rank of f_Ŵ is approximately bounded by ∥Ŵ∥²/L. Proposition 4 also applies to local minima, but only if ∥Ŵ∥²/L ≤ 1 + C/L for some constant C, though it could be generalized.

DISCUSSION
We now propose a tentative explanation for the phenomenon observed in this section. In contrast to the rest of the paper, this discussion is informal.
¹Note that traditional regression models, such as Kernel Ridge Regression (KRR), typically overestimate the true rank, as described in Appendix D.1.
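The bottleneck of Proposition 4 (and the three-part structure discussed next) can be probed empirically by scanning the layers of a trained network for the smallest ratio s_2/s_1 of the hidden-representation matrix. A sketch, assuming the HomogeneousMLP class above (the function name is ours):

import torch

@torch.no_grad()
def bottleneck_profile(model, X):
    # For each hidden layer, return s_2/s_1 of the matrix of hidden representations
    # (rows = datapoints in X), as in Proposition 4. A ratio near zero flags an
    # approximately rank 1 bottleneck.
    ratios = []
    h = X
    for layer in model.layers[:-1]:
        h = model.act(layer(h))
        s = torch.linalg.svdvals(h)  # singular values of the representation matrix
        ratios.append((s[1] / s[0]).item() if s.numel() > 1 else 0.0)
    return ratios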
Ideally, we want to learn functions f which can be factorized as a composition h ∘ g so that not only is the inner dimension small but the two functions g, h are also not 'too complex'. These two objectives are often contradictory, and one needs to find a trade-off between them. Instead of optimizing the bottleneck rank, one might want to optimize a regularized objective of the form
min_{f = h∘g} k + γ (C(g) + C(h)),     (1)
optimizing over all possible factorizations f = h ∘ g of f with inner dimension k, where C(g) and C(h) are measures of the complexity of g and h respectively. The parameter γ ≥ 0 allows us to tune the balance between the minimization of the inner dimension and the complexity of g and h, recovering the Bottleneck rank when γ = 0. For small γ the minimizer is always rank 1 (since it is always possible to fit a finite dataset with a rank 1 function in the absence of restrictions on the complexity of g and h), but with the right choice of γ one can recover the true rank.
Some aspects of the proof techniques we used in this paper suggest that large-depth DNNs are optimizing such a cost (or an approximation thereof). Consider a deep network that fits a function f with minimal parameter norm; if we add more layers to the network, it is natural to assume that the new optimal representation of f will be almost the same as that of the shallower network, with some added (approximate) identity layers. The interesting question is where those identity layers are added. The cost of adding an identity layer at a layer ℓ equals the dimension d_ℓ of the hidden representation of the inputs at ℓ. It is therefore optimal to add identity layers where the hidden representations have minimal dimension. This suggests that for large depths, the optimal representation of a function f approximately takes the form of L_g layers representing g, then L − L_g − L_h identity layers, and finally L_h layers representing h, for some factorization f = h ∘ g with inner dimension k. We observe in Figure 1 such a three-part representation structure in an MSE task with a low-rank true function. The rescaled parameter norm would then take the form
(1/L) ∥W∥² = ((L − L_g − L_h)/L) k + (1/L) (∥W_g∥² + ∥W_h∥²),
where W_g and W_h are the parameters of the first and last parts of the network. For large depths, we can make the approximation (L − L_g − L_h)/L ≈ 1 to recover the same structure as Equation 1, with γ = 1/L, C(g) = ∥W_g∥² and C(h) = ∥W_h∥². This intuition offers a possible explanation for rank recovery in DNNs, though we are not yet able to prove it rigorously.

5 PRACTICAL IMPLICATIONS
In this section, we describe the impact of rank minimization on two practical tasks: multiclass classification and autoencoders.

MULTICLASS CLASSIFICATION
Consider a function f_{W*} : ℝ^{d_in} → ℝ^m which solves a classification task with m classes, i.e. for all training points x_i with class y_i ∈ {1, ..., m}, the y_i-th entry of the vector f_{W*}(x_i) is strictly larger than all other entries. The Bottleneck rank k = Rank_BN(f_{W*}) of f_{W*} has an impact on the topology of the resulting partition of the input space Ω into classes, leading to topological properties typical of a partition of a k-dimensional space rather than those of a partition of a d_in-dimensional space.
When k = 1, the partition will be topologically equivalent to a classification on a line, which implies the absence of tripoints, i.e.
points at the boundary of 3 (or more) classes. Indeed, any boundary point x ∈ Ω will be mapped to a boundary point z = g(x) by the first function g : Ω → ℝ in the factorization of f_{W*}; since z has at most two neighboring classes, then so does x. This property is illustrated in Figure 2: for a classification task with four classes on the plane, we observe that the partitions obtained by shallow networks (L = 2) lead to tripoints, which are absent in deeper networks (L = 9). Notice also that the presence or absence of L2-regularization has little effect on the final shape, which is in line with the observation that the cross-entropy loss leads to an implicit L2-regularization (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020), reducing the necessity of an explicit L2-regularization.

AUTOENCODERS
Consider learning an autoencoder on data of the form x = g(z), where z is sampled (with full-dimensional support) in a latent space ℝ^k and g : ℝ^k → ℝ^d is an injective FPLF. In this setting, the true rank is the intrinsic dimension k of the data, since the minimal rank function that equals the identity on the data distribution has rank k.
Assume that the learned autoencoder f̂ : ℝ^d → ℝ^d fits the data (f̂(x) = x for all x = g(z)) and recovers the rank, Rank_BN f̂ = k. At any datapoint x_0 = g(z_0) such that g is differentiable at z_0, the data support g(ℝ^k) is locally a k-dimensional affine subspace T = x_0 + Im Jg(z_0). In the linear region of f̂ that contains x_0, f̂ is an affine projection onto T, since it equals the identity when restricted to T and its Jacobian has rank k. This proves that rank-recovering autoencoders are naturally (locally) denoising (a simple numerical check is sketched after the conclusion).

6 CONCLUSION
We have shown that in infinitely deep networks, L2-regularization leads to a bias towards low-rank functions, for some notion of rank on FPLFs. We have then shown a set of results that suggest that this low-rank bias extends to large but finite depths. With the right depths, this leads to 'rank recovery', where the learned function has approximately the same rank as the 'true function'. We proposed a tentative explanation for this rank recovery: for finite but large depths, the network is biased towards functions f which can be factorized f = h ∘ g with both a small inner dimension k and small complexity of g and h. Finally, we have shown how rank recovery affects the topology of the class boundaries in a classification task and leads to natural denoising abilities in autoencoders.
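As a companion to the autoencoder discussion above, the following sketch (our own, with illustrative defaults) measures the local denoising effect: it perturbs points off the data support and checks whether the learned map brings them back closer to the clean points.

import torch

@torch.no_grad()
def denoising_gap(autoencoder, X_clean, noise_std=0.1, n_trials=10):
    # Compare distances to the clean points before and after applying the
    # autoencoder to noisy inputs; a positive gap means the map moved the
    # noisy inputs back toward the data.
    gaps = []
    for _ in range(n_trials):
        noise = noise_std * torch.randn_like(X_clean)
        before = noise.norm(dim=1).mean()
        after = (autoencoder(X_clean + noise) - X_clean).norm(dim=1).mean()
        gaps.append((before - after).item())
    return sum(gaps) / len(gaps)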
1. What are the main contributions and key concepts discussed in the paper regarding deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its notation, organization, and theoretical analysis?
3. Do you have any concerns or questions regarding the paper's content, such as assumptions made in the proof or definitions provided?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies the rank behavior of deep neural networks through three different rank-related quantities: the maximum rank of the network Jacobian, the bottleneck rank, and the representation cost. The authors rigorously demonstrate the intrinsic connections among these three concepts, revealing a fascinating property of general deep learning models. The authors also derive the representation cost of BN-rank-1 and rank-k functions. The results in this paper are appealing to a broad audience in machine learning.

Strengths And Weaknesses
Strengths:
• This paper is very well written; the notation is clear and very carefully designed, and the organization of the paper is very clear.
• The topic of rank in deep networks is very important for a broad range of domains. Many previous works have not been as successful in establishing a sufficiently general yet elegant theoretical framework for this topic. The three metrics in this work (representation cost, Jacobian rank, and BN-rank) are convincing, general, and elegant to use.
• Sufficient numerical studies validate the theoretical justifications very well.
• The theoretical results on the connections between ranks and representation costs are very interesting and compelling.
I have the following questions:
• In the representation cost, is the norm ∥·∥ the spectral norm or something else?
• In the proof of Theorem 1, do we assume that f tends to have a constant dimension for most of the intermediate layers? I think this makes sense for neural networks constructed manually, but it may not be that reasonable for the underlying target function, whose properties are unknown.
• In Definition 1, I think a better intuition for this definition of rank would be that, considering Sard's theorem and the rank theorem of manifolds, only the regions of the highest rank have true influence on the output manifold, as in [1].
• Some previous work [1] also discusses the rank behavior of networks from the viewpoint of random matrix theory. The problem with using the Jacobian rank directly is that the rank of a matrix is unstable under small noise and errors, so it is impossible to measure in practice. So perhaps using a counting measure of the significant singular values could be better.
[1] Rank Diminishing in Deep Neural Networks, NeurIPS 2022.

Clarity, Quality, Novelty And Reproducibility
Very well.
ICLR
1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the relevance of the theory in practical networks?
4. Are there any minor questions or imprecisions in the paper that the reviewer would like to bring attention to?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors consider the implicit bias of deep neural networks with homogeneous activations and linear layers, trained to minimize the square loss against a given piecewise linear target function with ℓ2-regularization. Previous works have explored the functions achieving the minimum "representation cost" (the minimum weight norm of a network that interpolates the data) in simpler cases, such as linear networks and depth-two ReLU networks. The authors consider a representation cost that is minimized over all networks of sufficiently large widths, and show that in the asymptotics where the network depth goes to infinity, the normalized representation cost converges to a suitable nonlinear notion of "rank" that the authors define. They study various perturbations off this limit (finite but large depth, a loss taken over N empirical samples), and interpret the resulting conclusions in positive and negative lights of this "low-rank bias". Simple experiments are presented that connect to classification and autoencoding problems.

Strengths And Weaknesses
Strengths
• The work considers an important and challenging problem in the study of the implicit bias of interpolating neural networks, namely the case of nonlinear neural networks of depth larger than two, which has not been successfully treated in any significant generality in prior work in this area.
Weaknesses
• The results are all phrased in terms of "representation costs of piecewise linear functions for homogeneous neural networks of sufficient width", where the width needs to depend on the target function f; in particular, the definition at the bottom of page 3 does not pertain to any fixed class of architectures. This makes it hard to compare to prior work, e.g. most relevant is Ongie et al. (2020). Why not just formulate everything in terms of "infinite width" networks as Ongie et al. do? This would seem to greatly facilitate comparison.
• The authors draw an analogy to the rank minimization implicit bias in deep linear networks throughout the paper, but it seems worth noting in this connection that the authors' results seem much weaker (for deep linear networks, the representation cost characterization holds for very general width-depth configurations, rather than only the case of depth ≫ width here).
• In the proofs, one sees that the limiting representation cost (imagine it in terms of the bottleneck rank, since the lower bound in terms of the Jacobian rank seems rather coarse: it does not capture any information about the complexity/variability/number/etc. of the different piecewise linear components of f, only their maximum slopes) is realized by a network that is infinitely deep and has fixed width (although this width in general depends on f). Such networks are impossible to learn with gradient descent; e.g. one thinks of the dying ReLU issue (http://arxiv.org/abs/1903.06733). Could the authors comment on why conclusions drawn from this limit can still be relevant for practical networks? It seems that the theory (including in Section 4, where "finite-depth networks" are considered) does not have any implications for networks whose width grows together with the depth.
- Similarly, the upper bound developed in the proofs on the representation cost seems to have no ability to capture interesting practical phenomena in the study of representation by deep homogeneous networks, such as depth separations (where a certain function f can be represented with far greater efficiency by a network of L + 1 layers than a network of L layers).
- There are some technical imprecisions that make it hard to assess the correctness of the theory (see below for some more minor ones). Since piecewise linear functions are not generally differentiable, it would make sense to comment quickly on this issue when defining the Jacobian rank (since a maximum is taken over Ω, involving points where it will not be defined).
- Proof of Theorem 1 (second inequality): the argument seems to be using implicitly the claim "if a piecewise linear f can be written as f = g ∘ h, then g and h are also piecewise linear". This does not seem to be true -- consider any diffeomorphism ϕ on h(Ω); then f = (g ∘ ϕ⁻¹) ∘ (ϕ ∘ h) gives another decomposition of f into two functions that need not be piecewise linear. The claim seems more subtle than it is treated in the proof (it does not seem immediate to me that all such decompositions are related in this way).
- It is not made clear in the main body that Theorem 1 is only shown to satisfy the bijection property for piecewise linear bijections.

Minor / Questions

- Last line in the first paragraph of "Contributions" on page 2: it would seem preferable to actually state the five properties here -- if they are somewhat technical, perhaps in "informal" form?
- I was trying to find the references for the representation costs of linear networks quoted in the introduction and in the appendices, but I could not find anywhere in the cited references (the authors give Gunasekar 2018, Moroshko 2020, Soudry 2018) the specific results that were being invoked. It would be better if a specific pointer to where in these papers the invoked results can be found were added, or a more direct reference were included (for example, https://proceedings.neurips.cc/paper/2021/hash/e22cb9d6bbb4c290a94e4fff4d68a831-Abstract.html seems to have the requisite results).
- Some technical imprecisions: Section 1 defines the Jacobian rank with a minimum, but Definition 1 defines it with a maximum. Given the context, it seems the authors are taking the class of piecewise linear functions under consideration to be those with finitely many pieces (otherwise it does not make sense to me to write max for the definition of the Jacobian rank rather than sup).
- Page 4, after the five properties: I cannot see how property 2 implies invariance under composition with bijections (property 2 does not involve a statement about composition). It is not clear to me how to deduce this from these five properties even thinking of linear maps and their rank function.

Clarity, Quality, Novelty And Reproducibility

There are some issues with clarity caused by imprecise references, typos (both in language and in proofs), and missing details in mathematical arguments. The problem considered is rather novel, and the authors provide technical arguments in the appendices.
ICLR
Title

Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions

Abstract

We show that the representation cost of fully connected neural networks with homogeneous nonlinearities, which describes the implicit bias in function space of networks with L2-regularization or with losses such as the cross-entropy, converges, as the depth of the network goes to infinity, to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the 'true' rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths, which grows with the number of datapoints, where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries, and show that autoencoders with optimal nonlinear rank are naturally denoising.

1 INTRODUCTION

There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime, where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes), whose implicit bias often features a form of sparsity that is absent from the kernel regime. Such active regimes have been observed, for example, in DNNs with small initialization (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Li et al., 2020; Jacot et al., 2022a), with L2-regularization (Savarese et al., 2019; Ongie et al., 2020; Jacot et al., 2022b), or when trained on exponentially decaying losses (Gunasekar et al., 2018a;b; Soudry et al., 2018; Du et al., 2018; Ji & Telgarsky, 2018; Chizat & Bach, 2020; Ji & Telgarsky, 2020). In the latter two cases, the implicit bias is described by the representation cost

R(f) = min_{W : f_W = f} ∥W∥²,

where f is a function that can be represented by the network and the minimization is over all parameters W that result in a network function f_W equal to f; the parameters W form a vector and ∥W∥ is the L2-norm.

The representation cost can in some cases be explicitly computed for linear networks. For diagonal linear networks, the representation cost of a linear function f(x) = wᵀx equals the Lp norm R(f) = L∥w∥_p^p of the vector w for p = 2/L (Gunasekar et al., 2018a; Moroshko et al., 2020), where L is the depth of the network. For fully-connected linear networks, the representation cost of a linear function f(x) = Ax equals the Lp-Schatten norm (the Lp norm of the singular values), R(f) = L∥A∥_p^p (Dai et al., 2021). A common thread between these examples is a bias towards some notion of sparsity: sparsity of the entries of the vector w in diagonal networks and sparsity of the singular values in fully connected networks. Furthermore, this bias becomes stronger with depth, and in the infinite depth limit L → ∞ the rescaled representation cost R(f)/L converges to the L0 norm ∥w∥₀ (the number of non-zero entries in w) in the first case and to the rank Rank(A) in the second.
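The convergence of the rescaled representation cost to the rank can be checked numerically for the fully-connected linear case quoted above. Below is a minimal sketch assuming the Dai et al. (2021) formula R(f) = L∥A∥_p^p with p = 2/L; the test matrix and the numerical-zero threshold are illustrative choices, not from the paper.

```python
import numpy as np

# Rank-2 matrix embedded in R^{5x5} (illustrative choice).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))
s = np.linalg.svd(A, compute_uv=False)
s = s[s > 1e-12]  # drop numerically-zero singular values (they contribute 0 to the sum)

for L in [2, 5, 10, 100, 1000]:
    p = 2.0 / L
    # Rescaled representation cost R(f)/L = ||A||_p^p for a depth-L linear network.
    print(f"L = {L:5d}   R(f)/L = {np.sum(s**p):.4f}")
# The printed values approach rank(A) = 2 as L grows, since s_k^(2/L) -> 1 for every s_k > 0.
```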
For shallow (L = 2) nonlinear networks with a homogeneous activation, the representation cost also takes the form of an L1 norm (Bach, 2017; Chizat & Bach, 2020; Ongie et al., 2020), leading to sparsity in the effective number of neurons in the hidden layer of the network. However, the representation cost of deeper networks does not resemble any typical norm (Lp or not), though it still leads to some form of sparsity (Jacot et al., 2022b). Despite the absence of an explicit formula, we will show that the rescaled representation cost R(f)/L converges to some notion of rank in nonlinear networks as L → ∞, in analogy to infinite depth linear networks.

CONTRIBUTIONS

We first introduce two notions of rank: the Jacobian rank Rank_J(f) = max_x Rank[Jf(x)] and the Bottleneck rank Rank_BN(f), which is the smallest integer k such that f can be factorized f = h ◦ g with inner dimension k. In general Rank_J(f) ≤ Rank_BN(f), but for functions of the form f = ψ ◦ A ◦ ϕ (for a linear map A and two bijections ψ and ϕ), we have Rank_J(f) = Rank_BN(f) = Rank A. These two notions of rank satisfy the properties (1) Rank f ∈ Z; (2) Rank(f ◦ g) ≤ min{Rank f, Rank g}; (3) Rank(f + g) ≤ Rank f + Rank g; (4) Rank(x ↦ Ax + b) = Rank A.

We then show that in the infinite depth limit L → ∞ the rescaled representation cost of DNNs with a general homogeneous nonlinearity is sandwiched between the Jacobian and Bottleneck ranks:

Rank_J(f) ≤ lim_{L→∞} R(f)/L ≤ Rank_BN(f).

Furthermore, lim_{L→∞} R(f)/L satisfies properties (2)-(4) above. We also conjecture that the limiting representation cost equals its upper bound Rank_BN(f).

We then study how this bias towards low-rank functions translates to finite but large depths. We first show that for large depths the rescaled norm of the parameters ∥Ŵ∥²/L at any global minimum Ŵ is upper bounded by 1 + C_N/L for a constant C_N which depends on the training points. This implies that the resulting function has approximately rank 1 w.r.t. the Jacobian and Bottleneck ranks. This is however problematic if we are trying to fit a 'true function' f* whose 'true rank' k = Rank_BN f* is larger than 1. Thankfully, we show that if k > 1 the constant C_N explodes as N → ∞, so that the above bound (∥Ŵ∥²/L ≤ 1 + C_N/L) is relevant only for very large depths when N is large. We show another upper bound ∥Ŵ∥²/L ≤ k + C/L with a constant C independent of N, suggesting the existence of a range of intermediate depths where the network recovers the true rank k.

Finally, we discuss how rank recovery affects the topology of decision boundaries in classification and leads autoencoders to naturally be denoising, which we confirm with numerical experiments.

RELATED WORKS

The implicit bias of deep homogeneous networks has, to our knowledge, been much less studied than that of either linear networks or shallow nonlinear ones. Ongie & Willett (2022) study deep networks with only one nonlinear layer (all others being linear). Similarly, Le & Jegelka (2022) show a low-rank alignment phenomenon in a network whose last layers are linear. Closer to our setup is the analysis of the representation cost of deep homogeneous networks in Jacot et al. (2022b), which gives two reformulations for the optimization in the definition of the representation cost, with some implications on the sparsity of the representations, though the infinite depth limit is not studied.
A very similar analysis of the sparsity effect of large depth on the global minima of L2-regularized networks is given in Timor et al. (2022); however, they only show how the optimal weight matrices are almost rank 1 (and only on average), while we show low-rank properties of the learned function, as well as the existence of a layer with almost rank 1 hidden representations.

2 PRELIMINARIES

In this section, we define fully-connected DNNs and their representation cost.

FULLY CONNECTED DNNS

In this paper, we study fully connected DNNs with L + 1 layers numbered from 0 (input layer) to L (output layer). Each layer ℓ ∈ {0, . . . , L} has n_ℓ neurons, with n_0 = d_in the input dimension and n_L = d_out the output dimension. The pre-activations α̃_ℓ(x) ∈ R^{n_ℓ} and activations α_ℓ(x) ∈ R^{n_ℓ} of the layers of the network are defined inductively as

α_0(x) = x,   α̃_ℓ(x) = W_ℓ α_{ℓ-1}(x) + b_ℓ,   α_ℓ(x) = σ(α̃_ℓ(x)),

for the n_ℓ × n_{ℓ-1} connection weight matrix W_ℓ, the n_ℓ bias vector b_ℓ, and the nonlinearity σ : R → R applied entrywise to the vector α̃_ℓ(x). The parameters of the network are the collection of all connection weight matrices and bias vectors W = (W_1, b_1, . . . , W_L, b_L). We call the network function f_W : R^{d_in} → R^{d_out} the function that maps an input x to the pre-activations of the last layer, α̃_L(x).

In this paper, we will focus on homogeneous nonlinearities σ, i.e. such that σ(λx) = λσ(x) for any λ ≥ 0 and x ∈ R, such as the traditional ReLU σ(x) = max{0, x}. In our theoretical analysis we will assume that the nonlinearity is of the form

σ_a(x) = x if x ≥ 0, and σ_a(x) = ax otherwise,

for some a ∈ (−1, 1), since for a general homogeneous nonlinearity σ (which is not proportional to the identity function, the constant zero function, or the absolute value function), there are scalars a ∈ (−1, 1), b ∈ R and c ∈ {+1, −1} such that σ(x) = c σ_a(bx); as a result, the global minima and representation cost are the same up to scaling.

Remark 1. By a simple generalization of the work of Arora et al. (2018), the set of functions that can be represented by networks (with any finite widths and depth) with such nonlinearities is the set of piecewise linear functions with a finite number of linear regions. In contrast, the three types of homogeneous nonlinearities we rule out (the identity, the constant, or the absolute value) lead to different sets of functions: the linear functions, the constant functions, or the piecewise linear functions f such that lim_{t→∞} ∥f(tx) − f(−tx)∥ is finite for all directions x ∈ R^{d_in} (or possibly a subset of this class of functions). While some of the results of this paper could probably be generalized to the third case up to a few details, we rule it out for the sake of simplicity.

Remark 2. All of our results will be for sufficiently wide networks, i.e. for all widths n such that n_ℓ ≥ n*_ℓ for some minimal widths n*_ℓ. Moreover, these results are O(0) in the width, in the sense that above the threshold n*_ℓ the constants do not depend on the widths n_ℓ. When there is a finite number of datapoints N, it was shown by Jacot et al. (2022b) that a width of N(N + 1) is always sufficient, that is, we can always take n*_ℓ = N(N + 1) (though it is observed empirically that a much smaller width can be sufficient in some cases). When we are trying to fit a piecewise linear function over the whole input domain Ω, the width required depends on the number of linear regions (He et al., 2018).
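As a concrete companion to the definitions above, here is a minimal numpy sketch of the forward pass of a fully connected network with the homogeneous nonlinearity σ_a; the widths, depth, and initialization scale are illustrative assumptions, not prescribed by the paper.

```python
import numpy as np

def sigma_a(x, a=0.2):
    """Homogeneous nonlinearity sigma_a: identity for x >= 0, slope a in (-1, 1) otherwise."""
    return np.where(x >= 0.0, x, a * x)

def network_function(x, params, a=0.2):
    """f_W(x): the pre-activations of the last layer (no nonlinearity on the output)."""
    alpha = x
    for ell, (W, b) in enumerate(params, start=1):
        pre = W @ alpha + b                              # tilde-alpha_ell = W_ell alpha_{ell-1} + b_ell
        alpha = sigma_a(pre, a) if ell < len(params) else pre
    return alpha

# A depth L = 3 network with widths n = (4, 8, 8, 2).
rng = np.random.default_rng(1)
widths = [4, 8, 8, 2]
params = [(rng.standard_normal((m, n)) / np.sqrt(n), np.zeros(m))
          for n, m in zip(widths[:-1], widths[1:])]

x = rng.standard_normal(4)
print(network_function(x, params))
# Homogeneity check for the nonlinearity: sigma_a(lambda x) == lambda sigma_a(x) for lambda >= 0.
assert np.allclose(sigma_a(2 * x), 2 * sigma_a(x))
```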
REPRESENTATION COST

The representation cost R(f; Ω, σ, L) is the squared norm of the optimal weights W that represent the function f|_Ω:

R(f; Ω, σ, L) = min_{W : f_W|_Ω = f|_Ω} ∥W∥²,

where the minimum is taken over all weights W of a depth-L network (with some finite widths n) such that f_W(x) = f(x) for all x ∈ Ω. If no such weights exist, we define R(f; Ω, σ, L) = ∞.

The representation cost describes the natural bias on the represented function f_W induced by adding L2 regularization on the weights W:

min_W C(f_W) + λ∥W∥² = min_f C(f) + λR(f; Ω, σ, L)

for any cost C (defined on functions f : Ω → R^{d_out}), where the minimization on the right is over all functions f that can be represented by a depth-L network with nonlinearity σ. Therefore, if we can give a simple description of the representation cost of a function f, we can better understand what type of functions f are favored by a DNN with nonlinearity σ and depth L.

Remark 3. Note that the representation cost does not only play a role in the presence of L2-regularization; it also describes the implicit bias of networks trained on an exponentially decaying loss, such as the cross-entropy loss, as described in (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020).

3 INFINITELY DEEP NETWORKS

In this section, we first give 4 properties that a notion of rank on piecewise linear functions should satisfy and introduce two notions of rank that satisfy these properties. We then show that the infinite-depth limit L → ∞ of the rescaled representation cost R(f; Ω, σ_a, L)/L is sandwiched between the two notions of rank we introduced, and that this limit satisfies 3 of the 4 properties we introduced.

RANK OF PIECEWISE LINEAR FUNCTIONS

There is no single natural definition of rank for nonlinear functions, but we will provide two of them in this section and compare them. We focus on notions of rank for piecewise linear functions with a finite number of linear regions, since these are the functions that can be represented by DNNs with homogeneous nonlinearities (this is a corollary of Theorem 2.1 from Arora et al. (2018); for more details, see Appendix E.1). We call such functions finite piecewise linear functions (FPLF).

Let us first state a set of properties that any notion of rank on FPLFs should satisfy, inspired by properties of the rank of linear functions:

1. The rank of a function is an integer: Rank(f) ∈ N.
2. Rank(f ◦ g) ≤ min{Rank f, Rank g}.
3. Rank(f + g) ≤ Rank f + Rank g.
4. If f is affine (f(x) = Ax + b), then Rank f = Rank A.

Taking g = id or f = id in (2) implies Rank(f) ≤ min{d_in, d_out}. Properties (2) and (4) also imply that for any bijection ϕ on R^d, Rank(ϕ) = Rank(ϕ⁻¹) = d. Note that these properties do not uniquely define a notion of rank. Indeed, we will now give two notions of rank which satisfy these properties but do not always match. However, any such notion of rank must agree on a large family of functions: Property 2 implies that Rank is invariant under pre- and post-composition with bijections (see Appendix A), which implies that the rank of functions of the form ψ ◦ f ◦ ϕ, for an affine function f(x) = Ax + b and two (piecewise linear) bijections ψ and ϕ, always equals Rank A.

The first notion of rank we consider is based on the rank of the Jacobian of the function:

Definition 1. The Jacobian rank of a FPLF f is Rank_J(f; Ω) = max_x Rank Jf(x), taking the max over points x where f is differentiable. Note that since the Jacobian is constant over the linear regions of the FPLF f, we only need to take the maximum over every linear region.
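Since the Jacobian of a FPLF is constant on each linear region and the kinks have measure zero, Rank_J can be estimated by sampling generic points and maximizing the numerical Jacobian rank. A small sketch follows, assuming PyTorch; the rank tolerance and the example map are illustrative choices, not from the paper.

```python
import torch

def jacobian_rank(f, points, tol=1e-6):
    """Estimate Rank_J(f; Omega) = max_x Rank[Jf(x)] over sampled points.

    Generic points land in the interior of a linear region with probability 1,
    so the max over samples is the max over the regions visited.
    """
    best = 0
    for x in points:
        J = torch.autograd.functional.jacobian(f, x)
        s = torch.linalg.svdvals(J)          # singular values, descending
        if s.numel() and s[0] > 0:
            best = max(best, int((s > tol * s[0]).sum()))
    return best

# Example: a BN-rank-1 map R^3 -> R^3, f = h o g with inner dimension 1.
def g(x):                                    # piecewise linear R^3 -> R
    return torch.relu(x).sum(dim=-1, keepdim=True)

def h(z):                                    # linear R -> R^3
    return torch.cat([2.0 * z, z, -z], dim=-1)

f = lambda x: h(g(x))
points = torch.randn(50, 3)
print(jacobian_rank(f, points))   # -> 1, consistent with Rank_J(f) <= Rank_BN(f) = 1
```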
As observed in (Feng et al., 2022), the Jacobian rank measures the intrinsic dimension of the output set f(Ω). The second notion of rank is inspired by the fact that for linear functions f, the rank of f equals the minimal dimension k such that f can be written as the composition f = g ◦ h of two linear functions with inner dimension k. We define the bottleneck rank as:

Definition 2. The bottleneck rank Rank_BN(f; Ω) is the smallest integer k ∈ N such that there is a factorization of f as the composition of two FPLFs, f|_Ω = (g ◦ h)|_Ω, with inner dimension k.

The following proposition relates these two notions of rank:

Proposition 1. Both Rank_J and Rank_BN satisfy properties 1-4 above. Furthermore:
• For any FPLF f and any set Ω, Rank_J(f; Ω) ≤ Rank_BN(f; Ω).
• There exists a FPLF f : R² → R² and a domain Ω such that Rank_J(f; Ω) = 1 and Rank_BN(f; Ω) = 2.

INFINITE-DEPTH REPRESENTATION COST

In the infinite-depth limit, the rescaled representation cost of DNNs, R_∞(f; Ω, σ_a) = lim_{L→∞} R(f; Ω, σ_a, L)/L, converges to a value 'sandwiched' between the above two notions of rank:

Theorem 1. For any bounded domain Ω and any FPLF f,

Rank_J(f; Ω) ≤ R_∞(f; Ω, σ_a) ≤ Rank_BN(f; Ω).

Furthermore, the limiting representation cost R_∞(f; Ω, σ_a) satisfies properties 2 to 4.

Proof. The lower bound follows from taking L → ∞ in Proposition 3 (see Section 4). The upper bound is constructive: a function f = h ◦ g can be represented as a network in three consecutive parts: a first part (of depth L_g) representing g, a final part (of depth L_h) representing h, and in the middle L − L_g − L_h identity layers on a k-dimensional space. The contribution to the norm of the parameters of the middle part is k(L − L_g − L_h), and it dominates as L → ∞, since the contributions of the first and final parts are finite.

Note that R_∞(f; Ω, σ_a) might satisfy property 1 as well; we were simply not able to prove it. Theorem 1 implies that for functions of the form f = ψ ◦ A ◦ ϕ for bijections ψ and ϕ, R_∞(f; Ω, σ_a) = Rank_J(f; Ω) = Rank_BN(f; Ω) = Rank A.

Remark 4. Motivated by some aspects of the proofs and a general intuition (which is described in Section 4), we conjecture that R_∞(f; Ω, σ_a) = Rank_BN(f; Ω). This would imply that the limiting representation cost does not depend on the choice of nonlinearity, as long as it is of the form σ_a (which we already proved is the case for functions of the form ψ ◦ A ◦ ϕ).

This result suggests that large-depth neural networks are biased towards functions which have a low Jacobian rank and (if our above-mentioned conjecture is true) a low Bottleneck rank, much like linear networks are biased towards low-rank linear maps. It also suggests that the rescaled norm of the parameters ∥W∥²/L is an approximate upper bound on the Jacobian rank (and, if our conjecture is true, on the Bottleneck rank too) of the function f_W. In the next section, we partly formalize these ideas.
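The norm accounting behind the upper bound of Theorem 1 can be written out explicitly; the following is a reconstruction of the proof sketch above, not text from the paper.

```latex
% Write f = h \circ g with inner dimension k = Rank_{BN}(f; \Omega). Using L_g layers
% for g, L_h layers for h, and L - L_g - L_h identity layers of width k in between,
\[
  R(f; \Omega, \sigma_a, L) \;\le\; \|W_g\|^2 \;+\; k\,(L - L_g - L_h) \;+\; \|W_h\|^2 ,
\]
% so dividing by L and letting L \to \infty (with \|W_g\|^2, \|W_h\|^2, L_g, L_h fixed),
\[
  \lim_{L \to \infty} \frac{R(f; \Omega, \sigma_a, L)}{L} \;\le\; k \;=\; Rank_{BN}(f; \Omega).
\]
% Each exact identity layer on R^k costs \|I_k\|^2 = k, which is where the
% k(L - L_g - L_h) term comes from.
```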
4 RANK RECOVERY IN FINITE DEPTH NETWORKS

In this section, we study how the (approximate) rank of minimizer functions f_Ŵ (i.e. functions at a global minimum Ŵ) for the MSE loss

L_λ(W) = (1/N) Σ_{i=1}^N (f_W(x_i) − y_i)² + (λ/L)∥W∥²,

with data sampled from a distribution with support Ω, is affected by the depth L. In particular, when the outputs are generated from a true function f* (i.e. y_i = f*(x_i)) with k = Rank_BN(f*; Ω), we study under which conditions the 'true rank' k is recovered.

APPROXIMATE RANK 1 REGIME

One can build a function with BN-rank 1 that fits any training data (for example, by first projecting the input to a line with no overlap and then mapping the points from the line to the outputs with a piecewise linear function). This implies the following bound:

Proposition 2. There is a constant C_N (which depends on the training data only) such that for any large enough L, at any global minimum Ŵ of the loss L_λ, the represented function f_Ŵ satisfies

(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ 1 + C_N/L.

Proof. We use the same construction as in the proof of Theorem 1 for any fitting rank 1 function.

This bound implies that the function f_Ŵ represented by the network at a global minimum is approximately rank 1, both w.r.t. the Jacobian and Bottleneck ranks, showing the bias towards low-rank functions even for finite (but possibly very large) depths.

Jacobian rank: For any function f, the rescaled representation cost (1/L) R(f; Ω, σ_a, L) bounds the Lp-Schatten norm of the Jacobian (with p = 2/L) at any point:

Proposition 3. Let f be a FPLF; then at any differentiable point x, we have

∥Jf(x)∥_{2/L}^{2/L} := Σ_{k=1}^{Rank Jf(x)} s_k(Jf(x))^{2/L} ≤ (1/L) R(f; Ω, σ_a, L),

where s_k(Jf(x)) is the k-th singular value of the Jacobian Jf(x). Together with Proposition 2, this implies that the second singular value of the Jacobian of any minimizer function must be exponentially small in L: s_2(Jf_Ŵ(x)) ≤ ((1 + C_N/L)/2)^{L/2}.

Bottleneck rank: We can further prove the existence of a bottleneck in any minimizer network, i.e. a layer ℓ whose hidden representation is approximately rank 1:

Proposition 4. For any global minimum Ŵ of the L2-regularized loss L_λ with λ > 0 and any set of Ñ datapoints X̃ ∈ R^{d_in × Ñ} (which do not have to be the training set X) with non-constant outputs, there is a layer ℓ_0 such that the first two singular values s_1, s_2 of the hidden representation Z_{ℓ_0} ∈ R^{n_{ℓ_0} × Ñ} (whose columns are the activations α_{ℓ_0}(x_i) for all the inputs x_i in X̃) satisfy s_2/s_1 = O(L^{−1/4}).

The fact that the global minima of the loss are approximately rank 1, not only in the Jacobian but also in the Bottleneck sense, further supports our conjecture that the limiting representation cost equals the Bottleneck rank, R_∞ = Rank_BN. Furthermore, it shows that the global minimum of the L2-regularized loss is biased towards low-rank functions for large depths, since it fits the data with (approximately) the smallest possible rank.
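The 'exponentially small second singular value' claim above follows from a one-line combination of Propositions 2 and 3; spelling it out (this is a reconstruction of the step, not text from the paper):

```latex
% At any differentiable x, with singular values s_1 \ge s_2 of Jf_{\hat{W}}(x),
% Proposition 3 with p = 2/L and Proposition 2 give
\[
  2\, s_2^{2/L} \;\le\; s_1^{2/L} + s_2^{2/L}
  \;\le\; \frac{1}{L}\, R(f_{\hat{W}}; \Omega, \sigma_a, L)
  \;\le\; 1 + \frac{C_N}{L},
\]
% and raising to the power L/2 yields
\[
  s_2\big(Jf_{\hat{W}}(x)\big) \;\le\; \left(\frac{1 + C_N/L}{2}\right)^{L/2}
  \;\xrightarrow[L \to \infty]{}\; 0
\]
% exponentially fast, since (1 + C_N/L)/2 \to 1/2 < 1.
```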
RANK RECOVERY FOR INTERMEDIATE DEPTHS

However, learning rank 1 functions is not always a good thing. Assume that we are trying to fit a 'true function' f* : Ω → R^{d_out} with a certain rank k = Rank_BN(f*; Ω). If k > 1, the global minima of a large-depth network will end up underestimating the true rank k. In contrast, in the linear setting, underestimating the true rank is almost never a problem: for example, in matrix completion one always wants to find a minimal rank solution (Candès & Recht, 2009; Arora et al., 2019). The difference is due to the fact that rank 1 nonlinear functions can fit any finite training set, which is not the case in the linear setting.

Thankfully, for large datasets it becomes more and more difficult to underestimate the rank, since for large N fitting the data with a rank 1 function requires large derivatives, which in turn imply a large parameter norm:

Theorem 2. Given a Jacobian-rank k true function f* : Ω → R^{d_out} on a bounded domain Ω, then for all ϵ there is a constant c_ϵ such that for any BN-rank 1 function f̂ : Ω → R^{d_out} that fits f̂(x_i) = f*(x_i) on a dataset x_1, . . . , x_N sampled i.i.d. from a distribution p with support Ω, we have

(1/L) R(f̂; Ω, σ_a, L) > c_ϵ N^{(2/L)(1 − 1/k)}

with probability at least 1 − ϵ.

Proof. We show that there is a point x ∈ Ω with large derivative: ∥Jf̂(x)∥_op ≥ TSP(y_1, . . . , y_N) / diam(x_1, . . . , x_N), where TSP(y_1, . . . , y_N) is the length of the shortest path passing through every point y_1, . . . , y_N (the Traveling Salesman Problem) and diam(x_1, . . . , x_N) is the diameter of the points x_1, . . . , x_N. This follows from the fact that the image of f̂ is a line going through all the y_i's, and if i and j are the first and last points visited, the image of the segment [x_i, x_j] is a line from y_i to y_j passing through all the y_k's. The diameter is bounded by diam Ω, while the TSP scales as N^{1 − 1/k} (Beardwood et al., 1959) since the y_i's are sampled from a k-dimensional distribution. The bound on the parameter norm then follows from Proposition 3.

This implies that the constant C_N in Proposition 2 explodes as the number of datapoints N increases, i.e. as N increases, larger and larger depths are required for the bound in Proposition 2 to be meaningful. In that case, a better upper bound on the norm of the parameters can be obtained, which implies that the functions f_Ŵ at global minima are approximately rank k or less (at least in the Jacobian sense, according to Proposition 3):

Proposition 5. Let the 'true function' f* : Ω → R^{d_out} be piecewise linear with Rank_BN(f*) = k; then there is a constant C, which depends on f* only, such that any minimizer function f_Ŵ satisfies

(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ (1/L) R(f*; σ_a, Ω, L) ≤ k + C/L.

Theorem 2 and Proposition 5 imply that if the number of datapoints N is sufficiently large (N > ((k + C/L)/c_ϵ)^{kL/(2k−2)}), there are parameters W* that fit the true function f* with a smaller parameter norm than any choice of parameters W that fit the data with a rank 1 function. In that case, the global minima will not be rank 1 and might instead recover the true rank k.
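The sample-size threshold quoted above comes from comparing the two bounds directly; as a worked step (again a reconstruction, not text from the paper):

```latex
% Rank-1 interpolants cost at least (Theorem 2):  \tfrac{1}{L} R \;\ge\; c_\epsilon N^{\frac{2}{L}(1 - \frac{1}{k})},
% while fitting the true rank-k function costs at most (Proposition 5):  k + \tfrac{C}{L}.
% Rank-1 solutions stop being optimal as soon as
\[
  c_\epsilon\, N^{\frac{2}{L}\cdot\frac{k-1}{k}} \;>\; k + \frac{C}{L}
  \quad\Longleftrightarrow\quad
  N \;>\; \left(\frac{k + C/L}{c_\epsilon}\right)^{\frac{kL}{2(k-1)}}
  \;=\; \left(\frac{k + C/L}{c_\epsilon}\right)^{\frac{kL}{2k-2}},
\]
% matching the threshold in the text. Note the exponent grows linearly in L:
% the deeper the network, the more data is needed to rule out rank-1 fits.
```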
Another interpretation is that, since the constant C does not depend on the number of training points N (in contrast to C_N), there is a range of depths (which grows as N → ∞) where the upper bound of Proposition 5 is below that of Proposition 2. We expect rank recovery to happen roughly in this range of depths: too small depths can lead to an overestimation of the rank¹, while too large depths can lead to an underestimation.

¹Note that traditional regression models, such as Kernel Ridge Regression (KRR), typically overestimate the true rank, as described in Appendix D.1.

Remark 5. Note that in our experiments, we were not able to observe gradient descent converging to a solution that underestimates the true rank, even for very deep networks. This is probably due to gradient descent converging to one of the many local minima in the loss surface of very deep L2-regularized DNNs. Some recent theoretical results offer a possible explanation for why gradient descent naturally avoids rank 1 solutions: the proof of Proposition 2 shows that rank 1 fitting functions have exploding gradients as N → ∞, and such high-gradient functions are known (at the moment only for shallow networks with 1D inputs) to correspond to narrow minima (Mulayoff et al., 2021). Some of our results can be applied to local minima Ŵ with a small norm: Proposition 3 implies that the Jacobian rank of f_Ŵ is approximately bounded by ∥Ŵ∥²/L. Proposition 4 also applies to local minima, but only if ∥Ŵ∥²/L ≤ 1 + C/L for some constant C, though it could be generalized.

DISCUSSION

We now propose a tentative explanation for the phenomenon observed in this section. In contrast to the rest of the paper, this discussion is informal.

Ideally, we want to learn functions f which can be factorized as a composition h ◦ g so that not only is the inner dimension small, but the two functions g, h are also not 'too complex'. These two objectives are often contradictory, and one needs to find a trade-off between the two. Instead of optimizing the bottleneck rank, one might want to optimize with a regularization term of the form

min_{f = h ◦ g} k + γ (C(g) + C(h)),   (1)

optimizing over all possible factorizations f = h ◦ g of f with inner dimension k, where C(g) and C(h) are measures of the complexity of g and h, respectively. The parameter γ ≥ 0 allows us to tune the balance between the minimization of the inner dimension and the complexity of g and h, recovering the Bottleneck rank when γ = 0. For small γ the minimizer is always rank 1 (since it is always possible to fit a finite dataset with a rank 1 function in the absence of restrictions on the complexity of g and h), but with the right choice of γ one can recover the true rank.

Some aspects of the proof techniques we used in this paper suggest that large-depth DNNs are optimizing such a cost (or an approximation thereof). Consider a deep network that fits a function f with minimal parameter norm; if we add more layers to the network, it is natural to assume that the new optimal representation of f will be almost the same as that of the shallower network with some added (approximate) identity layers. The interesting question is: where are those identity layers added? The cost of adding an identity layer at a layer ℓ equals the dimension d_ℓ of the hidden representation of the inputs at ℓ. It is therefore optimal to add identity layers where the hidden representations have minimal dimension. This suggests that for large depths the optimal representation of a function f approximately takes the form of L_g layers representing g, then L − L_g − L_h identity layers, and finally L_h layers representing h, for some factorization f = h ◦ g with inner dimension k. We observe in Figure 1 such a three-part representation structure in an MSE task with a low-rank true function. The rescaled parameter norm would then take the form

(1/L)∥W∥² = ((L − L_g − L_h)/L) k + (1/L)(∥W_g∥² + ∥W_h∥²),

where W_g and W_h are the parameters of the first and last parts of the network. For large depths, we can make the approximation (L − L_g − L_h)/L ≈ 1 to recover the same structure as Equation 1, with γ = 1/L, C(g) = ∥W_g∥² and C(h) = ∥W_h∥². This intuition offers a possible explanation for rank recovery in DNNs, though we are not yet able to prove it rigorously.
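The three-part structure described above (a g-part, near-identity layers at the bottleneck dimension, then an h-part) can be probed empirically by tracking how low-rank each layer's representation is. A minimal diagnostic sketch, assuming numpy; the s2/s1 criterion follows Proposition 4, and the synthetic representations are an illustrative stand-in for activations logged from a trained network.

```python
import numpy as np

def bottleneck_profile(hidden_reps):
    """Given Z_l of shape (n_l, N) per layer, return s2/s1 for each layer.

    Proposition 4 predicts a layer l0 with s2/s1 = O(L^{-1/4}) at a global
    minimum; the discussion above predicts near-identity layers around it,
    so the profile should dip over a contiguous middle block of layers.
    """
    ratios = []
    for Z in hidden_reps:
        s = np.linalg.svd(np.asarray(Z), compute_uv=False)
        ratios.append(float(s[1] / s[0]) if len(s) > 1 and s[0] > 0 else 0.0)
    return ratios

# Illustrative use: representations that pass through a 1-dimensional bottleneck.
rng = np.random.default_rng(2)
N = 200
line = rng.standard_normal((1, N))                    # rank-1 bottleneck activations
reps = [rng.standard_normal((8, N)),                  # g-part: generic, high rank
        np.vstack([line, np.zeros((7, N))]),          # bottleneck: exactly rank 1
        rng.standard_normal((8, 1)) @ line]           # h-part lifted from the line: rank 1
print([round(r, 3) for r in bottleneck_profile(reps)])  # e.g. [~0.9, 0.0, ~0.0]
```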
5 PRACTICAL IMPLICATIONS

In this section, we describe the impact of rank minimization on two practical tasks: multiclass classification and autoencoders.

MULTICLASS CLASSIFICATION

Consider a function f_W* : R^{d_in} → R^m which solves a classification task with m classes, i.e. for all training points x_i with class y_i ∈ {1, . . . , m}, the y_i-th entry of the vector f_W*(x_i) is strictly larger than all other entries. The Bottleneck rank k = Rank_BN(f_W*) of f_W* has an impact on the topology of the resulting partition of the input space Ω into classes, leading to topological properties typical of a partition of a k-dimensional space rather than those of a partition of a d_in-dimensional space.

When k = 1, the partition will be topologically equivalent to a classification on a line, which implies the absence of tripoints, i.e. points at the boundary of 3 (or more) classes. Indeed, any boundary point x ∈ Ω will be mapped to a boundary point z = g(x) by the first function g : Ω → R in the factorization of f_W*; since z has at most two neighboring classes, then so does x. This property is illustrated in Figure 2: for a classification task on four classes on the plane, we observe that the partitions obtained by shallow networks (L = 2) lead to tripoints, which are absent in deeper networks (L = 9). Notice also that the presence or absence of L2-regularization has little effect on the final shape, which is in line with the observation that the cross-entropy loss leads to an implicit L2-regularization (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020), reducing the necessity of an explicit L2-regularization.

AUTOENCODERS

Consider learning an autoencoder on data of the form x = g(z), where z is sampled (with full-dimensional support) in a latent space R^k and g : R^k → R^d is an injective FPLF. In this setting, the true rank is the intrinsic dimension k of the data, since the minimal rank function that equals the identity on the data distribution has rank k. Assume that the learned autoencoder f̂ : R^d → R^d fits the data, f̂(x) = x for all x = g(z), and recovers the rank, Rank_BN f̂ = k. At any datapoint x_0 = g(z_0) such that g is differentiable at z_0, the data support g(R^k) is locally a k-dimensional affine subspace T = x_0 + Im Jg(z_0). In the linear region of f̂ that contains x_0, f̂ is an affine projection onto T, since it equals the identity when restricted to T and its Jacobian has rank k. This proves that rank-recovering autoencoders are naturally (locally) denoising.
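The 'locally denoising' claim can be made explicit with a short worked step (a reconstruction of the argument above, not text from the paper):

```latex
% On the linear region containing x_0 = g(z_0), the learned autoencoder is affine:
% \hat{f}(x) = Mx + c with Rank\,M = k. Writing V = Im\,Jg(z_0) (so T = x_0 + V),
% the interpolation constraint \hat{f}|_T = id gives Mv = v for all v \in V and
% Mx_0 + c = x_0. Hence V \subseteq Im\,M, and Rank\,M = k = \dim V forces Im\,M = V,
% so M is a (possibly oblique) projection onto V. For a noisy input x_0 + \varepsilon:
\[
  \hat{f}(x_0 + \varepsilon) \;=\; Mx_0 + c + M\varepsilon \;=\; x_0 + M\varepsilon \;\in\; x_0 + V \;=\; T,
\]
% i.e. the noise component is projected back onto the local data support.
```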
6 CONCLUSION

We have shown that in infinitely deep networks, L2-regularization leads to a bias towards low-rank functions, for some notion of rank on FPLFs. We have then shown a set of results suggesting that this low-rank bias extends to large but finite depths. With the right depth, this leads to 'rank recovery', where the learned function has approximately the same rank as the 'true function'. We proposed a tentative explanation for this rank recovery: for finite but large depths, the network is biased towards functions f which can be factorized f = h ◦ g with both a small inner dimension k and small complexities of g and h. Finally, we have shown how rank recovery affects the topology of the class boundaries in a classification task and leads to natural denoising abilities in autoencoders.

1. What are the key contributions and strengths of the paper regarding the introduction of new notions of rank for nonlinear functions?
2. What are the weaknesses and limitations of the paper, particularly in its assumptions and experimental designs?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What additional remarks and questions does the reviewer have regarding the paper's definitions, proofs, and applications?
Summary Of The Paper

This paper introduces two new notions of rank for nonlinear functions (Jacobian rank and bottleneck rank). These definitions satisfy a set of properties of matrix ranks and thus generalize this classical notion. Moreover, under these rank notions, there exist regimes (large depth, large sample size) where the authors show that neural networks minimizing a regularized ERM objective have low rank (and sometimes may recover the rank of the true teacher function) -- suggesting a form of implicit bias. Experiments are done to support the claims, and applications to autoencoders are discussed.

Strengths And Weaknesses

Strengths

- The generalization of the notion of rank to regular but nonlinear functions is indeed a very difficult task to attempt in general. I would characterize this paper as a sensible and thought-provoking attempt, since the resulting notion is not trivial in certain regimes (large dataset, large depth, etc.). Traditional low-rank literature usually requires linear separability, low-rankness of the data matrix, or whitened input. This paper does not make such assumptions.
- The theoretical analysis is done rigorously, with great intuition and explanation (although the Discussion subsection in Section 4 is informal, it helps in getting the general idea of the proof and feasible future directions).
- Experiments and applications well support the theoretical findings.

Weaknesses

- The rank characterization in the main results is done for some global minimizer of the regularized ERM, and not for what is actually learned by specific algorithms such as gradient descent or stochastic gradient descent. In practice, it is very hard for GD/SGD to find said global minimizer.
- The infinite depth regime is not something one expects to see in practice.
- Theorem 2 only addresses that the representation cost for rank-1 functions would be high if the true function has rank k. It is not clear how the representation cost penalty interplays with the underestimation of rank (for example, can we get some kind of high-cost guarantee for rank-m functions where 1 < m < k?). The point made in the section would be a lot more convincing if this interplay were made quantitative.

Additional remarks and questions

- The definition of the Jacobian rank in the paper can be problematic, especially with the use of ReLU/piecewise linear functions, since the Jacobian is not defined everywhere. Changing this to the maximum rank over all affine pieces should address this problem (but would limit future applicability to neural networks that are not piecewise linear). The most general way to address this would probably involve defining the Jacobian rank for an appropriate subdifferential model (say, Clarke), but this requires a lot more work.
- Theorem 1 has a reference to Proposition 3, which only appears later in the paper.
- On page 4, Section 3, the authors claim that "This result (of Theorem 1) suggests that large-depth neural networks are biased towards functions which have a low Jacobian or bottleneck rank...". However, assuming that the representation cost of the optimal function is low (which is reasonable, given that it is a term in the optimization objective), only the Jacobian rank is constrained to be small. The bottleneck rank can be arbitrarily large (unless the authors' conjecture that the representation cost in the infinite depth limit equals the bottleneck rank is true).
- The use of the TSP in the proof of Theorem 2 comes as quite a surprise, and it would be more intuitive if there were some discussion of this proof technique (also, it is not immediately clear in what metric space the TSP is being formulated). What conditions are required for the scaling of Beardwood et al. (1959) to hold? (If all the y_i's are on a straight line, I don't suppose this scaling holds?)

Clarity, Quality, Novelty And Reproducibility

Clarity: I find the paper could be touched up in terms of definitions and flow (see above). The statement and proof of Theorem 2 in the main text would benefit from some discussion in terms of more quantitative results or limitations of the proof technique.

Quality: I find the paper impactful; it would potentially open doors to newer results in terms of implicit bias for this new notion of rank. Both the rigorous theoretical results and the informal intuitions are well thought out.

Novelty: The paper is highly innovative, considering new notions of rank that are traditionally a difficult task to tackle and identifying regimes where these notions are meaningful.

Reproducibility: More details of the implementation of the experiments should be included in order to verify the results shown in the paper.
ICLR
Title Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions Abstract We show that the representation cost of fully connected neural networks with homogeneous nonlinearities which describes the implicit bias in function space of networks with L2-regularization or with losses such as the cross-entropy converges as the depth of the network goes to infinity to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the ‘true’ rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths which grows with the number of datapoints where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising. 1 INTRODUCTION There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes), whose implicit bias often feature a form sparsity that is absent from the kernel regime. Such active regimes have been observed for example in DNNs with small initialization (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Li et al., 2020; Jacot et al., 2022a), with L2regularization (Savarese et al., 2019; Ongie et al., 2020; Jacot et al., 2022b) or when trained on exponentially decaying losses (Gunasekar et al., 2018a;b; Soudry et al., 2018; Du et al., 2018; Ji & Telgarsky, 2018; Chizat & Bach, 2020; Ji & Telgarsky, 2020). In the latter two cases, the implicit bias is described by the representation cost: R(f) = min W:fW=f ∥W∥2 where f is a function that can be represented by the network and the minimization is over all parameters W that result in a network function fW equal to f , the parameters W form a vector and ∥W∥ is the L2-norm. The representation cost can in some cases be explicitly computed for linear networks. For diagonal linear networks, the representation cost of a linear function f(x) = wTx equals theLp normR(f) = L ∥w∥pp of the vector v for p = 2 L (Gunasekar et al., 2018a; Moroshko et al., 2020) where L is the depth of the network. For fully-connected linear networks, the representation cost of a linear function f(x) = Ax equals the Lp-Schatten norm (the Lp norm of the singular values) R(f) = L ∥A∥pp (Dai et al., 2021). A common thread between these examples is a bias towards some notion of sparsity: sparsity of the entries of the vector w in diagonal networks and sparsity of the singular values in fully connected networks. Furthermore, this bias becomes stronger with depth and in the infinite depth limit L→ ∞ the rescaled representation cost R(f)/L converges to the L0 norm ∥w∥0 (the number of non-zero entries in w) in the first case and to the rank Rank(A) in the second. 
For shallow (L = 2) nonlinear networks with a homogeneous activation, the representation cost also takes the form of a L1 norm (Bach, 2017; Chizat & Bach, 2020; Ongie et al., 2020), leading to sparsity in the effective number of neurons in the hidden layer of the network. However, the representation cost of deeper networks does not resemble any typical norm (Lp or not), though it still leads to some form of sparsity (Jacot et al., 2022b). Despite the absence of explicit formula, we will show that the rescaled representation cost R(f)/L converges to some notion of rank in nonlinear networks as L→ ∞, in analogy to infinite depth linear networks. CONTRIBUTIONS We first introduce two notions of rank: the Jacobian rank RankJ(f) = maxx Rank [Jf(x)] and the Bottleneck rank RankBN (f) which is the smallest integer k such that f can be factorized f = h ◦ g with inner dimension k. In general, RankJ(f) ≤ RankBN (f), but for functions of the form f = ψ ◦ A ◦ ϕ (for a linear map A and two bijections ψ and ϕ), we have RankJ(f) = RankBN (f) = RankA. These two notions of rank satisfy the properties (1) Rankf ∈ Z; (2) Rank(f ◦ g) ≤ min{Rankf,Rankg}; (3) Rank(f + g) ≤ Rankf +Rankg; (4) Rank(x 7→ Ax+ b) = RankA. We then show that in the infinite depth limit L→ ∞ the rescaled representation cost of DNNs with a general homogeneous nonlinearity is sandwiched between the Jacobian and Bottleneck ranks: RankJ (f) ≤ lim L→∞ R(f) L ≤ RankBN (f) . Furthermore limL→∞R(f) satisfies properties (2-4) above. We also conjecture that the limiting representation cost equals its upper bound RankBN (f). We then study how this bias towards low-rank functions translates to finite but large depths. We first show that for large depths the rescaled norm of the parameters ∥Ŵ∥2/L at any global minimum Ŵ is upper bounded by 1+CN/L for a constant CN which depends on the training points. This implies that the resulting function has approximately rank 1 w.r.t. the Jacobian and Bottleneck ranks. This is however problematic if we are trying to fit a ‘true function’ f∗ whose ‘true rank’ k = RankBNf ∗ is larger than 1. Thankfully we show that if k > 1 the constantCN explodes asN → ∞, so that the above bound (∥Ŵ∥2/L ≤ 1+CN/L) is relevant only for very large depths whenN is large. We show another upper bound ∥Ŵ∥2/L ≤ k + C/L with a constant C independent of N , suggesting the existence of a range of intermediate depths where the network recovers the true rank k. Finally, we discuss how rank recovery affects the topology of decision boundaries in classification and leads autoencoders to naturally be denoising, which we confirm with numerical experiments. RELATED WORKS The implicit bias of deep homogeneous networks has, to our knowledge, been much less studied than those of either linear networks or shallow nonlinear ones. (Ongie & Willett, 2022) study deep networks with only one nonlinear layer (all others being linear). Similarly (Le & Jegelka, 2022) show a low-rank alignment phenomenon in a network whose last layers are linear. Closer to our setup is the analysis of the representation cost of deep homogeneous networks in (Jacot et al., 2022b), which gives two reformulations for the optimization in the definition of the representation cost, with some implications on the sparsity of the representations, though the infinite depth limit is not studied. 
A very similar analysis of the sparsity effect of large depth on the global minima of L2-regularized networks is given in (Timor et al., 2022), however, they only show how the optimal weight matrices are almost rank 1 (and only on average), while we show low-rank properties of the learned function, as well as the existence of a layer with almost rank 1 hidden representations. 2 PRELIMINARIES In this section, we define fully-connected DNNs and their representation cost. FULLY CONNECTED DNNS In this paper, we study fully connected DNNs with L+1 layers numbered from 0 (input layer) to L (output layer). Each layer ℓ ∈ {0, . . . , L} has nℓ neurons, with n0 = din the input dimension and nL = dout the output dimension. The pre-activations α̃ℓ(x) ∈ Rnℓ and activations αℓ(x) ∈ Rnℓ of the layers of the network are defined inductively as α0(x) = x α̃ℓ(x) =Wℓαℓ−1(x) + bℓ αℓ(x) = σ (α̃ℓ(x)) , for the nℓ×nℓ−1 connection weight matrixWℓ, the nℓ bias vector bℓ and the nonlinearity σ : R → R applied entrywise to the vector α̃ℓ(x). The parameters of the network are the collection of all connection weights matrices and bias vectors W = (W1, b1, . . . ,WL, bL). We call the network function fW : Rdin → Rdout the function that maps an input x to the preactivations of the last layer α̃L(x). In this paper, we will focus on homogeneous nonlinearities σ, i.e. such that σ(λx) = λσ(x) for any λ ≥ 0 and x ∈ R, such as the traditional ReLU σ(x) = max{0, x}. In our theoretical analysis we will assume that the nonlinearity is of the form σa(x) = { x if x ≥ 0 ax otherwise for some α ∈ (−1, 1), since for a general homogeneous nonlinearity σ (which is not proportional to the identity function, the constant zero function or the absolute function), there are scalars a ∈ (−1, 1), b ∈ R and c ∈ {+1,−1} such that σ(x) = cσa(bx); as a result, the global minima and representation cost are the same up to scaling. Remark 1. By a simple generalization of the work of (Arora et al., 2018), the set of functions that can be represented by networks (with any finite widths and depth) with such nonlinearities is the set of piecewise linear functions with a finite number of linear regions. In contrast, the three types of homogeneous nonlinearities we rule out (the identity, the constant, or the absolute value) lead to different sets of functions: the linear functions, the constant functions, or the piecewise linear functions f such that limt→∞ ∥f(tx)− f(−tx)∥ is finite for all directions x ∈ Rdin (or possibly a subset of this class of functions). While some of the results of this paper could probably be generalized to the third case up to a few details, we rule it out for the sake of simplicity. Remark 2. All of our results will be for sufficiently wide networks, i.e. for all widths n such that nℓ ≥ n∗ℓ for some minimal widths n∗ℓ . Moreover these results are O(0) in the width, in the sense that above the threshold n∗ℓ the constants do not depend on the widths nℓ. When there are a finite number of datapoints N , it was shown by (Jacot et al., 2022b) that a width of N(N + 1) is always sufficient, that is we can always take n∗ℓ = N(N +1) (though it is observed empirically that a much smaller width can be sufficient in some cases). When we are trying to fit a piecewise linear function over the whole input domain Ω, the width required depends on the number of linear regions (He et al., 2018). 
REPRESENTATION COST The representation cost R(f ; Ω, σ, L) is the squared norm of the optimal weights W which represents the function f|Ω: R(f ; Ω, σ, L) = min W:fW|Ω=f|Ω ∥W∥2 where the minimum is taken over all weights W of a depth L network (with some finite widths n) such that fW(x) = f(x) for all x ∈ Ω. If no such weights exist, we define R(f ; Ω, σ, L) = ∞. The representation cost describes the natural bias on the represented function fW induced by adding L2 regularization on the weights W: min W C(fW) + λ ∥W∥2 = min f C(f) + λR(f ; Ω, σ, L) for any cost C (defined on functions f : Ω 7→ Rdout ) and where the minimization on the right is over all functions f that can be represented by a depth L network with nonlinearity σ. Therefore, if we can give a simple description of the representation cost of a function f , we can better understand what type of functions f are favored by a DNN with nonlinearity σ and depth L. Remark 3. Note that the representation cost does not only play a role in the presence of L2regularization, it also describes the implicit bias of networks trained on an exponentially decaying loss, such as the cross-entropy loss, as described in (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020). 3 INFINITELY DEEP NETWORKS In this section, we first give 4 properties that a notion of rank on piecewise linear functions should satisfy and introduce two notions of rank that satisfy these properties. We then show that the infinitedepth limit L→ ∞ of the rescaled representation cost R(f ; Ω, σa, L)/L is sandwiched between the two notions of rank we introduced, and that this limit satisfies 3 of the 4 properties we introduced. RANK OF PIECEWISE LINEAR FUNCTIONS There is no single natural definition of rank for nonlinear functions, but we will provide two of them in this section and compare them. We focus on notions of rank for piecewise linear functions with a finite number of linear regions since these are the function that can be represented by DNNs with homogeneous nonlinearities (this is a Corollary of Theorem 2.1 from (Arora et al., 2018), for more details, see Appendix E.1). We call such functions finite piecewise linear functions (FPLF). Let us first state a set of properties that any notion of rank on FPLF should satisfy, inspired by properties of rank for linear functions: 1. The rank of a function is an integer Rank(f) ∈ N. 2. Rank(f ◦ g) ≤ min{Rankf,Rankg}. 3. Rank(f + g) ≤ Rankf +Rankg. 4. If f is affine (f(x) = Ax+ b) then Rankf = RankA. Taking g = id or f = id in (2) implies Rank(f) ≤ min{din, dout}. Properties (2) and (4) also imply that for any bijection ϕ on Rd, Rank(ϕ) = Rank(ϕ−1) = d. Note that these properties do not uniquely define a notion of rank. Indeed we will now give two notions of rank which satisfy these properties but do not always match. However any such notion of rank must agree on a large family of functions: Property 2 implies that Rank is invariant under preand post-composition with bijections (see Appendix A), which implies that the rank of functions of the form ψ ◦ f ◦ϕ for an affine function f(x) = Ax+ b and two (piecewise linear) bijections ψ and ϕ always equals RankA. The first notion of rank we consider is based on the rank of the Jacobian of the function: Definition 1. The Jacobian rank of a FPLF f is RankJ(f ; Ω) = maxx RankJf(x), taking the max over points where x is differentiable. Note that since the jacobian is constant over the linear regions of the FPLF f , we only need to take the maximum over every linear region. 
As observed in (Feng et al., 2022), the Jacobian rank measures the intrinsic dimension of the output set f(Ω). The second notion of rank is inspired by the fact that for linear functions f , the rank of f equals the minimal dimension k such that f can be written as the composition of two linear function f = g ◦ h with inner dimension k. We define the bottleneck rank as: Definition 2. The bottleneck rank RankBN (f ; Ω) is the smallest integer k ∈ N such that there is a factorization as the composition of two FPLFs f|Ω = (g ◦ h)|Ω with inner dimension k. The following proposition relates these two notions of rank: Proposition 1. Both RankJ and RankBN satisfy properties 1− 4 above. Furthermore: • For any FPLF and any set Ω, RankJ(f ; Ω) ≤ RankBN (f ; Ω). • There exists a FPLF f : R2 → R2 and a domain Ω such that RankJ(f ; Ω) = 1 and RankBN (f ; Ω) = 2. INFINITE-DEPTH REPRESENTATION COST In the infinite-depth limit, the (rescaled) representation cost of DNNs R∞(f ; Ω, σa) = limL→∞ R(f ;Ω,σa,L) L converges to a value ‘sandwiched’ between the above two notions of rank: Theorem 1. For any bounded domain Ω and any FPLF f RankJ(f ; Ω) ≤ R∞(f ; Ω, σα) ≤ RankBN (f ; Ω). Furthermore the limiting representation cost R∞(f ; Ω, σa) satisfies properties 2 to 4. Proof. The lower bound follows from taking L → ∞ in Proposition 3 (see Section 4). The upper bound is constructive: a function f = h ◦ g can be represented as a network in three consecutive parts: a first part (of depth Lg) representing g, a final part (of depth Lh) representing h, and in the middle L− Lg − Lh identity layers on a k-dimensional space. The contribution to the norm of the parameters of the middle part is k(L−Lg −Lh) and it dominates as L→ ∞, since the contribution of the first and final parts are finite. Note that R∞(f ; Ω, σa) might satisfy property 1 as well, we were simply not able to prove it. Theorem 1 implies that for functions of the form f = ψ ◦ A ◦ ϕ for bijections ψ and ϕ, R∞(f ; Ω, σa) = RankJ(f ; Ω) = RankBN (f ; Ω) = RankA. Remark 4. Motivated by some aspects of the proofs and a general intuition (which is described in Section 4) we conjecture that R∞(f ; Ω, σa) = RankBN (f ; Ω). This would imply that the limiting representation cost does not depend on the choice of nonlinearity, as long as it is of the form σa (which we already proved is the case for functions of the form ψ ◦A ◦ ϕ). This result suggests that large-depth neural networks are biased towards function which have a low Jacobian rank and (if our above mentioned conjecture is true) low Bottleneck rank, much like linear networks are biased towards low-rank linear maps. It also suggests that the rescaled norm of the parameters ∥W∥2/L is an approximate upper bound on the Jacobian rank (and if our conjecture is true on the Bottleneck rank too) of the function fW. In the next section, we partly formalize these ideas. 4 RANK RECOVERY IN FINITE DEPTH NETWORKS In this section, we study how the (approximate) rank of minimizer functions fŴ (i.e. functions at a global minimum Ŵ) for the MSE Lλ(W) = 1N ∑N i=1(fW(xi)−yi)2+ λ L ∥W∥ 2 with data sampled from a distribution with support Ω is affected by the depth L. In particular, when the outputs are generated from a true function f∗ (i.e. yi = f∗(xi)) with k = RankBN (f∗; Ω), we study in which condition the ‘true rank’ k is recovered. 
APPROXIMATE RANK 1 REGIME One can build a function with BN-rank 1 that fits any training data (for example by first projecting the input to a line with no overlap and then mapping the points from the line to the outputs with a piecewise linear function). This implies the following bound: Proposition 2. There is a constant CN (which depends on the training data only) such that for any large enough L, at any global minimum Ŵ of the loss Lλ the represented function fŴ satisfies 1 L R(fŴ;σa,Ω, L) ≤ 1 + CN L . Proof. We use the same construction as in the proof of Theorem 1 for any fitting rank 1 function. This bound implies that the function fŴ represented by the network at a global minimum is approximately rank 1 both w.r.t. to the Jacobian and Bottleneck ranks, showing the bias towards low-rank functions even for finite (but possibly very large) depths. Jacobian Rank: For any function f , the rescaled norm representation cost 1LR(f ; Ω, σa, L) bounds the Lp-Schatten norm of the Jacobian (with p = 2L ) at any point: Proposition 3. Let f be a FPLF, then at any differentiable point x, we have ∥Jf(x)∥2/L2/L := RankJfW(x)∑ k=1 sk (Jf(x)) 2 L ≤ 1 L R(f ; Ω, σa, L), where sk (JfW(x)) is the k-th singular value of the Jacobian JfW(x). Together with Proposition 2, this implies that the second singular value of the Jacobian of any minimizer function must be exponentially small s2 ( JfŴ(x) ) ≤ ( 1+ CN L 2 )L 2 in L. Bottleneck Rank: We can further prove the existence of a bottleneck in the network in any minimizer network, i.e. a layer ℓ whose hidden representation is approximately rank 1: Proposition 4. For any global minimum Ŵ of the L2-regularized loss Lλ with λ > 0 and any set of Ñ datapoints X̃ ∈ Rdin×Ñ (which do not have to be the training set X) with non-constant outputs, there is a layer ℓ0 such that the first two singular values s1, s2 of the hidden representation Zℓ0 ∈ Rnℓ×N (whose columns are the activations αℓ0(xi) for all the inputs xi in X̃) satisfies s2 s1 = O(L− 1 4 ). The fact that the global minima of the loss are approximately rank 1 not only in the Jacobian but also in the Bottleneck sense further supports our conjecture that the limiting representation cost equals the Bottleneck rank R∞ = RankBN . Furthermore, it shows that the global minimum of the L2-regularized is biased towards low-rank functions for large depths, since it fits the data with (approximately) the smallest possible rank. RANK RECOVERY FOR INTERMEDIATE DEPTHS However, learning rank 1 functions is not always a good thing. Assume that we are trying to fit a ‘true function’ f∗ : Ω → Rdout with a certain rank k = RankBN (f∗; Ω). If k > 1 the global minima of a large depth network will end up underestimating the true rank k. In contrast, in the linear setting underestimating the true rank is almost never a problem: for example in matrix completion one always wants to find a minimal rank solution (Candès & Recht, 2009; Arora et al., 2019). The difference is due to the fact that rank 1 nonlinear functions can fit any finite training set, which is not the case in the linear case. Thankfully, for large datasets it becomes more and more difficult to underestimate the rank, since for large N fitting the data with a rank 1 function requires large derivatives, which in turn implies a large parameter norm: Theorem 2. Given a Jacobian-rank k true function f∗ : Ω → Rdout on a bounded domain Ω, then for all ϵ there is a constant cϵ such that for any BN-rank 1 function f̂ : Ω → Rdout that fits f̂(xi) = f ∗(xi) a dataset x1, . . . 
RANK RECOVERY FOR INTERMEDIATE DEPTHS
However, learning rank 1 functions is not always a good thing. Assume that we are trying to fit a 'true function' f* : Ω → R^dout with a certain rank k = RankBN(f*; Ω). If k > 1, the global minima of a large-depth network will end up underestimating the true rank k. In contrast, in the linear setting underestimating the true rank is almost never a problem: for example, in matrix completion one always wants to find a minimal-rank solution (Candès & Recht, 2009; Arora et al., 2019). The difference is due to the fact that rank 1 nonlinear functions can fit any finite training set, which is not the case in the linear setting.

Thankfully, for large datasets it becomes more and more difficult to underestimate the rank, since for large N fitting the data with a rank 1 function requires large derivatives, which in turn implies a large parameter norm:

Theorem 2. Given a Jacobian-rank-k true function f* : Ω → R^dout on a bounded domain Ω, for every ϵ there is a constant cϵ such that for any BN-rank-1 function f̂ : Ω → R^dout that fits f̂(xi) = f*(xi) on a dataset x1, ..., xN sampled i.i.d. from a distribution p with support Ω, we have (1/L) R(f̂; Ω, σa, L) > cϵ N^{(2/L)(1−1/k)} with probability at least 1 − ϵ.

Proof. We show that there is a point x ∈ Ω with large derivative: ∥Jf̂(x)∥op ≥ TSP(y1, ..., yN)/diam(x1, ..., xN), where TSP(y1, ..., yN), the Traveling Salesman Problem, is the length of the shortest path passing through every point y1, ..., yN, and diam(x1, ..., xN) is the diameter of the points x1, ..., xN. This follows from the fact that the image of f̂ is a line going through all the yi's: if i and j are the first and last points visited, the image of the segment [xi, xj] is a line from yi to yj passing through all the yk's. The diameter is bounded by diam Ω, while the TSP scales as N^{1−1/k} (Beardwood et al., 1959) since the yi's are sampled from a k-dimensional distribution. The bound on the parameter norm then follows from Proposition 3.
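The central quantity of this proof, TSP(y1, ..., yN)/diam(x1, ..., xN), can be illustrated numerically. The sketch below is our own illustration: it uses a nearest-neighbour heuristic, which over-estimates the optimal TSP path but only by a logarithmic factor, to show the N^{1−1/k} growth for outputs drawn from a k-dimensional distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_path_length(Y):
    """Nearest-neighbour path through the rows of Y: an upper bound on
    TSP(y_1, ..., y_N), within a logarithmic factor of the optimum."""
    remaining = np.ones(len(Y), dtype=bool)
    i, length = 0, 0.0
    remaining[0] = False
    for _ in range(len(Y) - 1):
        d = np.linalg.norm(Y[remaining] - Y[i], axis=1)
        j = np.flatnonzero(remaining)[np.argmin(d)]
        length += d.min()
        remaining[j] = False
        i = j
    return length

k = 3
for N in [100, 400, 1600]:
    Y = rng.uniform(size=(N, k))               # outputs from a k-dimensional law
    # With inputs in Omega = [0, 1]^5, diam(x_1, ..., x_N) <= sqrt(5).
    print(N, round(greedy_path_length(Y) / np.sqrt(5), 1),
          round(N ** (1 - 1 / k), 1))          # both columns grow like N^(1-1/k)
```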
This implies that the constant CN in Proposition 2 explodes as the number of datapoints N increases, i.e. as N increases, larger and larger depths are required for the bound in Proposition 2 to be meaningful. In that case, a better upper bound on the norm of the parameters can be obtained, which implies that the functions fŴ at global minima are approximately rank k or less (at least in the Jacobian sense, by Proposition 3):

Proposition 5. Let the 'true function' f* : Ω → R^dout be piecewise linear with RankBN(f*) = k; then there is a constant C which depends on f* only such that any minimizer function fŴ satisfies

(1/L) R(fŴ; Ω, σa, L) ≤ (1/L) R(f*; Ω, σa, L) ≤ k + C/L.

Theorem 2 and Proposition 5 imply that if the number of datapoints N is sufficiently large (N > ((k + C/L)/c)^{kL/(2k−2)}), there are parameters W* that fit the true function f* with a smaller parameter norm than any choice of parameters W that fit the data with a rank 1 function. In that case, the global minima will not be rank 1 and might instead recover the true rank k. Another interpretation is that, since the constant C does not depend on the number of training points N (in contrast to CN), there is a range of depths (which grows as N → ∞) where the upper bound of Proposition 5 is below that of Proposition 2. We expect rank recovery to happen roughly in this range of depths: too small a depth can lead to an overestimation of the rank¹, while too large a depth can lead to an underestimation.

¹Note that traditional regression models, such as Kernel Ridge Regression (KRR), typically overestimate the true rank, as described in Appendix D.1.

Remark 5. Note that in our experiments we were not able to observe gradient descent converging to a solution that underestimates the true rank, even for very deep networks. This is probably due to gradient descent converging to one of the many local minima in the loss surface of very deep L2-regularized DNNs. Some recent theoretical results offer a possible explanation for why gradient descent naturally avoids rank 1 solutions: the proof of Proposition 2 shows that rank 1 fitting functions have exploding gradients as N → ∞, and such high-gradient functions are known (at the moment only for shallow networks with 1D inputs) to correspond to narrow minima (Mulayoff et al., 2021).

Some of our results can be applied to local minima Ŵ with a small norm: Proposition 3 implies that the Jacobian rank of fŴ is approximately bounded by ∥Ŵ∥²/L. Proposition 4 also applies to local minima, but only if ∥Ŵ∥²/L ≤ 1 + C/L for some constant C, though it could be generalized.

DISCUSSION
We now propose a tentative explanation for the phenomenon observed in this section. In contrast to the rest of the paper, this discussion is informal. Ideally, we want to learn functions f which can be factorized as a composition h ◦ g so that not only is the inner dimension small but the two functions g, h are also not 'too complex'. These two objectives are often contradictory, and one needs to find a trade-off between them. Instead of optimizing the bottleneck rank, one might want to optimize with a regularization term of the form

min_{f = h◦g} k + γ (C(g) + C(h)),   (1)

optimizing over all possible factorizations f = h ◦ g of f with inner dimension k, where C(g) and C(h) are measures of the complexity of g and h respectively. The parameter γ ≥ 0 allows us to tune the balance between the minimization of the inner dimension and the complexity of g and h, recovering the Bottleneck rank when γ = 0. For small γ the minimizer is always rank 1 (since it is always possible to fit a finite dataset with a rank 1 function in the absence of restrictions on the complexity of g and h), but with the right choice of γ one can recover the true rank.

Some aspects of the proof techniques we used in this paper suggest that large-depth DNNs are optimizing such a cost (or an approximation thereof). Consider a deep network that fits a function f with minimal parameter norm; if we add more layers to the network, it is natural to assume that the new optimal representation of f will be almost the same as that of the shallower network, with some added (approximate) identity layers. The interesting question is where those identity layers are added. The cost of adding an identity layer at a layer ℓ equals the dimension dℓ of the hidden representation of the inputs at ℓ. It is therefore optimal to add identity layers where the hidden representations have minimal dimension. This suggests that for large depths the optimal representation of a function f approximately takes the form of Lg layers representing g, then L − Lg − Lh identity layers, and finally Lh layers representing h, for some factorization f = h ◦ g with inner dimension k. We observe in Figure 1 such a three-part representation structure in an MSE task with a low-rank true function. The rescaled parameter norm would then take the form

(1/L) ∥W∥² = ((L − Lg − Lh)/L) k + (1/L) (∥Wg∥² + ∥Wh∥²),

where Wg and Wh are the parameters of the first and last parts of the network. For large depths, we can make the approximation (L − Lg − Lh)/L ≈ 1 to recover the same structure as Equation 1, with γ = 1/L, C(g) = ∥Wg∥² and C(h) = ∥Wh∥². This intuition offers a possible explanation for rank recovery in DNNs, though we are not yet able to prove it rigorously.

5 PRACTICAL IMPLICATIONS
In this section, we describe the impact of rank minimization on two practical tasks: multiclass classification and autoencoders.

MULTICLASS CLASSIFICATION
Consider a function fW* : R^din → R^m which solves a classification task with m classes, i.e. for all training points xi with class yi ∈ {1, ..., m}, the yi-th entry of the vector fW*(xi) is strictly larger than all other entries. The Bottleneck rank k = RankBN(fW*) of fW* has an impact on the topology of the resulting partition of the input space Ω into classes, leading to topological properties typical of a partition of a k-dimensional space rather than those of a partition of a din-dimensional space. When k = 1, the partition will be topologically equivalent to a classification on a line, which implies the absence of tripoints, i.e. points at the boundary of 3 (or more) classes. Indeed, any boundary point x ∈ Ω will be mapped to a boundary point z = g(x) by the first function g : Ω → R in the factorization of fW*; since z has at most two neighboring classes, so does x. This property is illustrated in Figure 2: for a classification task with four classes on the plane, we observe that the partitions obtained by shallow networks (L = 2) exhibit tripoints, which are absent in deeper networks (L = 9). Notice also that the presence or absence of L2-regularization has little effect on the final shape, which is in line with the observation that the cross-entropy loss leads to an implicit L2-regularization (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020), reducing the necessity of an explicit L2-regularization.

AUTOENCODERS
Consider learning an autoencoder on data of the form x = g(z), where z is sampled (with full-dimensional support) in a latent space R^k and g : R^k → R^d is an injective FPLF. In this setting, the true rank is the intrinsic dimension k of the data, since the minimal-rank function that equals the identity on the data distribution has rank k. Assume that the learned autoencoder f̂ : R^d → R^d fits the data, f̂(x) = x for all x = g(z), and recovers the rank, RankBN f̂ = k. At any datapoint x0 = g(z0) such that g is differentiable at z0, the data support g(R^k) is locally a k-dimensional affine subspace T = x0 + Im Jg(z0). In the linear region of f̂ that contains x0, f̂ is an affine projection onto T, since it equals the identity when restricted to T and its Jacobian has rank k. This proves that rank-recovering autoencoders are naturally (locally) denoising.
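The local-denoising property can be checked mechanically: at a data point, the Jacobian of a rank-recovering autoencoder should be idempotent with rank k. A sketch of such a check follows; is_local_projection, the tolerance, and the toy stand-in for a trained f̂ are our own choices, not code from the paper.

```python
import torch

def is_local_projection(autoencoder, x0, k, tol=1e-3):
    """At a data point x0, a rank-recovering autoencoder should act, inside
    its linear region, as an affine projection onto the local data plane:
    its Jacobian P must be idempotent (P @ P = P) and have rank k."""
    P = torch.autograd.functional.jacobian(autoencoder, x0)
    idempotent = torch.linalg.norm(P @ P - P) < tol
    rank = int((torch.linalg.svdvals(P) > tol).sum())
    return bool(idempotent) and rank == k

# Toy stand-in for a trained autoencoder: an exact orthogonal projection
# onto a random 2-plane in R^5 (the training step itself is omitted).
d, k = 5, 2
U, _ = torch.linalg.qr(torch.randn(d, k))
f_hat = lambda x: U @ (U.T @ x)
print(is_local_projection(f_hat, torch.randn(d), k))   # True
```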
6 CONCLUSION
We have shown that in infinitely deep networks, L2-regularization leads to a bias towards low-rank functions, for some notion of rank on FPLFs. We have then shown a set of results that suggest that this low-rank bias extends to large but finite depths. With the right depth, this leads to 'rank recovery', where the learned function has approximately the same rank as the 'true function'. We proposed a tentative explanation for this rank recovery: for finite but large depths, the network is biased towards functions f which can be factorized as f = h ◦ g with both a small inner dimension k and small complexities of g and h. Finally, we have shown how rank recovery affects the topology of the class boundaries in a classification task and leads to natural denoising abilities in autoencoders.
1. What are the main contributions and novel aspects introduced by the paper regarding nonlinear functions and their ranks? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical analysis and practical implications? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the paper's proofs, assumptions, and results? If so, what are they?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper introduces two notions of rank for nonlinear functions, the Jacobian rank and the bottleneck rank. The authors then consider fully connected neural networks with homogeneous nonlinearities. They first show that for L → ∞, the representation cost of any piecewise linear function is sandwiched between the two notions of rank. Next, they show several results regarding the representation cost of any minimizer of the ℓ2-regularized empirical risk minimization problem when L is large but finite. They have a discussion on some of the implications of these theoretical results.

Strengths And Weaknesses
Strengths: The paper looks at the implicit bias that the depth of fully connected networks induces. This problem is interesting, and different implicit biases have attracted a lot of attention in the past few years. The paper tries to characterize this implicit bias theoretically and shows a few examples of what these theoretical results might imply in practice.
Weaknesses: The paper is hard to follow. At times it is not clear how one result implies the other. The proofs in the main paper are very terse, and even following some of the proofs in the appendix is not easy.

Clarity, Quality, Novelty And Reproducibility
Clarity and quality: The paper is not very well written and at some points hard to follow. The authors have assumed the reader is knowledgeable about work in this area, and the proofs are very terse, making them hard to understand.
Novelty: The results are novel to the best of my knowledge. However, I believe the authors should make a better case for why these results are interesting and useful.
Comments: The proof of Proposition 2 is quite long. I suggest adding a proof overview either in the appendix or the main body. Also, L clearly needs to be larger than something for this theorem to hold, which needs to be added to the statement of the theorem for correctness. Why is ϵ not showing up in the bound of Theorem 2? Is there a typo? In Appendix B, in the proof of Prop. 2 (Prop. 3 of the main text), the authors use a result from Soudry et al. which I could not find in the paper that is cited. Please add a note on which theorem or proposition of that paper implies this result.
Minor comments and typos: Page 1, 2nd paragraph from the bottom: in f(x) = w⊤x, x is missing. There is also a typo in the cost for the fully connected networks. Page 2: the definition of the Jacobian rank should use max instead of min. Page 4, under the properties of rank: "Property 2 implies that Rank is invariant under pre- and post-composition with bijections..." It is not clear to me how property 2 implies that. It needs more explanation.
ICLR
Title Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions

Abstract We show that the representation cost of fully connected neural networks with homogeneous nonlinearities (which describes the implicit bias in function space of networks with L2-regularization or with losses such as the cross-entropy) converges, as the depth of the network goes to infinity, to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the 'true' rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths, which grows with the number of datapoints, where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries, and show that autoencoders with optimal nonlinear rank are naturally denoising.

1 INTRODUCTION
There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime, where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes), whose implicit bias often features a form of sparsity that is absent from the kernel regime. Such active regimes have been observed for example in DNNs with small initialization (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Li et al., 2020; Jacot et al., 2022a), with L2-regularization (Savarese et al., 2019; Ongie et al., 2020; Jacot et al., 2022b) or when trained on exponentially decaying losses (Gunasekar et al., 2018a;b; Soudry et al., 2018; Du et al., 2018; Ji & Telgarsky, 2018; Chizat & Bach, 2020; Ji & Telgarsky, 2020). In the latter two cases, the implicit bias is described by the representation cost:

R(f) = min_{W : fW = f} ∥W∥²,

where f is a function that can be represented by the network and the minimization is over all parameters W that result in a network function fW equal to f; the parameters W form a vector and ∥W∥ is the L2-norm.

The representation cost can in some cases be explicitly computed for linear networks. For diagonal linear networks, the representation cost of a linear function f(x) = w⊤x equals the Lp norm, R(f) = L∥w∥_p^p, of the vector w, for p = 2/L (Gunasekar et al., 2018a; Moroshko et al., 2020), where L is the depth of the network. For fully-connected linear networks, the representation cost of a linear function f(x) = Ax equals the Lp-Schatten norm (the Lp norm of the singular values), R(f) = L∥A∥_p^p (Dai et al., 2021). A common thread between these examples is a bias towards some notion of sparsity: sparsity of the entries of the vector w in diagonal networks and sparsity of the singular values in fully connected networks. Furthermore, this bias becomes stronger with depth, and in the infinite-depth limit L → ∞ the rescaled representation cost R(f)/L converges to the L0 norm ∥w∥0 (the number of non-zero entries of w) in the first case and to the rank Rank(A) in the second.
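As a quick numerical illustration of the fully-connected linear case (our own sanity check, not an experiment from the paper): the rescaled cost R(f)/L = Σ_k s_k(A)^{2/L} indeed approaches Rank(A) as L grows.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 10))   # rank 3
s = np.linalg.svd(A, compute_uv=False)
s = s[s > 1e-10 * s[0]]        # discard numerically-zero singular values

for L in [2, 5, 20, 100, 1000]:
    rescaled = np.sum(s ** (2.0 / L))          # R(f)/L = ||A||_{2/L}^{2/L}
    print(L, round(float(rescaled), 3))
# The rescaled cost approaches 3.0 = Rank(A) as L grows.
```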
For shallow (L = 2) nonlinear networks with a homogeneous activation, the representation cost also takes the form of an L1 norm (Bach, 2017; Chizat & Bach, 2020; Ongie et al., 2020), leading to sparsity in the effective number of neurons in the hidden layer of the network. However, the representation cost of deeper networks does not resemble any typical norm (Lp or otherwise), though it still leads to some form of sparsity (Jacot et al., 2022b). Despite the absence of an explicit formula, we will show that the rescaled representation cost R(f)/L converges to some notion of rank in nonlinear networks as L → ∞, in analogy to infinite-depth linear networks.

CONTRIBUTIONS
We first introduce two notions of rank: the Jacobian rank RankJ(f) = max_x Rank[Jf(x)] and the Bottleneck rank RankBN(f), which is the smallest integer k such that f can be factorized f = h ◦ g with inner dimension k. In general, RankJ(f) ≤ RankBN(f), but for functions of the form f = ψ ◦ A ◦ φ (for a linear map A and two bijections ψ and φ), we have RankJ(f) = RankBN(f) = Rank A. These two notions of rank satisfy the properties (1) Rank f ∈ Z; (2) Rank(f ◦ g) ≤ min{Rank f, Rank g}; (3) Rank(f + g) ≤ Rank f + Rank g; (4) Rank(x ↦ Ax + b) = Rank A.

We then show that in the infinite-depth limit L → ∞ the rescaled representation cost of DNNs with a general homogeneous nonlinearity is sandwiched between the Jacobian and Bottleneck ranks:

RankJ(f) ≤ lim_{L→∞} R(f)/L ≤ RankBN(f).

Furthermore, lim_{L→∞} R(f)/L satisfies properties (2-4) above. We also conjecture that the limiting representation cost equals its upper bound RankBN(f).

We then study how this bias towards low-rank functions translates to finite but large depths. We first show that for large depths the rescaled norm of the parameters ∥Ŵ∥²/L at any global minimum Ŵ is upper bounded by 1 + CN/L for a constant CN which depends on the training points. This implies that the resulting function has approximately rank 1 w.r.t. the Jacobian and Bottleneck ranks. This is however problematic if we are trying to fit a 'true function' f* whose 'true rank' k = RankBN(f*) is larger than 1. Thankfully, we show that if k > 1 the constant CN explodes as N → ∞, so that the above bound (∥Ŵ∥²/L ≤ 1 + CN/L) is relevant only for very large depths when N is large. We show another upper bound, ∥Ŵ∥²/L ≤ k + C/L, with a constant C independent of N, suggesting the existence of a range of intermediate depths where the network recovers the true rank k.

Finally, we discuss how rank recovery affects the topology of decision boundaries in classification and leads autoencoders to naturally be denoising, which we confirm with numerical experiments.

RELATED WORKS
The implicit bias of deep homogeneous networks has, to our knowledge, been much less studied than that of either linear networks or shallow nonlinear ones. (Ongie & Willett, 2022) study deep networks with only one nonlinear layer (all others being linear). Similarly, (Le & Jegelka, 2022) show a low-rank alignment phenomenon in a network whose last layers are linear. Closer to our setup is the analysis of the representation cost of deep homogeneous networks in (Jacot et al., 2022b), which gives two reformulations for the optimization in the definition of the representation cost, with some implications on the sparsity of the representations, though the infinite-depth limit is not studied.
A very similar analysis of the sparsity effect of large depth on the global minima of L2-regularized networks is given in (Timor et al., 2022); however, they only show that the optimal weight matrices are almost rank 1 (and only on average), while we show low-rank properties of the learned function, as well as the existence of a layer with almost rank 1 hidden representations.

2 PRELIMINARIES
In this section, we define fully-connected DNNs and their representation cost.

FULLY CONNECTED DNNS
In this paper, we study fully connected DNNs with L + 1 layers numbered from 0 (input layer) to L (output layer). Each layer ℓ ∈ {0, ..., L} has nℓ neurons, with n0 = din the input dimension and nL = dout the output dimension. The pre-activations α̃ℓ(x) ∈ R^nℓ and activations αℓ(x) ∈ R^nℓ of the layers of the network are defined inductively as

α0(x) = x,
α̃ℓ(x) = Wℓ αℓ−1(x) + bℓ,
αℓ(x) = σ(α̃ℓ(x)),

for the nℓ × nℓ−1 connection weight matrix Wℓ, the nℓ-dimensional bias vector bℓ, and the nonlinearity σ : R → R applied entrywise to the vector α̃ℓ(x). The parameters of the network are the collection of all connection weight matrices and bias vectors, W = (W1, b1, ..., WL, bL). We call the network function fW : R^din → R^dout the function that maps an input x to the pre-activations of the last layer, α̃L(x).

In this paper, we will focus on homogeneous nonlinearities σ, i.e. such that σ(λx) = λσ(x) for any λ ≥ 0 and x ∈ R, such as the traditional ReLU σ(x) = max{0, x}. In our theoretical analysis we will assume that the nonlinearity is of the form

σa(x) = x if x ≥ 0, and σa(x) = ax otherwise,

for some a ∈ (−1, 1), since for a general homogeneous nonlinearity σ (which is not proportional to the identity function, the constant zero function or the absolute value function), there are scalars a ∈ (−1, 1), b ∈ R and c ∈ {+1, −1} such that σ(x) = c σa(bx); as a result, the global minima and representation cost are the same up to scaling.

Remark 1. By a simple generalization of the work of (Arora et al., 2018), the set of functions that can be represented by networks (with any finite widths and depth) with such nonlinearities is the set of piecewise linear functions with a finite number of linear regions. In contrast, the three types of homogeneous nonlinearities we rule out (the identity, the constant, or the absolute value) lead to different sets of functions: the linear functions, the constant functions, or the piecewise linear functions f such that lim_{t→∞} ∥f(tx) − f(−tx)∥ is finite for all directions x ∈ R^din (or possibly a subset of this class of functions). While some of the results of this paper could probably be generalized to the third case up to a few details, we rule it out for the sake of simplicity.

Remark 2. All of our results will be for sufficiently wide networks, i.e. for all widths n such that nℓ ≥ n*ℓ for some minimal widths n*ℓ. Moreover, these results are O(0) in the width, in the sense that above the threshold n*ℓ the constants do not depend on the widths nℓ. When there is a finite number of datapoints N, it was shown by (Jacot et al., 2022b) that a width of N(N + 1) is always sufficient, that is, we can always take n*ℓ = N(N + 1) (though it is observed empirically that a much smaller width can be sufficient in some cases). When we are trying to fit a piecewise linear function over the whole input domain Ω, the width required depends on the number of linear regions (He et al., 2018).
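For concreteness, here is a minimal PyTorch sketch of the architecture just described, with the nonlinearity σa and the network function returning the pre-activation of the last layer; the widths and the value of a are arbitrary choices of ours.

```python
import torch
import torch.nn as nn

class SigmaA(nn.Module):
    """sigma_a(x) = x for x >= 0 and a * x otherwise, with a in (-1, 1)."""
    def __init__(self, a=0.2):
        super().__init__()
        self.a = a
    def forward(self, x):
        return torch.where(x >= 0, x, self.a * x)

def fully_connected(widths, a=0.2):
    """alpha_0 = x, pre-activation W_l alpha_{l-1} + b_l at each layer; the
    network function returns the pre-activation of the last layer."""
    layers = []
    for l in range(1, len(widths)):
        layers.append(nn.Linear(widths[l - 1], widths[l]))
        if l < len(widths) - 1:
            layers.append(SigmaA(a))
    return nn.Sequential(*layers)

f_W = fully_connected([4, 32, 32, 2])   # d_in = 4, d_out = 2, depth L = 3
print(f_W(torch.randn(8, 4)).shape)     # torch.Size([8, 2])
```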
REPRESENTATION COST
The representation cost R(f; Ω, σ, L) is the squared norm of the optimal weights W which represent the function f|Ω:

R(f; Ω, σ, L) = min_{W : fW|Ω = f|Ω} ∥W∥²,

where the minimum is taken over all weights W of a depth-L network (with some finite widths n) such that fW(x) = f(x) for all x ∈ Ω. If no such weights exist, we define R(f; Ω, σ, L) = ∞.

The representation cost describes the natural bias on the represented function fW induced by adding L2-regularization on the weights W:

min_W C(fW) + λ∥W∥² = min_f C(f) + λ R(f; Ω, σ, L)

for any cost C (defined on functions f : Ω → R^dout), where the minimization on the right is over all functions f that can be represented by a depth-L network with nonlinearity σ. Therefore, if we can give a simple description of the representation cost of a function f, we can better understand what type of functions f are favored by a DNN with nonlinearity σ and depth L.

Remark 3. Note that the representation cost does not only play a role in the presence of L2-regularization; it also describes the implicit bias of networks trained on an exponentially decaying loss, such as the cross-entropy loss, as described in (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020).

3 INFINITELY DEEP NETWORKS
In this section, we first give 4 properties that a notion of rank on piecewise linear functions should satisfy and introduce two notions of rank that satisfy these properties. We then show that the infinite-depth limit L → ∞ of the rescaled representation cost R(f; Ω, σa, L)/L is sandwiched between the two notions of rank we introduced, and that this limit satisfies 3 of the 4 properties we introduced.

RANK OF PIECEWISE LINEAR FUNCTIONS
There is no single natural definition of rank for nonlinear functions, but we will provide two of them in this section and compare them. We focus on notions of rank for piecewise linear functions with a finite number of linear regions, since these are the functions that can be represented by DNNs with homogeneous nonlinearities (this is a corollary of Theorem 2.1 from (Arora et al., 2018); for more details, see Appendix E.1). We call such functions finite piecewise linear functions (FPLF).

Let us first state a set of properties that any notion of rank on FPLFs should satisfy, inspired by properties of the rank of linear functions:
1. The rank of a function is an integer: Rank(f) ∈ N.
2. Rank(f ◦ g) ≤ min{Rank f, Rank g}.
3. Rank(f + g) ≤ Rank f + Rank g.
4. If f is affine (f(x) = Ax + b), then Rank f = Rank A.

Taking g = id or f = id in (2) implies Rank(f) ≤ min{din, dout}. Properties (2) and (4) also imply that for any bijection φ on R^d, Rank(φ) = Rank(φ⁻¹) = d. Note that these properties do not uniquely define a notion of rank. Indeed, we will now give two notions of rank which satisfy these properties but do not always match. However, any such notion of rank must agree on a large family of functions: Property 2 implies that Rank is invariant under pre- and post-composition with bijections (see Appendix A), which implies that the rank of functions of the form ψ ◦ f ◦ φ, for an affine function f(x) = Ax + b and two (piecewise linear) bijections ψ and φ, always equals Rank A.

The first notion of rank we consider is based on the rank of the Jacobian of the function:

Definition 1. The Jacobian rank of a FPLF f is RankJ(f; Ω) = max_x Rank Jf(x), taking the max over points x where f is differentiable.

Note that since the Jacobian is constant over the linear regions of the FPLF f, we only need to take the maximum over every linear region.
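Definition 1 translates directly into a numerical estimator: sample points from Ω, compute autograd Jacobians, and count singular values above a tolerance. A minimal sketch, where the test function, tolerance, and sample set are our own illustrative choices:

```python
import torch

def jacobian_rank(f, xs, tol=1e-6):
    """Estimate Rank_J(f; Omega) = max_x Rank Jf(x) by sampling xs from Omega
    and thresholding the singular values of autograd Jacobians."""
    max_rank = 0
    for x in xs:
        J = torch.autograd.functional.jacobian(f, x)   # shape (d_out, d_in)
        s = torch.linalg.svdvals(J)
        rank = int((s > tol * s.max().clamp(min=1e-12)).sum())
        max_rank = max(max_rank, rank)
    return max_rank

# Example: x -> (relu(w.x), tanh(w.x)) has Jacobian rank 1 everywhere,
# since both rows of the Jacobian are multiples of w.
torch.manual_seed(0)
w = torch.randn(5)
f = lambda x: torch.stack([torch.relu(w @ x), torch.tanh(w @ x)])
print(jacobian_rank(f, torch.randn(100, 5)))   # prints 1
```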
1. What is the focus and contribution of the paper on neural network representation? 2. What are the strengths of the proposed approach, particularly in terms of its ability to match low-rank functions? 3. What are the weaknesses of the paper regarding its proofs and explanations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or limitations regarding the proposed approach that the author should address?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a notion of rank for non-linear functions, defined as the minimum possible L2 norm of the weights of a neural network that matches the function, averaged over the layers, asymptotically as the number of layers tends to infinity. It is conjectured that this notion of rank is equivalent to the "bottleneck rank", which is the minimum embedding dimension of an encoder-decoder network that represents the function. The idea is that as the number of layers tends to infinity (and is much larger than the minimum required depth to represent the function), most layers are identity functions, which have squared Frobenius norm k, where k is the embedding dimension at those layers. Hence minimizing the notion of rank defined in this paper becomes similar to minimizing the bottleneck rank. It is shown that the rank defined here enjoys several sanity-check properties with respect to compositions of functions etc. The first main theorem (Theorem 1) states that the rank defined here is sandwiched between the maximum rank of the Jacobian of the function and the bottleneck rank. Later, in Section 4, the paper studies the slightly more concrete situation of finite architectures. Proposition 2 is a non-asymptotic version of Theorem 1, and it then leads on to Proposition 4, which shows that if the ground-truth rank is 1, then the global minimum of the regularized objective (corresponding to training a regularized neural network) has a very small ratio between the first two singular values of its hidden representation at at least one layer. In the next section, a concern is raised: BN-rank-1 functions are universal approximators of piecewise linear functions with a one-dimensional output, thus the bias for low-rank or rank-one models could prevent one from learning the true rank. However, it is shown that for i.i.d. data, the cost of fitting the training set with a rank-one function is prohibitively large for a large number of samples. It is therefore suggested that the depth should be chosen in an appropriate regime depending on the amount of data. In the experiments, it is shown that deeper networks do learn "low-rank" functions at least in some cases (cf. Figure 2). It is also shown that training deep neural networks on data of low BN rank results in a solution which contains many intermediate representations of rank approximately equal to the ground-truth rank.

Strengths And Weaknesses
Strengths:
- This is an extremely interesting topic at the forefront of theoretical AI research.
- The results themselves are very interesting and appear to break new ground (I am not familiar enough with the recent literature to fully vouch for this though).
- The paper is quite well written in general.
Weaknesses: Not much, but if one is being picky:
- Some of the proofs should have more justifications.
- There are a few typos.

Clarity, Quality, Novelty And Reproducibility
The paper appears to be very novel and to open the door to an interesting direction. The paper is quite well written and generally clear. There are a few things which make the reading of the proofs more difficult, though that remains minor. Details: Some results and theorems do not repeat enough of the general concept to make them easy to read. For instance, Proposition 2 doesn't explicitly mention anymore what optimization problem is being learnt. This forces the reader to read the paper quite linearly, which not everyone wants to do (a lot of the time we might be going back and forth between the supplementary and the main text).
- Figure 2 appears very early compared to the place where it is referenced, and the concept of "tripoint" is also only introduced later.
- In Theorem 2, the proof of point 4 is missing and the "point 4" in the proof is actually point 5.
- In the beginning of the proof of Proposition 3 (page 3 of the sup), a pointer to Corollary 1 would be strongly appreciated. This applies to other places where this result is mentioned. I know that at least in one place it is clearly stated that this is a result of Arora et al., but in other parts of the paper it seems to be presented as if it were something the reader should consider obvious.
- The proof of Proposition 4 is a little hard. How do the authors get the first equation (especially the term −δ)?
- The statement at the end of the first paragraph of Section 4 (page 6 of the sup) is not proved.
- The proof of the proposition is elegant, but it is hard to parse at first reading. There is a Lagrangian argument which is left completely implicit.
- The sums should be expressed less ambiguously (so one does not assume that the norms of the weights are also summed over layers).
- Some places (e.g. Prop 7) use ∥·∥ for the Frobenius norm, whilst others use ∥·∥F, cf. Prop 4.
- Section D.1 could also be extended. Note that the explanations there are for KR (Kernel Regression) rather than KRR, as claimed there and in the main paper.

Minor comments and typos:
- In the (−1)-th line of Proposition 4, nℓ should be replaced by ℓ0.
- At the end of the second paragraph of the section "Rank Recovery for Intermediate Depths", "rank one function can fit" should be "rank one functions can fit".
- On page 2 of the supplementary, "finaly" should be "finally".
- At the beginning of the proof of Proposition 3, "there is a depth ..... networks which representing g..." should be "there is a depth ... network which represents g".
- At the third line of Proposition 4 on page 3 of the sup, I think the Xℓ (in the brackets) should be Xℓ.
- At the beginning of the "upper bound" part of the proof of Proposition 4, I think δ0 should be ℓ0.
- In the middle of page 5 (still in the proof of Proposition 4), "equation 2" should be "equation (2)".
- Page 6 of the sup has quite a few typos. In Proposition 5, two lines after the main equation, the sentence should be split and doesn't make grammatical sense. For the last line, removing the "at" in "at any rank 1..." would make the sentence more coherent. There is also a space missing before the final inline equation. Same at the bottom of page 6: "and the network hat the end" should be "and the network h at the end". I would also usually prefer to put "resp." statements in brackets.
- Prop 7: "...for some λ > 0 then W satisfy..." should be "...for some λ > 0. Then W satisfies...".
ICLR
Title Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions Abstract We show that the representation cost of fully connected neural networks with homogeneous nonlinearities which describes the implicit bias in function space of networks with L2-regularization or with losses such as the cross-entropy converges as the depth of the network goes to infinity to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the ‘true’ rank of the data: we show that for too large depths the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths which grows with the number of datapoints where the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising. 1 INTRODUCTION There has been a lot of recent interest in the so-called implicit bias of DNNs, which describes what functions are favored by a network when fitting the training data. Different network architectures (choice of nonlinearity, depth, width of the network, and more) and training procedures (initialization, optimization algorithm, loss) can lead to widely different biases. In contrast to the so-called kernel regime where the implicit bias is described by the Neural Tangent Kernel (Jacot et al., 2018), there are several active regimes (also called rich or feature-learning regimes), whose implicit bias often feature a form sparsity that is absent from the kernel regime. Such active regimes have been observed for example in DNNs with small initialization (Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; Li et al., 2020; Jacot et al., 2022a), with L2regularization (Savarese et al., 2019; Ongie et al., 2020; Jacot et al., 2022b) or when trained on exponentially decaying losses (Gunasekar et al., 2018a;b; Soudry et al., 2018; Du et al., 2018; Ji & Telgarsky, 2018; Chizat & Bach, 2020; Ji & Telgarsky, 2020). In the latter two cases, the implicit bias is described by the representation cost: R(f) = min W:fW=f ∥W∥2 where f is a function that can be represented by the network and the minimization is over all parameters W that result in a network function fW equal to f , the parameters W form a vector and ∥W∥ is the L2-norm. The representation cost can in some cases be explicitly computed for linear networks. For diagonal linear networks, the representation cost of a linear function f(x) = wTx equals theLp normR(f) = L ∥w∥pp of the vector v for p = 2 L (Gunasekar et al., 2018a; Moroshko et al., 2020) where L is the depth of the network. For fully-connected linear networks, the representation cost of a linear function f(x) = Ax equals the Lp-Schatten norm (the Lp norm of the singular values) R(f) = L ∥A∥pp (Dai et al., 2021). A common thread between these examples is a bias towards some notion of sparsity: sparsity of the entries of the vector w in diagonal networks and sparsity of the singular values in fully connected networks. Furthermore, this bias becomes stronger with depth and in the infinite depth limit L→ ∞ the rescaled representation cost R(f)/L converges to the L0 norm ∥w∥0 (the number of non-zero entries in w) in the first case and to the rank Rank(A) in the second. 
For shallow (L = 2) nonlinear networks with a homogeneous activation, the representation cost also takes the form of a L1 norm (Bach, 2017; Chizat & Bach, 2020; Ongie et al., 2020), leading to sparsity in the effective number of neurons in the hidden layer of the network. However, the representation cost of deeper networks does not resemble any typical norm (Lp or not), though it still leads to some form of sparsity (Jacot et al., 2022b). Despite the absence of explicit formula, we will show that the rescaled representation cost R(f)/L converges to some notion of rank in nonlinear networks as L→ ∞, in analogy to infinite depth linear networks. CONTRIBUTIONS We first introduce two notions of rank: the Jacobian rank RankJ(f) = maxx Rank [Jf(x)] and the Bottleneck rank RankBN (f) which is the smallest integer k such that f can be factorized f = h ◦ g with inner dimension k. In general, RankJ(f) ≤ RankBN (f), but for functions of the form f = ψ ◦ A ◦ ϕ (for a linear map A and two bijections ψ and ϕ), we have RankJ(f) = RankBN (f) = RankA. These two notions of rank satisfy the properties (1) Rankf ∈ Z; (2) Rank(f ◦ g) ≤ min{Rankf,Rankg}; (3) Rank(f + g) ≤ Rankf +Rankg; (4) Rank(x 7→ Ax+ b) = RankA. We then show that in the infinite depth limit L→ ∞ the rescaled representation cost of DNNs with a general homogeneous nonlinearity is sandwiched between the Jacobian and Bottleneck ranks: RankJ (f) ≤ lim L→∞ R(f) L ≤ RankBN (f) . Furthermore limL→∞R(f) satisfies properties (2-4) above. We also conjecture that the limiting representation cost equals its upper bound RankBN (f). We then study how this bias towards low-rank functions translates to finite but large depths. We first show that for large depths the rescaled norm of the parameters ∥Ŵ∥2/L at any global minimum Ŵ is upper bounded by 1+CN/L for a constant CN which depends on the training points. This implies that the resulting function has approximately rank 1 w.r.t. the Jacobian and Bottleneck ranks. This is however problematic if we are trying to fit a ‘true function’ f∗ whose ‘true rank’ k = RankBNf ∗ is larger than 1. Thankfully we show that if k > 1 the constantCN explodes asN → ∞, so that the above bound (∥Ŵ∥2/L ≤ 1+CN/L) is relevant only for very large depths whenN is large. We show another upper bound ∥Ŵ∥2/L ≤ k + C/L with a constant C independent of N , suggesting the existence of a range of intermediate depths where the network recovers the true rank k. Finally, we discuss how rank recovery affects the topology of decision boundaries in classification and leads autoencoders to naturally be denoising, which we confirm with numerical experiments. RELATED WORKS The implicit bias of deep homogeneous networks has, to our knowledge, been much less studied than those of either linear networks or shallow nonlinear ones. (Ongie & Willett, 2022) study deep networks with only one nonlinear layer (all others being linear). Similarly (Le & Jegelka, 2022) show a low-rank alignment phenomenon in a network whose last layers are linear. Closer to our setup is the analysis of the representation cost of deep homogeneous networks in (Jacot et al., 2022b), which gives two reformulations for the optimization in the definition of the representation cost, with some implications on the sparsity of the representations, though the infinite depth limit is not studied. 
A very similar analysis of the sparsity effect of large depth on the global minima of L2-regularized networks is given in (Timor et al., 2022), however, they only show how the optimal weight matrices are almost rank 1 (and only on average), while we show low-rank properties of the learned function, as well as the existence of a layer with almost rank 1 hidden representations. 2 PRELIMINARIES In this section, we define fully-connected DNNs and their representation cost. FULLY CONNECTED DNNS In this paper, we study fully connected DNNs with L+1 layers numbered from 0 (input layer) to L (output layer). Each layer ℓ ∈ {0, . . . , L} has nℓ neurons, with n0 = din the input dimension and nL = dout the output dimension. The pre-activations α̃ℓ(x) ∈ Rnℓ and activations αℓ(x) ∈ Rnℓ of the layers of the network are defined inductively as α0(x) = x α̃ℓ(x) =Wℓαℓ−1(x) + bℓ αℓ(x) = σ (α̃ℓ(x)) , for the nℓ×nℓ−1 connection weight matrixWℓ, the nℓ bias vector bℓ and the nonlinearity σ : R → R applied entrywise to the vector α̃ℓ(x). The parameters of the network are the collection of all connection weights matrices and bias vectors W = (W1, b1, . . . ,WL, bL). We call the network function fW : Rdin → Rdout the function that maps an input x to the preactivations of the last layer α̃L(x). In this paper, we will focus on homogeneous nonlinearities σ, i.e. such that σ(λx) = λσ(x) for any λ ≥ 0 and x ∈ R, such as the traditional ReLU σ(x) = max{0, x}. In our theoretical analysis we will assume that the nonlinearity is of the form σa(x) = { x if x ≥ 0 ax otherwise for some α ∈ (−1, 1), since for a general homogeneous nonlinearity σ (which is not proportional to the identity function, the constant zero function or the absolute function), there are scalars a ∈ (−1, 1), b ∈ R and c ∈ {+1,−1} such that σ(x) = cσa(bx); as a result, the global minima and representation cost are the same up to scaling. Remark 1. By a simple generalization of the work of (Arora et al., 2018), the set of functions that can be represented by networks (with any finite widths and depth) with such nonlinearities is the set of piecewise linear functions with a finite number of linear regions. In contrast, the three types of homogeneous nonlinearities we rule out (the identity, the constant, or the absolute value) lead to different sets of functions: the linear functions, the constant functions, or the piecewise linear functions f such that limt→∞ ∥f(tx)− f(−tx)∥ is finite for all directions x ∈ Rdin (or possibly a subset of this class of functions). While some of the results of this paper could probably be generalized to the third case up to a few details, we rule it out for the sake of simplicity. Remark 2. All of our results will be for sufficiently wide networks, i.e. for all widths n such that nℓ ≥ n∗ℓ for some minimal widths n∗ℓ . Moreover these results are O(0) in the width, in the sense that above the threshold n∗ℓ the constants do not depend on the widths nℓ. When there are a finite number of datapoints N , it was shown by (Jacot et al., 2022b) that a width of N(N + 1) is always sufficient, that is we can always take n∗ℓ = N(N +1) (though it is observed empirically that a much smaller width can be sufficient in some cases). When we are trying to fit a piecewise linear function over the whole input domain Ω, the width required depends on the number of linear regions (He et al., 2018). 
REPRESENTATION COST

The representation cost R(f; Ω, σ, L) is the squared norm of the optimal weights W that represent the function f restricted to Ω:

R(f; Ω, σ, L) = min_{W : f_W|Ω = f|Ω} ‖W‖²,

where the minimum is taken over all weights W of a depth-L network (with some finite widths n) such that f_W(x) = f(x) for all x ∈ Ω. If no such weights exist, we define R(f; Ω, σ, L) = ∞.

The representation cost describes the natural bias on the represented function f_W induced by adding L2 regularization on the weights W:

min_W C(f_W) + λ‖W‖² = min_f C(f) + λR(f; Ω, σ, L)

for any cost C (defined on functions f : Ω → R^{d_out}), where the minimization on the right is over all functions f that can be represented by a depth-L network with nonlinearity σ. Therefore, if we can give a simple description of the representation cost of a function f, we can better understand what type of functions f are favored by a DNN with nonlinearity σ and depth L.

Remark 3. Note that the representation cost does not only play a role in the presence of L2 regularization; it also describes the implicit bias of networks trained on an exponentially decaying loss, such as the cross-entropy loss, as described in (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020).
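The definition of R(f; Ω, σ, L) is not directly computable, but it can be probed numerically. The sketch below is a crude proxy of our own: it relaxes the exact interpolation constraint f_W|Ω = f|Ω to a penalized fit on finitely many samples from Ω = [−1, 1]^{d_in}, trains with weight decay, and reports the final ‖W‖²; the architecture, learning rate, and step counts are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def approx_representation_cost(f_target, d_in, L=6, width=64, n_pts=512,
                               steps=5000, lam=1e-3):
    """Numerical proxy for R(f; Omega, sigma_a, L): fit f on sampled points
    under L2 regularization and return ||W||^2. The true definition requires
    exact equality on all of Omega; here the constraint is only penalized."""
    X = 2 * torch.rand(n_pts, d_in) - 1
    Y = f_target(X)
    widths = [d_in] + [width] * (L - 1) + [Y.shape[1]]
    net = nn.Sequential(*[m for i in range(L) for m in
                          ([nn.Linear(widths[i], widths[i + 1])] +
                           ([nn.LeakyReLU(0.2)] if i < L - 1 else []))])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=lam)
    for _ in range(steps):
        opt.zero_grad()
        ((net(X) - Y) ** 2).mean().backward()
        opt.step()
    return sum((p ** 2).sum().item() for p in net.parameters())

# Example: a rank-2 linear map; R/L should be on the order of 2 for large L.
A = torch.randn(2, 4)
print(approx_representation_cost(lambda x: x @ A.T, d_in=4, L=8) / 8)
```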
3 INFINITELY DEEP NETWORKS

In this section, we first give 4 properties that a notion of rank on piecewise linear functions should satisfy and introduce two notions of rank that satisfy these properties. We then show that the infinite-depth limit L → ∞ of the rescaled representation cost R(f; Ω, σ_a, L)/L is sandwiched between the two notions of rank we introduced, and that this limit satisfies 3 of the 4 properties we introduced.

RANK OF PIECEWISE LINEAR FUNCTIONS

There is no single natural definition of rank for nonlinear functions, but we will provide two of them in this section and compare them. We focus on notions of rank for piecewise linear functions with a finite number of linear regions, since these are the functions that can be represented by DNNs with homogeneous nonlinearities (this is a corollary of Theorem 2.1 from (Arora et al., 2018); for more details, see Appendix E.1). We call such functions finite piecewise linear functions (FPLF).

Let us first state a set of properties that any notion of rank on FPLF should satisfy, inspired by properties of rank for linear functions:
1. The rank of a function is an integer: Rank(f) ∈ N.
2. Rank(f ∘ g) ≤ min{Rank f, Rank g}.
3. Rank(f + g) ≤ Rank f + Rank g.
4. If f is affine (f(x) = Ax + b), then Rank f = Rank A.

Taking g = id or f = id in (2) implies Rank(f) ≤ min{d_in, d_out}. Properties (2) and (4) also imply that for any bijection φ on R^d, Rank(φ) = Rank(φ^{−1}) = d. Note that these properties do not uniquely define a notion of rank. Indeed, we will now give two notions of rank which satisfy these properties but do not always match. However, any such notion of rank must agree on a large family of functions: Property 2 implies that Rank is invariant under pre- and post-composition with bijections (see Appendix A), which implies that the rank of functions of the form ψ ∘ f ∘ φ, for an affine function f(x) = Ax + b and two (piecewise linear) bijections ψ and φ, always equals Rank A.

The first notion of rank we consider is based on the rank of the Jacobian of the function:

Definition 1. The Jacobian rank of a FPLF f is RankJ(f; Ω) = max_x Rank Jf(x), taking the max over points x where f is differentiable.

Note that since the Jacobian is constant over the linear regions of the FPLF f, we only need to take the maximum over every linear region. As observed in (Feng et al., 2022), the Jacobian rank measures the intrinsic dimension of the output set f(Ω).

The second notion of rank is inspired by the fact that for linear functions f, the rank of f equals the minimal dimension k such that f can be written as the composition of two linear functions f = g ∘ h with inner dimension k. We define the bottleneck rank as:

Definition 2. The bottleneck rank RankBN(f; Ω) is the smallest integer k ∈ N such that there is a factorization of f as the composition of two FPLFs, f|Ω = (g ∘ h)|Ω, with inner dimension k.

The following proposition relates these two notions of rank:

Proposition 1. Both RankJ and RankBN satisfy properties 1-4 above. Furthermore:
• For any FPLF f and any set Ω, RankJ(f; Ω) ≤ RankBN(f; Ω).
• There exists a FPLF f : R² → R² and a domain Ω such that RankJ(f; Ω) = 1 and RankBN(f; Ω) = 2.

INFINITE-DEPTH REPRESENTATION COST

In the infinite-depth limit, the rescaled representation cost of DNNs, R_∞(f; Ω, σ_a) = lim_{L→∞} R(f; Ω, σ_a, L)/L, converges to a value 'sandwiched' between the above two notions of rank:

Theorem 1. For any bounded domain Ω and any FPLF f,

RankJ(f; Ω) ≤ R_∞(f; Ω, σ_a) ≤ RankBN(f; Ω).

Furthermore, the limiting representation cost R_∞(f; Ω, σ_a) satisfies properties 2 to 4.

Proof. The lower bound follows from taking L → ∞ in Proposition 3 (see Section 4). The upper bound is constructive: a function f = h ∘ g can be represented as a network in three consecutive parts: a first part (of depth L_g) representing g, a final part (of depth L_h) representing h, and in the middle L − L_g − L_h identity layers on a k-dimensional space. The contribution of the middle part to the norm of the parameters is k(L − L_g − L_h), and it dominates as L → ∞, since the contributions of the first and final parts are finite.

Note that R_∞(f; Ω, σ_a) might satisfy property 1 as well; we were simply not able to prove it. Theorem 1 implies that for functions of the form f = ψ ∘ A ∘ φ for bijections ψ and φ, R_∞(f; Ω, σ_a) = RankJ(f; Ω) = RankBN(f; Ω) = Rank A.

Remark 4. Motivated by some aspects of the proofs and a general intuition (described in Section 4), we conjecture that R_∞(f; Ω, σ_a) = RankBN(f; Ω). This would imply that the limiting representation cost does not depend on the choice of nonlinearity, as long as it is of the form σ_a (which we already proved is the case for functions of the form ψ ∘ A ∘ φ).

This result suggests that large-depth neural networks are biased towards functions which have a low Jacobian rank and (if our above-mentioned conjecture is true) a low Bottleneck rank, much like linear networks are biased towards low-rank linear maps. It also suggests that the rescaled norm of the parameters ‖W‖²/L is an approximate upper bound on the Jacobian rank (and, if our conjecture is true, on the Bottleneck rank too) of the function f_W. In the next section, we partly formalize these ideas.
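The constructive upper bound in the proof of Theorem 1 is easy to check numerically. Below is a minimal sketch, assuming the middle identity layers act on a k-dimensional space where an identity weight matrix is exact under σ_a (e.g. when the hidden representations are nonnegative), so each such layer contributes exactly ‖I_k‖²_F = k to the parameter norm; the g/h weight shapes are placeholders, not an actual minimal-norm representation.

```python
import numpy as np

k, L_g, L_h = 3, 2, 2   # inner dimension and depths of the g / h parts
rng = np.random.default_rng(0)
W_g = [rng.standard_normal((k, 5)), rng.standard_normal((k, k))]   # sketch of g
W_h = [rng.standard_normal((k, k)), rng.standard_normal((4, k))]   # sketch of h

for L in [10, 100, 1000]:
    mid = [np.eye(k)] * (L - L_g - L_h)   # identity layers: ||I_k||_F^2 = k each
    total = sum(np.sum(W ** 2) for W in W_g + mid + W_h)
    print(L, total / L)                   # converges to k = 3 as L grows
```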
4 RANK RECOVERY IN FINITE DEPTH NETWORKS

In this section, we study how the (approximate) rank of minimizer functions f_Ŵ (i.e. functions at a global minimum Ŵ) of the regularized MSE

L_λ(W) = (1/N) ∑_{i=1}^N (f_W(x_i) − y_i)² + (λ/L)‖W‖²,

with data sampled from a distribution with support Ω, is affected by the depth L. In particular, when the outputs are generated from a true function f* (i.e. y_i = f*(x_i)) with k = RankBN(f*; Ω), we study under which conditions the 'true rank' k is recovered.

APPROXIMATE RANK 1 REGIME

One can build a function with BN-rank 1 that fits any training data (for example, by first projecting the inputs to a line with no overlap and then mapping the points from the line to the outputs with a piecewise linear function). This implies the following bound:

Proposition 2. There is a constant C_N (which depends on the training data only) such that for any large enough L, at any global minimum Ŵ of the loss L_λ, the represented function f_Ŵ satisfies

(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ 1 + C_N/L.

Proof. We use the same construction as in the proof of Theorem 1 for any rank 1 function that fits the data.

This bound implies that the function f_Ŵ represented by the network at a global minimum is approximately rank 1, both w.r.t. the Jacobian and Bottleneck ranks, showing the bias towards low-rank functions even for finite (but possibly very large) depths.

Jacobian rank: For any function f, the rescaled representation cost (1/L) R(f; Ω, σ_a, L) bounds the Lp-Schatten norm of the Jacobian (with p = 2/L) at any point:

Proposition 3. Let f be a FPLF; then at any differentiable point x, we have

‖Jf(x)‖_{2/L}^{2/L} := ∑_{k=1}^{RankJf(x)} s_k(Jf(x))^{2/L} ≤ (1/L) R(f; Ω, σ_a, L),

where s_k(Jf(x)) is the k-th singular value of the Jacobian Jf(x).

Together with Proposition 2, this implies that the second singular value of the Jacobian of any minimizer function must be exponentially small in L: s_2(Jf_Ŵ(x)) ≤ ((1 + C_N/L)/2)^{L/2}.

Bottleneck rank: We can further prove the existence of a bottleneck in any minimizer network, i.e. a layer ℓ whose hidden representation is approximately rank 1:

Proposition 4. For any global minimum Ŵ of the L2-regularized loss L_λ with λ > 0, and any set of Ñ datapoints X̃ ∈ R^{d_in×Ñ} (which do not have to be the training set X) with non-constant outputs, there is a layer ℓ_0 such that the first two singular values s_1, s_2 of the hidden representation Z_{ℓ_0} ∈ R^{n_{ℓ_0}×Ñ} (whose columns are the activations α_{ℓ_0}(x_i) for all the inputs x_i in X̃) satisfy s_2/s_1 = O(L^{−1/4}).

The fact that the global minima of the loss are approximately rank 1 not only in the Jacobian but also in the Bottleneck sense further supports our conjecture that the limiting representation cost equals the Bottleneck rank, R_∞ = RankBN. Furthermore, it shows that the global minimum of the L2-regularized loss is biased towards low-rank functions for large depths, since it fits the data with (approximately) the smallest possible rank.

RANK RECOVERY FOR INTERMEDIATE DEPTHS

However, learning rank 1 functions is not always a good thing. Assume that we are trying to fit a 'true function' f* : Ω → R^{d_out} with a certain rank k = RankBN(f*; Ω). If k > 1, the global minima of a large-depth network will end up underestimating the true rank k. In contrast, in the linear setting, underestimating the true rank is almost never a problem: for example, in matrix completion one always wants to find a minimal-rank solution (Candès & Recht, 2009; Arora et al., 2019). The difference is due to the fact that rank 1 nonlinear functions can fit any finite training set, which is not the case in the linear setting.

Thankfully, for large datasets it becomes more and more difficult to underestimate the rank, since for large N, fitting the data with a rank 1 function requires large derivatives, which in turn implies a large parameter norm:

Theorem 2. Given a Jacobian-rank k true function f* : Ω → R^{d_out} on a bounded domain Ω, then for all ϵ there is a constant c_ϵ such that for any BN-rank 1 function f̂ : Ω → R^{d_out} that fits the dataset, f̂(x_i) = f*(x_i), for points x_1, . . .
, x_N sampled i.i.d. from a distribution p with support Ω, we have

(1/L) R(f̂; Ω, σ_a, L) > c_ϵ N^{(2/L)(1 − 1/k)}

with probability at least 1 − ϵ.

Proof. We show that there is a point x ∈ Ω with a large derivative:

‖Jf̂(x)‖_op ≥ TSP(y_1, . . . , y_N) / diam(x_1, . . . , x_N),

where TSP(y_1, . . . , y_N) is the length of the shortest path passing through every point y_1, . . . , y_N (the Traveling Salesman Problem), and diam(x_1, . . . , x_N) is the diameter of the points x_1, . . . , x_N. This follows from the fact that the image of f̂ is a line going through all the y_i's: if i and j are the first and last points visited, the image of the segment [x_i, x_j] is a line from y_i to y_j passing through all the y_k's. The diameter is bounded by diam Ω, while the TSP scales as N^{1 − 1/k} (Beardwood et al., 1959) since the y_i's are sampled from a k-dimensional distribution. The bound on the parameter norm then follows from Proposition 3.

This implies that the constant C_N in Proposition 2 explodes as the number of datapoints N increases, i.e. as N increases, larger and larger depths are required for the bound in Proposition 2 to be meaningful. In that case, a better upper bound on the norm of the parameters can be obtained, which implies that the functions f_Ŵ at global minima are approximately rank k or less (at least in the Jacobian sense, according to Proposition 3):

Proposition 5. Let the 'true function' f* : Ω → R^{d_out} be piecewise linear with RankBN(f*) = k; then there is a constant C which depends on f* only such that any minimizer function f_Ŵ satisfies

(1/L) R(f_Ŵ; σ_a, Ω, L) ≤ (1/L) R(f*; σ_a, Ω, L) ≤ k + C/L.

Theorem 2 and Proposition 5 imply that if the number of datapoints N is sufficiently large (N > ((k + C/L)/c)^{kL/(2k−2)}), there are parameters W* that fit the true function f* with a smaller parameter norm than any choice of parameters W that fits the data with a rank 1 function. In that case, the global minima will not be rank 1 and might instead recover the true rank k. Another interpretation is that since the constant C does not depend on the number of training points N (in contrast to C_N), there is a range of depths (which grows as N → ∞) where the upper bound of Proposition 5 is below that of Proposition 2. We expect rank recovery to happen roughly in this range of depths: too small a depth can lead to an overestimation of the rank¹, while too large a depth can lead to an underestimation.

¹Note that traditional regression models, such as Kernel Ridge Regression (KRR), typically overestimate the true rank, as described in Appendix D.1.

Remark 5. Note that in our experiments, we were not able to observe gradient descent converging to a solution that underestimates the true rank, even for very deep networks. This is probably due to gradient descent converging to one of the many local minima in the loss surface of very deep L2-regularized DNNs. Some recent theoretical results offer a possible explanation for why gradient descent naturally avoids rank 1 solutions: the proof of Proposition 2 shows that rank 1 fitting functions have exploding gradients as N → ∞, and such high-gradient functions are known (at the moment only for shallow networks with 1D inputs) to correspond to narrow minima (Mulayoff et al., 2021).

Some of our results can be applied to local minima Ŵ with a small norm: Proposition 3 implies that the Jacobian rank of f_Ŵ is approximately bounded by ‖Ŵ‖²/L. Proposition 4 also applies to local minima, but only if ‖Ŵ‖²/L ≤ 1 + C/L for some constant C, though it could be generalized.
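To connect these statements to something computable, the sketch below evaluates the two quantities at play: the rescaled Schatten norm ∑_k s_k(Jf(x))^{2/L} from Proposition 3 (which tends to the number of nonzero singular values, i.e. the Jacobian rank, as L grows), and the exponential decay of the s_2 bound implied by Proposition 2. The toy Jacobian and the value of C_N are arbitrary illustrations.

```python
import numpy as np

def schatten_2_over_L(J, L):
    """sum_k s_k(J)^(2/L): the quantity bounded by R(f)/L in Proposition 3."""
    s = np.linalg.svd(J, compute_uv=False)
    return np.sum(s ** (2.0 / L))

J = np.diag([3.0, 0.5])   # toy Jacobian with s1 = 3, s2 = 0.5
for L in [5, 20, 100]:
    print(L, schatten_2_over_L(J, L))   # -> 2 (= rank) since s_k^(2/L) -> 1

# If s1^(2/L) + s2^(2/L) <= 1 + C_N/L (Propositions 2 + 3), then
# s2 <= ((1 + C_N/L)/2)^(L/2), which decays exponentially in L:
C_N = 1.0
for L in [10, 50, 200]:
    print(L, ((1 + C_N / L) / 2) ** (L / 2))
```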
DISCUSSION

We now propose a tentative explanation for the phenomenon observed in this section. In contrast to the rest of the paper, this discussion is informal.

Ideally, we want to learn functions f which can be factorized as a composition h ∘ g so that not only is the inner dimension small, but the two functions g, h are not 'too complex'. These two objectives are often contradictory, and one needs to find a trade-off between them. Instead of optimizing the bottleneck rank, one might want to optimize a regularized objective of the form

min_{f = h∘g} k + γ(C(g) + C(h)), (1)

optimizing over all possible factorizations f = h ∘ g of f with inner dimension k, where C(g) and C(h) are measures of the complexity of g and h, respectively. The parameter γ ≥ 0 allows us to tune the balance between the minimization of the inner dimension and the complexity of g and h, recovering the Bottleneck rank when γ = 0. For small γ the minimizer is always rank 1 (since it is always possible to fit a finite dataset with a rank 1 function in the absence of restrictions on the complexity of g and h), but with the right choice of γ one can recover the true rank.

Some aspects of the proof techniques we used in this paper suggest that large-depth DNNs are optimizing such a cost (or an approximation thereof). Consider a deep network that fits a function f with minimal parameter norm; if we add more layers to the network, it is natural to assume that the new optimal representation of f will be almost the same as that of the shallower network, with some added (approximate) identity layers. The interesting question is: where are those identity layers added? The cost of adding an identity layer at a layer ℓ equals the dimension d_ℓ of the hidden representation of the inputs at ℓ. It is therefore optimal to add identity layers where the hidden representations have minimal dimension. This suggests that for large depths, the optimal representation of a function f approximately takes the form of L_g layers representing g, then L − L_g − L_h identity layers, and finally L_h layers representing h, for some factorization f = h ∘ g with inner dimension k. We observe such a three-part representation structure in Figure 1, in an MSE task with a low-rank true function. The rescaled parameter norm would then take the form

(1/L)‖W‖² = ((L − L_g − L_h)/L) k + (1/L)(‖W_g‖² + ‖W_h‖²),

where W_g and W_h are the parameters of the first and last parts of the network. For large depths, we can make the approximation (L − L_g − L_h)/L ≈ 1 to recover the same structure as Equation 1, with γ = 1/L, C(g) = ‖W_g‖² and C(h) = ‖W_h‖². This intuition offers a possible explanation for rank recovery in DNNs, though we are not yet able to prove it rigorously.

5 PRACTICAL IMPLICATIONS

In this section, we describe the impact of rank minimization on two practical tasks: multiclass classification and autoencoders.

MULTICLASS CLASSIFICATION

Consider a function f_W* : R^{d_in} → R^m which solves a classification task with m classes, i.e. for all training points x_i with class y_i ∈ {1, . . . , m}, the y_i-th entry of the vector f_W*(x_i) is strictly larger than all other entries. The Bottleneck rank k = RankBN(f_W*) of f_W* has an impact on the topology of the resulting partition of the input space Ω into classes, leading to topological properties typical of a partition of a k-dimensional space rather than those of a partition of a d_in-dimensional space.

When k = 1, the partition will be topologically equivalent to a classification on a line, which implies the absence of tripoints, i.e.
points at the boundary of 3 (or more) classes. Indeed, any boundary point x ∈ Ω is mapped to a boundary point z = g(x) by the first function g : Ω → R in the factorization of f_W*; since z has at most two neighboring classes, so does x. This property is illustrated in Figure 2: for a classification task with four classes on the plane, we observe that the partitions obtained by shallow networks (L = 2) exhibit tripoints, which are absent in deeper networks (L = 9). Notice also that the presence or absence of L2 regularization has little effect on the final shape, which is in line with the observation that the cross-entropy loss leads to an implicit L2 regularization (Soudry et al., 2018; Gunasekar et al., 2018a; Chizat & Bach, 2020), reducing the necessity of an explicit L2 regularization.

AUTOENCODERS

Consider learning an autoencoder on data of the form x = g(z), where z is sampled (with full-dimensional support) in a latent space R^k and g : R^k → R^d is an injective FPLF. In this setting, the true rank is the intrinsic dimension k of the data, since the minimal-rank function that equals the identity on the data distribution has rank k. Assume that the learned autoencoder f̂ : R^d → R^d fits the data, f̂(x) = x for all x = g(z), and recovers the rank, RankBN f̂ = k. At any datapoint x_0 = g(z_0) such that g is differentiable at z_0, the data support g(R^k) is locally a k-dimensional affine subspace T = x_0 + Im Jg(z_0). In the linear region of f̂ that contains x_0, f̂ is an affine projection onto T, since it equals the identity when restricted to T and its Jacobian has rank k. This proves that rank-recovering autoencoders are naturally (locally) denoising.

6 CONCLUSION

We have shown that in infinitely deep networks, L2 regularization leads to a bias towards low-rank functions, for some notion of rank on FPLFs. We have then shown a set of results that suggest that this low-rank bias extends to large but finite depths. With the right depths, this leads to 'rank recovery', where the learned function has approximately the same rank as the 'true function'. We proposed a tentative explanation for this rank recovery: for finite but large depths, the network is biased towards functions f which can be factorized as f = h ∘ g, with both a small inner dimension k and small complexities of g and h. Finally, we have shown how rank recovery affects the topology of the class boundaries in a classification task and leads to natural denoising abilities in autoencoders.
1. What is the focus of the paper regarding deep homogeneous nonlinear networks? 2. What are the strengths of the proposed approach, particularly in terms of its novel perspective and practical implications? 3. Do you have any concerns or questions regarding the paper's results and their implications? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies the representation cost of piecewise linear functions by deep homogeneous nonlinear networks. The representation cost of f is defined as R(f) = \min_{W : f_W = f} ||W||^2, where ||.|| is the L2 norm of the parameters and f_W is the neural network parametrized by W. This representation cost arises in the analysis of neural networks in a variety of settings, including training with cross-entropy loss, training with ridge regularization, and training with a small initialization. The paper proves that in the limit of infinite depth L \to \infty, the representation cost of f : R^{d_{in}} \to R^{d_{out}} is sandwiched between two bounds: a lower bound based on the maximum rank of the Jacobian of the function at a point, and an upper bound called the "bottleneck" upper bound, based on the minimum inner dimension k such that one can write f = g \circ h, where g : R^{k} \to R^{d_{out}} and h : R^{d_{in}} \to R^{k}. Furthermore, \lim_{L \to\infty} R(f) / L satisfies several properties that one would like to have for a rank. The paper also studies why, for large but finite depth L, the representation cost R(f) / L of the function f restricted to the training dataset does not trivialize to be approximately equal to 1 (Theorem 2). Finally, practical implications are discussed, including qualitative differences between the classification boundaries learned by deep networks vs. shallow networks. Strengths And Weaknesses Strengths: This is a refreshing paper, which provides a novel and elegant perspective on the representation cost of functions by nonlinear, deep neural networks. I found the practical implications of the result particularly interesting -- specifically that deeper neural networks should find classification boundaries on a finite number of data points that do not have tripoints. I wonder whether a finer quantitative statement can be made for multiclass classification of a finite number of data points N and finite depth L, in the style of Theorem 2 and Proposition 5 -- which bounds the number of tripoints in the minimum-representation-cost classification boundary. There is also the intriguing open question about whether the upper bound is tight. And you can also ask about what happens if you add residual connections, which would seemingly lead to quite different behavior. Weaknesses: I found the practical implications section relating to autoencoders a bit sparse, and would appreciate if more information could be added to clarify what is meant. In the proof of Theorem 1 in the appendix, maybe add a note that you are shifting f and g so that you only have to represent the identity on the upper quadrant (because Omega is bounded.) This is only mentioned in the proof of the second part of Theorem 1, and was a point of confusion for me before I read that. Clarity, Quality, Novelty And Reproducibility The paper is well-written, clear, and original.
ICLR
Title
Unified Principles For Multi-Source Transfer Learning Under Label Shifts

Abstract
We study the label shift problem in multi-source transfer learning and derive new generic principles. Our proposed framework unifies the principles of conditional feature alignment, label distribution ratio estimation and domain relation weights estimation. Based on these principles, we provide a unified practical framework for three multi-source label shift transfer scenarios: learning with limited target data, unsupervised domain adaptation and label partial unsupervised domain adaptation. We evaluate the proposed method on these scenarios with extensive experiments and show that our proposed algorithm can significantly outperform the baselines.

1 INTRODUCTION

Transfer learning (Pan & Yang, 2009) is based on the motivation that learning a new task is easier after having learned several similar tasks. By learning the inductive bias from a set of related source domains (S1, . . . , ST) and then leveraging the shared knowledge upon learning the target domain T, the prediction performance can be significantly improved. Based on this, transfer learning arises in deep learning applications such as computer vision (Zhang et al., 2019; Tan et al., 2018; Hoffman et al., 2018b), natural language processing (Ruder et al., 2019; Houlsby et al., 2019) and biomedical engineering (Raghu et al., 2019; Lundervold & Lundervold, 2019; Zhang & An, 2017).

To ensure a reliable transfer, it is critical to understand the theoretical assumptions relating the domains. One implicit assumption in most transfer learning algorithms is that the label proportions remain unchanged across different domains (Du Plessis & Sugiyama, 2014) (i.e., S(y) = T(y)). However, in many real-world applications, the label distributions can vary markedly (i.e., label shift) (Wen et al., 2014; Lipton et al., 2018; Li et al., 2019b), in which case existing approaches cannot guarantee a small target generalization error, as recently proved by Combes et al. (2020). Moreover, transfer learning becomes more challenging when transferring knowledge from multiple sources to build a model for the target domain, as this requires effectively selecting and leveraging the most useful source domains when label shift occurs. This is not only theoretically interesting but also commonly encountered in real-world applications. For example, in medical diagnostics, the disease distribution changes over countries (Liu et al., 2004; Geiss et al., 2014). Considering the task of diagnosing a disease in a country without sufficient data, how can we leverage the information from different countries with abundant data to help the diagnosis? Obviously, naïvely combining all the sources and applying a one-to-one single-source transfer learning algorithm can lead to undesired results, as it can include low-quality or even untrusted data from certain sources, which can severely influence the performance.

In this paper, we study the label shift problem in multi-source transfer learning, where St(y) ≠ T(y). We propose unified principles that are applicable to three common transfer scenarios: unsupervised Domain Adaptation (DA) (Ben-David et al., 2010), limited target labels (Mansour et al., 2020) and partial unsupervised DA with supp(T(y)) ⊆ supp(St(y)) (Cao et al., 2018), where prior works generally treated these as separate scenarios.
It should be noted that this work deals with target shift without assuming that the semantic conditional distributions are identical (i.e., St(x|y) ≠ T(x|y)), which is more realistic for real-world problems. Our contributions in this paper are two-fold:

(I) We propose to use the Wasserstein distance (Arjovsky et al., 2017) to develop a new target generalization risk upper bound (Theorem 1), which reveals the importance of label distribution ratio estimation and provides a principled guideline to learn the domain relation coefficients. Moreover, we provide a theoretical analysis in the context of representation learning (Theorem 2), which guides learning a feature function that minimizes the conditional Wasserstein distance as well as controls the weighted source risk. We further reveal that the relations among the aforementioned three scenarios lie in the different assumptions used for estimating the label distribution ratio.

(II) Inspired by the theoretical results, we propose the Wasserstein Aggregation Domain Network (WADN) for handling label shift in multi-source transfer learning. We evaluate our algorithm on three benchmark datasets, and the results show that our algorithm can significantly outperform state-of-the-art principled approaches.

2 RELATED WORK

Multi-Source Transfer Learning Theories have been investigated in the previous literature with different principles to aggregate source domains. In the popular unsupervised DA setting, (Zhao et al., 2018; Peng et al., 2019; Wen et al., 2020; Li et al., 2018b) adopted the H-divergence (Ben-David et al., 2007), discrepancy (Mansour et al., 2009) and Wasserstein distance (Arjovsky et al., 2017) of the marginal distributions, d(St(x), T(x)), to estimate domain relations and dynamically leverage different domains. The resulting bounds generally consist of the source risk, the domain discrepancy, and an unobservable term η, the optimal risk on all the domains, which is ignored in these approaches. However, as Combes et al. (2020) pointed out, ignoring the influence of η is problematic when the label distributions between source and target domains are significantly different. It is therefore necessary to take η into consideration when a small amount of labelled data is available for the target domain (Wen et al., 2020). Following this line, very recent works (Konstantinov & Lampert, 2019; Wang et al., 2019a; Mansour et al., 2020) started to measure the divergence between two domains given label information for the target domain by using the Y-discrepancy (Mohri & Medina, 2012). However, we empirically show that these methods are still unable to handle label shift.

Label-Shift Label shift (Zhang et al., 2013; Gong et al., 2016) is a common phenomenon in transfer learning, with S(y) ≠ T(y), and is generally ignored in previous multi-source transfer learning practice. Several theoretically principled approaches have been proposed, such as (Azizzadenesheli et al., 2019; Garg et al., 2020). In addition, (Combes et al., 2020; Wu et al., 2019) analyzed the generalized label shift problem in the one-to-one single unsupervised DA problem, but did not provide guidelines for leveraging different sources to ensure a reliable transfer, which is more challenging. (Redko et al., 2019) proposed an optimal transport strategy for multiple unsupervised DA under label shift by assuming identical semantic conditional distributions. However, they did not consider representation learning in conjunction with their framework and did not design neural network based approaches.
Different from these, we analyze our problem in the context of representation learning and propose efficient and principled strategies. Moreover, our theoretical results highlight the importance of the label shift problem in a variety of multi-source transfer problems, while the aforementioned works generally focus on the unsupervised DA problem, without considering unified rules for different scenarios (e.g., partial multi-source DA).

3 THEORETICAL INSIGHTS: TRANSFER RISK UPPER BOUND

We assume a scoring hypothesis defined on the input space X and output space Y, h : X × Y → R, that is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀y, ‖h(x1, y) − h(x2, y)‖2 ≤ K‖x1 − x2‖2, and a loss function ℓ : R × R → R+ that is positive, L-Lipschitz and upper bounded by L_max. We denote the expected risk w.r.t. distribution D as R_D(h) = E_{(x,y)∼D} ℓ(h(x, y)) and its empirical counterpart (w.r.t. D̂) as R̂_D(h) = ∑_{(x,y)∈D̂} ℓ(h(x, y)). We adopt the Wasserstein-1 distance (Arjovsky et al., 2017) as a metric to measure the similarity of the domains. Compared with other divergences, the Wasserstein distance has been theoretically proved tighter than the TV distance (Gong et al., 2016) or the Jensen-Shannon divergence (Combes et al., 2020).

Based on previous work, label shift is generally handled by a label-distribution-ratio weighted loss: R^α_S(h) = E_{(x,y)∼S} α(y) ℓ(h(x, y)) with α(y) = T(y)/S(y). We also denote α̂_t as its empirical counterpart, estimated from samples. Besides, to measure the task relations, we define a simplex λ with λ[t] ≥ 0 and ∑_{t=1}^T λ[t] = 1 as the task relation coefficient vector, assigning high weight to similar tasks. We first present Theorem 1, which provides theoretical insights about how to combine source domains through properly estimating λ.

Theorem 1. Let {Ŝ_t = {(x_i, y_i)}_{i=1}^{N_{S_t}}}_{t=1}^T and T̂ = {(x_i, y_i)}_{i=1}^{N_T}, respectively, be T source and target i.i.d. samples. For ∀h ∈ H, with H the hypothesis family, and ∀λ, with high probability ≥ 1 − 4δ, the target risk can be upper bounded by:

R_T(h) ≤ ∑_t λ[t] R̂^{α̂_t}_{S_t}(h) + LK ∑_t λ[t] E_{y∼T̂(y)} W1(T̂(x|Y = y) ‖ Ŝ_t(x|Y = y)) + L_max d^{sup}_∞ √(∑_{t=1}^T λ[t]²/β_t) √(log(1/δ)/(2N)) + L_max sup_t ‖α_t − α̂_t‖2 + Comp(N_{S_1}, . . . , N_{S_T}, N_T, δ),

where N = ∑_{t=1}^T N_{S_t}, β_t = N_{S_t}/N, and d^{sup}_∞ = max_{t∈[1,T], y∈[1,Y]} α_t(y) is the maximum true label distribution ratio value. W1(·‖·) is the Wasserstein-1 distance with the L2 distance as the cost function. Comp(N_{S_1}, . . . , N_{S_T}, N_T, δ) is a function that decreases with larger N_{S_1}, . . . , N_T, given a fixed δ and hypothesis family H. (See Appendix E for details.)

Remarks (1) In the first two terms, the relation coefficient λ is controlled by the α_t-weighted loss R̂^{α̂_t}_{S_t}(h) and the conditional Wasserstein distance E_{y∼T̂(y)} W1(T̂(x|Y = y) ‖ Ŝ_t(x|Y = y)). To minimize the upper bound, we need to assign a higher λ[t] to the source t with a smaller weighted prediction loss and a smaller weighted semantic conditional Wasserstein distance. Intuitively, we tend to leverage the source task which is semantically similar to the target and easier to learn. (2) If each source has the same number of observations (β_t = 1/T), then the third term becomes proportional to ‖λ‖2, an L2-norm regularization, which can be viewed as encouraging a uniform leveraging of all the sources. Combining these three terms, we need to consider the trade-off between assigning a higher λ[t] to the source t that has a smaller weighted prediction loss and conditional Wasserstein distance, and keeping λ balanced to avoid concentrating on only one source.
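As a concrete reading of the α-weighted risk R^α_S(h), the sketch below computes the weighted empirical loss for one source domain in PyTorch; the per-class ratio tensor `alpha` (approximating T(y)/S(y)) is assumed given, estimated as described later in Sec. 4.2.

```python
import torch
import torch.nn.functional as F

def weighted_source_risk(logits, labels, alpha):
    """R_hat^alpha_S(h): per-sample cross-entropy reweighted by the
    label distribution ratio alpha[y] = T(y) / S(y)."""
    per_sample = F.cross_entropy(logits, labels, reduction='none')
    return (alpha[labels] * per_sample).mean()

# Toy usage: 4 classes; ratios > 1 upweight classes over-represented in T.
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
alpha = torch.tensor([0.5, 1.0, 1.5, 1.0])
print(weighted_source_risk(logits, labels, alpha))
```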
(3) ‖α̂_t − α_t‖2 indicates the gap between the ground-truth and empirical label ratios. Therefore, if we can estimate a good α̂_t, this term can be small. In practice, if target labels are available, α̂_t can be computed from the observed data and α̂_t → α_t. If target labels are absent (unsupervised DA), we need to design methods to properly estimate α̂_t (Sec. 4). (4) Comp(N_{S_1}, . . . , N_{S_T}, N_T, δ) is a function that reflects the convergence behavior and decreases with larger observation numbers. If we fix H, δ, N and N_T, this term can be viewed as a constant.

Insights in Representation Learning Apart from Theorem 1, we propose a novel theoretical analysis in the context of representation learning, which motivates practical guidelines in the deep learning regime. We define a stochastic feature function g and denote its conditional distribution w.r.t. the latent variable Z (induced by g) as S(z|Y = y) = ∫_x g(z|x) S(x|Y = y) dx. Then we have:

Theorem 2. We assume the same settings for the loss and the hypothesis as in Theorem 1. We further denote the stochastic feature learning function g : X → Z and the hypothesis h : Z × Y → R. Then, ∀λ, the target risk is upper bounded by:

R_T(h, g) ≤ ∑_t λ[t] R^{α_t}_{S_t}(h, g) + LK ∑_t λ[t] E_{y∼T(y)} W1(S_t(z|Y = y) ‖ T(z|Y = y)),

where R_T(h, g) = E_{(x,y)∼T(x,y)} E_{z∼g(z|x)} ℓ(h(z, y)).

Theorem 2 reveals that to control the upper bound, we need to learn a g that minimizes the weighted conditional Wasserstein distance, and learn (g, h) that minimizes the weighted source risk.

Comparison with previous theorems. Our theory offers an alternative perspective for understanding transfer learning. The first term is the α-weighted loss, and it recovers the typical source loss minimization if there is no label shift, i.e. α_t(y) ≡ 1 (Li et al., 2019a; Peng et al., 2019; Zhao et al., 2018; Wen et al., 2020). Besides, minimizing the conditional Wasserstein distances has been shown to be advantageous compared with minimizing W1(S_t(z) ‖ T(z)) (Long et al., 2018). Moreover, Theorem 2 explicitly provides theoretical insights about the representation learning function g, which remains elusive in previous multi-source transfer theories such as (Wang et al., 2019a; Mansour et al., 2020; Konstantinov & Lampert, 2019; Li et al., 2019a; Peng et al., 2019).

4 UNIFIED PRACTICAL FRAMEWORK IN DEEP LEARNING

The theoretical results in Section 3 motivate general principles to follow when designing multi-source transfer learning algorithms. We summarize those principles in the following rules. (I) Learn a g that minimizes the weighted conditional Wasserstein distance, as well as (g, h) that minimizes the α̂_t-weighted source risk (Sec. 4.1). (II) Properly estimate the label distribution ratio α̂_t (Sec. 4.2). (III) Balance the trade-off between assigning a higher λ[t] to the source t that has a smaller weighted prediction loss and conditional Wasserstein distance, and keeping λ[t] balanced (Sec. 4.3). We instantiate these rules in a unified practical framework for solving multi-source transfer learning problems, as shown in Tab. 1. We would like to point out that our original theoretical results are based on the setting with available target labels; the proposed algorithm can be applied to unsupervised scenarios under additional assumptions.
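Rule (I)'s conditional Wasserstein term can be estimated empirically, class by class, from latent features. Below is a minimal sketch assuming the POT package (`pip install pot`), using exact optimal transport per class, which is tractable since the per-class sample counts are small; the toy data and feature dimension are arbitrary.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def conditional_w1(zs, ys, zt, yt, classes):
    """E_{y ~ T(y)} W1(S(z|y) || T(z|y)), estimated per class via exact OT."""
    total = 0.0
    for c in classes:
        s, t = zs[ys == c], zt[yt == c]
        if len(s) == 0 or len(t) == 0:
            continue
        M = ot.dist(s, t, metric='euclidean')        # L2 cost matrix
        w1 = ot.emd2(np.ones(len(s)) / len(s),
                     np.ones(len(t)) / len(t), M)    # exact W1
        total += (yt == c).mean() * w1               # weight by T(Y = c)
    return total

# Toy usage with 2-D latent features and 3 classes:
zs, ys = np.random.randn(60, 2), np.random.randint(0, 3, 60)
zt, yt = np.random.randn(40, 2) + 0.5, np.random.randint(0, 3, 40)
print(conditional_w1(zs, ys, zt, yt, classes=range(3)))
```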
4.1 GUIDELINES IN THE REPRESENTATION LEARNING

Motivated by Theorem 2, given a fixed label ratio estimation α̂_t and fixed λ, we should find a representation function g : X → Z and a hypothesis function h : Z × Y → R such that:

min_{g,h} ∑_t λ[t] R̂^{α̂_t}_{S_t}(h, g) + C0 ∑_t λ[t] E_{y∼T̂(y)} W1(Ŝ_t(z|Y = y) ‖ T̂(z|Y = y)) (1)

Explicit Conditional Loss When target label information is available, one can explicitly solve the conditional optimal transport problem with g and h for a given Y = y. However, due to the high computational complexity of solving T × |Y| optimal transport problems, the original form is practically intractable. To address this issue, we propose to approximate the conditional distributions on the latent space Z as Gaussian distributions with an identical covariance matrix, such that Ŝ_t(z|Y = y) ≈ N(C_t^y, Σ) and T̂(z|Y = y) ≈ N(C^y, Σ). Then we have W1(Ŝ_t(z|Y = y) ‖ T̂(z|Y = y)) ≤ ‖C_t^y − C^y‖2 (see Appendix G for details). Intuitively, this approximation is equivalent to the well-known feature mean matching (Sugiyama & Kawanabe, 2012), which computes the feature centroid of each class (on the latent space Z) and aligns the centroids by minimizing their L2 distance.

Implicit Conditional Loss When target label information is not available (e.g., unsupervised DA and partial DA), the explicit matching approach can adopt the pseudo-labels predicted by the hypothesis h as a surrogate for the true target labels. However, in the early stage of the learning process, the pseudo-labels can be unreliable, which can lead to an inaccurate estimate of W1(Ŝ_t(z|Y = y) ‖ T̂(z|Y = y)). To address this, the following lemma indicates that estimating the conditional Wasserstein distance is equivalent to estimating a Wasserstein adversarial loss weighted by the label distribution ratio.

Lemma 1. The weighted conditional Wasserstein distance can be implicitly expressed as:

∑_t λ[t] E_{y∼T(y)} W1(S_t(z|Y = y) ‖ T(z|Y = y)) = max_{d_1,...,d_T} ∑_t λ[t] [E_{z∼S_t(z)} ᾱ_t(z) d_t(z) − E_{z∼T(z)} d_t(z)],

where ᾱ_t(z) = α_t(Y = y) for pairs (z, y) ∼ S_t, and d_1, . . . , d_T : Z → R+ are 1-Lipschitz domain discriminators (Ganin et al., 2016; Arjovsky et al., 2017).

Lemma 1 reveals that instead of using pseudo-labels to estimate the weighted conditional Wasserstein distance, one can train T domain discriminators with a weighted Wasserstein adversarial loss, which does not require the pseudo-label of each target sample during matching. On the other hand, ᾱ_t can be obtained from α̂_t, which will be elaborated in Sec. 4.2. In practice, we adopt a hybrid approach that linearly combines the explicit and implicit matching strategies in all scenarios, and the empirical results show its effectiveness.
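A minimal PyTorch sketch of the explicit conditional loss above: align per-class feature centroids between a source and the target on latent features z, weighting each class by the (empirical) target class frequency. The λ- and α-weighting of the full objective, and the moving-average centroids used in Algorithm 1, are omitted here for brevity; in the unsupervised case y_t would be pseudo-labels.

```python
import torch

def centroid_alignment_loss(z_s, y_s, z_t, y_t, n_classes):
    """Feature mean matching: sum_y T(y) * ||C_t^y - C^y||_2, the upper
    bound on the conditional W1 under the shared-covariance Gaussian
    approximation."""
    loss = z_s.new_zeros(())
    for y in range(n_classes):
        ms, mt = (y_s == y), (y_t == y)
        if ms.any() and mt.any():
            c_src = z_s[ms].mean(dim=0)   # source centroid C_t^y
            c_tgt = z_t[mt].mean(dim=0)   # target centroid C^y
            # weight by the empirical target class frequency T(y)
            loss = loss + mt.float().mean() * (c_src - c_tgt).norm(p=2)
    return loss

z_s, y_s = torch.randn(32, 16), torch.randint(0, 4, (32,))
z_t, y_t = torch.randn(24, 16), torch.randint(0, 4, (24,))
print(centroid_alignment_loss(z_s, y_s, z_t, y_t, n_classes=4))
```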
4.2 ESTIMATE LABEL DISTRIBUTION RATIO α̂_t

Multi-Source Transfer with target labels When target labels are available, α̂_t can be directly estimated from the data without any assumption, and α̂_t → α_t can be proved by asymptotic statistics.

Unsupervised Multi-Source DA In this scenario, it is impossible to estimate a good α̂_t without imposing additional assumptions. Following (Zhang et al., 2013; Lipton et al., 2018; Azizzadenesheli et al., 2019; Combes et al., 2020), we assume that the conditional distributions are aligned between the target and source domains (i.e., S_t(z|y) = T(z|y)). Then, we denote S̄_t(y), T̄(y) as the predicted t-source/target label distributions through the hypothesis h, and also define C_{Ŝ_t}[y, k] = Ŝ_t[argmax_{y'} h(z, y') = y, Y = k] as the t-source prediction confusion matrix. We can show that if the conditional distributions are aligned, then T̄(y) = T̄_{α̂_t}(y), where T̄_{α̂_t}(Y = y) = ∑_{k=1}^{|Y|} C_{Ŝ_t}[y, k] α̂_t(k) is the target prediction distribution constructed from the t-source information (see Appendix I for the proof). We can then estimate α̂_t by matching these two distributions, minimizing D_KL(T̄(y) ‖ T̄_{α̂_t}(y)), which is equivalent to:

min_{α̂_t} − ∑_{y=1}^{|Y|} T̄(y) log(∑_{k=1}^{|Y|} C_{Ŝ_t}[y, k] α̂_t(k))  s.t. ∀y ∈ Y, α̂_t(y) ≥ 0, ∑_{y=1}^{|Y|} α̂_t(y) Ŝ_t(y) = 1 (2)

In the derivation above, we assumed that the conditional distributions are aligned, which is a feasible requirement in our algorithm, since the goal of g is exactly to gradually achieve this. In the experiments, we iteratively estimate α̂_t and learn g.

Unsupervised Multi-Source Partial DA When supp(T(y)) ⊆ supp(S_t(y)), α_t is sparse due to the non-overlapping classes. Accordingly, in addition to the assumption S_t(z|y) = T(z|y) as in unsupervised DA, we also impose this prior knowledge by adding a regularizer ‖α̂_t‖1 to the objective of Eq. (2) to induce sparsity in α̂_t (see Appendix J for more details). When training the neural network, since the non-overlapping classes are automatically assigned a small or zero α̂_t, (g, h) is less affected by the classes with small α̂_t. Our empirical results validate its capability of detecting non-overlapping classes and show significant improvements over other baselines.

4.3 ESTIMATE TASK RELATION COEFFICIENT λ

Inspired by Theorem 1, given fixed α̂_t and (g, h), we estimate λ by optimizing the derived upper bound:

min_λ ∑_t λ[t] R̂^{α̂_t}_{S_t}(h, g) + C0 ∑_t λ[t] E_{y∼T̂(y)} W1(T̂(z|Y = y) ‖ Ŝ_t(z|Y = y)) + C1 √(∑_{t=1}^T λ[t]²/β_t)  s.t. ∀t, λ[t] ≥ 0, ∑_{t=1}^T λ[t] = 1 (3)

In practice, R̂^{α̂_t}_{S_t}(h, g) is the weighted empirical prediction error, and E_{y∼T̂(y)} W1(T̂(z|Y = y) ‖ Ŝ_t(z|Y = y)) is approximated by the dynamic feature centroid distance ∑_y T̄(y) ‖C_t^y − C^y‖2 (see Appendix L for details). Thus, solving for λ is a standard convex optimization problem.

4.4 ALGORITHM DESCRIPTION

Based on the aforementioned components, we present the description of WADN (Algorithm 1) in the unsupervised scenarios (UDA and partial DA), which iteratively updates (g, h), α̂_t, and λ.

Algorithm 1 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples Ŝ_1, . . . , Ŝ_T; target samples T̂.
Ensure: Label distribution ratios α̂_t; task relation simplex λ; feature function g; classifier h; domain critic functions d_1, . . . , d_T; class centroids C_t^y for the sources and C^y for the target (∀t ∈ [1, T], y ∈ Y).
1: / / / DNN Parameter Training Stage (fixed α̂_t and λ) / / /
2: for mini-batches of samples (x_{S_1}, y_{S_1}) ∼ Ŝ_1, . . . , (x_{S_T}, y_{S_T}) ∼ Ŝ_T, (x_T) ∼ T̂ do
3:   Predict target pseudo-labels ȳ_T = argmax_y h(g(x_T), y)
4:   Compute the per-batch source confusion matrices (unnormalized): C_{Ŝ_t} = #[argmax_{y'} h(z, y') = y, Y = k] (t = 1, . . . , T)
5:   Compute the batch class centroids C̃_t^y for the sources and C̃^y for the target.
6:   Update the source/target class centroids by moving average (ε_1 = 0.7):
7:     Source class centroids: C_t^y = ε_1 × C_t^y + (1 − ε_1) × C̃_t^y
8:     Target class centroids: C^y = ε_1 × C^y + (1 − ε_1) × C̃^y
9:   Update g, h, d_1, . . . , d_T (SGD and gradient reversal) by solving:
     min_{g,h} max_{d_1,...,d_T} ∑_t λ[t] R̂^{α̂_t}_{S_t}(h, g)  [classification loss]
       + ε C0 ∑_t λ[t] E_{y∼T̄(y)} ‖C_t^y − C^y‖2  [explicit conditional loss]
       + (1 − ε) C0 ∑_t λ[t] [E_{z∼Ŝ_t(z)} ᾱ_t(z) d_t(z) − E_{z∼T̂(z)} d_t(z)]  [implicit conditional loss]
10: end for
11: / / / Estimation of α̂_t and λ / / /
12: Compute the global (normalized) source confusion matrices C_{Ŝ_t} = Ŝ_t[argmax_{y'} h(z, y') = y, Y = k] (t = 1, . . . , T)
13: Solve for α_t (denoted {α'_t}_{t=1}^T) as in Sec. 4.2 (unsupervised DA or partial unsupervised DA).
14: Update α_t by moving average: α_t = ε_1 × α_t + (1 − ε_1) × α'_t
15: Compute the weighted loss and weighted centroid distance, then solve for λ (denoted λ') as in Sec. 4.3, and update λ by moving average: λ = ε_1 × λ + (1 − ε_1) × λ'

When updating λ and α_t, we use the CVXPY package to solve the two standard convex problems after each training epoch, and then update the running values by moving average. For WADN with target label information, we do not require pseudo-labels and directly compute α̂_t, as shown in Appendix L.
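The two convex subproblems can be written down directly in CVXPY, matching the paper's stated use of the package. The sketch below is our own transcription of Eqs. (2) and (3); the ℓ1 weight `rho` for the partial-DA variant is an illustrative hyperparameter, not a value from the paper.

```python
import cvxpy as cp
import numpy as np

def solve_alpha(C, T_bar, S_hat, partial=False, rho=0.1):
    """Eq. (2): min_a -sum_y T_bar[y] * log(sum_k C[y,k] a[k])
       s.t. a >= 0, sum_y a[y] S_hat[y] = 1 (+ rho*||a||_1 for partial DA)."""
    a = cp.Variable(C.shape[1])
    obj = -T_bar @ cp.log(C @ a)
    if partial:
        obj = obj + rho * cp.norm1(a)   # sparsity for non-overlapping classes
    cp.Problem(cp.Minimize(obj), [a >= 0, S_hat @ a == 1]).solve()
    return a.value

def solve_lambda(risks, w1_dists, betas, C0=1.0, C1=1.0):
    """Eq. (3): min_l l @ (risks + C0*w1_dists) + C1*sqrt(sum_t l[t]^2/beta[t])
       s.t. l >= 0, sum(l) == 1."""
    l = cp.Variable(len(risks))
    obj = l @ (risks + C0 * w1_dists) + C1 * cp.norm(cp.multiply(l, betas ** -0.5))
    cp.Problem(cp.Minimize(obj), [l >= 0, cp.sum(l) == 1]).solve()
    return l.value

# Toy usage: 4 classes for alpha; 3 sources for lambda.
C = np.full((4, 4), 0.05) + 0.8 * np.eye(4)
print(solve_alpha(C / C.sum(), np.ones(4) / 4, np.ones(4) / 4))
print(solve_lambda(np.array([0.3, 0.5, 0.4]),
                   np.array([0.2, 0.6, 0.1]), np.ones(3) / 3))
```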
We report the means and standard deviations for each approach. The best approaches based on a two-sided Wilcoxon signed-rank test (significance level p = 0.05) are shown in bold. The empirical results reveal a significantly better performance (≈ 3%) on different datasets. For understanding the working principles of WADN, we evaluate the performance under different levels of source label shift in Amazon Review dataset (Fig.1(a)). The results show strong practical benefits for WADN during a gradual larger label shift. In addition, we visualize the task relations in digits (Fig.1(b)) and demonstrate a non-uniform λ, which highlights the importance of properly choosing the most related source rather than simply merging all the data. E.g. when the target domain is SVHN, WADN mainly leverages the information from SYNTH, since they are more semantically similar and MNIST does not help too much for SVHN (observed by Ganin et al. (2016)). The additional analysis and results can be found in Appendix O. 5.2 MULTI-SOURCE TRANSFER LEARNING WITH LIMITED TARGET SAMPLES We adopt Amazon Review and Digits in the multi-source transfer learning with limited target samples, which have been widely used. In the experiments, we still use shifted sources. We randomly sample only 10% labeled samples (w.r.t. target dataset in unsupervised DA) as training set and the rest 90% samples as the unseen target test set. (See Appendix M for details). We adopt the same hyper-parameters and training strategies with unsupervised DA. We specifically add a recent baseline RLUS (Konstantinov & Lampert, 2019) and MME (Saito et al., 2019), which also considered transfer learning with the labeled target. The results are reported in Tabs. 4, 5, which also indicates strong empirical benefits. To show the effectiveness of WADN, we select various portions of labelled samples (1% ∼ 10%) on the target. The results in Fig.1(c) on USPS dataset shows a consistently better than the baselines, even in the few target samples. 5.3 PARTIAL UNSUPERVISED MULTI-SOURCE DA In this scenario, we adopt the Office-Home dataset to evaluate our approach, as it contains large (65) classes. We do not change the source domains and we randomly choose 35 classes from the target. We evaluate all the baselines on the same selected classes and repeat 5 times. All reported results are averaged from 3 different sub-class selections (15 runs in total), showing in Tab.6. (See Appendix M for details.) We additionally compare PADA (Cao et al., 2018) approach by merging all the sources and use one-to-one partial DA algorithm. We adopt the same hyper-parameters and training strategies with unsupervised DA scenario. The reported results are also significantly better than the current multi-source DA or one-to-one partial DA approach, which verifies the benefits of WADN: properly estimating α̂t and assigning proper λ for each source. Besides, we change the number of selected classes (Fig 2(a)), the proposed WADN still indicates consistent better results by a large margin, which indicates the importance of considering α̂t and λ. In contrast, DANN shows unstable results in less selected classes. (See Appendix P for details) Beside, WADN shows a good estimation of the label distribution ratio (Fig 2(b)) and has correctly detected the non-overlapping classes, which indicates its good explainability. 
6 CONCLUSION

In this paper, we proposed a new theoretically principled algorithm, WADN (Wasserstein Aggregation Domain Network), to solve the multi-source transfer learning problem under target shift. WADN provides a unified solution for various deep multi-source transfer scenarios: learning with limited target data, unsupervised DA, and partial unsupervised DA. We evaluate the proposed method with extensive experiments and show its strong empirical results.

A ADDITIONAL EMPIRICAL RESULTS

B ADDITIONAL RELATED WORK

Multi-source transfer learning practice has been proposed from various perspectives. The key idea is to estimate the importance of different sources and then select the most related ones, to mitigate the influence of negative transfer. In multi-source unsupervised DA, (Sankaranarayanan et al., 2018; Balaji et al., 2019; Pei et al., 2018; Zhao et al., 2019; Zhu et al., 2019; Zhao et al., 2020; 2019; Stojanov et al., 2019; Li et al., 2019b; Wang et al., 2019b; Lin et al., 2020) proposed different practical strategies for classification, regression and semantic segmentation problems. In the presence of target labels, Hoffman et al. (2012); Tan et al. (2013); Wei et al. (2017); Yao & Doretto (2010); Konstantinov & Lampert (2019) used generalized linear models to learn the target. Christodoulidis et al. (2016); Li et al. (2019a); Chen et al. (2019) focused on deep learning approaches, and Lee et al. (2019) proposed an ad-hoc strategy to combine sources in few-shot target domains. These ideas are generally data-driven approaches that do not analyze why the proposed practice can control the generalization error.

Label-Partial Transfer Learning Label-partial transfer can be viewed as a special case of label shift.¹ Most existing works focus on one-to-one partial transfer learning (Zhang et al., 2018; Chen et al., 2020; Bucci et al., 2019; Cao et al., 2019), adopting a re-weighted training approach without a formal understanding. In this paper, we rigorously analyze this common practice and adopt the label distribution ratio as its weights, which provides a principled approach to this scenario.

B.1 OTHER SCENARIOS RELATED TO MULTI-SOURCE TRANSFER LEARNING

Domain Generalization Domain generalization (DG) resembles multi-source transfer but aims at a different goal. A common setting in DG is to learn from multiple sources and directly predict on an unseen target domain. Conventional DG approaches generally learn distribution-invariant features (Balaji et al., 2018; Saenko et al., 2010; Motiian et al., 2017; Ilse et al., 2019) or conditional-distribution-invariant features (Li et al., 2018a; Akuzawa et al., 2019). However, our theoretical results reveal that in the presence of label shift (i.e., α_t(y) ≠ 1) and outlier tasks, learning conditionally or marginally invariant features cannot guarantee a small target risk. Our theoretical results enable a formal understanding of the inherent difficulty of DG problems.

Few-Shot Learning Few-shot learning (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018) can be viewed as a very specific scenario of multi-source transfer learning. We would like to point out the differences between few-shot learning and our paper. (1) Few-shot learning generally involves a very large set of source domains (T ≫ 1), each consisting of a modest number of observations N_{S_t}. In our paper, we are interested in a modest number of source domains T, with each source domain containing a sufficiently large number of observations (N_{S_t} ≫ 1).
(2) In the target domain, the few-shot setting generally uses K samples per class (with K very small) for fine-tuning. We would like to point out that this setting generally violates our theoretical assumptions. In our paper, we assume the target data is i.i.d. sampled from D(x, y); equivalently, we first i.i.d. sample y ∼ D(y) and then i.i.d. sample x ∼ D(x|y). Generally D(y) is non-uniform, so the few-shot setting is not covered by our theoretical assumptions.

¹Since supp(T(y)) ⊆ supp(S_t(y)), we naturally have T(y) ≠ S_t(y).

Multi-Task Learning The goal of multi-task learning (Zhang & Yang, 2017) is to improve the prediction performance of all the tasks, whereas in our paper we aim at controlling the prediction risk of a specified target domain. We also notice that some practical techniques are shared, such as shared parameters (Zhang & Yeung, 2012) and shared representations (Ruder, 2017).

C ADDITIONAL FIGURES RELATED TO THE MAIN PAPER

D TABLE OF NOTATION

E PROOF OF THEOREM 1

Proof idea The proof of Theorem 1 consists of three steps, starting from the following lemma.

Lemma 2. If the prediction loss ℓ is L-Lipschitz and the hypothesis h is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y) − h(x2, y)‖2 ≤ K‖x1 − x2‖2, then the target risk can be upper bounded by:

R_T(h) ≤ ∑_t λ[t] R^{α_t}_{S_t}(h) + LK ∑_t λ[t] E_{y∼T(y)} W1(T(x|Y = y) ‖ S_t(x|Y = y)) (4)

Proof. The target risk can be expressed as:

R_T(h) = E_{(x,y)∼T} ℓ(h(x, y)) = E_{y∼T(y)} E_{x∼T(x|y)} ℓ(h(x, y)).

Denoting α(y) = T(y)/S(y), we have:

E_{y∼T(y)} E_{x∼T(x|y)} ℓ(h(x, y)) = E_{y∼S(y)} α(y) E_{x∼T(x|y)} ℓ(h(x, y)).

We then aim to upper bound E_{x∼T(x|y)} ℓ(h(x, y)). For any fixed y,

E_{x∼T(x|y)} ℓ(h(x, y)) − E_{x∼S(x|y)} ℓ(h(x, y)) ≤ | ∫_{x∈X} ℓ(h(x, y)) d(T(x|y) − S(x|y)) |.

Then, by the Kantorovich-Rubinstein duality, for any distribution coupling γ ∈ Π(T(x|y), S(x|y)) we have:

= inf_γ | ∫_{X×X} (ℓ(h(x_p, y)) − ℓ(h(x_q, y))) dγ(x_p, x_q) |
≤ inf_γ ∫_{X×X} | ℓ(h(x_p, y)) − ℓ(h(x_q, y)) | dγ(x_p, x_q)
≤ L inf_γ ∫_{X×X} | h(x_p, y) − h(x_q, y) | dγ(x_p, x_q)
≤ LK inf_γ ∫_{X×X} ‖x_p − x_q‖2 dγ(x_p, x_q)
= LK W1(T(x|Y = y) ‖ S(x|Y = y)).

The first inequality is immediate; the second follows from ℓ being L-Lipschitz; the third follows from h being K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y) − h(x2, y)‖2 ≤ K‖x1 − x2‖2. Then we have:

R_T(h) ≤ E_{y∼S(y)} α(y) [ E_{x∼S(x|y)} ℓ(h(x, y)) + LK W1(T(x|y) ‖ S(x|y)) ]
= E_{(x,y)∼S} α(y) ℓ(h(x, y)) + LK E_{y∼T(y)} W1(T(x|Y = y) ‖ S(x|Y = y))
= R^α_S(h) + LK E_{y∼T(y)} W1(T(x|Y = y) ‖ S(x|Y = y)).

Assigning to each source S_t the weight λ[t] and the label distribution ratio α_t(y) = T(y)/S_t(y), and combining the T source-target pairs, we obtain:

R_T(h) ≤ ∑_t λ[t] R^{α_t}_{S_t}(h) + LK ∑_t λ[t] E_{y∼T(y)} W1(T(x|Y = y) ‖ S_t(x|Y = y)).

Starting from this result, we now prove Theorem 1 by deriving the non-asymptotic bound, estimated from finite sample observations. Denoting the empirical label ratio by α̂_t, we prove the high-probability bound for any simplex λ.

E.1 BOUNDING THE EMPIRICAL AND EXPECTED PREDICTION RISK

Proof. We first bound the first term, which can be decomposed as:

sup_h | ∑_t λ[t] R^{α_t}_{S_t}(h) − ∑_t λ[t] R̂^{α̂_t}_{S_t}(h) |
≤ sup_h | ∑_t λ[t] R^{α_t}_{S_t}(h) − ∑_t λ[t] R̂^{α_t}_{S_t}(h) |  (I)
+ sup_h | ∑_t λ[t] R̂^{α_t}_{S_t}(h) − ∑_t λ[t] R̂^{α̂_t}_{S_t}(h) |  (II)

Bounding term (I) By McDiarmid's inequality, each item changes by at most | 2λ[t] α_t(y) ℓ / N_{S_t} |.
Then we have:
$$P\big((\text{I}) - \mathbb{E}(\text{I}) \ge t\big) \le \exp\Big(\frac{-2t^2}{\sum_{t=1}^{T} \frac{4}{\beta_t N}\lambda[t]^2 \alpha_t(y)^2 \ell^2}\Big) = \delta$$
By substituting $\delta$, with high probability $1-\delta$ we have:
$$(\text{I}) \le \mathbb{E}(\text{I}) + L_{\max}\, d^{\sup}_{\infty} \sqrt{\sum_{t=1}^{T} \frac{\lambda[t]^2}{\beta_t}}\,\sqrt{\frac{\log(1/\delta)}{2N}}$$
where $L_{\max} = \sup_{h\in\mathcal{H}} \ell(h)$, $N = \sum_{t=1}^T N_{S_t}$ is the total number of source observations, $\beta_t = N_{S_t}/N$ is the frequency ratio of each source, and $d^{\sup}_{\infty} = \max_{t=1,\dots,T} d_{\infty}(\mathcal{T}(y)\|\mathcal{S}_t(y)) = \max_{t=1,\dots,T}\max_{y\in[1,|\mathcal{Y}|]}\alpha_t(y)$ is the maximum true label shift value (a constant).
Bounding $\mathbb{E}\sup(\text{I})$: the expectation term can be upper bounded in the form of a Rademacher complexity:
$$\mathbb{E}(\text{I}) \le 2\,\mathbb{E}_{\sigma}\mathbb{E}_{\hat{S}_1^T}\sup_h \sum_{t=1}^{T}\lambda[t]\sum_{(x_t,y_t)\in\hat{S}_t}\frac{1}{TN}\,\sigma\,\alpha_t(y)\,\ell(h(x_t,y_t)) \le 2\sum_t \lambda[t]\,\mathbb{E}_{\sigma}\mathbb{E}_{\hat{S}_1^T}\sup_h \sum_{(x_t,y_t)\in\hat{S}_t}\frac{1}{TN}\,\sigma\,\alpha_t(y)\,\ell(h(x_t,y_t))$$
$$\le 2\sup_t \mathbb{E}_{\sigma}\mathbb{E}_{\hat{S}_t}\sup_h \sum_{(x_t,y_t)\in\hat{S}_t}\frac{1}{TN}\,\sigma\,\alpha_t(y)\,\ell(h(x_t,y_t)) = \sup_t 2R_t(\ell,\mathcal{H}) = 2\bar{R}(\ell,\mathcal{H})$$
where $\bar{R}(\ell,\mathcal{H}) = \sup_t R_t(\ell,\mathcal{H}) = \sup_t \sup_{h\in\mathcal{H}}\mathbb{E}_{\hat{S}_t,\sigma}\sum_{(x_t,y_t)\in\hat{S}_t}\frac{1}{TN}\,\sigma\,\alpha_t(y)\,\ell(h(x_t,y_t))$ represents the Rademacher complexity w.r.t. the prediction loss $\ell$, the hypothesis $h$, and the true label distribution ratio $\alpha_t$. Therefore, with high probability $1-\delta$ we have:
$$\sup_h \Big|\sum_t \lambda[t] R^{\alpha_t}_{\mathcal{S}_t}(h) - \sum_t \lambda[t]\hat{R}^{\alpha_t}_{\mathcal{S}_t}(h)\Big| \le 2\bar{R}(\ell,\mathcal{H}) + L_{\max}\, d^{\sup}_{\infty}\sqrt{\sum_{t=1}^{T}\frac{\lambda[t]^2}{\beta_t}}\,\sqrt{\frac{\log(1/\delta)}{2N}}$$
Bounding term (II) For all hypotheses $h$, we have:
$$\Big|\sum_t \lambda[t]\hat{R}^{\alpha_t}_{\mathcal{S}_t}(h) - \sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h)\Big| = \Big|\sum_t \lambda[t]\frac{1}{N_{S_t}}\sum_{i=1}^{N_{S_t}}\big(\alpha_t(y^{(i)}) - \hat{\alpha}_t(y^{(i)})\big)\ell(h)\Big| = \sum_t \lambda[t]\frac{1}{N_{S_t}}\Big|\sum_{y=1}^{|\mathcal{Y}|}\big(\alpha_t(Y=y) - \hat{\alpha}_t(Y=y)\big)\bar{\ell}(Y=y)\Big|$$
where $\bar{\ell}(Y=y) = \sum_{i=1}^{N_{S_t}}\ell(h(x_i, y_i = y))$ represents the cumulative error conditioned on a given label $Y=y$. According to Hölder's inequality, we have:
$$\sum_t \lambda[t]\frac{1}{N_{S_t}}\Big|\sum_{y=1}^{|\mathcal{Y}|}\big(\alpha_t(Y=y) - \hat{\alpha}_t(Y=y)\big)\bar{\ell}(Y=y)\Big| \le \sum_t \lambda[t]\frac{1}{N_{S_t}}\|\alpha_t - \hat{\alpha}_t\|_2\,\|\bar{\ell}(Y=y)\|_2 \le L_{\max}\sum_t \lambda[t]\|\alpha_t - \hat{\alpha}_t\|_2 \le L_{\max}\sup_t \|\alpha_t - \hat{\alpha}_t\|_2$$
Therefore, $\forall h\in\mathcal{H}$, with high probability $1-\delta$ we have:
$$\sum_t \lambda[t] R^{\alpha_t}_{\mathcal{S}_t}(h) \le \sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h) + 2\bar{R}(\ell,\mathcal{H}) + L_{\max}\, d^{\sup}_{\infty}\sqrt{\sum_{t=1}^{T}\frac{\lambda[t]^2}{\beta_t}}\,\sqrt{\frac{\log(1/\delta)}{2N}} + L_{\max}\sup_t\|\alpha_t - \hat{\alpha}_t\|_2$$

E.2 BOUNDING THE EMPIRICAL WASSERSTEIN DISTANCE
We then derive the sample complexity between the empirical and true distributions, which decomposes into the following two parts. For any $t$, we have:
$$\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$$
$$\le \underbrace{\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))}_{\text{(I)}} + \underbrace{\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))}_{\text{(II)}}$$
Bounding (I) We have:
$$\mathbb{E}_{y\sim\mathcal{T}(y)}\big[W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))\big] \le \sup_y\big[W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))\big]$$
$$\le \sup_y W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y))$$
The first inequality holds by Hölder's inequality (since $|\sum_y \mathcal{T}(y)| = 1$); the second uses the triangle inequality of the Wasserstein distance, $W_1(P\|Q) \le W_1(P\|P_1) + W_1(P_1\|P_2) + W_1(P_2\|Q)$, under which the term $W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$ cancels. According to the convergence behavior of the Wasserstein distance (Weed et al., 2019), with high probability $\ge 1-2\delta$ we have:
$$W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$
where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$, $N^y_{S_t}$ is the number of samples with $Y=y$ in source $t$, $N^y_T$ is the number of samples with $Y=y$ in the target, and $C_{t,y}, C_y$ and $s_{t,y} > 2, s_y > 2$ are positive constants from the concentration inequality. This characterizes the convergence behavior between the empirical and true Wasserstein distance.
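As a quick numerical illustration of this convergence behavior (the union-bound step below then resumes the proof), the 1-D empirical Wasserstein-1 distance can be computed directly. A minimal sketch using SciPy's `wasserstein_distance`; the Gaussian data and sample sizes are our own illustrative assumptions, not part of the proof:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# W1 between an empirical sample and a large reference sample (a proxy for
# the true distribution) shrinks as the sample size n grows.
rng = np.random.default_rng(0)
ref = rng.normal(size=200_000)                 # proxy for the true distribution
for n in [100, 1_000, 10_000]:
    emp = rng.normal(size=n)                   # empirical sample of size n
    print(n, wasserstein_distance(emp, ref))   # distance decreases with n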
If we adopt the union bound over all labels by setting $\delta \leftarrow \delta/|\mathcal{Y}|$, then with high probability $\ge 1-2\delta$ we have:
$$\sup_y W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$
where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2|\mathcal{Y}|}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$.
Again adopting the union bound over all tasks by setting $\delta \leftarrow \delta/T$, with high probability $\ge 1-2\delta$ we have:
$$\sum_t \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \sum_t \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sup_t \kappa(\delta, N^y_{S_t}, N^y_T)$$
where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2T|\mathcal{Y}|}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$.
Bounding (II) We can bound the second term:
$$\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sup_y W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))\,\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le C^t_{\max}\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big|$$
where $C^t_{\max} = \sup_y W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$ is a positive and bounded constant. We then bound $|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)|$: by McDiarmid's inequality, with high probability $1-\delta$ we have:
$$\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le \mathbb{E}_{\hat{\mathcal{T}}}\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| + \sqrt{\frac{\log(1/\delta)}{2N_T}} = 2\,\mathbb{E}_{\sigma}\mathbb{E}_{\hat{\mathcal{T}}}\sum_y \sigma\hat{\mathcal{T}}(y) + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$
We then bound $\mathbb{E}_{\sigma}\mathbb{E}_{\hat{\mathcal{T}}}\sum_y \sigma\hat{\mathcal{T}}(y)$ using the properties of Rademacher complexity [Lemma 26.11, Shalev-Shwartz & Ben-David (2014)]; noticing that $\hat{\mathcal{T}}(y)$ is a probability simplex, we have $\mathbb{E}_{\sigma}\mathbb{E}_{\hat{\mathcal{T}}}\sum_y \sigma\hat{\mathcal{T}}(y) \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}}$, hence
$$\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$
Using the union bound with $\delta \leftarrow \delta/T$, with high probability $\ge 1-\delta$ and for any simplex $\lambda$ we have:
$$\sum_t \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$
where $C_{\max} = \sup_t C^t_{\max}$. Combining everything, we can derive the PAC-learning bound estimated from finite samples (with high probability $1-4\delta$):
$$R_{\mathcal{T}}(h) \le \sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h) + LK\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + L_{\max}\, d^{\sup}_{\infty}\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\,\sqrt{\frac{\log(1/\delta)}{2N}}$$
$$+\; 2\bar{R}(\ell,\mathcal{H}) + L_{\max}\sup_t\|\alpha_t - \hat{\alpha}_t\|_2 + \sup_t \kappa(\delta, N^y_{S_t}, N^y_T) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$
We then denote $\mathrm{Comp}(N_{S_1},\dots,N_{S_T}, N_T, \delta) = 2\bar{R}(\ell,\mathcal{H}) + \sup_t \kappa(\delta, N^y_{S_t}, N^y_T) + C_{\max}\big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\big)$ as the convergence-rate function, which decreases with larger $N_{S_1},\dots,N_{S_T}, N_T$. Besides, $\bar{R}(\ell,\mathcal{H}) = \sup_t R_t(\ell,\mathcal{H})$ is the re-weighted Rademacher complexity. Given a fixed hypothesis class with finite VC dimension,² it can be proved that $\bar{R}(\ell,\mathcal{H}) = O\big(\sqrt{1/\min_t N_{S_t}}\big)$, e.g., (Shalev-Shwartz & Ben-David, 2014).

² If the hypothesis is a neural network, the Rademacher complexity can still be bounded analogously.

F PROOF OF THEOREM 2
We first recall the stochastic feature representation $g: \mathcal{X}\to\mathcal{Z}$, the scoring hypothesis $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$, and the prediction loss $\ell: \mathbb{R}\to\mathbb{R}$.³
Proof. The marginal and conditional distributions w.r.t. the latent variable $\mathcal{Z}$ induced by $g$ can be formulated as:
$$\mathcal{S}(z) = \int_x g(z|x)\,\mathcal{S}(x)\,dx \qquad \mathcal{S}(z|y) = \int_x g(z|x)\,\mathcal{S}(x|Y=y)\,dx$$
In the multi-class classification problem, we additionally define the following distributions:
$$\mu_k(z) = \mathcal{S}(Y=k, z) = \mathcal{S}(Y=k)\,\mathcal{S}(z|Y=k) \qquad \pi_k(z) = \mathcal{T}(Y=k, z) = \mathcal{T}(Y=k)\,\mathcal{T}(z|Y=k)$$
Based on (Nguyen et al., 2009), and since $g(z|x)$ is a stochastic representation learning function, the loss conditioned on a fixed point $(x,y)$ w.r.t. $h$ and $g$ is $\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y))$.
Then, taking the expectation over $\mathcal{S}(x,y)$, we have:⁴
$$R_{\mathcal{S}}(h,g) = \mathbb{E}_{(x,y)\sim\mathcal{S}(x,y)}\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y)) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}(y=k)\int_x \mathcal{S}(x|Y=k)\int_z g(z|x)\,\ell(h(z,y=k))\,dz\,dx$$
$$= \sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}(y=k)\int_z \Big[\int_x \mathcal{S}(x|Y=k)\,g(z|x)\,dx\Big]\ell(h(z,y=k))\,dz = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}(y=k)\int_z \mathcal{S}(z|Y=k)\,\ell(h(z,y=k))\,dz = \sum_{k=1}^{|\mathcal{Y}|}\int_z \mu_k(z)\,\ell(h(z,y=k))\,dz$$
Intuitively, the expected loss w.r.t. the joint distribution $\mathcal{S}$ decomposes into the label distribution $\mathcal{S}(y)$ (weighting the labels) and the conditional distribution $\mathcal{S}(\cdot|y)$ (a real-valued conditional loss). The expected risks on $\mathcal{S}$ and $\mathcal{T}$ can then be expressed as:
$$R_{\mathcal{S}}(h,g) = \sum_{k=1}^{|\mathcal{Y}|}\int_z \ell(h(z,y=k))\,\mu_k(z)\,dz \qquad R_{\mathcal{T}}(h,g) = \sum_{k=1}^{|\mathcal{Y}|}\int_z \ell(h(z,y=k))\,\pi_k(z)\,dz$$

³ Note this definition differs from conventional binary classification with binary output; it is more suitable in the multi-classification scenario with cross-entropy loss (Hoffman et al., 2018a). For example, if we define $\ell = -\log(\cdot)$ and $h(z,y)\in(0,1)$ as a scalar score output, then $\ell(h(z,y))$ can be viewed as the cross-entropy loss of a neural network.
⁴ An alternative understanding is based on the Markov structure $Y \leftarrow X \rightarrow Z$ (through $\mathcal{S}(y|x)$ and $g$), with the score $S$ produced by $h$ from $(Z,Y)$. The expected loss over all random variables can be written as $\int \mathbb{P}(x,y,z,s)\,\ell(s)\,d(x,y,z,s) = \int \mathbb{P}(x,y)\,\mathbb{P}(z|x)\,\mathbb{P}(s|z,y)\,\ell(s)\,d(x,y)\,dz\,ds$. Since the score $S$ is determined by $h(z,y)$, $\mathbb{P}(s|y,z)=1$; by definition $\mathbb{P}(z|x)=g(z|x)$ and $\mathbb{P}(x,y)=\mathcal{S}(x,y)$, so the loss can finally be expressed as $\mathbb{E}_{\mathcal{S}(x,y)}\mathbb{E}_{g(z|x)}\,\ell(h(z,y))$.

By denoting $\alpha(y) = \mathcal{T}(y)/\mathcal{S}(y)$, we have the $\alpha$-weighted loss:
$$R^{\alpha}_{\mathcal{S}}(h,g) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{T}(Y=k)\int_z \ell(h(z,y=k))\,\mathcal{S}(z|Y=k)\,dz$$
Then we have:
$$R_{\mathcal{T}}(h,g) - R^{\alpha}_{\mathcal{S}}(h,g) \le \sum_k \mathcal{T}(Y=k)\int_z \ell(h(z,y=k))\,d\big|\mathcal{S}(z|Y=k) - \mathcal{T}(z|Y=k)\big|$$
Under the same assumptions, the loss $\ell(h(z,Y=k))$ is $KL$-Lipschitz w.r.t. the cost $\|\cdot\|_2$ (for a fixed $k$). Therefore, adopting the same proof strategy (Kantorovich-Rubinstein duality) as in Lemma 2, we have:
$$R_{\mathcal{T}}(h,g) - R^{\alpha}_{\mathcal{S}}(h,g) \le KL\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big)$$
and hence $R_{\mathcal{T}}(h,g) \le R^{\alpha}_{\mathcal{S}}(h,g) + LK\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{S}(z|Y=y)\|\mathcal{T}(z|Y=y))$. Based on this result, for each $t=1,\dots,T$, setting $\mathcal{S}=\mathcal{S}_t$ and $\alpha(y)=\alpha_t(y)=\mathcal{T}(y)/\mathcal{S}_t(y)$:
$$\lambda[t]\,R_{\mathcal{T}}(h,g) \le \lambda[t]\,R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK\,\lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big)$$
Summing over $t=1,\dots,T$, we have:
$$R_{\mathcal{T}}(h,g) \le \sum_{t=1}^T \lambda[t]\,R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK\sum_{t=1}^T \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big)$$

G APPROXIMATION OF THE W1 DISTANCE
By Jensen's inequality, we have $W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y)) \le \sqrt{[W_2(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))]^2}$. Supposing $\hat{\mathcal{S}}_t(z|Y=y) \approx \mathcal{N}(\mathbf{C}^y_t,\Sigma)$ and $\hat{\mathcal{T}}(z|Y=y) \approx \mathcal{N}(\mathbf{C}^y,\Sigma)$, then we have:
$$[W_2(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))]^2 = \|\mathbf{C}^y_t - \mathbf{C}^y\|^2_2 + \mathrm{Trace}\big(2\Sigma - 2(\Sigma\Sigma)^{1/2}\big) = \|\mathbf{C}^y_t - \mathbf{C}^y\|^2_2$$
We would like to point out that assuming an identical covariance matrix makes the matching more computationally efficient. This is advantageous and reasonable in the deep learning regime: we adopt mini-batches (of size 20-128) for optimizing the network parameters, and within each mini-batch the number of samples per class is small, so an empirical covariance estimate would be strongly biased and would add considerable optimization overhead. On the contrary, the empirical mean is unbiased and computationally efficient: we can simply use a moving average to update the estimated mean (an unbiased estimator). The empirical results verify the effectiveness of this idea.
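The identity above is easy to verify numerically. The following sketch (the dimension, random centroids, and covariance are our own synthetic assumptions) evaluates the general Gaussian $W_2$ formula and confirms that, with a shared covariance, it collapses to the centroid distance:

```python
import numpy as np
from scipy.linalg import sqrtm

d = 4
rng = np.random.default_rng(0)
C_ty, C_y = rng.normal(size=d), rng.normal(size=d)         # source/target class centroids
A = rng.normal(size=(d, d)); Sigma = A @ A.T + np.eye(d)   # shared covariance matrix

# General Gaussian W2^2: ||m1-m2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
root = np.real(sqrtm(sqrtm(Sigma) @ Sigma @ sqrtm(Sigma)))
w2_sq = np.sum((C_ty - C_y) ** 2) + np.trace(2 * Sigma - 2 * root)

# With S1 = S2 = Sigma, the trace term vanishes, leaving the squared centroid distance.
print(np.isclose(w2_sq, np.sum((C_ty - C_y) ** 2)))  # True
```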
H PROOF OF LEMMA 1
For each source $\mathcal{S}_t$, introducing the duality of the Wasserstein-1 distance, for $y\in\mathcal{Y}$ we have:
$$W_1(\mathcal{S}_t(z|y)\|\mathcal{T}(z|y)) = \sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim\mathcal{S}_t(z|y)} d(z) - \mathbb{E}_{z\sim\mathcal{T}(z|y)} d(z) = \sup_{\|d\|_L\le 1}\sum_z \mathcal{S}_t(z|y)\,d(z) - \sum_z \mathcal{T}(z|y)\,d(z) = \frac{1}{\mathcal{T}(y)}\sup_{\|d\|_L\le 1}\frac{\mathcal{T}(y)}{\mathcal{S}_t(y)}\sum_z \mathcal{S}_t(z,y)\,d(z) - \sum_z \mathcal{T}(z,y)\,d(z)$$
Then, defining $\bar{\alpha}_t(z) = 1_{\{(z,y)\sim\mathcal{S}_t\}}\frac{\mathcal{T}(Y=y)}{\mathcal{S}_t(Y=y)} = 1_{\{(z,y)\sim\mathcal{S}_t\}}\alpha_t(Y=y)$, i.e., for each pair $(z,y)$ sampled from the same distribution, $\bar{\alpha}_t(Z=z)=\alpha_t(Y=y)$, we have:
$$\sum_y \mathcal{T}(y)\,W_1(\mathcal{S}_t(z|y)\|\mathcal{T}(z|y)) = \sum_y \sup_{\|d\|_L\le 1}\Big\{\sum_z \alpha_t(y)\,\mathcal{S}_t(z,y)\,d(z) - \sum_z \mathcal{T}(z,y)\,d(z)\Big\} = \sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim\mathcal{S}_t(z)}\bar{\alpha}_t(z)\,d(z) - \mathbb{E}_{z\sim\mathcal{T}(z)} d(z)$$
We give a simple example to understand $\bar{\alpha}_t$: supposing three samples $\mathcal{S}_t = \{(z_1, Y=1), (z_2, Y=1), (z_3, Y=0)\}$, then $\bar{\alpha}_t(z_1) = \bar{\alpha}_t(z_2) = \alpha_t(1)$ and $\bar{\alpha}_t(z_3) = \alpha_t(0)$. Therefore, the conditional term is equivalent to label-weighted Wasserstein adversarial learning. Plugging in the weight $\lambda[t]$ and domain discriminator $d_t$ for each source, we finally obtain Lemma 1.

I DERIVING THE LABEL RATIO LOSS
We suppose the representation learning aims at matching the conditional distributions such that $\mathcal{T}(z|y)\approx\mathcal{S}_t(z|y), \forall t$, and we denote the predicted target label distribution as $\bar{\mathcal{T}}(y)$. Simplifying notation, define $f(z) = \mathrm{argmax}_y h(z,y)$, the most probable predicted label. Then we have:
$$\bar{\mathcal{T}}(y) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{T}(f(z)=y|Y=k)\,\mathcal{T}(Y=k) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}_t(f(z)=y|Y=k)\,\mathcal{T}(Y=k) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}_t(f(z)=y, Y=k)\,\alpha_t(k) = \bar{\mathcal{T}}_{\alpha_t}(y)$$
The first equality comes from the definition of the target label prediction distribution, $\bar{\mathcal{T}}(y) = \mathbb{E}_{\mathcal{T}(z)}1\{f(z)=y\} = \mathcal{T}(f(z)=y) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{T}(f(z)=y|Y=k)\,\mathcal{T}(Y=k)$. The second equality, $\mathcal{T}(f(z)=y|Y=k) = \mathcal{S}_t(f(z)=y|Y=k)$, holds since $\mathcal{T}(z|y)\approx\mathcal{S}_t(z|y)$ for all $t$ and the hypothesis $f$ is shared. The term $\mathcal{S}_t(f(z)=y, Y=k)$ is the (expected) source prediction confusion matrix, whose empirical (observed) version we denote $\hat{\mathcal{S}}_t(f(z)=y, Y=k)$.
Based on this idea, in practice we want to find an $\hat{\alpha}_t$ matching the two predicted distributions $\bar{\mathcal{T}}$ and $\bar{\mathcal{T}}_{\hat{\alpha}_t}$. Adopting the KL divergence as the metric, we have:
$$\min_{\hat{\alpha}_t} D_{KL}(\bar{\mathcal{T}}\|\bar{\mathcal{T}}_{\hat{\alpha}_t}) = \min_{\hat{\alpha}_t}\mathbb{E}_{y\sim\bar{\mathcal{T}}}\log\Big(\frac{\bar{\mathcal{T}}(y)}{\bar{\mathcal{T}}_{\hat{\alpha}_t}(y)}\Big) = \min_{\hat{\alpha}_t} -\mathbb{E}_{y\sim\bar{\mathcal{T}}}\log\big(\bar{\mathcal{T}}_{\hat{\alpha}_t}(y)\big) = \min_{\hat{\alpha}_t} -\sum_y \bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}\mathcal{S}_t(f(z)=y, Y=k)\,\hat{\alpha}_t(k)\Big)$$
We should note the natural constraints on the label ratio: $\{\hat{\alpha}_t(y)\ge 0,\ \sum_y \hat{\alpha}_t(y)\hat{\mathcal{S}}_t(y)=1\}$. Based on this principle, we propose the optimization problem to estimate each label ratio. Adopting the empirical counterpart, the empirical confusion matrix $\mathbf{C}_{\hat{S}_t}[y,k] = \hat{\mathcal{S}}_t[f(z)=y, Y=k]$, the optimization loss can be expressed as:
$$\min_{\hat{\alpha}_t} -\sum_{y=1}^{|\mathcal{Y}|}\bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{S}_t}[y,k]\,\hat{\alpha}_t(k)\Big) \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat{\alpha}_t(y)\ge 0,\ \sum_y \hat{\alpha}_t(y)\hat{\mathcal{S}}_t(y)=1$$

J LABEL-PARTIAL MULTI-SOURCE UNSUPERVISED DA
The key difference between conventional and partial multi-source unsupervised DA is the estimation step of $\hat{\alpha}_t$. In fact, we only add a sparsity constraint when estimating each $\hat{\alpha}_t$:
$$\min_{\hat{\alpha}_t} -\sum_{y=1}^{|\mathcal{Y}|}\bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{S}_t}[y,k]\,\hat{\alpha}_t(k)\Big) + C_2\|\hat{\alpha}_t\|_1 \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat{\alpha}_t(y)\ge 0,\ \sum_y \hat{\alpha}_t(y)\hat{\mathcal{S}}_t(y)=1 \quad (5)$$
where $C_2$ is the hyper-parameter controlling the level of target label sparsity when estimating the target label distribution. In the paper we set $C_2 = 0.1$.
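A minimal CVXPY sketch of the convex programs in Eq. (2) and Eq. (5); the function name and argument layout are our own, and the paper only specifies the objective, the constraints, and $C_2 = 0.1$:

```python
import cvxpy as cp
import numpy as np

def estimate_alpha(C_hat, T_bar, S_hat, partial=False, C2=0.1):
    """Estimate the label-distribution ratio alpha_t (Eq. (2) / Eq. (5)).
    C_hat: |Y| x |Y| empirical source confusion matrix C_{S_t}[y, k];
    T_bar: predicted target label distribution (length |Y|);
    S_hat: empirical source label distribution (length |Y|)."""
    Y = C_hat.shape[0]
    alpha = cp.Variable(Y, nonneg=True)
    obj = -T_bar @ cp.log(C_hat @ alpha)        # KL / cross-entropy matching term
    if partial:                                 # sparsity regularizer for partial DA
        obj = obj + C2 * cp.norm1(alpha)
    prob = cp.Problem(cp.Minimize(obj),
                      [S_hat @ alpha == 1])     # sum_y alpha(y) S_t(y) = 1
    prob.solve()
    return alpha.value
```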
K EXPLICIT AND IMPLICIT CONDITIONAL LEARNING
Inspired by Theorem 2, we need to learn the functions $g: \mathcal{X}\to\mathcal{Z}$ and $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$ to minimize:
$$\min_{g,h}\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{S}}_t(z|Y=y)\,\|\,\hat{\mathcal{T}}(z|Y=y)\big)$$
This can be equivalently expressed, for any $\epsilon\in[0,1]$, as:
$$\min_{g,h}\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g) + \epsilon\,C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{S}}_t(z|Y=y)\,\|\,\hat{\mathcal{T}}(z|Y=y)\big) + (1-\epsilon)\,C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{S}}_t(z|Y=y)\,\|\,\hat{\mathcal{T}}(z|Y=y)\big)$$
Applying the explicit and implicit approximations of the conditional distance to the two copies respectively, we optimize the alternative form:
$$\min_{g,h}\max_{d_1,\dots,d_T}\underbrace{\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g)}_{\text{Classification Loss}} + \underbrace{\epsilon\,C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)}\|\mathbf{C}^y_t - \mathbf{C}^y\|_2}_{\text{Explicit Conditional Loss}} + \underbrace{(1-\epsilon)\,C_0\sum_t \lambda[t]\big[\mathbb{E}_{z\sim\hat{\mathcal{S}}_t(z)}\bar{\alpha}_t(z)\,d_t(z) - \mathbb{E}_{z\sim\hat{\mathcal{T}}(z)} d_t(z)\big]}_{\text{Implicit Conditional Loss}} \quad (6)$$
where:
• $\mathbf{C}^y_t = \sum_{(z_t,y_t)\sim\hat{\mathcal{S}}_t} 1_{\{y_t=y\}} z_t$ is the centroid of label $Y=y$ in source $\mathcal{S}_t$;
• $\mathbf{C}^y = \sum_{(z_t,y_p)\sim\hat{\mathcal{T}}} 1_{\{y_p=y\}} z_t$ is the centroid of (pseudo-)label $Y=y_p$ in the target (pseudo-labels are used in the unsupervised DA scenarios);
• $\bar{\alpha}_t(z) = 1_{\{(z,y)\sim\mathcal{S}_t\}}\hat{\alpha}_t(Y=y)$, i.e., for each pair $(z,y)$ from the distribution, $\bar{\alpha}_t(Z=z)=\hat{\alpha}_t(Y=y)$;
• $d_1,\dots,d_T$ are domain discriminators (critic functions) restricted to 1-Lipschitz functions;
• $\epsilon\in[0,1]$ is the adjustment parameter trading off explicit and implicit learning. Based on the equivalent form above, our approach offers a theoretically principled way of tuning these weights. In the paper we set $\epsilon = 0.5$;
• $\hat{\mathcal{T}}(y)$ is the empirical target label distribution (in the unsupervised DA scenarios, approximated by the predicted target label distribution $\bar{\mathcal{T}}(y)$).
Gradient Penalty To enforce the Lipschitz property of the critic functions, we adopt the gradient penalty term (Gulrajani et al., 2017). More concretely, given two samples $z_s\sim\mathcal{S}_t(z)$ and $z_t\sim\mathcal{T}(z)$, we generate an interpolated sample $z_{int} = \xi z_s + (1-\xi)z_t$ with $\xi\sim\mathrm{Unif}[0,1]$, and add the gradient penalty $\|\nabla d(z_{int})\|^2_2$ as a regularization term to control the Lipschitz property of the discriminators $d_1,\dots,d_T$.

L ALGORITHM DESCRIPTIONS
We give a detailed pipeline of the proposed algorithm below, in Algorithms 2 and 3. To update $\lambda$ and $\alpha_t$, we iteratively solve the convex optimization problems after each training epoch and update the parameters using a moving average; we found that updating these two parameters at the mini-batch level leads to instability during training.⁵ As a consequence, we compute the accumulated confusion matrix, weighted prediction risk, and conditional Wasserstein distance over the whole training epoch and then solve the optimization problems. We use CVXPY to optimize the two standard convex losses.⁶
Comparison of time and memory complexity. Time complexity: for each batch we compute $T$ re-weighted losses, $T$ domain adversarial losses, and $T$ explicit conditional losses, so the computational complexity is $O(T)$ during mini-batch training, comparable with recent state-of-the-art approaches such as MDAN and DARN. In addition, after each training epoch we estimate $\alpha_t$ and $\lambda$, with time complexity $O(T|\mathcal{Y}|)$ per epoch (if SGD is used to solve the two convex problems). Our proposed algorithm therefore has time complexity $O(T|\mathcal{Y}|)$; the extra $|\mathcal{Y}|$ factor is due to handling label shift. Memory complexity: our approach requires $O(T)$ domain discriminators and $O(T|\mathcal{Y}|)$ class-feature centroids. In contrast, MDAN and DARN require $O(T)$ domain discriminators, while M3SDA and MDMN require $O(T^2)$ domain discriminators. Since our class-feature centroids are defined in the latent space $\mathcal{Z}$, their memory footprint can be much smaller than that of domain discriminators.

⁵ In the label distribution shift scenarios, mini-batches are highly label-imbalanced; evaluating $\alpha_t$ over a mini-batch is computationally expensive and unstable.
⁶ The optimization problems w.r.t. $\alpha_t$ and $\lambda$ are not large-scale, so a standard convex solver is fast and accurate.
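To make the gradient penalty described in Appendix K concrete, here is a minimal PyTorch-style sketch; the function name and tensor shapes are our own assumptions, while the interpolation and the $\|\nabla d(z_{int})\|^2_2$ regularizer follow the text above:

```python
import torch

def gradient_penalty(critic, z_src, z_tgt):
    """WGAN-GP style penalty (Gulrajani et al., 2017) on interpolates between
    source and target features; `critic` is one of the per-source discriminators d_t.
    z_src, z_tgt: [batch, dim] latent features."""
    xi = torch.rand(z_src.size(0), 1, device=z_src.device)        # xi ~ Unif[0, 1]
    z_int = (xi * z_src + (1 - xi) * z_tgt).requires_grad_(True)  # interpolated samples
    d_out = critic(z_int)
    grads, = torch.autograd.grad(outputs=d_out.sum(), inputs=z_int,
                                 create_graph=True)
    return (grads.norm(2, dim=1) ** 2).mean()                     # ||grad d(z_int)||_2^2
```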
Algorithm 2 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$
Ensure: Label distribution ratios $\hat{\alpha}_t$ and task relation simplex $\lambda$. Feature learner $g$, classifier $h$, statistic critic functions $d_1,\dots,d_T$, class centroids $\mathbf{C}^y_t$ for each source and $\mathbf{C}^y$ for the target ($\forall t\in[1,T], y\in\mathcal{Y}$).
1: /// DNN parameter training stage (fixed $\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,(\mathbf{x}_{\mathcal{T}})\sim\hat{\mathcal{T}}$ do
3: Predict target pseudo-labels $\bar{\mathbf{y}}_{\mathcal{T}} = \mathrm{argmax}_y h(g(\mathbf{x}_{\mathcal{T}}),y)$
4: Compute the (un-normalized) source confusion matrix for each batch: $\mathbf{C}_{\hat{S}_t} = \#[\mathrm{argmax}_{y'}h(z,y')=y,\ Y=k]$ $(t=1,\dots,T)$
5: Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ for each source and $\bar{\mathbf{C}}^y$ for the target.
6: Update the source/target class centroids by moving average (we set $\epsilon_1 = 0.7$):
7: Source class centroid update: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
8: Target class centroid update: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
9: Update $g, h, d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
10: end for
11: /// Estimation of $\hat{\alpha}_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf{C}_{\hat{S}_t} = \hat{\mathcal{S}}_t[\mathrm{argmax}_{y'}h(z,y')=y,\ Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}_{t=1}^T$) via Equation (2) (or Eq. (5) in the partial scenario).
14: Update $\alpha_t$ by moving average: $\alpha_t = \epsilon_1\alpha_t + (1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) as in Sec. 4.3.
16: Update $\lambda$ by moving average: $\lambda = 0.8\lambda + 0.2\lambda'$

Algorithm 3 Wasserstein Aggregation Domain Network (limited target data, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$, label shift ratios $\alpha_t$
Ensure: Task relation simplex $\lambda$. Feature learner $g$, classifier $h$, statistic critic functions $d_1,\dots,d_T$, class centroids $\mathbf{C}^y_t$ and $\mathbf{C}^y$ ($\forall t\in[1,T], y\in\mathcal{Y}$).
1: /// DNN parameter training stage (fixed $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,(\mathbf{x}_{\mathcal{T}})\sim\hat{\mathcal{T}}$ do
3: Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ for each source and $\bar{\mathbf{C}}^y$ for the target.
4: Update the source/target class centroids by moving average (we set $\epsilon_1 = 0.7$):
5: Source class centroid update: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
6: Target class centroid update: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
7: Update $g, h, d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6).
8: end for
9: /// Estimation of $\lambda$ ///
10: Solve for $\lambda$ as in Sec. 4.3 (denoted $\lambda'$)
11: Update $\lambda$ by moving average: $\lambda = \epsilon_1\lambda + (1-\epsilon_1)\lambda'$
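A minimal PyTorch sketch of the centroid moving-average updates in steps 5-8 of Algorithm 2 (and steps 3-6 of Algorithm 3); the helper name is ours, and skipping classes absent from the batch is an implementation assumption:

```python
import torch

def update_centroids(C_run, z, y, num_classes, eps1=0.7):
    """Moving-average class-centroid update: C^y <- eps1 * C^y + (1 - eps1) * batch centroid.
    C_run: [num_classes, dim] running centroids; z: [batch, dim] latent features;
    y: [batch] (pseudo-)labels. Classes absent from the batch keep their running centroid."""
    for k in range(num_classes):
        mask = (y == k)
        if mask.any():
            batch_c = z[mask].mean(dim=0)
            C_run[k] = eps1 * C_run[k] + (1 - eps1) * batch_c
    return C_run
```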
M DATASET DESCRIPTION AND EXPERIMENTAL DETAILS
M.1 AMAZON REVIEW DATASET
We used the Amazon Review dataset (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive (label "1") and negative (label "0") product reviews. The data sizes are 6465 (Books), 5586 (DVD), 7681 (Electronics), and 7945 (Kitchen). We follow the common data pre-processing strategy of Chen et al. (2012): use bag-of-words (BOW) features and extract the top-5000 most frequent unigrams and bigrams over all reviews. We also noticed that the original dataset is label-balanced.
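A sketch of this pre-processing, together with the 50% negative-review drop used in Sec. 5 to create source label shift; the toy corpus and variable names are placeholders, not the actual data pipeline:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

reviews = ["great book", "terrible dvd", "works well",
           "broke fast", "love it", "waste of money"]   # placeholder corpus
labels = np.array([1, 0, 1, 0, 1, 0])                   # 1 = positive, 0 = negative

# Top-5000 unigram/bigram bag-of-words features (Chen et al., 2012).
vec = CountVectorizer(ngram_range=(1, 2), max_features=5000)
X = vec.fit_transform(reviews)

# Create source label shift: randomly drop 50% of the negative reviews.
rng = np.random.default_rng(0)
neg = np.flatnonzero(labels == 0)
drop = rng.choice(neg, size=len(neg) // 2, replace=False)
keep = np.setdiff1d(np.arange(len(labels)), drop)
X_shifted, y_shifted = X[keep], labels[keep]
```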
1. What is the focus and contribution of the paper regarding multi-source transfer learning?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in its theoretical analysis?
3. Do you have any concerns regarding the assumptions and limitations of the main theorem?
4. How does the quality of pseudo-labels affect the performance of DA methods, including the proposed approach?
5. What is the motivation behind using theory based on Wasserstein distance, and how does it compare to other approaches such as SHOT?
6. Can the proposed method be applied to address other DA problems beyond the three scenarios presented in the paper?
7. How might the format and presentation of the paper be improved for better clarity and impact?
Review
Review Summary This paper aims to provide a unified principle for multi-source transfer learning under label shift. Based on this principle, the paper claims that a unified algorithm is proposed for various multi-source label-shift transfer scenarios: learning with limited target data, unsupervised domain adaptation, and label-partial unsupervised domain adaptation. The proposed algorithm is validated on three benchmark datasets. The proofs seem correct, combining existing single-domain DA theory with theory on the Wasserstein distance. The main theorem (Theorem 1) assumes that we can access the label information in the target domain, which is not realistic in many DA problem settings (e.g., UDA or multi-source UDA in this paper). In many DA problem settings, we have to use pseudo-labels in place of true labels in the target domain, and this should be analysed in the proposed theorem. However, this paper makes no effort to theoretically analyse the effect of pseudo-labels, which leaves the paper with very limited impact on the DA field. Besides, there are some misleading conclusions in this paper.
Pros:
1. The proposed theorems seem correct.
2. The proposed algorithm achieves good results after using the label-distribution ratio. The proposed method can address various DA problems and some machine learning problems.
Cons:
1. The main theorem (Theorem 1) assumes that we can access the label information in the target domain, which is not realistic in many DA problem settings (e.g., UDA or multi-source UDA in this paper).
2. In many DA problem settings, we have to use pseudo-labels in place of true labels in the target domain. If the quality of the pseudo-labels is low, can we still obtain good adaptation results in the target domain using the proposed method? If not, how does the quality of pseudo-labels affect the performance of DA methods?
3. In Theorem 1, the paper claims that Comp() decreases with larger observation numbers. However, it is still unknown whether the Rademacher complexity of deep networks converges to zero with infinitely many observations. This means the claim made in the main text is not true in general, which will mislead broader ICLR readers. The claim holds for a fixed hypothesis class with finite VC dimension (as stated at the top of page 20), which is not linked to the deep-network-based algorithm.
4. The motivation of this paper is unclear. Why should we use theory based on the Wasserstein distance? How about directly using pseudo-labels (like SHOT in ICML 2020)? There are many open questions regarding the motivation of this paper.
5. Can the proposed method only be used to address the three transfer scenarios presented in this paper? Are there difficulties in applying the proposed method to other DA problems? The motivation for testing the proposed method only in the three presented problem settings is unclear.
6. The format of this paper is poor and should be revised before submission.
ICLR
Title Unified Principles For Multi-Source Transfer Learning Under Label Shifts
Abstract We study the label shift problem in multi-source transfer learning and derive new generic principles. Our proposed framework unifies the principles of conditional feature alignment, label distribution ratio estimation, and domain relation weight estimation. Based on the inspired practical principles, we provide a unified practical framework for three multi-source label-shift transfer scenarios: learning with limited target data, unsupervised domain adaptation, and label-partial unsupervised domain adaptation. We evaluate the proposed method on these scenarios by extensive experiments and show that our proposed algorithm significantly outperforms the baselines.
1 INTRODUCTION
Transfer learning (Pan & Yang, 2009) is based on the motivation that learning a new task is easier after having learned several similar tasks. By learning the inductive bias from a set of related source domains $(\mathcal{S}_1,\dots,\mathcal{S}_T)$ and then leveraging the shared knowledge upon learning the target domain $\mathcal{T}$, the prediction performance can be significantly improved. Based on this, transfer learning arises in deep learning applications such as computer vision (Zhang et al., 2019; Tan et al., 2018; Hoffman et al., 2018b), natural language processing (Ruder et al., 2019; Houlsby et al., 2019), and biomedical engineering (Raghu et al., 2019; Lundervold & Lundervold, 2019; Zhang & An, 2017).
To ensure a reliable transfer, it is critical to understand the theoretical assumptions relating the domains. One implicit assumption in most transfer learning algorithms is that the label proportions remain unchanged across different domains (Du Plessis & Sugiyama, 2014) (i.e., $\mathcal{S}(y) = \mathcal{T}(y)$). However, in many real-world applications, the label distributions can vary markedly (i.e., label shift) (Wen et al., 2014; Lipton et al., 2018; Li et al., 2019b), in which case existing approaches cannot guarantee a small target generalization error, as recently proved by Combes et al. (2020).
Moreover, transfer learning becomes more challenging when transferring knowledge from multiple sources to build a model for the target domain, as this requires effectively selecting and leveraging the most useful source domains when label shift occurs. This is not only theoretically interesting but also commonly encountered in real-world applications. For example, in medical diagnostics, the disease distribution changes across countries (Liu et al., 2004; Geiss et al., 2014). Considering the task of diagnosing a disease in a country without sufficient data, how can we leverage the information from different countries with abundant data to help the diagnosis? Obviously, naïvely combining all the sources and applying a one-to-one single-source transfer learning algorithm can lead to undesired results, as it can include low-quality or even untrusted data from certain sources, which can severely influence the performance.
In this paper, we study the label shift problem in multi-source transfer learning, where $\mathcal{S}_t(y) \neq \mathcal{T}(y)$. We propose unified principles that are applicable to three common transfer scenarios: unsupervised domain adaptation (DA) (Ben-David et al., 2010), limited target labels (Mansour et al., 2020), and partial unsupervised DA with $\mathrm{supp}(\mathcal{T}(y)) \subseteq \mathrm{supp}(\mathcal{S}_t(y))$ (Cao et al., 2018), where prior works generally treated them as separate scenarios.
It should be noted that this work deals with target shift without assuming identical semantic conditional distributions (i.e., we allow $\mathcal{S}_t(x|y) \neq \mathcal{T}(x|y)$), which is more realistic for real-world problems. Our contributions in this paper are twofold:
(I) We propose to use the Wasserstein distance (Arjovsky et al., 2017) to develop a new upper bound on the target generalization risk (Theorem 1), which reveals the importance of label distribution ratio estimation and provides a principled guideline for learning the domain relation coefficients. Moreover, we provide a theoretical analysis in the context of representation learning (Theorem 2), which guides learning a feature function that minimizes the conditional Wasserstein distance while controlling the weighted source risk. We further reveal that the relations among the aforementioned three scenarios lie in the different assumptions made for estimating the label distribution ratio.
(II) Inspired by the theoretical results, we propose the Wasserstein Aggregation Domain Network (WADN) for handling label shift in multi-source transfer learning. We evaluate our algorithm on three benchmark datasets, and the results show that our algorithm can significantly outperform state-of-the-art principled approaches.
2 RELATED WORK
Multi-Source Transfer Learning Theories have been investigated in the previous literature with different principles for aggregating source domains. In the popular unsupervised DA setting, (Zhao et al., 2018; Peng et al., 2019; Wen et al., 2020; Li et al., 2018b) adopted the $\mathcal{H}$-divergence (Ben-David et al., 2007), discrepancy (Mansour et al., 2009), and Wasserstein distance (Arjovsky et al., 2017) of the marginal distributions, $d(\mathcal{S}_t(x), \mathcal{T}(x))$, to estimate domain relations and dynamically leverage different domains. The resulting bounds generally consist of the source risk, the domain discrepancy, and an unobservable term $\eta$, the optimal risk over all domains, which is ignored in these approaches. However, as Combes et al. (2020) pointed out, ignoring the influence of $\eta$ is problematic when the label distributions of source and target domains differ significantly. It is therefore necessary to take $\eta$ into account when a small amount of labelled target data is available (Wen et al., 2020). Following this line, very recent works (Konstantinov & Lampert, 2019; Wang et al., 2019a; Mansour et al., 2020) started to measure the divergence between two domains given label information for the target domain, using the $\mathcal{Y}$-discrepancy (Mohri & Medina, 2012). However, we empirically show that these methods are still unable to handle label shift.
Label Shift Label shift (Zhang et al., 2013; Gong et al., 2016) is a common phenomenon in transfer learning, with $\mathcal{S}(y) \neq \mathcal{T}(y)$, and is generally ignored by previous multi-source transfer learning practice. Several theoretically principled approaches have been proposed, such as (Azizzadenesheli et al., 2019; Garg et al., 2020). In addition, (Combes et al., 2020; Wu et al., 2019) analyzed the generalized label shift problem in one-to-one single-source unsupervised DA, but did not provide guidelines for leveraging different sources to ensure a reliable transfer, which is more challenging. (Redko et al., 2019) proposed an optimal transport strategy for multi-source unsupervised DA under label shift by assuming identical semantic conditional distributions; however, they did not consider representation learning in conjunction with their framework and did not design neural-network-based approaches.
Different from these, we analyze our problem in the context of representation learning and propose efficient and principled strategies. Moreover, our theoretical results highlight the importance of the label shift problem in a variety of multi-source transfer problems, while the aforementioned work generally focuses on the unsupervised DA problem without considering unified rules for different scenarios (e.g., partial multi-source DA).
3 THEORETICAL INSIGHTS: TRANSFER RISK UPPER BOUND
We assume a scoring hypothesis defined on the input space $\mathcal{X}$ and output space $\mathcal{Y}$, $h: \mathcal{X}\times\mathcal{Y}\to\mathbb{R}$, that is K-Lipschitz w.r.t. the feature $x$ (given the same label), i.e., for $\forall y$, $\|h(x_1,y) - h(x_2,y)\|_2 \le K\|x_1-x_2\|_2$, and a loss function $\ell: \mathbb{R}\times\mathbb{R}\to\mathbb{R}_+$ that is positive, L-Lipschitz, and upper bounded by $L_{\max}$. We denote the expected risk w.r.t. distribution $\mathcal{D}$ as $R_{\mathcal{D}}(h) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell(h(x,y))$ and its empirical counterpart (w.r.t. $\hat{\mathcal{D}}$) as $\hat{R}_{\mathcal{D}}(h) = \sum_{(x,y)\in\hat{\mathcal{D}}}\ell(h(x,y))$. We adopt the Wasserstein-1 distance (Arjovsky et al., 2017) as the metric measuring domain similarity; compared with other divergences, the Wasserstein distance has been theoretically proved tighter than the TV distance (Gong et al., 2016) or the Jensen-Shannon divergence (Combes et al., 2020).
Based on previous work, label shift is generally handled by a label-distribution-ratio-weighted loss: $R^{\alpha}_{\mathcal{S}}(h) = \mathbb{E}_{(x,y)\sim\mathcal{S}}\,\alpha(y)\,\ell(h(x,y))$ with $\alpha(y) = \mathcal{T}(y)/\mathcal{S}(y)$; we denote $\hat{\alpha}_t$ as its empirical counterpart, estimated from samples. Besides, to measure the task relations, we define a simplex $\lambda$ with $\lambda[t]\ge 0, \sum_{t=1}^T\lambda[t]=1$ as the task relation coefficient vector, assigning high weight to similar tasks. We first present Theorem 1, which gives theoretical insights on how to combine source domains by properly estimating $\lambda$.
Theorem 1. Let $\{\hat{\mathcal{S}}_t = \{(x_i,y_i)\}_{i=1}^{N_{S_t}}\}_{t=1}^T$ and $\hat{\mathcal{T}} = \{(x_i,y_i)\}_{i=1}^{N_T}$ respectively be the $T$ source and target i.i.d. samples. For $\forall h\in\mathcal{H}$, with $\mathcal{H}$ the hypothesis family, and $\forall\lambda$, with high probability $\ge 1-4\delta$ the target risk can be upper bounded by:
$$R_{\mathcal{T}}(h) \le \sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h) + LK\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{T}}(x|Y=y)\,\|\,\hat{\mathcal{S}}_t(x|Y=y)\big) + L_{\max}\, d^{\sup}_{\infty}\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\,\sqrt{\frac{\log(1/\delta)}{2N}} + L_{\max}\sup_t\|\alpha_t - \hat{\alpha}_t\|_2 + \mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$$
where $N = \sum_{t=1}^T N_{S_t}$, $\beta_t = N_{S_t}/N$, and $d^{\sup}_{\infty} = \max_{t\in[1,T],\,y\in[1,|\mathcal{Y}|]}\alpha_t(y)$ is the maximum true label distribution ratio value. $W_1(\cdot\|\cdot)$ is the Wasserstein-1 distance with the $L_2$ distance as cost function. $\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ is a function that decreases with larger $N_{S_1},\dots,N_T$, given a fixed $\delta$ and hypothesis family $\mathcal{H}$. (See Appendix E for details.)
Remarks (1) In the first two terms, the relation coefficient $\lambda$ is controlled by the $\alpha_t$-weighted loss $\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h)$ and the conditional Wasserstein distance $\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$. To minimize the upper bound, we need to assign a higher $\lambda[t]$ to the source $t$ with a smaller weighted prediction loss and a smaller weighted semantic conditional Wasserstein distance. Intuitively, we tend to leverage the source task that is semantically similar to the target and easy to learn.
(2) If each source has the same number of observations, i.e., $\beta_t = 1/T$, then the third term becomes proportional to $\|\lambda\|_2$, an $L_2$-norm regularization, which can be viewed as encouraging a uniform leveraging of all sources. Combining these terms, we need to trade off assigning a higher $\lambda[t]$ to the source $t$ with a smaller weighted prediction loss and conditional Wasserstein distance against assigning balanced $\lambda[t]$ to avoid concentrating on only one source.
(3) $\|\hat{\alpha}_t - \alpha_t\|_2$ indicates the gap between the ground-truth and empirical label ratios. If we can estimate $\hat{\alpha}_t$ well, these terms become small. In practice, if target labels are available, $\hat{\alpha}_t$ can be computed from the observed data and $\hat{\alpha}_t \to \alpha_t$. If target labels are absent (unsupervised DA), we need to design methods to properly estimate $\hat{\alpha}_t$ (Sec. 4).
(4) $\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ reflects the convergence behavior, decreasing with larger observation numbers. If we fix $\mathcal{H}$, $\delta$, $N$, and $N_T$, this term can be viewed as a constant.
Insights in Representation Learning Apart from Theorem 1, we propose a novel theoretical analysis in the context of representation learning, which motivates practical guidelines in the deep learning regime. We define a stochastic feature function $g$ and denote its induced conditional distribution w.r.t. the latent variable $\mathcal{Z}$ as $\mathcal{S}(z|Y=y) = \int_x g(z|x)\,\mathcal{S}(x|Y=y)\,dx$. Then we have:
Theorem 2. We assume the same settings of loss and hypothesis as in Theorem 1. We further denote the stochastic feature learning function $g: \mathcal{X}\to\mathcal{Z}$ and the hypothesis $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$. Then, $\forall\lambda$, the target risk is upper bounded by:
$$R_{\mathcal{T}}(h,g) \le \sum_t \lambda[t]\,R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK\sum_t \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big)$$
where $R_{\mathcal{T}}(h,g) = \mathbb{E}_{(x,y)\sim\mathcal{T}(x,y)}\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y))$.
Theorem 2 reveals that to control the upper bound, we need to learn a $g$ that minimizes the weighted conditional Wasserstein distance and learn $(g,h)$ that minimizes the weighted source risk.
Comparison with previous theorems. Our theory offers an alternative perspective for understanding transfer learning. The first term is the $\alpha$-weighted loss, which recovers the typical source-loss minimization when there is no label shift, i.e., $\alpha_t(y)\equiv 1$ (Li et al., 2019a; Peng et al., 2019; Zhao et al., 2018; Wen et al., 2020). Besides, minimizing the conditional Wasserstein distances has been shown to be advantageous compared with minimizing $W_1(\mathcal{S}_t(z)\|\mathcal{T}(z))$ (Long et al., 2018). Moreover, Theorem 2 explicitly provides theoretical insight into the representation learning function $g$, which remains elusive in previous multi-source transfer theories such as (Wang et al., 2019a; Mansour et al., 2020; Konstantinov & Lampert, 2019; Li et al., 2019a; Peng et al., 2019).
4 UNIFIED PRACTICAL FRAMEWORK IN DEEP LEARNING
The theoretical results in Section 3 motivate general principles to follow when designing multi-source transfer learning algorithms. We summarize these principles in the following rules.
(I) Learn a $g$ that minimizes the weighted conditional Wasserstein distance and learn $(g,h)$ that minimizes the $\hat{\alpha}_t$-weighted source risk (Sec. 4.1).
(II) Properly estimate the label distribution ratio $\hat{\alpha}_t$ (Sec. 4.2).
(III) Balance the trade-off between assigning a higher $\lambda[t]$ to the source $t$ with a smaller weighted prediction loss and conditional Wasserstein distance, and assigning balanced $\lambda[t]$ (Sec. 4.3).
We instantiate these rules with a unified practical framework for solving multi-source transfer learning problems, as shown in Tab. 1. We would like to point out that our original theoretical results are based on the setting with available target labels; the proposed algorithm can be applied to unsupervised scenarios under additional assumptions.
4.1 GUIDELINES IN REPRESENTATION LEARNING
Motivated by Theorem 2, given fixed label ratio estimates $\hat{\alpha}_t$ and fixed $\lambda$, we should find a representation function $g: \mathcal{X}\to\mathcal{Z}$ and a hypothesis function $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$ such that:
$$\min_{g,h}\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{S}}_t(z|Y=y)\,\|\,\hat{\mathcal{T}}(z|Y=y)\big) \quad (1)$$
Explicit Conditional Loss When target label information is available, one can explicitly solve the conditional optimal transport problem with $g$ and $h$ for a given $Y=y$. However, due to the high computational complexity of solving $T\times|\mathcal{Y}|$ optimal transport problems, the original form is practically intractable. To address this issue, we propose to approximate the conditional distribution on the latent space $\mathcal{Z}$ as a Gaussian distribution with identical covariance matrix, such that $\hat{\mathcal{S}}_t(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y_t,\Sigma)$ and $\hat{\mathcal{T}}(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y,\Sigma)$. Then we have $W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y)) \le \|\mathbf{C}^y_t - \mathbf{C}^y\|_2$ (see Appendix G for details). Intuitively, this approximation is equivalent to the well-known feature mean matching (Sugiyama & Kawanabe, 2012), which computes the feature centroid of each class (on latent space $\mathcal{Z}$) and aligns the centroids by minimizing their $L_2$ distance.
Implicit Conditional Loss When target label information is not available (e.g., unsupervised DA and partial DA), the explicit matching approach can adopt pseudo-labels predicted by the hypothesis $h$ as a surrogate for the true target labels. However, in the early stage of learning, the pseudo-labels can be unreliable, which can lead to an inaccurate estimate of $W_1(\hat{\mathcal{S}}(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))$. To address this, the following lemma indicates that estimating the conditional Wasserstein distance is equivalent to estimating a Wasserstein adversarial loss weighted by the label distribution ratio.
Lemma 1. The weighted conditional Wasserstein distance can be implicitly expressed as:
$$\sum_t \lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big) = \max_{d_1,\dots,d_T}\sum_t \lambda[t]\big[\mathbb{E}_{z\sim\mathcal{S}_t(z)}\bar{\alpha}_t(z)\,d_t(z) - \mathbb{E}_{z\sim\mathcal{T}(z)} d_t(z)\big]$$
where $\bar{\alpha}_t(z) = 1_{\{(z,y)\sim\mathcal{S}_t\}}\alpha_t(Y=y)$, and $d_1,\dots,d_T: \mathcal{Z}\to\mathbb{R}_+$ are 1-Lipschitz domain discriminators (Ganin et al., 2016; Arjovsky et al., 2017).
Lemma 1 reveals that instead of using pseudo-labels to estimate the weighted conditional Wasserstein distance, one can train $T$ domain discriminators with a weighted Wasserstein adversarial loss, which does not require pseudo-labels for target samples during the matching. On the other hand, $\bar{\alpha}_t$ can be obtained from $\hat{\alpha}_t$, as elaborated in Sec. 4.2. In practice, we adopt a hybrid approach by linearly combining the explicit and implicit matching strategies in all scenarios, and empirical results show its effectiveness.
4.2 ESTIMATING THE LABEL DISTRIBUTION RATIO $\hat{\alpha}_t$
Multi-Source Transfer with target labels When target labels are available, $\hat{\alpha}_t$ can be directly estimated from the data without any assumption, and $\hat{\alpha}_t\to\alpha_t$ follows from asymptotic statistics.
Unsupervised Multi-Source DA In this scenario, it is impossible to estimate a good $\hat{\alpha}_t$ without imposing additional assumptions. Following (Zhang et al., 2013; Lipton et al., 2018; Azizzadenesheli et al., 2019; Combes et al., 2020), we assume the conditional distributions are aligned between the target and source domains (i.e., $\mathcal{S}_t(z|y) = \mathcal{T}(z|y)$). We then denote $\bar{\mathcal{S}}_t(y), \bar{\mathcal{T}}(y)$ as the predicted $t$-source/target label distributions through the hypothesis $h$, and define $\mathbf{C}_{\hat{S}_t}[y,k] = \hat{\mathcal{S}}_t[\mathrm{argmax}_{y'}h(z,y')=y,\ Y=k]$ as the $t$-source prediction confusion matrix.
We can demonstrate that if the conditional distributions are aligned, we have $\bar{\mathcal{T}}(y) = \bar{\mathcal{T}}_{\hat{\alpha}_t}(y)$, with $\bar{\mathcal{T}}_{\hat{\alpha}_t}(Y=y) = \sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{S}_t}[y,k]\,\hat{\alpha}_t(k)$ the target prediction distribution constructed from the $t$-source information (see Appendix I for the proof). We can then estimate $\hat{\alpha}_t$ by matching these two distributions, minimizing $D_{KL}(\bar{\mathcal{T}}(y)\|\bar{\mathcal{T}}_{\hat{\alpha}_t}(y))$, which is equivalent to:
$$\min_{\hat{\alpha}_t} -\sum_{y=1}^{|\mathcal{Y}|}\bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{S}_t}[y,k]\,\hat{\alpha}_t(k)\Big) \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat{\alpha}_t(y)\ge 0,\ \sum_{y=1}^{|\mathcal{Y}|}\hat{\alpha}_t(y)\hat{\mathcal{S}}_t(y)=1 \quad (2)$$
In the above we assumed that the conditional distributions are aligned, which is a feasible requirement in our algorithm since the goal of $g$ is precisely to gradually achieve this. In the experiments, we iteratively estimate $\hat{\alpha}_t$ and learn $g$.
Unsupervised Multi-Source Partial DA When $\mathrm{supp}(\mathcal{T}(y))\subseteq\mathrm{supp}(\mathcal{S}_t(y))$, $\alpha_t$ is sparse due to the non-overlapping classes. Accordingly, in addition to the assumption $\mathcal{S}_t(z|y)=\mathcal{T}(z|y)$ as in unsupervised DA, we impose this prior knowledge by adding a regularizer $\|\hat{\alpha}_t\|_1$ to the objective of Eq. (2), inducing sparsity in $\hat{\alpha}_t$ (see Appendix J for details). When training the neural network, the non-overlapping classes are automatically assigned a small or zero $\hat{\alpha}_t$, so $(g,h)$ is less affected by classes with small $\hat{\alpha}_t$. Our empirical results validate its capability of detecting non-overlapping classes and show significant improvements over other baselines.
4.3 ESTIMATING THE TASK RELATION COEFFICIENT $\lambda$
Inspired by Theorem 1, given fixed $\hat{\alpha}_t$ and $(g,h)$, we estimate $\lambda$ by optimizing the derived upper bound:
$$\min_{\lambda}\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{T}}(z|Y=y)\,\|\,\hat{\mathcal{S}}_t(z|Y=y)\big) + C_1\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}} \quad \text{s.t.}\ \forall t,\ \lambda[t]\ge 0,\ \sum_{t=1}^T\lambda[t]=1 \quad (3)$$
In practice, $\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g)$ is the weighted empirical prediction error, and $\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(z|Y=y)\|\hat{\mathcal{S}}_t(z|Y=y))$ is approximated by the dynamic feature centroid distance $\sum_y \bar{\mathcal{T}}(y)\|\mathbf{C}^y_t - \mathbf{C}^y\|_2$ (see Appendix L for details). Thus, solving for $\lambda$ is a standard convex optimization problem.
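As with $\hat{\alpha}_t$, Eq. (3) is a small convex program; a minimal CVXPY sketch (the function name and toy inputs are our own assumptions) could look as follows:

```python
import cvxpy as cp
import numpy as np

def estimate_lambda(risk, dist, beta, C0=1.0, C1=1.0):
    """Solve Eq. (3) for the task-relation simplex lambda.
    risk[t]: alpha-weighted empirical risk of source t;
    dist[t]: weighted class-centroid distance approximating the conditional W1 term;
    beta[t]: sample-frequency ratio N_{S_t}/N."""
    T = len(risk)
    lam = cp.Variable(T, nonneg=True)
    obj = risk @ lam + C0 * dist @ lam \
          + C1 * cp.norm(cp.multiply(lam, 1.0 / np.sqrt(beta)), 2)  # sqrt(sum lam^2/beta)
    prob = cp.Problem(cp.Minimize(obj), [cp.sum(lam) == 1])
    prob.solve()
    return lam.value
```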
4.4 ALGORITHM DESCRIPTION
Based on the aforementioned components, we present the description of WADN (Algorithm 1) in the unsupervised scenarios (UDA and partial DA), which iteratively updates $(g,h)$, $\hat{\alpha}_t$, and $\lambda$.
Algorithm 1 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$
Ensure: Label distribution ratios $\hat{\alpha}_t$, task relation simplex $\lambda$. Feature function $g$, classifier $h$, domain critic functions $d_1,\dots,d_T$, class centroids $\mathbf{C}^y_t$ for each source and $\mathbf{C}^y$ for the target ($\forall t\in[1,T], y\in\mathcal{Y}$).
1: /// DNN parameter training stage (fixed $\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,(\mathbf{x}_{\mathcal{T}})\sim\hat{\mathcal{T}}$ do
3: Predict target pseudo-labels $\bar{\mathbf{y}}_{\mathcal{T}} = \mathrm{argmax}_y h(g(\mathbf{x}_{\mathcal{T}}),y)$
4: Compute the (un-normalized) source confusion matrix for each batch: $\mathbf{C}_{\hat{S}_t} = \#[\mathrm{argmax}_{y'}h(z,y')=y,\ Y=k]$ $(t=1,\dots,T)$
5: Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ for each source and $\bar{\mathbf{C}}^y$ for the target.
6: Update the source/target class centroids by moving average ($\epsilon_1 = 0.7$):
7: Source class centroid update: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
8: Target class centroid update: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
9: Update $g, h, d_1,\dots,d_T$ (SGD and gradient reversal) by solving:
$$\min_{g,h}\max_{d_1,\dots,d_T}\underbrace{\sum_t \lambda[t]\hat{R}^{\hat{\alpha}_t}_{\mathcal{S}_t}(h,g)}_{\text{Classification Loss}} + \underbrace{\epsilon\,C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\bar{\mathcal{T}}(y)}\|\mathbf{C}^y_t - \mathbf{C}^y\|_2}_{\text{Explicit Conditional Loss}} + \underbrace{(1-\epsilon)\,C_0\sum_t \lambda[t]\big[\mathbb{E}_{z\sim\hat{\mathcal{S}}_t(z)}\bar{\alpha}_t(z)\,d_t(z) - \mathbb{E}_{z\sim\hat{\mathcal{T}}(z)} d_t(z)\big]}_{\text{Implicit Conditional Loss}}$$
10: end for
11: /// Estimation of $\hat{\alpha}_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf{C}_{\hat{S}_t} = \hat{\mathcal{S}}_t[\mathrm{argmax}_{y'}h(z,y')=y,\ Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}_{t=1}^T$) as in Sec. 4.2 (unsupervised DA or partial UDA).
14: Update $\alpha_t$ by moving average: $\alpha_t = \epsilon_1\alpha_t + (1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) as in Sec. 4.3, and update $\lambda$ by moving average: $\lambda = \epsilon_1\lambda + (1-\epsilon_1)\lambda'$
When updating $\lambda$ and $\alpha_t$, we use the package CVXPY to optimize the two standard convex losses after each training epoch, then update the parameters using the moving average. For WADN with target label information, we do not require pseudo-labels and directly compute $\hat{\alpha}_t$, as shown in Appendix L.
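To illustrate how step 9 of Algorithm 1 could assemble the three terms of the objective, here is a heavily simplified PyTorch sketch; all names and shapes are our own, the critic outputs are assumed precomputed, and the adversarial max over $d_1,\dots,d_T$ is assumed to be handled by a gradient-reversal layer:

```python
import torch
import torch.nn.functional as F

def wadn_batch_objective(logits_s, y_s, alpha_hat, lam, C_src, C_tgt,
                         T_bar, d_src, d_tgt, eps=0.5, C0=1.0):
    """Sketch of the Eq. (6) objective for one mini-batch.
    logits_s[t]: [N_t, |Y|] source logits; y_s[t]: [N_t] labels;
    alpha_hat[t]: [|Y|] label ratios; C_src[t], C_tgt: [|Y|, dim] centroids;
    T_bar: [|Y|] predicted target label distribution;
    d_src[t], d_tgt[t]: critic outputs on source / target features."""
    total = 0.0
    for t in range(len(lam)):
        ce = F.cross_entropy(logits_s[t], y_s[t], reduction='none')
        cls = (alpha_hat[t][y_s[t]] * ce).mean()                # alpha-weighted risk
        expl = (T_bar * (C_src[t] - C_tgt).norm(dim=1)).sum()   # centroid matching
        impl = (alpha_hat[t][y_s[t]] * d_src[t]).mean() - d_tgt[t].mean()
        total = total + lam[t] * (cls + eps * C0 * expl + (1 - eps) * C0 * impl)
    return total
```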
5 EXPERIMENTS
In this section, we compare the proposed approach with several baselines on popular tasks. For all scenarios, the following baselines are evaluated: (I) Source: applies only labelled source data to train the model. (II) DANN (Ganin et al., 2016): we follow the protocol of Wen et al. (2020) and merge all source datasets into a global source domain. (III) MDAN (Zhao et al., 2018); (IV) MDMN (Li et al., 2018b); (V) M3SDA (Peng et al., 2019), which adopts maximizing classifier discrepancy (Saito et al., 2018); and (VI) DARN (Wen et al., 2020). For conventional multi-source transfer and partial unsupervised multi-source DA, we additionally compare against scenario-specific baselines. All baselines are re-implemented with the same network structure for a fair comparison. Detailed network structures, hyper-parameter settings, and training details are given in Appendix M.
We evaluate performance on three datasets: (I) Amazon Review (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive and negative product reviews. We follow the common data pre-processing strategy of Chen et al. (2012) to form 5000-dimensional bag-of-words features. Note that the label distribution in the original dataset is uniform; to enhance the benefits of the proposed approach, we create a label-distribution-drifted task by randomly dropping 50% of the negative reviews in all sources while keeping the target identical (shown in Fig. 3(a)). (II) Digits. It consists of four digit recognition datasets: MNIST, USPS (Hull, 1994), SVHN (Netzer et al., 2011), and Synth (Ganin et al., 2016). We also create a slight label distribution drift in the sources by randomly dropping 50% of the samples of digits 5-9 while keeping the target identical (shown in Fig. 3(b)). (III) Office-Home (Venkateswara et al., 2017). It contains 65 classes across four domains: Art, Clipart, Product, and Real-World. We use a ResNet50 (He et al., 2016) pretrained on ImageNet in PyTorch as the base network for feature learning, with an MLP on top for classification. The label distributions of the four domains differ, and we did not manually create a label drift (shown in Fig. 3(c)).
5.1 UNSUPERVISED MULTI-SOURCE DA
For unsupervised multi-source DA, we evaluate the proposed approach on all three datasets, using a hyper-parameter selection strategy similar to DANN (Ganin et al., 2016). All reported results are averaged over five runs; detailed experimental settings are given in Appendix M. The empirical results are shown in Tabs. 7, 2, and 3. Since we did not change the target label distribution throughout the experiments, we still use target accuracy as the metric. We report the means and standard deviations for each approach; the best approaches, based on a two-sided Wilcoxon signed-rank test (significance level p = 0.05), are shown in bold. The empirical results reveal significantly better performance (≈ 3%) across the datasets. To understand the working principles of WADN, we evaluate the performance under different levels of source label shift on the Amazon Review dataset (Fig. 1(a)); the results show strong practical benefits of WADN under gradually larger label shift. In addition, we visualize the task relations on Digits (Fig. 1(b)) and observe a non-uniform $\lambda$, which highlights the importance of properly choosing the most related sources rather than simply merging all the data. E.g., when the target domain is SVHN, WADN mainly leverages information from SYNTH, since the two are more semantically similar and MNIST does not help much for SVHN (as observed by Ganin et al. (2016)). Additional analysis and results can be found in Appendix O.
5.2 MULTI-SOURCE TRANSFER LEARNING WITH LIMITED TARGET SAMPLES
We adopt Amazon Review and Digits, which have been widely used, for multi-source transfer learning with limited target samples. In these experiments we still use shifted sources. We randomly sample only 10% labeled target samples (relative to the target dataset in unsupervised DA) as the training set and use the remaining 90% as the unseen target test set (see Appendix M for details). We adopt the same hyper-parameters and training strategies as in unsupervised DA. We additionally include the recent baselines RLUS (Konstantinov & Lampert, 2019) and MME (Saito et al., 2019), which also consider transfer learning with labeled target data. The results, reported in Tabs. 4 and 5, again indicate strong empirical benefits. To show the effectiveness of WADN, we select various portions of labelled target samples (1%-10%); the results on the USPS dataset in Fig. 1(c) show consistently better performance than the baselines, even with few target samples.
5.3 PARTIAL UNSUPERVISED MULTI-SOURCE DA
In this scenario we adopt the Office-Home dataset, as it contains a large number (65) of classes. We do not change the source domains and randomly choose 35 classes for the target. We evaluate all baselines on the same selected classes and repeat 5 times; all reported results are averaged over 3 different sub-class selections (15 runs in total), shown in Tab. 6 (see Appendix M for details). We additionally compare against PADA (Cao et al., 2018) by merging all sources and applying the one-to-one partial DA algorithm. We adopt the same hyper-parameters and training strategies as in the unsupervised DA scenario. The reported results are significantly better than current multi-source DA and one-to-one partial DA approaches, which verifies the benefits of WADN: properly estimating $\hat{\alpha}_t$ and assigning a proper $\lambda$ to each source. Moreover, when we vary the number of selected classes (Fig. 2(a)), WADN still shows consistently better results by a large margin, which indicates the importance of considering $\hat{\alpha}_t$ and $\lambda$; in contrast, DANN shows unstable results with fewer selected classes (see Appendix P for details). Besides, WADN gives a good estimate of the label distribution ratio (Fig. 2(b)) and correctly detects the non-overlapping classes, which indicates its good explainability.
6 CONCLUSION In this paper, we proposed a new theoretical principled algorithm WADN (Wasserstein Aggregation Domain Network) to solve the multi-source transfer learning problem under target shift. WADN provides a unified solution for various deep multi-source transfer scenarios: learning with limited target data, unsupervised DA, and partial unsupervised DA. We evaluate the proposed method by extensive experiments and show its strong empirical results. A ADDITIONAL EMPIRICAL RESULTS B ADDITIONAL RELATED WORK Multi-source transfer learning Practice has been proposed from various prospective. The key idea is to estimate the importance of different sources and then select the most related ones, to mitigate the influence of negative transfer. In the multi-source unsupervised DA, (Sankaranarayanan et al., 2018; Balaji et al., 2019; Pei et al., 2018; Zhao et al., 2019; Zhu et al., 2019; Zhao et al., 2020; 2019; Stojanov et al., 2019; Li et al., 2019b; Wang et al., 2019b; Lin et al., 2020) proposed different practical strategies in the classification, regression and semantic segmentation problems. In the presence of target labels, Hoffman et al. (2012); Tan et al. (2013); Wei et al. (2017); Yao & Doretto (2010); Konstantinov & Lampert (2019) used generalized linear model to learn the target. Christodoulidis et al. (2016); Li et al. (2019a); Chen et al. (2019) focused on deep learning approaches and Lee et al. (2019) proposed an ad-hoc strategy to combine to sources in the few-shot target domains. These ideas are generally data-driven approaches and do not analyze the why the proposed practice can control the generalization error. Label-Partial Transfer Learning Label-Partial can be viewed as a special case of the label-shift. 1 Most existing works focus on one-to-one partial transfer learning (Zhang et al., 2018; Chen et al., 2020; Bucci et al., 2019; Cao et al., 2019) by adopting the re-weighting training approach without a formal understanding. In our paper, we first rigorously analyzed this common practice and adopt the label distribution ratio as its weights, which provides a principled approach in this scenario. B.1 OTHER SCENARIOS RELATED TO MULTI-SOURCE TRANSFER LEARNING Domain Generalization The domain generalization (DG) resembles multi-source transfer but aims at different goals. A common setting in DG is to learn multiple source but directly predict on the unseen target domain. The conventional DG approaches generally learn a distribution invariant features (Balaji et al., 2018; Saenko et al., 2010; Motiian et al., 2017; Ilse et al., 2019) or conditional distribution invariant features (Li et al., 2018a; Akuzawa et al., 2019). However, our theoretical results reveal that in the presence of label shift (i.e αt(y) 6= 1) and outlier tasks then learning conditional or marginal invariant features can not guarantee a small target risk. Our theoretical result enables a formal understanding about the inherent difficulty in DG problems. Few-Shot Learning The few-shot learning (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018) can be viewed as a very specific scenario of multi-source transfer learning. We would like to point out the differences between the few-shot learning and our paper. (1) Few-shot learning generally involves a very large set of source domains T 1 and each domain consists a modest number of observations NSt . In our paper, we are interested in the a modest number of source domains T but each source domain including a sufficient large number of observations (NSt 1). 
(2) In the target domain, the few-shot setting generally used K-samples (K is very small) for each class for the fine-tuning. We would like to point out this setting generally violates our theoretical assumption. In 1Since supp(T (y)) ⊆ supp(St(y)) then we naturally have T (y) 6= St(y). our paper, we assume the target data is i.i.d. sampled fromD(x, y). It is equivalently viewed that we first i.i.d. sample y ∼ D(y), then i.i.d. sample x ∼ D(x|y). Generally the D(y) is non-uniform, thus few-shot setting are generally not applicable for our theoretical assumptions. Multi-Task Learning The goal of multi-task learning (Zhang & Yang, 2017) aims to improve the prediction performance of all the tasks. In our paper, we aim at controlling the prediction risk of a specified target domain. We also notice some practical techniques are common such as the shared parameter (Zhang & Yeung, 2012), shared representation (Ruder, 2017), etc. C ADDITIONAL FIGURES RELATED TO THE MAIN PAPER D TABLE OF NOTATION E PROOF OF THEOREM 1 Proof idea Theorem 1 consists three steps in the proof: Lemma 2. If the prediction loss is assumed as L-Lipschitz and the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)−h(x2, y)‖2 ≤ K‖x1−x2‖2. Then the target risk can be upper bounded by: RT (h) ≤ ∑ t λ[t]RαtS (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) (4) Proof. The target risk can be expressed as: RT (h(x, y)) = E(x,y)∼T `(h(x, y)) = Ey∼T (y)Ex∼T (x|y)`(h(x, y)) By denoting α(y) = T (y)S(y) , then we have: Ey∼T (y)Ey∼T (x|y)`(h(x, y)) = Ey∼S(y)α(y)Ex∼T (x|y)`(h(x, y)) Then we aim to upper bound Ex∼T (x|y)`(h(x, y)). For any fixed y, Ex∼T (x|y)`(h(x, y))− Ex∼S(x|y)`(h(x, y)) ≤ | ∫ x∈X `(h(x, y))d(T (x|y)− S(x|y))| Then according to the Kantorovich-Rubinstein duality, for any distribution coupling γ ∈ Π(T (x|y),S(x|y)), then we have: = inf γ | ∫ X×X `(h(xp, y))− `(h(xq, y))dγ(xp, xq)| ≤ inf γ ∫ X×X |`(h(xp, y))− `(h(xq, y))|dγ(xp, xq) ≤ L inf γ ∫ X×X |h(xp, y))− h(xq, y)|dγ(xp, xq) ≤ LK inf γ ∫ X×X ‖xp − xq‖2dγ(xp, xq) = LKW1(T (x|Y = y)‖S(x|Y = y)) The first inequality is obvious; and the second inequality comes from the assumption that ` is LLipschitz; the third inequality comes from the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)− h(x2, y)‖2 ≤ K‖x1 − x2‖2. Then we have: RT (h) ≤ Ey∼S(y)α(y)[Ex∼S(x|y)`(h(x, y)) + LKW1(T (x|y)‖S(x|y))] = E(x,y)∼Sα(y)`(h(x, y)) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) = RαS(h) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) Supposing each source St we assign the weight λ[t] and label distribution ratio αt(y) = T (y)St(y) , then by combining this T source target pair, we have: RT (h) ≤ ∑ t λ[t]RαtSt (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖St(x|Y = y)) Then we will prove Theorem 1 from this result, we will derive the non-asymptotic bound, estimated from the finite sample observations. Supposing the empirical label ratio value is α̂t, then for any simplex λ we can prove the high-probability bound. E.1 BOUNDING THE EMPIRICAL AND EXPECTED PREDICTION RISK Proof. We first bound the first term, which can be upper bounded as: sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂α̂tSt (h)| ≤ sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂αtSt (h)|︸ ︷︷ ︸ (I) + sup h | ∑ t λ[t]R̂αtSt (h)− ∑ t λ[t]R̂α̂tSt (h)|︸ ︷︷ ︸ (II) Bounding term (I) According to the McDiarmid inequality, each item changes at most | 2λ[t]αt(y)`NSt |. 
E.1 BOUNDING THE EMPIRICAL AND EXPECTED PREDICTION RISK
Proof. We first bound the prediction risk term, which can be decomposed as:
$$\sup_h\Big|\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h)-\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)\Big|\le\underbrace{\sup_h\Big|\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h)-\sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h)\Big|}_{(I)}+\underbrace{\sup_h\Big|\sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h)-\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)\Big|}_{(II)}$$

Bounding term (I) According to McDiarmid's inequality, changing one sample changes the quantity by at most $\big|\frac{2\lambda[t]\alpha_t(y)\ell}{N_{S_t}}\big|$. Then we have:
$$P\big((I)-\mathbb{E}(I)\ge t\big)\le\exp\Big(\frac{-2t^2}{\sum_{t=1}^T\frac{4}{\beta_t N}\lambda[t]^2\alpha_t(y)^2\ell^2}\Big)=\delta$$
By substituting $\delta$, with high probability $1-\delta$ we have:
$$(I)\le\mathbb{E}(I)+L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$
where $L_{\max}=\sup_{h\in\mathcal H}\ell(h)$, $N=\sum_{t=1}^T N_{S_t}$ is the total number of source observations, $\beta_t=\frac{N_{S_t}}{N}$ is the frequency ratio of each source, and $d^{\sup}_\infty=\max_{t=1,\dots,T}d_\infty(T(y)\|S_t(y))=\max_{t=1,\dots,T}\max_{y\in[1,|\mathcal Y|]}\alpha_t(y)$ is the maximum true label shift value (a constant).

Bounding $\mathbb{E}(I)$ The expectation term can be upper bounded in the form of a Rademacher complexity:
$$\mathbb{E}(I)\le 2\,\mathbb{E}_\sigma\mathbb{E}_{\hat S_1^T}\sup_h\sum_{t=1}^T\lambda[t]\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\sigma\,\alpha_t(y_t)\,\ell(h(x_t,y_t))\le 2\sum_t\lambda[t]\,\mathbb{E}_\sigma\mathbb{E}_{\hat S_t}\sup_h\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\sigma\,\alpha_t(y_t)\,\ell(h(x_t,y_t))$$
$$\le 2\sup_t\mathbb{E}_\sigma\mathbb{E}_{\hat S_t}\sup_h\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\sigma\,\alpha_t(y_t)\,\ell(h(x_t,y_t))=\sup_t 2\mathcal R_t(\ell,\mathcal H)=2\bar{\mathcal R}(\ell,\mathcal H)$$
where $\bar{\mathcal R}(\ell,\mathcal H)=\sup_t\mathcal R_t(\ell,\mathcal H)=\sup_t\mathbb{E}_{\hat S_t,\sigma}\sup_{h\in\mathcal H}\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\sigma\,\alpha_t(y_t)\,\ell(h(x_t,y_t))$ represents the Rademacher complexity w.r.t. the prediction loss $\ell$, the hypothesis $h$, and the true label distribution ratio $\alpha_t$. Therefore, with high probability $1-\delta$, we have:
$$\sup_h\Big|\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h)-\sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h)\Big|\le 2\bar{\mathcal R}(\ell,\mathcal H)+L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$

Bounding term (II) For all hypotheses $h$, we have:
$$\Big|\sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h)-\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)\Big|=\Big|\sum_t\lambda[t]\frac{1}{N_{S_t}}\sum_{i=1}^{N_{S_t}}\big(\alpha_t(y^{(i)})-\hat\alpha_t(y^{(i)})\big)\ell(h)\Big|=\sum_t\lambda[t]\frac{1}{N_{S_t}}\Big|\sum_{y=1}^{|\mathcal Y|}\big(\alpha_t(Y=y)-\hat\alpha_t(Y=y)\big)\bar\ell(Y=y)\Big|$$
where $\bar\ell(Y=y)=\sum_i^{N_{S_t}}\ell(h(x_i,y_i=y))$ represents the cumulative error conditioned on the given label $Y=y$. According to Hölder's inequality, we have:
$$\sum_t\lambda[t]\frac{1}{N_{S_t}}\Big|\sum_y\big(\alpha_t(Y=y)-\hat\alpha_t(Y=y)\big)\bar\ell(Y=y)\Big|\le\sum_t\lambda[t]\frac{1}{N_{S_t}}\|\alpha_t-\hat\alpha_t\|_2\,\|\bar\ell(Y=y)\|_2\le L_{\max}\sum_t\lambda[t]\|\alpha_t-\hat\alpha_t\|_2\le L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2$$
Therefore, $\forall h\in\mathcal H$, with high probability $1-\delta$ we have:
$$\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h)\le\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)+2\bar{\mathcal R}(\ell,\mathcal H)+L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}+L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2$$

E.2 BOUNDING THE EMPIRICAL WASSERSTEIN DISTANCE
We then derive the sample complexity between the empirical and true distributions, decomposed into the following two parts. For any $t$, we have:
$$\mathbb{E}_{y\sim T(y)}W_1\big(T(x|Y=y)\|S_t(x|Y=y)\big)-\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)$$
$$\le\underbrace{\mathbb{E}_{y\sim T(y)}W_1\big(T(x|Y=y)\|S_t(x|Y=y)\big)-\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)}_{(I)}+\underbrace{\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)-\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)}_{(II)}$$

Bounding (I) We have:
$$\mathbb{E}_{y\sim T(y)}W_1\big(T(x|y)\|S_t(x|y)\big)-\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|y)\|\hat S_t(x|y)\big)=\sum_y T(y)\Big(W_1\big(T(x|y)\|S_t(x|y)\big)-W_1\big(\hat T(x|y)\|\hat S_t(x|y)\big)\Big)$$
$$\le\Big|\sum_y T(y)\Big|\sup_y\Big(W_1\big(T(x|y)\|S_t(x|y)\big)-W_1\big(\hat T(x|y)\|\hat S_t(x|y)\big)\Big)=\sup_y\Big(W_1\big(T(x|y)\|S_t(x|y)\big)-W_1\big(\hat T(x|y)\|\hat S_t(x|y)\big)\Big)$$
$$\le\sup_y\Big[W_1\big(S_t(x|y)\|\hat S_t(x|y)\big)+W_1\big(\hat S_t(x|y)\|\hat T(x|y)\big)+W_1\big(\hat T(x|y)\|T(x|y)\big)-W_1\big(\hat T(x|y)\|\hat S_t(x|y)\big)\Big]$$
$$=\sup_y W_1\big(S_t(x|y)\|\hat S_t(x|y)\big)+W_1\big(\hat T(x|y)\|T(x|y)\big)$$
The first inequality holds by Hölder's inequality; the second uses the triangle inequality of the Wasserstein distance, $W_1(P\|Q)\le W_1(P\|P_1)+W_1(P_1\|P_2)+W_1(P_2\|Q)$. According to the convergence behavior of the Wasserstein distance (Weed et al., 2019), with high probability $\ge 1-2\delta$ we have:
$$W_1\big(S_t(x|Y=y)\|\hat S_t(x|Y=y)\big)+W_1\big(\hat T(x|Y=y)\|T(x|Y=y)\big)\le\kappa(\delta,N^y_{S_t},N^y_T)$$
where $\kappa(\delta,N^y_{S_t},N^y_T)=C_{t,y}(N^y_{S_t})^{-s_{t,y}}+C_y(N^y_T)^{-s_y}+\sqrt{\frac{1}{2}\log(\frac{2}{\delta})}\Big(\sqrt{\frac{1}{N^y_{S_t}}}+\sqrt{\frac{1}{N^y_T}}\Big)$, $N^y_{S_t}$ is the number of samples with $Y=y$ in source $t$, $N^y_T$ is the number of samples with $Y=y$ in the target, and $C_{t,y}$, $C_y$ and $s_{t,y}>2$, $s_y>2$ are positive constants from the concentration inequality. This characterizes the convergence between the empirical and true Wasserstein distances.
If we adopt the union bound over all labels (setting $\delta\leftarrow\delta/|\mathcal Y|$), then with high probability $\ge 1-2\delta$ we have:
$$\sup_y W_1\big(S_t(x|Y=y)\|\hat S_t(x|Y=y)\big)+W_1\big(\hat T(x|Y=y)\|T(x|Y=y)\big)\le\kappa(\delta,N^y_{S_t},N^y_T)$$
where $\kappa(\delta,N^y_{S_t},N^y_T)=C_{t,y}(N^y_{S_t})^{-s_{t,y}}+C_y(N^y_T)^{-s_y}+\sqrt{\frac{1}{2}\log(\frac{2|\mathcal Y|}{\delta})}\Big(\sqrt{\frac{1}{N^y_{S_t}}}+\sqrt{\frac{1}{N^y_T}}\Big)$.
Again adopting the union bound over all tasks (setting $\delta\leftarrow\delta/T$), with high probability $\ge 1-2\delta$ we have:
$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(T(x|Y=y)\|S_t(x|Y=y)\big)-\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)\le\sup_t\kappa(\delta,N^y_{S_t},N^y_T)$$
where $\kappa(\delta,N^y_{S_t},N^y_T)=C_{t,y}(N^y_{S_t})^{-s_{t,y}}+C_y(N^y_T)^{-s_y}+\sqrt{\frac{1}{2}\log(\frac{2T|\mathcal Y|}{\delta})}\Big(\sqrt{\frac{1}{N^y_{S_t}}}+\sqrt{\frac{1}{N^y_T}}\Big)$.

Bounding (II) We can bound the second term as:
$$\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)-\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)\le\sup_y W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)\Big|\sum_y T(y)-\hat T(y)\Big|\le C^t_{\max}\Big|\sum_y T(y)-\hat T(y)\Big|$$
where $C^t_{\max}=\sup_y W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)$ is a positive, bounded constant. We then need to bound $|\sum_y T(y)-\hat T(y)|$; adopting McDiarmid's inequality, with high probability $1-\delta$ we have:
$$\Big|\sum_y T(y)-\hat T(y)\Big|\le\mathbb{E}_{\hat T}\Big|\sum_y T(y)-\hat T(y)\Big|+\sqrt{\frac{\log(1/\delta)}{2N_T}}=2\,\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\hat T(y)+\sqrt{\frac{\log(1/\delta)}{2N_T}}$$
We then bound $\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\hat T(y)$. Using the properties of the Rademacher complexity [Lemma 26.11, (Shalev-Shwartz & Ben-David, 2014)] and noticing that $\hat T(y)$ is a probability simplex, we have:
$$\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\hat T(y)\le\sqrt{\frac{2\log(2|\mathcal Y|)}{N_T}}$$
Then:
$$\Big|\sum_y T(y)-\hat T(y)\Big|\le\sqrt{\frac{2\log(2|\mathcal Y|)}{N_T}}+\sqrt{\frac{\log(1/\delta)}{2N_T}}$$
Using the union bound and setting $\delta\leftarrow\delta/T$, with high probability $\ge 1-\delta$ and for any simplex $\lambda$, we have:
$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)\le\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)+C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal Y|)}{N_T}}+\sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$
where $C_{\max}=\sup_t C^t_{\max}$. Combining the above, we can derive the PAC-learning bound estimated from finite samples (with high probability $1-4\delta$):
$$R_T(h)\le\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)+LK\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\|\hat S_t(x|Y=y)\big)+L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$
$$+\,2\bar{\mathcal R}(\ell,\mathcal H)+L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2+\sup_t\kappa(\delta,N^y_{S_t},N^y_T)+C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal Y|)}{N_T}}+\sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$
We then denote $\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)=2\bar{\mathcal R}(\ell,\mathcal H)+\sup_t\kappa(\delta,N^y_{S_t},N^y_T)+C_{\max}\big(\sqrt{\frac{2\log(2|\mathcal Y|)}{N_T}}+\sqrt{\frac{\log(T/\delta)}{2N_T}}\big)$ as the convergence-rate function, which decreases as $N_{S_1},\dots,N_{S_T},N_T$ grow. Besides, $\bar{\mathcal R}(\ell,\mathcal H)=\sup_t\mathcal R_t(\ell,\mathcal H)$ is the re-weighted Rademacher complexity. Given a fixed hypothesis class with finite VC dimension (if the hypothesis is a neural network, the Rademacher complexity can still be bounded analogously), it can be shown that $\bar{\mathcal R}(\ell,\mathcal H)=O\big(\sqrt{1/\min_t N_{S_t}}\big)$; see, e.g., (Shalev-Shwartz & Ben-David, 2014).

F PROOF OF THEOREM 2
We first recall the stochastic feature representation $g:\mathcal X\to\mathcal Z$, the scoring hypothesis $h:\mathcal Z\times\mathcal Y\to\mathbb R$, and the prediction loss $\ell:\mathbb R\to\mathbb R$. Note that this definition differs from conventional binary classification with a binary output; it is more suitable for the multi-class scenario and the cross-entropy loss (Hoffman et al., 2018a). For example, if we define $\ell=-\log(\cdot)$ and let $h(z,y)\in(0,1)$ be a scalar score output, then $\ell(h(z,y))$ can be viewed as the cross-entropy loss of a neural network.

Proof. The marginal and conditional distributions w.r.t. the latent variable $Z$ induced by $g$ can be written as:
$$S(z)=\int_x g(z|x)\,S(x)\,dx\qquad S(z|y)=\int_x g(z|x)\,S(x|Y=y)\,dx$$
In the multi-class classification problem, we additionally define the following joint distributions:
$$\mu_k(z)=S(Y=k,z)=S(Y=k)\,S(z|Y=k)\qquad\pi_k(z)=T(Y=k,z)=T(Y=k)\,T(z|Y=k)$$
Based on (Nguyen et al., 2009), since $g(z|x)$ is a stochastic representation learning function, the loss conditioned on a fixed point $(x,y)$ w.r.t. $h$ and $g$ is $\mathbb{E}_{z\sim g(z|x)}\ell(h(z,y))$.
Taking the expectation over $S(x,y)$, we have:
$$R_S(h,g)=\mathbb{E}_{(x,y)\sim S(x,y)}\mathbb{E}_{z\sim g(z|x)}\ell(h(z,y))=\sum_{k=1}^{|\mathcal Y|}S(y=k)\int_x S(x|Y=k)\int_z g(z|x)\,\ell(h(z,y=k))\,dz\,dx$$
$$=\sum_{k=1}^{|\mathcal Y|}S(y=k)\int_z\Big[\int_x S(x|Y=k)\,g(z|x)\,dx\Big]\ell(h(z,y=k))\,dz=\sum_{k=1}^{|\mathcal Y|}S(y=k)\int_z S(z|Y=k)\,\ell(h(z,y=k))\,dz$$
$$=\sum_{k=1}^{|\mathcal Y|}\int_z S(z,Y=k)\,\ell(h(z,y=k))\,dz=\sum_{k=1}^{|\mathcal Y|}\int_z\mu_k(z)\,\ell(h(z,y=k))\,dz$$
An alternative way to understand this step is through the underlying graphical model, in which the score $s$ is deterministic given $(z,y)$: the expected loss over all random variables is $\int\mathbb P(x,y)\,\mathbb P(z|x)\,\mathbb P(s|z,y)\,\ell(s)\,d(x,y,z,s)$; since the score is determined by $h$ we have $\mathbb P(s|z,y)=1$, and with $\mathbb P(z|x)=g(z|x)$ and $\mathbb P(x,y)=S(x,y)$ the loss reduces to $\mathbb{E}_{S(x,y)}\mathbb{E}_{g(z|x)}\ell(h(z,y))$.

Intuitively, the expected loss w.r.t. the joint distribution $S$ decomposes into the label distribution $S(y)$ (weighting the labels) and the conditional distribution $S(\cdot|y)$ (a real-valued conditional loss). The expected risks on $S$ and $T$ can thus be expressed as:
$$R_S(h,g)=\sum_{k=1}^{|\mathcal Y|}\int_z\ell(h(z,y=k))\,\mu_k(z)\,dz\qquad R_T(h,g)=\sum_{k=1}^{|\mathcal Y|}\int_z\ell(h(z,y=k))\,\pi_k(z)\,dz$$
By denoting $\alpha(y)=\frac{T(y)}{S(y)}$, we have the $\alpha$-weighted loss:
$$R^\alpha_S(h,g)=T(Y=1)\int_z\ell(h(z,y=1))\,S(z|Y=1)\,dz+T(Y=2)\int_z\ell(h(z,y=2))\,S(z|Y=2)\,dz+\dots+T(Y=k)\int_z\ell(h(z,y=k))\,S(z|Y=k)\,dz$$
Then we have:
$$R_T(h,g)-R^\alpha_S(h,g)\le\sum_k T(Y=k)\int_z\ell(h(z,y=k))\,d\big|S(z|Y=k)-T(z|Y=k)\big|$$
Under the same assumptions, the loss $\ell(h(z,Y=k))$ is $KL$-Lipschitz w.r.t. the cost $\|\cdot\|_2$ (given a fixed $k$). Therefore, adopting the same proof strategy (Kantorovich-Rubinstein duality) as in Lemma 2, we have:
$$R_T(h,g)-R^\alpha_S(h,g)\le KL\,T(Y=1)\,W_1\big(S(z|Y=1)\|T(z|Y=1)\big)+\dots+KL\,T(Y=k)\,W_1\big(S(z|Y=k)\|T(z|Y=k)\big)=KL\,\mathbb{E}_{y\sim T(y)}W_1\big(S(z|Y=y)\|T(z|Y=y)\big)$$
Therefore:
$$R_T(h,g)\le R^\alpha_S(h,g)+LK\,\mathbb{E}_{y\sim T(y)}W_1\big(S(z|Y=y)\|T(z|Y=y)\big)$$
Based on this result, for each $t=1,\dots,T$, setting $S=S_t$ and $\alpha(y)=\alpha_t(y)=T(y)/S_t(y)$:
$$\lambda[t]\,R_T(h,g)\le\lambda[t]\,R^{\alpha_t}_{S_t}(h,g)+LK\,\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(S_t(z|Y=y)\|T(z|Y=y)\big)$$
Summing over $t=1,\dots,T$, we have:
$$R_T(h,g)\le\sum_{t=1}^T\lambda[t]\,R^{\alpha_t}_{S_t}(h,g)+LK\sum_{t=1}^T\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(S_t(z|Y=y)\|T(z|Y=y)\big)$$

G APPROXIMATING THE W1 DISTANCE
According to Jensen's inequality, we have:
$$W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)\le\sqrt{\big[W_2\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)\big]^2}$$
Supposing $\hat S_t(z|Y=y)\approx\mathcal N(\mathbf C^y_t,\Sigma)$ and $\hat T(z|Y=y)\approx\mathcal N(\mathbf C^y,\Sigma)$, we have:
$$\big[W_2\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)\big]^2=\|\mathbf C^y_t-\mathbf C^y\|_2^2+\mathrm{Trace}\big(2\Sigma-2(\Sigma\Sigma)^{1/2}\big)=\|\mathbf C^y_t-\mathbf C^y\|_2^2$$
We would like to point out that assuming an identical covariance matrix makes the matching more computationally efficient, which is advantageous and reasonable in the deep learning regime: we adopt mini-batches (ranging from 20 to 128) for optimizing the network parameters, and within each mini-batch the number of samples per class is small, so the empirical covariance matrix would be strongly biased away from the ground truth and would induce a much higher optimization complexity. On the contrary, the empirical mean is unbiased and computationally efficient: we can simply use a moving average to update the estimated mean value (with an unbiased estimator). The empirical results verify the effectiveness of this idea; a minimal sketch of the resulting centroid matching is given below.
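The following is a minimal PyTorch-style sketch of this centroid-based feature-mean matching (our own illustration under the shared-covariance Gaussian assumption above; tensor shapes, function names, and the moving-average coefficient are assumptions, not the authors' released code):

import torch

def batch_class_centroids(z, y, num_classes):
    # per-class feature means of a mini-batch; z: (B, d) latents, y: (B,) int64 labels
    centroids = torch.zeros(num_classes, z.size(1), device=z.device)
    counts = torch.zeros(num_classes, device=z.device)
    centroids.index_add_(0, y, z)
    counts.index_add_(0, y, torch.ones(y.size(0), device=z.device))
    return centroids / counts.clamp(min=1.0).unsqueeze(1)

def explicit_conditional_loss(C_src, C_tgt, T_y):
    # sum_y T(y) * ||C_t^y - C^y||_2, the W1 surrogate derived in this appendix
    return (T_y * (C_src - C_tgt).norm(dim=1)).sum()

def ema_update(C_running, C_batch, eps1=0.7):
    # moving-average centroid update, as in Algorithms 2 and 3 (eps1 = 0.7)
    return eps1 * C_running + (1.0 - eps1) * C_batch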
H PROOF OF LEMMA 1
For each source $S_t$, introducing the duality of the Wasserstein-1 distance, for $y\in\mathcal Y$ we have:
$$W_1\big(S_t(z|y)\,\|\,T(z|y)\big)=\sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim S_t(z|y)}d(z)-\mathbb{E}_{z\sim T(z|y)}d(z)=\sup_{\|d\|_L\le 1}\sum_z S_t(z|y)\,d(z)-\sum_z T(z|y)\,d(z)$$
$$=\frac{1}{T(y)}\sup_{\|d\|_L\le 1}\frac{T(y)}{S_t(y)}\sum_z S_t(z,y)\,d(z)-\sum_z T(z,y)\,d(z)$$
Then, defining $\bar\alpha_t(z)=1_{\{(z,y)\sim S_t\}}\frac{T(Y=y)}{S_t(Y=y)}=1_{\{(z,y)\sim S_t\}}\,\alpha_t(Y=y)$, we see that for each pair $(z,y)$ sampled from the same distribution, $\bar\alpha_t(Z=z)=\alpha_t(Y=y)$. Then we have:
$$\sum_y T(y)\,W_1\big(S_t(z|y)\,\|\,T(z|y)\big)=\sum_y\sup_{\|d\|_L\le 1}\Big\{\sum_z\alpha_t(y)\,S_t(z,y)\,d(z)-\sum_z T(z,y)\,d(z)\Big\}$$
$$=\sup_{\|d\|_L\le 1}\sum_z\bar\alpha_t(z)\,S_t(z)\,d(z)-\sum_z T(z)\,d(z)=\sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim S_t(z)}\bar\alpha_t(z)\,d(z)-\mathbb{E}_{z\sim T(z)}d(z)$$
A simple example helps to understand $\bar\alpha_t$: supposing three samples $S_t=\{(z_1,Y=1),(z_2,Y=1),(z_3,Y=0)\}$, then $\bar\alpha_t(z_1)=\bar\alpha_t(z_2)=\alpha_t(1)$ and $\bar\alpha_t(z_3)=\alpha_t(0)$. Therefore, the conditional term is equivalent to label-weighted Wasserstein adversarial learning. Plugging in the weight $\lambda[t]$ and the domain discriminator $d_t$ for each source domain, we finally obtain Lemma 1.

I DERIVING THE LABEL RATIO LOSS
Suppose the representation learning aims at matching the conditional distributions such that $T(z|y)\approx S_t(z|y)$, $\forall t$, and let $\bar T(y)$ denote the predicted target label distribution. Simplifying notation, define $f(z)=\mathrm{argmax}_y\,h(z,y)$, the most probable predicted label. Then we have:
$$\bar T(y)=\sum_{k=1}^{|\mathcal Y|}T\big(f(z)=y\,|\,Y=k\big)\,T(Y=k)=\sum_{k=1}^{|\mathcal Y|}S_t\big(f(z)=y\,|\,Y=k\big)\,T(Y=k)=\sum_{k=1}^{|\mathcal Y|}S_t\big(f(z)=y,Y=k\big)\,\alpha_t(k)=\bar T_{\alpha_t}(y)$$
The first equality comes from the definition of the predicted target label distribution: $\bar T(y)=\mathbb{E}_{T(z)}1\{f(z)=y\}=T(f(z)=y)=\sum_{k=1}^{|\mathcal Y|}T(f(z)=y,Y=k)=\sum_{k=1}^{|\mathcal Y|}T(f(z)=y|Y=k)\,T(Y=k)$. The second equality, $T(f(z)=y|Y=k)=S_t(f(z)=y|Y=k)$, holds since $T(z|y)\approx S_t(z|y)$ for all $t$: for the shared hypothesis $f$, the conditional prediction behavior coincides. The term $S_t(f(z)=y,Y=k)$ is the (expected) source prediction confusion matrix, and we denote its empirical (observed) version as $\hat S_t(f(z)=y,Y=k)$.

Based on this idea, in practice we want to find an $\hat\alpha_t$ that matches the two predicted distributions $\bar T$ and $\bar T_{\hat\alpha_t}$. Adopting the KL-divergence as the metric, we have:
$$\min_{\hat\alpha_t}D_{KL}\big(\bar T\,\|\,\bar T_{\hat\alpha_t}\big)=\min_{\hat\alpha_t}\mathbb{E}_{y\sim\bar T}\log\Big(\frac{\bar T(y)}{\bar T_{\hat\alpha_t}(y)}\Big)=\min_{\hat\alpha_t}-\mathbb{E}_{y\sim\bar T}\log\big(\bar T_{\hat\alpha_t}(y)\big)=\min_{\hat\alpha_t}-\sum_y\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal Y|}S_t\big(f(z)=y,Y=k\big)\,\hat\alpha_t(k)\Big)$$
We should also respect the natural constraints of a label ratio: $\hat\alpha_t(y)\ge 0$ and $\sum_y\hat\alpha_t(y)\hat S_t(y)=1$. Based on this principle, we propose the optimization problem for estimating each label ratio. Adopting the empirical counterpart, the empirical confusion matrix $\mathbf C_{\hat S_t}[y,k]=\hat S_t[f(z)=y,Y=k]$, the optimization loss can be expressed as:
$$\min_{\hat\alpha_t}\;-\sum_{y=1}^{|\mathcal Y|}\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal Y|}\mathbf C_{\hat S_t}[y,k]\,\hat\alpha_t(k)\Big)\qquad\text{s.t. }\forall y\in\mathcal Y,\;\hat\alpha_t(y)\ge 0,\;\sum_y\hat\alpha_t(y)\hat S_t(y)=1$$

J LABEL-PARTIAL MULTI-SOURCE UNSUPERVISED DA
The key difference between conventional and partial multi-source unsupervised DA is the estimation step of $\hat\alpha_t$. In fact, we only add a sparsity constraint when estimating each $\hat\alpha_t$:
$$\min_{\hat\alpha_t}\;-\sum_{y=1}^{|\mathcal Y|}\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal Y|}\mathbf C_{\hat S_t}[y,k]\,\hat\alpha_t(k)\Big)+C_2\|\hat\alpha_t\|_1\qquad\text{s.t. }\forall y\in\mathcal Y,\;\hat\alpha_t(y)\ge 0,\;\sum_y\hat\alpha_t(y)\hat S_t(y)=1\quad(5)$$
where $C_2$ is the hyper-parameter controlling the level of target label sparsity when estimating the target label distribution. In the paper, we set $C_2=0.1$. A CVXPY sketch of both estimators is given below.
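Below is a minimal CVXPY sketch of this estimator (the paper states that CVXPY is used for these convex subproblems; the function and variable names are our own). With l1_weight = 0 it solves the unsupervised-DA objective of Eq. (2); with l1_weight = 0.1 it corresponds to the label-partial variant of Eq. (5):

import cvxpy as cp
import numpy as np

def estimate_label_ratio(C_hat, T_bar, S_hat_y, l1_weight=0.0):
    # C_hat:   (|Y|, |Y|) empirical confusion matrix, C_hat[y, k] = S_t[f(z)=y, Y=k]
    # T_bar:   (|Y|,)     predicted target label distribution
    # S_hat_y: (|Y|,)     empirical source label distribution
    num_classes = C_hat.shape[0]
    alpha = cp.Variable(num_classes, nonneg=True)   # alpha_t(y) >= 0
    objective = -T_bar @ cp.log(C_hat @ alpha)      # KL-matching loss of Eq. (2)
    if l1_weight > 0:
        objective = objective + l1_weight * cp.norm1(alpha)  # sparsity of Eq. (5)
    problem = cp.Problem(cp.Minimize(objective), [S_hat_y @ alpha == 1])
    problem.solve()
    return alpha.value

The returned solution plays the role of $\alpha'_t$ in Algorithm 2 and is folded into the running estimate by the moving average $\alpha_t=\epsilon_1\alpha_t+(1-\epsilon_1)\alpha'_t$.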
K EXPLICIT AND IMPLICIT CONDITIONAL LEARNING
Inspired by Theorem 2, we need to learn a feature function $g:\mathcal X\to\mathcal Z$ and a hypothesis $h:\mathcal Z\times\mathcal Y\to\mathbb R$ that minimize:
$$\min_{g,h}\;\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)+C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)$$
This can be equivalently expressed as:
$$\min_{g,h}\;\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)+\epsilon\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)+(1-\epsilon)\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)$$
Due to the explicit and implicit approximations of the conditional distance, we then optimize an alternative form:
$$\min_{g,h}\max_{d_1,\dots,d_T}\;\underbrace{\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)}_{\text{Classification Loss}}+\underbrace{\epsilon\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}\|\mathbf C^y_t-\mathbf C^y\|_2}_{\text{Explicit Conditional Loss}}+\underbrace{(1-\epsilon)\,C_0\sum_t\lambda[t]\big[\mathbb{E}_{z\sim\hat S_t(z)}\bar\alpha_t(z)\,d_t(z)-\mathbb{E}_{z\sim\hat T(z)}d_t(z)\big]}_{\text{Implicit Conditional Loss}}\quad(6)$$
where:
• $\mathbf C^y_t=\sum_{(z_t,y_t)\sim\hat S_t}1_{\{y_t=y\}}z_t$ is the centroid of label $Y=y$ in source $S_t$;
• $\mathbf C^y=\sum_{(z_t,y_p)\sim\hat T}1_{\{y_p=y\}}z_t$ is the centroid of pseudo-label $Y=y_p$ in the target (in the unsupervised DA scenarios);
• $\bar\alpha_t(z)=1_{\{(z,y)\sim S_t\}}\hat\alpha_t(Y=y)$, i.e., for each pair $(z,y)$ from the distribution, $\bar\alpha_t(Z=z)=\hat\alpha_t(Y=y)$;
• $d_1,\dots,d_T$ are domain discriminators (critic functions) restricted to 1-Lipschitz functions;
• $\epsilon\in[0,1]$ is the adjustment parameter trading off explicit and implicit learning; based on this equivalent form, our approach provides a theoretically principled way to tune the weights. In the paper, we set $\epsilon=0.5$;
• $\hat T(y)$ is the empirical target label distribution (in the unsupervised DA scenarios, we approximate it by the predicted target label distribution $\bar T(y)$).

Gradient Penalty To enforce the Lipschitz property of the critic functions, we adopt the gradient penalty term (Gulrajani et al., 2017). More concretely, given two samples $z_s\sim S_t(z)$ and $z_t\sim T(z)$, we generate an interpolated sample $z_{int}=\xi z_s+(1-\xi)z_t$ with $\xi\sim\mathrm{Unif}[0,1]$. We then add a gradient penalty $\|\nabla d(z_{int})\|_2^2$ as a regularization term to control the Lipschitz property of the discriminators $d_1,\dots,d_T$; a minimal sketch of this penalty follows.
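A minimal PyTorch sketch of this penalty, implementing the $\|\nabla d(z_{int})\|_2^2$ form stated above (note this differs slightly from the $(\|\nabla d\|_2-1)^2$ variant of Gulrajani et al. (2017)); the penalty weight and usage comment are illustrative assumptions:

import torch

def gradient_penalty(critic, z_src, z_tgt):
    # penalize ||grad d(z_int)||^2 on interpolates z_int = xi*z_s + (1-xi)*z_t
    xi = torch.rand(z_src.size(0), 1, device=z_src.device)   # xi ~ Unif[0, 1]
    z_int = (xi * z_src + (1.0 - xi) * z_tgt).requires_grad_(True)
    grads = torch.autograd.grad(
        outputs=critic(z_int).sum(),  # sum() yields per-sample input gradients
        inputs=z_int,
        create_graph=True,            # keep the graph so the penalty is trainable
    )[0]
    return grads.pow(2).sum(dim=1).mean()

# usage (hypothetical weight): critic_loss = wasserstein_term
#     + gp_weight * gradient_penalty(d_t, z_s, z_t), applied to each d_1, ..., d_T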
L ALGORITHM DESCRIPTIONS
We present the detailed pipeline of the proposed algorithm below, in Algorithms 2 and 3. As for updating $\lambda$ and $\alpha_t$, we iteratively solve the two convex optimization problems after each training epoch and update the values using a moving average. We notice that frequently updating these two parameters at the mini-batch level leads to instability during training: in label-shift scenarios the mini-batches are highly label-imbalanced, so evaluating $\alpha_t$ over a mini-batch is computationally expensive and unstable. As a consequence, we compute the accumulated confusion matrix, weighted prediction risk, and conditional Wasserstein distance over the whole training epoch and then solve the optimization problems. We use CVXPY to optimize the two standard convex losses; neither problem is large-scale, so a standard convex solver is fast and accurate. (A sketch of the $\lambda$ subproblem is given after Algorithm 3.)

Comparison of time and memory complexity We discuss the time and memory complexity of our approach. Time complexity: for each batch we compute $T$ re-weighted losses, $T$ domain adversarial losses, and $T$ explicit conditional losses, so the computational complexity remains $O(T)$ per mini-batch, comparable with recent SOTA methods such as MDAN and DARN. In addition, after each training epoch we estimate $\alpha_t$ and $\lambda$, which has time complexity $O(T|\mathcal Y|)$ per epoch (if we adopt SGD to solve these two convex problems). Therefore, the proposed algorithm has time complexity $O(T|\mathcal Y|)$; the extra $|\mathcal Y|$ factor is due to handling label shift. Memory complexity: our approach requires $O(T)$ domain discriminators and $O(T|\mathcal Y|)$ class-feature centroids. By contrast, MDAN and DARN require $O(T)$ domain discriminators, and M3SDA and MDMN require $O(T^2)$ domain discriminators. Since our class-feature centroids are defined in the latent space $\mathcal Z$, their memory cost is much smaller than that of domain discriminators.

Algorithm 2 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat S_1,\dots,\hat S_T$; target samples $\hat T$
Ensure: Label distribution ratio $\hat\alpha_t$ and task relation simplex $\lambda$; feature learner $g$; classifier $h$; critic functions $d_1,\dots,d_T$; class centroids $\mathbf C^y_t$ (source) and $\mathbf C^y$ (target), $\forall t\in[1,T], y\in\mathcal Y$
1: /// DNN Parameter Training Stage (fixed $\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf x_{S_1},\mathbf y_{S_1})\sim\hat S_1,\dots,(\mathbf x_{S_T},\mathbf y_{S_T})\sim\hat S_T$, $(\mathbf x_T)\sim\hat T$ do
3: Predict target pseudo-labels $\bar{\mathbf y}_T=\mathrm{argmax}_y\,h(g(\mathbf x_T),y)$
4: Compute the (un-normalized) source confusion matrix for each batch: $\mathbf C_{\hat S_t}=\#[\mathrm{argmax}_{y'}h(z,y')=y,\,Y=k]$ $(t=1,\dots,T)$
5: Compute the batch class centroids $\tilde{\mathbf C}^y_t$ (source) and $\tilde{\mathbf C}^y$ (target)
6: Moving-average update of the source/target class centroids (we set $\epsilon_1=0.7$):
7: Source class centroid update: $\mathbf C^y_t=\epsilon_1\mathbf C^y_t+(1-\epsilon_1)\tilde{\mathbf C}^y_t$
8: Target class centroid update: $\mathbf C^y=\epsilon_1\mathbf C^y+(1-\epsilon_1)\tilde{\mathbf C}^y$
9: Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
10: end for
11: /// Estimation of $\hat\alpha_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf C_{\hat S_t}=\hat S_t[\mathrm{argmax}_{y'}h(z,y')=y,\,Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}_{t=1}^T$) by Equation (2) (or Eq. (5) in the partial scenario)
14: Update $\alpha_t$ by moving average: $\alpha_t=\epsilon_1\alpha_t+(1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) as in Sec. 4.3
16: Update $\lambda$ by moving average: $\lambda=0.8\lambda+0.2\lambda'$

Algorithm 3 Wasserstein Aggregation Domain Network (Limited Target Data, one iteration)
Require: Labeled source samples $\hat S_1,\dots,\hat S_T$; target samples $\hat T$; label shift ratios $\alpha_t$
Ensure: Task relation simplex $\lambda$; feature learner $g$; classifier $h$; critic functions $d_1,\dots,d_T$; class centroids $\mathbf C^y_t$ (source) and $\mathbf C^y$ (target), $\forall t\in[1,T], y\in\mathcal Y$
1: /// DNN Parameter Training Stage (fixed $\lambda$) ///
2: for mini-batches of samples $(\mathbf x_{S_1},\mathbf y_{S_1})\sim\hat S_1,\dots,(\mathbf x_{S_T},\mathbf y_{S_T})\sim\hat S_T$, $(\mathbf x_T)\sim\hat T$ do
3: Compute the batch class centroids $\tilde{\mathbf C}^y_t$ (source) and $\tilde{\mathbf C}^y$ (target)
4: Moving-average update of the source/target class centroids (we set $\epsilon_1=0.7$):
5: Source class centroid update: $\mathbf C^y_t=\epsilon_1\mathbf C^y_t+(1-\epsilon_1)\tilde{\mathbf C}^y_t$
6: Target class centroid update: $\mathbf C^y=\epsilon_1\mathbf C^y+(1-\epsilon_1)\tilde{\mathbf C}^y$
7: Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
8: end for
9: /// Estimation of $\lambda$ ///
10: Solve for $\lambda$ as in Sec. 4.3 (denoted $\lambda'$)
11: Update $\lambda$ by moving average: $\lambda=\epsilon_1\lambda+(1-\epsilon_1)\lambda'$
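The $\lambda$ subproblem of Eq. (3) in the main paper is likewise a small convex program. The sketch below is our own illustration with CVXPY (the input arrays are the per-source quantities accumulated over an epoch; C0 and C1 are the trade-off hyper-parameters of Eq. (3)):

import cvxpy as cp
import numpy as np

def solve_lambda(risks, w1_dists, betas, C0=1.0, C1=1.0):
    # risks:    (T,) alpha-weighted empirical source risks
    # w1_dists: (T,) weighted centroid distances sum_y T(y) ||C_t^y - C^y||_2
    # betas:    (T,) source frequency ratios N_{S_t} / N
    lam = cp.Variable(len(risks), nonneg=True)
    objective = (lam @ risks
                 + C0 * (lam @ w1_dists)
                 + C1 * cp.norm(cp.multiply(lam, 1.0 / np.sqrt(betas)), 2))
    cp.Problem(cp.Minimize(objective), [cp.sum(lam) == 1]).solve()
    return lam.value

The solution $\lambda'$ is then blended into the running $\lambda$ by the moving-average step of Algorithms 2 and 3.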
M DATASET DESCRIPTION AND EXPERIMENTAL DETAILS
M.1 AMAZON REVIEW DATASET
We used the Amazon review dataset (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive (label "1") and negative (label "0") product reviews. The data sizes are 6465 (Books), 5586 (DVD), 7681 (Electronics), and 7945 (Kitchen). We follow the common data pre-processing strategies of Chen et al. (2012): we use bag-of-words (BOW) features and extract the top-5000 most frequent unigrams and bigrams of all the reviews. We also noticed the original datasets are
1. What are the contributions and novel aspects of the paper regarding multi-source transfer learning and the label shift problem?
2. What are the strengths and weaknesses of the proposed WADN algorithm, particularly in its ability to outperform SOTA methods in certain scenarios?
3. How does the reviewer assess the clarity and organization of the paper's content, including the placement of the literature review in the appendices?
4. What are the limitations of the experimental validation, and how could they be addressed to provide a more comprehensive understanding of the method's effectiveness?
5. Are there any concerns regarding the reproducibility of the results due to the unavailability of the code?
Review
Review In this paper, the authors focus on the label shift problem in multi-source transfer learning and derive new generic principles to control the target generalization risk. They propose a framework that unifies the principles of conditional feature alignment, label distribution ratio estimation, and domain relation weight estimation. A WADN algorithm is proposed for 3 multi-source label shift transfer scenarios: learning with limited target data, unsupervised DA, and label-partial unsupervised DA. The proposed WADN algorithm is validated on different scenarios on common benchmark datasets (Digits, Office-Home, Amazon Review), and the results indicate that it can outperform related SOTA methods for these scenarios. Although the paper is well written and all the key concepts are described in some detail, I found some parts of Section 2 difficult to follow. The supplementary material in the appendices provides much additional information for the reader: proofs, results, etc. In terms of organization, the authors present their review and analysis of the SOTA literature in an appendix (not in the main paper). Therefore, it is not immediately clear in the main paper how their framework and algorithm are motivated by challenges in the literature. The experimental validation is limited in some respects. The authors do present averaged results over independent replications, using some cross-validation process. There should be further analysis of the impact on performance of growing the number and size of the sources, the degree of shift, and the diversity among source and target domains. The experimental section should be expanded to compare results on different backbone networks. Their model should also be compared with SOTA methods in terms of time and/or memory complexity. Tables 2-7 show the lower bound (source or source + tar), but do not show upper-bound results, e.g., training a DL model on labeled source and target data (in the UDA scenario). These tables also show DA results with an average value (last column). This is common in DA papers, but I still fail to see the point of averaging across different problems. It seems that their code is not made available, so there is a concern that the results in this paper would be very difficult for a reader to reproduce.
ICLR
Title Unified Principles For Multi-Source Transfer Learning Under Label Shifts Abstract We study the label shift problem in multi-source transfer learning and derive new generic principles. Our proposed framework unifies the principles of conditional feature alignment, label distribution ratio estimation, and domain relation weight estimation. Based on these practical principles, we provide a unified practical framework for three multi-source label shift transfer scenarios: learning with limited target data, unsupervised domain adaptation, and label-partial unsupervised domain adaptation. We evaluate the proposed method on these scenarios by extensive experiments and show that our proposed algorithm can significantly outperform the baselines.

1 INTRODUCTION
Transfer learning (Pan & Yang, 2009) is based on the motivation that learning a new task is easier after having learned several similar tasks. By learning the inductive bias from a set of related source domains $(S_1,\dots,S_T)$ and then leveraging the shared knowledge upon learning the target domain $T$, prediction performance can be significantly improved. Based on this, transfer learning arises in deep learning applications such as computer vision (Zhang et al., 2019; Tan et al., 2018; Hoffman et al., 2018b), natural language processing (Ruder et al., 2019; Houlsby et al., 2019), and biomedical engineering (Raghu et al., 2019; Lundervold & Lundervold, 2019; Zhang & An, 2017).

To ensure a reliable transfer, it is critical to understand the theoretical assumptions relating the domains. One implicit assumption in most transfer learning algorithms is that the label proportions remain unchanged across domains (Du Plessis & Sugiyama, 2014), i.e., $S(y)=T(y)$. However, in many real-world applications the label distributions can vary markedly (i.e., label shift) (Wen et al., 2014; Lipton et al., 2018; Li et al., 2019b), in which case existing approaches cannot guarantee a small target generalization error, as recently proved by Combes et al. (2020). Moreover, transfer learning becomes more challenging when transferring knowledge from multiple sources to build a model for the target domain, as this requires effectively selecting and leveraging the most useful source domains when label shift occurs. This is not only theoretically interesting but also commonly encountered in real-world applications. For example, in medical diagnostics, the disease distribution varies across countries (Liu et al., 2004; Geiss et al., 2014). Considering the task of diagnosing a disease in a country without sufficient data, how can we leverage information from different countries with abundant data to help with the diagnosis? Obviously, naïvely combining all the sources and applying a one-to-one single-source transfer learning algorithm can lead to undesirable results, as it may include low-quality or even untrusted data from certain sources, which can severely degrade performance.

In this paper, we study the label shift problem in multi-source transfer learning, where $S_t(y)\neq T(y)$. We propose unified principles that are applicable to three common transfer scenarios: unsupervised domain adaptation (DA) (Ben-David et al., 2010), limited target labels (Mansour et al., 2020), and partial unsupervised DA with $\mathrm{supp}(T(y))\subseteq\mathrm{supp}(S_t(y))$ (Cao et al., 2018), whereas prior works generally treated these as separate scenarios.
It should be noted that this work deals with target shift without assuming identical semantic conditional distributions (i.e., we allow $S_t(x|y)\neq T(x|y)$), which is more realistic for real-world problems. Our contributions in this paper are two-fold:

(I) We propose to use the Wasserstein distance (Arjovsky et al., 2017) to develop a new upper bound on the target generalization risk (Theorem 1), which reveals the importance of label distribution ratio estimation and provides a principled guideline for learning the domain relation coefficients. Moreover, we provide a theoretical analysis in the context of representation learning (Theorem 2), which guides learning a feature function that minimizes the conditional Wasserstein distance while controlling the weighted source risk. We further reveal that the relations among the aforementioned three scenarios lie in the different assumptions used for estimating the label distribution ratio.

(II) Inspired by the theoretical results, we propose the Wasserstein Aggregation Domain Network (WADN) for handling label shift in multi-source transfer learning. We evaluate our algorithm on three benchmark datasets, and the results show that it can significantly outperform state-of-the-art principled approaches.

2 RELATED WORK
Multi-Source Transfer Learning Theories have been investigated in the literature with different principles for aggregating source domains. In the popular unsupervised DA setting, (Zhao et al., 2018; Peng et al., 2019; Wen et al., 2020; Li et al., 2018b) adopted the $\mathcal H$-divergence (Ben-David et al., 2007), the discrepancy (Mansour et al., 2009), and the Wasserstein distance (Arjovsky et al., 2017) of the marginal distributions, $d(S_t(x),T(x))$, to estimate domain relations and dynamically leverage different domains. The bounds behind these algorithms generally consist of a source risk term, a domain discrepancy term, and an unobservable term $\eta$, the optimal risk on all the domains, which is ignored by these approaches. However, as Combes et al. (2020) pointed out, ignoring the influence of $\eta$ is problematic when the label distributions of the source and target domains differ significantly. It is therefore necessary to take $\eta$ into consideration when a small amount of labelled data is available for the target domain (Wen et al., 2020). Following this line, very recent works (Konstantinov & Lampert, 2019; Wang et al., 2019a; Mansour et al., 2020) started to measure the divergence between two domains, given label information for the target domain, by using the $\mathcal Y$-discrepancy (Mohri & Medina, 2012). However, we empirically show that these methods are still unable to handle label shift.

Label Shift Label shift (Zhang et al., 2013; Gong et al., 2016) is a common phenomenon in transfer learning, where $S(y)\neq T(y)$; it is generally ignored by previous multi-source transfer learning practice. Several theoretically principled approaches have been proposed, such as (Azizzadenesheli et al., 2019; Garg et al., 2020). In addition, (Combes et al., 2020; Wu et al., 2019) analyzed the generalized label shift problem in one-to-one single-source unsupervised DA, but did not provide guidelines for leveraging multiple sources to ensure a reliable transfer, which is more challenging. (Redko et al., 2019) proposed an optimal transport strategy for multi-source unsupervised DA under label shift by assuming identical semantic conditional distributions; however, they did not consider representation learning in conjunction with their framework and did not design neural-network-based approaches.
Different from these, we analyze our problem in the context of representation learning and propose efficient and principled strategies. Moreover, our theoretical results highlight the importance of the label shift problem in a variety of multi-source transfer problems, while the aforementioned works generally focus on the unsupervised DA problem without considering unified rules for different scenarios (e.g., partial multi-source DA).

3 THEORETICAL INSIGHTS: TRANSFER RISK UPPER BOUND
We assume a scoring hypothesis defined on the input space $\mathcal X$ and output space $\mathcal Y$, $h:\mathcal X\times\mathcal Y\to\mathbb R$, that is $K$-Lipschitz w.r.t. the feature $x$ (given the same label), i.e., for $\forall y$, $\|h(x_1,y)-h(x_2,y)\|_2\le K\|x_1-x_2\|_2$; the loss function $\ell:\mathbb R\times\mathbb R\to\mathbb R_+$ is positive, $L$-Lipschitz, and upper bounded by $L_{\max}$. We denote the expected risk w.r.t. distribution $\mathcal D$ as $R_{\mathcal D}(h)=\mathbb{E}_{(x,y)\sim\mathcal D}\ell(h(x,y))$ and its empirical counterpart (w.r.t. $\hat{\mathcal D}$) as $\hat R_{\mathcal D}(h)=\sum_{(x,y)\in\hat{\mathcal D}}\ell(h(x,y))$. We adopt the Wasserstein-1 distance (Arjovsky et al., 2017) as the metric measuring domain similarity. Compared with other divergences, the Wasserstein distance has been theoretically proved tighter than the TV distance (Gong et al., 2016) or the Jensen-Shannon divergence (Combes et al., 2020).

Based on previous work, label shift is generally handled by the label-distribution-ratio-weighted loss: $R^\alpha_S(h)=\mathbb{E}_{(x,y)\sim S}\,\alpha(y)\ell(h(x,y))$ with $\alpha(y)=T(y)/S(y)$. We also denote $\hat\alpha_t$ as its empirical counterpart, estimated from samples. Besides, to measure the task relations, we define a simplex $\lambda$ with $\lambda[t]\ge 0$, $\sum_{t=1}^T\lambda[t]=1$ as the task relation coefficient vector, which assigns high weight to similar tasks. We first present Theorem 1, which provides theoretical insights about how to combine source domains by properly estimating $\lambda$.

Theorem 1. Let $\{\hat S_t=\{(x_i,y_i)\}_{i=1}^{N_{S_t}}\}_{t=1}^T$ and $\hat T=\{(x_i,y_i)\}_{i=1}^{N_T}$ be $T$ source and target i.i.d. samples, respectively. For $\forall h\in\mathcal H$ with $\mathcal H$ the hypothesis family, and $\forall\lambda$, with high probability $\ge 1-4\delta$, the target risk can be upper bounded by:
$$R_T(h)\le\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)+LK\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(x|Y=y)\,\|\,\hat S_t(x|Y=y)\big)+L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$
$$+\,L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2+\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta),$$
where $N=\sum_{t=1}^T N_{S_t}$, $\beta_t=N_{S_t}/N$, and $d^{\sup}_\infty=\max_{t\in[1,T],y\in[1,|\mathcal Y|]}\alpha_t(y)$ is the maximum true label distribution ratio value. $W_1(\cdot\|\cdot)$ is the Wasserstein-1 distance with the $L_2$ distance as the cost function. $\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ is a term that decreases with larger $N_{S_1},\dots,N_T$, given a fixed $\delta$ and hypothesis family $\mathcal H$ (see Appendix E for details).

Remarks (1) In the first two terms, the relation coefficient $\lambda$ is controlled by the $\hat\alpha_t$-weighted loss $\hat R^{\hat\alpha_t}_{S_t}(h)$ and the conditional Wasserstein distance $\mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))$. To minimize the upper bound, we need to assign a higher $\lambda[t]$ to the source $t$ with a smaller weighted prediction loss and a smaller weighted semantic conditional Wasserstein distance. Intuitively, we tend to leverage the source tasks that are semantically similar to the target and easy to learn. (2) If all sources have equal numbers of observations, i.e., $\beta_t=1/T$, the third term becomes proportional to $\|\lambda\|_2$, an $L_2$-norm regularization, which can be viewed as encouraging uniform leveraging of all sources. Combining these three terms, we need to trade off assigning a higher $\lambda[t]$ to sources with a smaller weighted prediction loss and conditional Wasserstein distance against keeping $\lambda$ balanced to avoid concentrating on only one source.
(3) $\|\hat\alpha_t-\alpha_t\|_2$ indicates the gap between the ground-truth and empirical label ratios. Therefore, if we can estimate a good $\hat\alpha_t$, this term will be small. In practice, if target labels are available, $\hat\alpha_t$ can be computed directly from the observed data and $\hat\alpha_t\to\alpha_t$. If target labels are absent (unsupervised DA), we need to design methods to properly estimate $\hat\alpha_t$ (Sec. 4). (4) $\mathrm{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ reflects the convergence behavior, decreasing with larger observation numbers. If we fix $\mathcal H$, $\delta$, $N$, and $N_T$, this term can be viewed as a constant.

Insights in Representation Learning Apart from Theorem 1, we propose a novel theoretical analysis in the context of representation learning, which motivates practical guidelines in the deep learning regime. We define a stochastic feature function $g$ and denote its conditional distribution w.r.t. the latent variable $Z$ (induced by $g$) as $S(z|Y=y)=\int_x g(z|x)S(x|Y=y)\,dx$. Then we have:

Theorem 2. We assume the loss and hypothesis settings of Theorem 1. We further denote the stochastic feature learning function $g:\mathcal X\to\mathcal Z$ and the hypothesis $h:\mathcal Z\times\mathcal Y\to\mathbb R$. Then, $\forall\lambda$, the target risk is upper bounded by:
$$R_T(h,g)\le\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h,g)+LK\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(S_t(z|Y=y)\,\|\,T(z|Y=y)\big),$$
where $R_T(h,g)=\mathbb{E}_{(x,y)\sim T(x,y)}\mathbb{E}_{z\sim g(z|x)}\ell(h(z,y))$.

Theorem 2 reveals that to control the upper bound, we need to learn a $g$ that minimizes the weighted conditional Wasserstein distance and learn $(g,h)$ that minimizes the weighted source risk.

Comparison with previous theorems Our theory offers an alternative perspective for understanding transfer learning. The first term is the $\alpha$-weighted loss, which recovers typical source loss minimization when there is no label shift, i.e., $\alpha_t(y)\equiv 1$ (Li et al., 2019a; Peng et al., 2019; Zhao et al., 2018; Wen et al., 2020). Besides, minimizing the conditional Wasserstein distance has been shown to be advantageous compared with minimizing $W_1(S_t(z)\|T(z))$ (Long et al., 2018). Moreover, Theorem 2 explicitly provides theoretical insights about the representation learning function $g$, which remained elusive in previous multi-source transfer theories such as (Wang et al., 2019a; Mansour et al., 2020; Konstantinov & Lampert, 2019; Li et al., 2019a; Peng et al., 2019).

4 UNIFIED PRACTICAL FRAMEWORK IN DEEP LEARNING
The theoretical results in Section 3 motivate general principles to follow when designing multi-source transfer learning algorithms. We summarize those principles in the following rules. (I) Learn a $g$ that minimizes the weighted conditional Wasserstein distance, and learn $(g,h)$ that minimizes the $\hat\alpha_t$-weighted source risk (Sec. 4.1). (II) Properly estimate the label distribution ratio $\hat\alpha_t$ (Sec. 4.2). (III) Balance the trade-off between assigning a higher $\lambda[t]$ to sources with a smaller weighted prediction loss and conditional Wasserstein distance, and keeping $\lambda$ balanced (Sec. 4.3). We instantiate these rules with a unified practical framework for solving multi-source transfer learning problems, as shown in Tab. 1. We would like to point out that our original theoretical results are based on the setting with available target labels; the proposed algorithm can be applied to unsupervised scenarios under additional assumptions.
4.1 GUIDELINES FOR REPRESENTATION LEARNING
Motivated by Theorem 2, given a fixed label ratio estimate $\hat\alpha_t$ and fixed $\lambda$, we should find a representation function $g:\mathcal X\to\mathcal Z$ and a hypothesis function $h:\mathcal Z\times\mathcal Y\to\mathbb R$ such that:
$$\min_{g,h}\;\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)+C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)\quad(1)$$

Explicit Conditional Loss When target label information is available, one can explicitly solve the conditional optimal transport problem with $g$ and $h$ for a given $Y=y$. However, due to the high computational complexity of solving $T\times|\mathcal Y|$ optimal transport problems, the original form is practically intractable. To address this issue, we propose to approximate the conditional distributions on the latent space $\mathcal Z$ as Gaussians with identical covariance matrices, $\hat S_t(z|Y=y)\approx\mathcal N(\mathbf C^y_t,\Sigma)$ and $\hat T(z|Y=y)\approx\mathcal N(\mathbf C^y,\Sigma)$. Then we have $W_1(\hat S_t(z|Y=y)\|\hat T(z|Y=y))\le\|\mathbf C^y_t-\mathbf C^y\|_2$ (see Appendix G for details). Intuitively, this approximation is equivalent to the well-known feature mean matching (Sugiyama & Kawanabe, 2012), which computes the feature centroid of each class (on latent space $\mathcal Z$) and aligns the centroids by minimizing their $L_2$ distance.

Implicit Conditional Loss When target label information is unavailable (e.g., unsupervised DA and partial DA), the explicit matching approach can adopt pseudo-labels predicted by the hypothesis $h$ as a surrogate for the true target labels. However, in the early stage of training, the pseudo-labels can be unreliable, which can lead to an inaccurate estimate of $W_1(\hat S_t(z|Y=y)\|\hat T(z|Y=y))$. To address this, the following lemma shows that estimating the conditional Wasserstein distance is equivalent to estimating a Wasserstein adversarial loss weighted by the label distribution ratio.

Lemma 1. The weighted conditional Wasserstein distance can be implicitly expressed as:
$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1\big(S_t(z|Y=y)\,\|\,T(z|Y=y)\big)=\max_{d_1,\dots,d_T}\sum_t\lambda[t]\big[\mathbb{E}_{z\sim S_t(z)}\bar\alpha_t(z)\,d_t(z)-\mathbb{E}_{z\sim T(z)}d_t(z)\big],$$
where $\bar\alpha_t(z)=1_{\{(z,y)\sim S_t\}}\alpha_t(Y=y)$, and $d_1,\dots,d_T:\mathcal Z\to\mathbb R_+$ are 1-Lipschitz domain discriminators (Ganin et al., 2016; Arjovsky et al., 2017).

Lemma 1 reveals that instead of using pseudo-labels to estimate the weighted conditional Wasserstein distance, one can train $T$ domain discriminators with a weighted Wasserstein adversarial loss, which does not require per-sample target pseudo-labels during matching. On the other hand, $\bar\alpha_t$ can be obtained from $\hat\alpha_t$, which will be elaborated in Sec. 4.2. In practice, we adopt a hybrid approach that linearly combines the explicit and implicit matching strategies for all scenarios; empirical results show its effectiveness.

4.2 ESTIMATING THE LABEL DISTRIBUTION RATIO $\hat\alpha_t$
Multi-Source Transfer with target labels When target labels are available, $\hat\alpha_t$ can be directly estimated from the data without any assumption, and $\hat\alpha_t\to\alpha_t$ follows from asymptotic statistics.
Unsupervised Multi-Source DA In this scenario, it is impossible to estimate a good $\hat\alpha_t$ without imposing additional assumptions. Following (Zhang et al., 2013; Lipton et al., 2018; Azizzadenesheli et al., 2019; Combes et al., 2020), we assume that the conditional distributions are aligned between the target and source domains (i.e., $S_t(z|y)=T(z|y)$). We denote $\bar S_t(y)$ and $\bar T(y)$ as the predicted $t$-source/target label distributions under the hypothesis $h$, and define $\mathbf C_{\hat S_t}[y,k]=\hat S_t[\mathrm{argmax}_{y'}h(z,y')=y,\,Y=k]$ as the $t$-source prediction confusion matrix.
We can show that if the conditional distributions are aligned, then $\bar T(y)=\bar T_{\hat\alpha_t}(y)$, where $\bar T_{\hat\alpha_t}(Y=y)=\sum_{k=1}^{|\mathcal Y|}\mathbf C_{\hat S_t}[y,k]\,\hat\alpha_t(k)$ is the target prediction distribution constructed from the $t$-source information (see Appendix I for the proof). We can then estimate $\hat\alpha_t$ by matching these two distributions, minimizing $D_{KL}(\bar T(y)\|\bar T_{\hat\alpha_t}(y))$, which is equivalent to:
$$\min_{\hat\alpha_t}\;-\sum_{y=1}^{|\mathcal Y|}\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal Y|}\mathbf C_{\hat S_t}[y,k]\,\hat\alpha_t(k)\Big)\qquad\text{s.t. }\forall y\in\mathcal Y,\;\hat\alpha_t(y)\ge 0,\;\sum_{y=1}^{|\mathcal Y|}\hat\alpha_t(y)\hat S_t(y)=1\quad(2)$$
We have assumed above that the conditional distributions are aligned, which is a feasible requirement in our algorithm, since the goal of $g$ is exactly to gradually achieve this. In the experiments, we iteratively estimate $\hat\alpha_t$ and learn $g$.
Unsupervised Multi-Source Partial DA When $\mathrm{supp}(T(y))\subseteq\mathrm{supp}(S_t(y))$, $\alpha_t$ is sparse due to the non-overlapping classes. Accordingly, in addition to the assumption $S_t(z|y)=T(z|y)$ as in unsupervised DA, we impose this prior knowledge by adding a regularizer $\|\hat\alpha_t\|_1$ to the objective of Eq. (2) to induce sparsity in $\hat\alpha_t$ (see Appendix J for more details). When training the neural network, the non-overlapping classes are automatically assigned a small or zero $\hat\alpha_t$, so $(g,h)$ is less affected by classes with small $\hat\alpha_t$. Our empirical results validate its capability of detecting non-overlapping classes and show significant improvements over other baselines.

4.3 ESTIMATING THE TASK RELATION COEFFICIENT $\lambda$
Inspired by Theorem 1, given fixed $\hat\alpha_t$ and $(g,h)$, we estimate $\lambda$ by optimizing the derived upper bound:
$$\min_\lambda\;\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)+C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat T(z|Y=y)\,\|\,\hat S_t(z|Y=y)\big)+C_1\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\qquad\text{s.t. }\forall t,\;\lambda[t]\ge 0,\;\sum_{t=1}^T\lambda[t]=1\quad(3)$$
In practice, $\hat R^{\hat\alpha_t}_{S_t}(h,g)$ is the weighted empirical prediction error, and $\mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(z|Y=y)\|\hat S_t(z|Y=y))$ is approximated by the dynamic feature centroid distance $\sum_y\bar T(y)\|\mathbf C^y_t-\mathbf C^y\|_2$ (see Appendix L for details). Thus, solving for $\lambda$ is a standard convex optimization problem.

4.4 ALGORITHM DESCRIPTION
Based on the aforementioned components, we present the description of WADN (Algorithm 1) for the unsupervised scenarios (UDA and partial DA), which iteratively updates $(g,h)$, $\hat\alpha_t$, and $\lambda$.

Algorithm 1 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat S_1,\dots,\hat S_T$; target samples $\hat T$
Ensure: Label distribution ratio $\hat\alpha_t$ and task relation simplex $\lambda$; feature function $g$; classifier $h$; domain critic functions $d_1,\dots,d_T$; class centroids $\mathbf C^y_t$ (source) and $\mathbf C^y$ (target), $\forall t\in[1,T], y\in\mathcal Y$
1: /// DNN Parameter Training Stage (fixed $\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf x_{S_1},\mathbf y_{S_1})\sim\hat S_1,\dots,(\mathbf x_{S_T},\mathbf y_{S_T})\sim\hat S_T$, $(\mathbf x_T)\sim\hat T$ do
3: Predict target pseudo-labels $\bar{\mathbf y}_T=\mathrm{argmax}_y\,h(g(\mathbf x_T),y)$
4: Compute the (un-normalized) source confusion matrix for each batch: $\mathbf C_{\hat S_t}=\#[\mathrm{argmax}_{y'}h(z,y')=y,\,Y=k]$ $(t=1,\dots,T)$
5: Compute the batch class centroids $\tilde{\mathbf C}^y_t$ (source) and $\tilde{\mathbf C}^y$ (target)
6: Moving-average update of the source/target class centroids ($\epsilon_1=0.7$):
7: Update source class centroid: $\mathbf C^y_t=\epsilon_1\mathbf C^y_t+(1-\epsilon_1)\tilde{\mathbf C}^y_t$
8: Update target class centroid: $\mathbf C^y=\epsilon_1\mathbf C^y+(1-\epsilon_1)\tilde{\mathbf C}^y$
9: Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal), by solving:
$$\min_{g,h}\max_{d_1,\dots,d_T}\underbrace{\sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h,g)}_{\text{Classification Loss}}+\underbrace{\epsilon\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\bar T(y)}\|\mathbf C^y_t-\mathbf C^y\|_2}_{\text{Explicit Conditional Loss}}+\underbrace{(1-\epsilon)\,C_0\sum_t\lambda[t]\big[\mathbb{E}_{z\sim\hat S_t(z)}\bar\alpha_t(z)\,d_t(z)-\mathbb{E}_{z\sim\hat T(z)}d_t(z)\big]}_{\text{Implicit Conditional Loss}}$$
10: end for
11: /// Estimation of $\hat\alpha_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf C_{\hat S_t}=\hat S_t[\mathrm{argmax}_{y'}h(z,y')=y,\,Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}_{t=1}^T$) as in Sec. 4.2 (unsupervised DA or partial UDA)
14: Update $\alpha_t$ by moving average: $\alpha_t=\epsilon_1\alpha_t+(1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) as in Sec. 4.3, and update $\lambda$ by moving average: $\lambda=\epsilon_1\lambda+(1-\epsilon_1)\lambda'$

When updating $\lambda$ and $\alpha_t$, we use the CVXPY package to optimize the two standard convex losses after each training epoch, and then update the running values by moving average. For WADN with target label information, we do not require pseudo-labels and directly compute $\hat\alpha_t$ (see Appendix L).

5 EXPERIMENTS
In this section, we compare the proposed approach with several baselines on popular tasks. For all scenarios, the following baselines are evaluated: (I) Source, which applies only labelled source data to train the model. (II) DANN (Ganin et al., 2016); we follow the protocol of Wen et al. (2020) and merge all the source datasets into a global source domain. (III) MDAN (Zhao et al., 2018); (IV) MDMN (Li et al., 2018b); (V) M3SDA (Peng et al., 2019), which adopts maximizing classifier discrepancy (Saito et al., 2018); and (VI) DARN (Wen et al., 2020). For conventional multi-source transfer and partial unsupervised multi-source DA, we additionally compare scenario-specific baselines. All baselines are re-implemented with the same network structure for fair comparison. Detailed network structures, hyper-parameter settings, and training details are given in Appendix M.

We evaluate performance on three datasets. (I) Amazon Review (Blitzer et al., 2007): it contains four domains (Books, DVD, Electronics, and Kitchen) with positive and negative product reviews. We follow the common data pre-processing strategies of Chen et al. (2012) to form 5000-dimensional bag-of-words features. Note that the label distribution in the original dataset is uniform; to highlight the benefits of the proposed approach, we create a label-distribution-drifted task by randomly dropping 50% of the negative reviews of all sources while keeping the target identical (shown in Fig. 3(a)). (II) Digits: it consists of four digit recognition datasets, MNIST, USPS (Hull, 1994), SVHN (Netzer et al., 2011), and Synth (Ganin et al., 2016). We also create a slight label distribution drift in the sources by randomly dropping 50% of the samples for digits 5-9 while keeping the target identical (shown in Fig. 3(b)). (III) Office-Home (Venkateswara et al., 2017): it contains 65 classes over four domains: Art, Clipart, Product, and Real-World. We use a ResNet50 (He et al., 2016) pretrained on ImageNet in PyTorch as the base network for feature learning, followed by an MLP for classification. The label distributions of these four domains already differ, so we did not manually create a label drift (shown in Fig. 3(c)). A small sketch of the label-drift construction used for the first two datasets is given below.
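As a concrete illustration of how such a source label drift can be constructed for Amazon Review and Digits, the following is a small sketch (our own illustrative code, not the authors' released pipeline) that drops 50% of the source samples whose label lies in a chosen set while leaving the target untouched:

import numpy as np

def drift_source(xs, ys, drop_labels, drop_frac=0.5, seed=0):
    # randomly drop a fraction of source samples whose label is in drop_labels;
    # e.g. drop_labels={0} (negative reviews) for Amazon, {5,...,9} for Digits
    rng = np.random.default_rng(seed)
    keep = np.ones(len(ys), dtype=bool)
    candidates = np.flatnonzero(np.isin(ys, list(drop_labels)))
    dropped = rng.choice(candidates, size=int(drop_frac * len(candidates)),
                         replace=False)
    keep[dropped] = False
    return xs[keep], ys[keep]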
5.1 UNSUPERVISED MULTI-SOURCE DA
In unsupervised multi-source DA, we evaluate the proposed approach on all three datasets. We use a hyper-parameter selection strategy similar to DANN (Ganin et al., 2016). All reported results are averaged over five runs; detailed experimental settings are given in Appendix M. The empirical results are presented in Tabs. 7, 2, and 3. Since we did not change the target label distribution throughout the experiments, we still use target accuracy as the metric. We report means and standard deviations for each approach; the best approaches, based on a two-sided Wilcoxon signed-rank test (significance level p = 0.05), are shown in bold. The empirical results reveal significantly better performance (about 3%) on the different datasets. To understand the working principles of WADN, we evaluate its performance under different levels of source label shift on the Amazon Review dataset (Fig. 1(a)); the results show strong practical benefits for WADN as the label shift grows. In addition, we visualize the task relations on Digits (Fig. 1(b)) and observe a non-uniform $\lambda$, which highlights the importance of properly choosing the most related sources rather than simply merging all the data. For example, when the target domain is SVHN, WADN mainly leverages the information from SYNTH, since the two are more semantically similar, and MNIST does not help much for SVHN (as observed by Ganin et al. (2016)). Additional analysis and results can be found in Appendix O.

5.2 MULTI-SOURCE TRANSFER LEARNING WITH LIMITED TARGET SAMPLES
We adopt Amazon Review and Digits, which are widely used, for multi-source transfer learning with limited target samples. In these experiments we still use shifted sources. We randomly sample only 10% labeled target samples (w.r.t. the target dataset used in unsupervised DA) as the training set and use the remaining 90% as the unseen target test set (see Appendix M for details). We adopt the same hyper-parameters and training strategies as in unsupervised DA. We specifically add the recent baselines RLUS (Konstantinov & Lampert, 2019) and MME (Saito et al., 2019), which also consider transfer learning with labeled target data. The results are reported in Tabs. 4 and 5 and again indicate strong empirical benefits. To show the effectiveness of WADN, we vary the portion of labelled target samples (1%-10%); the results on the USPS dataset in Fig. 1(c) show consistently better performance than the baselines, even with very few target samples.

5.3 PARTIAL UNSUPERVISED MULTI-SOURCE DA
In this scenario, we adopt the Office-Home dataset to evaluate our approach, as it contains a large number of classes (65). We keep the source domains unchanged and randomly choose 35 classes for the target. We evaluate all baselines on the same selected classes and repeat 5 times; all reported results are averaged over 3 different sub-class selections (15 runs in total), as shown in Tab. 6 (see Appendix M for details). We additionally compare with PADA (Cao et al., 2018) by merging all sources and applying this one-to-one partial DA algorithm. We adopt the same hyper-parameters and training strategies as in the unsupervised DA scenario. The reported results are significantly better than current multi-source DA and one-to-one partial DA approaches, which verifies the benefits of WADN: properly estimating $\hat\alpha_t$ and assigning a proper $\lambda$ to each source. Moreover, when we change the number of selected classes (Fig. 2(a)), WADN still delivers consistently better results by a large margin, which indicates the importance of considering $\hat\alpha_t$ and $\lambda$; in contrast, DANN shows unstable results with fewer selected classes (see Appendix P for details). Besides, WADN produces a good estimate of the label distribution ratio (Fig. 2(b)) and correctly detects the non-overlapping classes, which indicates its good explainability.
6 CONCLUSION In this paper, we proposed a new theoretical principled algorithm WADN (Wasserstein Aggregation Domain Network) to solve the multi-source transfer learning problem under target shift. WADN provides a unified solution for various deep multi-source transfer scenarios: learning with limited target data, unsupervised DA, and partial unsupervised DA. We evaluate the proposed method by extensive experiments and show its strong empirical results. A ADDITIONAL EMPIRICAL RESULTS B ADDITIONAL RELATED WORK Multi-source transfer learning Practice has been proposed from various prospective. The key idea is to estimate the importance of different sources and then select the most related ones, to mitigate the influence of negative transfer. In the multi-source unsupervised DA, (Sankaranarayanan et al., 2018; Balaji et al., 2019; Pei et al., 2018; Zhao et al., 2019; Zhu et al., 2019; Zhao et al., 2020; 2019; Stojanov et al., 2019; Li et al., 2019b; Wang et al., 2019b; Lin et al., 2020) proposed different practical strategies in the classification, regression and semantic segmentation problems. In the presence of target labels, Hoffman et al. (2012); Tan et al. (2013); Wei et al. (2017); Yao & Doretto (2010); Konstantinov & Lampert (2019) used generalized linear model to learn the target. Christodoulidis et al. (2016); Li et al. (2019a); Chen et al. (2019) focused on deep learning approaches and Lee et al. (2019) proposed an ad-hoc strategy to combine to sources in the few-shot target domains. These ideas are generally data-driven approaches and do not analyze the why the proposed practice can control the generalization error. Label-Partial Transfer Learning Label-Partial can be viewed as a special case of the label-shift. 1 Most existing works focus on one-to-one partial transfer learning (Zhang et al., 2018; Chen et al., 2020; Bucci et al., 2019; Cao et al., 2019) by adopting the re-weighting training approach without a formal understanding. In our paper, we first rigorously analyzed this common practice and adopt the label distribution ratio as its weights, which provides a principled approach in this scenario. B.1 OTHER SCENARIOS RELATED TO MULTI-SOURCE TRANSFER LEARNING Domain Generalization The domain generalization (DG) resembles multi-source transfer but aims at different goals. A common setting in DG is to learn multiple source but directly predict on the unseen target domain. The conventional DG approaches generally learn a distribution invariant features (Balaji et al., 2018; Saenko et al., 2010; Motiian et al., 2017; Ilse et al., 2019) or conditional distribution invariant features (Li et al., 2018a; Akuzawa et al., 2019). However, our theoretical results reveal that in the presence of label shift (i.e αt(y) 6= 1) and outlier tasks then learning conditional or marginal invariant features can not guarantee a small target risk. Our theoretical result enables a formal understanding about the inherent difficulty in DG problems. Few-Shot Learning The few-shot learning (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018) can be viewed as a very specific scenario of multi-source transfer learning. We would like to point out the differences between the few-shot learning and our paper. (1) Few-shot learning generally involves a very large set of source domains T 1 and each domain consists a modest number of observations NSt . In our paper, we are interested in the a modest number of source domains T but each source domain including a sufficient large number of observations (NSt 1). 
(2) In the target domain, the few-shot setting generally used K-samples (K is very small) for each class for the fine-tuning. We would like to point out this setting generally violates our theoretical assumption. In 1Since supp(T (y)) ⊆ supp(St(y)) then we naturally have T (y) 6= St(y). our paper, we assume the target data is i.i.d. sampled fromD(x, y). It is equivalently viewed that we first i.i.d. sample y ∼ D(y), then i.i.d. sample x ∼ D(x|y). Generally the D(y) is non-uniform, thus few-shot setting are generally not applicable for our theoretical assumptions. Multi-Task Learning The goal of multi-task learning (Zhang & Yang, 2017) aims to improve the prediction performance of all the tasks. In our paper, we aim at controlling the prediction risk of a specified target domain. We also notice some practical techniques are common such as the shared parameter (Zhang & Yeung, 2012), shared representation (Ruder, 2017), etc. C ADDITIONAL FIGURES RELATED TO THE MAIN PAPER D TABLE OF NOTATION E PROOF OF THEOREM 1 Proof idea Theorem 1 consists three steps in the proof: Lemma 2. If the prediction loss is assumed as L-Lipschitz and the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)−h(x2, y)‖2 ≤ K‖x1−x2‖2. Then the target risk can be upper bounded by: RT (h) ≤ ∑ t λ[t]RαtS (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) (4) Proof. The target risk can be expressed as: RT (h(x, y)) = E(x,y)∼T `(h(x, y)) = Ey∼T (y)Ex∼T (x|y)`(h(x, y)) By denoting α(y) = T (y)S(y) , then we have: Ey∼T (y)Ey∼T (x|y)`(h(x, y)) = Ey∼S(y)α(y)Ex∼T (x|y)`(h(x, y)) Then we aim to upper bound Ex∼T (x|y)`(h(x, y)). For any fixed y, Ex∼T (x|y)`(h(x, y))− Ex∼S(x|y)`(h(x, y)) ≤ | ∫ x∈X `(h(x, y))d(T (x|y)− S(x|y))| Then according to the Kantorovich-Rubinstein duality, for any distribution coupling γ ∈ Π(T (x|y),S(x|y)), then we have: = inf γ | ∫ X×X `(h(xp, y))− `(h(xq, y))dγ(xp, xq)| ≤ inf γ ∫ X×X |`(h(xp, y))− `(h(xq, y))|dγ(xp, xq) ≤ L inf γ ∫ X×X |h(xp, y))− h(xq, y)|dγ(xp, xq) ≤ LK inf γ ∫ X×X ‖xp − xq‖2dγ(xp, xq) = LKW1(T (x|Y = y)‖S(x|Y = y)) The first inequality is obvious; and the second inequality comes from the assumption that ` is LLipschitz; the third inequality comes from the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)− h(x2, y)‖2 ≤ K‖x1 − x2‖2. Then we have: RT (h) ≤ Ey∼S(y)α(y)[Ex∼S(x|y)`(h(x, y)) + LKW1(T (x|y)‖S(x|y))] = E(x,y)∼Sα(y)`(h(x, y)) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) = RαS(h) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) Supposing each source St we assign the weight λ[t] and label distribution ratio αt(y) = T (y)St(y) , then by combining this T source target pair, we have: RT (h) ≤ ∑ t λ[t]RαtSt (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖St(x|Y = y)) Then we will prove Theorem 1 from this result, we will derive the non-asymptotic bound, estimated from the finite sample observations. Supposing the empirical label ratio value is α̂t, then for any simplex λ we can prove the high-probability bound. E.1 BOUNDING THE EMPIRICAL AND EXPECTED PREDICTION RISK Proof. We first bound the first term, which can be upper bounded as: sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂α̂tSt (h)| ≤ sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂αtSt (h)|︸ ︷︷ ︸ (I) + sup h | ∑ t λ[t]R̂αtSt (h)− ∑ t λ[t]R̂α̂tSt (h)|︸ ︷︷ ︸ (II) Bounding term (I) According to the McDiarmid inequality, each item changes at most | 2λ[t]αt(y)`NSt |. 
Then we have:

$$P\big((\text{I}) - \mathbb{E}(\text{I}) \ge t\big) \le \exp\Big( \frac{-2t^2}{\sum_{t=1}^T \frac{4}{\beta_t N} \lambda[t]^2 \alpha_t(y)^2 \ell^2} \Big) = \delta$$

By substituting $\delta$, with high probability $1-\delta$ we have:

$$(\text{I}) \le \mathbb{E}(\text{I}) + L_{\max}\, d^{\sup}_\infty \sqrt{\sum_{t=1}^T \frac{\lambda[t]^2}{\beta_t}} \sqrt{\frac{\log(1/\delta)}{2N}}$$

where $L_{\max} = \sup_{h\in\mathcal{H}} \ell(h)$, $N = \sum_{t=1}^T N_{S_t}$ is the total number of source observations, $\beta_t = \frac{N_{S_t}}{N}$ is the frequency ratio of each source, and $d^{\sup}_\infty = \max_{t=1,\dots,T} d_\infty(\mathcal{T}(y)\|\mathcal{S}_t(y)) = \max_{t=1,\dots,T} \max_{y\in[1,|\mathcal{Y}|]} \alpha_t(y)$ is the maximum true label shift value (a constant).

Bounding $\mathbb{E}\sup(\text{I})$: the expectation term can be upper bounded in the form of a Rademacher complexity:

$$\mathbb{E}(\text{I}) \le 2\, \mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{S}}_1^T} \sup_h \sum_{t=1}^T \lambda[t] \sum_{(x_t,y_t)\in\hat{\mathcal{S}}_t} \frac{1}{TN} \sigma\, \alpha_t(y)\, \ell(h(x_t,y_t)) \le 2 \sum_t \lambda[t]\, \mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{S}}_1^T} \sup_h \sum_{(x_t,y_t)\in\hat{\mathcal{S}}_t} \frac{1}{TN} \sigma\, \alpha_t(y)\, \ell(h(x_t,y_t))$$
$$\le 2 \sup_t\, \mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{S}}_t} \sup_h \sum_{(x_t,y_t)\in\hat{\mathcal{S}}_t} \frac{1}{TN} \big[\sigma\, \alpha_t(y)\, \ell(h(x_t,y_t))\big] = \sup_t 2 R_t(\ell, \mathcal{H}) = 2\bar{R}(\ell, \mathcal{H})$$

where $\bar{R}(\ell,\mathcal{H}) = \sup_t R_t(\ell,\mathcal{H}) = \sup_t \sup_{h\in\mathcal{H}} \mathbb{E}_{\hat{\mathcal{S}}_t,\sigma} \sum_{(x_t,y_t)\in\hat{\mathcal{S}}_t} \frac{1}{TN} [\sigma\, \alpha_t(y)\, \ell(h(x_t,y_t))]$ represents the Rademacher complexity w.r.t. the prediction loss $\ell$, the hypothesis $h$, and the true label distribution ratio $\alpha_t$. Therefore, with high probability $1-\delta$ we have:

$$\sup_h \Big| \sum_t \lambda[t] R^{\alpha_t}_{\mathcal{S}_t}(h) - \sum_t \lambda[t] \hat R^{\alpha_t}_{\mathcal{S}_t}(h) \Big| \le 2\bar{R}(\ell,\mathcal{H}) + L_{\max}\, d^{\sup}_\infty \sqrt{\sum_{t=1}^T \frac{\lambda[t]^2}{\beta_t}} \sqrt{\frac{\log(1/\delta)}{2N}}$$

Bounding term (II) For all hypotheses h, we have:

$$\Big| \sum_t \lambda[t] \hat R^{\alpha_t}_{\mathcal{S}_t}(h) - \sum_t \lambda[t] \hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h) \Big| = \Big| \sum_t \lambda[t] \frac{1}{N_{S_t}} \sum_i^{N_{S_t}} \big(\alpha_t(y^{(i)}) - \hat\alpha_t(y^{(i)})\big)\, \ell(h) \Big| = \sum_t \lambda[t] \frac{1}{N_{S_t}} \Big| \sum_y^{|\mathcal{Y}|} \big(\alpha_t(Y=y) - \hat\alpha_t(Y=y)\big)\, \bar\ell(Y=y) \Big|$$

where $\bar\ell(Y=y) = \sum_i^{N_{S_t}} \ell(h(x_i, y_i = y))$ represents the cumulative error conditioned on a given label $Y = y$. According to the Hölder inequality, we have:

$$\sum_t \lambda[t] \frac{1}{N_{S_t}} \Big| \sum_y^{|\mathcal{Y}|} \big(\alpha_t(Y=y) - \hat\alpha_t(Y=y)\big)\, \bar\ell(Y=y) \Big| \le \sum_t \lambda[t] \frac{1}{N_{S_t}} \|\alpha_t - \hat\alpha_t\|_2\, \|\bar\ell(Y=y)\|_2 \le L_{\max} \sum_t \lambda[t] \|\alpha_t - \hat\alpha_t\|_2 \le L_{\max} \sup_t \|\alpha_t - \hat\alpha_t\|_2$$

Therefore, for all $h \in \mathcal{H}$, with high probability $1-\delta$ we have:

$$\sum_t \lambda[t] R^{\alpha_t}_{\mathcal{S}_t}(h) \le \sum_t \lambda[t] \hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h) + 2\bar{R}(\ell,\mathcal{H}) + L_{\max}\, d^{\sup}_\infty \sqrt{\sum_{t=1}^T \frac{\lambda[t]^2}{\beta_t}} \sqrt{\frac{\log(1/\delta)}{2N}} + L_{\max} \sup_t \|\alpha_t - \hat\alpha_t\|_2$$

E.2 BOUNDING THE EMPIRICAL WASSERSTEIN DISTANCE

We then derive the sample complexity between the empirical and true distributions, which can be decomposed into the following two parts. For any t, we have:

$$\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$$
$$\le \underbrace{\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))}_{(\text{I})} + \underbrace{\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))}_{(\text{II})}$$

Bounding (I) We have:

$$\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$$
$$= \sum_y \mathcal{T}(y) \Big( W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \Big)$$
$$\le \Big|\sum_y \mathcal{T}(y)\Big| \sup_y \Big( W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \Big)$$
$$= \sup_y \Big( W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \Big)$$
$$\le \sup_y \Big[ W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{S}}_t(x|Y=y)\|\hat{\mathcal{T}}(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y)) - W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \Big]$$
$$= \sup_y W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y))$$

The first inequality holds because of the Hölder inequality. For the second inequality, we use the triangle inequality of the Wasserstein distance: $W_1(P\|Q) \le W_1(P\|P_1) + W_1(P_1\|P_2) + W_1(P_2\|Q)$. According to the convergence behavior of the Wasserstein distance (Weed et al., 2019), with high probability $\ge 1-2\delta$ we have:

$$W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$; $N^y_{S_t}$ is the number of samples with $Y=y$ in source t, $N^y_T$ is the number of samples with $Y=y$ in the target distribution, and $C_{t,y}, C_y$ and $s_{t,y} > 2, s_y > 2$ are positive constants in the concentration inequality. This indicates the convergence behavior between the empirical and true Wasserstein distance.
If we adopt the union bound (over all the labels) by setting $\delta \leftarrow \delta/|\mathcal{Y}|$, then with high probability $\ge 1-2\delta$ we have:

$$\sup_y W_1(\mathcal{S}_t(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + W_1(\hat{\mathcal{T}}(x|Y=y)\|\mathcal{T}(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2|\mathcal{Y}|}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$.

Again adopting the union bound (over all the tasks) by setting $\delta \leftarrow \delta/T$, with high probability $\ge 1-2\delta$ we have:

$$\sum_t \lambda[t]\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{T}(x|Y=y)\|\mathcal{S}_t(x|Y=y)) - \sum_t \lambda[t]\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sup_t \kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\frac{1}{2}\log(\frac{2T|\mathcal{Y}|}{\delta})}\big(\sqrt{\frac{1}{N^y_{S_t}}} + \sqrt{\frac{1}{N^y_T}}\big)$.

Bounding (II) We can bound the second term:

$$\mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) - \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sup_y W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le C^t_{\max} \Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big|$$

where $C^t_{\max} = \sup_y W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$ is a positive and bounded constant. We then need to bound $|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)|$: by adopting McDiarmid's inequality, with high probability $1-\delta$ we have:

$$\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le \mathbb{E}_{\hat{\mathcal{T}}} \Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| + \sqrt{\frac{\log(1/\delta)}{2N_T}} = 2\,\mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{T}}} \sum_y \sigma \hat{\mathcal{T}}(y) + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$

We then bound $\mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{T}}} \sum_y \sigma \hat{\mathcal{T}}(y)$. Using the properties of Rademacher complexity [Lemma 26.11, (Shalev-Shwartz & Ben-David, 2014)] and noticing that $\hat{\mathcal{T}}(y)$ is a probability simplex, we have:

$$\mathbb{E}_\sigma \mathbb{E}_{\hat{\mathcal{T}}} \sum_y \sigma \hat{\mathcal{T}}(y) \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}}$$

Then we have:

$$\Big|\sum_y \mathcal{T}(y) - \hat{\mathcal{T}}(y)\Big| \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$

Then, using the union bound and denoting $\delta \leftarrow \delta/T$, with high probability $\ge 1-\delta$ and for any simplex $\lambda$ we have:

$$\sum_t \lambda[t]\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) \le \sum_t \lambda[t]\, \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$

where $C_{\max} = \sup_t C^t_{\max}$. Combining everything together, we can derive the PAC-learning bound, estimated from the finite samples (with high probability $1-4\delta$):

$$R_\mathcal{T}(h) \le \sum_t \lambda[t] \hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h) + LK \sum_t \lambda[t]\, \mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y)) + L_{\max}\, d^{\sup}_\infty \sqrt{\sum_{t=1}^T \frac{\lambda[t]^2}{\beta_t}} \sqrt{\frac{\log(1/\delta)}{2N}}$$
$$+\ 2\bar{R}(\ell,\mathcal{H}) + L_{\max} \sup_t \|\alpha_t - \hat\alpha_t\|_2 + \sup_t \kappa(\delta, N^y_{S_t}, N^y_T) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$

We then denote $\text{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta) = 2\bar{R}(\ell,\mathcal{H}) + \sup_t \kappa(\delta, N^y_{S_t}, N^y_T) + C_{\max}\big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\big)$ as the convergence-rate function, which decreases with larger $N_{S_1},\dots,N_T$. Besides, $\bar{R}(\ell,\mathcal{H}) = \sup_t R_t(\ell,\mathcal{H})$ is the re-weighted Rademacher complexity. Given a fixed hypothesis class with finite VC dimension², it can be proved that $\bar{R}(\ell,\mathcal{H}) = O\big(\sqrt{1/\min_t N_{S_t}}\big)$, e.g., (Shalev-Shwartz & Ben-David, 2014).

² If the hypothesis is a neural network, the Rademacher complexity can still be bounded analogously.

F PROOF OF THEOREM 2

We first recall the stochastic feature representation $g: \mathcal{X} \to \mathcal{Z}$, the scoring hypothesis $h: \mathcal{Z} \times \mathcal{Y} \to \mathbb{R}$, and the prediction loss $\ell: \mathbb{R} \to \mathbb{R}$.³

Proof. The marginal and conditional distributions w.r.t. the latent variable $\mathcal{Z}$ induced by g can be formulated as:

$$\mathcal{S}(z) = \int_x g(z|x)\,\mathcal{S}(x)\,dx \qquad \mathcal{S}(z|y) = \int_x g(z|x)\,\mathcal{S}(x|Y=y)\,dx$$

In the multi-class classification problem, we additionally define the following distributions:

$$\mu_k(z) = \mathcal{S}(Y=k, z) = \mathcal{S}(Y=k)\,\mathcal{S}(z|Y=k) \qquad \pi_k(z) = \mathcal{T}(Y=k, z) = \mathcal{T}(Y=k)\,\mathcal{T}(z|Y=k)$$

Based on (Nguyen et al., 2009), since $g(z|x)$ is a stochastic representation learning function, the loss conditioned on a fixed point (x, y) w.r.t. h and g is $\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y))$.
Then, taking the expectation over $\mathcal{S}(x,y)$, we have:⁴

$$R_\mathcal{S}(h,g) = \mathbb{E}_{(x,y)\sim\mathcal{S}(x,y)} \mathbb{E}_{z\sim g(z|x)}\, \ell(h(z,y)) = \sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}(y=k) \int_x \mathcal{S}(x|Y=k) \int_z g(z|x)\, \ell(h(z, y=k))\, dz\, dx$$
$$= \sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}(y=k) \int_z \Big[\int_x \mathcal{S}(x|Y=k)\, g(z|x)\, dx\Big] \ell(h(z,y=k))\, dz = \sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}(y=k) \int_z \mathcal{S}(z|Y=k)\, \ell(h(z,y=k))\, dz$$
$$= \sum_{k=1}^{|\mathcal{Y}|} \int_z \mathcal{S}(z, Y=k)\, \ell(h(z,y=k))\, dz = \sum_{k=1}^{|\mathcal{Y}|} \int_z \mu_k(z)\, \ell(h(z,y=k))\, dz$$

Intuitively, the expected loss w.r.t. the joint distribution $\mathcal{S}$ can be decomposed into the expected loss on the label distribution $\mathcal{S}(y)$ (weighting by the labels) and on the conditional distribution $\mathcal{S}(\cdot|y)$ (the real-valued conditional loss). Then the expected risks on $\mathcal{S}$ and $\mathcal{T}$ can be expressed as:

$$R_\mathcal{S}(h,g) = \sum_{k=1}^{|\mathcal{Y}|} \int_z \ell(h(z,y=k))\, \mu_k(z)\, dz \qquad R_\mathcal{T}(h,g) = \sum_{k=1}^{|\mathcal{Y}|} \int_z \ell(h(z,y=k))\, \pi_k(z)\, dz$$

³ Note that this definition is different from the conventional binary classification with binary output, and it is more suitable for the multi-class scenario and the cross-entropy loss (Hoffman et al., 2018a). For example, if we define $\ell = -\log(\cdot)$ and $h(z,y) \in (0,1)$ as a scalar score output, then $\ell(h(z,y))$ can be viewed as the cross-entropy loss of a neural network.

⁴ An alternative understanding is based on the Markov chain: the DAG $Y \leftarrow X \rightarrow Z \rightarrow S \leftarrow Y$, where $X \to Y$ follows $\mathcal{S}(y|x)$, $X \to Z$ follows g, and S is the output of the scoring function. The expected loss over all random variables can be equivalently written as $\int \mathbb{P}(x,y,z,s)\,\ell(s)\, d(x,y,z,s) = \int \mathbb{P}(x)\mathbb{P}(y|x)\mathbb{P}(z|x)\mathbb{P}(s|z,y)\,\ell(s) = \int \mathbb{P}(x,y)\mathbb{P}(z|x)\mathbb{P}(s|z,y)\,\ell(s)\, d(x,y)\,d(z)\,d(s)$. Since the score S is determined by $h(z,y)$, we have $\mathbb{P}(s|y,z) = 1$. According to the definitions, $\mathbb{P}(z|x) = g(z|x)$ and $\mathbb{P}(x,y) = \mathcal{S}(x,y)$, so the loss can finally be expressed as $\mathbb{E}_{\mathcal{S}(x,y)}\mathbb{E}_{g(z|x)}\,\ell(h(z,y))$.

By denoting $\alpha(y) = \frac{\mathcal{T}(y)}{\mathcal{S}(y)}$, we have the $\alpha$-weighted loss:

$$R^\alpha_\mathcal{S}(h,g) = \mathcal{T}(Y=1)\int_z \ell(h(z,y=1))\,\mathcal{S}(z|Y=1)\,dz + \mathcal{T}(Y=2)\int_z \ell(h(z,y=2))\,\mathcal{S}(z|Y=2)\,dz + \cdots + \mathcal{T}(Y=k)\int_z \ell(h(z,y=k))\,\mathcal{S}(z|Y=k)\,dz$$

Then we have:

$$R_\mathcal{T}(h,g) - R^\alpha_\mathcal{S}(h,g) \le \sum_k \mathcal{T}(Y=k) \int_z \ell(h(z,y=k))\, d\big|\mathcal{S}(z|Y=k) - \mathcal{T}(z|Y=k)\big|$$

Under the same assumptions, the loss function $\ell(h(z,Y=k))$ is (KL)-Lipschitz w.r.t. the cost $\|\cdot\|_2$ (for a fixed k). Therefore, adopting the same proof strategy (Kantorovich-Rubinstein duality) as in Lemma 2, we have:

$$\le KL\, \mathcal{T}(Y=1)\, W_1(\mathcal{S}(z|Y=1)\|\mathcal{T}(z|Y=1)) + \cdots + KL\, \mathcal{T}(Y=k)\, W_1(\mathcal{S}(z|Y=k)\|\mathcal{T}(z|Y=k)) = KL\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{S}(z|Y=y)\|\mathcal{T}(z|Y=y))$$

Therefore we have:

$$R_\mathcal{T}(h,g) \le R^\alpha_\mathcal{S}(h,g) + LK\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{S}(z|Y=y)\|\mathcal{T}(z|Y=y))$$

Based on this result, for each $t = 1,\dots,T$, denoting $\mathcal{S} = \mathcal{S}_t$ and $\alpha(y) = \alpha_t(y) = \mathcal{T}(y)/\mathcal{S}_t(y)$:

$$\lambda[t]\, R_\mathcal{T}(h,g) \le \lambda[t]\, R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK\, \lambda[t]\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{S}_t(z|Y=y)\|\mathcal{T}(z|Y=y))$$

Summing over $t = 1,\dots,T$, we have:

$$R_\mathcal{T}(h,g) \le \sum_{t=1}^T \lambda[t]\, R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK \sum_{t=1}^T \lambda[t]\, \mathbb{E}_{y\sim\mathcal{T}(y)} W_1(\mathcal{S}_t(z|Y=y)\|\mathcal{T}(z|Y=y))$$

G APPROXIMATION OF THE W1 DISTANCE

According to the Jensen inequality, we have:

$$W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y)) \le \sqrt{[W_2(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))]^2}$$

Supposing $\hat{\mathcal{S}}_t(z|Y=y) \approx \mathcal{N}(\mathbf{C}^y_t, \Sigma)$ and $\hat{\mathcal{T}}(z|Y=y) \approx \mathcal{N}(\mathbf{C}^y, \Sigma)$, we have:

$$[W_2(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))]^2 = \|\mathbf{C}^y_t - \mathbf{C}^y\|_2^2 + \text{Trace}\big(2\Sigma - 2(\Sigma\Sigma)^{1/2}\big) = \|\mathbf{C}^y_t - \mathbf{C}^y\|_2^2$$

We would like to point out that assuming an identical covariance matrix is more computationally efficient during the matching. This is advantageous and reasonable in the deep learning regime: we adopt mini-batches (ranging from 20 to 128) for the neural network parameter optimization, and since each mini-batch contains only a few samples of each class, the computed empirical covariance matrix would surely be biased from the ground-truth covariance and would induce a much higher optimization complexity. On the contrary, the empirical mean is unbiased and computationally efficient, and we can simply use a moving average to efficiently update the estimated mean values (with an unbiased estimator). The empirical results verify the effectiveness of this idea.
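To make the feature-mean-matching surrogate above concrete, here is a minimal PyTorch-style sketch of the explicit conditional loss $\sum_y \hat{\mathcal{T}}(y)\|\mathbf{C}^y_t - \mathbf{C}^y\|_2$. The helper names and the use of plain batch means as centroids are our own illustrative assumptions; the full method additionally smooths the centroids with the moving average described in Appendix L.

```python
import torch

def class_centroids(z, y, num_classes):
    # Per-class mean of latent features; classes absent from the batch keep a zero centroid.
    c = torch.zeros(num_classes, z.size(1), device=z.device)
    for k in range(num_classes):
        mask = (y == k)
        if mask.any():
            c[k] = z[mask].mean(dim=0)
    return c

def explicit_conditional_loss(c_src, c_tgt, target_label_dist):
    # sum_y T(y) * ||C_t^y - C^y||_2, the Gaussian/shared-covariance surrogate
    # for the label-weighted conditional W1 distance derived above.
    return (target_label_dist * (c_src - c_tgt).norm(dim=1)).sum()
```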
H PROOF OF LEMMA 1

For each source $\mathcal{S}_t$, by introducing the duality of the Wasserstein-1 distance, for $y \in \mathcal{Y}$ we have:

$$W_1(\mathcal{S}_t(z|y)\|\mathcal{T}(z|y)) = \sup_{\|d\|_L \le 1} \mathbb{E}_{z\sim\mathcal{S}_t(z|y)} d(z) - \mathbb{E}_{z\sim\mathcal{T}(z|y)} d(z) = \sup_{\|d\|_L \le 1} \sum_z \mathcal{S}_t(z|y)\, d(z) - \sum_z \mathcal{T}(z|y)\, d(z)$$
$$= \frac{1}{\mathcal{T}(y)} \sup_{\|d\|_L \le 1} \frac{\mathcal{T}(y)}{\mathcal{S}_t(y)} \sum_z \mathcal{S}_t(z,y)\, d(z) - \sum_z \mathcal{T}(z,y)\, d(z)$$

Then, by defining $\bar\alpha_t(z) = \mathbb{1}_{\{(z,y)\sim\mathcal{S}_t\}} \frac{\mathcal{T}(Y=y)}{\mathcal{S}_t(Y=y)} = \mathbb{1}_{\{(z,y)\sim\mathcal{S}_t\}}\, \alpha_t(Y=y)$, we can see that for each pair (z, y) sampled from the same distribution, $\bar\alpha_t(Z=z) = \alpha_t(Y=y)$. Then we have:

$$\sum_y \mathcal{T}(y)\, W_1(\mathcal{S}_t(z|y)\|\mathcal{T}(z|y)) = \sum_y \sup_{\|d\|_L\le1} \Big\{ \sum_z \alpha_t(y)\,\mathcal{S}_t(z,y)\,d(z) - \sum_z \mathcal{T}(z,y)\,d(z) \Big\}$$
$$= \sup_{\|d\|_L\le1} \sum_z \bar\alpha_t(z)\,\mathcal{S}_t(z)\,d(z) - \sum_z \mathcal{T}(z)\,d(z) = \sup_{\|d\|_L\le1} \mathbb{E}_{z\sim\mathcal{S}_t(z)}\, \bar\alpha_t(z)\,d(z) - \mathbb{E}_{z\sim\mathcal{T}(z)}\, d(z)$$

We propose a simple example to understand $\bar\alpha_t$: supposing three samples $\mathcal{S}_t = \{(z_1, Y=1), (z_2, Y=1), (z_3, Y=0)\}$, then $\bar\alpha_t(z_1) = \bar\alpha_t(z_2) = \alpha_t(1)$ and $\bar\alpha_t(z_3) = \alpha_t(0)$. Therefore, the conditional term is equivalent to label-weighted Wasserstein adversarial learning. Plugging in the weight $\lambda[t]$ and the domain discriminator $d_t$ for each source domain, we finally obtain Lemma 1.

I DERIVING THE LABEL RATIO LOSS

We suppose the representation learning aims at matching the conditional distributions such that $\mathcal{T}(z|y) \approx \mathcal{S}_t(z|y), \forall t$, and we write the predicted target label distribution as $\bar{\mathcal{T}}(y)$. Simplifying the notation, we define $f(z) = \text{argmax}_y\, h(z,y)$, the most probable predicted label; then we have:

$$\bar{\mathcal{T}}(y) = \sum_{k=1}^{|\mathcal{Y}|} \mathcal{T}(f(z)=y \,|\, Y=k)\,\mathcal{T}(Y=k) = \sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}_t(f(z)=y \,|\, Y=k)\,\mathcal{T}(Y=k) = \sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}_t(f(z)=y, Y=k)\,\alpha_t(k) = \bar{\mathcal{T}}_{\alpha_t}(y)$$

The first equality comes from the definition of the target label prediction distribution: $\bar{\mathcal{T}}(y) = \mathbb{E}_{\mathcal{T}(z)}\mathbb{1}\{f(z)=y\} = \mathcal{T}(f(z)=y) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{T}(f(z)=y, Y=k) = \sum_{k=1}^{|\mathcal{Y}|}\mathcal{T}(f(z)=y|Y=k)\,\mathcal{T}(Y=k)$. The second equality, $\mathcal{T}(f(z)=y|Y=k) = \mathcal{S}_t(f(z)=y|Y=k)$, holds since $\forall t$, $\mathcal{T}(z|y) \approx \mathcal{S}_t(z|y)$: for the shared hypothesis f, we then have $\mathcal{T}(f(z)=y|Y=k) = \mathcal{S}_t(f(z)=y|Y=k)$. The term $\mathcal{S}_t(f(z)=y, Y=k)$ is the (expected) source prediction confusion matrix, and we denote its empirical (observed) version as $\hat{\mathcal{S}}_t(f(z)=y, Y=k)$.

Based on this idea, in practice we want to find an $\hat\alpha_t$ that matches the two predicted distributions $\bar{\mathcal{T}}$ and $\bar{\mathcal{T}}_{\hat\alpha_t}$. If we adopt the KL divergence as the metric, we have:

$$\min_{\hat\alpha_t} D_{KL}(\bar{\mathcal{T}}\|\bar{\mathcal{T}}_{\hat\alpha_t}) = \min_{\hat\alpha_t} \mathbb{E}_{y\sim\bar{\mathcal{T}}} \log\Big(\frac{\bar{\mathcal{T}}(y)}{\bar{\mathcal{T}}_{\hat\alpha_t}(y)}\Big) = \min_{\hat\alpha_t} -\mathbb{E}_{y\sim\bar{\mathcal{T}}} \log(\bar{\mathcal{T}}_{\hat\alpha_t}(y)) = \min_{\hat\alpha_t} -\sum_y \bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|} \mathcal{S}_t(f(z)=y, Y=k)\,\hat\alpha_t(k)\Big)$$

We should notice the natural constraints on the label ratio: $\{\hat\alpha_t(y) \ge 0,\ \sum_y \hat\alpha_t(y)\hat{\mathcal{S}}_t(y) = 1\}$. Based on this principle, we propose the optimization problem for estimating each label ratio. Adopting the empirical counterpart, the empirical confusion matrix $\mathbf{C}_{\hat{\mathcal{S}}_t}[y,k] = \hat{\mathcal{S}}_t[f(z)=y, Y=k]$, the optimization loss can be expressed as:

$$\min_{\hat\alpha_t}\ -\sum_{y=1}^{|\mathcal{Y}|} \bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|} \mathbf{C}_{\hat{\mathcal{S}}_t}[y,k]\,\hat\alpha_t(k)\Big) \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat\alpha_t(y)\ge0,\ \sum_y \hat\alpha_t(y)\hat{\mathcal{S}}_t(y)=1$$

J LABEL-PARTIAL MULTI-SOURCE UNSUPERVISED DA

The key difference between conventional and partial multi-source unsupervised DA is the estimation step for $\hat\alpha_t$. In fact, we only add a sparsity constraint when estimating each $\hat\alpha_t$:

$$\min_{\hat\alpha_t}\ -\sum_{y=1}^{|\mathcal{Y}|} \bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|} \mathbf{C}_{\hat{\mathcal{S}}_t}[y,k]\,\hat\alpha_t(k)\Big) + C_2\|\hat\alpha_t\|_1 \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat\alpha_t(y)\ge0,\ \sum_y \hat\alpha_t(y)\hat{\mathcal{S}}_t(y)=1 \tag{5}$$

where $C_2$ is the hyper-parameter controlling the sparsity level of the estimated target label distribution. In the paper, we set $C_2 = 0.1$.
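Since the label-ratio programs are solved with CVXPY in practice (Appendix L), the following is a minimal sketch of Eq. (5) under that assumption; setting C2 = 0 recovers the plain objective of Eq. (2). The function and argument names are ours, not from a released implementation.

```python
import cvxpy as cp
import numpy as np

def estimate_label_ratio(C_hat, T_bar, S_hat, C2=0.1):
    """Solve Eq. (5): C_hat[y, k] is the empirical source confusion matrix,
    T_bar the predicted target label distribution, S_hat the empirical source
    label distribution; C2 controls the sparsity of alpha (0 gives Eq. (2))."""
    Y = len(T_bar)
    alpha = cp.Variable(Y, nonneg=True)
    # Negative weighted log-likelihood is convex in alpha; the l1 term adds sparsity.
    objective = -T_bar @ cp.log(C_hat @ alpha) + C2 * cp.norm1(alpha)
    constraints = [S_hat @ alpha == 1]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return alpha.value
```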
K EXPLICIT AND IMPLICIT CONDITIONAL LEARNING

Inspired by Theorem 2, we need to learn the functions $g: \mathcal{X}\to\mathcal{Z}$ and $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$ that minimize:

$$\min_{g,h}\ \sum_t \lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))$$

This can be equivalently expressed as:

$$\min_{g,h}\ \sum_t \lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g) + \epsilon C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y)) + (1-\epsilon)C_0\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))$$

Due to the explicit and implicit approximations of the conditional distance, we then optimize an alternative form:

$$\min_{g,h}\max_{d_1,\dots,d_T}\ \underbrace{\sum_t \lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g)}_{\text{Classification Loss}} + \epsilon C_0\underbrace{\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)}\|\mathbf{C}^y_t - \mathbf{C}^y\|_2}_{\text{Explicit Conditional Loss}} + (1-\epsilon)C_0\underbrace{\sum_t \lambda[t]\big[\mathbb{E}_{z\sim\hat{\mathcal{S}}_t(z)}\bar\alpha_t(z)\,d_t(z) - \mathbb{E}_{z\sim\hat{\mathcal{T}}(z)}d_t(z)\big]}_{\text{Implicit Conditional Loss}} \tag{6}$$

where:
- $\mathbf{C}^y_t = \sum_{(z_t,y_t)\sim\hat{\mathcal{S}}_t} \mathbb{1}_{\{y_t=y\}} z_t$, the centroid of label $Y=y$ in source $\mathcal{S}_t$.
- $\mathbf{C}^y = \sum_{(z_t,y_p)\sim\hat{\mathcal{T}}} \mathbb{1}_{\{y_p=y\}} z_t$, the centroid of pseudo-label $Y=y_p$ in the target (in the unsupervised DA scenarios).
- $\bar\alpha_t(z) = \mathbb{1}_{\{(z,y)\sim\mathcal{S}_t\}}\hat\alpha_t(Y=y)$; namely, for each pair (z, y) from the distribution, $\bar\alpha_t(Z=z) = \hat\alpha_t(Y=y)$.
- $d_1,\dots,d_T$ are domain discriminators (or critic functions) restricted to 1-Lipschitz functions.
- $\epsilon \in [0,1]$ is the adjustment parameter in the trade-off between explicit and implicit learning. Based on the equivalent form, our approach provides a theoretically principled way of tuning this weight. In the paper, we set $\epsilon = 0.5$.
- $\hat{\mathcal{T}}(y)$ is the empirical target label distribution (in the unsupervised DA scenarios, we approximate it by the predicted target label distribution $\bar{\mathcal{T}}(y)$).

Gradient Penalty In order to enforce the Lipschitz property of the critic functions, we adopt the gradient penalty term (Gulrajani et al., 2017). More concretely, given two samples $z_s \sim \mathcal{S}_t(z)$ and $z_t \sim \mathcal{T}(z)$, we generate an interpolated sample $z_{int} = \xi z_s + (1-\xi)z_t$ with $\xi \sim \text{Unif}[0,1]$. Then we add a gradient penalty $\|\nabla d(z_{int})\|^2_2$ as a regularization term to control the Lipschitz property of the discriminators $d_1,\dots,d_T$.
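As a concrete illustration of the penalty just described, below is a minimal PyTorch sketch following the interpolation scheme of Gulrajani et al. (2017); the function name and batch handling are our own assumptions.

```python
import torch

def gradient_penalty(critic, z_src, z_tgt):
    # Interpolate latent samples: z_int = xi * z_s + (1 - xi) * z_t, xi ~ Unif[0, 1]
    xi = torch.rand(z_src.size(0), 1, device=z_src.device)
    z_int = (xi * z_src + (1 - xi) * z_tgt).requires_grad_(True)
    d_out = critic(z_int)
    # ||grad d(z_int)||_2^2, averaged over the batch, added as a regularizer
    grads = torch.autograd.grad(d_out.sum(), z_int, create_graph=True)[0]
    return grads.norm(2, dim=1).pow(2).mean()
```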
L ALGORITHM DESCRIPTIONS

We give a detailed pipeline of the proposed algorithm below, shown in Algorithms 2 and 3. As for updating $\lambda$ and $\alpha_t$, we iteratively solve the convex optimization problems after each training epoch and update them using the moving-average technique. When solving for $\lambda$ and $\alpha_t$, we notice that frequently updating these two parameters at the mini-batch level leads to unstable training.⁵ As a consequence, we compute the accumulated confusion matrix, weighted prediction risk, and conditional Wasserstein distance over the whole training epoch and then solve the optimization problems. We use CVXPY to optimize the two standard convex losses.⁶

Comparison of time and memory complexity We discuss the time and memory complexity of our approach.

Time complexity: For each batch we need to compute T re-weighted losses, T domain adversarial losses, and T explicit conditional losses, so the computational complexity is still O(T) during mini-batch training, which is comparable with recent SOTA such as MDAN and DARN. In addition, after each training epoch we need to estimate $\alpha_t$ and $\lambda$, which has time complexity $O(T|\mathcal{Y}|)$ per epoch (if we adopt SGD to solve these two convex problems). Therefore, the proposed algorithm has time complexity $O(T|\mathcal{Y}|)$; the extra $|\mathcal{Y}|$ term is due to handling label shift in the designed algorithm.

Memory complexity: Our proposed approach requires O(T) domain discriminators and $O(T|\mathcal{Y}|)$ class-feature centroids. On the contrary, MDAN and DARN require O(T) domain discriminators, and M3SDA and MDMN require $O(T^2)$ domain discriminators. Since our class-feature centroids are defined in the latent space $\mathcal{Z}$, their memory footprint can be much smaller than that of domain discriminators.

⁵ In the label distribution shift scenarios, the mini-batch datasets are highly label-imbalanced. If we evaluate $\alpha_t$ over a mini-batch, it can be computationally expensive and unstable.

⁶ The optimization problems w.r.t. $\alpha_t$ and $\lambda$ are not large-scale, so using a standard convex solver is fast and accurate.

Algorithm 2 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$
Ensure: Label distribution ratio $\hat\alpha_t$ and task relation simplex $\lambda$; feature learner g; classifier h; critic functions $d_1,\dots,d_T$; class centroids $\mathbf{C}^y_t$ (source) and $\mathbf{C}^y$ (target) for $\forall t\in[1,T], y\in\mathcal{Y}$.
1: /// DNN parameter training stage (fixed $\hat\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,\ (\mathbf{x}_\mathcal{T})\sim\hat{\mathcal{T}}$ do
3:   Predict the target pseudo-labels $\bar{\mathbf{y}}_\mathcal{T} = \text{argmax}_y\,h(g(\mathbf{x}_\mathcal{T}),y)$
4:   Compute the (un-normalized) source confusion matrix for each batch: $\mathbf{C}_{\hat{\mathcal{S}}_t} = \#[\text{argmax}_{y'}h(z,y')=y, Y=k]$ $(t=1,\dots,T)$
5:   Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ (source) and $\bar{\mathbf{C}}^y$ (target)
6:   Moving-average update of the source/target class centroids (we set $\epsilon_1=0.7$):
7:     Update source class centroids: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
8:     Update target class centroids: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
9:   Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
10: end for
11: /// Estimation of $\hat\alpha_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf{C}_{\hat{\mathcal{S}}_t} = \hat{\mathcal{S}}_t[\text{argmax}_{y'}h(z,y')=y, Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}^T_{t=1}$) by Equation (2) (or Eq. (5) in the partial scenario)
14: Update $\alpha_t$ by moving average: $\alpha_t = \epsilon_1\alpha_t + (1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) from Sec. 4.3
16: Update $\lambda$ by moving average: $\lambda = 0.8\lambda + 0.2\lambda'$

Algorithm 3 Wasserstein Aggregation Domain Network (limited target data, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$, label shift ratios $\alpha_t$
Ensure: Task relation simplex $\lambda$; feature learner g; classifier h; critic functions $d_1,\dots,d_T$; class centroids $\mathbf{C}^y_t$ (source) and $\mathbf{C}^y$ (target) for $\forall t\in[1,T], y\in\mathcal{Y}$.
1: /// DNN parameter training stage (fixed $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,\ (\mathbf{x}_\mathcal{T})\sim\hat{\mathcal{T}}$ do
3:   Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ (source) and $\bar{\mathbf{C}}^y$ (target)
4:   Moving-average update of the source/target class centroids (we set $\epsilon_1=0.7$):
5:     Update source class centroids: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
6:     Update target class centroids: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
7:   Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
8: end for
9: /// Estimation of $\lambda$ ///
10: Solve for $\lambda$ by Sec. 4.3 (denoted $\lambda'$)
11: Update $\lambda$ by moving average: $\lambda = \epsilon_1\lambda + (1-\epsilon_1)\lambda'$
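For completeness, here is a minimal CVXPY sketch of the λ-update in step 15 of Algorithm 2, i.e., the convex problem of Sec. 4.3 / Eq. (3); the argument names and default constants are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def solve_task_weights(risk, cond_dist, beta, C0=1.0, C1=1.0):
    """risk[t]: alpha-weighted source loss; cond_dist[t]: weighted centroid
    distance; beta[t] = N_{S_t} / N. Returns the simplex weights of Eq. (3)."""
    T = len(risk)
    lam = cp.Variable(T, nonneg=True)
    # sqrt(sum_t lam[t]^2 / beta[t]) is the 2-norm of lam / sqrt(beta).
    obj = risk @ lam + C0 * (cond_dist @ lam) \
        + C1 * cp.norm(cp.multiply(lam, 1.0 / np.sqrt(beta)), 2)
    cp.Problem(cp.Minimize(obj), [cp.sum(lam) == 1]).solve()
    return lam.value
```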
M DATASET DESCRIPTION AND EXPERIMENTAL DETAILS

M.1 AMAZON REVIEW DATASET

We used the Amazon review dataset (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive (label "1") and negative (label "0") product reviews. The data sizes are 6465 (Books), 5586 (DVD), 7681 (Electronics), and 7945 (Kitchen). We follow the common data pre-processing strategies of Chen et al. (2012): we use bag-of-words (BOW) features and extract the top-5000 most frequent unigrams and bigrams of all the reviews. We also noticed the original data-set are
1. What is the main contribution of the paper in unsupervised domain adaptation?
2. What are the strengths of the proposed approach, particularly in its ability to handle different scenarios?
3. What are the weaknesses of the paper regarding its comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
This paper has made a good attempt to provide a unified approach for unsupervised domain adaptation. The proposed approach is applicable to three scenarios which have been traditionally treated as three separate problems. The three problems that are treated in a unified way are unsupervised domain adaptation (UDA), limited target labels, and partially unsupervised domain adaptation. Another feature of the proposed approach is that it deals with target shift without assuming that conditional distributions are identical, a more realistic assumption for real-world problems. Results on three different benchmark datasets are provided. Results show uniform improvements in the range of 2-6% over the methods compared in the various tables. The paper can be improved by providing comparisons with recent UDA and domain generalization methods from Balaji, Sankaranarayanan (CVPR 2018, NIPS 2018) and Balaji and Feizi (ICCV 2019). The ICCV paper also uses the Wasserstein distance for unsupervised domain adaptation. Given that one of the problems considered is UDA, I am not sure why the authors have not compared their approach on the Office dataset that is used in UDA papers. A recent paper from Saenko and Trevor Darrell [Saito, K., Kim, D., Sclaroff, S., Darrell, T., & Saenko, K. (2019). Semi-supervised Domain Adaptation via Minimax Entropy. arXiv preprint arXiv:1904.06487] has considered the problem of small source, small target, and large unlabeled target data as a domain adaptation problem. Since the authors consider the limited target labels problem as one of the cases, comparisons with this paper should also be provided.
ICLR
Title: Unified Principles For Multi-Source Transfer Learning Under Label Shifts

Abstract

We study the label shift problem in multi-source transfer learning and derive new generic principles. Our proposed framework unifies the principles of conditional feature alignment, label distribution ratio estimation, and domain relation weight estimation. Based on the inspired practical principles, we provide a unified practical framework for three multi-source label-shift transfer scenarios: learning with limited target data, unsupervised domain adaptation, and label-partial unsupervised domain adaptation. We evaluate the proposed method on these scenarios by extensive experiments and show that our proposed algorithm can significantly outperform the baselines.

1 INTRODUCTION

Transfer learning (Pan & Yang, 2009) is based on the motivation that learning a new task is easier after having learned several similar tasks. By learning the inductive bias from a set of related source domains $(\mathcal{S}_1,\dots,\mathcal{S}_T)$ and then leveraging the shared knowledge upon learning the target domain $\mathcal{T}$, the prediction performance can be significantly improved. Based on this, transfer learning arises in deep learning applications such as computer vision (Zhang et al., 2019; Tan et al., 2018; Hoffman et al., 2018b), natural language processing (Ruder et al., 2019; Houlsby et al., 2019), and biomedical engineering (Raghu et al., 2019; Lundervold & Lundervold, 2019; Zhang & An, 2017).

To ensure a reliable transfer, it is critical to understand the theoretical assumptions about the relations between the domains. One implicit assumption in most transfer learning algorithms is that the label proportions remain unchanged across different domains (Du Plessis & Sugiyama, 2014) (i.e., $\mathcal{S}(y) = \mathcal{T}(y)$). However, in many real-world applications the label distributions can vary markedly (i.e., label shift) (Wen et al., 2014; Lipton et al., 2018; Li et al., 2019b), in which case existing approaches cannot guarantee a small target generalization error, as was recently proved by Combes et al. (2020). Moreover, transfer learning becomes more challenging when transferring knowledge from multiple sources to build a model for the target domain, as this requires effectively selecting and leveraging the most useful source domains when label shift occurs. This is not only theoretically interesting but also commonly encountered in real-world applications. For example, in medical diagnostics, the disease distribution changes over countries (Liu et al., 2004; Geiss et al., 2014). Considering the task of diagnosing a disease in a country without sufficient data, how can we leverage the information from different countries with abundant data to help the diagnosis? Obviously, naïvely combining all the sources and applying a one-to-one single-source transfer learning algorithm can lead to undesired results, as it can include low-quality or even untrusted data from certain sources, which can severely degrade the performance.

In this paper, we study the label shift problem in multi-source transfer learning, where $\mathcal{S}_t(y) \neq \mathcal{T}(y)$. We propose unified principles that are applicable to three common transfer scenarios: unsupervised domain adaptation (DA) (Ben-David et al., 2010), limited target labels (Mansour et al., 2020), and partial unsupervised DA with $\text{supp}(\mathcal{T}(y)) \subseteq \text{supp}(\mathcal{S}_t(y))$ (Cao et al., 2018), where prior works generally treated them as separate scenarios.
It should be noted that this work deals with target shift without assuming that the semantic conditional distributions are identical (i.e., we allow $\mathcal{S}_t(x|y) \neq \mathcal{T}(x|y)$), which is more realistic for real-world problems. Our contributions in this paper are two-fold:

(I) We propose to use the Wasserstein distance (Arjovsky et al., 2017) to develop a new target generalization risk upper bound (Theorem 1), which reveals the importance of label distribution ratio estimation and provides a principled guideline for learning the domain relation coefficients. Moreover, we provide a theoretical analysis in the context of representation learning (Theorem 2), which guides learning a feature function that minimizes the conditional Wasserstein distance as well as controls the weighted source risk. We further reveal that the differences among the aforementioned three scenarios lie in the different assumptions used for estimating the label distribution ratio.

(II) Inspired by the theoretical results, we propose the Wasserstein Aggregation Domain Network (WADN) for handling label shift in multi-source transfer learning. We evaluate our algorithm on three benchmark datasets, and the results show that our algorithm can significantly outperform state-of-the-art principled approaches.

2 RELATED WORK

Multi-Source Transfer Learning Theories have been investigated in the previous literature with different principles for aggregating source domains. In the popular unsupervised DA setting, (Zhao et al., 2018; Peng et al., 2019; Wen et al., 2020; Li et al., 2018b) adopted the H-divergence (Ben-David et al., 2007), the discrepancy (Mansour et al., 2009), and the Wasserstein distance (Arjovsky et al., 2017) of the marginal distributions $d(\mathcal{S}_t(x), \mathcal{T}(x))$ to estimate domain relations and dynamically leverage different domains. The corresponding bounds generally consist of a source risk, a domain discrepancy, and an unobservable term $\eta$, the optimal risk on all the domains, which is ignored in these approaches. However, as Combes et al. (2020) pointed out, ignoring the influence of $\eta$ is problematic when the label distributions of the source and target domains are significantly different. It is therefore necessary to take $\eta$ into consideration when a small amount of labelled target data is available (Wen et al., 2020). Following this line, very recent works (Konstantinov & Lampert, 2019; Wang et al., 2019a; Mansour et al., 2020) started to measure the divergence between two domains given label information for the target domain by using the Y-discrepancy (Mohri & Medina, 2012). However, we empirically show that these methods are still unable to handle label shift.

Label Shift Label shift (Zhang et al., 2013; Gong et al., 2016) is a common phenomenon in transfer learning, with $\mathcal{S}(y) \neq \mathcal{T}(y)$, and it is generally ignored by previous multi-source transfer learning practice. Several theoretically principled approaches have been proposed, such as (Azizzadenesheli et al., 2019; Garg et al., 2020). In addition, (Combes et al., 2020; Wu et al., 2019) analyzed the generalized label shift problem in one-to-one single-source unsupervised DA, but did not provide guidelines for leveraging different sources to ensure a reliable transfer, which is more challenging. (Redko et al., 2019) proposed an optimal transport strategy for multi-source unsupervised DA under label shift by assuming identical semantic conditional distributions; however, they did not consider representation learning in conjunction with their framework and did not design neural-network-based approaches.
Different from these, we analyze the problem in the context of representation learning and propose efficient and principled strategies. Moreover, our theoretical results highlight the importance of the label shift problem in a variety of multi-source transfer problems, whereas the aforementioned works generally focus on the unsupervised DA problem without considering unified rules for different scenarios (e.g., partial multi-source DA).

3 THEORETICAL INSIGHTS: TRANSFER RISK UPPER BOUND

We assume a scoring hypothesis $h: \mathcal{X}\times\mathcal{Y}\to\mathbb{R}$, defined on the input space $\mathcal{X}$ and output space $\mathcal{Y}$, that is K-Lipschitz w.r.t. the feature x (given the same label), i.e., for $\forall y$, $\|h(x_1,y) - h(x_2,y)\|_2 \le K\|x_1-x_2\|_2$, and a loss function $\ell: \mathbb{R}\times\mathbb{R}\to\mathbb{R}_+$ that is positive, L-Lipschitz, and upper bounded by $L_{\max}$. We denote the expected risk w.r.t. distribution $\mathcal{D}$ as $R_\mathcal{D}(h) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell(h(x,y))$ and its empirical counterpart (w.r.t. $\hat{\mathcal{D}}$) as $\hat R_\mathcal{D}(h) = \frac{1}{|\hat{\mathcal{D}}|}\sum_{(x,y)\in\hat{\mathcal{D}}}\ell(h(x,y))$.

We adopt the Wasserstein-1 distance (Arjovsky et al., 2017) as the metric measuring the similarity of the domains. Compared with other divergences, the Wasserstein distance has been proved theoretically tighter than the TV distance (Gong et al., 2016) or the Jensen-Shannon divergence (Combes et al., 2020). Based on previous work, label shift is generally handled by the label-distribution-ratio-weighted loss $R^\alpha_\mathcal{S}(h) = \mathbb{E}_{(x,y)\sim\mathcal{S}}\,\alpha(y)\ell(h(x,y))$ with $\alpha(y) = \mathcal{T}(y)/\mathcal{S}(y)$. We also denote $\hat\alpha_t$ as its empirical counterpart, estimated from samples. Besides, to measure the task relations, we define a simplex $\lambda$ with $\lambda[t]\ge0$, $\sum_{t=1}^T\lambda[t]=1$ as the task relation coefficient vector, which assigns high weight to similar tasks. We first present Theorem 1, which provides theoretical insights on how to combine source domains through properly estimating $\lambda$.

Theorem 1. Let $\{\hat{\mathcal{S}}_t = \{(x_i,y_i)\}_{i=1}^{N_{S_t}}\}_{t=1}^T$ and $\hat{\mathcal{T}} = \{(x_i,y_i)\}_{i=1}^{N_T}$ respectively be T source and target i.i.d. samples. For $\forall h\in\mathcal{H}$ with $\mathcal{H}$ the hypothesis family and $\forall\lambda$, with high probability $\ge 1-4\delta$, the target risk can be upper bounded by:

$$R_\mathcal{T}(h) \le \sum_t \lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h) + LK\sum_t \lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{T}}(x|Y=y)\,\|\,\hat{\mathcal{S}}_t(x|Y=y)\big) + L_{\max}\, d^{\sup}_\infty \sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}} + L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2 + \text{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta),$$

where $N = \sum_{t=1}^T N_{S_t}$, $\beta_t = N_{S_t}/N$, and $d^{\sup}_\infty = \max_{t\in[1,T],\,y\in[1,|\mathcal{Y}|]}\alpha_t(y)$ is the maximum true label distribution ratio value. $W_1(\cdot\|\cdot)$ is the Wasserstein-1 distance with the $L_2$ distance as the cost function. $\text{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ is a function that decreases with larger $N_{S_1},\dots,N_T$, given a fixed $\delta$ and hypothesis family $\mathcal{H}$ (see Appendix E for details).

Remarks (1) In the first two terms, the relation coefficient $\lambda$ is controlled by the $\alpha_t$-weighted loss $\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h)$ and the conditional Wasserstein distance $\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(x|Y=y)\|\hat{\mathcal{S}}_t(x|Y=y))$. To minimize the upper bound, we need to assign a higher $\lambda[t]$ to the source t with a smaller weighted prediction loss and a smaller weighted semantic conditional Wasserstein distance. Intuitively, we tend to leverage the source tasks that are semantically similar to the target and easier to learn.

(2) If all sources have equal numbers of observations, i.e., $\beta_t = \frac{1}{T}$, then the third term becomes proportional to $\|\lambda\|_2$, an $L_2$-norm regularization, which can be viewed as an encouragement to leverage all the sources uniformly. Combining these terms, we need to balance the trade-off between assigning a higher $\lambda[t]$ to the source t that has a smaller weighted prediction loss and conditional Wasserstein distance, and assigning balanced $\lambda[t]$ to avoid concentrating on only one source.
(3) $\|\hat\alpha_t - \alpha_t\|_2$ indicates the gap between the ground-truth and empirical label ratios. Therefore, if we can estimate a good $\hat\alpha_t$, this term can be small. In practice, if target labels are available, $\hat\alpha_t$ can be computed from the observed data and $\hat\alpha_t \to \alpha_t$. If target labels are absent (unsupervised DA), we need to design methods to properly estimate $\hat\alpha_t$ (Sec. 4).

(4) $\text{Comp}(N_{S_1},\dots,N_{S_T},N_T,\delta)$ is a function reflecting the convergence behavior, which decreases with larger observation numbers. If we fix $\mathcal{H}$, $\delta$, N, and $N_T$, this term can be viewed as a constant.

Insights in Representation Learning Apart from Theorem 1, we propose a novel theoretical analysis in the context of representation learning, which motivates practical guidelines in the deep learning regime. We define a stochastic feature function g and denote its conditional distribution w.r.t. the latent variable Z (induced by g) as $\mathcal{S}(z|Y=y) = \int_x g(z|x)\,\mathcal{S}(x|Y=y)\,dx$. Then we have:

Theorem 2. We assume the same settings of the loss and the hypothesis as in Theorem 1. We further denote the stochastic feature learning function $g: \mathcal{X}\to\mathcal{Z}$ and the hypothesis $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$. Then for $\forall\lambda$, the target risk is upper bounded by:

$$R_\mathcal{T}(h,g) \le \sum_t \lambda[t]\, R^{\alpha_t}_{\mathcal{S}_t}(h,g) + LK\sum_t\lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big),$$

where $R_\mathcal{T}(h,g) = \mathbb{E}_{(x,y)\sim\mathcal{T}(x,y)}\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y))$.

Theorem 2 reveals that to control the upper bound, we need to learn a g that minimizes the weighted conditional Wasserstein distance and learn (g, h) that minimizes the weighted source risk.

Comparison with previous theorems Our theory offers an alternative perspective for understanding transfer learning. The first term is the α-weighted loss, and it recovers the typical source loss minimization if there is no label shift, i.e., $\alpha_t(y)\equiv1$ (Li et al., 2019a; Peng et al., 2019; Zhao et al., 2018; Wen et al., 2020). Besides, minimizing the conditional Wasserstein distances has been shown to be advantageous compared with $W_1(\mathcal{S}_t(z)\|\mathcal{T}(z))$ (Long et al., 2018). Moreover, Theorem 2 explicitly provides theoretical insights about the representation learning function g, which remains elusive in previous multi-source transfer theories such as (Wang et al., 2019a; Mansour et al., 2020; Konstantinov & Lampert, 2019; Li et al., 2019a; Peng et al., 2019).

4 UNIFIED PRACTICAL FRAMEWORK IN DEEP LEARNING

The theoretical results in Section 3 motivate general principles to follow when designing multi-source transfer learning algorithms. We summarize those principles in the following rules. (I) Learn a g that minimizes the weighted conditional Wasserstein distance, and learn (g, h) that minimizes the $\hat\alpha_t$-weighted source risk (Sec. 4.1). (II) Properly estimate the label distribution ratio $\hat\alpha_t$ (Sec. 4.2). (III) Balance the trade-off between assigning a higher $\lambda[t]$ to the source t with a smaller weighted prediction loss and conditional Wasserstein distance, and assigning balanced $\lambda[t]$ (Sec. 4.3).

We instantiate these rules with a unified practical framework for solving multi-source transfer learning problems, as shown in Tab. 1. We would like to point out that our original theoretical result is based on the setting where target labels are available; the proposed algorithm can be applied to the unsupervised scenarios under additional assumptions.
4.1 GUIDELINES FOR THE REPRESENTATION LEARNING

Motivated by Theorem 2, given a fixed label ratio estimate $\hat\alpha_t$ and a fixed $\lambda$, we should find a representation function $g: \mathcal{X}\to\mathcal{Z}$ and a hypothesis function $h: \mathcal{Z}\times\mathcal{Y}\to\mathbb{R}$ such that:

$$\min_{g,h}\ \sum_t \lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{S}}_t(z|Y=y)\,\|\,\hat{\mathcal{T}}(z|Y=y)\big) \tag{1}$$

Explicit Conditional Loss When target label information is available, one can explicitly solve the conditional optimal transport problem with g and h for a given Y = y. However, due to the high computational complexity of solving $T\times|\mathcal{Y}|$ optimal transport problems, the original form is practically intractable. To address this issue, we propose to approximate the conditional distributions on the latent space $\mathcal{Z}$ as Gaussians with an identical covariance matrix, such that $\hat{\mathcal{S}}_t(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y_t,\Sigma)$ and $\hat{\mathcal{T}}(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y,\Sigma)$. Then we have $W_1(\hat{\mathcal{S}}_t(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y)) \le \|\mathbf{C}^y_t - \mathbf{C}^y\|_2$ (see Appendix G for details). Intuitively, this approximation is equivalent to the well-known feature mean matching (Sugiyama & Kawanabe, 2012), which computes the feature centroid of each class (on the latent space $\mathcal{Z}$) and aligns the centroids by minimizing their $L_2$ distance.

Implicit Conditional Loss When target label information is not available (e.g., unsupervised DA and partial DA), the explicit matching approach can adopt the pseudo-labels predicted by the hypothesis h as a surrogate of the true target labels. However, in the early stage of the learning process, the pseudo-labels can be unreliable, which can lead to an inaccurate estimate of $W_1(\hat{\mathcal{S}}(z|Y=y)\|\hat{\mathcal{T}}(z|Y=y))$. To address this, the following lemma indicates that estimating the conditional Wasserstein distance is equivalent to estimating a Wasserstein adversarial loss weighted by the label distribution ratio.

Lemma 1. The weighted conditional Wasserstein distance can be implicitly expressed as:

$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim\mathcal{T}(y)} W_1\big(\mathcal{S}_t(z|Y=y)\,\|\,\mathcal{T}(z|Y=y)\big) = \max_{d_1,\dots,d_T}\sum_t\lambda[t]\big[\mathbb{E}_{z\sim\mathcal{S}_t(z)}\,\bar\alpha_t(z)\,d_t(z) - \mathbb{E}_{z\sim\mathcal{T}(z)}\,d_t(z)\big],$$

where $\bar\alpha_t(z) = \mathbb{1}_{\{(z,y)\sim\mathcal{S}_t\}}\alpha_t(Y=y)$, and $d_1,\dots,d_T: \mathcal{Z}\to\mathbb{R}_+$ are the 1-Lipschitz domain discriminators (Ganin et al., 2016; Arjovsky et al., 2017).

Lemma 1 reveals that instead of using pseudo-labels to estimate the weighted conditional Wasserstein distance, one can train T domain discriminators with a weighted Wasserstein adversarial loss, which does not require the pseudo-label of each target sample during the matching. On the other hand, $\bar\alpha_t$ can be obtained from $\hat\alpha_t$, which will be elaborated in Sec. 4.2. In practice, we adopt a hybrid approach that linearly combines the explicit and implicit matching strategies for all the scenarios, and the empirical results show its effectiveness.

4.2 ESTIMATING THE LABEL DISTRIBUTION RATIO $\hat\alpha_t$

Multi-Source Transfer with target labels When the target labels are available, $\hat\alpha_t$ can be directly estimated from the data without any assumption, and $\hat\alpha_t\to\alpha_t$ can be proved from asymptotic statistics.
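Concretely, with observed target labels the ratio reduces to a quotient of empirical label frequencies; a minimal sketch (our own helper, with a small floor to avoid division by zero) is:

```python
import numpy as np

def empirical_label_ratio(y_target, y_source, num_classes):
    # alpha_t(y) = T_hat(y) / S_t_hat(y), directly computable when target labels exist.
    t_freq = np.bincount(y_target, minlength=num_classes) / len(y_target)
    s_freq = np.bincount(y_source, minlength=num_classes) / len(y_source)
    return t_freq / np.maximum(s_freq, 1e-12)
```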
Unsupervised Multi-Source DA In this scenario, it is impossible to estimate a good $\hat\alpha_t$ without imposing additional assumptions. Following (Zhang et al., 2013; Lipton et al., 2018; Azizzadenesheli et al., 2019; Combes et al., 2020), we assume that the conditional distributions are aligned between the target and source domains (i.e., $\mathcal{S}_t(z|y) = \mathcal{T}(z|y)$). We denote $\bar{\mathcal{S}}_t(y), \bar{\mathcal{T}}(y)$ as the predicted t-source/target label distributions obtained through the hypothesis h, and define $\mathbf{C}_{\hat{\mathcal{S}}_t}[y,k] = \hat{\mathcal{S}}_t[\text{argmax}_{y'}h(z,y')=y, Y=k]$ as the t-source prediction confusion matrix. We can demonstrate that if the conditional distributions are aligned, we have $\bar{\mathcal{T}}(y) = \bar{\mathcal{T}}_{\hat\alpha_t}(y)$, with $\bar{\mathcal{T}}_{\hat\alpha_t}(Y=y) = \sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{\mathcal{S}}_t}[y,k]\,\hat\alpha_t(k)$ the target prediction distribution constructed from the t-source information (see Appendix I for the proof). Then we can estimate $\hat\alpha_t$ by matching these two distributions, i.e., minimizing $D_{KL}(\bar{\mathcal{T}}(y)\|\bar{\mathcal{T}}_{\hat\alpha_t}(y))$, which is equivalent to:

$$\min_{\hat\alpha_t}\ -\sum_{y=1}^{|\mathcal{Y}|}\bar{\mathcal{T}}(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}\mathbf{C}_{\hat{\mathcal{S}}_t}[y,k]\,\hat\alpha_t(k)\Big) \quad \text{s.t.}\ \forall y\in\mathcal{Y},\ \hat\alpha_t(y)\ge0,\ \sum_{y=1}^{|\mathcal{Y}|}\hat\alpha_t(y)\hat{\mathcal{S}}_t(y)=1 \tag{2}$$

In the derivation above we assumed that the conditional distributions are aligned, which is a feasible requirement in our algorithm, since the goal of g is exactly to gradually achieve this. In the experiments, we iteratively estimate $\hat\alpha_t$ and learn g.

Unsupervised Multi-Source Partial DA When $\text{supp}(\mathcal{T}(y))\subseteq\text{supp}(\mathcal{S}_t(y))$, $\alpha_t$ is sparse due to the non-overlapping classes. Accordingly, in addition to the assumption $\mathcal{S}_t(z|y)=\mathcal{T}(z|y)$ as in unsupervised DA, we also impose this prior knowledge by adding the regularizer $\|\hat\alpha_t\|_1$ to the objective of Eq. (2) to induce sparsity in $\hat\alpha_t$ (see Appendix J for more details). When training the neural network, since the non-overlapping classes are automatically assigned a small or zero $\hat\alpha_t$, (g, h) is less affected by the classes with small $\hat\alpha_t$. Our empirical results validate its capability of detecting non-overlapping classes and show significant improvements over other baselines.

4.3 ESTIMATING THE TASK RELATION COEFFICIENT $\lambda$

Inspired by Theorem 1, given fixed $\hat\alpha_t$ and (g, h), we estimate $\lambda$ by optimizing the derived upper bound:

$$\min_\lambda\ \sum_t\lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g) + C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1\big(\hat{\mathcal{T}}(z|Y=y)\,\|\,\hat{\mathcal{S}}_t(z|Y=y)\big) + C_1\sqrt{\sum_{t=1}^T\frac{\lambda[t]^2}{\beta_t}} \quad \text{s.t.}\ \forall t,\ \lambda[t]\ge0,\ \sum_{t=1}^T\lambda[t]=1 \tag{3}$$

In practice, $\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g)$ is the weighted empirical prediction error, and $\mathbb{E}_{y\sim\hat{\mathcal{T}}(y)} W_1(\hat{\mathcal{T}}(z|Y=y)\|\hat{\mathcal{S}}_t(z|Y=y))$ is approximated by the dynamic feature centroid distance $\sum_y\bar{\mathcal{T}}(y)\|\mathbf{C}^y_t-\mathbf{C}^y\|_2$ (see Appendix L for details). Thus, solving for $\lambda$ is a standard convex optimization problem.

4.4 ALGORITHM DESCRIPTION

Based on the aforementioned components, we present the description of WADN (Algorithm 1) in the unsupervised scenarios (UDA and partial DA), which iteratively updates (g, h), $\hat\alpha_t$, and $\lambda$.

Algorithm 1 Wasserstein Aggregation Domain Network (unsupervised scenarios, one iteration)
Require: Labeled source samples $\hat{\mathcal{S}}_1,\dots,\hat{\mathcal{S}}_T$, target samples $\hat{\mathcal{T}}$
Ensure: Label distribution ratio $\hat\alpha_t$, task relation simplex $\lambda$, feature function g, classifier h, domain critic functions $d_1,\dots,d_T$, class centroids $\mathbf{C}^y_t$ (source) and $\mathbf{C}^y$ (target) for $\forall t\in[1,T], y\in\mathcal{Y}$.
1: /// DNN parameter training stage (fixed $\hat\alpha_t$ and $\lambda$) ///
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat{\mathcal{S}}_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat{\mathcal{S}}_T,\ (\mathbf{x}_\mathcal{T})\sim\hat{\mathcal{T}}$ do
3:   Predict the target pseudo-labels $\bar{\mathbf{y}}_\mathcal{T} = \text{argmax}_y\,h(g(\mathbf{x}_\mathcal{T}),y)$
4:   Compute the (un-normalized) source confusion matrix for each batch: $\mathbf{C}_{\hat{\mathcal{S}}_t} = \#[\text{argmax}_{y'}h(z,y')=y, Y=k]$ $(t=1,\dots,T)$
5:   Compute the batch class centroids $\bar{\mathbf{C}}^y_t$ (source) and $\bar{\mathbf{C}}^y$ (target)
6:   Moving-average update of the source/target class centroids ($\epsilon_1=0.7$):
7:     Update source class centroids: $\mathbf{C}^y_t = \epsilon_1\mathbf{C}^y_t + (1-\epsilon_1)\bar{\mathbf{C}}^y_t$
8:     Update target class centroids: $\mathbf{C}^y = \epsilon_1\mathbf{C}^y + (1-\epsilon_1)\bar{\mathbf{C}}^y$
9:   Update $g,h,d_1,\dots,d_T$ (SGD and gradient reversal) by solving:

$$\min_{g,h}\max_{d_1,\dots,d_T}\ \underbrace{\sum_t\lambda[t]\,\hat R^{\hat\alpha_t}_{\mathcal{S}_t}(h,g)}_{\text{Classification Loss}} + \epsilon C_0\underbrace{\sum_t\lambda[t]\,\mathbb{E}_{y\sim\bar{\mathcal{T}}(y)}\|\mathbf{C}^y_t-\mathbf{C}^y\|_2}_{\text{Explicit Conditional Loss}} + (1-\epsilon)C_0\underbrace{\sum_t\lambda[t]\big[\mathbb{E}_{z\sim\hat{\mathcal{S}}_t(z)}\bar\alpha_t(z)\,d_t(z)-\mathbb{E}_{z\sim\hat{\mathcal{T}}(z)}d_t(z)\big]}_{\text{Implicit Conditional Loss}}$$

10: end for
11: /// Estimation of $\hat\alpha_t$ and $\lambda$ ///
12: Compute the global (normalized) source confusion matrix $\mathbf{C}_{\hat{\mathcal{S}}_t} = \hat{\mathcal{S}}_t[\text{argmax}_{y'}h(z,y')=y, Y=k]$ $(t=1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}^T_{t=1}$) by Sec. 4.2 (unsupervised DA or partial UDA)
14: Update $\alpha_t$ by moving average: $\alpha_t = \epsilon_1\alpha_t + (1-\epsilon_1)\alpha'_t$
15: Compute the weighted loss and the weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) from Sec. 4.3, and update $\lambda$ by moving average: $\lambda = \epsilon_1\lambda + (1-\epsilon_1)\lambda'$

When updating $\lambda$ and $\alpha_t$, we use the CVXPY package to optimize the two standard convex losses after each training epoch, and then update them using the moving average. For WADN with target label information, we do not require pseudo-labels and directly compute $\hat\alpha_t$, as shown in Appendix L.

5 EXPERIMENTS

In this section, we compare the proposed approach with several baselines on popular tasks. For all the scenarios, the following baselines are evaluated: (I) Source: uses only labelled source data to train the model. (II) DANN (Ganin et al., 2016); we follow the protocol of Wen et al. (2020) and merge all the source datasets into one global source domain. (III) MDAN (Zhao et al., 2018). (IV) MDMN (Li et al., 2018b). (V) M3SDA (Peng et al., 2019), which adopts maximizing classifier discrepancy (Saito et al., 2018). (VI) DARN (Wen et al., 2020). For conventional multi-source transfer and partial unsupervised multi-source DA, we additionally compare with scenario-specific baselines. All baselines are re-implemented with the same network structure for fair comparison. The detailed network structures, hyper-parameter settings, and training details are given in Appendix M.

We evaluate the performance on three different datasets: (I) Amazon Review (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive and negative product reviews. We follow the common data pre-processing strategies of Chen et al. (2012) to form a 5000-dimensional bag-of-words feature. Note that the label distribution in the original dataset is uniform. To highlight the benefits of the proposed approach, we create a label-distribution-drifted task by randomly dropping 50% of the negative reviews of all the sources while keeping the target identical (shown in Fig. 3(a)). (II) Digits. It consists of four digit-recognition datasets: MNIST, USPS (Hull, 1994), SVHN (Netzer et al., 2011), and Synth (Ganin et al., 2016). We also create a slight label distribution drift for the sources by randomly dropping 50% of the samples of digits 5-9 while keeping the target identical (shown in Fig. 3(b)). (III) Office-Home (Venkateswara et al., 2017). It contains 65 classes in four different domains: Art, Clipart, Product, and Real-World. We use ResNet50 (He et al., 2016) pretrained on ImageNet in PyTorch as the base network for feature learning and put an MLP on top for classification. The label distributions in these four domains differ, and we did not manually create a label drift (shown in Fig. 3(c)).

5.1 UNSUPERVISED MULTI-SOURCE DA

In unsupervised multi-source DA, we evaluate the proposed approach on all three datasets. We use a hyper-parameter selection strategy similar to DANN (Ganin et al., 2016). All reported results are averaged over five runs. The detailed experimental settings are given in Appendix M. The empirical results are shown in Tabs. 7, 2, and 3. Since we did not change the target label distribution throughout the experiments, we still use the target accuracy as the metric.
We report the means and standard deviations for each approach. The best approaches, based on a two-sided Wilcoxon signed-rank test (significance level p = 0.05), are shown in bold. The empirical results reveal significantly better performance (≈ 3%) on the different datasets. To understand the working principles of WADN, we evaluate its performance under different levels of source label shift on the Amazon Review dataset (Fig. 1(a)). The results show strong practical benefits for WADN under gradually larger label shift. In addition, we visualize the task relations on Digits (Fig. 1(b)) and observe a non-uniform λ, which highlights the importance of properly choosing the most related sources rather than simply merging all the data. E.g., when the target domain is SVHN, WADN mainly leverages the information from SYNTH, since the two are more semantically similar, whereas MNIST does not help SVHN much (as observed by Ganin et al. (2016)). Additional analysis and results can be found in Appendix O.

5.2 MULTI-SOURCE TRANSFER LEARNING WITH LIMITED TARGET SAMPLES

We adopt Amazon Review and Digits for multi-source transfer learning with limited target samples, as they have been widely used. In the experiments, we still use the shifted sources. We randomly sample only 10% labeled target samples (w.r.t. the target dataset in unsupervised DA) as the training set and keep the remaining 90% as the unseen target test set (see Appendix M for details). We adopt the same hyper-parameters and training strategies as in unsupervised DA. We additionally add the recent baselines RLUS (Konstantinov & Lampert, 2019) and MME (Saito et al., 2019), which also consider transfer learning with labeled targets. The results, reported in Tabs. 4 and 5, also indicate strong empirical benefits. To show the effectiveness of WADN, we select various portions of labelled samples (1%-10%) on the target. The results on the USPS dataset in Fig. 1(c) are consistently better than the baselines, even with few target samples.

5.3 PARTIAL UNSUPERVISED MULTI-SOURCE DA

In this scenario, we adopt the Office-Home dataset to evaluate our approach, as it contains a large number of classes (65). We do not change the source domains, and we randomly choose 35 classes for the target. We evaluate all baselines on the same selected classes and repeat 5 times; all reported results are averaged over 3 different sub-class selections (15 runs in total), as shown in Tab. 6 (see Appendix M for details). We additionally compare with the PADA (Cao et al., 2018) approach by merging all the sources and applying the one-to-one partial DA algorithm. We adopt the same hyper-parameters and training strategies as in the unsupervised DA scenario. The reported results are also significantly better than the current multi-source DA and one-to-one partial DA approaches, which verifies the benefits of WADN: properly estimating $\hat\alpha_t$ and assigning a proper λ to each source. Besides, when we change the number of selected classes (Fig. 2(a)), the proposed WADN still gives consistently better results by a large margin, which indicates the importance of considering $\hat\alpha_t$ and λ; in contrast, DANN shows unstable results with fewer selected classes (see Appendix P for details). Moreover, WADN gives a good estimation of the label distribution ratio (Fig. 2(b)) and correctly detects the non-overlapping classes, which indicates its good explainability.
6 CONCLUSION In this paper, we proposed a new theoretical principled algorithm WADN (Wasserstein Aggregation Domain Network) to solve the multi-source transfer learning problem under target shift. WADN provides a unified solution for various deep multi-source transfer scenarios: learning with limited target data, unsupervised DA, and partial unsupervised DA. We evaluate the proposed method by extensive experiments and show its strong empirical results. A ADDITIONAL EMPIRICAL RESULTS B ADDITIONAL RELATED WORK Multi-source transfer learning Practice has been proposed from various prospective. The key idea is to estimate the importance of different sources and then select the most related ones, to mitigate the influence of negative transfer. In the multi-source unsupervised DA, (Sankaranarayanan et al., 2018; Balaji et al., 2019; Pei et al., 2018; Zhao et al., 2019; Zhu et al., 2019; Zhao et al., 2020; 2019; Stojanov et al., 2019; Li et al., 2019b; Wang et al., 2019b; Lin et al., 2020) proposed different practical strategies in the classification, regression and semantic segmentation problems. In the presence of target labels, Hoffman et al. (2012); Tan et al. (2013); Wei et al. (2017); Yao & Doretto (2010); Konstantinov & Lampert (2019) used generalized linear model to learn the target. Christodoulidis et al. (2016); Li et al. (2019a); Chen et al. (2019) focused on deep learning approaches and Lee et al. (2019) proposed an ad-hoc strategy to combine to sources in the few-shot target domains. These ideas are generally data-driven approaches and do not analyze the why the proposed practice can control the generalization error. Label-Partial Transfer Learning Label-Partial can be viewed as a special case of the label-shift. 1 Most existing works focus on one-to-one partial transfer learning (Zhang et al., 2018; Chen et al., 2020; Bucci et al., 2019; Cao et al., 2019) by adopting the re-weighting training approach without a formal understanding. In our paper, we first rigorously analyzed this common practice and adopt the label distribution ratio as its weights, which provides a principled approach in this scenario. B.1 OTHER SCENARIOS RELATED TO MULTI-SOURCE TRANSFER LEARNING Domain Generalization The domain generalization (DG) resembles multi-source transfer but aims at different goals. A common setting in DG is to learn multiple source but directly predict on the unseen target domain. The conventional DG approaches generally learn a distribution invariant features (Balaji et al., 2018; Saenko et al., 2010; Motiian et al., 2017; Ilse et al., 2019) or conditional distribution invariant features (Li et al., 2018a; Akuzawa et al., 2019). However, our theoretical results reveal that in the presence of label shift (i.e αt(y) 6= 1) and outlier tasks then learning conditional or marginal invariant features can not guarantee a small target risk. Our theoretical result enables a formal understanding about the inherent difficulty in DG problems. Few-Shot Learning The few-shot learning (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018) can be viewed as a very specific scenario of multi-source transfer learning. We would like to point out the differences between the few-shot learning and our paper. (1) Few-shot learning generally involves a very large set of source domains T 1 and each domain consists a modest number of observations NSt . In our paper, we are interested in the a modest number of source domains T but each source domain including a sufficient large number of observations (NSt 1). 
(2) In the target domain, the few-shot setting generally used K-samples (K is very small) for each class for the fine-tuning. We would like to point out this setting generally violates our theoretical assumption. In 1Since supp(T (y)) ⊆ supp(St(y)) then we naturally have T (y) 6= St(y). our paper, we assume the target data is i.i.d. sampled fromD(x, y). It is equivalently viewed that we first i.i.d. sample y ∼ D(y), then i.i.d. sample x ∼ D(x|y). Generally the D(y) is non-uniform, thus few-shot setting are generally not applicable for our theoretical assumptions. Multi-Task Learning The goal of multi-task learning (Zhang & Yang, 2017) aims to improve the prediction performance of all the tasks. In our paper, we aim at controlling the prediction risk of a specified target domain. We also notice some practical techniques are common such as the shared parameter (Zhang & Yeung, 2012), shared representation (Ruder, 2017), etc. C ADDITIONAL FIGURES RELATED TO THE MAIN PAPER D TABLE OF NOTATION E PROOF OF THEOREM 1 Proof idea Theorem 1 consists three steps in the proof: Lemma 2. If the prediction loss is assumed as L-Lipschitz and the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)−h(x2, y)‖2 ≤ K‖x1−x2‖2. Then the target risk can be upper bounded by: RT (h) ≤ ∑ t λ[t]RαtS (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) (4) Proof. The target risk can be expressed as: RT (h(x, y)) = E(x,y)∼T `(h(x, y)) = Ey∼T (y)Ex∼T (x|y)`(h(x, y)) By denoting α(y) = T (y)S(y) , then we have: Ey∼T (y)Ey∼T (x|y)`(h(x, y)) = Ey∼S(y)α(y)Ex∼T (x|y)`(h(x, y)) Then we aim to upper bound Ex∼T (x|y)`(h(x, y)). For any fixed y, Ex∼T (x|y)`(h(x, y))− Ex∼S(x|y)`(h(x, y)) ≤ | ∫ x∈X `(h(x, y))d(T (x|y)− S(x|y))| Then according to the Kantorovich-Rubinstein duality, for any distribution coupling γ ∈ Π(T (x|y),S(x|y)), then we have: = inf γ | ∫ X×X `(h(xp, y))− `(h(xq, y))dγ(xp, xq)| ≤ inf γ ∫ X×X |`(h(xp, y))− `(h(xq, y))|dγ(xp, xq) ≤ L inf γ ∫ X×X |h(xp, y))− h(xq, y)|dγ(xp, xq) ≤ LK inf γ ∫ X×X ‖xp − xq‖2dγ(xp, xq) = LKW1(T (x|Y = y)‖S(x|Y = y)) The first inequality is obvious; and the second inequality comes from the assumption that ` is LLipschitz; the third inequality comes from the hypothesis is K-Lipschitz w.r.t. the feature x (given the same label), i.e. for ∀Y = y, ‖h(x1, y)− h(x2, y)‖2 ≤ K‖x1 − x2‖2. Then we have: RT (h) ≤ Ey∼S(y)α(y)[Ex∼S(x|y)`(h(x, y)) + LKW1(T (x|y)‖S(x|y))] = E(x,y)∼Sα(y)`(h(x, y)) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) = RαS(h) + LKEy∼T (y)W1(T (x|Y = y)‖S(x|Y = y)) Supposing each source St we assign the weight λ[t] and label distribution ratio αt(y) = T (y)St(y) , then by combining this T source target pair, we have: RT (h) ≤ ∑ t λ[t]RαtSt (h) + LK ∑ t λ[t]Ey∼T (y)W1(T (x|Y = y)‖St(x|Y = y)) Then we will prove Theorem 1 from this result, we will derive the non-asymptotic bound, estimated from the finite sample observations. Supposing the empirical label ratio value is α̂t, then for any simplex λ we can prove the high-probability bound. E.1 BOUNDING THE EMPIRICAL AND EXPECTED PREDICTION RISK Proof. We first bound the first term, which can be upper bounded as: sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂α̂tSt (h)| ≤ sup h | ∑ t λ[t]RαtSt (h)− ∑ t λ[t]R̂αtSt (h)|︸ ︷︷ ︸ (I) + sup h | ∑ t λ[t]R̂αtSt (h)− ∑ t λ[t]R̂α̂tSt (h)|︸ ︷︷ ︸ (II) Bounding term (I) According to the McDiarmid inequality, each item changes at most | 2λ[t]αt(y)`NSt |. 
Then we have:

$$P\big((I) - \mathbb{E}(I) \ge t\big) \le \exp\Big(\frac{-2t^2}{\sum_{t=1}^{T}\frac{4}{\beta_t N}\lambda^2[t]\,\alpha_t(y)^2\,\ell^2}\Big) = \delta$$

By substituting $\delta$, with high probability $1-\delta$ we have:

$$(I) \le \mathbb{E}(I) + L_{\max}\,d^{\sup}_{\infty}\sqrt{\sum_{t=1}^{T}\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$

where $L_{\max} = \sup_{h\in H}\ell(h)$, $N = \sum_{t=1}^{T} N_{S_t}$ is the total number of source observations, $\beta_t = \frac{N_{S_t}}{N}$ is the frequency ratio of each source, and $d^{\sup}_{\infty} = \max_{t=1,\dots,T} d_\infty(T(y)\|S_t(y)) = \max_{t=1,\dots,T}\max_{y\in[1,\mathcal{Y}]}\alpha_t(y)$ is the maximum true label-shift value (a constant).

Bounding $\mathbb{E}\sup(I)$. The expectation term can be upper bounded in the form of a Rademacher complexity (with $\sigma$ denoting the Rademacher variables):

$$\mathbb{E}(I) \le 2\,\mathbb{E}_\sigma\mathbb{E}_{\hat S_1^T}\sup_h\sum_{t=1}^{T}\lambda[t]\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\,\sigma_{(x_t,y_t)}\,\alpha_t(y)\,\ell(h(x_t,y_t)) \le 2\sum_t\lambda[t]\,\mathbb{E}_\sigma\mathbb{E}_{\hat S_1^T}\sup_h\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\,\sigma_{(x_t,y_t)}\,\alpha_t(y)\,\ell(h(x_t,y_t))$$
$$\le 2\sup_t\mathbb{E}_\sigma\mathbb{E}_{\hat S_t}\sup_h\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\,\sigma_{(x_t,y_t)}\big[\alpha_t(y)\,\ell(h(x_t,y_t))\big] = \sup_t 2R_t(\ell,H) = 2\bar R(\ell,H)$$

where $\bar R(\ell,H) = \sup_t R_t(\ell,H) = \sup_t\sup_{h\in H}\mathbb{E}_{\hat S_t,\sigma}\sum_{(x_t,y_t)\in\hat S_t}\frac{1}{TN}\,\sigma_{(x_t,y_t)}\big[\alpha_t(y)\,\ell(h(x_t,y_t))\big]$ represents the Rademacher complexity w.r.t. the prediction loss $\ell$, the hypothesis $h$, and the true label distribution ratio $\alpha_t$. Therefore, with high probability $1-\delta$, we have:

$$\sup_h\Big|\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h) - \sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h)\Big| \le 2\bar R(\ell,H) + L_{\max}\,d^{\sup}_\infty\sqrt{\sum_{t=1}^{T}\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$

Bounding term (II). For all hypotheses $h$, we have:

$$\Big|\sum_t\lambda[t]\hat R^{\alpha_t}_{S_t}(h) - \sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h)\Big| = \Big|\sum_t\lambda[t]\frac{1}{N_{S_t}}\sum_{i=1}^{N_{S_t}}\big(\alpha_t(y^{(i)}) - \hat\alpha_t(y^{(i)})\big)\ell(h)\Big| = \sum_t\lambda[t]\frac{1}{N_{S_t}}\Big|\sum_{y=1}^{|\mathcal{Y}|}\big(\alpha_t(Y=y) - \hat\alpha_t(Y=y)\big)\bar\ell(Y=y)\Big|$$

where $\bar\ell(Y=y) = \sum_{i=1}^{N_{S_t}}\ell(h(x_i, y_i = y))$ represents the cumulative loss conditioned on the given label $Y=y$. According to the Hölder inequality, we have:

$$\sum_t\lambda[t]\frac{1}{N_{S_t}}\Big|\sum_{y=1}^{|\mathcal{Y}|}\big(\alpha_t(Y=y) - \hat\alpha_t(Y=y)\big)\bar\ell(Y=y)\Big| \le \sum_t\lambda[t]\frac{1}{N_{S_t}}\|\alpha_t-\hat\alpha_t\|_2\,\|\bar\ell(Y=y)\|_2 \le L_{\max}\sum_t\lambda[t]\|\alpha_t-\hat\alpha_t\|_2 \le L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2$$

Therefore, for all $h\in H$, with high probability $1-\delta$ we have:

$$\sum_t\lambda[t]R^{\alpha_t}_{S_t}(h) \le \sum_t\lambda[t]\hat R^{\hat\alpha_t}_{S_t}(h) + 2\bar R(\ell,H) + L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^{T}\frac{\lambda[t]^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}} + L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2$$

E.2 BOUNDING THE EMPIRICAL WASSERSTEIN DISTANCE

We then need to derive the sample complexity between the empirical and true distributions, which can be decomposed into the following two parts. For any $t$, we have:

$$\mathbb{E}_{y\sim T(y)}W_1(T(x|Y=y)\|S_t(x|Y=y)) - \mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))$$
$$\le \underbrace{\mathbb{E}_{y\sim T(y)}W_1(T(x|Y=y)\|S_t(x|Y=y)) - \mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))}_{(I)} + \underbrace{\mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) - \mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))}_{(II)}$$

Bounding (I). We have:

$$\mathbb{E}_{y\sim T(y)}W_1(T(x|Y=y)\|S_t(x|Y=y)) - \mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) = \sum_y T(y)\Big(W_1(T(x|Y=y)\|S_t(x|Y=y)) - W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))\Big)$$
$$\le \Big|\sum_y T(y)\Big|\,\sup_y\Big(W_1(T(x|Y=y)\|S_t(x|Y=y)) - W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))\Big) = \sup_y\Big(W_1(T(x|Y=y)\|S_t(x|Y=y)) - W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))\Big)$$
$$\le \sup_y\Big[W_1(S_t(x|Y=y)\|\hat S_t(x|Y=y)) + W_1(\hat S_t(x|Y=y)\|\hat T(x|Y=y)) + W_1(\hat T(x|Y=y)\|T(x|Y=y)) - W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))\Big]$$
$$= \sup_y W_1(S_t(x|Y=y)\|\hat S_t(x|Y=y)) + W_1(\hat T(x|Y=y)\|T(x|Y=y))$$

The first inequality holds because of the Hölder inequality. For the second inequality, we use the triangle inequality of the Wasserstein distance: $W_1(P\|Q) \le W_1(P\|P_1) + W_1(P_1\|P_2) + W_1(P_2\|Q)$. According to the convergence behavior of the Wasserstein distance (Weed et al., 2019), with high probability $\ge 1-2\delta$ we have:

$$W_1(S_t(x|Y=y)\|\hat S_t(x|Y=y)) + W_1(\hat T(x|Y=y)\|T(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\tfrac{1}{2}\log(\tfrac{2}{\delta})}\Big(\sqrt{\tfrac{1}{N^y_{S_t}}} + \sqrt{\tfrac{1}{N^y_T}}\Big)$, $N^y_{S_t}$ is the number of samples with $Y=y$ in source $t$, and $N^y_T$ is the number of samples with $Y=y$ in the target distribution. Here $C_{t,y}, C_y$ and $s_{t,y} > 2$, $s_y > 2$ are positive constants in the concentration inequality. This characterizes the convergence behavior between the empirical and true Wasserstein distances.
If we adopt a union bound over all labels by setting $\delta \leftarrow \delta/|\mathcal{Y}|$, then with high probability $\ge 1-2\delta$ we have:

$$\sup_y W_1(S_t(x|Y=y)\|\hat S_t(x|Y=y)) + W_1(\hat T(x|Y=y)\|T(x|Y=y)) \le \kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\tfrac{1}{2}\log(\tfrac{2|\mathcal{Y}|}{\delta})}\Big(\sqrt{\tfrac{1}{N^y_{S_t}}} + \sqrt{\tfrac{1}{N^y_T}}\Big)$.

Again, adopting a union bound over all tasks by setting $\delta \leftarrow \delta/T$, with high probability $\ge 1-2\delta$ we have:

$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1(T(x|Y=y)\|S_t(x|Y=y)) - \sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) \le \sup_t\kappa(\delta, N^y_{S_t}, N^y_T)$$

where $\kappa(\delta, N^y_{S_t}, N^y_T) = C_{t,y}(N^y_{S_t})^{-s_{t,y}} + C_y(N^y_T)^{-s_y} + \sqrt{\tfrac{1}{2}\log(\tfrac{2T|\mathcal{Y}|}{\delta})}\Big(\sqrt{\tfrac{1}{N^y_{S_t}}} + \sqrt{\tfrac{1}{N^y_T}}\Big)$.

Bounding (II). We can bound the second term:

$$\mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) - \mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) \le \sup_y W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))\,\Big|\sum_y T(y) - \hat T(y)\Big| \le C^t_{\max}\,\Big|\sum_y T(y)-\hat T(y)\Big|$$

where $C^t_{\max} = \sup_y W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y))$ is a positive and bounded constant. We then need to bound $|\sum_y T(y)-\hat T(y)|$. Adopting McDiarmid's inequality, with high probability $1-\delta$ we have:

$$\Big|\sum_y T(y)-\hat T(y)\Big| \le \mathbb{E}_{\hat T}\Big|\sum_y T(y)-\hat T(y)\Big| + \sqrt{\frac{\log(1/\delta)}{2N_T}} = 2\,\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\,\hat T(y) + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$

We then bound $\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\hat T(y)$. Using the properties of Rademacher complexity [Lemma 26.11, (Shalev-Shwartz & Ben-David, 2014)] and noticing that $\hat T(y)$ is a probability simplex, we have:

$$\mathbb{E}_\sigma\mathbb{E}_{\hat T}\sum_y\sigma\,\hat T(y) \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}},\qquad\text{hence}\qquad \Big|\sum_y T(y)-\hat T(y)\Big| \le \sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(1/\delta)}{2N_T}}$$

Then, using the union bound and setting $\delta\leftarrow\delta/T$, with high probability $\ge 1-\delta$ and for any simplex $\lambda$, we have:

$$\sum_t\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) \le \sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$

where $C_{\max} = \sup_t C^t_{\max}$. Combining the terms, we can derive the PAC-learning bound, estimated from the finite samples (with high probability $1-4\delta$):

$$R_T(h) \le \sum_t\lambda_t\hat R^{\hat\alpha_t}_{S_t}(h) + LK\sum_t\lambda_t\,\mathbb{E}_{y\sim\hat T(y)}W_1(\hat T(x|Y=y)\|\hat S_t(x|Y=y)) + L_{\max}d^{\sup}_\infty\sqrt{\sum_{t=1}^{T}\frac{\lambda_t^2}{\beta_t}}\sqrt{\frac{\log(1/\delta)}{2N}}$$
$$+\; 2\bar R(\ell,H) + L_{\max}\sup_t\|\alpha_t-\hat\alpha_t\|_2 + \sup_t\kappa(\delta,N^y_{S_t},N^y_T) + C_{\max}\Big(\sqrt{\frac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\frac{\log(T/\delta)}{2N_T}}\Big)$$

We then denote $\mathrm{Comp}(N_{S_1},\dots,N_T,\delta) = 2\bar R(\ell,H) + \sup_t\kappa(\delta,N^y_{S_t},N^y_T) + C_{\max}\big(\sqrt{\tfrac{2\log(2|\mathcal{Y}|)}{N_T}} + \sqrt{\tfrac{\log(T/\delta)}{2N_T}}\big)$ as the convergence-rate function, which decreases with larger $N_{S_1},\dots,N_T$. Besides, $\bar R(\ell,H) = \sup_t R_t(\ell,H)$ is the re-weighted Rademacher complexity. Given a fixed hypothesis class with finite VC dimension,² it can be shown that $\bar R(\ell,H) = \min_{N_{S_1},\dots,N_{S_T}} O(\sqrt{1/N_{S_t}})$; see, e.g., (Shalev-Shwartz & Ben-David, 2014).

²If the hypothesis is a neural network, the Rademacher complexity can still be bounded analogously.

F PROOF OF THEOREM 2

We first recall the stochastic feature representation $g$ with $g: X\to Z$, the scoring hypothesis $h$ with $h: Z\times\mathcal{Y}\to\mathbb{R}$, and the prediction loss $\ell$ with $\ell:\mathbb{R}\to\mathbb{R}$.³

Proof. The marginal and conditional distributions w.r.t. the latent variable $Z$ induced by $g$ can be formulated as:

$$S(z) = \int_x g(z|x)\,S(x)\,dx,\qquad S(z|y) = \int_x g(z|x)\,S(x|Y=y)\,dx$$

In the multi-class classification problem, we additionally define the following distributions:

$$\mu_k(z) = S(Y=k, z) = S(Y=k)\,S(z|Y=k),\qquad \pi_k(z) = T(Y=k, z) = T(Y=k)\,T(z|Y=k)$$

Based on (Nguyen et al., 2009), and since $g(z|x)$ is a stochastic representation learning function, the loss conditioned on a fixed point $(x,y)$ w.r.t. $h$ and $g$ is $\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y))$.
Taking the expectation over $S(x,y)$, we have:⁴

$$R_S(h,g) = \mathbb{E}_{(x,y)\sim S(x,y)}\mathbb{E}_{z\sim g(z|x)}\,\ell(h(z,y)) = \sum_{k=1}^{|\mathcal{Y}|} S(y=k)\int_x S(x|Y=k)\int_z g(z|x)\,\ell(h(z,y=k))\,dz\,dx$$
$$= \sum_{k=1}^{|\mathcal{Y}|} S(y=k)\int_z\Big[\int_x S(x|Y=k)\,g(z|x)\,dx\Big]\ell(h(z,y=k))\,dz = \sum_{k=1}^{|\mathcal{Y}|} S(y=k)\int_z S(z|Y=k)\,\ell(h(z,y=k))\,dz$$
$$= \sum_{k=1}^{|\mathcal{Y}|}\int_z S(z,Y=k)\,\ell(h(z,y=k))\,dz = \sum_{k=1}^{|\mathcal{Y}|}\int_z \mu_k(z)\,\ell(h(z,y=k))\,dz$$

Intuitively, the expected loss w.r.t. the joint distribution $S$ decomposes into the expected loss over the label distribution $S(y)$ (weighting the labels) and over the conditional distribution $S(\cdot|y)$ (a real-valued conditional loss). The expected risks on $S$ and $T$ can then be expressed as:

$$R_S(h,g) = \sum_{k=1}^{|\mathcal{Y}|}\int_z\ell(h(z,y=k))\,\mu_k(z)\,dz,\qquad R_T(h,g) = \sum_{k=1}^{|\mathcal{Y}|}\int_z\ell(h(z,y=k))\,\pi_k(z)\,dz$$

³Note that this definition is different from the conventional binary classification with binary output; it is more suitable in the multi-class scenario with the cross-entropy loss (Hoffman et al., 2018a). For example, if we define $\ell = -\log(\cdot)$ and $h(z,y)\in(0,1)$ as a scalar score output, then $\ell(h(z,y))$ can be viewed as the cross-entropy loss of a neural network.

⁴An alternative understanding is based on the Markov chain: we have the DAG $Y \xleftarrow{S(y|x)} X \xrightarrow{g} Z$, where the score $s$ is the output of the scoring function $h$ applied to $(z,y)$. The expected loss over all the random variables can be equivalently written as $\int \mathbb{P}(x,y,z,s)\,\ell(s)\,d(x,y,z,s) = \int \mathbb{P}(x,y)\,\mathbb{P}(z|x)\,\mathbb{P}(s|z,y)\,\ell(s)\,d(x,y)\,dz\,ds$. Since the score $s$ is determined by $h(z,y)$, we have $\mathbb{P}(s|z,y) = 1$. By definition $\mathbb{P}(z|x) = g(z|x)$ and $\mathbb{P}(x,y) = S(x,y)$, so the loss can finally be expressed as $\mathbb{E}_{S(x,y)}\mathbb{E}_{g(z|x)}\,\ell(h(z,y))$.

Denoting $\alpha(y) = \frac{T(y)}{S(y)}$, we have the $\alpha$-weighted loss:

$$R^\alpha_S(h,g) = T(Y=1)\int_z\ell(h(z,y=1))\,S(z|Y=1)\,dz + T(Y=2)\int_z\ell(h(z,y=2))\,S(z|Y=2)\,dz + \cdots + T(Y=k)\int_z\ell(h(z,y=k))\,S(z|Y=k)\,dz$$

Then we have:

$$R_T(h,g) - R^\alpha_S(h,g) \le \sum_k T(Y=k)\int_z\ell(h(z,y=k))\,d\big|S(z|Y=k) - T(z|Y=k)\big|$$

Under the same assumptions, the loss function $\ell(h(z,Y=k))$ is $KL$-Lipschitz w.r.t. the cost $\|\cdot\|_2$ (for a fixed $k$). Therefore, adopting the same proof strategy (Kantorovich-Rubinstein duality) as in Lemma 2, we have:

$$\le KL\,T(Y=1)\,W_1(S(z|Y=1)\|T(z|Y=1)) + \cdots + KL\,T(Y=k)\,W_1(S(z|Y=k)\|T(z|Y=k)) = KL\,\mathbb{E}_{y\sim T(y)}W_1(S(z|Y=y)\|T(z|Y=y))$$

Therefore we have:

$$R_T(h,g) \le R^\alpha_S(h,g) + LK\,\mathbb{E}_{y\sim T(y)}W_1(S(z|Y=y)\|T(z|Y=y))$$

Based on this result, for each $t = 1,\dots,T$, setting $S = S_t$ and $\alpha(y) = \alpha_t(y) = T(y)/S_t(y)$ gives:

$$\lambda[t]\,R_T(h,g) \le \lambda[t]\,R^{\alpha_t}_{S_t}(h,g) + LK\,\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1(S_t(z|Y=y)\|T(z|Y=y))$$

Summing over $t = 1,\dots,T$, we have:

$$R_T(h,g) \le \sum_{t=1}^{T}\lambda[t]\,R^{\alpha_t}_{S_t}(h,g) + LK\sum_{t=1}^{T}\lambda[t]\,\mathbb{E}_{y\sim T(y)}W_1(S_t(z|Y=y)\|T(z|Y=y))$$

G APPROXIMATING THE W1 DISTANCE

By the Jensen inequality, we have

$$W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big) \le \sqrt{\big[W_2(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y))\big]^2}$$

Supposing $\hat S_t(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y_t,\Sigma)$ and $\hat T(z|Y=y)\approx\mathcal{N}(\mathbf{C}^y,\Sigma)$, we then have:

$$\big[W_2(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y))\big]^2 = \|\mathbf{C}^y_t-\mathbf{C}^y\|_2^2 + \mathrm{Trace}\big(2\Sigma - 2(\Sigma\Sigma)^{1/2}\big) = \|\mathbf{C}^y_t-\mathbf{C}^y\|_2^2$$

We would like to point out that assuming an identical covariance matrix makes the matching more computationally efficient. This is advantageous and reasonable in the deep learning regime: we adopt mini-batches (of size 20 to 128) for optimizing the neural network parameters, and within each mini-batch the number of samples of each class is small, so the empirical covariance matrix computed per batch would surely be biased with respect to the ground-truth covariance and would induce a much higher optimization complexity.
By contrast, the empirical mean is unbiased and computationally efficient: we can simply use a moving average to efficiently update the estimated mean value (with an unbiased estimator). The empirical results verify the effectiveness of this idea.

H PROOF OF LEMMA 1

For each source $S_t$, introducing the duality of the Wasserstein-1 distance, for $y\in\mathcal{Y}$ we have:

$$W_1(S_t(z|y)\|T(z|y)) = \sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim S_t(z|y)}d(z) - \mathbb{E}_{z\sim T(z|y)}d(z) = \sup_{\|d\|_L\le 1}\sum_z S_t(z|y)\,d(z) - \sum_z T(z|y)\,d(z) = \frac{1}{T(y)}\sup_{\|d\|_L\le 1}\frac{T(y)}{S_t(y)}\sum_z S_t(z,y)\,d(z) - \sum_z T(z,y)\,d(z)$$

Then, defining $\bar\alpha_t(z) = \mathbf{1}_{\{(z,y)\sim S_t\}}\frac{T(Y=y)}{S_t(Y=y)} = \mathbf{1}_{\{(z,y)\sim S_t\}}\,\alpha_t(Y=y)$, we see that for each pair $(z,y)$ sampled from the same distribution, $\bar\alpha_t(Z=z) = \alpha_t(Y=y)$. Then we have:

$$\sum_y T(y)\,W_1(S_t(z|y)\|T(z|y)) = \sum_y\sup_{\|d\|_L\le 1}\Big\{\sum_z\alpha_t(y)\,S_t(z,y)\,d(z) - \sum_z T(z,y)\,d(z)\Big\} = \sup_{\|d\|_L\le 1}\sum_z\bar\alpha_t(z)\,S_t(z)\,d(z) - \sum_z T(z)\,d(z) = \sup_{\|d\|_L\le 1}\mathbb{E}_{z\sim S_t(z)}\bar\alpha_t(z)\,d(z) - \mathbb{E}_{z\sim T(z)}d(z)$$

We give a simple example to illustrate $\bar\alpha_t$: supposing three samples $S_t = \{(z_1, Y=1), (z_2, Y=1), (z_3, Y=0)\}$, then $\bar\alpha_t(z_1) = \bar\alpha_t(z_2) = \alpha_t(1)$ and $\bar\alpha_t(z_3) = \alpha_t(0)$. Therefore, the conditional term is equivalent to label-weighted Wasserstein adversarial learning. Plugging in the weight $\lambda[t]$ and domain discriminator $d_t$ for each source domain, we finally obtain Lemma 1.

I DERIVING THE LABEL RATIO LOSS

We suppose the representation learning aims at matching the conditional distributions such that $T(z|y)\approx S_t(z|y)$ for all $t$, and we denote the predicted target label distribution by $\bar T(y)$. Simplifying notation, we define $f(z) = \mathrm{argmax}_y\,h(z,y)$, the most probable predicted label output. Then we have:

$$\bar T(y) = \sum_{k=1}^{\mathcal{Y}} T(f(z)=y\,|\,Y=k)\,T(Y=k) = \sum_{k=1}^{\mathcal{Y}} S_t(f(z)=y\,|\,Y=k)\,T(Y=k) = \sum_{k=1}^{\mathcal{Y}} S_t(f(z)=y, Y=k)\,\alpha_t(k) = \bar T_{\alpha_t}(y)$$

The first equality comes from the definition of the predicted target label distribution: $\bar T(y) = \mathbb{E}_{T(z)}\mathbf{1}\{f(z)=y\} = T(f(z)=y) = \sum_{k=1}^{\mathcal{Y}} T(f(z)=y, Y=k) = \sum_{k=1}^{\mathcal{Y}} T(f(z)=y|Y=k)\,T(Y=k)$. The second equality $T(f(z)=y|Y=k) = S_t(f(z)=y|Y=k)$ holds since $T(z|y)\approx S_t(z|y)$ for all $t$, so for the shared hypothesis $f$ we have $T(f(z)=y|Y=k) = S_t(f(z)=y|Y=k)$. The term $S_t(f(z)=y, Y=k)$ is the (expected) source prediction confusion matrix, and we denote its empirical (observed) version by $\hat S_t(f(z)=y, Y=k)$.

Based on this idea, in practice we want to find an $\hat\alpha_t$ to match the two predicted distributions $\bar T$ and $\bar T_{\hat\alpha_t}$. Adopting the KL divergence as the metric, we have:

$$\min_{\hat\alpha_t} D_{KL}(\bar T\,\|\,\bar T_{\hat\alpha_t}) = \min_{\hat\alpha_t}\mathbb{E}_{y\sim\bar T}\log\Big(\frac{\bar T(y)}{\bar T_{\hat\alpha_t}(y)}\Big) = \min_{\hat\alpha_t}-\mathbb{E}_{y\sim\bar T}\log\big(\bar T_{\hat\alpha_t}(y)\big) = \min_{\hat\alpha_t}-\sum_y\bar T(y)\log\Big(\sum_{k=1}^{\mathcal{Y}} S_t(f(z)=y,Y=k)\,\hat\alpha_t(k)\Big)$$

We should note the natural constraints on the label ratio: $\{\hat\alpha_t(y)\ge 0,\ \sum_y\hat\alpha_t(y)\hat S_t(y) = 1\}$. Based on this principle, we propose the optimization problem for estimating each label ratio. Adopting its empirical counterpart, the empirical confusion matrix $C_{\hat S_t}[y,k] = \hat S_t[f(z)=y, Y=k]$, the optimization loss can be expressed as:

$$\min_{\hat\alpha_t}\; -\sum_{y=1}^{|\mathcal{Y}|}\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}C_{\hat S_t}[y,k]\,\hat\alpha_t(k)\Big)\qquad\text{s.t. }\forall y\in\mathcal{Y},\ \hat\alpha_t(y)\ge 0,\ \sum_y\hat\alpha_t(y)\hat S_t(y) = 1$$

J LABEL-PARTIAL MULTI-SOURCE UNSUPERVISED DA

The key difference between conventional and partial multi-source unsupervised DA is the estimation step of $\hat\alpha_t$. In fact, we only add a sparsity constraint when estimating each $\hat\alpha_t$:

$$\min_{\hat\alpha_t}\; -\sum_{y=1}^{|\mathcal{Y}|}\bar T(y)\log\Big(\sum_{k=1}^{|\mathcal{Y}|}C_{\hat S_t}[y,k]\,\hat\alpha_t(k)\Big) + C_2\|\hat\alpha_t\|_1\qquad\text{s.t. }\forall y\in\mathcal{Y},\ \hat\alpha_t(y)\ge 0,\ \sum_y\hat\alpha_t(y)\hat S_t(y) = 1\qquad(5)$$

where $C_2$ is a hyper-parameter controlling the level of target label sparsity when estimating the target label distribution. In the paper, we set $C_2 = 0.1$.
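To make this estimation step concrete, here is a minimal sketch of the problem above, written with CVXPY (Appendix L below notes that CVXPY is indeed used for these convex subproblems). The function name and argument layout are ours, not the paper's; setting l1_coef > 0 recovers the sparse variant of Eq. (5).

```python
import cvxpy as cp

# Hypothetical sketch: estimate the label ratio alpha_t from the empirical
# confusion matrix C (C[y, k] = S_t[f(z) = y, Y = k]), the predicted target
# label distribution t_bar, and the empirical source label marginal s_t.
def estimate_label_ratio(C, t_bar, s_t, l1_coef=0.0):
    num_classes = C.shape[1]
    alpha = cp.Variable(num_classes, nonneg=True)        # alpha_t(y) >= 0
    # KL-matching objective: -sum_y t_bar(y) * log( (C @ alpha)(y) )
    obj = -cp.sum(cp.multiply(t_bar, cp.log(C @ alpha)))
    if l1_coef > 0:                                      # Eq. (5): partial DA
        obj = obj + l1_coef * cp.norm1(alpha)
    constraints = [cp.sum(cp.multiply(s_t, alpha)) == 1]  # sum_y alpha(y) S_t(y) = 1
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return alpha.value
```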
K EXPLICIT AND IMPLICIT CONDITIONAL LEARNING

Inspired by Theorem 2, we need to learn the functions $g: X\to Z$ and $h: Z\times\mathcal{Y}\to\mathbb{R}$ that minimize:

$$\min_{g,h}\sum_t\lambda[t]\,\hat R^{\hat\alpha_t}_{S_t}(h,g) + C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)$$

This can be equivalently expressed as:

$$\min_{g,h}\sum_t\lambda[t]\,\hat R^{\hat\alpha_t}_{S_t}(h,g) + \epsilon\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big) + (1-\epsilon)\,C_0\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}W_1\big(\hat S_t(z|Y=y)\,\|\,\hat T(z|Y=y)\big)$$

Due to the explicit and implicit approximations of the conditional distance, we then optimize an alternative form:

$$\min_{g,h}\max_{d_1,\dots,d_T}\underbrace{\sum_t\lambda[t]\,\hat R^{\hat\alpha_t}_{S_t}(h,g)}_{\text{Classification Loss}} + \epsilon\,C_0\underbrace{\sum_t\lambda[t]\,\mathbb{E}_{y\sim\hat T(y)}\|\mathbf{C}^y_t-\mathbf{C}^y\|_2}_{\text{Explicit Conditional Loss}} + (1-\epsilon)\,C_0\underbrace{\sum_t\lambda[t]\big[\mathbb{E}_{z\sim\hat S_t(z)}\bar\alpha_t(z)\,d_t(z) - \mathbb{E}_{z\sim\hat T(z)}d_t(z)\big]}_{\text{Implicit Conditional Loss}}\qquad(6)$$

where:
• $\mathbf{C}^y_t = \sum_{(z_t,y_t)\sim\hat S_t}\mathbf{1}_{\{y_t=y\}}\,z_t$ is the centroid of label $Y=y$ in source $S_t$.
• $\mathbf{C}^y = \sum_{(z_t,y_p)\sim\hat T}\mathbf{1}_{\{y_p=y\}}\,z_t$ is the centroid of pseudo-label $Y=y_p$ in the target (in the unsupervised DA scenarios).
• $\bar\alpha_t(z) = \mathbf{1}_{\{(z,y)\sim S_t\}}\,\hat\alpha_t(Y=y)$; namely, for each pair $(z,y)$ from the distribution, $\bar\alpha_t(Z=z) = \hat\alpha_t(Y=y)$.
• $d_1,\dots,d_T$ are domain discriminators (or critic functions) restricted to be 1-Lipschitz.
• $\epsilon\in[0,1]$ is the adjustment parameter trading off explicit and implicit learning. Based on this equivalent form, our approach provides a theoretically principled way of tuning these weights. In the paper, we set $\epsilon = 0.5$.
• $\hat T(y)$ is the empirical target label distribution. (In the unsupervised DA scenarios, we approximate it by the predicted target label distribution $\bar T(y)$.)

Gradient Penalty. In order to enforce the Lipschitz property of the critic functions, we adopt the gradient penalty term (Gulrajani et al., 2017). More concretely, given two samples $z_s\sim S_t(z)$ and $z_t\sim T(z)$, we generate an interpolated sample $z_{int} = \xi z_s + (1-\xi)z_t$ with $\xi\sim\mathrm{Unif}[0,1]$. We then add a gradient penalty $\|\nabla d(z_{int})\|_2^2$ as a regularization term to control the Lipschitz property of the discriminators $d_1,\dots,d_T$.
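As a concrete reference, the gradient penalty above might be implemented in PyTorch roughly as follows. This is a sketch, not the authors' released code; it handles a single critic d on 2-D batches of latent features and, following the text, penalizes the squared gradient norm directly (rather than the (‖∇‖ − 1)² variant of Gulrajani et al., 2017).

```python
import torch

# Sketch of the gradient penalty described above: interpolate
# z_int = xi * z_s + (1 - xi) * z_t and penalize ||grad d(z_int)||^2.
def gradient_penalty(d, z_s, z_t):
    xi = torch.rand(z_s.size(0), 1, device=z_s.device)  # one xi per sample
    z_int = (xi * z_s + (1 - xi) * z_t).requires_grad_(True)
    grads = torch.autograd.grad(d(z_int).sum(), z_int, create_graph=True)[0]
    return grads.pow(2).sum(dim=1).mean()
```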
L ALGORITHM DESCRIPTIONS

We give a detailed pipeline of the proposed algorithm below, shown in Algorithms 2 and 3. As for updating $\lambda$ and $\alpha_t$, we iteratively solve the convex optimization problem after each training epoch and update them using the moving-average technique. When solving for $\lambda$ and $\alpha_t$, we notice that frequently updating these two parameters at the mini-batch level leads to instability during training.⁵ As a consequence, we compute the accumulated confusion matrix, weighted prediction risk, and conditional Wasserstein distance over the whole training epoch and then solve the optimization problem. We use CVXPY to optimize the two standard convex losses.⁶

Comparison of time and memory complexity. Time complexity: in each batch we need to compute $T$ re-weighted losses, $T$ domain adversarial losses, and $T$ explicit conditional losses, so the computational complexity remains $O(T)$ during mini-batch training, which is comparable with recent SOTA methods such as MDAN and DARN. In addition, after each training epoch we need to estimate $\alpha_t$ and $\lambda$, which has time complexity $O(T|\mathcal{Y}|)$ per epoch (if we adopt SGD to solve these two convex problems). Therefore, our proposed algorithm has time complexity $O(T|\mathcal{Y}|)$; the extra $|\mathcal{Y}|$ term is due to the handling of label shift in the designed algorithm. Memory complexity: our proposed approach requires $O(T)$ domain discriminators and $O(T|\mathcal{Y}|)$ class-feature centroids. By contrast, MDAN and DARN require $O(T)$ domain discriminators, while M3SDA and MDMN require $O(T^2)$ domain discriminators. Since our class-feature centroids are defined in the latent space ($z$), the memory cost of the class-feature centroids can be much smaller than that of domain discriminators.

⁵In the label-distribution-shift scenarios, the mini-batch datasets are highly label-imbalanced. If we evaluate $\alpha_t$ over a mini-batch, it can be computationally expensive and unstable.
⁶The optimization problems w.r.t. $\alpha_t$ and $\lambda$ are not large scale, so using a standard convex solver is fast and accurate.

Algorithm 2 Wasserstein Aggregation Domain Network (unsupervised scenario, one iteration)
Require: Labeled source samples $\hat S_1,\dots,\hat S_T$; target samples $\hat T$.
Ensure: Label distribution ratios $\hat\alpha_t$ and task-relation simplex $\lambda$; feature learner $g$; classifier $h$; critic functions $d_1,\dots,d_T$; class centroids $\mathbf{C}^y_t$ (source) and $\mathbf{C}^y$ (target), for all $t\in[1,T]$, $y\in\mathcal{Y}$.
1: /* DNN parameter training stage ($\alpha_t$ and $\lambda$ fixed) */
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat S_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat S_T$, $(\mathbf{x}_T)\sim\hat T$ do
3:   Predict target pseudo-labels $\bar{\mathbf{y}}_T = \mathrm{argmax}_y\,h(g(\mathbf{x}_T), y)$
4:   Compute the (un-normalized) source confusion matrix for each batch: $C_{\hat S_t} = \#[\mathrm{argmax}_{y'}h(z,y') = y,\ Y = k]$ $(t = 1,\dots,T)$
5:   Compute the batch class centroids $\tilde{\mathbf{C}}^y_t$ (source) and $\tilde{\mathbf{C}}^y$ (target).
6:   Update the source/target class centroids by moving average (we set $\epsilon_1 = 0.7$):
7:     Source class centroid update: $\mathbf{C}^y_t = \epsilon_1\,\mathbf{C}^y_t + (1-\epsilon_1)\,\tilde{\mathbf{C}}^y_t$
8:     Target class centroid update: $\mathbf{C}^y = \epsilon_1\,\mathbf{C}^y + (1-\epsilon_1)\,\tilde{\mathbf{C}}^y$
9:   Update $g, h, d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6)
10: end for
11: /* Estimation of $\hat\alpha_t$ and $\lambda$ */
12: Compute the global (normalized) source confusion matrix $C_{\hat S_t} = \hat S_t[\mathrm{argmax}_{y'}h(z,y') = y,\ Y = k]$ $(t = 1,\dots,T)$
13: Solve for $\alpha_t$ (denoted $\{\alpha'_t\}_{t=1}^{T}$) from Equation (2) (or Eq. (5) in the partial scenario).
14: Update $\alpha_t$ by moving average: $\alpha_t = \epsilon_1\,\alpha_t + (1-\epsilon_1)\,\alpha'_t$
15: Compute the weighted loss and weighted centroid distance, then solve for $\lambda$ (denoted $\lambda'$) as in Sec. 2.3.
16: Update $\lambda$ by moving average: $\lambda = 0.8\,\lambda + 0.2\,\lambda'$

Algorithm 3 Wasserstein Aggregation Domain Network (limited target data, one iteration)
Require: Labeled source samples $\hat S_1,\dots,\hat S_T$; target samples $\hat T$; label shift ratios $\alpha_t$.
Ensure: Task-relation simplex $\lambda$; feature learner $g$; classifier $h$; critic functions $d_1,\dots,d_T$; class centroids $\mathbf{C}^y_t$ (source) and $\mathbf{C}^y$ (target), for all $t\in[1,T]$, $y\in\mathcal{Y}$.
1: /* DNN parameter training stage ($\lambda$ fixed) */
2: for mini-batches of samples $(\mathbf{x}_{S_1},\mathbf{y}_{S_1})\sim\hat S_1,\dots,(\mathbf{x}_{S_T},\mathbf{y}_{S_T})\sim\hat S_T$, $(\mathbf{x}_T)\sim\hat T$ do
3:   Compute the batch class centroids $\tilde{\mathbf{C}}^y_t$ (source) and $\tilde{\mathbf{C}}^y$ (target).
4:   Update the source/target class centroids by moving average (we set $\epsilon_1 = 0.7$):
5:     Source class centroid update: $\mathbf{C}^y_t = \epsilon_1\,\mathbf{C}^y_t + (1-\epsilon_1)\,\tilde{\mathbf{C}}^y_t$
6:     Target class centroid update: $\mathbf{C}^y = \epsilon_1\,\mathbf{C}^y + (1-\epsilon_1)\,\tilde{\mathbf{C}}^y$
7:   Update $g, h, d_1,\dots,d_T$ (SGD and gradient reversal), based on Eq. (6).
8: end for
9: /* Estimation of $\lambda$ */
10: Solve for $\lambda$ as in Sec. 2.3 (denoted $\lambda'$).
11: Update $\lambda$ by moving average: $\lambda = \epsilon_1\,\lambda + (1-\epsilon_1)\,\lambda'$
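For illustration, the moving-average centroid updates in the algorithms above could be implemented as follows; the momentum value ε₁ = 0.7 follows the text, while the tensor layout and function name are our assumptions.

```python
import torch

# Sketch: blend per-batch class centroids into the running centroids
# (lines 5-8 of Algorithm 2). `running` is a [num_classes, feat_dim] tensor.
def update_centroids(running, feats, labels, momentum=0.7):
    for y in range(running.size(0)):
        mask = labels == y
        if mask.any():  # only update classes present in this mini-batch
            batch_centroid = feats[mask].mean(dim=0)
            running[y] = momentum * running[y] + (1 - momentum) * batch_centroid
    return running
```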
M DATASET DESCRIPTION AND EXPERIMENTAL DETAILS

M.1 AMAZON REVIEW DATASET

We used the Amazon review dataset (Blitzer et al., 2007). It contains four domains (Books, DVD, Electronics, and Kitchen) with positive (label "1") and negative (label "0") product reviews. The data sizes are 6465 (Books), 5586 (DVD), 7681 (Electronics), and 7945 (Kitchen). We follow the common data pre-processing strategies of Chen et al. (2012): we use bag-of-words (BOW) features and extract the top-5000 most frequent unigrams and bigrams of all the reviews. We also noticed the original data-set are
1. What is the main contribution of the paper regarding label shift in multi-source transfer learning? 2. What are the strengths and weaknesses of the proposed framework for learning with no target labels and limited target labels? 3. How does the paper compare to other methods, particularly DANN and CDANN, in terms of performance and approach? 4. What is the significance of the bounds introduced in the paper, and how do they relate to previous works using Wasserstein distance? 5. How does the paper handle related work, and what is the connection between the proposed method and established models like DANN and CDANN? 6. What are the limitations of the experimental setup, and how could it be improved to provide a more comprehensive comparison with other approaches? 7. Minor questions: a. How does the paper differ from assuming p(y|x) is the same instead of p(x|y)? b. What does "label partial unsupervised domain adaptation" mean?
Review
Review

Summary: The paper is concerned with label shift in the multi-source transfer learning setup. In particular, the authors look into target shift without assuming the conditional distributions to be the same in source and target. They propose a unified framework that can be used for learning with no target labels and with limited target labels. They show that the performance on the target depends on how well we are able to estimate the ratio of label distributions between source and target, on the gap between the real and estimated ratios, and on the weights assigned to each source task. The authors then proceed to show how to learn a model in various settings using the knowledge about the bounds.

Overall, this paper CAN be really impactful, but as of now, it is really hard to follow and to understand how it compares to other methods (not just empirically). The structure of the paper is strange: it is impossible to just read the paper and get enough information on any of the subsections; everything is in the appendix. Related work is missing (see appendix), and the theorem with bounds (which is used throughout the paper) appears without at least a sketch or idea of how the authors arrived at it and where the Wasserstein distance comes from. Since it is not intuitive, and since it is not clear how these bounds compare to the DANN bounds (https://arxiv.org/pdf/1505.07818.pdf, or the Ben-David bounds for that matter), it is really hard to believe in the algorithm the authors derive, even though it seems to give good empirical improvements.

Also, Section K in the appendix is essential for understanding and should be moved to the main paper. Related work absolutely needs to be in the main paper. It is impossible to understand what connection your work has with respect to established models like DANN and CDANN. Additionally, Wasserstein distance was already used in a number of papers: https://arxiv.org/pdf/2009.02831.pdf, https://arxiv.org/pdf/1707.01217.pdf, https://arxiv.org/pdf/1909.08675.pdf. Again, the connection (if any) and the comparison are not clear.

Experiments: For DANN, did I understand correctly that you take all source tasks as one "source" domain and all the target as another domain? Why not consider all the source tasks as separate domains and the target as an additional domain? DANN allows for it easily (CE instead of log loss in the adversarial head). You should be comparing to CDAN (https://arxiv.org/pdf/1705.10667.pdf), which has been shown to be better than DANN. I also didn't understand your experimental setup. For example, on digits, what are your sources and what is your target task? The table shows only the target; does this mean that the source is all the other digits datasets?

Minor: In the intro, you mention "without assuming that conditional distributions are the same". I think it is most common to assume p(y|x) is the same, not p(x|y) as you show. Also, what exactly is "label partial unsupervised domain adaptation"?
ICLR
Title Deep Double Descent: Where Bigger Models and More Data Hurt

Abstract We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity, and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

∗Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. Correspondence email: preetum@cs.harvard.edu
†Equal contribution

1 INTRODUCTION

The bias-variance trade-off is a fundamental concept in classical statistical learning theory (e.g., Hastie et al. (2005)). The idea is that models of higher complexity have lower bias but higher variance. According to this theory, once model complexity passes a certain threshold, models "overfit", with the variance term dominating the test error, and hence from this point onward increasing model complexity will only decrease performance (i.e., increase test error). Hence conventional wisdom in classical statistics is that, once we pass a certain threshold, "larger models are worse." However, modern neural networks exhibit no such phenomenon. Such networks have millions of parameters, more than enough to fit even random labels (Zhang et al. (2016)), and yet they perform much better on many tasks than smaller models. Indeed, conventional wisdom among practitioners is that "larger models are better" (Krizhevsky et al. (2012), Huang et al. (2018), Szegedy et al. (2015), Radford et al. (2019)). The effect of training time on test performance is also up for debate. In some settings, "early stopping" improves test performance, while in other settings training neural networks to zero training error only improves performance. Finally, if there is one thing both classical statisticians and deep learning practitioners agree on, it is that "more data is always better."

In this paper, we present empirical evidence that both reconciles and challenges some of the above "conventional wisdoms." We show that many deep learning settings have two different regimes. In the under-parameterized regime, where the model complexity is small compared to the number of samples, the test error as a function of model complexity follows the U-like behavior predicted by the classical bias/variance tradeoff. However, once model complexity is sufficiently large to interpolate, i.e., to achieve (close to) zero training error, then increasing complexity only decreases test error, following the modern intuition that "bigger models are better". Similar behavior was previously observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). This phenomenon was first postulated in generality by Belkin et al. (2018), who named it "double descent" and demonstrated it for decision trees, random features, and 2-layer neural networks with ℓ2 loss, on a variety of learning tasks including MNIST and CIFAR-10.

Main contributions.
We show that double descent is a robust phenomenon that occurs in a variety of tasks, architectures, and optimization methods (see Figure 1 and Section 5; our experiments are summarized in Table A). Moreover, we propose a much more general notion of "double descent" that goes beyond varying the number of parameters. We define the effective model complexity (EMC) of a training procedure as the maximum number of samples on which it can achieve close to zero training error. The EMC depends not just on the data distribution and the architecture of the classifier but also on the training procedure; in particular, increasing training time will increase the EMC. We hypothesize that for many natural models and learning algorithms, double descent occurs as a function of the EMC. Indeed, we observe "epoch-wise double descent" when we keep the model fixed and increase the training time, with performance following a classical U-like curve in the underfitting stage (when the EMC is smaller than the number of samples) and then improving with training time once the EMC is sufficiently larger than the number of samples (see Figure 2). As a corollary, early stopping only helps in the relatively narrow parameter regime of critically parameterized models.

Sample non-monotonicity. Finally, our results shed light on test performance as a function of the number of train samples. Since the test error peaks around the point where the EMC matches the number of samples (the transition from under- to over-parameterization), increasing the number of samples has the effect of shifting this peak to the right. While in most settings increasing the number of samples decreases error, this shifting effect can sometimes result in a setting where more data is worse! For example, Figure 3 demonstrates cases in which increasing the number of samples by a factor of 4.5 results in worse test performance.
Figure 3: Test loss (per-token perplexity) as a function of Transformer model size (embedding dimension d_model) on language translation (IWSLT '14 German-to-English). The curve for 18k samples is generally lower than the one for 4k samples, but also shifted to the right, since fitting 18k samples requires a larger model. Thus, for some models, the performance for 18k samples is worse than for 4k samples.

2 OUR RESULTS

To state our hypothesis more precisely, we define the notion of effective model complexity. We define a training procedure T to be any procedure that takes as input a set S = {(x₁, y₁), . . . , (xₙ, yₙ)} of labeled training samples and outputs a classifier T(S) mapping data to labels. We define the effective model complexity of T (w.r.t. distribution D) to be the maximum number of samples n on which T achieves on average ≈ 0 training error.

Definition 1 (Effective Model Complexity). The Effective Model Complexity (EMC) of a training procedure T, with respect to distribution D and parameter ε > 0, is defined as:

$$\mathrm{EMC}_{D,\varepsilon}(T) := \max\{\,n \mid \mathbb{E}_{S\sim D^n}[\mathrm{Error}_S(T(S))] \le \varepsilon\,\}$$

where Error_S(M) is the mean error of model M on train samples S.

Our main hypothesis can be informally stated as follows:

Hypothesis 1 (Generalized Double Descent hypothesis, informal). For any natural data distribution D, neural-network-based training procedure T, and small ε > 0, if we consider the task of predicting labels based on n samples from D, then:
Under-parameterized regime. If EMC_{D,ε}(T) is sufficiently smaller than n, any perturbation of T that increases its effective complexity will decrease the test error.
Over-parameterized regime. If EMC_{D,ε}(T) is sufficiently larger than n, any perturbation of T that increases its effective complexity will decrease the test error.
Critically parameterized regime. If EMC_{D,ε}(T) ≈ n, then a perturbation of T that increases its effective complexity might decrease or increase the test error.

Hypothesis 1 is informal in several ways. We do not have a principled way to choose the parameter ε (and currently heuristically use ε = 0.1). We also are yet to have a formal specification of "sufficiently smaller" and "sufficiently larger". Our experiments suggest that there is a critical interval around the interpolation threshold when EMC_{D,ε}(T) = n: below and above this interval, increasing complexity helps performance, while within this interval it may hurt performance. The width of the critical interval depends on both the distribution and the training procedure in ways we do not yet completely understand. We believe Hypothesis 1 sheds light on the interaction between optimization algorithms, model size, and test performance, and helps reconcile some of the competing intuitions about them. The main result of this paper is an experimental validation of Hypothesis 1 under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms, and we changed the "interpolation threshold" by varying the number of model parameters, the length of training, the amount of label noise in the distribution, and the number of train samples.
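To make Definition 1 concrete, the following Python sketch estimates the EMC of a training procedure by searching for the largest train-set size at which it still reaches train error at most ε. The helper train_and_eval is assumed, not from the paper (it should train T on n fresh samples from D and return the train error), and the binary search additionally assumes train error is monotone in n, which the definition itself does not require.

```python
# Hedged sketch: estimate EMC_{D,eps}(T) given a user-supplied train_and_eval(n)
# that trains the procedure T on n fresh samples from D and returns train error.
def effective_model_complexity(train_and_eval, n_max, eps=0.1, n_trials=3):
    lo, hi = 0, n_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        # average train error over a few random draws S ~ D^mid
        err = sum(train_and_eval(n=mid) for _ in range(n_trials)) / n_trials
        if err <= eps:
            lo = mid        # T still (approximately) interpolates mid samples
        else:
            hi = mid - 1    # under-fitting: shrink the candidate n
    return lo
```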
Model-wise Double Descent. In Section 5, we study the test error of models of increasing size, for a fixed large number of optimization steps. We show that "model-wise double descent" occurs for various modern datasets (CIFAR-10, CIFAR-100, IWSLT '14 de-en, with varying amounts of label noise), model architectures (CNNs, ResNets, Transformers), optimizers (SGD, Adam), numbers of train samples, and training procedures (data augmentation and regularization). Moreover, the peak in test error systematically occurs at the interpolation threshold. In particular, we demonstrate realistic settings in which bigger models are worse.

Epoch-wise Double Descent. In Section 6, we study the test error of a fixed, large architecture over the course of training. We demonstrate, in similar settings as above, a corresponding peak in test performance when models are trained just long enough to reach ≈ 0 train error. The test error of a large model first decreases (at the beginning of training), then increases (around the critical regime), then decreases once more (at the end of training); that is, training longer can correct overfitting.

Sample-wise Non-monotonicity. In Section 7, we study the test error of a fixed model and training procedure, for a varying number of train samples. Consistent with our generalized double descent hypothesis, we observe distinct test behavior in the "critical regime", when the number of samples is near the maximum that the model can fit. This often manifests as a long plateau region, in which taking significantly more data might not help when training to completion (as is the case for CNNs on CIFAR-10). Moreover, we show settings (Transformers on IWSLT '14 de-en) where this manifests as a peak; for a fixed architecture and training procedure, more data actually hurts.

Remarks on Label Noise. We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real world). However, we also show several realistic settings with a test-error peak even without label noise: ResNets (Figure 4a) and CNNs (Figure 20) on CIFAR-100, and Transformers on IWSLT '14 (Figure 8). Moreover, all our experiments demonstrate distinctly different test behavior in the critical regime, often manifesting as a "plateau" in the test error in the noiseless case which develops into a peak with added label noise. See Section 8 for further discussion.

3 RELATED WORK

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Subsequently, there has been a large body of work studying the double descent phenomenon. A growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial aspects. First, we extend the idea of double descent beyond the number of parameters to incorporate the training procedure under a unified notion of "Effective Model Complexity", leading to novel insights like epoch-wise double descent and sample non-monotonicity. The notion that increasing train time corresponds to increasing complexity was also presented in Nakkiran et al. (2019).
Second, we provide an extensive and rigorous demonstration of double descent in modern deep learning, spanning a variety of architectures, datasets, and optimization procedures. An extended discussion of the related work is provided in Appendix C.

4 EXPERIMENTAL SETUP

We briefly describe the experimental setup here; full details are in Appendix B.¹ We consider three families of architectures: ResNets, standard CNNs, and Transformers. ResNets: We parameterize a family of ResNet18s (He et al. (2016)) by scaling the width (number of filters) of the convolutional layers. Specifically, we use layer widths [k, 2k, 4k, 8k] for varying k. The standard ResNet18 corresponds to k = 64. Standard CNNs: We consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths [k, 2k, 4k, 8k] for varying k, and a fully-connected layer. For context, the CNN with width k = 64 can reach over 90% test accuracy on CIFAR-10 with data augmentation. Transformers: We consider the 6-layer encoder-decoder from Vaswani et al. (2017), as implemented by Ott et al. (2019). We scale the size of the network by modifying the embedding dimension d_model and setting the width of the fully-connected layers proportionally (d_ff = 4·d_model).

¹The raw data from our experiments are available at: https://gitlab.com/harvard-machine-learning/double-descent/tree/master

For ResNets and CNNs, we train with cross-entropy loss and the following optimizers: (1) Adam with learning rate 0.0001 for 4K epochs; (2) SGD with learning rate ∝ 1/√T for 500K gradient steps. We train Transformers for 80K gradient steps, with 10% label smoothing and no dropout.

Label Noise. In our experiments, label noise of probability p refers to training on samples which have the correct label with probability (1 − p), and a uniformly random incorrect label otherwise (label noise is sampled only once, not per epoch). Figure 1 plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just linear rescalings of one another).
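For concreteness, the label-noise protocol above can be reproduced with a few lines of NumPy; this is our sketch, not the released code. Each flipped label is replaced by a uniformly random incorrect class, and the flip is sampled once, before training.

```python
import numpy as np

# Sketch: apply label noise of probability p, sampled once (not per epoch).
def add_label_noise(labels, p, num_classes, seed=0):
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < p
    # an offset in {1, ..., num_classes - 1} guarantees an *incorrect* label
    offsets = rng.integers(1, num_classes, size=int(flip.sum()))
    noisy[flip] = (noisy[flip] + offsets) % num_classes
    return noisy
```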
5 MODEL-WISE DOUBLE DESCENT

In this section, we study the test error of models of increasing size, when training to completion (for a fixed large number of optimization steps). We demonstrate model-wise double descent across different architectures, datasets, optimizers, and training procedures. The critical region exhibits distinctly different test behavior around the interpolation point, and there is often a peak in test error that becomes more prominent in settings with label noise. For the experiments in this section (Figures 4, 5, 6, 7, 8), notice that all modifications which increase the interpolation threshold (such as adding label noise, using data augmentation, and increasing the number of train samples) also correspondingly shift the peak in test error towards larger models. Additional plots showing the early-stopping behavior of these models, and additional experiments showing double descent in settings with no label noise (e.g., Figure 19), are in Appendix E.2. We also observed model-wise double descent for adversarial training, with a prominent robust test-error peak even in settings without label noise; see Figure 26 in Appendix E.2.

Discussion. Fully understanding the mechanisms behind model-wise double descent in deep neural networks remains an important open question. However, an analog of model-wise double descent occurs even for linear models. A recent stream of theoretical works analyzes this setting (Bartlett et al. (2019); Muthukumar et al. (2019); Belkin et al. (2019); Mei & Montanari (2019); Hastie et al. (2019)). We believe similar mechanisms may be at work in deep neural networks. Informally, our intuition is that, for model sizes at the interpolation threshold, there is effectively only one model that fits the train data, and this interpolating model is very sensitive to noise in the train set and/or model mis-specification. That is, since the model is just barely able to fit the train data, forcing it to fit even slightly noisy or mis-specified labels will destroy its global structure and result in high test error. (See Figure 28 in the Appendix for an experiment demonstrating this noise sensitivity, by showing that ensembling helps significantly in the critically parameterized regime.) However, for over-parameterized models, there are many interpolating models that fit the train set, and SGD is able to find one that "memorizes" (or "absorbs") the noise while still performing well on the distribution. The above intuition is theoretically justified for linear models. In general, this situation manifests even without label noise for linear models (Mei & Montanari (2019)), and occurs whenever there is model mis-specification between the structure of the true distribution and the model family. We believe this intuition extends to deep learning as well, and it is consistent with our experiments.

6 EPOCH-WISE DOUBLE DESCENT

In this section, we demonstrate a novel form of double descent with respect to training epochs, which is consistent with our unified view of effective model complexity (EMC) and the generalized double descent hypothesis. Increasing the train time increases the EMC, and thus a sufficiently large model transitions from under- to over-parameterized over the course of training. As illustrated in Figure 9, sufficiently large models can undergo a "double descent" behavior where test error first decreases, then increases near the interpolation threshold, and then decreases again. In contrast, for "medium-sized" models, for which training to completion will only barely reach ≈ 0 error, the test error as a function of training time will follow a classical U-like curve, where it is better to stop early. Models that are too small to reach the approximation threshold will remain in the "under-parameterized" regime, where increasing train time monotonically decreases test error. Our experiments (Figure 10) show that many settings of dataset and architecture exhibit epoch-wise double descent in the presence of label noise. Further, this phenomenon is robust across optimizer variations and learning rate schedules (see additional experiments in Appendix E.1). As in model-wise double descent, the test error peak is accentuated with label noise. Conventional wisdom suggests that training is split into two phases: (1) in the first phase, the network learns a function with a small generalization gap; (2) in the second phase, the network starts to over-fit the data, leading to an increase in test error. Our experiments suggest that this is not the complete picture: in some regimes, the test error decreases again and may achieve a lower value at the end of training as compared to the first minimum (see Figure 10 for 10% label noise).

7 SAMPLE-WISE NON-MONOTONICITY

In this section, we investigate the effect of varying the number of train samples, for a fixed model and training procedure.
Previously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where EMC_{D,ε}(T) ≈ n, by varying the EMC. Here, we explore the critical regime by varying the number of train samples n. By increasing n, the same training procedure T can switch from being effectively over-parameterized to effectively under-parameterized. We show that increasing the number of samples has two different effects on the test error vs. model complexity graph. On the one hand, (as expected) increasing the number of samples shrinks the area under the curve. On the other hand, increasing the number of samples also has the effect of "shifting the curve to the right" and increasing the model complexity at which test error peaks. These twin effects are shown in Figure 11a. Note that there is a range of model sizes where the effects "cancel out", so that having 4× more train samples does not help test performance when training to completion. Outside the critically parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps. This phenomenon is corroborated in Figure 12, which shows test error as a function of both model and sample size, in the same setting as Figure 11a. In some settings, these two effects combine to yield a regime of model sizes where more data actually hurts test performance, as in Figure 3 (see also Figure 11b). Note that this phenomenon is not unique to DNNs: more data can hurt even for linear models (see Appendix D).

8 CONCLUSION AND DISCUSSION

We introduce a generalized double descent hypothesis: models and training procedures exhibit atypical behavior when their Effective Model Complexity is comparable to the number of train samples. We provide extensive evidence for our hypothesis in modern deep learning settings, and show that it is robust to choices of dataset, architecture, and training procedure. In particular, we demonstrate "model-wise double descent" for modern deep networks and characterize the regime where bigger models can perform worse. We also demonstrate "epoch-wise double descent," which, to the best of our knowledge, has not been previously proposed. Finally, we show that the double descent phenomenon can lead to a regime where training on more data leads to worse test performance. Preliminary results suggest that double descent also holds as we vary the amount of regularization for a fixed model (see Figure 22). We also believe our characterization of the critical regime provides a useful way of thinking for practitioners: if a model and training procedure are just barely able to fit the train set, then small changes to the model or training procedure may yield unexpected behavior (e.g., making the model slightly larger or smaller, or changing regularization, may hurt test performance).

Early stopping. We note that many of the phenomena we highlight often do not occur with optimal early stopping. However, this is consistent with our generalized double descent hypothesis: if early stopping prevents models from reaching 0 train error, then we would not expect to see double descent, since the EMC does not reach the number of train samples. Further, we show at least one setting where model-wise double descent can still occur even with optimal early stopping (ResNets on CIFAR-100 with no label noise; see Figure 19). We have not observed settings where more data hurts when optimal early stopping is used. However, we are not aware of reasons which preclude this from occurring.
We leave fully understanding the optimal early-stopping behavior of double descent as an important open question for future work.

Label Noise. In our experiments, we observe double descent most strongly in settings with label noise. However, we believe this effect is not fundamentally about label noise, but rather about model mis-specification. For example, consider a setting where the label noise is not truly random, but rather pseudorandom (with respect to the family of classifiers being trained). In this setting, the performance of the Bayes optimal classifier would not change (since the pseudorandom noise is deterministic and invertible), but we would observe an identical double descent as with truly random label noise. Thus, we view adding label noise as merely a proxy for making distributions "harder", i.e., for increasing the amount of model mis-specification.

Other Notions of Model Complexity. Our notion of Effective Model Complexity is related to classical complexity notions such as Rademacher complexity, but differs in several crucial ways: (1) EMC depends on the true labels of the data distribution, and (2) EMC depends on the training procedure, not just the model architecture. Other notions of model complexity which do not incorporate features (1) and (2) would not suffice to characterize the location of the double-descent peak. Rademacher complexity, for example, is determined by the ability of a model architecture to fit a randomly-labeled train set. But Rademacher complexity and VC dimension are both insufficient to determine the model-wise double descent peak location, since they do not depend on the distribution of labels, and our experiments show that adding label noise shifts the location of the peak. Moreover, both Rademacher complexity and VC dimension depend only on the model family and data distribution, and not on the training procedure used to find models. Thus, they are not capable of capturing train-time double-descent effects, such as "epoch-wise" double descent, or the effect of data augmentation on the peak location.

ACKNOWLEDGMENTS

We thank Mikhail Belkin for extremely useful discussions in the early stages of this work. We thank Christopher Olah for suggesting the Model Size × Epoch visualization, which led to the investigation of epoch-wise double descent, as well as for useful discussion and feedback. We also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions. P.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work. We thank Dimitris Kalimeris, Benjamin L. Edelman, Sharon Qian, and Aditya Ramesh for comments on an early draft of this work. This work was supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF USICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investigator Award, a Simons Investigator Fellowship, and NSF Awards CCF 1715187, CCF 1565264, CCF 1301976, IIS 1409097, and CNS 1618026. Y.B. would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments.

A SUMMARY TABLE OF EXPERIMENTAL RESULTS

(The "Model" and "Epoch" columns indicate whether model-wise and epoch-wise double descent were observed; ✓ = observed, ✗ = not observed, – = not tested.)

Dataset                  | Architecture | Opt.       | Aug. | % Noise | Model       | Epoch | Figure(s)
CIFAR-10                 | CNN          | SGD        | ✓    | 0       | ✗           | ✗     | 5, 27
CIFAR-10                 | CNN          | SGD        | ✓    | 10      | ✓           | ✓     | 5, 27, 6
CIFAR-10                 | CNN          | SGD        | ✓    | 20      | ✓           | ✓     | 5, 27
CIFAR-10                 | CNN          | SGD        | –    | 0       | ✗           | ✗     | 5, 25
CIFAR-10                 | CNN          | SGD        | –    | 10      | ✓           | ✓     | 5
CIFAR-10                 | CNN          | SGD        | –    | 20      | ✓           | ✓     | 5
CIFAR-10                 | CNN          | SGD + w.d. | ✓    | 20      | ✓           | ✓     | 21
CIFAR-10                 | CNN          | Adam       | –    | 0       | ✓           | –     | 25
CIFAR-10                 | ResNet       | Adam       | ✓    | 0       | ✗           | ✗     | 4, 10
CIFAR-10                 | ResNet       | Adam       | ✓    | 5       | ✓           | –     | 4
CIFAR-10                 | ResNet       | Adam       | ✓    | 10      | ✓           | ✓     | 4, 10
CIFAR-10                 | ResNet       | Adam       | ✓    | 15      | ✓           | ✓     | 4, 2
CIFAR-10                 | ResNet       | Adam       | ✓    | 20      | ✓           | ✓     | 4, 9, 10
CIFAR-10                 | ResNet       | Various    | ✓    | 20      | –           | ✓     | 16, 17, 18
CIFAR-10 (subsampled)    | CNN          | SGD        | ✓    | 10      | ✓           | –     | 11a
CIFAR-10 (subsampled)    | CNN          | SGD        | ✓    | 20      | ✓           | –     | 11a, 12
CIFAR-10 (adversarial)   | ResNet       | SGD        | –    | 0       | Robust err. | –     | 26
CIFAR-100                | ResNet       | Adam       | ✓    | 0       | ✓           | ✗     | 4, 19, 10
CIFAR-100                | ResNet       | Adam       | ✓    | 10      | ✓           | ✓     | 4, 10
CIFAR-100                | ResNet       | Adam       | ✓    | 20      | ✓           | ✓     | 4, 10
CIFAR-100                | CNN          | SGD        | –    | 0       | ✓           | ✗     | 20
IWSLT '14 de-en          | Transformer  | Adam       | –    | 0       | ✓           | ✗     | 8, 24
IWSLT '14 (subsampled)   | Transformer  | Adam       | –    | 0       | ✓           | ✗     | 11b, 23
WMT '14 en-fr            | Transformer  | Adam       | –    | 0       | ✓           | ✗     | 8, 24
B APPENDIX: EXPERIMENTAL DETAILS

B.1 MODELS

We use the following families of architectures. The PyTorch (Paszke et al. (2017)) specification of our ResNets and CNNs is available at https://gitlab.com/harvard-machine-learning/double-descent/tree/master.

ResNets. We define a family of ResNet18s of increasing size as follows. We follow the Preactivation ResNet18 architecture of He et al. (2016), using 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. The layer widths for the 4 blocks are [k, 2k, 4k, 8k] for varying k ∈ ℕ, and the strides are [1, 2, 2, 2]. The standard ResNet18 corresponds to k = 64 convolutional channels in the first layer. The scaling of model size with k is shown in Figure 13b. Our implementation is adapted from https://github.com/kuangliu/pytorch-cifar.

Standard CNNs. We consider a simple family of 5-layer CNNs, with four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer. We scale the four convolutional layer widths as [k, 2k, 4k, 8k]. The MaxPool sizes are [1, 2, 2, 8]. For all the convolution layers, the kernel size is 3, the stride is 1, and the padding is 1. This architecture is based on the "backbone" architecture from Page (2018). For k = 64, this CNN has 1,558,026 parameters and can reach > 90% test accuracy on CIFAR-10 (Krizhevsky (2009)) with data augmentation. The scaling of model size with k is shown in Figure 13a.

Transformers. We consider the encoder-decoder Transformer model from Vaswani et al. (2017) with 6 layers and 8 attention heads per layer, as implemented by fairseq (Ott et al. (2019)). We scale the size of the network by modifying the embedding dimension d_model, and scale the width of the fully-connected layers proportionally (d_ff = 4·d_model). We train with 10% label smoothing and no dropout, for 80K gradient steps.

B.2 IMAGE CLASSIFICATION: EXPERIMENTAL SETUP

We describe the details of training for CNNs and ResNets below.
Loss function: Unless stated otherwise, we use the cross-entropy loss for all the experiments.
Data augmentation: In experiments where data augmentation was used, we apply RandomCrop(32, padding=4) and RandomHorizontalFlip. In experiments with added label noise, all augmentations of a given training sample are given the same label.
Regularization: No explicit regularization such as weight decay or dropout was applied unless explicitly stated.
Initialization: We use the default initialization provided by PyTorch for all the layers.
Optimization:
• Adam: Unless specified otherwise, the learning rate was held constant at 1e-4, and all other parameters were set to their default PyTorch values.
• SGD: Unless specified otherwise, the inverse-square-root learning rate schedule (defined below) was used, with initial learning rate γ₀ = 0.1 and updates every L = 512 gradient steps. No momentum was used.
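As an illustration of the width-scaled CNN family from Section B.1 above, a PyTorch reconstruction might look as follows; it mirrors the stated widths, kernel size, and MaxPool sizes for 32×32 inputs, but it is our sketch rather than the released specification.

```python
import torch.nn as nn

# Sketch of the 5-layer CNN family: widths [k, 2k, 4k, 8k], kernel 3,
# stride 1, padding 1, MaxPool sizes [1, 2, 2, 8], then a linear head.
def make_cnn(k, num_classes=10, in_channels=3):
    widths, pools = [k, 2 * k, 4 * k, 8 * k], [1, 2, 2, 8]
    layers, c = [], in_channels
    for w, p in zip(widths, pools):
        layers += [nn.Conv2d(c, w, kernel_size=3, stride=1, padding=1),
                   nn.BatchNorm2d(w), nn.ReLU(), nn.MaxPool2d(p)]
        c = w
    # a 32x32 input is reduced to 1x1 spatial size, leaving 8k features
    return nn.Sequential(*layers, nn.Flatten(), nn.Linear(8 * k, num_classes))
```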
We found our results to be robust to various other natural choices of optimizers and learning rate schedules. We used the above settings because (1) they optimize well, and (2) they do not require experiment-specific hyperparameter tuning, allowing us to use the same optimization scheme across many experiments.

Batch size: All experiments use a batch size of 128.

Learning rate schedule descriptions:
• Inverse-square root (γ₀, L): At gradient step t, the learning rate is set to γ(t) := γ₀ / √(1 + ⌊t/512⌋). We set the learning rate with respect to the number of gradient steps, and not epochs, in order to allow comparison between experiments with varying train-set sizes.
• Dynamic drop (γ₀, drop, patience): Starts with an initial learning rate of γ₀ and drops it by a factor of "drop" if the training loss has remained constant or become worse for "patience" gradient steps.
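For reference, the inverse-square-root schedule corresponds to the following one-liner (a sketch; the defaults γ₀ = 0.1 and L = 512 follow the text above).

```python
# Sketch of the inverse-square-root schedule:
# gamma(t) = gamma0 / sqrt(1 + floor(t / L)).
def inverse_sqrt_lr(t, gamma0=0.1, L=512):
    return gamma0 / (1 + t // L) ** 0.5
```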
C EXTENDED DISCUSSION OF RELATED WORK

Belkin et al. (2018): This paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve—as model complexity increases, the test error follows the traditional “U-shaped curve”, but beyond the point of interpolation, the error starts to decrease. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of the MNIST, CIFAR-10, SVHN and TIMIT datasets. They use the ℓ2 loss for their experiments. They demonstrate that neural networks are not an aberration in this regard—double descent is a general phenomenon observed also in linear regression with random features and random forests.

Theoretical works on linear least squares regression: A variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces and regularization method.
1. Advani & Saxe (2017) and Hastie et al. (2019) both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit N, D → ∞ using random matrix theory. Hastie et al. (2019) highlight that when the model is mis-specified, the minimum of the training error can occur for over-parameterized models.
2. Belkin et al. (2019) analyze linear least squares regression for two data models, where the input data is sampled from a Gaussian and from a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.
3. Bartlett et al. (2019) provide generalization bounds for the minimum ℓ2-norm interpolant for Gaussian features.
4. Muthukumar et al. (2019) characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.
5. Mei & Montanari (2019) provide an asymptotic analysis for ridge regression over random features.

Similar double descent behavior, in restricted settings, was investigated in Trunk (1979); Opper (1995; 2001); Skurichina & Duin (2002). Neal et al. (2018) conduct a study of bias and variance in modern neural networks, observing that both bias and variance can decrease with increasing model size, contrary to conventional wisdom. Geiger et al. (2019b) showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a “jamming transition” when the number of parameters exceeds a threshold that allows training to near-zero train loss. Geiger et al. (2019a) provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our informal intuitions in Section 5 and our experiments in Figures 28, 29. Advani & Saxe (2017); Geiger et al. (2019b;a) also point out that double descent is not observed when optimal early-stopping is used. The study of sample non-monotonicity in learning algorithms also existed prior to double descent, including in Duin (1995; 2000); Opper (2001); Loog & Duin (2012).

D RANDOM FEATURES: A CASE STUDY

In this section, for completeness’ sake, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks—they exist even in the setting of Random Fourier Features of Rahimi & Recht (2008). This setting is equivalent to a two-layer neural network with e^{−ix} activation. The first layer is initialized with a N(0, 1/d) Gaussian distribution and then fixed throughout training. The width (or embedding dimension) d of the first layer parameterizes the model size. The second layer is initialized with 0s and trained with MSE loss. Figure 14 shows the grid of Test Error as a function of both the number of samples n and the model size d.
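A small NumPy sketch of this random-features setup, under our literal reading of the description: complex e^{−i⟨w,x⟩} features, a first layer drawn from N(0, 1/d) and frozen, and a second layer fit to the MSE loss. np.linalg.lstsq returns the minimum-norm least-squares solution when n < d, which is also where gradient descent from a zero initialization converges, so it stands in for training the zero-initialized second layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_first_layer(input_dim, d):
    # Fixed first layer: entries drawn from N(0, 1/d) and never trained
    return rng.normal(0.0, 1.0 / np.sqrt(d), size=(input_dim, d))

def features(X, W):
    # e^{-i <w, x>} activation, kept complex-valued for fidelity to the text
    return np.exp(-1j * X @ W)

def fit_second_layer(X, y, W):
    """Second layer starts at 0 and is fit to the MSE loss; lstsq returns the
    least-squares solution (minimum-norm when n < d)."""
    a, *_ = np.linalg.lstsq(features(X, W), y.astype(complex), rcond=None)
    return a

def predict(X, W, a):
    return (features(X, W) @ a).real
```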
Note that in this setting EMC = d (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of n = d. Both the model-wise and the sample-wise (see Figure 15) double descent phenomena are captured, by horizontally and vertically crossing the grid, respectively.

Figure 15: Sample-wise double-descent slice for Random Fourier Features on the Fashion-MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.

E APPENDIX: ADDITIONAL EXPERIMENTS

E.1 EPOCH-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

Here, we provide a rigorous evaluation of epoch-wise double descent for a variety of optimizers and learning rate schedules. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise, with three different optimizers—Adam, SGD, and SGD + Momentum (momentum set to 0.9)—and three different learning rate schedules—constant, inverse-square root, and dynamic drop—for different values of the initial learning rate. We observe that double descent occurs reliably for all optimizers and learning rate schedules, and the peak of the double descent curve shifts with the interpolation point. A practical recommendation resulting from epoch-wise double descent is that stopping the training when the test error starts to increase may not always be the best strategy. In some cases, the test error may decrease again after reaching a maximum, and the final value may be lower than the minimum earlier in training.

E.2 MODEL-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

E.2.1 CLEAN SETTINGS WITH MODEL-WISE DOUBLE DESCENT

[Figures: CIFAR-100, ResNet18; CIFAR-100, Standard CNN]

E.2.2 WEIGHT DECAY

Here, we study the effect of varying the level of regularization on test error. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise, for weight decay coefficients λ ranging from 0 to 0.1. We train the networks using SGD with the inverse-square root learning rate schedule. The figure below shows a picture qualitatively very similar to that observed for model-wise double descent, wherein “model complexity” is now controlled by the regularization parameter. This confirms our generalized double descent hypothesis along yet another axis of Effective Model Complexity.

E.2.3 EARLY STOPPING DOES NOT EXHIBIT DOUBLE DESCENT

[Figures: Language models; CIFAR-10, 10% noise, SGD]

E.2.4 TRAINING PROCEDURE

E.3 ENSEMBLING
1. What is the focus of the paper, and what does it aim to achieve?
2. What is the significance of the concept of Effective Model Complexity (EMC)?
3. How does the paper empirically demonstrate the double descent phenomenon's dependence on EMC?
4. What insights does the paper offer regarding the observed phenomena?
5. Are there any limitations or areas for improvement in the paper's approach or conclusions?
Review
Review The paper defines the effective model complexity (EMC), a measure that captures the complexity of the model. EMC depends on several factors, such as the data distribution and the architecture of the classifier. The paper empirically shows that the double descent phenomenon occurs as a function of EMC. The paper provides interesting perspectives on its experiments and gives intuitions for these observations; however, these are mainly hypotheses. I like the paper for its rigorous experiments and for providing intuitions from its observations. However, I am very new to this area of research and can only provide a review based on my understanding.
ICLR
Title Deep Double Descent: Where Bigger Models and More Data Hurt

Abstract We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

1 INTRODUCTION

The bias-variance trade-off is a fundamental concept in classical statistical learning theory (e.g., Hastie et al. (2005)). The idea is that models of higher complexity have lower bias but higher variance. According to this theory, once model complexity passes a certain threshold, models “overfit”, with the variance term dominating the test error, and hence from this point onward increasing model complexity will only decrease performance (i.e., increase test error). Hence conventional wisdom in classical statistics is that, once we pass a certain threshold, “larger models are worse.”

However, modern neural networks exhibit no such phenomenon. Such networks have millions of parameters, more than enough to fit even random labels (Zhang et al. (2016)), and yet they perform much better on many tasks than smaller models. Indeed, conventional wisdom among practitioners is that “larger models are better” (Krizhevsky et al. (2012), Huang et al. (2018), Szegedy et al. (2015), Radford et al. (2019)). The effect of training time on test performance is also up for debate. In some settings, “early stopping” improves test performance, while in other settings training neural networks to zero training error only improves performance. Finally, if there is one thing both classical statisticians and deep learning practitioners agree on, it is that “more data is always better”.

∗ Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. Correspondence Email: preetum@cs.harvard.edu
† Equal contribution

In this paper, we present empirical evidence that both reconciles and challenges some of the above “conventional wisdoms.” We show that many deep learning settings have two different regimes. In the under-parameterized regime, where the model complexity is small compared to the number of samples, the test error as a function of model complexity follows the U-like behavior predicted by the classical bias/variance tradeoff. However, once model complexity is sufficiently large to interpolate, i.e., achieve (close to) zero training error, then increasing complexity only decreases test error, following the modern intuition of “bigger models are better”. Similar behavior was previously observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). This phenomenon was first postulated in generality by Belkin et al. (2018), who named it “double descent” and demonstrated it for decision trees, random features, and 2-layer neural networks with ℓ2 loss, on a variety of learning tasks including MNIST and CIFAR-10.

Main contributions.
We show that double descent is a robust phenomenon that occurs in a variety of tasks, architectures, and optimization methods (see Figure 1 and Section 5; our experiments are summarized in Table A). Moreover, we propose a much more general notion of “double descent” that goes beyond varying the number of parameters. We define the effective model complexity (EMC) of a training procedure as the maximum number of samples on which it can achieve close to zero training error. The EMC depends not just on the data distribution and the architecture of the classifier but also on the training procedure—in particular, increasing training time will increase the EMC. We hypothesize that for many natural models and learning algorithms, double descent occurs as a function of the EMC. Indeed, we observe “epoch-wise double descent” when we keep the model fixed and increase the training time, with performance following a classical U-like curve in the underfitting stage (when the EMC is smaller than the number of samples) and then improving with training time once the EMC is sufficiently larger than the number of samples (see Figure 2). As a corollary, early stopping only helps in the relatively narrow parameter regime of critically parameterized models.

Sample non-monotonicity. Finally, our results shed light on test performance as a function of the number of train samples. Since the test error peaks around the point where the EMC matches the number of samples (the transition from under- to over-parameterization), increasing the number of samples has the effect of shifting this peak to the right. While in most settings increasing the number of samples decreases error, this shifting effect can sometimes result in a setting where more data is worse! For example, Figure 3 demonstrates cases in which increasing the number of samples by a factor of 4.5 results in worse test performance.
Figure 3: Test loss (per-token perplexity) as a function of Transformer model size (embedding dimension dmodel) on language translation (IWSLT ’14 German-to-English). The curve for 18k samples is generally lower than the one for 4k samples, but it is also shifted to the right, since fitting 18k samples requires a larger model. Thus, for some models, the performance for 18k samples is worse than for 4k samples.

2 OUR RESULTS

To state our hypothesis more precisely, we define the notion of effective model complexity. We define a training procedure T to be any procedure that takes as input a set S = {(x1, y1), . . . , (xn, yn)} of labeled training samples and outputs a classifier T(S) mapping data to labels. We define the effective model complexity of T (w.r.t. distribution D) to be the maximum number of samples n on which T achieves on average ≈ 0 training error.

Definition 1 (Effective Model Complexity) The Effective Model Complexity (EMC) of a training procedure T, with respect to distribution D and parameter ε > 0, is defined as

EMC_{D,ε}(T) := max { n | E_{S∼D^n}[Error_S(T(S))] ≤ ε }

where Error_S(M) is the mean error of model M on train samples S.

Our main hypothesis can be informally stated as follows:

Hypothesis 1 (Generalized Double Descent hypothesis, informal) For any natural data distribution D, neural-network-based training procedure T, and small ε > 0, if we consider the task of predicting labels based on n samples from D, then:

Under-parameterized regime. If EMC_{D,ε}(T) is sufficiently smaller than n, any perturbation of T that increases its effective complexity will decrease the test error.

Over-parameterized regime. If EMC_{D,ε}(T) is sufficiently larger than n, any perturbation of T that increases its effective complexity will decrease the test error.

Critically parameterized regime. If EMC_{D,ε}(T) ≈ n, then a perturbation of T that increases its effective complexity might decrease or increase the test error.

Hypothesis 1 is informal in several ways. We do not have a principled way to choose the parameter ε (we currently, heuristically, use ε = 0.1). We also do not yet have a formal specification for “sufficiently smaller” and “sufficiently larger”. Our experiments suggest that there is a critical interval around the interpolation threshold where EMC_{D,ε}(T) = n: below and above this interval increasing complexity helps performance, while within this interval it may hurt performance. The width of the critical interval depends on both the distribution and the training procedure in ways we do not yet completely understand.

We believe Hypothesis 1 sheds light on the interaction between optimization algorithms, model size, and test performance, and helps reconcile some of the competing intuitions about them. The main result of this paper is an experimental validation of Hypothesis 1 under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms, and we changed the “interpolation threshold” by varying the number of model parameters, the length of training, the amount of label noise in the distribution, and the number of train samples.
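Definition 1 also suggests a direct, if expensive, empirical estimate of the EMC: sweep the number of samples n and return the largest value at which the procedure still reaches train error at most ε. A sketch with the heuristic ε = 0.1; train_procedure and sample_train_set are placeholders standing in for T and for sampling S ∼ D^n:

```python
def estimate_emc(train_procedure, sample_train_set, candidate_ns, eps=0.1, trials=3):
    """Estimate EMC_{D,eps}(T) over a sorted list of candidate sample sizes.

    train_procedure(S) must run T on the train set S and return the final
    train error of T(S) on S; sample_train_set(n) draws S ~ D^n.
    """
    emc = 0
    for n in candidate_ns:
        mean_err = sum(train_procedure(sample_train_set(n))
                       for _ in range(trials)) / trials
        if mean_err <= eps:
            emc = n      # T still reaches ~0 train error on n samples
        else:
            break        # assumes fitting only gets harder as n grows
    return emc
```

The early break encodes the (heuristic) assumption that once T fails to fit n samples, it will also fail for larger n.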
Model-wise Double Descent. In Section 5, we study the test error of models of increasing size, for a fixed large number of optimization steps. We show that “model-wise double descent” occurs for various modern datasets (CIFAR-10, CIFAR-100, IWSLT ’14 de-en, with varying amounts of label noise), model architectures (CNNs, ResNets, Transformers), optimizers (SGD, Adam), numbers of train samples, and training procedures (data-augmentation and regularization). Moreover, the peak in test error systematically occurs at the interpolation threshold. In particular, we demonstrate realistic settings in which bigger models are worse.

Epoch-wise Double Descent. In Section 6, we study the test error of a fixed, large architecture over the course of training. We demonstrate, in similar settings as above, a corresponding peak in test performance when models are trained just long enough to reach ≈ 0 train error. The test error of a large model first decreases (at the beginning of training), then increases (around the critical regime), then decreases once more (at the end of training)—that is, training longer can correct overfitting.

Sample-wise Non-monotonicity. In Section 7, we study the test error of a fixed model and training procedure, for a varying number of train samples. Consistent with our generalized double-descent hypothesis, we observe distinct test behavior in the “critical regime”, when the number of samples is near the maximum that the model can fit. This often manifests as a long plateau region, in which taking significantly more data might not help when training to completion (as is the case for CNNs on CIFAR-10). Moreover, we show settings (Transformers on IWSLT ’14 de-en) where this manifests as a peak—and for a fixed architecture and training procedure, more data actually hurts.

Remarks on Label Noise. We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real world). However, we also show several realistic settings with a test-error peak even without label noise: ResNets (Figure 4a) and CNNs (Figure 20) on CIFAR-100; Transformers on IWSLT ’14 (Figure 8). Moreover, all our experiments demonstrate distinctly different test behavior in the critical regime—often manifesting as a “plateau” in the test error in the noiseless case which develops into a peak with added label noise. See Section 8 for further discussion.

3 RELATED WORK

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Subsequently, there has been a large body of work studying the double descent phenomenon. A growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial aspects. First, we extend the idea of double descent beyond the number of parameters to incorporate the training procedure under a unified notion of “Effective Model Complexity”, leading to novel insights like epoch-wise double descent and sample non-monotonicity. The notion that increasing train time corresponds to increasing complexity was also presented in Nakkiran et al. (2019). Second, we provide an extensive and rigorous demonstration of double descent in modern deep learning, spanning a variety of architectures, datasets, and optimization procedures. An extended discussion of the related work is provided in Appendix C.
4 EXPERIMENTAL SETUP

We briefly describe the experimental setup here; full details are in Appendix B.[1] We consider three families of architectures: ResNets, standard CNNs, and Transformers.

ResNets: We parameterize a family of ResNet18s (He et al. (2016)) by scaling the width (number of filters) of the convolutional layers. Specifically, we use layer widths [k, 2k, 4k, 8k] for varying k. The standard ResNet18 corresponds to k = 64.

Standard CNNs: We consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths [k, 2k, 4k, 8k] for varying k, and a fully-connected layer. For context, the CNN with width k = 64 can reach over 90% test accuracy on CIFAR-10 with data-augmentation.

Transformers: We consider the 6-layer encoder-decoder from Vaswani et al. (2017), as implemented by Ott et al. (2019). We scale the size of the network by modifying the embedding dimension dmodel, and setting the width of the fully-connected layers proportionally (dff = 4 · dmodel).

[1] The raw data from our experiments are available at: https://gitlab.com/harvard-machine-learning/double-descent/tree/master

For ResNets and CNNs, we train with cross-entropy loss and the following optimizers: (1) Adam with learning rate 0.0001 for 4K epochs; (2) SGD with learning rate ∝ 1/√T for 500K gradient steps. We train Transformers for 80K gradient steps, with 10% label smoothing and no drop-out.

Label Noise. In our experiments, label noise of probability p refers to training on samples which have the correct label with probability (1 − p), and a uniformly random incorrect label otherwise (label noise is sampled only once, not per epoch). Figure 1 plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just linear rescalings of one another).
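A minimal sketch of this label-noise procedure, applied once to the train labels before training; the function name and the NumPy representation are ours:

```python
import numpy as np

def add_label_noise(labels, p, num_classes, seed=0):
    """Corrupt each label independently with probability p, sampled once
    (not per epoch): flip to a uniformly random *incorrect* class.

    labels: integer NumPy array of clean class indices.
    """
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    # draw an offset in {1, ..., num_classes-1} so the new label always differs
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels
```

Adding a uniform offset in {1, . . . , K−1} modulo K guarantees the corrupted label is uniform over the K−1 incorrect classes.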
5 MODEL-WISE DOUBLE DESCENT

In this section, we study the test error of models of increasing size, when training to completion (for a fixed large number of optimization steps). We demonstrate model-wise double descent across different architectures, datasets, optimizers, and training procedures. The critical region exhibits distinctly different test behavior around the interpolation point, and there is often a peak in test error that becomes more prominent in settings with label noise.

For the experiments in this section (Figures 4, 5, 6, 7, 8), notice that all modifications which increase the interpolation threshold (such as adding label noise, using data augmentation, and increasing the number of train samples) also correspondingly shift the peak in test error towards larger models. Additional plots showing the early-stopping behavior of these models, and additional experiments showing double descent in settings with no label noise (e.g. Figure 19), are in Appendix E.2. We also observed model-wise double descent for adversarial training, with a prominent robust test error peak even in settings without label noise. See Figure 26 in Appendix E.2.

Discussion. Fully understanding the mechanisms behind model-wise double descent in deep neural networks remains an important open question. However, an analog of model-wise double descent occurs even for linear models. A recent stream of theoretical works analyzes this setting (Bartlett et al. (2019); Muthukumar et al. (2019); Belkin et al. (2019); Mei & Montanari (2019); Hastie et al. (2019)). We believe similar mechanisms may be at work in deep neural networks. Informally, our intuition is that for model sizes at the interpolation threshold, there is effectively only one model that fits the train data, and this interpolating model is very sensitive to noise in the train set and/or model mis-specification. That is, since the model is just barely able to fit the train data, forcing it to fit even slightly noisy or mis-specified labels will destroy its global structure, and result in high test error. (See Figure 28 in the Appendix for an experiment demonstrating this noise sensitivity, by showing that ensembling helps significantly in the critically parameterized regime.) However, for over-parameterized models, there are many interpolating models that fit the train set, and SGD is able to find one that “memorizes” (or “absorbs”) the noise while still performing well on the distribution.

The above intuition is theoretically justified for linear models. In general, this situation manifests even without label noise for linear models (Mei & Montanari (2019)), and occurs whenever there is model mis-specification between the structure of the true distribution and the model family. We believe this intuition extends to deep learning as well, and it is consistent with our experiments.

6 EPOCH-WISE DOUBLE DESCENT

In this section, we demonstrate a novel form of double descent with respect to training epochs, which is consistent with our unified view of effective model complexity (EMC) and the generalized double descent hypothesis. Increasing the train time increases the EMC—and thus a sufficiently large model transitions from under- to over-parameterized over the course of training. As illustrated in Figure 9, sufficiently large models can undergo a “double descent” behavior where test error first decreases, then increases near the interpolation threshold, and then decreases again. In contrast, for “medium-sized” models, for which training to completion will only barely reach ≈ 0 error, the test error as a function of training time will follow a classical U-like curve where it is better to stop early. Models that are too small to reach the approximation threshold will remain in the “under-parameterized” regime where increasing train time monotonically decreases test error. Our experiments (Figure 10) show that many settings of dataset and architecture exhibit epoch-wise double descent, in the presence of label noise. Further, this phenomenon is robust across optimizer variations and learning rate schedules (see additional experiments in Appendix E.1). As in model-wise double descent, the test error peak is accentuated with label noise.

Conventional wisdom suggests that training is split into two phases: (1) in the first phase, the network learns a function with a small generalization gap; (2) in the second phase, the network starts to over-fit the data, leading to an increase in test error. Our experiments suggest that this is not the complete picture—in some regimes, the test error decreases again and may achieve a lower value at the end of training as compared to the first minimum (see Figure 10 for 10% label noise).

7 SAMPLE-WISE NON-MONOTONICITY

In this section, we investigate the effect of varying the number of train samples, for a fixed model and training procedure.
Previously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where EMC_{D,ε}(T) ≈ n, by varying the EMC. Here, we explore the critical regime by varying the number of train samples n. By increasing n, the same training procedure T can switch from being effectively over-parameterized to effectively under-parameterized. We show that increasing the number of samples has two different effects on the test error vs. model complexity graph. On the one hand, (as expected) increasing the number of samples shrinks the area under the curve. On the other hand, increasing the number of samples also has the effect of “shifting the curve to the right” and increasing the model complexity at which test error peaks. These twin effects are shown in Figure 11a. Note that there is a range of model sizes where the effects “cancel out”—and having 4× more train samples does not help test performance when training to completion. Outside the critically parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps. This phenomenon is corroborated in Figure 12, which shows test error as a function of both model and sample size, in the same setting as Figure 11a. In some settings, these two effects combine to yield a regime of model sizes where more data actually hurts test performance, as in Figure 3 (see also Figure 11b). Note that this phenomenon is not unique to DNNs: more data can hurt even for linear models (see Appendix D).

8 CONCLUSION AND DISCUSSION

We introduce a generalized double descent hypothesis: models and training procedures exhibit atypical behavior when their Effective Model Complexity is comparable to the number of train samples. We provide extensive evidence for our hypothesis in modern deep learning settings, and show that it is robust to choices of dataset, architecture, and training procedure. In particular, we demonstrate “model-wise double descent” for modern deep networks and characterize the regime where bigger models can perform worse. We also demonstrate “epoch-wise double descent,” which, to the best of our knowledge, has not been previously proposed. Finally, we show that the double descent phenomenon can lead to a regime where training on more data leads to worse test performance. Preliminary results suggest that double descent also holds as we vary the amount of regularization for a fixed model (see Figure 22).

We also believe our characterization of the critical regime provides a useful way of thinking for practitioners—if a model and training procedure are just barely able to fit the train set, then small changes to the model or training procedure may yield unexpected behavior (e.g., making the model slightly larger or smaller, or changing regularization, may hurt test performance).

Early stopping. We note that many of the phenomena that we highlight often do not occur with optimal early stopping. However, this is consistent with our generalized double descent hypothesis: if early stopping prevents models from reaching 0 train error, then we would not expect to see double descent, since the EMC does not reach the number of train samples. Further, we show at least one setting where model-wise double descent can still occur even with optimal early stopping (ResNets on CIFAR-100 with no label noise, see Figure 19). We have not observed settings where more data hurts when optimal early stopping is used. However, we are not aware of reasons which preclude this from occurring.
We leave fully understanding the optimal early stopping behavior of double descent as an important open question for future work.

Label Noise. In our experiments, we observe double descent most strongly in settings with label noise. However, we believe this effect is not fundamentally about label noise, but rather about model mis-specification. For example, consider a setting where the label noise is not truly random, but rather pseudorandom (with respect to the family of classifiers being trained). In this setting, the performance of the Bayes optimal classifier would not change (since the pseudorandom noise is deterministic and invertible), but we would observe an identical double descent as with truly random label noise. Thus, we view adding label noise as merely a proxy for making distributions “harder”—i.e., increasing the amount of model mis-specification.

Other Notions of Model Complexity. Our notion of Effective Model Complexity is related to classical complexity notions such as Rademacher complexity, but differs in several crucial ways: (1) EMC depends on the true labels of the data distribution, and (2) EMC depends on the training procedure, not just the model architecture. Other notions of model complexity which do not incorporate features (1) and (2) would not suffice to characterize the location of the double-descent peak. Rademacher complexity, for example, is determined by the ability of a model architecture to fit a randomly-labeled train set. But Rademacher complexity and VC dimension are both insufficient to determine the model-wise double descent peak location, since they do not depend on the distribution of labels—and our experiments show that adding label noise shifts the location of the peak. Moreover, both Rademacher complexity and VC dimension depend only on the model family and data distribution, and not on the training procedure used to find models. Thus, they are not capable of capturing train-time double-descent effects, such as “epoch-wise” double descent, or the effect of data-augmentation on the peak location.

ACKNOWLEDGMENTS We thank Mikhail Belkin for extremely useful discussions in the early stages of this work. We thank Christopher Olah for suggesting the Model Size × Epoch visualization, which led to the investigation of epoch-wise double descent, as well as for useful discussion and feedback. We also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions. P.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work. We thank Dimitris Kalimeris, Benjamin L. Edelman, Sharon Qian, and Aditya Ramesh for comments on an early draft of this work. This work was supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF USICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investigator Award, a Simons Investigator Fellowship, and NSF Awards CCF 1715187, CCF 1565264, CCF 1301976, IIS 1409097, and CNS 1618026. Y.B. would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments.

A SUMMARY TABLE OF EXPERIMENTAL RESULTS

Dataset | Architecture | Opt. | Aug. | % Noise | Model-wise DD | Epoch-wise DD | Figure(s)
(✓ = yes/observed, ✗ = no, – = not applicable)
CIFAR-10 | CNN | SGD | ✓ | 0 | ✗ | ✗ | 5, 27
CIFAR-10 | CNN | SGD | ✓ | 10 | ✓ | ✓ | 5, 27, 6
CIFAR-10 | CNN | SGD | ✓ | 20 | ✓ | ✓ | 5, 27
CIFAR-10 | CNN | SGD | – | 0 | ✗ | ✗ | 5, 25
CIFAR-10 | CNN | SGD | – | 10 | ✓ | ✓ | 5
CIFAR-10 | CNN | SGD | – | 20 | ✓ | ✓ | 5
CIFAR-10 | CNN | SGD + w.d. | ✓ | 20 | ✓ | ✓ | 21
CIFAR-10 | CNN | Adam | – | 0 | ✓ | – | 25
CIFAR-10 | ResNet | Adam | ✓ | 0 | ✗ | ✗ | 4, 10
CIFAR-10 | ResNet | Adam | ✓ | 5 | ✓ | – | 4
CIFAR-10 | ResNet | Adam | ✓ | 10 | ✓ | ✓ | 4, 10
CIFAR-10 | ResNet | Adam | ✓ | 15 | ✓ | ✓ | 4, 2
CIFAR-10 | ResNet | Adam | ✓ | 20 | ✓ | ✓ | 4, 9, 10
CIFAR-10 | ResNet | Various | ✓ | 20 | – | ✓ | 16, 17, 18
CIFAR-10 (subsampled) | CNN | SGD | ✓ | 10 | ✓ | – | 11a
CIFAR-10 (subsampled) | CNN | SGD | ✓ | 20 | ✓ | – | 11a, 12
CIFAR-10 (adversarial) | ResNet | SGD | – | 0 | Robust err. | – | 26
CIFAR-100 | ResNet | Adam | ✓ | 0 | ✓ | ✗ | 4, 19, 10
CIFAR-100 | ResNet | Adam | ✓ | 10 | ✓ | ✓ | 4, 10
CIFAR-100 | ResNet | Adam | ✓ | 20 | ✓ | ✓ | 4, 10
CIFAR-100 | CNN | SGD | – | 0 | ✓ | ✗ | 20
IWSLT ’14 de-en | Transformer | Adam | – | 0 | ✓ | ✗ | 8, 24
IWSLT ’14 de-en (subsampled) | Transformer | Adam | – | 0 | ✓ | ✗ | 11b, 23
WMT ’14 en-fr | Transformer | Adam | – | 0 | ✓ | ✗ | 8, 24

B APPENDIX: EXPERIMENTAL DETAILS

B.1 MODELS

We use the following families of architectures. The PyTorch (Paszke et al. (2017)) specification of our ResNets and CNNs is available at https://gitlab.com/harvard-machine-learning/double-descent/tree/master.

ResNets. We define a family of ResNet18s of increasing size as follows. We follow the Preactivation ResNet18 architecture of He et al. (2016), using 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. The layer widths for the 4 blocks are [k, 2k, 4k, 8k] for varying k ∈ N, and the strides are [1, 2, 2, 2]. The standard ResNet18 corresponds to k = 64 convolutional channels in the first layer. The scaling of model size with k is shown in Figure 13b. Our implementation is adapted from https://github.com/kuangliu/pytorch-cifar.

Standard CNNs. We consider a simple family of 5-layer CNNs, with four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer. We scale the four convolutional layer widths as [k, 2k, 4k, 8k]. The MaxPool sizes are [1, 2, 2, 8]. For all the convolution layers, the kernel size is 3, the stride is 1, and the padding is 1. This architecture is based on the “backbone” architecture from Page (2018). For k = 64, this CNN has 1,558,026 parameters and can reach > 90% test accuracy on CIFAR-10 (Krizhevsky (2009)) with data-augmentation. The scaling of model size with k is shown in Figure 13a.

Transformers. We consider the encoder-decoder Transformer model from Vaswani et al. (2017) with 6 layers and 8 attention heads per layer, as implemented by fairseq (Ott et al. (2019)). We scale the size of the network by modifying the embedding dimension (dmodel) and scaling the width of the fully-connected layers proportionally (dff = 4 · dmodel). We train with 10% label smoothing and no drop-out, for 80K gradient steps.

B.2 IMAGE CLASSIFICATION: EXPERIMENTAL SETUP

We describe the details of training for CNNs and ResNets below.

Loss function: Unless stated otherwise, we use the cross-entropy loss for all the experiments.

Data-augmentation: In experiments where data-augmentation was used, we apply RandomCrop(32, padding=4) and RandomHorizontalFlip. In experiments with added label noise, all augmentations of a given training sample are given the same label.

Regularization: No explicit regularization like weight decay or dropout was applied unless explicitly stated.

Initialization: We use the default initialization provided by PyTorch for all the layers.

Optimization:
• Adam: Unless specified otherwise, the learning rate was held constant at 1e−4 and all other parameters were set to their default PyTorch values.
• SGD: Unless specified otherwise, the inverse-square root learning rate schedule (defined below) was used, with initial learning rate γ0 = 0.1 and updates every L = 512 gradient steps. No momentum was used.

We found our results are robust to various other natural choices of optimizers and learning rate schedules. We used the above settings because (1) they optimize well, and (2) they do not require experiment-specific hyperparameter tuning, allowing us to use the same optimization across many experiments.

Batch size: All experiments use a batch size of 128.

Learning rate schedule descriptions:
• Inverse-square root (γ0, L): At gradient step t, the learning rate is set to γ(t) := γ0 / √(1 + ⌊t/L⌋). We set the learning rate with respect to the number of gradient steps, not epochs, in order to allow comparison between experiments with varying train-set sizes.
• Dynamic drop (γ0, drop, patience): Starts with an initial learning rate of γ0 and drops it by a factor of ‘drop’ if the training loss has remained constant or become worse for ‘patience’ gradient steps.
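As a concrete rendering of the 5-layer CNN family from B.1 above, a minimal PyTorch sketch; the class and helper names are ours, while the block layout follows the description:

```python
import torch.nn as nn

def conv_block(c_in, c_out, pool):
    # One Conv-BatchNorm-ReLU-MaxPool layer; kernel 3, stride 1, padding 1 (B.1)
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
              nn.BatchNorm2d(c_out),
              nn.ReLU(inplace=True)]
    if pool > 1:
        layers.append(nn.MaxPool2d(pool))
    return nn.Sequential(*layers)

class CNN5(nn.Module):
    """5-layer CNN: widths [k, 2k, 4k, 8k], MaxPool sizes [1, 2, 2, 8],
    plus a fully-connected output layer."""
    def __init__(self, k=64, num_classes=10):
        super().__init__()
        widths = [k, 2 * k, 4 * k, 8 * k]
        pools = [1, 2, 2, 8]
        blocks, c_in = [], 3
        for c_out, p in zip(widths, pools):
            blocks.append(conv_block(c_in, c_out, p))
            c_in = c_out
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(8 * k, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```

With k = 64 this construction has exactly the 1,558,026 parameters quoted in B.1, and the pooling factors (1 · 2 · 2 · 8 = 32) reduce a 32 × 32 CIFAR-10 input to 1 × 1 before the linear layer.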
B.3 NEURAL MACHINE TRANSLATION: EXPERIMENTAL SETUP

Here we describe the experimental setup for the neural machine translation experiments.

Training procedure. In this setting, the distribution D consists of triples (x, y, i) with x ∈ V*_src, y ∈ V*_tgt, and i ∈ {0, . . . , |y|}, where V_src and V_tgt are the source and target vocabularies, the string x is a sentence in the source language, y is its translation in the target language, and i is the index of the token to be predicted by the model. We assume that i | x, y is distributed uniformly on {0, . . . , |y|}. A standard probabilistic model defines an autoregressive factorization of the likelihood:

p_M(y | x) = ∏_{i=1}^{|y|} p_M(y_i | y_{<i}, x).

Given a set of training samples S, we define

Error_S(M) = (1/|S|) ∑_{(x,y,i)∈S} −log p_M(y_i | y_{<i}, x).

In practice, S is not constructed from independent samples from D, but rather by first sampling (x, y) and then including all of (x, y, 0), . . . , (x, y, |y|) in S. For training transformers, we replicate the optimization procedure specified in Vaswani et al. (2017), Section 5.3, where the learning rate schedule consists of a “warmup” phase with linearly increasing learning rate, followed by a phase with inverse square-root decay. We preprocess the data using byte pair encoding (BPE) as described in Sennrich et al. (2015). We use the implementation provided by fairseq (https://github.com/pytorch/fairseq).

Datasets. The IWSLT ’14 German to English dataset contains TED Talks as described in Cettolo et al. (2012). The WMT ’14 English to French dataset is taken from http://www.statmt.org/wmt14/translation-task.html.

B.4 PER-SECTION EXPERIMENTAL DETAILS

Here we provide full details for experiments in the body, when not otherwise provided.

Introduction: Experimental Details. Figure 1: All models were trained using Adam with learning rate 0.0001 for 4K epochs. Plotting means and standard deviations for 5 trials, with random network initialization.

Model-wise Double Descent: Experimental Details. Figure 7: Plotting means and standard deviations for 5 trials, with random network initialization.

Sample-wise Nonmonotonicity: Experimental Details. Figure 11a: All models are trained with SGD for 500K gradient steps, with data-augmentation. Bottom: Means and standard deviations from 5 trials with random initialization, and random subsampling of the train set.
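To round out the B.1 architectures, a matching sketch of the width-parameterized preactivation ResNet18 family. The two BatchNorm-ReLU-Conv layers per block, widths [k, 2k, 4k, 8k], and strides [1, 2, 2, 2] follow B.1; the stem, shortcut, and classifier-head details are our assumptions:

```python
import torch.nn as nn

class PreActBlock(nn.Module):
    """Two BatchNorm-ReLU-Conv layers with a residual shortcut (preactivation)."""
    def __init__(self, c_in, c_out, stride):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(c_in)
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, stride=1, padding=1, bias=False)
        # 1x1 conv shortcut when the shape changes (an assumed detail)
        self.shortcut = (nn.Conv2d(c_in, c_out, 1, stride=stride, bias=False)
                         if stride != 1 or c_in != c_out else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return out + self.shortcut(x)

def make_resnet18(k=64, num_classes=10):
    """4 groups of 2 blocks each, widths [k, 2k, 4k, 8k], strides [1, 2, 2, 2]."""
    widths, strides = [k, 2 * k, 4 * k, 8 * k], [1, 2, 2, 2]
    layers, c_in = [nn.Conv2d(3, k, 3, stride=1, padding=1, bias=False)], k
    for w, s in zip(widths, strides):
        layers += [PreActBlock(c_in, w, s), PreActBlock(w, w, 1)]
        c_in = w
    layers += [nn.BatchNorm2d(8 * k), nn.ReLU(inplace=True),
               nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8 * k, num_classes)]
    return nn.Sequential(*layers)
```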
C EXTENDED DISCUSSION OF RELATED WORK

Belkin et al. (2018): This paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve—as model complexity increases, the test error follows the traditional “U-shaped curve”, but beyond the point of interpolation, the error starts to decrease. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of the MNIST, CIFAR-10, SVHN and TIMIT datasets. They use the ℓ2 loss for their experiments. They demonstrate that neural networks are not an aberration in this regard—double descent is a general phenomenon observed also in linear regression with random features and random forests.

Theoretical works on linear least squares regression: A variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces and regularization method.
1. Advani & Saxe (2017) and Hastie et al. (2019) both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit N, D → ∞ using random matrix theory. Hastie et al. (2019) highlight that when the model is mis-specified, the minimum of the training error can occur for over-parameterized models.
2. Belkin et al. (2019) analyze linear least squares regression for two data models, where the input data is sampled from a Gaussian and from a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.
3. Bartlett et al. (2019) provide generalization bounds for the minimum ℓ2-norm interpolant for Gaussian features.
4. Muthukumar et al. (2019) characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.
5. Mei & Montanari (2019) provide an asymptotic analysis for ridge regression over random features.

Similar double descent behavior, in restricted settings, was investigated in Trunk (1979); Opper (1995; 2001); Skurichina & Duin (2002). Neal et al. (2018) conduct a study of bias and variance in modern neural networks, observing that both bias and variance can decrease with increasing model size, contrary to conventional wisdom. Geiger et al. (2019b) showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a “jamming transition” when the number of parameters exceeds a threshold that allows training to near-zero train loss. Geiger et al. (2019a) provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our informal intuitions in Section 5 and our experiments in Figures 28, 29. Advani & Saxe (2017); Geiger et al. (2019b;a) also point out that double descent is not observed when optimal early-stopping is used. The study of sample non-monotonicity in learning algorithms also existed prior to double descent, including in Duin (1995; 2000); Opper (2001); Loog & Duin (2012).

D RANDOM FEATURES: A CASE STUDY

In this section, for completeness’ sake, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks—they exist even in the setting of Random Fourier Features of Rahimi & Recht (2008). This setting is equivalent to a two-layer neural network with e^{−ix} activation. The first layer is initialized with a N(0, 1/d) Gaussian distribution and then fixed throughout training. The width (or embedding dimension) d of the first layer parameterizes the model size. The second layer is initialized with 0s and trained with MSE loss. Figure 14 shows the grid of Test Error as a function of both the number of samples n and the model size d.
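Several of the least-squares analyses above center on the minimum ℓ2-norm interpolant, which is also what fitting the zero-initialized second layer of this random-features model to zero training error yields. In the over-parameterized case (n < D) it has a one-line closed form via the pseudoinverse — a small self-contained sketch, not code from the paper:

```python
import numpy as np

def min_norm_interpolant(X, y):
    """Minimum l2-norm b solving X b = y (assumes n <= D, full row rank);
    the Moore-Penrose pseudoinverse gives exactly this solution."""
    return np.linalg.pinv(X) @ y

rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 100)), rng.normal(size=20)  # n=20 samples, D=100 features
b = min_norm_interpolant(X, y)
assert np.allclose(X @ b, y)  # interpolates the train set exactly
```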
Note that in this setting EMC = d (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of n = d. Both the model-wise and the sample-wise (see Figure 15) double descent phenomena are captured, by horizontally and vertically crossing the grid, respectively.

Figure 15: Sample-wise double-descent slice for Random Fourier Features on the Fashion-MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.

E APPENDIX: ADDITIONAL EXPERIMENTS

E.1 EPOCH-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

Here, we provide a rigorous evaluation of epoch-wise double descent for a variety of optimizers and learning rate schedules. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise, with three different optimizers—Adam, SGD, and SGD + Momentum (momentum set to 0.9)—and three different learning rate schedules—constant, inverse-square root, and dynamic drop—for different values of the initial learning rate. We observe that double descent occurs reliably for all optimizers and learning rate schedules, and the peak of the double descent curve shifts with the interpolation point. A practical recommendation resulting from epoch-wise double descent is that stopping the training when the test error starts to increase may not always be the best strategy. In some cases, the test error may decrease again after reaching a maximum, and the final value may be lower than the minimum earlier in training.

E.2 MODEL-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

E.2.1 CLEAN SETTINGS WITH MODEL-WISE DOUBLE DESCENT

[Figures: CIFAR-100, ResNet18; CIFAR-100, Standard CNN]

E.2.2 WEIGHT DECAY

Here, we study the effect of varying the level of regularization on test error. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise, for weight decay coefficients λ ranging from 0 to 0.1. We train the networks using SGD with the inverse-square root learning rate schedule. The figure below shows a picture qualitatively very similar to that observed for model-wise double descent, wherein “model complexity” is now controlled by the regularization parameter. This confirms our generalized double descent hypothesis along yet another axis of Effective Model Complexity.

E.2.3 EARLY STOPPING DOES NOT EXHIBIT DOUBLE DESCENT

[Figures: Language models; CIFAR-10, 10% noise, SGD]

E.2.4 TRAINING PROCEDURE

E.3 ENSEMBLING
1. What is the focus of the paper regarding neural networks?
2. What are the strengths and weaknesses of the paper's empirical study?
3. How does the paper relate to prior works on double descent behavior in neural networks?
4. Are there any inconsistencies or inaccuracies in the paper's discussion of effective model complexity and its relation to previous research?
5. How does the paper's intuition on model sizes at the interpolation threshold compare to earlier works in the field?
6. Is there novelty in the paper's observation that training with more data samples can lead to worse generalization?
Review
Review This paper provides a valuable and detailed empirical study of the double descent behaviour in neural networks. It investigates the presence of this behaviour in a range of neural network architectures, and apart from identifying it as a function of the model size, it also identifies it as a function of training time, which I believe is novel. Overall I think the paper presents results valuable to the community. At the same time it has several issues that need to be addressed. If the issues are addressed in a satisfactory manner I will recommend acceptance of the paper.

Issues and questions:

** "Such a phenomenon was first postulated by Belkin et al. (2018) who named it 'double descent' and demonstrated it on MNIST with decision trees, random features, and 2-layer neural networks with ℓ2 loss." This is not a correct statement. The authors cite works (Advani & Saxe 2017), and there is also "Stefano Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under-to over-parametrization affects loss landscape and generalization. arXiv preprint arXiv:1810.09665, 2018." that both identified this behaviour prior to the Belkin et al. (2018) work. There is even a much older line of work identifying the peak in the generalization error by Opper's group: http://www.ki.tu-berlin.de/fileadmin/fg135/publikationen/opper/Op03b.pdf (1995); Siegfried Bös and Manfred Opper. Dynamics of training. In Advances in Neural Information Processing Systems, pages 141–147, 1997; http://www.ki.tu-berlin.de/fileadmin/fg135/publikationen/opper/Op01.pdf. Moreover, as is only said in the supplementary material, Geiger et al. also tested on the CIFAR dataset, while the introduction of the present article only mentions previous experiments on MNIST. The fact that previous work also observed this on CIFAR should be moved to the introduction.

** The authors claim to define the "effective model complexity (EMC)" and claim this as one of the main results of the paper. Presenting this as a new measure is strange; it is exactly what Geiger et al. call the jamming transition/threshold in Stefano Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under-to over-parametrization affects loss landscape and generalization. arXiv preprint arXiv:1810.09665, 2018. It is also very closely related to what Belkin calls the "interpolation threshold" (to define the interpolation threshold we could fix the model and vary the number of samples; Belkin et al. fix the number of samples and vary the size of the model, but one is just the dual of the other). Giving it yet another completely new name seems to create more noise than value. The relation between EMC and the jamming transition is discussed in the supplement, however, not in an accurate way. The authors say "a 'jamming transition' when the number of parameters exceeds a threshold that allows training to near-zero train loss", but jamming is inspired by the physical phenomenon in which spheres are added into a finite-volume box, and the more spheres we have, the harder it gets to fit them all in, until it is not possible anymore. The analogy here is that fitting a sphere in = reaching training error zero. Then the number of spheres = the number of samples. Thus the more samples, the harder it is to get the training error to zero, leading to the jamming transition.
Both in training and in jamming, the number of samples at which this happens depends on the details of the protocol/training, and thus it does depend on things such as regularization. The only aspect that I have not seen covered in the jamming analogy is the epoch-wise double descent. In any case, the discussion in the paper needs to be adjusted, and these relevant relations to previous work corrected and moved from the supplement to the introduction.

** "Informally, our intuition is that for model-sizes at the interpolation threshold, there is effectively only one model that fits the train data and this interpolating model is very sensitive to noise in the train set and/or model mis-specification." This intuition is correct, I believe. However, it should not be called "our intuition", as it already appeared in the line of work by Opper cited above.

** The authors present as another main result the fact that, under comparable training conditions, training with more data samples provides worse generalization; examples of this are also already included in the papers by Opper et al. cited above.
ICLR
Title Deep Double Descent: Where Bigger Models and More Data Hurt

Abstract We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

1 INTRODUCTION

The bias-variance trade-off is a fundamental concept in classical statistical learning theory (e.g., Hastie et al. (2005)). The idea is that models of higher complexity have lower bias but higher variance. According to this theory, once model complexity passes a certain threshold, models “overfit”, with the variance term dominating the test error, and hence from this point onward increasing model complexity will only decrease performance (i.e., increase test error). Hence conventional wisdom in classical statistics is that, once we pass a certain threshold, “larger models are worse.”

However, modern neural networks exhibit no such phenomenon. Such networks have millions of parameters, more than enough to fit even random labels (Zhang et al. (2016)), and yet they perform much better on many tasks than smaller models. Indeed, conventional wisdom among practitioners is that “larger models are better” (Krizhevsky et al. (2012), Huang et al. (2018), Szegedy et al. (2015), Radford et al. (2019)). The effect of training time on test performance is also up for debate. In some settings, “early stopping” improves test performance, while in other settings training neural networks to zero training error only improves performance. Finally, if there is one thing both classical statisticians and deep learning practitioners agree on, it is that “more data is always better”.

∗ Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. Correspondence Email: preetum@cs.harvard.edu
† Equal contribution

In this paper, we present empirical evidence that both reconciles and challenges some of the above “conventional wisdoms.” We show that many deep learning settings have two different regimes. In the under-parameterized regime, where the model complexity is small compared to the number of samples, the test error as a function of model complexity follows the U-like behavior predicted by the classical bias/variance tradeoff. However, once model complexity is sufficiently large to interpolate, i.e., achieve (close to) zero training error, then increasing complexity only decreases test error, following the modern intuition of “bigger models are better”. Similar behavior was previously observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). This phenomenon was first postulated in generality by Belkin et al. (2018), who named it “double descent” and demonstrated it for decision trees, random features, and 2-layer neural networks with ℓ2 loss, on a variety of learning tasks including MNIST and CIFAR-10.

Main contributions.
We show that double descent is a robust phenomenon that occurs in a variety of tasks, architectures, and optimization methods (see Figure 1 and Section 5; our experiments are summarized in Table A). Moreover, we propose a much more general notion of “double descent” that goes beyond varying the number of parameters. We define the effective model complexity (EMC) of a training procedure as the maximum number of samples on which it can achieve close to zero training error. The EMC depends not just on the data distribution and the architecture of the classifier but also on the training procedure—in particular, increasing training time will increase the EMC. We hypothesize that for many natural models and learning algorithms, double descent occurs as a function of the EMC. Indeed, we observe “epoch-wise double descent” when we keep the model fixed and increase the training time, with performance following a classical U-like curve in the underfitting stage (when the EMC is smaller than the number of samples) and then improving with training time once the EMC is sufficiently larger than the number of samples (see Figure 2). As a corollary, early stopping only helps in the relatively narrow parameter regime of critically parameterized models.

Sample non-monotonicity. Finally, our results shed light on test performance as a function of the number of train samples. Since the test error peaks around the point where the EMC matches the number of samples (the transition from under- to over-parameterization), increasing the number of samples has the effect of shifting this peak to the right. While in most settings increasing the number of samples decreases error, this shifting effect can sometimes result in a setting where more data is worse! For example, Figure 3 demonstrates cases in which increasing the number of samples by a factor of 4.5 results in worse test performance.
Figure 3: Test loss (per-token perplexity) as a function of Transformer model size (embedding dimension dmodel) on language translation (IWSLT ’14 German-to-English). The curve for 18k samples is generally lower than the one for 4k samples, but it is also shifted to the right, since fitting 18k samples requires a larger model. Thus, for some models, the performance for 18k samples is worse than for 4k samples.

2 OUR RESULTS

To state our hypothesis more precisely, we define the notion of effective model complexity. We define a training procedure T to be any procedure that takes as input a set S = {(x1, y1), . . . , (xn, yn)} of labeled training samples and outputs a classifier T(S) mapping data to labels. We define the effective model complexity of T (w.r.t. distribution D) to be the maximum number of samples n on which T achieves on average ≈ 0 training error.

Definition 1 (Effective Model Complexity) The Effective Model Complexity (EMC) of a training procedure T, with respect to distribution D and parameter ε > 0, is defined as

EMC_{D,ε}(T) := max { n | E_{S∼D^n}[Error_S(T(S))] ≤ ε }

where Error_S(M) is the mean error of model M on train samples S.

Our main hypothesis can be informally stated as follows:

Hypothesis 1 (Generalized Double Descent hypothesis, informal) For any natural data distribution D, neural-network-based training procedure T, and small ε > 0, if we consider the task of predicting labels based on n samples from D, then:

Under-parameterized regime. If EMC_{D,ε}(T) is sufficiently smaller than n, any perturbation of T that increases its effective complexity will decrease the test error.

Over-parameterized regime. If EMC_{D,ε}(T) is sufficiently larger than n, any perturbation of T that increases its effective complexity will decrease the test error.

Critically parameterized regime. If EMC_{D,ε}(T) ≈ n, then a perturbation of T that increases its effective complexity might decrease or increase the test error.

Hypothesis 1 is informal in several ways. We do not have a principled way to choose the parameter ε (we currently, heuristically, use ε = 0.1). We also do not yet have a formal specification for “sufficiently smaller” and “sufficiently larger”. Our experiments suggest that there is a critical interval around the interpolation threshold where EMC_{D,ε}(T) = n: below and above this interval increasing complexity helps performance, while within this interval it may hurt performance. The width of the critical interval depends on both the distribution and the training procedure in ways we do not yet completely understand.

We believe Hypothesis 1 sheds light on the interaction between optimization algorithms, model size, and test performance, and helps reconcile some of the competing intuitions about them. The main result of this paper is an experimental validation of Hypothesis 1 under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms, and we changed the “interpolation threshold” by varying the number of model parameters, the length of training, the amount of label noise in the distribution, and the number of train samples.

Model-wise Double Descent. In Section 5, we study the test error of models of increasing size, for a fixed large number of optimization steps.
Model-wise Double Descent. In Section 5, we study the test error of models of increasing size, for a fixed large number of optimization steps. We show that “model-wise double descent” occurs for various modern datasets (CIFAR-10, CIFAR-100, IWSLT‘14 de-en, with varying amounts of label noise), model architectures (CNNs, ResNets, Transformers), optimizers (SGD, Adam), numbers of train samples, and training procedures (data augmentation and regularization). Moreover, the peak in test error systematically occurs at the interpolation threshold. In particular, we demonstrate realistic settings in which bigger models are worse.

Epoch-wise Double Descent. In Section 6, we study the test error of a fixed, large architecture over the course of training. We demonstrate, in similar settings as above, a corresponding peak in test error when models are trained just long enough to reach ≈ 0 train error. The test error of a large model first decreases (at the beginning of training), then increases (around the critical regime), then decreases once more (at the end of training)—that is, training longer can correct overfitting.

Sample-wise Non-monotonicity. In Section 7, we study the test error of a fixed model and training procedure, for a varying number of train samples. Consistent with our generalized double descent hypothesis, we observe distinct test behavior in the “critical regime”, when the number of samples is near the maximum that the model can fit. This often manifests as a long plateau region, in which taking significantly more data might not help when training to completion (as is the case for CNNs on CIFAR-10). Moreover, we show settings (Transformers on IWSLT‘14 de-en) where this manifests as a peak—and for a fixed architecture and training procedure, more data actually hurts.

Remarks on Label Noise. We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real world). However, we also show several realistic settings with a test-error peak even without label noise: ResNets (Figure 4a) and CNNs (Figure 20) on CIFAR-100; Transformers on IWSLT‘14 (Figure 8). Moreover, all our experiments demonstrate distinctly different test behavior in the critical regime—often manifesting as a “plateau” in the test error in the noiseless case which develops into a peak with added label noise. See Section 8 for further discussion.

3 RELATED WORK

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Subsequently, there has been a large body of work studying the double descent phenomenon. A growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial aspects: First, we extend the idea of double descent beyond the number of parameters to incorporate the training procedure under a unified notion of “Effective Model Complexity”, leading to novel insights like epoch-wise double descent and sample non-monotonicity. The notion that increasing train time corresponds to increasing complexity was also presented in Nakkiran et al. (2019).
Second, we provide an extensive and rigorous demonstration of double descent in modern deep learning, spanning a variety of architectures, datasets, and optimization procedures. An extended discussion of the related work is provided in Appendix C.

4 EXPERIMENTAL SETUP

We briefly describe the experimental setup here; full details are in Appendix B.¹ We consider three families of architectures: ResNets, standard CNNs, and Transformers.

ResNets: We parameterize a family of ResNet18s (He et al. (2016)) by scaling the width (number of filters) of convolutional layers. Specifically, we use layer widths [k, 2k, 4k, 8k] for varying k. The standard ResNet18 corresponds to k = 64.

Standard CNNs: We consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths [k, 2k, 4k, 8k] for varying k, and a fully-connected layer. For context, the CNN with width k = 64 can reach over 90% test accuracy on CIFAR-10 with data augmentation.

Transformers: We consider the 6-layer encoder-decoder from Vaswani et al. (2017), as implemented by Ott et al. (2019). We scale the size of the network by modifying the embedding dimension dmodel, and setting the width of the fully-connected layers proportionally (dff = 4 · dmodel).

¹The raw data from our experiments are available at: https://gitlab.com/harvard-machine-learning/double-descent/tree/master

For ResNets and CNNs, we train with cross-entropy loss and the following optimizers: (1) Adam with learning rate 0.0001 for 4K epochs; (2) SGD with learning rate ∝ 1/√T for 500K gradient steps. We train Transformers for 80K gradient steps, with 10% label smoothing and no dropout.

Label Noise. In our experiments, label noise of probability p refers to training on samples which have the correct label with probability (1 − p), and a uniformly random incorrect label otherwise (label noise is sampled only once and not per epoch). Figure 1 plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just a linear rescaling of one another).
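As a concrete illustration of this label-noise procedure, here is a minimal NumPy sketch (the function name and integer-label representation are our own choices):

import numpy as np

def corrupt_labels(labels, p, num_classes, seed=0):
    # Each label keeps its correct value with probability 1 - p and is
    # otherwise replaced by a uniformly random *incorrect* label.
    # Applied once before training, not re-sampled per epoch.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < p
    for i in np.flatnonzero(flip):
        offset = rng.integers(1, num_classes)  # 1 .. num_classes - 1
        noisy[i] = (labels[i] + offset) % num_classes
    return noisy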
5 MODEL-WISE DOUBLE DESCENT

In this section, we study the test error of models of increasing size, when training to completion (for a fixed large number of optimization steps). We demonstrate model-wise double descent across different architectures, datasets, optimizers, and training procedures. The critical region exhibits distinctly different test behavior around the interpolation point, and there is often a peak in test error that becomes more prominent in settings with label noise. For the experiments in this section (Figures 4, 5, 6, 7, 8), notice that all modifications which increase the interpolation threshold (such as adding label noise, using data augmentation, and increasing the number of train samples) also correspondingly shift the peak in test error towards larger models. Additional plots showing the early-stopping behavior of these models, and additional experiments showing double descent in settings with no label noise (e.g. Figure 19), are in Appendix E.2. We also observed model-wise double descent for adversarial training, with a prominent robust test error peak even in settings without label noise. See Figure 26 in Appendix E.2.

Discussion. Fully understanding the mechanisms behind model-wise double descent in deep neural networks remains an important open question. However, an analog of model-wise double descent occurs even for linear models. A recent stream of theoretical works analyzes this setting (Bartlett et al. (2019); Muthukumar et al. (2019); Belkin et al. (2019); Mei & Montanari (2019); Hastie et al. (2019)). We believe similar mechanisms may be at work in deep neural networks. Informally, our intuition is that for model sizes at the interpolation threshold, there is effectively only one model that fits the train data, and this interpolating model is very sensitive to noise in the train set and/or model mis-specification. That is, since the model is just barely able to fit the train data, forcing it to fit even slightly noisy or mis-specified labels will destroy its global structure, and result in high test error. (See Figure 28 in the Appendix for an experiment demonstrating this noise sensitivity, by showing that ensembling helps significantly in the critically parameterized regime.) However, for over-parameterized models, there are many interpolating models that fit the train set, and SGD is able to find one that “memorizes” (or “absorbs”) the noise while still performing well on the distribution. The above intuition is theoretically justified for linear models. In general, this situation manifests even without label noise for linear models (Mei & Montanari (2019)), and occurs whenever there is model mis-specification between the structure of the true distribution and the model family. We believe this intuition extends to deep learning as well, and it is consistent with our experiments.

6 EPOCH-WISE DOUBLE DESCENT

In this section, we demonstrate a novel form of double descent with respect to training epochs, which is consistent with our unified view of effective model complexity (EMC) and the generalized double descent hypothesis. Increasing the train time increases the EMC—and thus a sufficiently large model transitions from under- to over-parameterized over the course of training. As illustrated in Figure 9, sufficiently large models can undergo a “double descent” behavior where test error first decreases, then increases near the interpolation threshold, and then decreases again. In contrast, for “medium-sized” models, for which training to completion will only barely reach ≈ 0 error, the test error as a function of training time will follow a classical U-like curve where it is better to stop early. Models that are too small to reach the approximation threshold will remain in the “under-parameterized” regime where increasing train time monotonically decreases test error. Our experiments (Figure 10) show that many settings of dataset and architecture exhibit epoch-wise double descent in the presence of label noise. Further, this phenomenon is robust across optimizer variations and learning rate schedules (see additional experiments in Appendix E.1). As in model-wise double descent, the test error peak is accentuated with label noise. Conventional wisdom suggests that training is split into two phases: (1) in the first phase, the network learns a function with a small generalization gap; (2) in the second phase, the network starts to over-fit the data, leading to an increase in test error. Our experiments suggest that this is not the complete picture—in some regimes, the test error decreases again and may achieve a lower value at the end of training as compared to the first minimum (see Figure 10 for 10% label noise).

7 SAMPLE-WISE NON-MONOTONICITY

In this section, we investigate the effect of varying the number of train samples, for a fixed model and training procedure.
Previously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where EMC_{D,ε}(T) ≈ n, by varying the EMC. Here, we explore the critical regime by varying the number of train samples n. By increasing n, the same training procedure T can switch from being effectively over-parameterized to effectively under-parameterized. We show that increasing the number of samples has two different effects on the test error vs. model complexity graph. On the one hand, (as expected) increasing the number of samples shrinks the area under the curve. On the other hand, increasing the number of samples also has the effect of “shifting the curve to the right” and increasing the model complexity at which test error peaks. These twin effects are shown in Figure 11a. Note that there is a range of model sizes where the effects “cancel out”—and having 4× more train samples does not help test performance when training to completion. Outside the critically parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps. This phenomenon is corroborated in Figure 12, which shows test error as a function of both model and sample size, in the same setting as Figure 11a. In some settings, these two effects combine to yield a regime of model sizes where more data actually hurts test performance, as in Figure 3 (see also Figure 11b). Note that this phenomenon is not unique to DNNs: more data can hurt even for linear models (see Appendix D).

8 CONCLUSION AND DISCUSSION

We introduce a generalized double descent hypothesis: models and training procedures exhibit atypical behavior when their Effective Model Complexity is comparable to the number of train samples. We provide extensive evidence for our hypothesis in modern deep learning settings, and show that it is robust to choices of dataset, architecture, and training procedures. In particular, we demonstrate “model-wise double descent” for modern deep networks and characterize the regime where bigger models can perform worse. We also demonstrate “epoch-wise double descent,” which, to the best of our knowledge, has not been previously proposed. Finally, we show that the double descent phenomenon can lead to a regime where training on more data leads to worse test performance. Preliminary results suggest that double descent also holds as we vary the amount of regularization for a fixed model (see Figure 22). We also believe our characterization of the critical regime provides a useful way of thinking for practitioners—if a model and training procedure are just barely able to fit the train set, then small changes to the model or training procedure may yield unexpected behavior (e.g. making the model slightly larger or smaller, changing regularization, etc. may hurt test performance).

Early stopping. We note that many of the phenomena that we highlight often do not occur with optimal early stopping. However, this is consistent with our generalized double descent hypothesis: if early stopping prevents models from reaching 0 train error, then we would not expect to see double descent, since the EMC does not reach the number of train samples. Further, we show at least one setting where model-wise double descent can still occur even with optimal early stopping (ResNets on CIFAR-100 with no label noise, see Figure 19). We have not observed settings where more data hurts when optimal early stopping is used. However, we are not aware of reasons which preclude this from occurring.
We leave fully understanding the optimal early stopping behavior of double descent as an important open question for future work.

Label Noise. In our experiments, we observe double descent most strongly in settings with label noise. However, we believe this effect is not fundamentally about label noise, but rather about model mis-specification. For example, consider a setting where the label noise is not truly random, but rather pseudorandom (with respect to the family of classifiers being trained). In this setting, the performance of the Bayes optimal classifier would not change (since the pseudorandom noise is deterministic and invertible), but we would observe an identical double descent as with truly random label noise. Thus, we view adding label noise as merely a proxy for making distributions “harder”—i.e. increasing the amount of model mis-specification.

Other Notions of Model Complexity. Our notion of Effective Model Complexity is related to classical complexity notions such as Rademacher complexity, but differs in several crucial ways: (1) EMC depends on the true labels of the data distribution, and (2) EMC depends on the training procedure, not just the model architecture. Other notions of model complexity which do not incorporate features (1) and (2) would not suffice to characterize the location of the double-descent peak. Rademacher complexity, for example, is determined by the ability of a model architecture to fit a randomly-labeled train set. But Rademacher complexity and VC dimension are both insufficient to determine the model-wise double descent peak location, since they do not depend on the distribution of labels—and our experiments show that adding label noise shifts the location of the peak. Moreover, both Rademacher complexity and VC dimension depend only on the model family and data distribution, and not on the training procedure used to find models. Thus, they are not capable of capturing train-time double-descent effects, such as “epoch-wise” double descent, and the effect of data augmentation on the peak location.

ACKNOWLEDGMENTS

We thank Mikhail Belkin for extremely useful discussions in the early stages of this work. We thank Christopher Olah for suggesting the Model Size × Epoch visualization, which led to the investigation of epoch-wise double descent, as well as for useful discussion and feedback. We also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions. P.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work. We thank Dimitris Kalimeris, Benjamin L. Edelman, Sharon Qian, and Aditya Ramesh for comments on an early draft of this work. This work was supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF USICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investigator Award, a Simons Investigator Fellowship, and NSF Awards CCF 1715187, CCF 1565264, CCF 1301976, IIS 1409097, and CNS 1618026. Y.B. would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments.

A SUMMARY TABLE OF EXPERIMENTAL RESULTS

(The “Model” and “Epoch” columns indicate whether model-wise and epoch-wise double descent were observed; ✓ = observed, ✗ = not observed, – = not tested.)

Dataset | Architecture | Opt. | Aug. | % Noise | Model | Epoch | Figure(s)
CIFAR-10 | CNN | SGD | ✓ | 0 | ✗ | ✗ | 5, 27
CIFAR-10 | CNN | SGD | ✓ | 10 | ✓ | ✓ | 5, 27, 6
CIFAR-10 | CNN | SGD | ✓ | 20 | ✓ | ✓ | 5, 27
CIFAR-10 | CNN | SGD | | 0 | ✗ | ✗ | 5, 25
CIFAR-10 | CNN | SGD | | 10 | ✓ | ✓ | 5
CIFAR-10 | CNN | SGD | | 20 | ✓ | ✓ | 5
CIFAR-10 | CNN | SGD + w.d. | ✓ | 20 | ✓ | ✓ | 21
CIFAR-10 | CNN | Adam | | 0 | ✓ | – | 25
CIFAR-10 | ResNet | Adam | ✓ | 0 | ✗ | ✗ | 4, 10
CIFAR-10 | ResNet | Adam | ✓ | 5 | ✓ | – | 4
CIFAR-10 | ResNet | Adam | ✓ | 10 | ✓ | ✓ | 4, 10
CIFAR-10 | ResNet | Adam | ✓ | 15 | ✓ | ✓ | 4, 2
CIFAR-10 | ResNet | Adam | ✓ | 20 | ✓ | ✓ | 4, 9, 10
CIFAR-10 | ResNet | Various | ✓ | 20 | – | ✓ | 16, 17, 18
CIFAR-10 (subsampled) | CNN | SGD | ✓ | 10 | ✓ | – | 11a
CIFAR-10 (subsampled) | CNN | SGD | ✓ | 20 | ✓ | – | 11a, 12
CIFAR-10 (adversarial) | ResNet | SGD | | 0 | Robust err. | – | 26
CIFAR-100 | ResNet | Adam | ✓ | 0 | ✓ | ✗ | 4, 19, 10
CIFAR-100 | ResNet | Adam | ✓ | 10 | ✓ | ✓ | 4, 10
CIFAR-100 | ResNet | Adam | ✓ | 20 | ✓ | ✓ | 4, 10
CIFAR-100 | CNN | SGD | | 0 | ✓ | ✗ | 20
IWSLT ’14 de-en | Transformer | Adam | | 0 | ✓ | ✗ | 8, 24
IWSLT ’14 de-en (subsampled) | Transformer | Adam | | 0 | ✓ | ✗ | 11b, 23
WMT ’14 en-fr | Transformer | Adam | | 0 | ✓ | ✗ | 8, 24
B APPENDIX: EXPERIMENTAL DETAILS

B.1 MODELS

We use the following families of architectures. The PyTorch (Paszke et al. (2017)) specification of our ResNets and CNNs is available at https://gitlab.com/harvard-machine-learning/double-descent/tree/master.

ResNets. We define a family of ResNet18s of increasing size as follows. We follow the Preactivation ResNet18 architecture of He et al. (2016), using 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. The layer widths for the 4 blocks are [k, 2k, 4k, 8k] for varying k ∈ N and the strides are [1, 2, 2, 2]. The standard ResNet18 corresponds to k = 64 convolutional channels in the first layer. The scaling of model size with k is shown in Figure 13b. Our implementation is adapted from https://github.com/kuangliu/pytorch-cifar.

Standard CNNs. We consider a simple family of 5-layer CNNs, with four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer. We scale the four convolutional layer widths as [k, 2k, 4k, 8k]. The MaxPool kernel sizes are [1, 2, 2, 8]. For all the convolution layers, the kernel size = 3, stride = 1, and padding = 1. This architecture is based on the “backbone” architecture from Page (2018). For k = 64, this CNN has 1,558,026 parameters and can reach > 90% test accuracy on CIFAR-10 (Krizhevsky (2009)) with data augmentation. The scaling of model size with k is shown in Figure 13a.

Transformers. We consider the encoder-decoder Transformer model from Vaswani et al. (2017) with 6 layers and 8 attention heads per layer, as implemented by fairseq (Ott et al. (2019)). We scale the size of the network by modifying the embedding dimension (dmodel), and scale the width of the fully-connected layers proportionally (dff = 4 · dmodel). We train with 10% label smoothing and no dropout, for 80K gradient steps.
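To make the width scaling concrete, the following PyTorch sketch reconstructs the 5-layer CNN family from the description above (our illustration, not the released code; for k = 64 the trainable-parameter count comes out to 1,558,026, matching the figure quoted in B.1):

import torch.nn as nn

def make_cnn(k, num_classes=10):
    # Widths [k, 2k, 4k, 8k]; MaxPool sizes [1, 2, 2, 8] reduce a 32x32
    # input to 1x1, so the classifier head sees 8k features.
    widths, pools, layers, in_ch = [k, 2 * k, 4 * k, 8 * k], [1, 2, 2, 8], [], 3
    for w, p in zip(widths, pools):
        layers += [nn.Conv2d(in_ch, w, kernel_size=3, stride=1, padding=1),
                   nn.BatchNorm2d(w), nn.ReLU()]
        if p > 1:
            layers.append(nn.MaxPool2d(p))
        in_ch = w
    layers += [nn.Flatten(), nn.Linear(8 * k, num_classes)]
    return nn.Sequential(*layers)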
B.2 IMAGE CLASSIFICATION: EXPERIMENTAL SETUP

We describe the details of training for CNNs and ResNets below.

Loss function: Unless stated otherwise, we use the cross-entropy loss for all the experiments.

Data augmentation: In experiments where data augmentation was used, we apply RandomCrop(32, padding=4) and RandomHorizontalFlip. In experiments with added label noise, all augmentations of a given training sample are given the same label.

Regularization: No explicit regularization like weight decay or dropout was applied unless explicitly stated.

Initialization: We use the default initialization provided by PyTorch for all the layers.

Optimization:
• Adam: Unless specified otherwise, the learning rate was held constant at 1e-4 and all other parameters were set to their default PyTorch values.
• SGD: Unless specified otherwise, the inverse-square-root learning rate schedule (defined below) was used with initial learning rate γ0 = 0.1 and updates every L = 512 gradient steps. No momentum was used.

We found our results are robust to various other natural choices of optimizers and learning rate schedules. We used the above settings because (1) they optimize well, and (2) they do not require experiment-specific hyperparameter tuning, allowing us to use the same optimization across many experiments.

Batch size: All experiments use a batch size of 128.

Learning rate schedule descriptions:
• Inverse-square root (γ0, L): At gradient step t, the learning rate is set to γ(t) := γ0 / √(1 + ⌊t/512⌋). We set the learning rate with respect to the number of gradient steps, and not epochs, in order to allow comparison between experiments with varying train-set sizes.
• Dynamic drop (γ0, drop, patience): Starts with an initial learning rate of γ0 and drops by a factor of ‘drop’ if the training loss has remained constant or become worse for ‘patience’ number of gradient steps.
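As a concrete check of the inverse-square-root schedule, a minimal sketch:

import math

def inverse_sqrt_lr(t, gamma0=0.1, L=512):
    # gamma(t) = gamma0 / sqrt(1 + floor(t / L)), defined in gradient steps.
    return gamma0 / math.sqrt(1 + t // L)

# The rate halves once 1 + floor(t/L) reaches 4, i.e. after 3*L steps:
assert abs(inverse_sqrt_lr(3 * 512) - 0.05) < 1e-12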
B.3 NEURAL MACHINE TRANSLATION: EXPERIMENTAL SETUP

Here we describe the experimental setup for the neural machine translation experiments.

Training procedure. In this setting, the distribution D consists of triples (x, y, i) with x ∈ V*_src, y ∈ V*_tgt, i ∈ {0, . . . , |y|}, where V_src and V_tgt are the source and target vocabularies, the string x is a sentence in the source language, y is its translation in the target language, and i is the index of the token to be predicted by the model. We assume that i | x, y is distributed uniformly on {0, . . . , |y|}. A standard probabilistic model defines an autoregressive factorization of the likelihood:

p_M(y|x) = ∏_{i=1}^{|y|} p_M(y_i | y_{<i}, x).

Given a set of training samples S, we define

Error_S(M) = (1/|S|) ∑_{(x,y,i)∈S} −log p_M(y_i | y_{<i}, x).

In practice, S is not constructed from independent samples from D, but rather by first sampling (x, y) and then including all (x, y, 0), . . . , (x, y, |y|) in S. For training transformers, we replicate the optimization procedure specified in Vaswani et al. (2017) Section 5.3, where the learning rate schedule consists of a “warmup” phase with linearly increasing learning rate followed by a phase with inverse square-root decay. We preprocess the data using byte pair encoding (BPE) as described in Sennrich et al. (2015). We use the implementation provided by fairseq (https://github.com/pytorch/fairseq).

Datasets. The IWSLT ’14 German to English dataset contains TED Talks as described in Cettolo et al. (2012). The WMT ’14 English to French dataset is taken from http://www.statmt.org/wmt14/translation-task.html.

B.4 PER-SECTION EXPERIMENTAL DETAILS

Here we provide full details for experiments in the body, when not otherwise provided.

Introduction: Experimental Details. Figure 1: All models were trained using Adam with learning rate 0.0001 for 4K epochs. Plotting means and standard deviations for 5 trials, with random network initialization.

Model-wise Double Descent: Experimental Details. Figure 7: Plotting means and standard deviations for 5 trials, with random network initialization.

Sample-wise Non-monotonicity: Experimental Details. Figure 11a: All models are trained with SGD for 500K gradient steps, with data augmentation. Bottom: Means and standard deviations from 5 trials with random initialization, and random subsampling of the train set.

C EXTENDED DISCUSSION OF RELATED WORK

Belkin et al. (2018): This paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve—as model complexity increases, the test error follows the traditional “U-shaped curve”, but beyond the point of interpolation, the error starts to decrease. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of the MNIST, CIFAR-10, SVHN, and TIMIT datasets. They use the l2 loss for their experiments. They demonstrate that neural networks are not an aberration in this regard—double descent is a general phenomenon observed also in linear regression with random features and random forests.

Theoretical works on linear least squares regression: A variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces, and regularization method.
1. Advani & Saxe (2017); Hastie et al. (2019) both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit N, D → ∞ using random matrix theory. Hastie et al. (2019) highlight that when the model is mis-specified, the minimum of training error can occur for over-parameterized models.
2. Belkin et al. (2019) analyze linear least squares regression for two data models, where the input data is sampled from a Gaussian and from a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.
3. Bartlett et al. (2019) provide generalization bounds for the minimum l2-norm interpolant for Gaussian features.
4. Muthukumar et al. (2019) characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.
5. Mei & Montanari (2019): This work provides an asymptotic analysis for ridge regression over random features.

Similar double descent behavior, in restricted settings, was investigated in Trunk (1979); Opper (1995; 2001); Skurichina & Duin (2002). Neal et al. (2018) conduct a study of bias and variance in modern neural networks, observing that both bias and variance can decrease with increasing model size, contrary to conventional wisdom. Geiger et al. (2019b) showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a “jamming transition” when the number of parameters exceeds a threshold that allows training to near-zero train loss. Geiger et al. (2019a) provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our informal intuitions in Section 5 and our experiments in Figures 28, 29. Advani & Saxe (2017); Geiger et al. (2019b;a) also point out that double descent is not observed when optimal early stopping is used. The study of sample non-monotonicity in learning algorithms also existed prior to double descent, including in Duin (1995; 2000); Opper (2001); Loog & Duin (2012).

D RANDOM FEATURES: A CASE STUDY

In this section, for completeness, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks—they exist even in the setting of Random Fourier Features of Rahimi & Recht (2008). This setting is equivalent to a two-layer neural network with e^{−ix} activation. The first layer is initialized with a N(0, 1/d) Gaussian distribution and then fixed throughout training. The width (or embedding dimension) d of the first layer parameterizes the model size. The second layer is initialized with 0s and trained with MSE loss.
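A minimal sketch of this setup (our illustration; we use the real cos/sin pair, which carries the same information as the complex exponential, and the minimum-norm least squares solution for the MSE-trained second layer; the variance convention for the frozen first layer follows the N(0, 1/d) description above):

import numpy as np

def random_fourier_features(X, d, seed=0):
    # Frozen first layer: W ~ N(0, 1/d); cos/sin are the real and
    # imaginary parts of exp(-i X W).
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(1.0 / d), size=(X.shape[1], d))
    Z = X @ W
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1)

def fit_second_layer(Phi, y):
    # MSE-trained second layer; the pseudoinverse gives the minimum-norm
    # interpolant in the overparameterized regime (features > samples).
    return np.linalg.pinv(Phi) @ y

# Usage: Phi = random_fourier_features(X_train, d); v = fit_second_layer(Phi, y_train)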
Figure 14 shows the grid of test error as a function of both the number of samples n and the model size d. Note that in this setting EMC = d (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of n = d. Both the model-wise and the sample-wise (see Figure 15) double descent phenomena are captured, by horizontally and vertically crossing the grid, respectively.

Figure 15: Sample-wise double-descent slice for Random Fourier Features on the Fashion-MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.

E APPENDIX: ADDITIONAL EXPERIMENTS

E.1 EPOCH-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

Here, we provide a rigorous evaluation of epoch-wise double descent for a variety of optimizers and learning rate schedules. We train ResNet18 on CIFAR-10 with data augmentation and 20% label noise with three different optimizers—Adam, SGD, SGD + Momentum (momentum set to 0.9)—and three different learning rate schedules—constant, inverse-square root, and dynamic drop—for different values of the initial learning rate. We observe that double descent occurs reliably for all optimizers and learning rate schedules, and the peak of the double descent curve shifts with the interpolation point. A practical recommendation resulting from epoch-wise double descent is that stopping the training when the test error starts to increase may not always be the best strategy. In some cases, the test error may decrease again after reaching a maximum, and the final value may be lower than the minimum earlier in training.

E.2 MODEL-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

E.2.1 CLEAN SETTINGS WITH MODEL-WISE DOUBLE DESCENT

(Figures: CIFAR-100, ResNet18; CIFAR-100, Standard CNN.)

E.2.2 WEIGHT DECAY

Here, we study the effect of varying the level of regularization on test error. We train CIFAR-10 with data augmentation and 20% label noise on ResNet18 for weight decay coefficients λ ranging from 0 to 0.1. We train the networks using SGD with the inverse-square-root learning rate schedule. The figure below shows a picture qualitatively very similar to that observed for model-wise double descent, wherein “model complexity” is now controlled by the regularization parameter. This confirms our generalized double descent hypothesis along yet another axis of Effective Model Complexity.

E.2.3 EARLY STOPPING DOES NOT EXHIBIT DOUBLE DESCENT

(Figures: Language models; CIFAR-10, 10% noise, SGD.)

E.2.4 TRAINING PROCEDURE

E.3 ENSEMBLING
1. What is the main contribution of the paper regarding the double descent phenomenon? 2. What are the strengths of the paper, particularly in its simulations and observations? 3. Do you have any concerns or comments regarding the paper's content or references? 4. How does the reviewer assess the novelty and generality of the paper's findings? 5. Are there any specific aspects or claims in the paper that the reviewer questions or disagrees with?
Review
Review
I do not have much to say about the paper except that I like the simulations, and found them rather interesting. It shows a rather extensive set of simulations that enrich the observations of the so-called double descent phenomenon, and shows empirically its apparent generality. Also, the Effective Model Complexity seems to be a good description of what is empirically observed. All in all, I definitely support publication. I have, however, two comments: first, a minor one: there is a part of the story on double descent that is completely forgotten: the fact that there is a peak in the test error for some classifiers at #examples close to #parameters was known at least in the 90s! It is discussed in "Statistical Mechanics of Learning", A. Engel & C. Van den Broeck, 2001, page 61, with a plot that comes rather from an old work of Pr. Manfred Opper (1995), which can be seen here: http://www.ki.tu-berlin.de/fileadmin/fg135/publikationen/opper/Op01.pdf FIG 10. From this work, it is also rather clear that increasing the number of training examples can hurt performance (this is exactly what the plot says), at least without regularisation. Since the authors are quite thorough in their review of history - especially in the appendix - I thought I would point out this reference. The second comment is that the paper should acknowledge the work of Geiger et al. more explicitly, and cite as well the related work of Spigler et al. (https://arxiv.org/abs/1810.09665) rather explicitly! Virtually all they show was already shown in this work (which by the way PREDATES Belkin et al.), albeit only for a simpler set of data. I agree that the authors extend a lot on these papers, especially in terms of datasets and completeness of experiments, but they are definitely closely related, and the fact that it is not cited is a serious flaw in the current version. Also, in the conclusion: "We note that many of the phenomena that we highlight often do not occur with optimal early-stopping. However, this is consistent with our generalized double descent hypothesis:…". Again, this was shown (empirically) as well in the above-mentioned papers, and also explained in "physics terms" where the peak is like a phase transition (jamming), if one stops the dynamics.
ICLR
Title Fair Federated Learning via Bounded Group Loss Abstract Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.

1 INTRODUCTION

Group fairness aims to mitigate unfair biases against certain protected demographic groups (e.g. race, gender, age) in the use of machine learning. Many methods have been proposed to incorporate group fairness constraints in centralized settings (e.g., Agarwal et al., 2018; Feldman et al., 2015; Hardt et al., 2016; Zafar et al., 2017a). However, there is a lack of work studying these approaches in the context of federated learning (FL), a training paradigm where a model is fit to data generated by a set of disparate data silos, such as a network of remote devices or a collection of organizations (Kairouz et al., 2019; Li et al., 2020; McMahan et al., 2017). Mirroring concerns around fairness in non-federated settings, many FL applications similarly require performing fair prediction across protected groups. Unfortunately, as we show in Figure 1, naively applying existing approaches to each client in a federated network in isolation may be inaccurate due to heterogeneity across clients—failing to produce a fair model across the entire population (Zeng et al., 2021). Several recent works have considered addressing this issue by exploring specific forms of group fairness in FL (e.g., Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Papadaki et al., 2022; Rodríguez-Gálvez et al., 2021; Zeng et al., 2021). Despite promising empirical performance, these prior works lack formal guarantees surrounding the resulting fairness of the solutions (Section 2), which is problematic as it is unclear how the methods may perform in real-world FL deployments. In this work we provide a formulation and method for group fair FL that can provably satisfy global fairness constraints. Common group fairness notions that aim to achieve equal prediction quality between any two protected groups (e.g., Demographic Parity, Equal Opportunity (Hardt et al., 2016)) are difficult to provably satisfy while simultaneously finding a model with high utility. Instead, we consider a different fairness notion known as Bounded Group Loss (BGL) (Agarwal et al., 2019), which aims to improve the worst group’s performance, to capture these common group fairness criteria. As we show, a benefit of this approach is that in addition to having practical advantages in terms of fairness-utility trade-offs (Section 5), it maintains smoothness and convexity properties that can equip our solver with favorable theoretical guarantees. Based on our group fairness formulation, we then provide a scalable method (PFFL) to solve the proposed objectives via federated saddle point optimization.
Theoretically, we provide convergence guarantees for the method as well as fairness and generalization guarantees for the resulting solutions. Empirically, we demonstrate the effectiveness of our approach on common benchmarks from fair machine learning and federated learning. We summarize our main contributions below:

• We propose a novel fair federated learning framework for a range of group fairness notions. Our framework models the fair FL problem as a saddle point optimization problem and leverages variations of Bounded Group Loss (Agarwal et al., 2019) to capture common forms of group fairness. We also extend BGL to consider a new fairness notion called Conditional Bounded Group Loss (CBGL), which may be of independent interest and utility in non-federated settings.

• We propose a scalable federated optimization method for our group fair FL framework. We provide a regret bound analysis for our method under convex ML objectives to demonstrate formal convergence guarantees. Further, we provide fairness and generalization guarantees on the model for a variety of fairness notions.

• Finally, we evaluate our method on common benchmarks used in fair machine learning and federated learning. In all settings, we find that our method can significantly improve model fairness compared to baselines without sacrificing model accuracy. Additionally, even though we do not directly optimize classical group fairness constraints (e.g., Demographic Parity, Equal Opportunity), we find that our method can still provide comparable/better fairness-utility trade-offs relative to existing approaches when evaluated on these metrics.

2 BACKGROUND AND RELATED WORK

Fair Machine Learning. Algorithmic fairness in machine learning aims to identify and correct bias in the learning process. Common approaches for obtaining fairness include pre-processing methods that rectify the features or raw data to enhance fairness (Calmon et al., 2017; Feldman et al., 2015; Zemel et al., 2013); post-processing methods that revise the prediction score for a trained model (Dwork et al., 2018; Hardt et al., 2016; Menon & Williamson, 2018); and in-processing methods that directly modify the training objective/solver to produce a fair predictor (Agarwal et al., 2018; 2019; Woodworth et al., 2017; Zafar et al., 2017a;b). Most existing methods in fair ML rely on using a centralized dataset to train and evaluate the model. As shown in Figure 1, in the federated setting where data is privately distributed across different data silos, directly applying these methods locally only ensures fairness for each silo rather than the entire population. Developing effective and efficient techniques for fair FL is thus an important area of study.

Fair Federated Learning. In FL, definitions of fairness may take many forms. A commonly studied notion of fairness is representation parity (Hashimoto et al., 2018), whose application in FL requires the model’s performance across all clients to have small variance (Donahue & Kleinberg, 2021; Li et al., 2019a; 2021; Mohri et al., 2019; Yue et al., 2021). In this work we instead focus on notions of group fairness, in which every data point in the federated network belongs to some (possibly) protected group, and we aim to find a model that doesn’t introduce bias towards any group. Recent works have proposed various objectives for group fairness in federated learning. Zeng et al.
(2021) propose a bi-level optimization objective that minimizes the difference between the losses of any two groups while finding an optimal global model. Similarly, several works propose using a constrained optimization problem that aims to find the best model subject to an upper bound on the group loss difference (Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Rodríguez-Gálvez et al., 2021). Different from these approaches, our method focuses on a fairness constraint based on upper-bounding the loss of each group with a constant rather than the loss difference between any two groups. More closely related to our work, Papadaki et al. (2022) weigh the empirical loss given each group by a trainable vector λ and find the best model for the worst-case λ. Though similar to our method for ζ = 0, this approach fails to achieve both strong utility and fairness performance under non-convex loss functions (see Section 5). Zhang et al. (2021) also propose a similar objective to learn a model with unified fairness. Among these works, Zeng et al. (2021) and Cui et al. (2021) also provide simplified convergence and fairness guarantees for their methods. However, these works lack formal analyses around the convergence for arbitrary convex loss functions as well as the behavior of the fairness constraint over the true data distribution. Ours is the first work we are aware of that provides such guarantees in the context of group fair federated learning.

3 FAIR FEDERATED LEARNING VIA BOUNDED GROUP LOSS

In this section we first formalize the group fair federated learning problem and a fairness-aware objective solving this problem (Section 3.1). We then provide several examples of group fairness based on the notion of BGL and show how to incorporate them into our framework (Section 3.2).

3.1 SETUP: GROUP FAIR FEDERATED LEARNING

Many applications of FL require treatment of data belonging to protected groups (e.g., race, gender, age). This is particularly common in applications of cross-silo FL, where we may wish to produce a model that fairly treats individuals from various demographic groups across a collection of data silos (e.g. hospitals, schools, financial institutions) (Chu et al., 2021; Cui et al., 2021; Vaid et al., 2021).

FL Setup. Following standard federated learning scenarios (McMahan et al., 2017), we consider a network with K different clients. Each client k ∈ [K] has access to training data D̂_k := {(x_i, y_i, a_i)}_{i=1,··· ,m_k} sampled from the true data distribution D_k, where x_i is an observation, y_i ∈ Y is the label, and a_i ∈ A is the protected attribute. Let the hypothesis class be H, and for any model h ∈ H, define the loss function on data (x, y, a) to be l(h(x), y). Federated learning applications typically aim to solve:

min_{h∈H} F(h) = min_{h∈H} E_{(x,y)∼D} [l(h(x), y)] . (1)

In practice, D_k is estimated by observing {(x_i, y_i, a_i)}_{i=1,··· ,m_k}, and we solve the empirical risk:

min_{h∈H} F(h) = min_{h∈H} (1/K) ∑_{k=1}^{K} (1/m_k) ∑_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i}) . (2)

For simplicity, we define f_k(h) = (1/m_k) ∑_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i}) as the local objective for client k. Further, we assume h is parameterized by a vector w ∈ R^p, where p is the number of parameters. We will use F(w) and f_k(w) to represent F(h) and f_k(h) intermittently in the remainder of the paper.

Fairness via Constrained Optimization.
When a centralized dataset is available, a standard approach to learn a model that incorporates fairness criteria is to solve a constrained optimization problem where one tries to find the best model subject to some fairness notion (Agarwal et al., 2019; Barocas et al., 2019). Following this formulation, we formalize a similar learning problem in the federated setting, solving:

min_{h∈H} F(h) subject to R(h) ≤ ζ, (3)

where R(h), ζ ∈ R^Z encode the constraint set on h. For instance, the z-th constraint could be written as R_z(h) ≤ ζ_z, where ζ_z is a fixed constant. This formulation is commonly used to satisfy group fairness notions such as equal opportunity, equalized odds (Hardt et al., 2016), and minimax group fairness (Diana et al., 2021). To solve the constrained optimization problem 3, a common method is to use Lagrangian multipliers. In particular, let λ ∈ R_+^Z be a dual variable and assume the ∥·∥_1 norm of λ is at most B. The magnitude of B could be viewed as the regularization strength for the constraint term. Objective equation 3 can then be converted into the following saddle point optimization problem:

min_w max_{λ∈R_+^Z, ∥λ∥_1≤B} G(w; λ) = βF(w) + λ^T r(w) , (Main Objective)

where the q-th index of r encodes the q-th constraint from R (i.e. r_q(w) := R_q(w) − ζ_q) and β is a fixed constant. In other words, the objective finds the best model under the scenario where the fairness constraint is most violated (i.e., the regularization term is maximized). There are two steps needed in order to provide meaningful utility and fairness guarantees for the model found by solving Main Objective: (1) showing that it is possible to recover a solution close to the ‘optimal’ solution, and (2) providing an upper bound for both the risk (F(w)) and the fairness constraint (r(w)) given this solution. To formally define what an ‘optimal’ solution is, in this work we aim to identify constraints that satisfy the following key assumption:

Assumption 0 (Convexity of G). Assume that G(w; λ) is convex in w for any fixed λ.

Remark. In particular, since G is linear in λ, given a fixed w_0, we can find a solution to the problem max_λ G(w_0; λ), denoted as λ*, i.e. G(w_0; λ*) ≥ G(w_0; λ) for all λ. When G is convex in w, we can argue that given a fixed λ_0, there exists w* that satisfies w* = argmin_w G(w; λ_0), i.e. G(w*; λ_0) ≤ G(w; λ_0) for all w. Therefore, (w*, λ*) is a saddle point of G(·; ·), which is denoted as the optimal solution in our setting.

3.2 FORMULATING FAIR FL: BOUNDED GROUP LOSS AND VARIANTS

Many prior works in fair federated learning consider instantiating R(h) in equation 3 as a constraint that bounds the difference between any two groups’ losses, a common technique used to enforce group fairness notions such as equalized odds and demographic parity (e.g., Chu et al., 2021; Cui et al., 2021; Zeng et al., 2021). Unfortunately, this results in G(w; λ) becoming nonconvex in w, thus violating our Assumption 0. This nonconvexity is problematic as it increases the likelihood that a solver will find a local minimum that either does not satisfy the fairness constraint or achieves poor utility. Instead of enforcing equity between the prediction quality of any two groups, in this work we explore using a constraint based on Bounded Group Loss (BGL) (Agarwal et al., 2019), which promotes the worst group’s prediction quality, and propose new variants that can retain convexity assumptions while satisfying meaningful fairness notions. In particular, we explore three instantiations of group fairness constraints R(h) that satisfy Assumption 0 below.
Instantiation 1 (Bounded Group Loss). We begin by considering fairness via the Bounded Group Loss (defined below), which was originally proposed by Agarwal et al. (2019). Different from applying Bounded Group Loss in a centralized setting, BGL in the context of federated learning requires that for any group a ∈ A, the average loss over all data belonging to group a is below a certain threshold. As we discuss in Section 4, this (along with general constraints of FL such as communication) necessitates the development of novel solvers and analyses for the objective.

Definition 1 (Agarwal et al. (2019)). A classifier h satisfies Bounded Group Loss (BGL) at level ζ under distribution D if for all a ∈ A, we have E[l(h(x), y) | A = a] ≤ ζ.

In practice, we could define the empirical Bounded Group Loss constraint at level ζ under the empirical distribution D̂ = (1/K) ∑_{k=1}^K D̂_k to be

r_a(h) := ∑_{k=1}^K r_{a,k}(h) = ∑_{k=1}^K [ (1/m_a) ∑_{a_{k,i}=a} l(h(x_{k,i}), y_{k,i}) − ζ/K ] ≤ 0 .

Benefits of BGL. BGL ensures that the prediction quality on any group reaches a certain threshold. Compared to standard loss parity constraints that aim to equalize the losses across all protected groups (e.g. overall accuracy equity (Dieterich et al., 2016)), BGL has two main advantages. First, G(w; λ) preserves convexity in w, as long as the loss function l itself is convex. In contrast, loss parity constraints are generally non-convex even if l is convex. Second, when the prediction difficulties are uneven across groups, loss parity may force an unnecessary drop of accuracy on some groups just to equalize all losses (Agarwal et al., 2019). In contrast, the criterion of BGL can avoid such undesirable effects.

Instantiation 2 (Conditional Bounded Group Loss). In some applications one needs a stronger fairness notion beyond ensuring that no group’s loss is too large. For example, in the scenario of binary classification, a commonly used fairness requirement is equalized true positive rate or false positive rate (Hardt et al., 2016). In the context of optimization for arbitrary loss functions, a natural substitute is equalized true/false positive loss. In other words, any group’s loss conditioned on positively/negatively labeled data should be equalized. Therefore, similar to BGL, we propose a novel fairness definition known as Conditional Bounded Group Loss (CBGL), defined below:

Definition 2. A classifier h satisfies Conditional Bounded Group Loss (CBGL) for y ∈ Y at level ζ_y under distribution D if for all a ∈ A, we have E[l(h(x), y) | A = a, Y = y] ≤ ζ_y.

In practice, we could define the empirical Conditional Bounded Group Loss constraint at level [ζ_y]_{y∈Y} under D̂ to be

r_{a,y}(h) := ∑_{k=1}^K r_{(a,y),k}(h) = ∑_{k=1}^K [ (1/m_{a,y}) ∑_{a_{k,i}=a, y_{k,i}=y} l(h(x_{k,i}), y_{k,i}) − ζ_y/K ] ≤ 0 .

Note that satisfying CBGL for all Y is a strictly harder problem than satisfying BGL alone. In fact, we can show that a classifier that satisfies CBGL at level [ζ_y]_{y∈Y} also satisfies BGL at level E_{y∼ρ_a}[ζ_y], where ρ_a is the probability distribution of labels for all data from group a.
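To make the empirical constraints concrete, here is a minimal sketch of evaluating the BGL terms (our illustration; the array names are assumptions, and in a real deployment each silo would send only its per-group loss sums to the server rather than pooling raw losses):

import numpy as np

def bgl_constraints(losses, groups, zeta):
    # r_a = (mean loss over all examples with attribute a) - zeta.
    # `losses` holds per-example losses l(h(x), y) pooled over silos;
    # the constraint asks r_a <= 0 for every group a.
    return {a: losses[groups == a].mean() - zeta
            for a in np.unique(groups)}

# CBGL is analogous: restrict the mask to (groups == a) & (labels == y)
# and compare against the per-label threshold zeta_y.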
Relationship between CBGL and Equalized Odds. For binary classification tasks in centralized settings, a common fairness notion is Equalized Odds (EO) (Hardt et al., 2016), which requires the True/False Positive Rate to be equal for all groups. Our CBGL definition can be viewed as a relaxation of EO. Consider a binary classification example where Y = {0, 1}, and let the loss function l be the 0-1 loss, so that each conditional expected loss is a conditional error rate. CBGL then requires classifier h to satisfy Pr[h(x) ≠ y_0 | Y = y_0, A = a] ≤ ζ_{y_0} for all a ∈ A and y_0 ∈ Y. EO requires Pr[h(x) = y_0 | Y = y_0, A = a] to be the same for all a ∈ A given a fixed y_0, which may not be feasible if the hypothesis class H is not rich enough. Instead of requiring equity of each group’s TPR/FPR, CBGL only imposes an upper bound on each group’s error rate conditioned on each label (i.e., its FNR/FPR). Similar to the comparison between BGL and loss parity, CBGL offers more flexibility than EO since it does not force an artificial increase in FPR or FNR when a prediction task on one of the protected groups is much harder. In addition, for applications where logistic regression or DNNs are used (e.g., CV, NLP), it is uncommon to use the 0-1 loss in the objective. Thus, CBGL can provide a relaxed notion of fairness for more general loss functions whose level of fairness can be flexibly tuned.

Instantiation 3 (MinMax Fairness). Recently, Papadaki et al. (2022) proposed a framework called FedMinMax, solving an agnostic fair federated learning problem where a weight is applied to the empirical risk conditioned on each group. Note that using BGL as the fairness constraint, our framework reduces to FedMinMax as a special case by setting β = 0, B = 1, and ζ = 0.

Definition 3. Using the same definition of r_a(h) as in Instantiation 1, FedMinMax (Papadaki et al., 2022) aims to solve the following objective:

min_h max_{λ∈R_+^{|A|}, ∥λ∥_1=1} ∑_{a∈A} λ_a r_a(h).

Note that a key property of FedMinMax is that the constant ζ used to upper bound the per-group loss is set to 0. From a constrained optimization view, the only feasible solution that satisfies all fairness constraints for this problem is a model with perfect utility performance, since requiring all losses to be smaller than 0 is equivalent to having all of them be exactly 0. Such a property limits the ability to provide fairness guarantees for FedMinMax. Fixing B and ζ also limits its empirical performance on the relation between fairness and utility, as we will show later in Appendix F.

4 PROVABLY FAIR FEDERATED LEARNING

In this section, we first propose Provably Fair Federated Learning (PFFL), a scalable solver for Main Objective, presented in Algorithm 1. We provide formal convergence guarantees for the method in Section 4.2. Given the solution found by PFFL, in Section 4.3 we then demonstrate the fairness guarantee for different examples of fairness notions defined in Section 3 (BGL, CBGL).

4.1 ALGORITHM

To find a saddle point for Main Objective, we follow the scheme from Freund & Schapire (1997) and summarize our solver for fair FL in Algorithm 1 (full algorithm description in Appendix A). Our algorithm is based on FedAvg (McMahan et al., 2017), a common scalable approach in federated learning. Intuitively, the method alternates between two steps: (1) given a fixed λ, optimize our regularized objective F(w) + λ^T r(w) over w; (2) given a fixed w, optimize the fairness violation term λ^T r(w) over λ. While Agarwal et al. (2019) also follow a similar recipe to ensure BGL, our method needs to overcome additional challenges in the federated setting. In particular, the method in Agarwal et al. (2019) optimizes w by performing an exact best response, which is in general infeasible when data is distributed across silos. Our method overcomes this challenge by applying a gradient-descent-ascent-style optimization process that utilizes the output of an FL learning algorithm as an approximation for the best response. In Algorithm 1, we provide an example in which the first step is achieved by using FedAvg to solve min_w F(w) + λ^T r(w) (L 4-12). Note that solving this objective does not require the FedAvg solver; any algorithm that learns a global model in FL could be used to find a certain w given λ. After we obtain a global model from a federated training round, we use exponentiated gradient ascent to update λ, following Alg 2 in Agarwal et al. (2019). This completes one training round. At the end of training, we calculate and return the average iterate as the fair global model.
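A high-level sketch of one way this alternation can be written (our paraphrase of Algorithm 1, not the authors' released code; fed_avg_round and constraint_values are assumed helpers that run the FedAvg rounds on F(w) + λ^T r(w) and return the constraint vector r(w), respectively; the λ parameterization follows the exponentiated-gradient scheme of Agarwal et al. (2019)):

import numpy as np

def pffl(w0, T, eta_theta, B, Z, fed_avg_round, constraint_values):
    # Alternate: (1) approximate best response in w via FedAvg for the
    # current lambda; (2) exponentiated gradient ascent on lambda.
    theta = np.zeros(Z)              # log-weights for the dual variable
    w, w_avg, lam_avg = w0, None, np.zeros(Z)
    for t in range(1, T + 1):
        lam = B * np.exp(theta) / (1.0 + np.exp(theta).sum())  # ||lam||_1 <= B
        w = fed_avg_round(w, lam)    # optimize F(w) + lam^T r(w) over w
        theta += eta_theta * constraint_values(w)  # ascend on lam^T r(w)
        w_avg = w if w_avg is None else w_avg + (w - w_avg) / t
        lam_avg += (lam - lam_avg) / t
    return w_avg, lam_avg            # average iterates (w_bar, lam_bar)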
Note that the ultimate goal of solving Main Objective is to find a w that minimizes the empirical risk subject to r(w) ≤ 0. Therefore, at the end of training, our algorithm checks whether the resulting model w̄ violates the fairness guarantee by at most some constant error (M + 2ν)/B, where M is the upper bound for the empirical risk and ν is the upper bound provided in Equation 5 (L 16-20). We will show in Lemma 1 that this is always true when there exists a solution w* for Problem 3. However, it is also worth noting that Problem 3 is not always feasible. For example, when we set ζ = 0, requiring r(w) ≤ 0 is equivalent to requiring the empirical risk given any group a ∈ A to be non-positive, which is only feasible when the loss is 0 for every example in the dataset. In this case, our algorithm will simply output null if the fairness guarantee is violated by an error larger than (M + 2ν)/B.

Privacy aspect of PFFL. Compared to FedAvg, our solver communicates losses conditioned on each group in addition to model updates. This is common in prior works that solve a min-max optimization problem in federated learning (Hou et al., 2021; Zeng et al., 2021). We note that our method could be easily extended to satisfy example-level DP for FL by performing DP-SGD locally for each client. Our algorithm also gives natural client-level DP and LDP variants. In particular, we can compute via a trusted server the average loss at each client, which provides sufficient statistics to update λ.

4.2 CONVERGENCE GUARANTEE

Different from Agarwal et al. (2019), our algorithm handles arbitrary convex losses in the federated setting by replacing the best response with the FedAvg output; we therefore want to show how close our solution is to the actual best response after running finitely many rounds. In this section, we provide a regret-bound-style analysis for our PFFL algorithm. To formally measure the distance between the solution found by our algorithm and the optimal solution, we introduce the ν-approximate saddle point as a generalization of a saddle point (see the Remark in Section 3.1), defined below:

Definition 4. (ŵ, λ̂) is a ν-approximate saddle point of G if

G(ŵ, λ̂) ≤ G(w, λ̂) + ν for all w,
G(ŵ, λ̂) ≥ G(ŵ, λ) − ν for all λ. (4)

As an example, the optimal solution (w*, λ*) is a 0-approximate saddle point of G. To show convergence, we first introduce some basic assumptions below:

Assumption 1. Let f_k be µ-strongly convex and L-smooth for all k = 1, · · · , K.

Assumption 2. Assume the stochastic gradient of f_k has bounded variance: E[∥∇f_k(w_t^k; ξ_t^k) − ∇f_k(w_t^k)∥²] ≤ σ_k² for all k = 1, · · · , K.

Assumption 3. Assume the stochastic gradient of f_k is uniformly bounded: E[∥∇f_k(w_t^k; ξ_t^k)∥²] ≤ G² for all k = 1, · · · , K.

These are common assumptions used when proving the convergence of FedAvg (e.g., Li et al., 2019b). Now we present our main theorem of convergence:

Theorem 1 (Informal Convergence Guarantee). Let Assumptions 1-3 hold. Define κ = L/µ, γ = max{8κ, J}, and step size η_Q = 2/((β + B)µ(γ + t)), and assume ∥r∥_∞ ≤ ρ.
Letting $\bar{w} = \frac{1}{ET}\sum_{t=1}^{ET} w^t$ and $\bar{\lambda} = \frac{1}{ET}\sum_{t=1}^{ET} \lambda^t$, we have:
$$\max_{\lambda} G(\bar{w}; \lambda) - \min_{w} G(w; \bar{\lambda}) \le \frac{1}{T}\sum_{t=1}^{T}\frac{\kappa C}{\gamma + t - 1} + \frac{B\log(Z+1)}{\eta_\theta E T} + \eta_\theta \rho^2 B, \quad (5)$$
where C is a constant. The upper bound in Equation 5 consists of two parts: (1) the error of the FedAvg process for obtaining w̄, which is a term of order O(log T / T); and (2) the error of the exponentiated gradient ascent process for obtaining λ̄, which converges to a noise ball for a fixed η_θ. Following Theorem 1, we can make the solution of Algorithm 1 a ν-approximate saddle point of G by picking appropriate η_θ and T:

Corollary 2. Let $\eta_\theta = \frac{\nu}{2\rho^2 B}$ and $T \ge \frac{1}{\nu(\gamma+1) - 2\kappa C}\left(\frac{4\rho^2 B^2 \log(Z+1)(\gamma+1)}{\nu E} + 2\kappa C(\gamma - 1)\right)$; then (w̄, λ̄) is a ν-approximate saddle point of G.

We provide detailed proofs of both Theorem 1 and Corollary 2 in Appendix C. Different from the setting in prior FedAvg analyses (e.g., Li et al., 2019b), in our case the outer minimization problem changes as λ gets updated. Therefore, our analysis must consider a more general scenario in which the objective function changes over time.

4.3 FAIRNESS GUARANTEE

In the previous section, we demonstrated that Algorithm 1 converges to a ν-approximate saddle point of the objective G. In this section, we further motivate why we care about finding a ν-approximate saddle point. The ultimate goal of our algorithm is to: (1) learn a model that produces fair predictions on training data, and (2) more importantly, produce fair predictions on test data, i.e., data from federated clients not seen during training. Before presenting the formal fairness and generalization guarantees, we state the following additional assumption, which is standard for proving generalization guarantees via Rademacher complexity bounds (Mohri et al., 2019).

Assumption 4. Let the empirical risk F and the population risk 𝓕 be upper bounded by a constant M.

We first show the fairness guarantee on the training data.

Lemma 1 (Empirical Fairness Guarantee). Assume there exists w∗ satisfying r(w∗) ≤ 0_Z. Then
$$\max_j \left[r_j(\bar{w})\right]_+ \le \frac{M + 2\nu}{B}. \quad (6)$$

Lemma 1 characterizes an upper bound on the worst fairness violation evaluated on the training data. Given a fixed ν, one can increase B to obtain a stronger fairness guarantee, i.e., a smaller upper bound. Combining this with Corollary 2, we see that when B is large, additional exponentiated gradient ascent rounds are required to achieve stronger fairness. Next we formalize the fairness guarantee over the true data distribution. Define the true data distribution to be $\mathcal{D} = \frac{1}{K}\sum_{k=1}^{K}\mathcal{D}_k$. We would like to characterize how well our model performs on the true distribution D as well as how well the fairness constraints are satisfied under D. This result is presented below in Theorem 3.

Theorem 3 (Full Fairness and Generalization Guarantee). Let Assumptions 1-4 hold and let (w̄, λ̄) be a ν-approximate saddle point of G. Then with probability 1 − δ, either no solution exists for Problem 3 and Algorithm 1 returns null, or Algorithm 1 returns w̄ satisfying
$$\mathcal{F}(\bar{w}) \le \mathcal{F}(w^*) + 2\nu + 4\mathfrak{R}_m(\mathcal{H}) + \frac{2M}{K}\sqrt{\sum_{k=1}^{K}\frac{1}{2m_k}\log(2/\delta)}, \qquad r_j(\bar{w}) \le \frac{M+2\nu}{B} + \mathrm{Gen}_{r,j}, \quad (7)$$
where w∗ is a solution for Problem 3 and Gen_r is the generalization error. The first part of Equation 7 characterizes how well our model performs over the true data distribution compared to the optimal solution; as the number of clients K increases, we achieve a smaller generalization error. The second part of Equation 7 characterizes how well the fairness constraints are satisfied over the true data distribution.
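Before unpacking the generalization term, a quick numeric sketch makes the role of B concrete. The snippet below (all constants are hypothetical placeholders, not measured values) evaluates the empirical fairness bound (M + 2ν)/B from Lemma 1 alongside the minimum round count T implied by Corollary 2, illustrating that a larger B buys a tighter fairness guarantee at the cost of more exponentiated-gradient rounds.

```python
import numpy as np

def fairness_bound(M, nu, B):
    """Empirical fairness bound of Lemma 1: (M + 2*nu) / B."""
    return (M + 2 * nu) / B

def min_rounds(nu, rho, B, Z, gamma, kappa, C, E):
    """Minimum T from Corollary 2 (requires nu*(gamma+1) > 2*kappa*C)."""
    num = (4 * rho**2 * B**2 * np.log(Z + 1) * (gamma + 1) / (nu * E)
           + 2 * kappa * C * (gamma - 1))
    return num / (nu * (gamma + 1) - 2 * kappa * C)

for B in [1, 10, 100]:  # hypothetical constants throughout
    print(B,
          fairness_bound(M=1.0, nu=0.05, B=B),
          min_rounds(nu=0.05, rho=1.0, B=B, Z=2, gamma=80.0,
                     kappa=10.0, C=0.01, E=5))
# The bound shrinks like 1/B while the required T grows like B^2.
```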
Note that the upper bound can be viewed as the sum of the empirical fairness violation and a generalization error. Based on the fairness notions defined in Section 3.2, we now instantiate the generalization error under different fairness constraints r.

Proposition 1 (r encodes BGL at level ζ). There are in total |A| fairness constraints, one for each group. Define the weighted Rademacher complexity for group a as
$$\mathfrak{R}_a(\mathcal{H}) = \mathbb{E}_{S_k \sim \mathcal{D}_k^{m_k},\, \sigma}\left[\sup_{h\in\mathcal{H}} \sum_{k=1}^{K}\frac{1}{m_a}\sum_{a_{k,i}=a} \sigma_{k,i}\, l(h(x_{k,i}), y_{k,i})\right].$$
In this scenario, we have:
$$\mathrm{Gen}_{r,a} = 2\mathfrak{R}_a(\mathcal{H}) + \frac{M}{m_a}\sqrt{\frac{K}{2}\log(2|A|/\delta)}.$$

Note that the fairness constraint for group a under the true distribution in Equation 7 is upper bounded by $O\!\left(\frac{\sqrt{K}}{m_a}\right)$. For any group a₀ with sufficient data, i.e., large m_{a₀}, the BGL constraint with respect to group a₀ under D has a stronger formal fairness guarantee than for any group with less data. It is also worth noting that this generalization error grows as the number of clients K grows. Recall that the utility generalization error becomes smaller as K grows; combining the two results gives a tradeoff, in terms of K, between the BGL fairness notion and utility over the true data distribution.

Proposition 2 (r encodes CBGL at level [ζ_y]_{y∈Y}). There are in total |A||Y| fairness constraints, one for each group and label. Define the weighted Rademacher complexity for group a conditioned on y as
$$\mathfrak{R}_{a,y}(\mathcal{H}) = \mathbb{E}_{S_k \sim \mathcal{D}_k^{m_k},\, \sigma}\left[\sup_{h\in\mathcal{H}} \sum_{k=1}^{K}\frac{1}{m_{a,y}}\sum_{a_{k,i}=a,\, y_{k,i}=y} \sigma_{k,i}\, l(h(x_{k,i}), y)\right],$$
where m_{a,y} is the number of all examples from group a with label y. In this scenario, we have:
$$\mathrm{Gen}_{r,(a,y)} = 2\mathfrak{R}_{a,y}(\mathcal{H}) + \frac{M}{m_{a,y}}\sqrt{\frac{K}{2}\log(2|A||Y|/\delta)}.$$

Similar to Proposition 1, in order to achieve a strong fairness guarantee for any specific constraint on the true data distribution, we need a sufficient number of samples associated with that constraint. We provide details and the proof of Theorem 3 in Appendix D. Different from the analysis in Agarwal et al. (2019), we analyze the generalization behavior in the federated setting, where we state the generalization bound as a function of the number of clients K. We then formally demonstrate the tension, induced by K, between utility and fairness performance evaluated on the true data distribution, which to the best of our knowledge has not been studied previously.

5 EXPERIMENTS

We evaluate PFFL (Algorithm 1) empirically on ProPublica COMPAS, a dataset commonly studied in fair ML (Angwin et al., 2016; Zeng et al., 2021); the US-wide ACS PUMS data, a recent group fairness benchmark dataset (Ding et al., 2021); and CelebA (Caldas et al., 2018), a common federated learning dataset. We compare our method with training a vanilla FedAvg model in terms of both fairness and utility in Section 5.1, and explore performance relative to baselines that aim to enforce other fairness notions (Demographic Parity and Equal Opportunity) in Section 5.2.

Setup. For all experiments, we evaluate the accuracy and the empirical loss for each group on test data belonging to all the silos of our fair federated learning solver. We consider COMPAS recidivism prediction with gender as the protected attribute, the ACS Employment task (Ding et al., 2021) with race as the protected attribute, and CelebA (Caldas et al., 2018) with gender as the protected attribute. To reflect the federated setting, we use heterogeneous data partitions to create data silos. ACS Employment is naturally partitioned into 50 states; COMPAS and CelebA are manually partitioned in a non-IID manner into a collection of data silos (a sketch of one such partitioning recipe follows).
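The paper's exact partition schemes are given in Appendix B; purely as an illustration, a common recipe for manufacturing non-IID silos (not necessarily the one used here) draws each class's silo shares from a Dirichlet distribution:

```python
import numpy as np

def dirichlet_partition(labels, n_silos, alpha=0.5, seed=0):
    """Split example indices into non-IID silos: for each class, the share
    of its examples assigned to each silo is drawn from Dir(alpha).
    Smaller alpha yields more skewed, i.e. more heterogeneous, silos."""
    rng = np.random.default_rng(seed)
    silos = [[] for _ in range(n_silos)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_silos))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for silo, chunk in zip(silos, np.split(idx, cuts)):
            silo.extend(chunk.tolist())
    return silos
```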
A detailed description of the datasets, models, and partition schemes can be found in Appendix B.

5.1 FAIRNESS-UTILITY TRADE-OFFS FOR ALGORITHM 1

We first explore how the test error rate varies as a function of the maximum group loss under Algorithm 1. To be consistent with our method and theoretical analysis, we exclude the protected attribute a_i of each data point as a feature for learning the predictor. For each dataset, we evaluate PFFL with BGL, CBGL for Y = 1, and CBGL for Y = 0. For each method, given a fixed number of training iterations E and T, we tune B and ζ and evaluate both the test error rate and the test loss on each group. For a given test error rate, we select the hyperparameter pair (B, ζ) that yields the lowest maximum group loss. We show test accuracy vs. max group loss in Figure 2. In particular, we compare our fairness-aware FL methods with two baseline methods: vanilla FedAvg and FedAvg trained on a group-weighted loss. In FL, applying fair training locally at each data silo and aggregating the resulting models may not provide strong population-wide fairness guarantees under the same fairness definition (Zeng et al., 2021). Hence, we also explore the relationship between test accuracy and max group loss under local BGL and global BGL constraints. On all datasets, there is a natural tradeoff between error rate and the fairness constraint: when a model achieves stronger fairness (smaller max group loss), it tends to have worse utility (higher error rate). However, in all scenarios, our method not only yields a model with significantly smaller maximum group loss than vanilla FedAvg, but also achieves higher test accuracy than the baseline FedAvg, which is unaware of group fairness. Meanwhile, for all datasets and fairness metrics, as expected, PFFL with global BGL achieves improved fairness-utility tradeoffs relative to PFFL with local BGL. Therefore, our PFFL framework with a global fairness constraint yields a model in which utility can coexist with fairness constraints based on Bounded Group Loss.

5.2 BGL/CBGL EVALUATED ON OTHER FAIRNESS NOTIONS

Beyond BGL and CBGL, there are other fairness notions commonly used in the fair machine learning literature. For example, several works in group fair FL have proposed optimizing the difference between every two groups' losses (possibly conditioned on the true label) with the aim of achieving Demographic Parity (or Equal Opportunity) (Chu et al., 2021; Cui et al., 2021; Hardt et al., 2016; Zeng et al., 2021). Formally, consider the case where the protected attribute set is A = {0, 1}. Define ∆DP = |Pr(h(X) = 1 | A = 0) − Pr(h(X) = 1 | A = 1)| and ∆EO = |Pr(h(X) = 1 | A = 0, Y = 1) − Pr(h(X) = 1 | A = 1, Y = 1)|. These works aim to train a model that achieves a small ∆DP or a small ∆EO, depending on the fairness constraint selected during optimization. As discussed in Section 3.2, CBGL can be viewed as a generalization of Equal Opportunity and Equalized Odds. In this section, we compare our method with FedFB (Zeng et al., 2021), FedFair (Chu et al., 2021), and FCFL (Cui et al., 2021), all of which aim to optimize ∆DP and ∆EO. We evaluate ∆DP and ∆EO for all approaches on COMPAS and ACS Employment, with results shown in Figure 3. Similar to Figure 2, we only show the points lying on the Pareto frontier for our method.
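For concreteness, both gap metrics can be computed directly from binary predictions; a minimal sketch (assuming numpy arrays of 0/1 predictions `y_pred`, labels `y_true`, and group attributes `a`) is below.

```python
import numpy as np

def delta_dp(y_pred, a):
    """Demographic parity gap: |Pr(h=1 | A=0) - Pr(h=1 | A=1)|."""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def delta_eo(y_pred, y_true, a):
    """Equal opportunity gap: the TPR difference between the two groups."""
    pos = y_true == 1
    return abs(y_pred[(a == 0) & pos].mean() - y_pred[(a == 1) & pos].mean())
```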
Although PFFL with BGL and CBGL was not directly designed for these fairness criteria (i.e., it does not directly force two groups' losses or prediction rates to be close), we see that our method still outperforms training a FedAvg baseline, and in fact performs comparably to or better than prior methods that were designed for this setting.

6 CONCLUSIONS, LIMITATIONS, AND FUTURE WORK

In this work, we propose a fair learning objective for federated settings via Bounded Group Loss. We then propose a scalable federated solver that finds an approximate saddle point of the objective. Theoretically, we provide convergence and fairness guarantees for our method. Empirically, we show that our method can provide high accuracy and fairness simultaneously across tasks from fair ML and federated learning. In addition to strong empirical performance, ours is the first work we are aware of to provide formal convergence and fairness/generalization guarantees for group fair FL with general convex loss functions. In future work, we are interested in investigating additional benefits of our framework, including applications in non-federated settings. Finally, similar to prior works in group fair FL, our method communicates additional parameters beyond standard non-fair FL (e.g., via FedAvg); studying and mitigating the privacy risks of such communication in the context of fair federated learning would be an interesting direction for future work.
1. What is the focus and contribution of the paper regarding fair federated learning? 2. What are the strengths of the proposed approach, particularly in terms of allowing direct specification of target group-specific loss minimums? 3. What are the weaknesses of the paper, especially regarding the novelty of BGL and its similarity to existing methods? 4. Do you have any concerns or questions about the theoretical comparisons with other approaches, such as group DRO and FedMinMax? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper discusses the use of bounded group loss (BGL) in the context of fair federated learning. The core objective is to learn a classifier that satisfies predefined constraints on the expected loss of each group, on average across all clients. It is expected that the client-conditional data distribution may differ between clients. The paper further assumes that the bounded loss function is convex w.r.t. the model parameters, and proposes a distributed saddle point optimization algorithm to recover a model that minimizes empirical risk subject to these group constraints. A convergence proof for the algorithm is also provided.

Strengths And Weaknesses
Strengths: The paper allows the direct specification of target group-specific loss minima, which is a natural way of specifying group constraints. The resulting algorithm is simple and well grounded, and the resulting fairness-utility tradeoffs seem comparable to existing approaches.

Weaknesses: I think the novelty of BGL is limited: the proposed objective is equivalent to FedMinMax, which is discussed in the paper, with an additional static bias added to the group-conditional loss terms. The formulation as a saddle point presented in this paper is not fully compatible with the bounds on the Lagrangian weights presented in Eq 2, and Algorithm 1 makes it seem like the weights asymptotically reach the norm bound B, which raises the question of why that is a separate optimization parameter from \beta in Eq 2. The theoretical comparison with existing methods is sorely lacking, in particular with group DRO, which uses a similar exponential weight update rule for lambda and thresholds on the group losses (albeit with an additional relu), and with FedMinMax, which in its paper formulation has a lower-bound constraint on lambda that makes both objectives theoretically equivalent.

Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and easily reproducible.
ICLR
Title Fair Federated Learning via Bounded Group Loss

Abstract
Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.

1 INTRODUCTION

Group fairness aims to mitigate unfair biases against certain protected demographic groups (e.g. race, gender, age) in the use of machine learning. Many methods have been proposed to incorporate group fairness constraints in centralized settings (e.g., Agarwal et al., 2018; Feldman et al., 2015; Hardt et al., 2016; Zafar et al., 2017a). However, there is a lack of work studying these approaches in the context of federated learning (FL), a training paradigm where a model is fit to data generated by a set of disparate data silos, such as a network of remote devices or a collection of organizations (Kairouz et al., 2019; Li et al., 2020; McMahan et al., 2017). Mirroring concerns around fairness in non-federated settings, many FL applications similarly require performing fair prediction across protected groups. Unfortunately, as we show in Figure 1, naively applying existing approaches to each client in a federated network in isolation may be inaccurate due to heterogeneity across clients, and may fail to produce a fair model across the entire population (Zeng et al., 2021). Several recent works have considered addressing this issue by exploring specific forms of group fairness in FL (e.g., Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Papadaki et al., 2022; Rodríguez-Gálvez et al., 2021; Zeng et al., 2021). Despite promising empirical performance, these prior works lack formal guarantees surrounding the resulting fairness of the solutions (Section 2), which is problematic as it is unclear how the methods may perform in real-world FL deployments. In this work we provide a formulation and method for group fair FL that can provably satisfy global fairness constraints. Common group fairness notions that aim to achieve equal prediction quality between any two protected groups (e.g., Demographic Parity, Equal Opportunity (Hardt et al., 2016)) are difficult to provably satisfy while simultaneously finding a model with high utility. Instead, we consider a different fairness notion known as Bounded Group Loss (BGL) (Agarwal et al., 2019), which aims to promote the worst group's performance, as a means of capturing these common group fairness criteria. As we show, a benefit of this approach is that in addition to having practical advantages in terms of fairness-utility trade-offs (Section 5), it maintains smoothness and convexity properties that can equip our solver with favorable theoretical guarantees. Based on our group fairness formulation, we then provide a scalable method (PFFL) to solve the proposed objectives via federated saddle point optimization.
Theoretically, we provide convergence guarantees for the method as well as fairness and generalization guarantees for the resulting solutions. Empirically, we demonstrate the effectiveness of our approach on common benchmarks from fair machine learning and federated learning. We summarize our main contributions below:

• We propose a novel fair federated learning framework for a range of group fairness notions. Our framework models the fair FL problem as a saddle point optimization problem and leverages variations of Bounded Group Loss (Agarwal et al., 2019) to capture common forms of group fairness. We also extend BGL to consider a new fairness notion called Conditional Bounded Group Loss (CBGL), which may be of independent interest and utility in non-federated settings.

• We propose a scalable federated optimization method for our group fair FL framework. We provide a regret bound analysis for our method under convex ML objectives to demonstrate formal convergence guarantees. Further, we provide fairness and generalization guarantees on the model for a variety of fairness notions.

• Finally, we evaluate our method on common benchmarks used in fair machine learning and federated learning. In all settings, we find that our method can significantly improve model fairness compared to baselines without sacrificing model accuracy. Additionally, even though we do not directly optimize classical group fairness constraints (e.g., Demographic Parity, Equal Opportunity), we find that our method can still provide comparable/better fairness-utility trade-offs relative to existing approaches when evaluated on these metrics.

2 BACKGROUND AND RELATED WORK

Fair Machine Learning. Algorithmic fairness in machine learning aims to identify and correct bias in the learning process. Common approaches for obtaining fairness include pre-processing methods that rectify the features or raw data to enhance fairness (Calmon et al., 2017; Feldman et al., 2015; Zemel et al., 2013); post-processing methods that revise the prediction score for a trained model (Dwork et al., 2018; Hardt et al., 2016; Menon & Williamson, 2018); and in-processing methods that directly modify the training objective/solver to produce a fair predictor (Agarwal et al., 2018; 2019; Woodworth et al., 2017; Zafar et al., 2017a;b). Most existing methods in fair ML rely on using a centralized dataset to train and evaluate the model. As shown in Figure 1, in the federated setting where data is privately distributed across different data silos, directly applying these methods locally only ensures fairness for each silo rather than the entire population. Developing effective and efficient techniques for fair FL is thus an important area of study.

Fair Federated Learning. In FL, definitions of fairness may take many forms. A commonly studied notion of fairness is representation parity (Hashimoto et al., 2018), whose application in FL requires the model's performance across all clients to have small variance (Donahue & Kleinberg, 2021; Li et al., 2019a; 2021; Mohri et al., 2019; Yue et al., 2021). In this work we instead focus on notions of group fairness, in which every data point in the federated network belongs to some (possibly) protected group, and we aim to find a model that doesn't introduce bias towards any group. Recent works have proposed various objectives for group fairness in federated learning. Zeng et al.
(2021) proposes a bi-level optimization objective that minimizes the differences between group losses while finding an optimal global model. Similarly, several works propose using a constrained optimization problem that aims to find the best model subject to an upper bound on the group loss difference (Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Rodríguez-Gálvez et al., 2021). Different from these approaches, our method focuses on a fairness constraint based on upper-bounding the loss of each group with a constant, rather than the loss difference between any two groups. More closely related to our work, Papadaki et al. (2022) weighs the empirical loss given each group by a trainable vector λ and finds the best model for the worst case λ. Though similar to our method for ζ = 0, this approach fails to achieve both strong utility and fairness performance under non-convex loss functions (see Section 5). Zhang et al. (2021) also propose a similar objective to learn a model with unified fairness. Among these works, Zeng et al. (2021) and Cui et al. (2021) also provide simplified convergence and fairness guarantees for their methods. However, these works lack a formal analysis of convergence for arbitrary convex loss functions, as well as of the behavior of the fairness constraint over the true data distribution. Ours is the first work we are aware of to provide such guarantees in the context of group fair federated learning.

3 FAIR FEDERATED LEARNING VIA BOUNDED GROUP LOSS

In this section we first formalize the group fair federated learning problem and a fairness-aware objective solving this problem (Section 3.1). We then provide several examples of group fairness based on the notion of BGL and show how to incorporate them into our framework (Section 3.2).

3.1 SETUP: GROUP FAIR FEDERATED LEARNING

Many applications of FL require treatment of data belonging to protected groups (e.g., race, gender, age). This is particularly common in applications of cross-silo FL, where we may wish to produce a model that fairly treats individuals from various demographic groups across a collection of data silos (e.g. hospitals, schools, financial institutions) (Chu et al., 2021; Cui et al., 2021; Vaid et al., 2021).

FL Setup. Following standard federated learning scenarios (McMahan et al., 2017), we consider a network with K different clients. Each client k ∈ [K] has access to training data $\hat{\mathcal{D}}_k := \{(x_i, y_i, a_i)\}_{i=1,\cdots,m_k}$ sampled from the true data distribution $\mathcal{D}_k$, where x_i is an observation, y_i ∈ Y is the label, and a_i ∈ A is the protected attribute. Let the hypothesis class be H, and for any model h ∈ H define the loss function on data (x, y, a) to be l(h(x), y). Federated learning applications typically aim to solve:
$$\min_{h\in\mathcal{H}} \mathcal{F}(h) = \min_{h\in\mathcal{H}} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[l(h(x), y)\right]. \quad (1)$$
In practice, $\mathcal{D}_k$ is estimated by observing $\{(x_i, y_i, a_i)\}_{i=1,\cdots,m_k}$, and we minimize the empirical risk:
$$\min_{h\in\mathcal{H}} F(h) = \min_{h\in\mathcal{H}} \frac{1}{K}\sum_{k=1}^{K}\frac{1}{m_k}\sum_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i}). \quad (2)$$
For simplicity, we define $f_k(h) = \frac{1}{m_k}\sum_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i})$ as the local objective for client k. Further, we assume h is parameterized by a vector w ∈ R^p, where p is the number of parameters. We will use F(w) and fk(w) to represent F(h) and fk(h) interchangeably in the remainder of the paper.
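As a concrete reading of Equation (2), the sketch below computes the global empirical risk as the unweighted average of the per-client average losses; `model`, `loss`, and the per-client data lists are illustrative placeholders, not the authors' code.

```python
import numpy as np

def empirical_risk(model, clients, loss):
    """Global empirical risk F(h) from Eq. (2): the unweighted mean over
    clients of each client's local objective f_k(h), where client k
    holds m_k examples (x, y)."""
    f = []
    for data in clients:                          # one list of (x, y) per silo
        f.append(np.mean([loss(model(x), y) for x, y in data]))
    return np.mean(f)                             # each client weighted 1/K
```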
Fairness via Constrained Optimization. When a centralized dataset is available, a standard approach to learning a model that incorporates fairness criteria is to solve a constrained optimization problem where one tries to find the best model subject to some fairness notion (Agarwal et al., 2019; Barocas et al., 2019). Following this formulation, we formalize a similar learning problem in the federated setting, solving:
$$\min_{h\in\mathcal{H}} F(h) \quad \text{subject to} \quad R(h) \le \zeta, \quad (3)$$
where R(h), ζ ∈ R^Z encode the constraint set on h. For instance, the z-th constraint could be written as R_z(h) ≤ ζ_z, where ζ_z is a fixed constant. This formulation is commonly used to satisfy group fairness notions such as equal opportunity, equalized odds (Hardt et al., 2016), and minimax group fairness (Diana et al., 2021). To solve constrained optimization problem (3), a common method is to use Lagrangian multipliers. In particular, let λ ∈ R^Z_+ be a dual variable, and assume ∥λ∥₁ is at most B. The magnitude of B can be viewed as the regularization strength for the constraint term. Objective (3) can then be converted into the following saddle point optimization problem:
$$\min_{w} \max_{\lambda\in\mathbb{R}^Z_+,\ \|\lambda\|_1 \le B} G(w; \lambda) = \beta F(w) + \lambda^T r(w), \quad \text{(Main Objective)}$$
where the q-th index of r encodes the q-th constraint from R (i.e., r_q(w) := R_q(w) − ζ_q) and β is a fixed constant. In other words, the objective finds the best model under the scenario where the fairness constraint is most violated (i.e., the regularization term is maximized). Two steps are needed to provide meaningful utility and fairness guarantees for the model found by solving Main Objective: (1) showing that it is possible to recover a solution close to the 'optimal' solution, and (2) providing an upper bound on both the risk (F(w)) and the fairness constraint (r(w)) given this solution. To formally define what an 'optimal' solution is, in this work we aim to identify constraints that satisfy the following key assumption:

Assumption 0 (Convexity of G). Assume that G(w; λ) is convex in w for any fixed λ.

Remark. In particular, since G is linear in λ, given a fixed w₀ we can find a solution to the problem max_λ G(w₀; λ), denoted λ∗, i.e., G(w₀; λ∗) ≥ G(w₀; λ) for all λ. When G is convex in w, we can argue that given a fixed λ₀, there exists w∗ satisfying w∗ = argmin_w G(w; λ₀), i.e., G(w∗; λ₀) ≤ G(w; λ₀) for all w. Therefore, (w∗, λ∗) is a saddle point of G(·, ·), which we denote as the optimal solution in our setting.

3.2 FORMULATING FAIR FL: BOUNDED GROUP LOSS AND VARIANTS

Many prior works in fair federated learning consider instantiating R(h) in (3) as a constraint that bounds the difference between any two groups' losses, a common technique used to enforce group fairness notions such as equalized odds and demographic parity (e.g., Chu et al., 2021; Cui et al., 2021; Zeng et al., 2021). Unfortunately, this results in G(w; λ) becoming nonconvex in w, thus violating our Assumption 0. This nonconvexity is problematic as it increases the likelihood that a solver will find a local minimum that either does not satisfy the fairness constraint or achieves poor utility. Instead of enforcing equity between the prediction quality of any two groups, in this work we explore a constraint based on Bounded Group Loss (BGL) (Agarwal et al., 2019), which promotes the worst group's prediction quality, and propose new variants that retain the convexity assumption while satisfying meaningful fairness notions. In particular, we explore three instantiations of group fairness constraints R(h) that satisfy Assumption 0 below.
Instantiation 1 (Bounded Group Loss). We begin by considering fairness via the Bounded Group Loss (defined below), which was originally proposed by Agarwal et al. (2019). Different from applying Bounded Group Loss in a centralized setting, BGL in the context of federated learning requires that for any group a ∈ A, the average loss over all data belonging to group a is below a certain threshold. As we discuss in Section 4, this (along with general constraints of FL such as communication) necessitates the development of novel solvers and analyses for the objective.

Definition 1 (Agarwal et al. (2019)). A classifier h satisfies Bounded Group Loss (BGL) at level ζ under distribution D if for all a ∈ A, we have E[l(h(x), y) | A = a] ≤ ζ.

In practice, we can define the empirical bounded group loss constraint at level ζ under the empirical distribution $\hat{\mathcal{D}} = \frac{1}{K}\sum_{k=1}^{K}\hat{\mathcal{D}}_k$ to be
$$r_a(h) := \sum_{k=1}^{K} r_{a,k}(h) = \sum_{k=1}^{K}\left[\frac{1}{m_a}\sum_{a_{k,i}=a} l(h(x_{k,i}), y_{k,i}) - \frac{\zeta}{K}\right] \le 0.$$

Benefits of BGL. BGL ensures that the prediction quality on any group reaches a certain threshold. Compared to standard loss parity constraints that aim to equalize the losses across all protected groups (e.g., overall accuracy equity (Dieterich et al., 2016)), BGL has two main advantages. First, G(w; λ) preserves convexity in w as long as the loss function l itself is convex. In contrast, loss parity constraints are generally non-convex even if l is convex. Second, when the prediction difficulties are uneven across groups, loss parity may force an unnecessary drop of accuracy on some groups just to equalize all losses (Agarwal et al., 2019). In contrast, the BGL criterion avoids such undesirable effects.

Instantiation 2 (Conditional Bounded Group Loss). In some applications one needs a stronger fairness notion beyond ensuring that no group's loss is too large. For example, in the scenario of binary classification, a commonly used fairness requirement is an equalized true positive rate or false positive rate (Hardt et al., 2016). In the context of optimization for arbitrary loss functions, a natural substitute is an equalized true/false positive loss. In other words, any group's loss conditioned on positively/negatively labeled data should be equalized. Therefore, similar to BGL, we propose a novel fairness definition known as Conditional Bounded Group Loss (CBGL), defined below:

Definition 2. A classifier h satisfies Conditional Bounded Group Loss (CBGL) for y ∈ Y at level ζ_y under distribution D if for all a ∈ A, we have E[l(h(x), y) | A = a, Y = y] ≤ ζ_y.

In practice, we can define the empirical Conditional Bounded Group Loss constraint at level [ζ_y]_{y∈Y} under D̂ to be
$$r_{a,y}(h) := \sum_{k=1}^{K} r_{(a,y),k}(h) = \sum_{k=1}^{K}\left[\frac{1}{m_{a,y}}\sum_{a_{k,i}=a,\, y_{k,i}=y} l(h(x_{k,i}), y_{k,i}) - \frac{\zeta_y}{K}\right] \le 0.$$

Note that satisfying CBGL for all Y is a strictly harder problem than satisfying BGL alone. In fact, we can show that a classifier that satisfies CBGL at level [ζ_y]_{y∈Y} also satisfies BGL at level $\mathbb{E}_{y\sim\rho_a}[\zeta_y]$, where ρ_a is the probability density of labels for all data from group a.

Relationship between CBGL and Equalized Odds. For binary classification tasks in centralized settings, a common fairness notion is Equalized Odds (EO) (Hardt et al., 2016), which requires the True/False Positive Rate to be equal for all groups. Our CBGL definition can be viewed as a relaxation of EO. Consider a binary classification example where Y = {0, 1}. Let the loss function l be the 0-1 loss. CBGL requires classifier h to satisfy Pr[h(x) = y|Y = y0, A = a] ≤ ζ_{y0} for all a ∈ A and y0 ∈ Y.
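To illustrate how these empirical constraints are evaluated in practice, here is a minimal sketch (illustrative names, not the authors' code) that pools each group's examples across silos and compares the group-average loss against its threshold; negative values indicate the constraint is satisfied.

```python
import numpy as np

def bgl_violations(model, clients, loss, groups, zeta):
    """Empirical BGL constraints r_a(h): the average loss over all of
    group a's data, pooled across clients, minus the threshold zeta."""
    r = {}
    for a in groups:
        losses = [loss(model(x), y)
                  for data in clients for (x, y, attr) in data if attr == a]
        r[a] = np.mean(losses) - zeta
    return r

def cbgl_violations(model, clients, loss, groups, labels, zeta):
    """Empirical CBGL constraints r_{a,y}(h): one constraint per
    (group, label) pair, each with its own threshold zeta[y]."""
    r = {}
    for a in groups:
        for y0 in labels:
            ls = [loss(model(x), y)
                  for data in clients for (x, y, attr) in data
                  if attr == a and y == y0]
            r[(a, y0)] = np.mean(ls) - zeta[y0]
    return r
```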
1. What are the strengths and weaknesses of the proposed group federated learning algorithm? 2. How does the reviewer assess the privacy leakage and secrecy of the proposed algorithm? 3. Does the reviewer have concerns regarding the use of BGL for certain tasks? 4. Why do some of the experimental results appear unusual or unexpected? 5. How were the hyperparameters chosen for the baseline methods? 6. Are there any missing baselines that should be included in the comparison? 7. Can the authors provide additional information on the standard reduction method with central data as an upper bound?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a group fair federated learning algorithm with formal convergence/BGL guarantees.

Strengths And Weaknesses
- Privacy leakage/secrecy of the proposed algorithm is not discussed. It is unclear how much privacy leakage occurs. The proposed algorithm requires the exchange of additional information "r" per round, so one must discuss the implications of this. Federated learning algorithms with frequent synchronization or additional data exchange may be subject to an unacceptable level of information leakage, making them no different from sharing all the local datasets with every client. To avoid this kind of pitfall, one must clearly discuss the privacy aspects of new algorithms.
- BGL seems to be ok if it's used for regression, or used for classification without any preferred labels. ACS Employment and COMPAS are tasks with preferred outputs, so I am not sure whether using BGL makes sense in these settings.
- Some of the experimental results look weird. In most of the experiments, FedAvg, which should minimize the error rate better than any of its constrained counterparts, does not seem to achieve the lowest error rate. Sometimes it performs the worst among all tested methods. Why?
- How are the hyperparameters chosen for the baseline methods?
- Some baselines are missing. Also, please add the performance of the standard reduction method with central data (upper bound).
[1] FairFed: Enabling Group Fairness in Federated Learning https://arxiv.org/abs/2110.00857
[2] GIFAIR-FL: A Framework for Group and Individual Fairness in Federated Learning https://arxiv.org/abs/2108.02741

Clarity, Quality, Novelty And Reproducibility
The paper is clearly written.
ICLR
Title Fair Federated Learning via Bounded Group Loss Abstract Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches. 1 INTRODUCTION Group fairness aims to mitigate unfair biases against certain protected demographic groups (e.g. race, gender, age) in the use of machine learning. Many methods have been proposed to incorporate group fairness constraints in centralized settings (e.g., Agarwal et al., 2018; Feldman et al., 2015; Hardt et al., 2016; Zafar et al., 2017a). However, there is a lack of work studying these approaches in the context of federated learning (FL), a training paradigm where a model is fit to data generated by a set of disparate data silos, such as a network of remote devices or collection of organizations (Kairouz et al., 2019; Li et al., 2020; McMahan et al., 2017). Mirroring concerns around fairness in non-federated settings, many FL applications similarly require performing fair prediction across protected groups. Unfortunately, as we show in Figure 1, naively applying existing approaches to each client in a federated network in isolation may be inaccurate due to heterogeneity across clients—failing to produce a fair model across the entire population (Zeng et al., 2021). Several recent works have considered addressing this issue by exploring specific forms of group fairness in FL (e.g., Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Papadaki et al., 2022; RodríguezGálvez et al., 2021; Zeng et al., 2021). Despite promising empirical performance, these prior works lack formal guarantees surrounding the resulting fairness of the solutions (Section 2), which is problematic as it is unclear how the methods may perform in real-world FL deployments. In this work we provide a formulation and method for group fair FL that can provably satisfy global fairness constraints. Common group fairness notions that aim to achieve equal prediction quality between any two protected groups (e.g., Demographic Parity, Equal Opportunity (Hardt et al., 2016)) are difficult to provably satisfy while simultaneously finding a model with high utility. Instead, we consider a different fairness notion known as Bounded Group Loss (BGL) (Agarwal et al., 2019), which aims to promote worst group’s performance, to capture these common group fairness criteria. As we show, a benefit of this approach is that in addition to having practical advantages in terms of fairness-utility trade-offs (Section 5), it maintains smoothness and convexity properties that can equip our solver with favorable theoretical guarantees. Based on our group fairness formulation, we then provide a scalable method (PFFL) to solve the proposed objectives via federated saddle point optimization. 
Theoretically, we provide convergence guarantees for the method as well as fairness and generalization guarantees for the resulting solutions. Empirically, we demonstrate the effectiveness of our approach on common benchmarks from fair machine learning and federated learning. We summarize our main contributions below: • We propose a novel fair federated learning framework for a range of group fairness notions. Our framework models the fair FL problem as a saddle point optimization problem and leverages variations of Bounded Group Loss (Agarwal et al., 2019) to capture common forms of group fairness. We also extend BGL to consider a new fairness notion called Conditional Bounded Group Loss (CBGL), which may be of independent interest and utility in non-federated settings. • We propose a scalable federated optimization method for our group fair FL framework. We provide a regret bound analysis for our method under convex ML objectives to demonstrate formal convergence guarantees. Further, we provide fairness and generalization guarantees on the model for a variety of fairness notions. • Finally, we evaluate our method on common benchmarks used in fair machine learning and federated learning. In all settings, we find that our method can significantly improve model fairness compared to baselines without sacrificing model accuracy. Additionally, even though we do not directly optimize classical group fairness constraints (e.g., Demographic Parity, Equal Opportunity), we find that our method can still provide comparable/better fairness-utility trade-offs relative to existing approaches when evaluated on these metrics. 2 BACKGROUND AND RELATED WORK Fair Machine Learning. Algorithmic fairness in machine learning aims to identify and correct bias in the learning process. Common approaches for obtaining fairness include pre-processing methods that rectify the features or raw data to enhance fairness (Calmon et al., 2017; Feldman et al., 2015; Zemel et al., 2013); post-processing methods that revise the prediction score for a trained model (Dwork et al., 2018; Hardt et al., 2016; Menon & Williamson, 2018); and in-processing methods that directly modify the training objective/solver to produce a fair predictor (Agarwal et al., 2018; 2019; Woodworth et al., 2017; Zafar et al., 2017a;b). Most existing methods in fair ML rely on using a centralized dataset to train and evaluate the model. As shown in Figure 1, in the federated setting where data is privately distributed across different data silos, directly applying these methods locally only ensures fairness for each silo rather than the entire population. Developing effective and efficient techniques for fair FL is thus an important area of study. Fair Federated Learning. In FL, definitions of fairness may take many forms. A commonly studied notion of fairness is representation parity (Hashimoto et al., 2018), whose application in FL requires the model’s performance across all clients to have small variance (Donahue & Kleinberg, 2021; Li et al., 2019a; 2021; Mohri et al., 2019; Yue et al., 2021). In this work we instead focus on notions of group fairness, in which every data point in the federated network belongs to some (possibly) protected group, and we aim to find a model that doesn’t introduce bias towards any group. Recent works have proposed various objectives for group fairness in federated learning. Zeng et al. 
(2021) proposes a bi-level optimization objective that minimizes the difference between each group's loss while finding an optimal global model. Similarly, several works propose using a constrained optimization problem that aims to find the best model subject to an upper bound on the group loss difference (Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Rodríguez-Gálvez et al., 2021). Different from these approaches, our method focuses on a fairness constraint based on upper-bounding the loss of each group with a constant rather than the loss difference between any two groups. More closely related to our work, Papadaki et al. (2022) weighs the empirical loss given each group by a trainable vector λ and finds the best model for the worst case λ. Though similar to our method for ζ = 0, this approach fails to achieve both strong utility and fairness performance under non-convex loss functions (see Section 5). Zhang et al. (2021) also propose a similar objective to learn a model with unified fairness. Among these works, Zeng et al. (2021) and Cui et al. (2021) also provide simplified convergence and fairness guarantees for their methods. However, these works lack formal analyses around the convergence for arbitrary convex loss functions as well as the behavior of the fairness constraint over the true data distribution. Ours is the first work we are aware of to provide such guarantees in the context of group fair federated learning. 3 FAIR FEDERATED LEARNING VIA BOUNDED GROUP LOSS In this section we first formalize the group fair federated learning problem and a fairness-aware objective solving this problem (Section 3.1). We then provide several examples of group fairness based on the notion of BGL and show how to incorporate them into our framework (Section 3.2). 3.1 SETUP: GROUP FAIR FEDERATED LEARNING Many applications of FL require treatment of data belonging to protected groups (e.g., race, gender, age). This is particularly common in applications of cross-silo FL, where we may wish to produce a model that fairly treats individuals from various demographic groups across a collection of data silos (e.g. hospitals, schools, financial institutions) (Chu et al., 2021; Cui et al., 2021; Vaid et al., 2021). FL Setup. Following standard federated learning scenarios (McMahan et al., 2017), we consider a network with K different clients. Each client k ∈ [K] has access to training data $\hat{D}_k := \{(x_i, y_i, a_i)\}_{i=1,\cdots,m_k}$ sampled from the true data distribution $\mathcal{D}_k$, where $x_i$ is an observation, $y_i \in Y$ is the label, and $a_i \in A$ is the protected attribute. Let the hypothesis class be H, and for any model h ∈ H define the loss function on data (x, y, a) to be l(h(x), y). Federated learning applications typically aim to solve:
$$\min_{h \in \mathcal{H}} \mathcal{F}(h) = \min_{h \in \mathcal{H}} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[l(h(x), y)\right]. \quad (1)$$
In practice, $\mathcal{D}_k$ is estimated by observing $\{(x_i, y_i, a_i)\}_{i=1,\cdots,m_k}$, and we solve the empirical risk:
$$\min_{h \in \mathcal{H}} F(h) = \min_{h \in \mathcal{H}} \frac{1}{K}\sum_{k=1}^{K}\frac{1}{m_k}\sum_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i}). \quad (2)$$
For simplicity, we define $f_k(h) = \frac{1}{m_k}\sum_{i=1}^{m_k} l(h(x_{k,i}), y_{k,i})$ as the local objective for client k. Further, we assume h is parameterized by a vector $w \in \mathbb{R}^p$ where p is the number of parameters. We will use F(w) and $f_k(w)$ to represent F(h) and $f_k(h)$ interchangeably in the remainder of the paper. Fairness via Constrained Optimization.
When a centralized dataset is available, a standard approach to learn a model that incorporates fairness criteria is to solve a constrained optimization problem where one tries to find the best model subject to some fairness notion (Agarwal et al., 2019; Barocas et al., 2019). Following this formulation, we formalize a similar learning problem in the federated setting, solving:
$$\min_{h \in \mathcal{H}} F(h) \quad \text{subject to} \quad R(h) \le \zeta, \quad (3)$$
where $R(h), \zeta \in \mathbb{R}^Z$ encode the constraint set on h. For instance, the z-th constraint could be written as $R_z(h) \le \zeta_z$ where $\zeta_z$ is a fixed constant. This formulation is commonly used to satisfy group fairness notions such as equal opportunity, equalized odds (Hardt et al., 2016), and minimax group fairness (Diana et al., 2021). To solve the constrained optimization problem (3), a common method is to use Lagrangian multipliers. In particular, let $\lambda \in \mathbb{R}^Z_+$ be a dual variable and assume $\|\lambda\|_1$ is at most B. The magnitude of B could be viewed as the regularization strength for the constraint term. Objective (3) can then be converted into the following saddle point optimization problem:
$$\min_{w} \max_{\lambda \in \mathbb{R}^Z_+,\, \|\lambda\|_1 \le B} G(w; \lambda) = \beta F(w) + \lambda^T r(w), \quad \text{(Main Objective)}$$
where the q-th index of r encodes the q-th constraint from R (i.e. $r_q(w) := R_q(w) - \zeta_q$) and β is a fixed constant. In other words, the objective finds the best model under the scenario where the fairness constraint is most violated (i.e., the regularization term is maximized). There are two steps needed in order to provide meaningful utility and fairness guarantees for the model found by solving the Main Objective: (1) showing that it is possible to recover a solution close to the 'optimal' solution, and (2) providing an upper bound for both the risk (F(w)) and the fairness constraint (r(w)) given this solution. To formally define what an 'optimal' solution is, in this work we aim to identify constraints that satisfy the following key assumption: Assumption 0 (Convexity of G). Assume that G(w; λ) is convex in w for any fixed λ. Remark. In particular, since G is linear in λ, given a fixed $w_0$ we can find a solution to the problem $\max_{\lambda} G(w_0; \lambda)$, denoted as $\lambda^*$, i.e. $G(w_0; \lambda^*) \ge G(w_0; \lambda)$ for all λ. When G is convex in w, we can argue that given a fixed $\lambda_0$, there exists $w^*$ that satisfies $w^* = \arg\min_w G(w; \lambda_0)$, i.e. $G(w^*; \lambda_0) \le G(w; \lambda_0)$ for all w. Therefore, $(w^*, \lambda^*)$ is a saddle point of $G(\cdot;\cdot)$, which is denoted as the optimal solution in our setting. 3.2 FORMULATING FAIR FL: BOUNDED GROUP LOSS AND VARIANTS Many prior works in fair federated learning consider instantiating R(h) in equation 3 as a constraint that bounds the difference between any two groups' losses, a common technique used to enforce group fairness notions such as equalized odds and demographic parity (e.g., Chu et al., 2021; Cui et al., 2021; Zeng et al., 2021). Unfortunately, this results in G(w; λ) becoming nonconvex in w, thus violating our Assumption 0. This nonconvexity is problematic as it increases the likelihood that a solver will find a local minimum that either does not satisfy the fairness constraint or achieves poor utility. Instead of enforcing equity between the prediction quality of any two groups, in this work we explore using a constraint based on Bounded Group Loss (BGL) (Agarwal et al., 2019), which promotes the worst group's prediction quality, and we propose new variants that can retain convexity assumptions while satisfying meaningful fairness notions. In particular, we explore three instantiations of group fairness constraints R(h) that satisfy Assumption 0 below.
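Before turning to the instantiations, the following minimal Python sketch makes the saddle-point objective concrete; the callables `F` and `r` and all names here are our own illustration under the definitions above, not the paper's implementation:

```python
import numpy as np

def saddle_objective(beta, F, r, w, lam):
    """Evaluate G(w; lambda) = beta * F(w) + lambda^T r(w).

    F(w): empirical risk of model w (scalar).
    r(w): length-Z vector of constraint violations r_q(w) = R_q(w) - zeta_q.
    lam:  non-negative dual vector with ||lam||_1 <= B.
    """
    return beta * F(w) + float(np.dot(lam, r(w)))

def worst_case_dual(r_w, B):
    """For a fixed w, G is linear in lambda over {lam >= 0, ||lam||_1 <= B},
    so the maximizing lambda puts all mass B on the most violated constraint
    (and is zero when no constraint is violated)."""
    lam = np.zeros_like(r_w)
    q = int(np.argmax(r_w))
    if r_w[q] > 0:
        lam[q] = B
    return lam
```

The second function also previews why B acts as a regularization strength: a violation of size $r_q(w)$ incurs a penalty of at most $B \cdot r_q(w)$.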
Instantiation 1 (Bounded Group Loss). We begin by considering fairness via the Bounded Group Loss (defined below), which was originally proposed by Agarwal et al. (2019). Different from applying Bounded Group Loss in a centralized setting, BGL in the context of federated learning requires that for any group a ∈ A, the average loss over all data belonging to group a is below a certain threshold. As we discuss in Section 4, this (along with general constraints of FL such as communication) necessitates the development of novel solvers and analyses for the objective. Definition 1 (Agarwal et al. (2019)). A classifier h satisfies Bounded Group Loss (BGL) at level ζ under distribution $\mathcal{D}$ if for all a ∈ A, we have $\mathbb{E}\left[l(h(x), y) \mid A = a\right] \le \zeta$. In practice, we could define the empirical Bounded Group Loss constraint at level ζ under the empirical distribution $\hat{D} = \frac{1}{K}\sum_{k=1}^{K}\hat{D}_k$ to be
$$r_a(h) := \sum_{k=1}^{K} r_{a,k}(h) = \sum_{k=1}^{K}\left[\frac{1}{m_a}\sum_{a_{k,i}=a} l(h(x_{k,i}), y_{k,i}) - \frac{\zeta}{K}\right] \le 0.$$
Benefits of BGL. BGL ensures that the prediction quality on any group reaches a certain threshold. Compared to standard loss parity constraints that aim to equalize the losses across all protected groups (e.g. overall accuracy equity (Dieterich et al., 2016)), BGL has two main advantages. First, G(w; λ) preserves convexity in w as long as the loss function l itself is convex. In contrast, loss parity constraints are generally non-convex even if l is convex. Second, when the prediction difficulties are uneven across groups, loss parity may force an unnecessary drop of accuracy on some groups just to equalize all losses (Agarwal et al., 2019). In contrast, the criterion of BGL can avoid such undesirable effects. Instantiation 2 (Conditional Bounded Group Loss). In some applications one needs a stronger fairness notion beyond ensuring that no group's loss is too large. For example, in the scenario of binary classification, a commonly used fairness requirement is equalized true positive rate or false positive rate (Hardt et al., 2016). In the context of optimization for arbitrary loss functions, a natural substitute is equalized true/false positive loss. In other words, any group's loss conditioned on positively/negatively labeled data should be equalized. Therefore, similar to BGL, we propose a novel fairness definition known as Conditional Bounded Group Loss (CBGL), defined below: Definition 2. A classifier h satisfies Conditional Bounded Group Loss (CBGL) for y ∈ Y at level $\zeta_y$ under distribution $\mathcal{D}$ if for all a ∈ A, we have $\mathbb{E}\left[l(h(x), y) \mid A = a, Y = y\right] \le \zeta_y$. In practice, we could define the empirical Conditional Bounded Group Loss constraint at level $[\zeta_y]_{y \in Y}$ under $\hat{D}$ to be
$$r_{a,y}(h) := \sum_{k=1}^{K} r_{(a,y),k}(h) = \sum_{k=1}^{K}\left[\frac{1}{m_{a,y}}\sum_{a_{k,i}=a,\, y_{k,i}=y} l(h(x_{k,i}), y_{k,i}) - \frac{\zeta_y}{K}\right] \le 0.$$
Note that satisfying CBGL for all Y is a strictly harder problem than satisfying BGL alone. In fact, we can show that a classifier that satisfies CBGL at level $[\zeta_y]_{y \in Y}$ also satisfies BGL at level $\mathbb{E}_{y \sim \rho_a}[\zeta_y]$, where $\rho_a$ is the distribution of labels for all data from group a. Relationship between CBGL and Equalized Odds. For binary classification tasks in centralized settings, a common fairness notion is Equalized Odds (EO) (Hardt et al., 2016), which requires the True/False Positive Rate to be equal for all groups. Our CBGL definition can be viewed as a relaxation of EO. Consider a binary classification example where Y = {0, 1}. Let the loss function l be the 0-1 loss. CBGL requires classifier h to satisfy $\Pr[h(x) \ne y_0 \mid Y = y_0, A = a] \le \zeta_{y_0}$ for all a ∈ A and $y_0 \in Y$.
EO requires $\Pr[h(x) \ne y_0 \mid Y = y_0, A = a]$ to be the same for all a ∈ A given a fixed $y_0$, which may not be feasible if the hypothesis class H is not rich enough. Instead of requiring equity of each group's TPR/FPR, CBGL only imposes an upper bound on each group's conditional error rate. Similar to the comparison between BGL and loss parity, CBGL offers more flexibility than EO since it does not force an artificial increase in FPR or FNR when a prediction task on one of the protected groups is much harder. In addition, for applications where logistic regression or DNNs are used (e.g., CV, NLP), it is uncommon to use the 0-1 loss in the objective. Thus, CBGL can provide a relaxed notion of fairness for more general loss functions whose level of fairness can be flexibly tuned. Instantiation 3 (MinMax Fairness). Recently, Papadaki et al. (2022) proposed FedMinMax, an agnostic fair federated learning framework in which a weight vector is applied to the empirical risk conditioned on each group. Note that using BGL as the fairness constraint, our framework reduces to FedMinMax as a special case by setting β = 0, B = 1, and ζ = 0. Definition 3. Using the same definition of $r_a(h)$ as in Instantiation 1, FedMinMax (Papadaki et al., 2022) aims to solve the following objective:
$$\min_{h} \max_{\lambda \in \mathbb{R}^{|A|}_+,\, \|\lambda\|_1 = 1} \sum_{a \in A} \lambda_a r_a(h).$$
Note that a key property of FedMinMax is that the constant ζ used to upper bound the per-group loss is set to 0. From a constrained optimization view, the only feasible solution that satisfies all fairness constraints for this problem is a model with perfect utility performance, since requiring all losses to be smaller than 0 is equivalent to requiring all of them to be exactly 0. Such a property limits the ability to provide fairness guarantees for FedMinMax. Fixing B and ζ also limits its empirical performance on the relation between fairness and utility, as we will show later in Appendix F. 4 PROVABLY FAIR FEDERATED LEARNING In this section, we first propose Provably Fair Federated Learning (PFFL), a scalable solver for the Main Objective, presented in Algorithm 1. We provide formal convergence guarantees for the method in Section 4.2. Given the solution found by PFFL, in Section 4.3 we then demonstrate the fairness guarantee for different examples of fairness notions defined in Section 3 (BGL, CBGL). 4.1 ALGORITHM To find a saddle point for the Main Objective, we follow the scheme from Freund & Schapire (1997) and summarize our solver for fair FL in Algorithm 1 (full algorithm description in Appendix A). Our algorithm is based on FedAvg (McMahan et al., 2017), a common scalable approach in federated learning. Intuitively, the method alternates between two steps: (1) given a fixed λ, optimize our regularized objective $F(w) + \lambda^T r(w)$ over w; (2) given a fixed w, optimize the fairness violation term $\lambda^T r(w)$ over λ. While Agarwal et al. (2019) also follows a similar recipe to ensure BGL, our method needs to overcome additional challenges in the federated setting. In particular, the method in Agarwal et al. (2019) optimizes w by performing an exact best response, which is in general infeasible when data is distributed across data silos. Our method overcomes this challenge by applying a gradient-descent-ascent style optimization process that utilizes the output of an FL learning algorithm as an approximation for the best response. In Algorithm 1, we provide an example in which the first step is achieved by using FedAvg to solve $\min_w F(w) + \lambda^T r(w)$ (lines 4-12); a simplified sketch of the full loop is given below.
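The following minimal Python sketch is our own illustration of this alternation, not the paper's implementation: the callables `fedavg_round` and `r_fn` are assumptions, the λ parameterization (a scaled simplex with a slack coordinate, consistent with the log(Z+1) term in the convergence bound of Section 4.2) follows our reading of the exponentiated-gradient scheme of Agarwal et al. (2019), and for simplicity we average over rounds rather than over all ET iterates:

```python
import numpy as np

def pffl(fedavg_round, r_fn, w0, Z, B, eta_theta, T, M, nu):
    """Sketch of the PFFL alternation.

    fedavg_round(w, lam): one federated training round that approximately
        minimizes F(w) + lam^T r(w) over w (e.g., FedAvg with E local steps).
    r_fn(w): length-Z vector of empirical fairness violations r(w).
    """
    theta = np.zeros(Z)                       # dual parameters for lambda
    w = w0
    w_sum, lam_sum = np.zeros_like(w0), np.zeros(Z)
    for t in range(T):
        lam = B * np.exp(theta) / (1.0 + np.exp(theta).sum())
        w = fedavg_round(w, lam)              # step 1: approximate best response in w
        theta = theta + eta_theta * r_fn(w)   # step 2: exponentiated gradient ascent on lambda
        w_sum += w
        lam_sum += lam
    w_bar, lam_bar = w_sum / T, lam_sum / T
    # Final check (lines 16-20 of Algorithm 1): output null if the fairness
    # guarantee is violated by more than (M + 2*nu) / B.
    if np.max(r_fn(w_bar)) > (M + 2 * nu) / B:
        return None
    return w_bar, lam_bar
```

For BGL, `r_fn` would aggregate each group's average loss across silos and subtract ζ/K per silo, exactly as in the empirical constraint of Instantiation 1.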
Note that solving this objective does not require the FedAvg solver; any algorithm that learns a global model in FL could be used to find a certain w given λ. After we obtain a global model from a federated training round, we use exponentiated gradient ascent to update λ, following Algorithm 2 in Agarwal et al. (2019). This completes one training round. At the end of training, we calculate and return the average iterate as the fair global model. Note that the ultimate goal of solving the Main Objective is to find a w that minimizes the empirical risk subject to r(w) ≤ 0. Therefore, at the end of training, our algorithm checks whether the resulting model $\bar{w}$ violates the fairness guarantee by at most some constant error $\frac{M+2\nu}{B}$, where M is the upper bound for the empirical risk and ν is the upper bound provided in Equation 5 (lines 16-20). We will show in Lemma 1 that this is always true when there exists a solution $w^*$ for Problem 3. However, it is also worth noting that Problem 3 is not always feasible. For example, when we set ζ = 0, requiring r(w) ≤ 0 is equivalent to requiring the empirical risk given any group a ∈ A to be non-positive, which is only feasible when the loss is 0 for every data point in the dataset. In this case, our algorithm will simply output null if the fairness guarantee is violated by an error larger than $\frac{M+2\nu}{B}$. Privacy aspect of PFFL. Compared to FedAvg, our solver communicates losses conditioned on each group in addition to model updates. This is common in prior works that solve a min-max optimization problem in federated learning (Hou et al., 2021; Zeng et al., 2021). We note that our method could be easily extended to satisfy example-level DP for FL by performing DP-SGD locally for each client. Our algorithm also gives natural client-level DP and LDP variants. In particular, we can compute via a trusted server the average loss at each client, which is a sufficient statistic for updating λ. 4.2 CONVERGENCE GUARANTEE Different from Agarwal et al. (2019), our algorithm handles arbitrary convex losses in the federated setting by replacing the exact best response with the FedAvg output; we therefore want to show how close our solution is to the optimal solution after running finitely many rounds. In this section, we provide a no-regret-style analysis for our PFFL algorithm. To formally measure the distance between the solution found by our algorithm and the optimal solution, we introduce the ν-approximate saddle point as a generalization of a saddle point (see the Remark in Section 3.1), defined below: Definition 4. $(\hat{w}, \hat{\lambda})$ is a ν-approximate saddle point of G if
$$G(\hat{w}, \hat{\lambda}) \le G(w, \hat{\lambda}) + \nu \;\text{ for all } w, \qquad G(\hat{w}, \hat{\lambda}) \ge G(\hat{w}, \lambda) - \nu \;\text{ for all } \lambda. \quad (4)$$
As an example, the optimal solution $(w^*, \lambda^*)$ is a 0-approximate saddle point of G. To show convergence, we first introduce some basic assumptions below: Assumption 1. Let $f_k$ be µ-strongly convex and L-smooth for all $k = 1, \cdots, K$. Assumption 2. Assume the stochastic gradient of $f_k$ has bounded variance: $\mathbb{E}[\|\nabla f_k(w^k_t; \xi^k_t) - \nabla f_k(w^k_t)\|^2] \le \sigma_k^2$ for all $k = 1, \cdots, K$. Assumption 3. Assume the stochastic gradient of $f_k$ is uniformly bounded: $\mathbb{E}[\|\nabla f_k(w^k_t; \xi^k_t)\|^2] \le G^2$ for all $k = 1, \cdots, K$. These are common assumptions used when proving the convergence of FedAvg (e.g., Li et al., 2019b). Now we present our main theorem of convergence: Theorem 1 (Informal Convergence Guarantee). Let Assumptions 1-3 hold. Define $\kappa = \frac{L}{\mu}$, $\gamma = \max\{8\kappa, J\}$, and step size $\eta_Q = \frac{2}{(\beta+B)\mu(\gamma+t)}$, and assume $\|r\|_\infty \le \rho$.
Letting $\bar{w} = \frac{1}{ET}\sum_{t=1}^{ET} w^t$ and $\bar{\lambda} = \frac{1}{ET}\sum_{t=1}^{ET} \lambda^t$, we have:
$$\max_{\lambda} G(\bar{w}; \lambda) - \min_{w} G(w; \bar{\lambda}) \le \frac{1}{T}\sum_{t=1}^{T}\frac{\kappa}{\gamma + t - 1}\,C + \frac{B\log(Z+1)}{\eta_\theta ET} + \eta_\theta \rho^2 B, \quad (5)$$
where C is a constant. The upper bound in Equation 5 consists of two parts: (1) the error for the FedAvg process to obtain $\bar{w}$, which is a term of order O(log T / T); and (2) the error for the exponentiated gradient ascent process to obtain $\bar{\lambda}$, which converges to a noise ball given a fixed $\eta_\theta$. Following Theorem 1, we can express the solution of Algorithm 1 as a ν-approximate saddle point of G by picking appropriate $\eta_\theta$ and T: Corollary 2. Let $\eta_\theta = \frac{\nu}{2\rho^2 B}$ and $T \ge \frac{1}{\nu(\gamma+1) - 2\kappa C}\left(\frac{4\rho^2 B^2 \log(Z+1)(\gamma+1)}{\nu E} + 2\kappa C(\gamma - 1)\right)$; then $(\bar{w}, \bar{\lambda})$ is a ν-approximate saddle point of G. We provide detailed proofs for both Theorem 1 and Corollary 2 in Appendix C. Different from the setting in prior FedAvg analyses (e.g., Li et al., 2019b), in our case the outer minimization problem changes as λ gets updated. Therefore, our analysis necessitates considering a more general scenario where the objective function can change over time. 4.3 FAIRNESS GUARANTEE In the previous section, we demonstrated that Algorithm 1 converges and finds a ν-approximate saddle point of the objective G. In this section, we further motivate why we care about finding a ν-approximate saddle point. The ultimate goal for our algorithm is to: (1) learn a model that produces fair predictions on training data, and (2) more importantly, produce fair predictions on test data, i.e., data from federated clients not seen during training. Before presenting the formal fairness and generalization guarantees, we state the following additional assumption, which is commonly used when showing generalization guarantees via the Rademacher complexity generalization bound (Mohri et al., 2019). Assumption 4. Let F and $\mathcal{F}$ be upper bounded by a constant M. We first show the fairness guarantee on the training data. Lemma 1 (Empirical Fairness Guarantee). Assume there exists $w^*$ satisfying $r(w^*) \le 0_Z$; then we have
$$\max_j r_j(\bar{w})_+ \le \frac{M + 2\nu}{B}. \quad (6)$$
Lemma 1 characterizes the upper bound for the worst fairness constraint evaluated on the training data. Given a fixed ν, one could increase B to obtain a stronger fairness guarantee, i.e. a smaller upper bound. Combining this with Corollary 2, it can be seen that when B is large, additional exponentiated gradient ascent rounds are required to achieve stronger fairness. Next we formalize the fairness guarantee for the entire true data distribution. Define the true data distribution to be $\mathcal{D} = \frac{1}{K}\sum_{k=1}^{K}\mathcal{D}_k$. We would like to formalize how well our model performs on the true distribution $\mathcal{D}$ as well as how well the fairness constraint is satisfied under $\mathcal{D}$. This result is presented below in Theorem 3. Theorem 3 (Full Fairness and Generalization Guarantee). Let Assumptions 1-4 hold and let $(\bar{w}, \bar{\lambda})$ be a ν-approximate saddle point of G. Then with probability 1 − δ, either there does not exist a solution for Problem 3 and Algorithm 1 returns null, or Algorithm 1 returns $\bar{w}$ satisfying
$$\mathcal{F}(\bar{w}) \le \mathcal{F}(w^*) + 2\nu + 4\mathfrak{R}_m(\mathcal{H}) + \frac{2M}{K}\sqrt{\sum_{k=1}^{K}\frac{1}{2m_k}\log(2/\delta)}, \qquad r_j(\bar{w}) \le \frac{M + 2\nu}{B} + \mathrm{Gen}_{r,j}, \quad (7)$$
where $w^*$ is a solution for Problem 3 and $\mathrm{Gen}_r$ is the generalization error. The first part of Equation 7 characterizes how well our model performs over the true data distribution compared to the optimal solution. As the number of clients K increases, we achieve a smaller generalization error. The second part of Equation 7 characterizes how well the fairness constraints are satisfied over the true data distribution.
Note that the upper bound can be viewed as the sum of the empirical fairness violation and a generalization error. Based on our fairness notions defined in Section 3.2, we demonstrate what the generalization error is under different fairness constraints r. Proposition 1 (r encodes BGL at level ζ). There are in total |A| fairness constraints, one for each group. Define the weighted Rademacher complexity for group a as
$$\mathfrak{R}_a(\mathcal{H}) = \mathbb{E}_{S_k \sim \mathcal{D}_k^{m_k},\, \sigma}\left[\sup_{h \in \mathcal{H}} \sum_{k=1}^{K} \frac{1}{m_a} \sum_{a_{k,i}=a} \sigma_{k,i}\, l(h(x_{k,i}), y_{k,i})\right].$$
In this scenario, we have:
$$\mathrm{Gen}_{r,a} = 2\mathfrak{R}_a(\mathcal{H}) + \frac{M}{m_a}\sqrt{\frac{K}{2}\log(2|A|/\delta)}.$$
Note that the fairness constraint for group a under the true distribution in Equation 7 is upper bounded by $O\left(\frac{\sqrt{K}}{m_a}\right)$. For any group $a_0$ with sufficient data, i.e. large $m_{a_0}$, the BGL constraint with respect to group $a_0$ under $\mathcal{D}$ has a stronger formal fairness guarantee compared to any group with less data. It is also worth noting that this generalization error grows as the number of clients K grows. Recall that the utility generalization error in Theorem 3 becomes smaller as K grows; combining the two results provides a tradeoff between the fairness notion of BGL and utility over the true data distribution in terms of K. Proposition 2 (r encodes CBGL at level $[\zeta_y]_{y \in Y}$). There are in total |A||Y| fairness constraints, one for each group and label. Define the weighted Rademacher complexity for group a conditioned on y as
$$\mathfrak{R}_{a,y}(\mathcal{H}) = \mathbb{E}_{S_k \sim \mathcal{D}_k^{m_k},\, \sigma}\left[\sup_{h \in \mathcal{H}} \sum_{k=1}^{K} \frac{1}{m_{a,y}} \sum_{a_{k,i}=a,\, y_{k,i}=y} \sigma_{k,i}\, l(h(x_{k,i}), y)\right],$$
where $m_{a,y}$ is the number of all examples from group a with label y. In this scenario, we have:
$$\mathrm{Gen}_{r,(a,y)} = 2\mathfrak{R}_{a,y}(\mathcal{H}) + \frac{M}{m_{a,y}}\sqrt{\frac{K}{2}\log(2|A||Y|/\delta)}.$$
Similar to Proposition 1, in order to achieve strong fairness guarantees for any specific constraint on the true data distribution, we need a sufficient number of samples associated with that constraint. We provide details and the proof of Theorem 3 in Appendix D. Different from the analysis performed in Agarwal et al. (2019), we analyze the generalization behavior in the federated setting, where we introduce the generalization bound as a function of the number of clients K. We then further formally demonstrate the tension, induced by K, between utility and fairness performance evaluated on the true data distribution, which has not been studied previously to the best of our knowledge. 5 EXPERIMENTS We evaluate PFFL (Algorithm 1) empirically on ProPublica COMPAS, a dataset commonly studied in fair ML (Angwin et al., 2016; Zeng et al., 2021); the US-wide ACS PUMS data, a recent group fairness benchmark dataset (Ding et al., 2021); and CelebA (Caldas et al., 2018), a common federated learning dataset. We compare our method with training a vanilla FedAvg model in terms of both fairness and utility in Section 5.1, and explore performance relative to baselines that aim to enforce other fairness notions (Demographic Parity and Equal Opportunity) in Section 5.2. Setup. For all experiments, we evaluate the accuracy and the empirical loss for each group on test data that belongs to all the silos of our fair federated learning solver. We consider COMPAS Recidivism prediction with gender as the protected attribute, the ACS Employment task (Ding et al., 2021) with race as the protected attribute, and CelebA (Caldas et al., 2018) with gender as the protected attribute. To reflect the federated setting, we use heterogeneous data partitions to create data silos; a hypothetical example of such a partition is sketched below. ACS Employment is naturally partitioned into 50 states; COMPAS and CelebA are manually partitioned in a non-IID manner into a collection of data silos.
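One common recipe for building such manual non-IID partitions is a Dirichlet-based split over labels or attributes; the sketch below is our own hypothetical illustration of this recipe (the paper's actual scheme is described in its Appendix B and may differ):

```python
import numpy as np

def dirichlet_partition(labels, num_silos, alpha=0.5, seed=0):
    """Assign example indices to heterogeneous silos by drawing, for each
    class, silo proportions from a Dirichlet(alpha) distribution; smaller
    alpha yields more skewed (more non-IID) silos."""
    rng = np.random.default_rng(seed)
    silos = [[] for _ in range(num_silos)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_silos))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for silo, chunk in zip(silos, np.split(idx, cut_points)):
            silo.extend(chunk.tolist())
    return silos
```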
A detailed description of datasets, models, and partition schemes can be found in Appendix B. 5.1 FAIRNESS-UTILITY TRADE-OFFS FOR ALGORITHM 1 We first explore how the test error rate differs as a function of maximum group loss using our Algorithm 1. To be consistent with our method and theoretical analysis, we exclude the protected attribute $a_i$ of each data point as a feature for learning the predictor. For each dataset, we evaluated PFFL with BGL; CBGL for Y = 1; and CBGL for Y = 0. For each method we evaluate, given a fixed number of training iterations E and T, we fine-tune B and ζ and evaluate both the test error rate and the test loss on each group. Given a certain test error rate, we select the hyperparameter pair (B, ζ) that yields the lowest maximum group loss. We show test accuracy vs. max group loss in Figure 2. In particular, we compare our fairness-aware FL methods with two baseline methods: vanilla FedAvg and FedAvg trained on loss weighted by groups. In FL, applying fair training locally at each data silo and aggregating the resulting models may not provide strong population-wide fairness guarantees with the same fairness definition (Zeng et al., 2021). Hence, we also explore the relationship between test accuracy and max group loss under local BGL and global BGL constraints. On all datasets, there exists a natural tradeoff between the error rate and the fairness constraint: when a model achieves stronger fairness (smaller max group loss), the model tends to have worse utility (higher error rate). However, in all scenarios, our method not only yields a model with significantly smaller maximum group loss than vanilla FedAvg, but also achieves higher test accuracy than the baseline FedAvg which is unaware of group fairness. Meanwhile, for all datasets and fairness metrics, as expected, PFFL with global BGL achieves improved fairness-utility tradeoffs relative to PFFL with local BGL. Therefore, our PFFL framework with global fairness constraints yields a model where utility can coexist with fairness constraints relying on Bounded Group Loss. 5.2 BGL/CBGL EVALUATED ON OTHER FAIRNESS NOTIONS Beyond BGL and CBGL, there are other fairness notions commonly used in the fair machine learning literature. For example, several works in group fair FL have proposed optimizing the difference between every two groups' losses (possibly conditioned on the true label) with the aim of achieving Demographic Parity (or Equal Opportunity) (Chu et al., 2021; Cui et al., 2021; Hardt et al., 2016; Zeng et al., 2021). Formally, consider the case where the protected attribute set is A = {0, 1}. Define
$$\Delta_{DP} = \left|\Pr(h(X) = 1 \mid A = 0) - \Pr(h(X) = 1 \mid A = 1)\right|$$
and
$$\Delta_{EO} = \left|\Pr(h(X) = 1 \mid A = 0, Y = 1) - \Pr(h(X) = 1 \mid A = 1, Y = 1)\right|.$$
These works aim to train a model that achieves a small $\Delta_{DP}$ or a small $\Delta_{EO}$, depending on the fairness constraint selected during optimization. As discussed in Section 3.2, CBGL can be viewed as a more general definition of Equal Opportunity and Equalized Odds. In this section, we compare our method with FedFB (Zeng et al., 2021), FedFair (Chu et al., 2021), and FCFL (Cui et al., 2021), all of which aim to optimize $\Delta_{DP}$ and $\Delta_{EO}$. We evaluate $\Delta_{DP}$ and $\Delta_{EO}$ for all approaches on COMPAS and ACS Employment, with results shown in Figure 3. Similar to Figure 2, we only show the points lying on the Pareto frontier for our method.
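For reference, these two gap metrics can be computed directly from model predictions; the following short sketch (our own evaluation code with hypothetical names, mirroring the definitions of $\Delta_{DP}$ and $\Delta_{EO}$ above, not the paper's code):

```python
import numpy as np

def fairness_gaps(y_pred, y_true, a):
    """Demographic Parity and Equal Opportunity gaps for a binary
    protected attribute a in {0, 1} and binary predictions y_pred."""
    y_pred, y_true, a = map(np.asarray, (y_pred, y_true, a))
    rate = [y_pred[a == g].mean() for g in (0, 1)]        # Pr(h(X)=1 | A=g)
    tpr = [y_pred[(a == g) & (y_true == 1)].mean()        # Pr(h(X)=1 | A=g, Y=1)
           for g in (0, 1)]
    return abs(rate[0] - rate[1]), abs(tpr[0] - tpr[1])
```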
Although PFFL with BGL and CBGL was not directly designed for these fairness criteria (i.e., it does not directly enforce the losses or predictions of two groups to be close), we see that our method is still able to outperform the FedAvg baseline, and in fact performs comparably to or better than prior methods (which were designed for this setting). 6 CONCLUSIONS, LIMITATIONS, AND FUTURE WORK In this work, we propose a fair learning objective for federated settings via Bounded Group Loss. We then propose a scalable federated solver to find an approximate saddle point for the objective. Theoretically, we provide convergence and fairness guarantees for our method. Empirically, we show that our method can provide high accuracy and fairness simultaneously across tasks from fair ML and federated learning. In addition to strong empirical performance, ours is the first work we are aware of to provide formal convergence and fairness/generalization guarantees for group fair FL with general convex loss functions. In future work we are interested in investigating additional benefits that could be provided by using our framework, including applications in non-federated settings. Finally, similar to prior works in group fair FL, our method communicates additional parameters beyond standard non-fair FL (e.g., via FedAvg); studying and mitigating the privacy risks of such communications in the context of fair federated learning would be an interesting direction of future work.
1. What is the focus of the paper regarding fair learning objectives? 2. What are the strengths and weaknesses of the proposed method in terms of its technical idea and selection of loss functions? 3. Do you have any concerns or questions about the lack of discussions on techniques and comparisons with other works in the literature? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a fair learning objective for federated settings via Bounded Group Loss. The authors propose a scalable federated solver to find an approximate saddle point for the objective. Theoretically, they provide convergence and fairness guarantees for the method. Empirically, they show that their method can provide high accuracy and fairness simultaneously across tasks from fair ML and federated learning. Strengths And Weaknesses Strength: The topic of fair federated learning is relevant. The technical idea of reducing to federated saddle point optimization is natural. Weaknesses: The selection of Bounded Group Loss is not very well motivated. To my understanding, this selection is due to its convexity rather than its practical interest. The advantage of the proposed algorithm under Bounded Group Loss is not surprising, since the baselines are not designed for this fairness measure. Lack of discussion of techniques. The reduction to federated saddle point optimization is natural, and hence I expect to see a comparison with federated saddle point optimization algorithms in the literature, for example [Hou et al., Efficient Algorithms for Federated Saddle Point Optimization, 2021] and [Shen et al., FedMM: Saddle Point Optimization for Federated Adversarial Domain Adaptation, 2021]. Clarity, Quality, Novelty And Reproducibility Clarity: The writing is okay, but the paper lacks discussion of the literature on federated saddle point optimization. Quality: The optimization approach is standard. Novelty: The provable guarantee for fair federated learning is novel.
ICLR
Title Fair Federated Learning via Bounded Group Loss Abstract Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches. 1 INTRODUCTION Group fairness aims to mitigate unfair biases against certain protected demographic groups (e.g. race, gender, age) in the use of machine learning. Many methods have been proposed to incorporate group fairness constraints in centralized settings (e.g., Agarwal et al., 2018; Feldman et al., 2015; Hardt et al., 2016; Zafar et al., 2017a). However, there is a lack of work studying these approaches in the context of federated learning (FL), a training paradigm where a model is fit to data generated by a set of disparate data silos, such as a network of remote devices or collection of organizations (Kairouz et al., 2019; Li et al., 2020; McMahan et al., 2017). Mirroring concerns around fairness in non-federated settings, many FL applications similarly require performing fair prediction across protected groups. Unfortunately, as we show in Figure 1, naively applying existing approaches to each client in a federated network in isolation may be inaccurate due to heterogeneity across clients—failing to produce a fair model across the entire population (Zeng et al., 2021). Several recent works have considered addressing this issue by exploring specific forms of group fairness in FL (e.g., Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Papadaki et al., 2022; RodríguezGálvez et al., 2021; Zeng et al., 2021). Despite promising empirical performance, these prior works lack formal guarantees surrounding the resulting fairness of the solutions (Section 2), which is problematic as it is unclear how the methods may perform in real-world FL deployments. In this work we provide a formulation and method for group fair FL that can provably satisfy global fairness constraints. Common group fairness notions that aim to achieve equal prediction quality between any two protected groups (e.g., Demographic Parity, Equal Opportunity (Hardt et al., 2016)) are difficult to provably satisfy while simultaneously finding a model with high utility. Instead, we consider a different fairness notion known as Bounded Group Loss (BGL) (Agarwal et al., 2019), which aims to promote worst group’s performance, to capture these common group fairness criteria. As we show, a benefit of this approach is that in addition to having practical advantages in terms of fairness-utility trade-offs (Section 5), it maintains smoothness and convexity properties that can equip our solver with favorable theoretical guarantees. Based on our group fairness formulation, we then provide a scalable method (PFFL) to solve the proposed objectives via federated saddle point optimization. 
Theoretically, we provide convergence guarantees for the method as well as fairness and generalization guarantees for the resulting solutions. Empirically, we demonstrate the effectiveness of our approach on common benchmarks from fair machine learning and federated learning. We summarize our main contributions below: • We propose a novel fair federated learning framework for a range of group fairness notions. Our framework models the fair FL problem as a saddle point optimization problem and leverages variations of Bounded Group Loss (Agarwal et al., 2019) to capture common forms of group fairness. We also extend BGL to consider a new fairness notion called Conditional Bounded Group Loss (CBGL), which may be of independent interest and utility in non-federated settings. • We propose a scalable federated optimization method for our group fair FL framework. We provide a regret bound analysis for our method under convex ML objectives to demonstrate formal convergence guarantees. Further, we provide fairness and generalization guarantees on the model for a variety of fairness notions. • Finally, we evaluate our method on common benchmarks used in fair machine learning and federated learning. In all settings, we find that our method can significantly improve model fairness compared to baselines without sacrificing model accuracy. Additionally, even though we do not directly optimize classical group fairness constraints (e.g., Demographic Parity, Equal Opportunity), we find that our method can still provide comparable/better fairness-utility trade-offs relative to existing approaches when evaluated on these metrics. 2 BACKGROUND AND RELATED WORK Fair Machine Learning. Algorithmic fairness in machine learning aims to identify and correct bias in the learning process. Common approaches for obtaining fairness include pre-processing methods that rectify the features or raw data to enhance fairness (Calmon et al., 2017; Feldman et al., 2015; Zemel et al., 2013); post-processing methods that revise the prediction score for a trained model (Dwork et al., 2018; Hardt et al., 2016; Menon & Williamson, 2018); and in-processing methods that directly modify the training objective/solver to produce a fair predictor (Agarwal et al., 2018; 2019; Woodworth et al., 2017; Zafar et al., 2017a;b). Most existing methods in fair ML rely on using a centralized dataset to train and evaluate the model. As shown in Figure 1, in the federated setting where data is privately distributed across different data silos, directly applying these methods locally only ensures fairness for each silo rather than the entire population. Developing effective and efficient techniques for fair FL is thus an important area of study. Fair Federated Learning. In FL, definitions of fairness may take many forms. A commonly studied notion of fairness is representation parity (Hashimoto et al., 2018), whose application in FL requires the model’s performance across all clients to have small variance (Donahue & Kleinberg, 2021; Li et al., 2019a; 2021; Mohri et al., 2019; Yue et al., 2021). In this work we instead focus on notions of group fairness, in which every data point in the federated network belongs to some (possibly) protected group, and we aim to find a model that doesn’t introduce bias towards any group. Recent works have proposed various objectives for group fairness in federated learning. Zeng et al. 
(2021) proposes a bi-level optimization objective that minimizes the difference between each group’s loss while finding an optimal global model. Similarly, several works propose using a constrained optimization problem that aims to find the best model subject to an upper bound on the group loss difference (Chu et al., 2021; Cui et al., 2021; Du et al., 2021; Rodríguez-Gálvez et al., 2021). Different from these approaches, our method focuses on a fairness constraint based on upperbounding the loss of each group with a constant rather than the loss difference between any two groups. More closely related to our work, Papadaki et al. (2022) weighs the empirical loss given each group by a trainable vector λ and finds the best model for the worst case λ. Though similar to our method for ζ = 0, this approach fails to achieve both strong utility and fairness performance under non-convex loss functions (see Section 5). Zhang et al. (2021) also propose a similar objective to learn a model with unified fairness. Among these works, Zeng et al. (2021) and Cui et al. (2021) also provide simplified convergence and fairness guarantees for their method. However, these works lack formal analyses around the convergence for arbitrary convex loss functions as well as the behavior of the fairness constraint over the true data distribution. Ours is the first work we are aware to provide such guarantees in the context of group fair federated learning. 3 FAIR FEDERATED LEARNING VIA BOUNDED GROUP LOSS In this section we first formalize the group fair federated learning problem and a fairness-aware objective solving this problem (Section 3.1). We then provide several examples of group fairness based on the notion of BGL and show how to incorporate them into our framework (Section 3.2). 3.1 SETUP: GROUP FAIR FEDERATED LEARNING Many applications of FL require treatment of data belonging to protected groups (e.g., race, gender, age). This is particularly common in applications of cross-silo FL, where we may wish to produce a model that fairly treats individuals from various demographic groups across a collection of data silos (e.g. hospitals, schools, financial institutions) (Chu et al., 2021; Cui et al., 2021; Vaid et al., 2021). FL Setup. Following standard federated learning scenarios (McMahan et al., 2017), we consider a network with K different clients. Each client k ∈ [K] has access to training data D̂k := {(xi, yi, ai)}i=1,··· ,mk sampled from the true data distribution Dk, where xi is an observation, yi ∈ Y is the label, ai ∈ A is the protected attribute. Let the hypothesis class beH and for any model h ∈ H, and define the loss function on data (x, y, a) to be l(h(x), y). Federated learning applications typically aim to solve: min h∈H F(h) = min h∈H E(x,y)∼D [l(h(x), y)] . (1) In practice, Dk is estimated by observing {(xi, yi, ai)}i=1,··· ,mk , and we solve the empirical risk: min h∈H F (h) = min h∈H 1 K K∑ k=1 1 mk mk∑ i=1 l(h(xk,i), yk,i) . (2) For simplicity, we define fk(h) = 1mk ∑mk i=1 l(h(xk,i), yk,i) as the local objective for client k. Further, we assume h is parameterized by a vector w ∈ Rp where p is the number of parameters. We will use F (w) and fk(w) to represent F (h) and fk(h) intermittently in the remainder of the paper. Fairness via Constrained Optimization. 
When a centralized dataset is available, a standard approach to learn a model that incorporates fairness criteria is to solve a constrained optimization problem where one tries to find the best model subject to some fairness notion (Agarwal et al., 2019; Barocas et al., 2019). Following this formulation, we formalize a similar learning problem in the federated setting, solving: minh∈H F (h) subject to R(h) ≤ ζ, (3) where R(h), ζ ∈ RZ encodes the constraint set on h. For instance, the z-th constraint could be written as Rz(h) ≤ ζz where ζz is a fixed constant. This formulation is commonly used to satisfy group fairness notions such as equal opportunity, equalized odds (Hardt et al., 2016), and minimax group fairness (Diana et al., 2021). To solve the constrained optimization problem 3, a common method is to use Lagrangian multipliers. In particular, let λ ∈ RZ+ be a dual variable and assume λ has ∥ · ∥1 at most B. The magnitude of B could be viewed as the regularization strength for the constraint term. Objective equation 3 can then be converted into the following saddle point optimization problem: min w max λ∈RZ+,∥λ∥1≤B G(w;λ) = βF (w) + λT r(w) , (Main Objective) where the q-th index of r encodes the q-th constraint from R (i.e. rq(w) := Rq(w)− ζq) and β is a fixed constant. In other words, the objective finds the best model under the scenario where the fairness constraint is most violated (i.e., the regularization term is maximized). There are two steps needed in order to provide meaningful utility and fairness guarantees for the model found by solving Main Objective: (1) showing that it is possible to recover a solution close to the ‘optimal’ solution, (2) providing an upper bound for both the risk (F (w)) and the fairness constraint (r(w)) given this solution. To formally define what an ‘optimal’ solution is, in this work we aim to identify constraints that satisfy the following key assumption: Assumption 0 (Convexity of G). Assume that G(w;λ) is convex in w for any fixed λ. Remark. In particular, since G is linear in λ, given a fixed w0, we can find a solution to the problem maxλ G(w0;λ), denoted as λ∗, i.e. G(w0;λ∗) ≥ G(w0;λ) for all λ. When G is convex in w, we can argue that given a fixed λ0, there exists w∗ that satisfies w∗ = argminw G(w;λ0), i.e. G(w∗;λ0) ≤ G(w;λ0) for all w. Therefore, (w∗,λ∗) is a saddle point of G(·; ·), which is denoted as the optimal solution in our setting. 3.2 FORMULATING FAIR FL: BOUNDED GROUP LOSS AND VARIANTS Many prior works in fair federated learning consider instantiating R(h) in equation 3 as a constraint that bounds the difference between any two groups’ losses, a common technique used to enforce group fairness notions such as equalized odds and demographic parity (e.g., Chu et al., 2021; Cui et al., 2021; Zeng et al., 2021). Unfortunately, this results in G(w;λ) becoming nonconvex in w, thus violating our Assumption 0. This nonconvexity is problematic as it increases the likelihood that a solver will find a local minima that either does not satisfy the fairness constraint or achieves poor utility. Instead of enforcing equity between the prediction quality of any two groups, in this work we explore using a constraint based on Bounded Group Loss (BGL) (Agarwal et al., 2019) which promotes worst group’s prediction quality and propose new variants that can retain convexity assumptions while satisfying meaningful fairness notions. In particular, we explore three instantiations of group fairness constraints R(h) that satisfy Assumption 0 below. 
Instantiation 1 (Bounded Group Loss). We begin by considering fairness via the Bounded Group Loss (defined below), which was originally proposed by Agarwal et al. (2019). Different from applying Bounded Group Loss in a centralized setting, BGL in the context of federated learning requires that for any group a ∈ A, the average loss for all data belonging to group a is below a certain threshold. As we discuss in Section 4 this (along with general constraints of FL such as communication) necessitates the development of novel solvers and analyses for the objective. Definition 1 (Agarwal et al. (2019)). A classifier h satisfies Bounded Group Loss (BGL) at level ζ under distribution D if for all a ∈ A, we have E [l(h(x), y)|A = a] ≤ ζ . In practice, we could define empirical bounded group loss constraint at level ζ under the empirical distribution D̂ = 1K ∑K k=1 D̂k to be ra(h) := K∑ k=1 ra,k(h) = K∑ k=1 1/ma ∑ ak,i=a l(h(xk,i), yk,i)− ζ/K ≤ 0 . Benefits of BGL. BGL ensures that the prediction quality on any group reaches a certain threshold. Compared to standard loss parity constraints that aim to equalize the losses across all protected groups (e.g. overall accuracy equity (Dieterich et al., 2016)), BGL has two main advantages. First, G(w;λ) preserves convexity in w, as long as the loss function l itself is convex. In contrast, loss parity constraints are generally non-convex even if l is convex. Second, when the prediction difficulties are uneven across groups, loss parity may force an unnecessary drop of accuracy on some groups just to equalize all losses (Agarwal et al., 2019). In contrast, the criterion of BGL can avoid such undesirable effects. Instantiation 2 (Conditional Bounded Group Loss). In some applications one needs a stronger fairness notion beyond ensuring that no group’s loss is too large. For example, in the scenario of binary classification, a commonly used fairness requirement is equalized true positive rate or false positive rate (Hardt et al., 2016). In the context of optimization for arbitrary loss functions, a natural substitute is equalized true / false positive loss. In other words, any group’s loss conditioned on positively / negatively labeled data should be equalized. Therefore, similar to BGL, we propose a novel fairness definition known as Conditional Bounded Group Loss (CBGL) defined below: Definition 2. A classifier h satisfies Conditional Bounded Group Loss (CBGL) for y ∈ Y at level ζy under distribution D if for all a ∈ A, we have E [l(h(x), y)|A = a, Y = y] ≤ ζy . In practice, we could define empirical Conditional Bounded Group Loss constraint at level [ζy]y∈Y under D̂ to be ra,y(h) := K∑ k=1 r(a,y),k(h) = K∑ k=1 1/ma,y ∑ ak,i=a,yk,i=y l(h(xk,i), yk,i)− ζy/K ≤ 0 . Note that satisfying CBGL for all Y is a strictly harder problem than satisfying BGL alone. In fact, we can show that a classifier that satisfies CBGL at level [ζy]y∈Y also satisfies BGL at level Ey∼ρa [ζy] where ρa be the probability density of labels for all data from group a. Relationship between CBGL and Equalized Odds. For binary classification tasks in centralized settings, a common fairness notion is Equalized Odds (EO) (Hardt et al., 2016), which requires the True/False Positive Rate to be equal for all groups. Our CBGL definition can be viewed as a relaxation of EO. Consider a binary classification example where Y = {0, 1}. Let the loss function l be the 0-1 loss. CBGL requires classifier h to satisfy Pr[h(x) = y|Y = y0, A = a] ≤ ζy0 for all a ∈ A and y0 ∈ Y . 
EO requires Pr[h(x) = y|Y = y0, A = a] to be the same for all a ∈ A given a fixed y0, which may not be feasible if the hypothesis classH is not rich enough. Instead of requiring equity of each group’s TPR/FPR, CBGL only imposes an upper bound for each group’s TPR/FPR. Similar to the comparison between BGL and loss parity, CBGL offers more flexibility than EO since it does not force an artificial increase on FPR or FNR when a prediction task on one of the protected groups is much harder. In addition, for applications where logistic regression or DNNs are used (e.g., CV, NLP), it is uncommon to use the 0-1 loss in the objective. Thus, CBGL can provide a relaxed notion of fairness for more general loss functions whose level of fairness can be flexibly tuned. Instantiation 3 (MinMax Fairness). Recently Papadaki et al. (2022) proposed a framework called FedMinMax by solving an agnostic fair federated learning framework where the weight is applied to empirical risk conditioned on each group. Note that using BGL as the fairness constraint, our framework could reduce to FedMinMax as a special case by setting β = 0, B = 1 and ζ = 0. Definition 3. Use the same definition of ra(h) as we had in Instantiation 1. FedMinMax (Papadaki et al., 2022) aims to solve for the following objective: minh maxλ∈R|A|+ ,∥λ∥1=1 ∑ a∈A λara(h). Note that a key property of FedMinMax is the constant ζ used to upper bound the per group loss is set to 0. From a constrained optimization view, the only feasible solution that satisfies all fairness constraints for this problem is a model with perfect utility performance since requiring all losses to be smaller than 0 is equivalent to having all of them to be exactly 0. Such a property limits the ability to provide fairness guarantees for FedMinMax. Fixing B and ζ also limits its empirical performance on the relation between fairness and utility, as we will show later in Appendix F. 4 PROVABLY FAIR FEDERATED LEARNING In this section, we first propose Provably Fair Federated Learning (PFFL), a scalable solver for Main Objective, presented in Algorithm 1. We provide formal convergence guarantees for the method in Section 4.2. Given the solution found by PFFL, in Section 4.3 we then demonstrate the fairness guarantee for different examples of fairness notions defined in Section 3 (BGL, CBGL). 4.1 ALGORITHM To find a saddle point for Main Objective, we follow the scheme from Freund & Schapire (1997) and summarize our solver for fair FL in Algorithm 1 (full algorithm description in Appendix A). Our algorithm is based off of FedAvg (McMahan et al., 2017), a common scalable approach in federated learning. Intuitively, the method alternates between two steps: (1) given a fixed λ, optimize our regularized objective F (w) + λT r(w) over w; (2) given a fixed w, optimize the fairness violation term λT r(w) over λ. While Agarwal et al. (2019) also follows a similar recipe to ensure BGL, our method needs to overcome additional challenges in the federated settings. In particular, the method in Agarwal et al. (2019) optimizes w by performing exact best response, which is in general in feasible when data for distributed data sets. Our method overcomes this challenge by applying a gradient-descent-ascent style optimization process that utilizes the output of a FL learning algorithm as an approximation for the best response. In Algorithm 1, we provide an example in which the first step is achieved by using FedAvg to solve minw F (w) + λT r(w) (L 4-12). 
Note that solving this objective does not require the FedAvg solver; any algorithm that learns a global model in FL could be used to find a certain w given λ. After we obtain a global model from a federated training round, we use exponentiated gradient descent to update λ, following Alg 2 in Agarwal et al. (2019). This completes one training round. At the end of training, we calculate and return the average iterate as the fair global model. Note that the ultimate goal to solve for Main Objective is to find a w such that it minimizes the empirical risk subject to r(w) ≤ 0. Therefore, at the end of training, our algorithm checks whether the resulting model w̄ violates the fairness guarantee by at most some constant error M+2νB where M is the upper bound for the empirical risk and ν is the upper bound provided in Equation 5 (L 16-20). We will show in the Lemma 1 that this is always true when there exists a solution w∗ for Problem 3. However, it is also worth noting that the Problem 3 is not always feasible. For example when we set ζ = 0, requiring r(w) ≤ 0 is equivalent to requiring the empirical risk given any group a ∈ A is non positive, which is only feasible when the loss is 0 for every data in the dataset. In this case, our algorithm will simply output null if the fairness guarantee is violated by an error larger than M+2νB . Privacy aspect of PFFL Compared to FedAvg, our solver communicates losses conditioned on each group in addition to model updates. This is common in prior works that solve a min-max optimization problem in federated learning (Hou et al., 2021; Zeng et al., 2021). We note that our method could be easily extended to satisfy example-level DP for FL by performing DP-SGD locally for each client. Our algorithm also gives natural client-level DP and LDP variants. In particular, we can compute via a trusted server the average loss at each client, which is sufficient statistics to update λ. 4.2 CONVERGENCE GUARANTEE Different from Agarwal et al. (2019), while our algorithm handles arbitrary convex losses in federated setting by replacing the best response with the FedAvg output, we want to show that after running finitely many rounds, how close our solution is to the actual best response. In this section, we provide a no regret bound style analysis for our PFFL algorithm. To formally measure the the distance between the solution found by our algorithm and the optimal solution, we introduce ν-approximate saddle point as a generalization of saddle point (See Remark in Section 3.1) defined below: Definition 4. (ŵ, λ̂) is a ν-approximate saddle point of G if G(ŵ, λ̂) ≤ G(w, λ̂) + ν for all w G(ŵ, λ̂) ≥ G(ŵ,λ)− ν for all λ (4) As an example, the optimal solution (w∗,λ∗) is a 0-approximate saddle point of G. To show convergence, we first introduce some basic assumptions below: Assumption 1. Let fk be µ-strongly convex and L-smooth for all k = 1, · · · ,K. Assumption 2. Assume the stochastic gradient of fk has bounded variance: E[∥∇fi(wkt ; ξkt ) − ∇fk(wkt )∥2] ≤ σ2k for all k = 1, · · · ,K. Assumption 3. Assume the stochastic gradient of fk is uniformly bounded: E[∥∇fk(wkt ; ξkt )∥2] ≤ G2 for all k = 1, · · · ,K. These are common assumptions used when proving the convergence for FedAvg (e.g., Li et al., 2019b). Now we present our main theorem of convergence: Theorem 1 (Informal Convergence Guarantee). Let Assumption 1-3 hold. Define κ = Lµ , γ = max{8κ, J} and step size ηQ = 2(β+B)µ(γ+t) , and assume ∥r∥∞ ≤ ρ. 
Letting w̄ = 1 ET ∑ET t=1 w t, λ̄ = 1ET ∑ET t=1 λ t, we have: max λ G(w̄;λ)−min w G(w; λ̄) ≤ 1 T T∑ t=1 κ γ + t− 1 C + B log(Z + 1) ηθET + ηθρ 2B , (5) where C is a constant. The upper bound in Equation 5 consists of two parts: (1) the error for the FedAvg process to obtain w̄ which is a term of order O(log T/T ); (2) the error for the Exponentiated Gradient Ascent process to obtain λ̄ which converges to a noise ball given a fixed ηθ. Following Theorem 1, we could express the solution of Algorithm 1 as a ν-approximate saddle point of G by picking appropriate ηθ and T : Corollary 2. Let ηθ = ν2ρ2B and T ≥ 1 ν(γ+1)−2κC ( 4ρ2B2 log(Z+1)(γ+1) νE + 2κC(γ − 1) ) , then (w̄, λ̄) is a ν-approximate saddle point of G. We provide detailed proofs for both Theorem 1 and Corollary 2 in Appendix C. Different from the setting in prior FedAvg analyses (e.g., Li et al., 2019b), in our case the outer minimization problem changes as λ gets updated. Therefore, our analysis necessitates considering a more general scenario where the objective function could change over time. 4.3 FAIRNESS GUARANTEE In the previous section, we demonstrated that our Algorithm 1 could converge and find a νapproximate saddle point of the objective G. In this section, we further motivate why we care about finding a ν-approximate saddle point. The ultimate goal for our algorithm is to: (1) learn a model that produces fair predictions on training data, and (2) more importantly, produces fair predictions on test data, i.e., data from federated clients not seen during training. Before presenting the formal fairness and generalization guarantees, we state the following additional assumption, which is a common assumption for showing the generalization guarantee using the Rademacher complexity generalizations bound (Mohri et al., 2019). Assumption 4. Let F and F be upper bounded by constant M . We first show the fairness guarantee on the training data. Lemma 1 (Empirical Fairness Guarantee). Assume there exists w∗ satisfies r(w∗) ≤ 0Z , we have max j rj(w̄)+ ≤ M + 2ν B . (6) Lemma 1 characterizes the upper bound for the worst fairness constraint evaluated on the training data. Given a fixed ν, one could increase B to obtain a stronger fairness guarantee, i.e. a smaller upper bound. Combining this with Corollary 2, it can be seen that when B is large, additional exponentiated gradient ascent rounds are required to achieve stronger fairness. Next we formalize the fairness guarantee for the entire true data distribution. Define the true data distribution to be D = 1K ∑K k=1Dk. We would like to formalize how well our model is evaluated on the true distribution D as well as how well the fairness constraint is satisfied under D. This result is presented below in Theorem 3. Theorem 3 (Full Fairness and Generalization Guarantee). Let Assumption 1-4 holds and (w̄, λ̄) a ν-approximate saddle point of G. Then with probability 1− δ, either there doesn’t exist solution for Problem 3 and Algorithm 1 returns null or Algorithm 1 returns w̄ satisfies F(w̄) ≤ F(w∗) + 2ν + 4Rm(H) + 2MK √∑K k=1 1 2mk log(2/δ), rj(w̄) ≤ M+2νB +Genr,j (7) where w∗ is a solution for Problem 3 and Genr is the generalization error. The first part for Equation 7 characterizes how well our model performs over the true data distribution compared to the optimal solution. As number of clients K increases, we achieve smaller generalization error. The second part for Equation 7 characterizes how well the fairness constraints are satisfied over the true data distribution. 
Returning to Theorem 3, note that the upper bound in Equation 7 can be viewed as the sum of the empirical fairness violation and a generalization error. Based on our fairness notions defined in Section 3.2, we now instantiate this generalization error under different fairness constraints r.

Proposition 1 (r encodes BGL at level ζ). There are in total |A| fairness constraints, one for each group. Define the weighted Rademacher complexity for group a as R_a(H) = E_{S_k ∼ D_k^{m_k}, σ} [ sup_{h∈H} Σ_{k=1}^{K} (1/m_a) Σ_{a_{k,i}=a} σ_{k,i} ℓ(h(x_{k,i}), y_{k,i}) ]. In this scenario, we have: Gen_{r,a} = 2R_a(H) + (M/m_a) √((K/2) log(2|A|/δ)).

Note that the fairness constraint for group a under the true distribution in Equation 7 is upper bounded by O(√K / m_a). For any group a_0 with sufficient data, i.e., with m_{a_0} large, the BGL constraint with respect to group a_0 under D has a stronger formal fairness guarantee than for any group with less data. It is also worth noting that this generalization error grows as the number of clients K grows. Recall that the utility generalization error in Equation 7 becomes smaller as K grows; combining the two results yields a tradeoff between the BGL fairness notion and utility over the true data distribution in terms of K.

Proposition 2 (r encodes CBGL at level [ζ_y]_{y∈Y}). There are in total |A||Y| fairness constraints, one for each group and label. Define the weighted Rademacher complexity for group a conditioned on y as R_{a,y}(H) = E_{S_k ∼ D_k^{m_k}, σ} [ sup_{h∈H} Σ_{k=1}^{K} (1/m_{a,y}) Σ_{a_{k,i}=a, y_{k,i}=y} σ_{k,i} ℓ(h(x_{k,i}), y) ], where m_{a,y} is the number of examples from group a with label y. In this scenario, we have: Gen_{r,(a,y)} = 2R_{a,y}(H) + (M/m_{a,y}) √((K/2) log(2|A||Y|/δ)).

Similar to Proposition 1, in order to achieve a strong fairness guarantee for any specific constraint on the true data distribution, we need a sufficient number of samples associated with that constraint. We provide details and the proof of Theorem 3 in Appendix D. Different from the analysis performed in Agarwal et al. (2019), we analyze the generalization behaviour in the federated setting, where we express the generalization bound as a function of the number of clients K. We then formally demonstrate the tension, induced by K, between utility and fairness performance evaluated on the true data distribution, which to the best of our knowledge has not been studied previously.

5 EXPERIMENTS

We evaluate PFFL (Algorithm 1) empirically on ProPublica COMPAS, a dataset commonly studied in fair ML (Angwin et al., 2016; Zeng et al., 2021); the US-wide ACS PUMS data, a recent group fairness benchmark dataset (Ding et al., 2021); and CelebA (Caldas et al., 2018), a common federated learning dataset. We compare our method with training a vanilla FedAvg model in terms of both fairness and utility in Section 5.1, and explore performance relative to baselines that aim to enforce other fairness notions (Demographic Parity and Equal Opportunity) in Section 5.2.

Setup. For all experiments, we evaluate the accuracy and the empirical loss of each group on test data from all the silos of our fair federated learning solver. We consider COMPAS recidivism prediction with gender as the protected attribute, the ACS Employment task (Ding et al., 2021) with race as the protected attribute, and CelebA (Caldas et al., 2018) with gender as the protected attribute. To reflect the federated setting, we use heterogeneous data partitions to create data silos. ACS Employment is naturally partitioned into 50 states; COMPAS and CelebA are manually partitioned in a non-IID manner into a collection of data silos.
A detailed description of datasets, models, and partition schemes can be found in Appendix B.

5.1 FAIRNESS-UTILITY TRADE-OFFS FOR ALGORITHM 1

We first explore how the test error rate varies as a function of the maximum group loss under our Algorithm 1. To be consistent with our method and theoretical analysis, we exclude the protected attribute a_i of each example as a feature for learning the predictor. For each dataset, we evaluate PFFL with BGL, CBGL for Y = 1, and CBGL for Y = 0. For each method, given a fixed number of training iterations E and T, we tune B and ζ and evaluate both the test error rate and the test loss of each group. Given a certain test error rate, we select the hyperparameter pair (B, ζ) that yields the lowest maximum group loss. We show test accuracy versus maximum group loss in Figure 2. In particular, we compare our fairness-aware FL methods with two baseline methods: vanilla FedAvg and FedAvg trained on a loss weighted by groups. In FL, applying fair training locally at each data silo and aggregating the resulting models may not provide strong population-wide fairness guarantees under the same fairness definition (Zeng et al., 2021). Hence, we also explore the relationship between test accuracy and maximum group loss under local BGL and global BGL constraints. On all datasets, there is a natural tradeoff between the error rate and the fairness constraint: when a model achieves stronger fairness (smaller maximum group loss), it tends to have worse utility (higher error rate). However, in all scenarios, our method not only yields a model with significantly smaller maximum group loss than vanilla FedAvg, but also achieves higher test accuracy than the baseline FedAvg, which is unaware of group fairness. Meanwhile, for all datasets and fairness metrics, as expected, PFFL with global BGL achieves improved fairness-utility tradeoffs relative to PFFL with local BGL. Therefore, our PFFL framework with a global fairness constraint yields a model in which utility coexists with fairness constraints based on Bounded Group Loss.

5.2 BGL/CBGL EVALUATED ON OTHER FAIRNESS NOTIONS

Beyond BGL and CBGL, there are other fairness notions commonly used in the fair machine learning literature. For example, several works in group-fair FL have proposed optimizing the difference between every two groups' losses (possibly conditioned on the true label) with the aim of achieving Demographic Parity (or Equal Opportunity) (Chu et al., 2021; Cui et al., 2021; Hardt et al., 2016; Zeng et al., 2021). Formally, consider the case where the protected attribute set is A = {0, 1}. Define ∆DP = |Pr(h(X) = 1 | A = 0) − Pr(h(X) = 1 | A = 1)| and ∆EO = |Pr(h(X) = 1 | A = 0, Y = 1) − Pr(h(X) = 1 | A = 1, Y = 1)|. These works aim to train a model that achieves a small ∆DP or a small ∆EO, depending on the fairness constraint selected during optimization. As discussed in Section 3.2, CBGL can be viewed as a more general definition of Equal Opportunity and Equalized Odds. In this section, we compare our method with FedFB (Zeng et al., 2021), FedFair (Chu et al., 2021), and FCFL (Cui et al., 2021), all of which aim to optimize ∆DP and ∆EO. We evaluate ∆DP and ∆EO for all approaches on COMPAS and ACS Employment, with results shown in Figure 3. Similar to Figure 2, we only show the points lying on the Pareto frontier for our method.
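For reference, both parity metrics can be computed directly from model predictions; a minimal sketch, assuming binary predictions, a binary protected attribute, and binary labels (function and argument names are ours, not the authors'):

import numpy as np

def parity_gaps(preds, attr, labels):
    # Demographic-parity gap: |P(h=1 | A=0) - P(h=1 | A=1)|
    preds, attr, labels = map(np.asarray, (preds, attr, labels))
    dp = abs(preds[attr == 0].mean() - preds[attr == 1].mean())
    # Equal-opportunity gap: the same difference restricted to Y = 1
    eo = abs(preds[(attr == 0) & (labels == 1)].mean()
             - preds[(attr == 1) & (labels == 1)].mean())
    return dp, eo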
Although PFFL with BGL and CBGL was not directly designed for these fairness criteria (i.e., it does not directly enforce the losses or prediction rates of two groups to be close), we see that our method still outperforms the FedAvg baseline, and in fact performs comparably to or better than prior methods that were designed for this setting.

6 CONCLUSIONS, LIMITATIONS, AND FUTURE WORK

In this work, we propose a fair learning objective for federated settings via Bounded Group Loss. We then propose a scalable federated solver to find an approximate saddle point of the objective. Theoretically, we provide convergence and fairness guarantees for our method. Empirically, we show that our method can provide high accuracy and fairness simultaneously across tasks from fair ML and federated learning. In addition to strong empirical performance, ours is the first work we are aware of to provide formal convergence and fairness/generalization guarantees for group-fair FL with general convex loss functions. In future work, we are interested in investigating additional benefits of our framework, including applications in non-federated settings. Finally, similar to prior works in group-fair FL, our method communicates additional parameters beyond standard non-fair FL (e.g., via FedAvg); studying and mitigating the privacy risks of such communications in the context of fair federated learning would be an interesting direction for future work.
1. What is the focus and contribution of the paper regarding fair federated learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its fairness criterion and theoretical guarantees?
3. Do you have any concerns about the choice of fairness metric and its potential impact on unfair decision-making?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as modifying Figure 1 or including Algorithm 1 in the main paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper studies fair federated learning (FL), where each group of clients is guaranteed the same upper bound on its loss value. Theoretical guarantees for both model convergence (for convex loss functions) and fairness are provided, together with experimental results that support the claims.

Strengths And Weaknesses

Strength: Given bounded group loss (BGL) as the fairness criterion, this paper develops an FL solution that optimizes empirical risks for different groups under absolute loss constraints. Theoretical analysis of the convergence and fairness guarantees is provided.

Weakness: The biggest issue of this work is the fairness criterion, which, in my opinion, violates the core principle of fairness. The BGL definition in Def. 1 uses an absolute average loss threshold for every (restricted) group a ∈ A. The authors' argument for moving away from equity between different groups and towards such an absolute, one-size-fits-all quality threshold is highly flawed. First, the authors argue that this makes the problem convex, but such a consideration is based on whether the authors can technically solve the problem, as opposed to what the right metric for fairness is. Second, the argument that this metric "boosts the worst group's prediction quality" is solely focused on one group's performance, as opposed to fairness among all groups. Fundamentally, fairness must consider all stakeholders. Last, and probably most importantly, using a single threshold could essentially lead to one group (the so-called worst group) barely meeting this threshold, while other groups achieve significantly smaller loss values. The authors consider this case to be fair, but in reality, such a discrepancy in prediction outcomes can still lead to unfair decision making. Additionally, choosing such a threshold ζ becomes non-trivial, as it significantly affects BGL. Having the algorithm's fairness depend critically on hyperparameter tuning is dangerous and should be avoided when possible. As the whole work is built on such an absolute threshold, I believe this is a critical issue that should be addressed.

As the authors stated in the Remark, G is linear in λ and convex in w. I'm not sure why the optimal solution has to be defined w.r.t. a saddle point instead of the true optimum.

The novelty of this work is limited given the state of the art. The algorithmic approach described in Sec. 4.1 is a standard alternating optimization procedure. The technique used in the convergence analysis in Sec. 4.2 is very similar to prior approaches, e.g., in Li et al. 2019b.

Clarity, Quality, Novelty And Reproducibility

I'm not sure how Fig. 1 is supposed to be interpreted; I do not think it gives a clear motivation for the problem. I suggest the authors either modify this figure (and caption) to make it more illustrative, or remove it altogether. Instead, I suggest adding Alg. 1 to the main paper, as opposed to leaving it entirely in the appendix.
ICLR
Title On Representation Learning in the First Layer of Deep CNNs and the Dynamics of Gradient Descent Abstract It has previously been reported that the representation that is learned in the first layer of deep CNNs is very different from the initial representation and highly consistent across initialization and architecture. In this work, we quantify this consistency by considering the set of filters as a filter bank and measuring its energy distribution. We find that the energy distribution is remarkably consistent and try to determine the source of this consistency. We show that this consistency cannot be explained by the fact that CNNs learn a representation that is useful for recognition, since CNNs trained with fixed, random filters in the first layer yield comparable recognition performance to full learning. We then show that similar behavior occurs in simple, linear CNNs and obtain an analytical characterization of the energy profile of linear CNNs trained with gradient descent. Our analysis shows that the energy profile is determined by two factors: (1) the correlation of the average patch with the class label, and (2) an implicit bias arising from the dynamics of gradient descent. Finally, we show that in commonly used image recognition datasets the correlation between the average patch and the class label is very low, and it is the implicit bias that best explains the consistency of representations observed in real-world CNNs.

1 INTRODUCTION

The remarkable success of Convolutional Neural Networks (CNNs) on a wide variety of image recognition tasks is often attributed to the fact that they learn a good representation of images. Support for this view comes from the fact that very different CNNs tend to learn similar representations and that features of CNNs that are trained for one task are often useful in very different tasks (Yosinski et al., 2014; Doimo et al., 2020; Gidaris et al., 2018). A natural starting point for investigating representation learning in deep CNNs is the very first layer. Studying this representation is somewhat easier than studying more general representation learning, since each neuron can be characterized by a single linear filter, which can easily be visualized as an image. Figure 1 shows examples of visualizations of the learned filters: unlike the initial filters, which are random and devoid of structure, the trained filters resemble Gabor filters (Krizhevsky et al., 2012) and are visually similar across different trained networks. In addition to the qualitative similarity of filters that can be seen in figure 1, there have also been reports that the filters are quantitatively similar. For example, Li et al. (2015) showed that one can often find a good match for filters learned by one CNN in the set of filters learned by another CNN. In this work we introduce a new measure for quantitatively measuring the consistency of representations in the very first layer of a CNN. Using this measure, we show a remarkably high degree of consistency (correlation coefficient close to 1) between the representations that are learned by different CNNs, regardless of initialization, architecture, and training set.
The fact that these filters are so different from the initialization is interesting in the context of the theory of deep networks, which indicates that under certain conditions they can be trained in a "lazy" regime (Chizat et al., 2019): the representations in all intermediate layers hardly differ from their initialization, and only the last output layer has weights that differ from initialization. Figure 1 clearly shows that "lazy training" does not occur in the first layer of deep CNNs and that consistent representation learning occurs instead. A natural explanation for the learning of consistent filters in the first layer is that these filters are optimal in some sense for solving the recognition task. Indeed, Gabor filters and similar oriented filters were often used as a representation of images in the days of "handcrafted" features for computer vision (Dalal & Triggs, 2005). Under this explanation, the networks have simply learned that in order to minimize the training loss, the first layer of a deep CNN must have filters that resemble Gabors. In this paper we present empirical and theoretical results that are inconsistent with this explanation. We show that CNNs with commonly used architectures can be trained with fixed, random filters in the first layer and still yield performance comparable to full learning. We then show that consistent representation learning in the first layer also occurs in simple, linear CNNs and prove that for these CNNs the dynamics of gradient descent learning, together with the statistics of natural image patches, introduce an implicit bias towards certain filter distributions. We then show that in real-world CNNs trained on commonly used datasets, a highly consistent representation is learned in the first layer even when the true labels are replaced with random labels, and therefore that it is the implicit bias that best explains the consistency of representations observed in real-world CNNs.

2 QUANTIFYING CONSISTENCY USING ENERGY PROFILES

The visual similarity of the filters that are learned in the first layer of CNNs (figure 1) is easy to see, but we wish to quantify the similarity of representations and go beyond the qualitative similarity. Recent works (Kornblith et al., 2019; Nguyen et al., 2021) suggest comparing two representations based on the distance between the distributions over patches induced by the two representations. But estimating this distance in high dimensions is nontrivial, and two very different networks might give similar distributions over patches when the input distribution is highly skewed (Ding et al., 2021). In this paper we propose a new method which avoids these shortcomings and is especially relevant for the first layer of a CNN, in which the representation is a linear function of the input patch. Given two patches x_1, x_2 and a linear transformation A whose rows are the filters, the squared distance between the transformed patches is ∥Ax_1 − Ax_2∥², or alternatively (x_1 − x_2)ᵀAᵀA(x_1 − x_2). Thus a natural way to understand how distances are transformed when going from x_1 to Ax_1 is to look at the eigendecomposition of AᵀA: the i'th eigenvalue of AᵀA measures how much distances in the direction of the i'th eigenvector are increased or decreased by the transformation.
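This identity, and the role of the eigendecomposition, can be checked in a few lines of NumPy (the sizes below, 64 filters over 3x3x3 patches, are an arbitrary example of ours):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 27))            # 64 filters, one flattened 3x3x3 patch per row
x1, x2 = rng.normal(size=27), rng.normal(size=27)

lhs = np.sum((A @ x1 - A @ x2) ** 2)     # ||A x1 - A x2||^2
rhs = (x1 - x2) @ (A.T @ A) @ (x1 - x2)  # quadratic form in A^T A
assert np.isclose(lhs, rhs)

# The eigenvalues of A^T A describe how the transform rescales distances
# along each eigenvector direction.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)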
The eigenvectors of AᵀA are simply the principal components of the filters, and if we assume translation invariance of the filters, they will have the same principal components as those of natural image patches: namely, sines and cosines of different spatial frequencies (Aapo Hyvärinen & Hoyer, 2009). Thus the transformation of similarities is mostly driven by the eigenvalues of AᵀA, and we focus on these to define the consistency of learned filters. Denote by p_1, ..., p_k the PCA components computed from the training images' patches and by A the weights of the first layer of some model trained on these images (where each row A_jᵀ of A is a filter). We define the energy w.r.t. each component p_i:

e_i = ∥Ap_i∥ = √(Σ_j (A_jᵀ p_i)²) (1)

The energy profile of a set of filters is simply the vector e = (e_1, ..., e_k), and we measure the consistency of two different sets of filters by the correlation coefficient between their energy profiles. Note that this consistency measure is invariant to a rescaling of the filters, to a permutation of the filters, and to any orthogonal transformation of the filters. This way of comparing linear representations is equivalent to considering the set of filters as a filter bank and measuring the sensitivity of the filter bank to different spatial frequencies.

Figures 2a to 2c show that different models trained with gradient descent are remarkably consistent under our proposed measure. Regardless of architecture or the particular dataset they were trained on, different CNNs have very similar energy profiles: they are less sensitive to very high and very low spatial frequencies, and their peak sensitivity is at intermediate spatial frequencies (qualitatively similar to the sensitivity pattern of the human visual system, which is also most sensitive to intermediate spatial frequencies, as shown in figure 2d). Table 1 quantifies this similarity. The correlation coefficient between energy profiles across different random initializations and architectures is remarkably high (over 0.98 in many cases), and the correlation between the learned profiles and the random initialization is close to zero. An extensive set of experiments on various models and datasets can be found in Appendix B.2. Thus our new measure allows us to quantitatively show that deep CNNs trained with gradient descent using standard parameters do not exhibit "lazy" training in the first layer and that highly consistent representation learning takes place. We now ask: what determines this consistency?

3 IS CONSISTENCY DUE TO CNNS LEARNING SEMANTICALLY MEANINGFUL FEATURES?

A natural explanation for the remarkable consistency of the learned representation in the first layer is that CNNs learn a representation that is good for object recognition. In particular, high spatial frequencies are often noisy, while very low spatial frequencies are often influenced by illumination conditions. Thus learning a representation that is mostly sensitive to intermediate spatial frequencies makes sense if the goal is to recognize objects. Similarly, human vision is also mostly sensitive to intermediate spatial frequencies (Owsley, 2003) (see figure 2d), presumably for the same reasons. In order to test this hypothesis, we asked whether training modern CNNs while freezing the first layer results in a decrease in performance. If Gabors of intermediate frequencies were indeed optimal for object recognition, we would expect performance to suffer if we froze the first layer to random filters with equal energy in all frequencies.
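This experiment amounts to a one-line change in a standard training setup. A minimal PyTorch sketch follows; the architecture and initialization scale are illustrative choices of ours, and Appendix B.5 lists the actual hyperparameters used:

import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)

# Re-initialize the first convolution randomly and freeze it, so the
# first-layer representation stays at its random initialization.
torch.nn.init.normal_(model.conv1.weight, std=0.05)
model.conv1.weight.requires_grad_(False)

# Only parameters that still require gradients are optimized.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.1)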
Figure 3 shows that there is almost no change in the performance of modern CNNs when the weights in the first layer are frozen. This is true when measuring training accuracy, training loss, validation accuracy, or validation loss. Apparently the networks learn to compensate for the random filters in the first layer by learning different weights in the subsequent layers. In other words, if we were to train modern CNNs using some discrete search over weights (e.g., genetic programming) to minimize the training loss, there would be no reason to expect that consistent Gabors of intermediate frequencies would be found: equally good training loss can be obtained with random filters in the first layer. To summarize, while quantitatively highly consistent representations are learned in the first layer of commonly used CNNs, this cannot be explained by the networks' minimization of the training loss. This motivates us to analyze representation learning in much simpler CNNs.

4 SIMPLE, LINEAR CNN

In order to understand the consistency that we observe among energy profiles in the first layer of trained CNNs, we turn to analyzing a very simple model: a linear CNN with one hidden layer. Specifically, in this simple model, the first layer consists of convolutions with W different filters, and the output is given by a global average pool of the filters over all locations. This model is clearly very different from real-world CNNs, but we use it because it allows closed-form analysis and also exhibits some of the same consistency behaviors that we found in real-world CNNs. Specifically, we have found that:

• The energy profiles of simple, linear CNNs are highly consistent across initializations and widths and are very different from the energy profiles of the initial conditions.
• The energy profile of simple, linear CNNs trained with gradient descent is different from the energy profile of the filters that globally optimize the loss.
• The energy profiles of simple, linear CNNs are highly consistent when the true labels are replaced with random labels.

These properties are all displayed in figure 4, where we show results of training a linear model on binary tasks from CIFAR10. In all cases, the energy profile that is learned with true labels (red) is different from the initial conditions and is sensitive mostly to intermediate frequencies, while the optimal energy profile (shown in blue) is quite different and shows high sensitivity to high spatial frequencies. Training these networks with random labels gives an energy profile (in green) that is similar to that of the true labels. The following theorems show the same behaviors analytically.

Theorem 4.1. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss. The energy profile for the filters that globally minimize the loss is given by µ_opt², with:

µ_opt = (KᵀK)⁻¹Kᵀy

where K is the matrix of average patches of each image (in the PCA basis) and y is the vector of labels.

Proof. This follows from the model being equivalent to a linear model in the average image patch (Lemmas A.1 and A.2) and solving the resulting linear regression problem.

Theorem 4.2. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance σ²I.
The squared energy profile for the filters at any iteration is given by µ_GD² + σ², with:

µ_GD = (KᵀK + Λ)⁻¹Kᵀy

with the same K and y as in Theorem 4.1, where Λ is a spectral regularizer that depends on the learning rate, the number of iterations, and KᵀK.

Proof. The full proof is given in the appendix and uses a technique similar to the one used to derive spectral biases in gradient descent learning of fully connected networks (LeCun et al., 1991). See Theorem A.3 for the full proof.

Theorem 4.3. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance σ²I. If the label y is uncorrelated with the average patch of each image, then the squared energy profile for the filters at any iteration is given by µ_0² + σ², with:

µ_0 ∝ (KᵀK + Λ)⁻¹Kᵀ1 (2)

where K is the matrix of average patches of each image, 1 is a vector of all ones, and Λ is a spectral regularizer that depends on the learning rate and the number of iterations.

Proof. This follows from Theorem 4.2 and the fact that the quantity Kᵀy is proportional to the empirical expectation of the product between each average image patch and its label. When the average patch is uncorrelated with the label, this expectation is the product of the expected average patch and the expected label, so it is proportional to Kᵀ1, which is proportional to the expectation of the average patch.

Corollary 4.4. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance σ²I. If the average patch is the same in the two classes, then training with the true labels and training with random labels will give the same energy profile, given by:

µ_0 ∝ (KᵀK + Λ)⁻¹Kᵀ1

Proof. This follows from Kᵀy being the sum of all average image patches with y_i = 1. If the average patches of both classes are equal, then, in expectation over a random (binary) y, the sums of all average patches will be equal. A full description of µ_0 can be found in Theorem A.5.

These theorems show that in the case of simple, linear CNNs, different networks (initial conditions, widths) will learn the same representation in the first layer, but despite the fact that the loss is convex, the learned representation will not be the representation that globally optimizes the training loss for any finite number of training steps. Rather, the use of gradient descent introduces an implicit bias that favors certain energy profiles, depending on the number of training steps and the learning rate (with the form of the bias given explicitly by Equation 2). This implicit bias causes the learned profiles to be highly consistent yet very different from the optimal one.

5 IMPLICIT BIAS IN NONLINEAR CNNS

The theoretical analysis of linear CNNs shows that if the true labels are uncorrelated with the average patch of an image, the learned energy profile will converge to a consistent profile that is determined by the dynamics of gradient descent and the statistics of the input patches. We therefore ask: is the consistent energy profile that we find in real-world CNNs also due to the dynamics of SGD? According to our analysis, the implicit bias is strongest when the label is uncorrelated with the average patch.
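Before measuring this correlation on real data, note that the closed form of Theorem 4.2 is easy to verify numerically for the equivalent regression on average patches (Lemma A.1). A small NumPy sketch, with synthetic K and y standing in for real data:

import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 27
K = rng.normal(size=(N, D))              # one average patch per image (rows)
y = rng.integers(0, 2, size=N).astype(float)

eta, T = 1e-3, 500
w = np.zeros(D)
for _ in range(T):                       # plain GD on (1/2)||Kw - y||^2
    w -= eta * K.T @ (K @ w - y)

# Closed form implied by the proof: a per-eigendirection shrinkage of the
# least-squares solution, i.e., a ridge-like spectral regularizer Lambda.
d, V = np.linalg.eigh(K.T @ K)
shrink = (1 - (1 - eta * d) ** T) / d    # tends to 1/d_i as T grows
w_closed = V @ (shrink * (V.T @ (K.T @ y)))
assert np.allclose(w, w_closed)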
We measured this correlation on commonly used image classification datasets by computing the correlation coefficient between the average PCA projection of an image and its label for different tasks (a binary classification of one class versus the rest). The results are shown in figure 5. For almost all tasks and coefficients, the correlation coefficient is close to zero. Given this small amount of correlation, we would expect a similar energy profile when we train with random labels and with true labels. Maennel et al. (2020) have already shown that when CNNs are trained with random labels, the representations that are learned in the first layers are still useful for other tasks. Here, we ask a more quantitative question: are the energy profiles the same? As shown in figure 6 and table 2, the answer is clearly yes. Correlations above 0.9 are consistently observed even when the true labels are replaced with random labels, and the learned representations are still mostly sensitive to intermediate spatial frequencies. This is true both when training on multiclass recognition problems (e.g., CIFAR10, CIFAR100, CelebA) and when training on smaller, 2-class problems for which we have already seen consistency of linear CNNs (fig. 4). As an additional test of the hypothesis that the energy profiles we see in real-world CNNs are mostly due to the implicit bias, we created new datasets in which we artificially introduced strong correlations between the label and particular PCA components and trained VGG on them. The image labels were determined by the projection of the average patch onto some PCA component, such that the 5,000 images with the largest magnitude of projection were labeled 1, and so on. As can be seen in fig. 7, once the average patch of each class is changed manually, the correlation between the true-label and random-label profiles decreases from the original 0.9 ± 0.02 to as low as −0.24 ± 0.02, depending on the component changed, and the learned energy profiles no longer resemble the human sensitivity function.
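For reference, the correlation plotted in figure 5 can be computed as follows; this is a sketch, with array layouts and names of our choosing rather than the authors' code:

import numpy as np

def avg_patch_label_corr(images, labels, pcs, patch=3):
    # images: (N, H, W, C); labels: (N,) binary (one class vs. the rest)
    # pcs: (num_components, patch*patch*C) PCA basis of training patches
    H, W = images.shape[1], images.shape[2]
    feats = []
    for img in images:
        ps = [img[i:i + patch, j:j + patch].ravel()
              for i in range(H - patch + 1) for j in range(W - patch + 1)]
        feats.append(pcs @ np.mean(ps, axis=0))   # project the average patch
    feats = np.array(feats)                       # (N, num_components)
    return np.array([np.corrcoef(feats[:, c], labels)[0, 1]
                     for c in range(pcs.shape[0])])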
6 RELATED WORK

The fact that different CNNs tend to learn similar filters in the first layer has been reported previously (e.g., Yosinski et al. (2014); Sarwar et al. (2017); Luan et al. (2017); Alekseev & Bobe (2019)) and follows a line of work on visualizing representations in deep CNNs (Zeiler & Fergus, 2013; Girshick et al., 2013). Our work extends this finding by showing that the overall representation in the first layer is not only qualitatively but also quantitatively similar: different CNNs not only learn to recognize spatial frequencies in their first layer but learn the same distribution of frequencies. This consistency is then extended to networks trained with true and with random labels. Prior works have also studied the ability of neural networks to overfit random labels (Arpit et al., 2017) and to use representations learned in this regime for transfer learning. Maennel et al. (2020) hypothesized that the ability of networks trained on random labels to transfer to new tasks is due to the fact that, under certain conditions, the first layers of networks trained with random labels have covariances aligned with the input image patches. We expand on this hypothesis by showing that networks trained with true labels, for which there is no alignment guarantee, display the same energy profile as networks trained with random labels. We show this quantitatively for VGG11 and ResNet, and theoretically for linear CNNs with a single hidden layer.

The fact that gradient descent training biases towards certain solutions has been known for many years and has been proven mainly for linear predictors and separable data. Studies on linear networks (Soudry et al., 2018) and linear CNNs (Gunasekar et al., 2018) found that under certain conditions, gradient descent causes the effective linear predictor to be biased towards sparsity (in Fourier space in the case of CNNs), minimal norm, or max-margin (Chizat & Bach, 2020). Related works have also shown that deep non-linear networks are biased towards learning lower frequencies first (Rahaman et al., 2019). Our work follows this line, focusing on the features learned in the first layer as a result of this bias and of the input image statistics. In its theoretical part, our analysis closely follows the methods used by LeCun et al. (1991) and Hacohen & Weinshall (2022), who analyze the dynamics of the weights of a fully connected network during learning with gradient descent. We use a similar technique, but our focus is on the first layer of a CNN. Additionally, we rely on linear networks to gain insight into the behavior of nonlinear networks, following previous works (Hacohen & Weinshall, 2022; Gissin et al., 2019). In the same manner, we support our simplified theoretical claims by quantitatively showing consistency with the theory in real-world CNNs such as VGG.

Motivated by the consistency with which Gabors are learned in the first layers of CNNs such as ResNet, GoogLeNet and DenseNet, each a state of the art at its time, some lines of work attempted to build CNNs with learnable Gabors in the first layer (Sarwar et al., 2017; Luan et al., 2017; Alekseev & Bobe, 2019). Nevertheless, these failed to reach the high level of performance of the "vanilla" architectures on benchmark tasks. Our work expands on this contradiction by showing that the networks consistently learn not only Gabor filters but also a specific distribution of their frequencies.

The distribution mentioned above is portrayed in our work using the energy profile of the first layer. This measure follows a long line of work on measuring and visualizing similarities between representations (Csiszárik et al., 2021; Kornblith et al., 2019; Nguyen et al., 2021; Li et al., 2015; Doimo et al., 2020), which varies between comparing the outputs of the transformations induced by the neurons and comparing the neurons themselves. The energy profile is another method in this line, while allowing a semantically meaningful visualization of the representation (since PCA components correspond to spatial frequencies) without any need for dimensionality reduction.

7 DISCUSSION

The dramatic success of CNNs in computer vision has led to increased interest in the representations that they learn. In this paper we have focused on the representation that CNNs learn in the very first layer and presented a high degree of quantitative consistency between the energy profiles learned by different networks using different initializations and architectures. We have examined the hypothesis that this consistency is due to networks learning a representation that is useful for object recognition and presented results that are inconsistent with that hypothesis.
By analyzing simple, linear CNNs we have shown that such networks provably converge to a consistent energy profile under many conditions, but this profile may have nothing to do with the labels and is instead determined by an implicit bias due to the dynamics of gradient descent and the statistics of the input patches.

APPENDIX A LINEAR CONVOLUTIONAL NETWORKS

A.1 PROOFS OF THEOREMS ON LINEAR CONVOLUTIONAL NETWORKS

We first begin with a basic claim about the model composed of a hidden convolutional layer followed by a global average pool.

Lemma A.1. A linear CNN of depth 1 (followed by a global average pool) trained with MSE loss is equivalent to linear regression on the average image patch.

Proof. Let {X_i}_{i=1}^N with X_i ∈ R^{c×w×h} be the set of training images, {y_i}_{i=1}^N their binary labels (y_i ∈ {0, 1}), and let the weights of the first layer be W ∈ R^{k×c×d×d}, i.e., k filters of dimension d × d. Denote the output dimensions of the convolution by w′, h′. Then:

L(W; {(X_i, y_i)}_{i=1}^N) = (1/N) Σ_{i=1}^N (1/2) ∥ (1/(k·w′·h′)) Σ_{k,w′,h′} (X_i ∗ W) − y_i ∥²

Summing over the output dimensions is equivalent to summing over dot products of the patches with a single filter. Therefore, denoting by K_i ∈ R^{w′·h′ × c·d²} the patch matrix of the i'th image and by W̃ ∈ R^{c·d² × k} the reshaped weight matrix:

L(W; {(X_i, y_i)}_{i=1}^N) = (1/N) Σ_{i=1}^N (1/2) ∥ ((1/(w′·h′)) 1)ᵀ K_i W̃ ((1/k) 1) − y_i ∥²

Noting that ((1/(w′·h′)) 1)ᵀ K_i is the average patch of the i'th image and W̃ ((1/k) 1) is the average filter concludes the proof.

Another lemma we will use further on claims that during training of the linear CNN model only the average filter changes, while the covariance of the filters remains as at initialization. Therefore, proofs for one filter extend easily to multiple filters.

Lemma A.2. In a linear CNN of depth 1 followed by a global average pool, of any width, trained with GD and MSE loss, the average filter changes across iterations while the covariance of the filters remains as at initialization.

Proof. Following the notation of Lemma A.1, denote by K ∈ R^{N×c·d²} the average image patch matrix (the matrix whose i'th row is the average patch of image X_i), and let the network consist of filters w_1, ..., w_m, trained with the following loss:

L(w_1, ..., w_m; K, y) = ∥ (1/m) Σ_{i=1}^m K w_i − y ∥² (3)
= ∥ K ((1/m) Σ_{i=1}^m w_i) − y ∥² = ∥Kw̄ − y∥² (4)

where w̄ is the average filter. The dynamics of a single filter in this layer are:

∂L/∂w_j = (1/(2m)) Kᵀ (K ((1/m) Σ_{i=1}^m w_i) − y) = (1/(2m)) Kᵀ (Kw̄ − y) (5)

meaning that the gradients w.r.t. all filters are equal and depend only on the average filter at the current iteration. By recursion we can see that the average filter changes as follows, for learning rate η:

w̄^t = (1/m) Σ_{i=1}^m (w_i^{t−1} − η∇L^{t−1}(w_i)) = (1/m) Σ_{i=1}^m (w_i^{t−1} − η∇L^{t−1}(w̄)) (6)
= ((1/m) Σ_{i=1}^m w_i^{t−1}) − η∇L^{t−1}(w̄) = w̄^{t−1} − η∇L^{t−1}(w̄) (7)

Since all filters receive identical gradients, each filter changes by the same amount, so the differences between filters, and hence their covariance, remain as at initialization.

Theorem A.3. Let K be a matrix whose i'th row is the average image patch of the i'th image, let y be the vector of labels of all images, and let K̄ = KU be the same matrix in the PC basis (with U being the PCA eigenvector matrix).
The squared energy profile of the weights of a linear CNN, initialized with random weights of zero mean and covariance σ²I and trained with GD, is equal to the following:

e_i² := (1/M) Σ_{j=1}^M ⟨f_j, p_i⟩² = w̃²(i) + σ² (8)

where f_1, ..., f_M are the filters of the first layer and w̃ = (K̄ᵀK̄ + Λ)⁻¹ K̄ᵀy is the solution to a regularized regression problem in the PC basis that regresses the average patch of an image against its label, with Λ = Λ(KᵀK, t, η) a matrix depending on the eigenvalues of KᵀK, the iteration t of GD, and the step size η.

Proof. It follows from Lemma A.2 that during training all filters change by the average filter. We show that a single filter (at iteration t of GD) corresponds to the solution of a ridge regression problem with some matrix Λ = Λ(t, η, KᵀK), with η the step size. Unrolling the recursion of GD updates, and assuming w is initialized at w = 0:

w_t = w_{t−1} − ηK̄ᵀ(K̄w_{t−1} − y) = w_{t−1} − ηK̄ᵀK̄w_{t−1} + ηK̄ᵀy
w_t = η Σ_{j=0}^{t−1} (I − ηK̄ᵀK̄)^j K̄ᵀy (9)

In this coordinate system, K̄ᵀK̄ is a diagonal matrix with the empirical variances σ̂_i² on the diagonal if the data is centered. If the data is not centered, then K̄ᵀK̄ = Σ̂ + µ̂µ̂ᵀ, where Σ̂ is a diagonal matrix with the empirical variances on the diagonal and µ̂_i is the empirical mean estimating E_x[⟨x, p_i⟩]. This is because in PCA coordinates K̄ = KU, where U contains the eigenvectors as columns. Since K is not centered, K = K_0 + 1K_avgᵀ, with K_0 zero-mean and K_avg the average row. Therefore, K̄ᵀK̄ = (K_0U + 1K_avgᵀU)ᵀ(K_0U + 1K_avgᵀU) = Σ̂ + µ̂µ̂ᵀ, where the cross term K_0ᵀ(1K_avgᵀ) vanishes since K_0 has zero mean. Therefore:

w_t = η Σ_{j=0}^{t−1} (I − η(Σ̂ + µ̂µ̂ᵀ))^j K̄ᵀy (10)

Notice that (I − η(Σ̂ + µ̂µ̂ᵀ))^j can be decomposed in the following manner using the binomial theorem, where C(j, k) denotes the binomial coefficient:

(I − η(Σ̂ + µ̂µ̂ᵀ))^j = (I − ηΣ̂)^j + Σ_{k=1}^{j} C(j, k) (−η)^k ∥µ̂∥^{2(k−1)} (I − ηΣ̂)^{j−k} µ̂µ̂ᵀ (11)

Putting this back into Equation 10:

w_t = η Σ_{j=0}^{t−1} [ (I − ηΣ̂)^j + Σ_{k=1}^{j} C(j, k) (−η)^k ∥µ̂∥^{2(k−1)} (I − ηΣ̂)^{j−k} µ̂µ̂ᵀ ] K̄ᵀy (12)

Looking at the i'th coordinate, with λ_i being the i'th eigenvalue on the diagonal of Σ̂:

w_t(i) = (K̄ᵀy)(i) Σ_{j=0}^{t−1} (1 − ηλ_i)^j + (µ̂µ̂ᵀK̄ᵀy)(i) Σ_{j=1}^{t−1} (1/∥µ̂∥²) ( Σ_{k=1}^{j} C(j, k) (−η∥µ̂∥²)^k (1 − ηλ_i)^{j−k} ) (13)

After some algebra:

w_t(i) = [ (1 − (1 − η(λ_i + ∥µ̂∥²))^t) / (∥µ̂∥²(λ_i + ∥µ̂∥²)) − (1 − (1 − ηλ_i)^t) / λ_i ] (µ̂µ̂ᵀK̄ᵀy)(i) + [ (1 − (1 − ηλ_i)^t) / λ_i ] (K̄ᵀy)(i) (14)

In matrix notation, define the diagonal matrix A with A_ii = (1 − (1 − ηλ_i)^t)/λ_i as the i'th element on the diagonal, and the diagonal matrix B with B_ii = (1 − (1 − η(λ_i + ∥µ̂∥²))^t)/(∥µ̂∥²(λ_i + ∥µ̂∥²)) as the i'th element on the diagonal; then we get:

w_t = (B − A) µ̂µ̂ᵀ K̄ᵀy + A K̄ᵀy (15)

Solving the following for Λ:

w_t = (K̄ᵀK̄ + Λ)⁻¹ K̄ᵀy = (Σ̂ + µ̂µ̂ᵀ + Λ)⁻¹ K̄ᵀy (16)

we get:

Λ = (B + (A − B) µ̂µ̂ᵀ)⁻¹ − Σ̂ − µ̂µ̂ᵀ (17)

which gives a definition of the regularization matrix. Since the filter covariance stays constant throughout training by Lemma A.2, treating the filters as a random variable initialized with covariance σ²I (in the PCA basis) means that their empirical second moment equals the sum of their squared mean and their variance. Therefore, denoting the filters in the PCA basis by f̃_j, we get in the i'th coordinate:

(1/M) Σ_{j=1}^M ⟨f_j, p_i⟩² = ( (1/M) Σ_{j=1}^M ⟨f_j, p_i⟩ )² + (1/M) Σ_{j=1}^M ( ⟨f_j, p_i⟩ − (1/M) Σ_{j=1}^M ⟨f_j, p_i⟩ )² = w̃²(i) + σ² (18)

Theorem A.4 (Effect of Labels).
Let W^t_True be the weights of the first layer of a linear CNN with a single hidden layer and any width, trained for t steps on a binary classification task with MSE loss and gradient descent, and let W^t_Random be the weights of the first layer of the same CNN trained with random labels drawn from a Bernoulli distribution. If the average patch of both classes is identical, and the dataset is balanced between them, then at any training iteration:

E_{y∼Bernoulli(1/2)} [W^t_Random] = W^t_True (19)

Proof. Let K ∈ R^{N×c·d²} be the average image patch matrix and y ∈ {0, 1}^N the image labels. From Lemma A.1, training a linear CNN with one layer followed by a global average pool is equivalent to solving the following linear regression problem for the weight matrix W ∈ R^{c·d²×1}:

L(W; K, y) = (1/N) (1/2) ∥KW − y∥²

Using gradient descent with learning rate η, the update rule for W is:

W_t = W_{t−1} − (η/N) Kᵀ(KW_{t−1} − y) = (I − (η/N) KᵀK) W_{t−1} + (η/N) Kᵀy (20)

Notice that in expectation E_{y∼Bernoulli(1/2)}[y] = (1/2)1; therefore E_{y∼Bernoulli(1/2)}[Kᵀy] is (half) the sum of all average image patches. By our assumption, the average image patch is equal between the two classes. Denote this average patch by z; since K is the average patch matrix, z = (2/N) Kᵀy. Combining this observation with the above:

E_{y∼Bernoulli(1/2)} [(η/N) Kᵀy] = (η/N) (1/2) Kᵀ1 = (η/(2N)) N z = (η/N) Kᵀy (21)

and that concludes the proof. Note that we assumed the CNN has width 1, but using Lemma A.2 is enough to generalize to any width.

Theorem A.5 (Solution in PCA Basis). Let w̃ = (K̄ᵀK̄ + Λ)⁻¹ K̄ᵀy be as described in Theorem A.3, for K̄ the average image patch matrix in the PCA basis and Λ = Λ(K̄ᵀK̄, t, η). Denote by µ̂ the empirical mean projection onto the PCA basis and by Σ̂ the uncentered data covariance in the PCA basis. If the labels are drawn randomly from a Bernoulli distribution, then in expectation w̃ can be calculated at any iteration t and for any step size η by the following formula:

E_{y∼Bernoulli(1/2)} [w̃] ∝ ( I − Σ̂′⁻¹µ̂µ̂ᵀ / (1 + µ̂ᵀΣ̂′⁻¹µ̂) ) Σ̂′⁻¹µ̂ (22)

with Σ̂′ = Σ̂ + Λ.

Proof. Following the notation from before, denote by K ∈ R^{N×c·d²} the average patch matrix and by K̄ the same matrix in PCA coordinates. From Theorem A.3, K̄ᵀK̄ = Σ̂ + µ̂µ̂ᵀ. Solving the linear ridge regression problem in this coordinate system as described in Theorem A.3:

L(w; K̄, y) = (1/2) ∥K̄w − y∥² + (1/2) wᵀΛw ⇒ w̃ = (K̄ᵀK̄ + Λ)⁻¹ K̄ᵀy (23)

In expectation over a random y, as described in Theorem A.4, E[y] = (1/2)1, and therefore E[K̄ᵀy] = (N/2) µ̂. As mentioned before, K̄ᵀK̄ = Σ̂ + µ̂µ̂ᵀ. Define Σ̂′ = Σ̂ + Λ, a matrix summing the PCA variances and the regularization coefficients. Using the Woodbury matrix identity:

(Σ̂′ + µ̂µ̂ᵀ)⁻¹ = Σ̂′⁻¹ − Σ̂′⁻¹µ̂ (1 + µ̂ᵀΣ̂′⁻¹µ̂)⁻¹ µ̂ᵀΣ̂′⁻¹ = ( I − Σ̂′⁻¹µ̂µ̂ᵀ / (1 + µ̂ᵀΣ̂′⁻¹µ̂) ) Σ̂′⁻¹

and we get:

w̃ ∝ ( I − Σ̂′⁻¹µ̂µ̂ᵀ / (1 + µ̂ᵀΣ̂′⁻¹µ̂) ) Σ̂′⁻¹µ̂

A.2 CORRELATION FIGURES

As mentioned in Section 4, energy profiles of linear CNNs show much higher similarity between training with true and with random labels using SGD than to their random initialization or to the optimal solution of the corresponding linear regression problem. To complement figure 4, table 3 displays the mean and standard deviation of the correlation coefficients between the mentioned energy profiles. Again, it is clear that there is high similarity between the energy profiles of linear CNNs trained with SGD on true and on random labels.
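As a numerical sanity check of Lemma A.1, on which the results above rest, the convolve-then-average output can be compared against the inner product of the average patch with the average filter; a sketch for a single gray-scale image, with arbitrary sizes:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8))                 # one gray-scale image (c = 1)
W = rng.normal(size=(4, 3, 3))              # k = 4 filters of size 3x3

# Network output: dot products of every patch with every filter, then a
# global average pool over positions and filters.
patches = sliding_window_view(X, (3, 3)).reshape(-1, 9)   # (w'*h', d*d)
out_net = (patches @ W.reshape(4, 9).T).mean()

# Lemma A.1: the same number is <average patch, average filter>.
out_reg = patches.mean(axis=0) @ W.reshape(4, 9).mean(axis=0)
assert np.isclose(out_net, out_reg)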
APPENDIX B EXPANDED RESULTS ON SIMILARITY BETWEEN PRETRAINED MODELS

B.1 ACCURACY OF NETWORKS TRAINED WITH AND WITHOUT A FROZEN FIRST LAYER

As shown in figure 3, networks of different depths converge to the same minimal loss value whether or not their first layer is trained. To complement this, we present the accuracies of these models below (fig. 9), echoing this result.

B.2 COMPARISON OF PRETRAINED CNNS ON CIFAR AND IMAGENET

To expand on the similarity between the first layers of different architectures, we present correlation plots emphasizing the difference between a random initialization and the learned weights of different networks on different datasets. Presented are figures comparing pretrained models on ImageNet (figs. 10 and 11), CIFAR10 (fig. 12), CIFAR100 (fig. 13), and ResNets trained on different datasets (fig. 14). All models were downloaded through the Pytorch Model Hub. Although it might seem odd that the correlation on ImageNet is much higher than on the CIFAR datasets, we believe this is due to resolution: while on the CIFAR datasets the correlation is calculated over an energy profile in R^27, the ImageNet profiles live in R^147, making the calculated correlation smoother and less sensitive to noise. This is demonstrated in fig. 15, which presents the correlation between 27 components of the ImageNet profiles. At this higher resolution, the correlation coefficients between the different models drop and are roughly equal to those between the different models on the CIFAR datasets.

B.3 COMPARISON OF VGG WITH DIFFERENT LOSSES

Although Theorem A.4 and all other theorems are proved for a linear network with MSE loss (as is customary in theoretical works on linear networks, e.g., Hacohen & Weinshall (2022); LeCun et al. (1991)), in practice most CNNs for multi-class classification are trained with cross-entropy loss. To test the effect on the energy profile of a real network, we trained VGG with both cross-entropy and MSE, and with true and random labels; the results are displayed in figure 16 and the correlations in table 4. As can be seen in the figure, even in this case the networks' energy profiles are highly correlated, supporting our hypothesis that the main difference between the formula of Theorem A.5 and the pretrained networks is due to the oversimplification of the linear model, and not, for example, to the loss used in theory versus practice.

B.4 FULL FIGURES ON TRUE AND RANDOM LABELS

B.5 EXPERIMENTAL DETAILS

All models, linear and nonlinear, were trained with SGD and a constant learning rate of 0.1. No preprocessing was applied to the data except when stated otherwise. All models were trained for 150 epochs with minibatches of size 256. All results are averaged over at least 3 different random seeds. When referring to models "trained with random labels", we trained the models until they overfit the training data, as both ResNet and VGG can reach 99% train accuracy on CIFAR10 with random labels. All models in the main text were trained by us, except those depicted in figure 1. All pretrained models in figure 1 and appendix B.2 were downloaded from the Pytorch Model Hub.
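To reproduce the kind of comparison shown in these appendices, the energy profile of Equation 1 can be computed directly from a hub model's first-layer weights. The following is a sketch; the PCA basis of training patches must be computed separately, the random orthogonal basis below is only a placeholder, and the flattening order of the weights must match that of the patch basis:

import numpy as np
import torch
import torchvision

def energy_profile(conv_weight, pcs):
    # Equation 1: e_i = ||A p_i||, with A the (num_filters, patch_dim)
    # filter matrix and pcs the patch PCA components as rows.
    A = conv_weight.detach().reshape(conv_weight.shape[0], -1).numpy()
    return np.linalg.norm(A @ pcs.T, axis=0)

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
# Placeholder standing in for the (147, 147) PCA basis of 7x7x3 patches.
pcs = np.linalg.qr(np.random.default_rng(0).normal(size=(147, 147)))[0]
e = energy_profile(model.conv1.weight, pcs)
# The consistency of two filter sets is then the corrcoef of their profiles.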
1. What is the main contribution of the paper regarding the consistency of CNN representations?
2. What are the strengths and weaknesses of the proposed measure called energy profiles?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the implications of the insight that replacing true labels with random labels does not change the energy profile?
5. Are there any inconsistencies or missing details in the theoretical analysis, particularly in the proof of Theorem 4.1 and Theorem A.3?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper studies the empirical phenomenon that CNNs learn consistent representations in the first layer across different experimental settings (architecture, initializations, etc.). The goal is to understand the source of this consistency. To quantitatively measure consistency, the authors suggest a new measure called energy profiles. They show that the energy profiles are consistent across different experimental setups. Next, the authors examine whether the consistent representations are necessary for good performance. They show empirically that this is not the case, by replacing the first layer with random weights and showing that the test performance does not change. To better understand the consistency of energy profiles, a theoretical analysis of this measure is shown for linear CNNs. One insight from this analysis is that if the average image patch is uncorrelated with the labels, then the energy profile is the same as the energy profile for random labels. This finding is corroborated with experiments on non-linear CNNs.

Strengths And Weaknesses

Strengths:
New quantitative measure of learned representations.
Novel theoretical results on linear CNNs.
Interesting insights on the consistency of learned representations.

Weaknesses:
The theoretical proofs are not fully rigorous, unclear, and not given in detail.
Some of the empirical results and insights are not very significant.
Some of the statements are vague.
The paper is not clearly written.

Here are detailed comments:

Major comments:
The proof of Theorem 4.1 is not given in detail, only a rough sketch.
In Theorem 4.1, the assumptions on the data are not stated. For example, it seems that there should be assumptions for K^T*K to be invertible.
In the proof of Theorem A.3, why do the GD dynamics depend on the average patch in the PC basis? In Lemma A.2 they depend on the average patch in the regular basis.
In the paper it is claimed that: "This implicit bias causes the learned profiles to be highly consistent yet very different from the optimal one." But if t goes to infinity, doesn't GD converge to the optimal solution and therefore the same energy profile? This point is not addressed in the theory section. Generally, it is unclear if there is a qualitative difference between the energy profile of GD after training until convergence and the optimal one in the linear CNN setting.
In the theoretical part, the labels are taken in expectation but the data (K) is assumed to be finite and fixed. This is not consistent and rigorous. Shouldn't all the theorems hold in the population setting where the data is also in expectation?
The significance of the insight that replacing true labels with random labels does not change the energy profile is unclear. Are the filters also qualitatively similar in both settings as in Figure 1(b) and Figure 1(c)?

Other comments:
How is Lemma A.2 used in the proof of Theorem 4.1?
f_j are not defined in the statement of Theorem A.3.
"Average patch" is used and it is not clear if this is the average patch of the image or of the class. E.g., "According to our analysis, the implicit bias is strongest when the label is uncorrelated with the average patch." on page 7. Furthermore, these concepts are not formally defined.
In the proof of Theorem A.3, why is there necessarily a matrix Gamma such that K^T*K + Gamma is invertible? It is not proved that the given matrix is invertible.
The correlation between the image average patch and the labels is not formally defined.
It is claimed that the experiments with random labels suggest that the implicit bias is the source of consistency. This is somewhat confusing. Using "implicit bias" in this context implies that it depends only on the data (without the labels) and the algorithm. However, it does depend on the labels, because given different labels, GD converges to different solutions and therefore has different biases.
This statement is unclear: "the implicit bias is strongest when the label is uncorrelated with the average patch". What does strongest implicit bias mean?

Clarity, Quality, Novelty And Reproducibility

Quality: The theoretical results are not clear and not fully rigorous. The empirical results and experiments are of high quality.
Clarity: Some of the statements are vague, and the writing can be improved.
Novelty: New theoretical results and novel empirical insights on the consistency of CNN representations.
ICLR
Title On Representation Learning in the First Layer of Deep CNNs and the Dynamics of Gradient Descent Abstract It has previously been reported that the representation that is learned in the first layer of deep CNNs is very different from the initial representation and highly consistent across initialization and architecture. In this work, we quantify this consistency by considering the set of filters as a filter bank and measuring its energy distribution. We find that the energy distribution is remarkably consistent and try to determine the source of this consistency. We show that this consistency cannot be explained by the fact that CNNs learn a representation that is useful for recognition and that CNNs trained with fixed, random filters in the first layer yield comparable recognition performance to full learning. We then show that similar behavior occurs in simple, linear CNNs and obtain an analytical characterization of the energy profile of linear CNNs trained with gradient descent. Our analysis shows that the energy profile is determined by two factors (1) the correlation of the average patch and the class label and (2) an implicit bias given the dynamics of gradient descent. Finally, we show that in commonly used image recognition datasets the correlation between the average patch and the class label is very low and it is the implicit bias that best explains the consistency of representations observed in real-world CNNs. 1 INTRODUCTION The remarkable success of Convolutional Neural Networks (CNNs) on a wide variety of image recognition tasks is often attributed to the fact that they learn a good representation of images. Support for this view comes from the fact that very different CNNs tend to learn similar representations and that features of CNNs that are trained for one task are often useful in very different tasks (Yosinski et al., 2014; Doimo et al., 2020; Gidaris et al., 2018). A natural starting point for investigating representation learning in deep CNNs is the very first layer. Studying this representation is somewhat easier than studying more general representation learning since each neuron can be characterized by a single linear filter which can be easily visualized as an image. Figure 1 shows examples of visualizations of the learned filters: unlike the initial filters which are random and devoid of structure, the trained filters resemble Gabor filters (Krizhevsky et al., 2012) and are visually similar for different trained networks. In addition to the qualitative similarity of filters that can be seen in figure 1, there have also been some reports that the filters are quantitatively similar. For example, Li et al. (2015) showed that one can often find a good match for filters learned by one CNN in the set of filters learned by another CNN. In this work we introduce a new measure for qualitatively measuring consistency of representations in the very first layer of a CNN. Using this measure, we show a remarkably high degree of consistency (correlation coefficient close to 1) between the representations that are learned by different CNNs, regardless of initializations, architectures and training sets. 
The fact that these filters are so different from the initialization is interesting in the context of the theory of deep networks which indicates that under certain conditions they can be trained in a ”lazy” regime (Chizat et al., 2019) - the representations in all intermediate layers hardly differing from their initialization and only the last output layer has weights that differ from initialization. Figure 1 clearly shows that ”lazy training” does not occur in the first layer of deep CNNs and that consistent representation learning occurs instead. A natural explanation for the learning of consistent filters in the first layer is that these filters are optimal in some sense for solving the recognition task. Indeed, Gabor filters and similar oriented filters were often used as a representation of images in the days of ”handcrafted” features for computer vision (Dalal & Triggs, 2005). Similarly, under this explanation, the networks have simply learned that in order to minimize the training loss the first layer of deep CNNs must have filters that resemble Gabors. In this paper we present empirical and theoretical results that are inconsistent with this explanation. We show that CNNs with commonly used architectures can be trained with fixed, random filters in the first layer and still yield comparable performance to full learning. We then show that consistent representation learning in the first layer also occurs in simple, linear CNNs and prove that for these CNNs the dynamics of gradient descent learning together with the statistics of natural image patches introduce an implicit bias towards certain filter distributions. We then show that in real-world CNNs trained on commonly used datasets, a highly consistent representation is learned in the first layer when the true labels are replaced with random labels and therefore that it is the implicit bias that best explains the consistency of representations observed in real-world CNNs. 2 QUANTIFYING CONSISTENCY USING ENERGY PROFILES The visual similarity of the filters that are learned in the first layer of CNNs (figure 1) is easy to see, but we wish to quantify the similarity of representations and go beyond the qualitative similarity. Recent works (Kornblith et al., 2019; Nguyen et al., 2021) suggest comparing two representations based on the distance between the distribution over patches induced by the two representations. But estimating this distance in high dimensions is nontrivial and two very different networks might give similar distributions over patches when the input distribution is highly skewed Ding et al. (2021). In this paper we propose a new method which avoids these shortcomings and is especially relevant for the first layer of a CNN, in which the representation is a linear function of the input patch. Given two patches x1, x2 and a linear transformation A whose rows are the filters, the squared distance between the transformed patches is ∥Ax1 −Ax2∥2 or alternatively (x1 − x2)TATA(x1 − x2). Thus a natural way to understand how distances are transformed when going from x1 to Ax1 is to look at the eigendecomposition of ATA: the ith eigenvalue of ATA measures how much distances in the direction of the ith eigenvector are increased or decreased by the transformation. 
The eigenvectors of $A^T A$ are simply the principal components of the filters, and if we assume translation invariance of the filters, they will have the same principal components as those of natural image patches: namely sines and cosines of different spatial frequencies (Aapo Hyvärinen & Hoyer, 2009). Thus the transformation of similarities is mostly driven by the eigenvalues of $A^T A$, and we focus on these to define the consistency of learned filters. Denote by $p_1, \ldots, p_k$ the PCA components computed from the training images' patches and by $A$ the weights of the first layer of some model trained on these images (where each row $A_j^T$ of $A$ is a filter). We define the energy w.r.t. each component $p_i$:

$$e_i = \|A p_i\| = \sqrt{\sum_j (A_j^T p_i)^2} \quad (1)$$

The energy profile of a set of filters is simply the vector $e = (e_1, \ldots, e_k)$, and we measure the consistency of two different sets of filters by the correlation coefficient between their energy profiles. Note that this consistency measure is invariant to a rescaling of the filters, to a permutation of the filters and to any orthogonal transformation of the filters. This way of comparing linear representations is equivalent to considering the set of filters as a filter bank and measuring the sensitivity of the filter bank to different spatial frequencies.

Figures 2a to 2c show that different models trained with gradient descent are remarkably consistent under our proposed measure. Regardless of architecture or the particular dataset that they were trained on, different CNNs have very similar energy profiles that are less sensitive to very high and very low spatial frequencies, with peak sensitivity at intermediate spatial frequencies (qualitatively similar to the sensitivity pattern of the human visual system, which is also most sensitive to intermediate spatial frequencies, as shown in figure 2d). Table 1 quantifies this similarity. The correlation coefficient between energy profiles across different random initializations and architectures is remarkably high (over 0.98 in many cases), while the correlation between the learned profiles and the random initialization is close to zero. An extensive set of experiments on various models and datasets can be found in appendix B.2. Thus our new measure allows us to quantitatively show that deep CNNs trained with gradient descent using standard parameters do not exhibit "lazy" training in the first layer and that highly consistent representation learning takes place. We now ask: what determines this consistency?

3 IS CONSISTENCY DUE TO CNNS LEARNING SEMANTICALLY MEANINGFUL FEATURES?

A natural explanation for the remarkable consistency of the learned representation in the first layer is that CNNs learn a representation that is good for object recognition. In particular, high spatial frequencies are often noisy while very low spatial frequencies are often influenced by illumination conditions. Thus learning a representation that is mostly sensitive to intermediate spatial frequencies makes sense if the goal is to recognize objects. Similarly, human vision is also mostly sensitive to intermediate spatial frequencies (Owsley, 2003) (see figure 2d), presumably for the same reasons. In order to test this hypothesis, we asked whether training modern CNNs while freezing the first layer results in a decrease in performance. If Gabors of intermediate frequencies were indeed optimal for object recognition, we would expect performance to suffer if we froze the first layer to random filters with equal energy in all frequencies.
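Concretely, the energy profile of equation 1 is straightforward to compute. The sketch below is our own illustration, not code released with the paper, and the function names are hypothetical: it obtains the PCA basis from the centered patch matrix via an SVD and takes column norms of the projected filter bank. It also illustrates the baseline used in the freezing experiment that follows: random filters have an approximately flat profile, i.e. near-equal energy in all frequencies.

import numpy as np

def energy_profile(filters, patches):
    # filters: (M, D) array, one flattened first-layer filter per row (the rows of A).
    # patches: (N, D) array of flattened training-image patches, used only to
    # compute the PCA basis p_1, ..., p_D.
    centered = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)  # rows of Vt are the p_i
    proj = filters @ Vt.T                 # proj[j, i] = A_j^T p_i
    return np.linalg.norm(proj, axis=0)   # e_i = sqrt(sum_j (A_j^T p_i)^2), eq. 1

def profile_correlation(e1, e2):
    # Consistency of two filter banks: correlation coefficient of their profiles.
    return np.corrcoef(e1, e2)[0, 1]

rng = np.random.default_rng(0)
patches = rng.normal(size=(10_000, 27))      # stand-in for flattened 3x3x3 patches
random_filters = rng.normal(size=(64, 27))   # the random-initialization baseline
e = energy_profile(random_filters, patches)
print(e.std() / e.mean())                    # small: near-equal energy at all frequencies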
Figure 3 shows that there is almost no change in the performance of modern CNNs when the weights in the first layer are frozen. This is true when measuring training accuracy, training loss, validation accuracy or validation loss. Apparently the networks learn to compensate for the random filters in the first layer by learning different weights in the subsequent layers. In other words, if we were to train modern CNNs using some discrete search over weights (e.g. genetic programming) to minimize the training loss, there would be no reason to expect that consistent Gabors of intermediate frequencies would be found: equally good training loss can be obtained with random filters in the first layer. To summarize, while quantitatively highly consistent representations are learned in the first layer of commonly used CNNs, this cannot be explained by the networks' minimization of the training loss. This motivates us to analyze representation learning in much simpler CNNs.

4 SIMPLE, LINEAR CNN

In order to understand the consistency that we observe among energy profiles in the first layer of trained CNNs, we turn to analyzing a very simple model: a linear CNN with one hidden layer. Specifically, in this simple model the first layer consists of convolutions with $W$ different filters and the output is given by a global average pool of the filters over all locations. This model is clearly very different from real-world CNNs, but we use it because it allows closed-form analysis and also exhibits some of the same consistency behaviors that we found in real-world CNNs. Specifically, we have found that:

• The energy profiles of simple, linear CNNs are highly consistent across initializations and widths and are very different from the energy profiles of the initial conditions.
• The energy profile of simple, linear CNNs trained with gradient descent is different from the energy profile of the filters that globally optimize the loss.
• The energy profiles of simple, linear CNNs are highly consistent when the true labels are replaced with random labels.

These properties are all displayed in figure 4, where we show results of training a linear model on binary tasks from CIFAR10. In all cases, the energy profile that is learned with true labels (red) is different from the initial conditions and is sensitive mostly to intermediate frequencies, while the optimal energy profile (shown in blue) is quite different and shows a high sensitivity to high spatial frequencies. Training these networks with random labels gives an energy profile (in green) that is similar to that of the true labels. The following theorems show the same behaviors analytically.

Theorem 4.1. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss. The energy profile for the filters that globally minimize the loss is given by $\mu_{opt}^2$ with:

$$\mu_{opt} = (K^T K)^{-1} K^T y$$

where $K$ is the matrix of average patches in each image (in the PCA basis) and $y$ is the vector of labels.

Proof. This simply follows from the model being equivalent to a linear model in the average image patch (lemmas A.1 and A.2), and solving the corresponding linear regression problem.

Theorem 4.2. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$.
The squared energy profile for the filters at any iteration is given by $\mu_{GD}^2 + \sigma^2$ with:

$$\mu_{GD} = (K^T K + \Lambda)^{-1} K^T y$$

with the same $K$ and $y$ as in theorem 4.1, where $\Lambda$ is a spectral regularizer that depends on the learning rate, the number of iterations and $K^T K$.

Proof. The full proof is given in the appendix and uses a technique similar to the one used to derive spectral biases in gradient descent learning of fully connected networks (LeCun et al., 1991). See A.3 for the full proof.

Theorem 4.3. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$. If the label $y$ is uncorrelated with the average patch in each image, then the squared energy profile for the filters at any iteration is given by $\mu_0^2 + \sigma^2$ with:

$$\mu_0 \propto (K^T K + \Lambda)^{-1} K^T \mathbf{1} \quad (2)$$

where $K$ is the matrix of average patches in each image, $\mathbf{1}$ is a vector of all ones and $\Lambda$ is a spectral regularizer that depends on the learning rate and the number of iterations.

Proof. This follows from theorem 4.2 and the fact that the quantity $K^T y$ is proportional to the empirical expectation of the product between each average image patch and its label. When the average patch is uncorrelated with the label, this expectation is the product of the expected average patch and the expected label, so it is proportional to $K^T \mathbf{1}$, which is the expectation of the average patch.

Corollary 4.4. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$. If the average patch is the same in the two classes, then training with the true labels and training with random labels will give the same energy profile, given by:

$$\mu_0 \propto (K^T K + \Lambda)^{-1} K^T \mathbf{1}$$

Proof. Follows from $K^T y$ being the sum of all average image patches of images with $y_i = 1$. If both average class patches are equal, then in expectation over a random (binary) $y$, the sums of all average patches will be equal. A full description of $\mu_0$ can be found in theorem A.5.

These theorems show that in the case of simple, linear CNNs, different networks (initial conditions, widths) will learn the same representation in the first layer, but despite the fact that the loss is convex, the learned representation will not be the representation that globally optimizes the training loss for any finite number of training steps. Rather, the use of gradient descent introduces an implicit bias that favors certain energy profiles depending on the number of training steps and the learning rate (with the form of the bias given explicitly by equation 2). This implicit bias causes the learned profiles to be highly consistent yet very different from the optimal one.

5 IMPLICIT BIAS IN NONLINEAR CNNS

The theoretical analysis of linear CNNs shows that if the true labels are uncorrelated with the average patch in an image, the learned energy profile will converge to a consistent profile that is determined by the dynamics of gradient descent and the statistics of the input patches. We therefore ask: is the consistent energy profile that we find in real-world CNNs also due to the dynamics of SGD? According to our analysis, the implicit bias is strongest when the label is uncorrelated with the average patch.
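This correlation is simple to estimate on any labeled image dataset. The following sketch (our own, with hypothetical helper names, not the authors' code) computes, for a one-vs-rest task, the correlation coefficient between each PCA coefficient of an image's average patch and the binary label; this is the quantity reported in figure 5 below.

import numpy as np

def label_patch_correlations(images, labels, cls, Vt):
    # images: (N, P, D) array, each image given as P flattened patches of size D.
    # labels: (N,) integer class labels; cls: the positive class (one-vs-rest).
    # Vt: (D, D) PCA basis of the patches, one component p_i per row.
    avg_patch = images.mean(axis=1)       # (N, D): the rows of the matrix K
    coeffs = avg_patch @ Vt.T             # projections onto p_1, ..., p_D
    y = (labels == cls).astype(float)     # binary one-vs-rest label
    return np.array([np.corrcoef(coeffs[:, i], y)[0, 1]
                     for i in range(coeffs.shape[1])])

On CIFAR-like data the entries of this vector are expected to be close to zero for almost all components, which is the regime in which the implicit bias dominates.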
We measured this correlation in commonly used image classification datasets by computing the correlation coefficient between the average PCA coefficient of an image and its label for different tasks (a binary classification of one class versus the rest). The results are shown in figure 5. For almost all tasks and coefficients, the correlation coefficient is close to zero. Given this small amount of correlation, we would expect similar energy profiles when we train with random labels and with true labels. Maennel et al. (2020) have already shown that when CNNs are trained with random labels, the representations that are learned in the first layers are still useful for other tasks. Here, we ask a more quantitative question: are the energy profiles the same? As shown in figure 6 and table 2, the answer is clearly yes. Correlations above 0.9 are consistently observed even when the true labels are replaced with random labels, and the representations that are learned are still mostly sensitive to intermediate spatial frequencies. This is true both when training on multiclass recognition problems (e.g. CIFAR10, CIFAR100, CelebA) and when training on smaller, 2-class problems for which we have already seen consistency of linear CNNs (fig. 4).

As an additional test of the hypothesis that the energy profiles we see in real-world CNNs are mostly due to the implicit bias, we created new datasets in which we artificially introduced strong correlations between the label and particular PCA components, and trained VGG on them. The image labels were determined by the projection of the average patch onto some PCA component, such that the 5,000 images with the largest magnitude of projection were labeled 1, and so on. As can be seen in fig. 7, once the average patch of each class is changed in this way, the correlation between the true- and random-label profiles decreases from the original 0.9 ± 0.02 to as low as −0.24 ± 0.02, depending on the component changed, and the learned energy profiles no longer resemble the human sensitivity function.

6 RELATED WORK

The fact that different CNNs tend to learn similar filters in the first layer has been reported previously (e.g. Yosinski et al. (2014); Sarwar et al. (2017); Luan et al. (2017); Alekseev & Bobe (2019)), and follows a line of work on visualizing representations in deep CNNs (Zeiler & Fergus, 2013; Girshick et al., 2013). Our work extends this finding by showing that the overall representation in the first layer is not only qualitatively but also quantitatively similar: different CNNs not only learn to recognize spatial frequencies in their first layer but learn the same distribution of frequencies. This consistency is then extended to networks trained with true and random labels. Prior works have also studied the ability of neural networks to overfit random labels (Arpit et al., 2017), and to use representations learned in this regime for transfer learning. Maennel et al. (2020) hypothesised that the ability of networks trained on random labels to transfer to new tasks is due to the fact that, under certain conditions, the first layers of networks trained with random labels have covariances aligned with the input image patches. We expand on this hypothesis by showing that networks trained with true labels, for which there is no alignment guarantee, display the same energy profile as networks trained with random labels. We show this quantitatively for VGG11 and ResNet and theoretically for linear CNNs with a single hidden layer.
The fact that gradient descent training biases towards certain solutions has been known for many years, and has been proven mainly for linear predictors and separable data. Studies on linear networks (Soudry et al., 2018) and linear CNNs (Gunasekar et al., 2018) found that under certain conditions, gradient descent causes the effective linear predictor to be biased towards sparsity (in Fourier space in the case of CNNs), minimal norm or max-margin (Chizat & Bach, 2020). Similar works have shown that deep non-linear networks are biased towards learning lower frequencies first (Rahaman et al., 2019). Our work follows this line, focusing on the features learned in the first layer as a result of this bias and of the input image statistics. In its theoretical part, our analysis follows closely the methods used by LeCun et al. (1991) and Hacohen & Weinshall (2022), which analyze the dynamics of the weights of a fully connected network during learning with gradient descent. We use a similar technique, but our focus is on the first layer of a CNN. Additionally, we rely on linear networks to gain insight into the behavior of nonlinear networks, following previous works (Hacohen & Weinshall, 2022; Gissin et al., 2019). In the same manner, we support our simplified theoretical claims by quantitatively showing consistency of the theory in real-world CNNs such as VGG.

As a result of the consistency of the Gabors learned in the first layers of CNNs such as ResNet, GoogLeNet and DenseNet, each a state of the art at its time, some lines of work attempted building CNNs with learnable Gabors in the first layer (Sarwar et al., 2017; Luan et al., 2017; Alekseev & Bobe, 2019). Nevertheless, these failed to reach the high level of performance on benchmark tasks achieved by the "vanilla" architectures. Our work expands on this contradiction by showing that the networks consistently learn not only Gabor filters but also a specific distribution of their frequencies. This distribution was portrayed in our work using the energy profile of the first layer. This measure follows a long line of work on measuring and visualizing similarities between representations (Csiszárik et al., 2021; Kornblith et al., 2019; Nguyen et al., 2021; Li et al., 2015; Doimo et al., 2020), which varies between comparing the outputs of the transformations induced by the neurons and comparing the neurons themselves. The energy profile is another method in this line, while allowing for a semantically meaningful visualization of the representation (as PCA components correspond to spatial frequencies), without any need for dimensionality reduction.

7 DISCUSSION

The dramatic success of CNNs in computer vision has led to increased interest in the representations that they learn. In this paper we have focused on the representation that CNNs learn in the very first layer and presented a high degree of quantitative consistency between the energy profiles learned by different networks using different initializations and architectures. We have examined the hypothesis that this consistency is due to networks learning a representation that is useful for object recognition and presented results that are inconsistent with that hypothesis.
By analysing simple, linear CNNs we have shown that such networks will provably converge to a consistent energy profile under many conditions, but this profile may have nothing to do with the labels and is instead determined by an implicit bias due to the dynamics of gradient descent and the statistics of the input patches.

APPENDIX A LINEAR CONVOLUTIONAL NETWORKS

A.1 PROOFS OF THEOREMS ON LINEAR CONVOLUTIONAL NETWORKS

We first begin with a basic claim on the model composed of a hidden convolutional layer followed by a global average pool.

Lemma A.1. A linear CNN of depth 1 (followed by a global average pool) trained with MSE loss is equivalent to linear regression on the average image patch.

Proof. Let $\{X_i\}_{i=1}^N$ with $X_i \in \mathbb{R}^{c \times w \times h}$ be the set of training images, $\{y_i\}_{i=1}^N$ their binary labels ($y_i \in \{0,1\}$), and let the weights of the first layer be $W \in \mathbb{R}^{k \times c \times d \times d}$, i.e. $k$ filters of dimension $d \times d$. Denote the output dimensions of the convolution by $w', h'$; then:

$$\mathcal{L}\left(W; \{(X_i, y_i)\}_{i=1}^N\right) = \frac{1}{N}\sum_{i=1}^N \frac{1}{2}\left\|\left(\frac{1}{k \cdot w' \cdot h'}\sum_{k, w', h'} X_i * W\right) - y_i\right\|^2$$

Summing over the output dimensions is equivalent to summing over dot products of the patches with a single filter. Therefore, denoting by $K_i \in \mathbb{R}^{w' \cdot h' \times c \cdot d^2}$ the patch matrix of the $i$'th image and by $\tilde{W} \in \mathbb{R}^{c \cdot d^2 \times k}$ the reshaped weight matrix:

$$\mathcal{L}\left(W; \{(X_i, y_i)\}_{i=1}^N\right) = \frac{1}{N}\sum_{i=1}^N \frac{1}{2}\left\|\left(\frac{1}{w' \cdot h'}\mathbf{1}\right)^T K_i \tilde{W}\left(\frac{1}{k}\mathbf{1}\right) - y_i\right\|^2$$

Noting that $\left(\frac{1}{w' \cdot h'}\mathbf{1}\right)^T K_i$ is the average patch of the $i$'th image and $\tilde{W}\left(\frac{1}{k}\mathbf{1}\right)$ the average filter concludes the proof.

Another lemma we will use further on claims that during training of the linear CNN model only the average filter changes, while the filter covariance remains as at initialization. Proofs therefore extend easily from one filter to multiple filters.

Lemma A.2. In a linear CNN of depth 1 followed by a global average pool, of any width, trained with GD and MSE loss, the average filter changes during the iterations while the covariance of the filters remains as at initialization.

Proof. Following the notation of lemma A.1, denote by $K \in \mathbb{R}^{N \times c \cdot d^2}$ the average image patch matrix, i.e. the matrix whose $i$'th row is the average patch of the image $X_i$. The network consists of filters $w_1, \ldots, w_m$ and is therefore trained with the following loss:

$$\mathcal{L}(w_1 \ldots w_m; K, y) = \left\|\frac{1}{m}\sum_{i=1}^m K w_i - y\right\|^2 = \left\|K\left(\frac{1}{m}\sum_{i=1}^m w_i\right) - y\right\|^2 = \|K\bar{w} - y\|^2 \quad (3, 4)$$

where $\bar{w}$ is the average filter. The dynamics of a single filter in this layer are:

$$\frac{\partial \mathcal{L}}{\partial w_j} = \frac{1}{2m}K^T\left(K\left(\frac{1}{m}\sum_{i=1}^m w_i\right) - y\right) = \frac{1}{2m}K^T(K\bar{w} - y) \quad (5)$$

meaning that the gradients w.r.t. all filters are equal and depend only on the average filter at the current iteration. By recursion we can see that the change in the average filter is as follows, for learning rate $\eta$:

$$\bar{w}^t = \frac{1}{m}\sum_{i=1}^m \left(w_i^{t-1} - \eta\nabla\mathcal{L}^{t-1}(w_i)\right) = \frac{1}{m}\sum_{i=1}^m \left(w_i^{t-1} - \eta\nabla\mathcal{L}^{t-1}(\bar{w})\right) = \left(\frac{1}{m}\sum_{i=1}^m w_i^{t-1}\right) - \eta\nabla\mathcal{L}^{t-1}(\bar{w}) = \bar{w}^{t-1} - \eta\nabla\mathcal{L}^{t-1}(\bar{w}) \quad (6, 7)$$

Since the gradients for all filters are equal and depend only on the average filter, every filter moves by the same amount at every step, so the covariance of the filters is unchanged.

Theorem A.3. Let $K$ be a matrix whose $i$'th row is the average image patch of the $i$'th image, let $y$ be the vector of labels of all images, and let $\bar{K} = KU$ be the same matrix in the PC basis (with $U$ the PCA eigenvector matrix).
The squared energy profile of the weights of a linear CNN, initialized with random weights sampled with zero mean and covariance $\sigma^2 I$ and trained with GD, is equal to:

$$e_i^2 := \frac{1}{M}\sum_{j=1}^M \langle f_j, p_i\rangle^2 = \tilde{w}_i^2 + \sigma^2 \quad (8)$$

where $\tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^T y$ is the solution to a regularized regression problem in the PC basis that regresses the average patch in an image against its label, with $\Lambda = \Lambda(K^TK, t, \eta)$ a matrix depending on the eigenvalues of $K^TK$, the GD iteration $t$ and the step size $\eta$.

Proof. It follows from lemma A.2 that during training all filters change by the average filter. We will show that a single filter (at iteration $t$ of GD) corresponds to a solution of ridge regression with some matrix $\Lambda = \Lambda(t, \eta, K^TK)$, with $\eta$ the step size. Unrolling the recursion of GD updates, and assuming $w$ is initialized at $w = 0$:

$$w^t = w^{t-1} - \eta\bar{K}^T\left(\bar{K}w^{t-1} - y\right) = w^{t-1} - \eta\bar{K}^T\bar{K}w^{t-1} + \eta\bar{K}^Ty$$
$$w^t = \eta\sum_{j=0}^{t-1}\left(I - \eta\bar{K}^T\bar{K}\right)^j \bar{K}^Ty \quad (9)$$

In this coordinate system, $\bar{K}^T\bar{K}$ is a diagonal matrix with the empirical variances $\hat{\sigma}_i^2$ on the diagonal if the data are centered. If the matrix is not centered, then $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$, where $\hat{\Sigma}$ is a diagonal matrix with the empirical variances on the diagonal and $\hat{\mu}_i$ is the empirical mean estimating $\mathbb{E}_x[\langle x, p_i\rangle]$. This is because in PCA coordinates $\bar{K} = KU$, where $U$ contains the eigenvectors as columns. Since $K$ is not centered, $K = K_0 + \mathbf{1}K_{avg}^T$ with $K_0$ zero-mean and $K_{avg}$ the average row. Therefore $\bar{K}^T\bar{K} = \left(K_0U + \mathbf{1}K_{avg}^TU\right)^T\left(K_0U + \mathbf{1}K_{avg}^TU\right) = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$, where the cross term $K_0^T(\mathbf{1}K_{avg}^T)$ vanishes since $K_0$ has zero mean. Therefore:

$$w^t = \eta\sum_{j=0}^{t-1}\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j \bar{K}^Ty \quad (10)$$

Notice that $\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j$ can be decomposed in the following manner using the binomial theorem:

$$\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j = \left(I - \eta\hat{\Sigma}\right)^j + \sum_{k=1}^j \binom{j}{k}(-\eta)^k \|\hat{\mu}\|^{2(k-1)}\left(I - \eta\hat{\Sigma}\right)^{j-k}\hat{\mu}\hat{\mu}^T \quad (11)$$

Substituting back into equation 10:

$$w^t = \eta\sum_{j=0}^{t-1}\left[\left(I - \eta\hat{\Sigma}\right)^j + \sum_{k=1}^j \binom{j}{k}(-\eta)^k \|\hat{\mu}\|^{2(k-1)}\left(I - \eta\hat{\Sigma}\right)^{j-k}\hat{\mu}\hat{\mu}^T\right]\bar{K}^Ty \quad (12)$$

Looking at the $i$'th coordinate, with $\lambda_i$ the $i$'th eigenvalue on the diagonal of $\hat{\Sigma}$:

$$w^t(i) = (\bar{K}^Ty)(i)\sum_{j=0}^{t-1}(1 - \eta\lambda_i)^j + (\hat{\mu}\hat{\mu}^T\bar{K}^Ty)(i)\sum_{j=1}^{t-1}\frac{1}{\|\hat{\mu}\|^2}\left(\sum_{k=1}^j \binom{j}{k}\left(-\eta\|\hat{\mu}\|^2\right)^k (1 - \eta\lambda_i)^{j-k}\right) \quad (13)$$

After some algebra:

$$w^t(i) = \left(\frac{1 - \left(1 - \eta(\lambda_i + \|\hat{\mu}\|^2)\right)^t}{\|\hat{\mu}\|^2(\lambda_i + \|\hat{\mu}\|^2)} - \frac{1 - (1 - \eta\lambda_i)^t}{\lambda_i}\right)(\hat{\mu}\hat{\mu}^T\bar{K}^Ty)(i) + \left(\frac{1 - (1 - \eta\lambda_i)^t}{\lambda_i}\right)(\bar{K}^Ty)(i) \quad (14)$$

In matrix notation, define the diagonal matrix $A$ with $A_{ii} = \frac{1 - (1 - \eta\lambda_i)^t}{\lambda_i}$ and the diagonal matrix $B$ with $B_{ii} = \frac{1 - \left(1 - \eta(\lambda_i + \|\hat{\mu}\|^2)\right)^t}{\|\hat{\mu}\|^2(\lambda_i + \|\hat{\mu}\|^2)}$, and we get:

$$w^t = (B - A)\hat{\mu}\hat{\mu}^T\bar{K}^Ty + A\bar{K}^Ty \quad (15)$$

Solving

$$w^t = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^Ty = \left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T + \Lambda\right)^{-1}\bar{K}^Ty \quad (16)$$

for $\Lambda$ gives

$$\Lambda = \left(B + (A - B)\hat{\mu}\hat{\mu}^T\right)^{-1} - \hat{\Sigma} - \hat{\mu}\hat{\mu}^T \quad (17)$$

which defines the regularization matrix. Since the filter covariance stays constant throughout training by lemma A.2, treating the filters as a random variable initialized with covariance $\sigma^2 I$ (in the PCA basis) means that their empirical second moment equals the sum of the squared mean and the variance. Therefore, denoting the filters by $f_j$, we get in the $i$'th coordinate:

$$\frac{1}{M}\sum_{j=1}^M \langle f_j, p_i\rangle^2 = \left(\frac{1}{M}\sum_{j=1}^M \langle f_j, p_i\rangle\right)^2 + \frac{1}{M}\sum_{j=1}^M \left(\langle f_j, p_i\rangle - \frac{1}{M}\sum_{j'=1}^M \langle f_{j'}, p_i\rangle\right)^2 = \tilde{w}^2(i) + \sigma^2 \quad (18)$$

Theorem A.4 (Effect of Labels).
Let $W^t_{True}$ be the weights of the first layer of a linear CNN with a single hidden layer and any width, trained for $t$ steps on a binary classification task with MSE loss and gradient descent, and let $W^t_{Random}$ be the weights of the first layer of the same CNN trained with random labels drawn from a Bernoulli distribution. If the average patch of both classes is identical, and the dataset is balanced between them, then at any training iteration:

$$\mathbb{E}_{y\sim Bernoulli(\frac{1}{2})}\left[W^t_{Random}\right] = W^t_{True} \quad (19)$$

Proof. Let $K \in \mathbb{R}^{N \times c \cdot d^2}$ be the average image patch matrix and $y \in \{0,1\}^N$ the image labels. From lemma A.1, training a linear CNN of depth 1 followed by a global average pool is equivalent to solving the following linear regression problem for a weight matrix $W \in \mathbb{R}^{c \cdot d^2 \times 1}$:

$$\mathcal{L}(W; K, y) = \frac{1}{N}\cdot\frac{1}{2}\|KW - y\|^2$$

Using gradient descent with learning rate $\eta$, the update rule for $W$ is:

$$W_t = W_{t-1} - \frac{\eta}{N}K^T(KW_{t-1} - y) = \left(I - \frac{\eta}{N}K^TK\right)W_{t-1} + \frac{\eta}{N}K^Ty \quad (20)$$

Notice that in expectation $\mathbb{E}_{y\sim Bernoulli(\frac{1}{2})}[y] = \frac{1}{2}\mathbf{1}$, so $\mathbb{E}_{y\sim Bernoulli(\frac{1}{2})}[K^Ty]$ is (half) the sum of all average image patches. By our assumption, the average patch is equal between the classes. Denote this average patch by $z$; since $K$ is the average patch matrix and the dataset is balanced, the true labels give $K^Ty = \frac{N}{2}z$, i.e. $z = \frac{2}{N}K^Ty$. Combining this observation with the above:

$$\mathbb{E}_{y\sim Bernoulli(\frac{1}{2})}\left[\frac{\eta}{N}K^Ty\right] = \frac{\eta}{N}\cdot\frac{1}{2}K^T\mathbf{1} = \frac{\eta}{2N}\cdot Nz = \frac{\eta}{N}K^Ty_{True} \quad (21)$$

which concludes the proof. Note that we assumed the CNN has width 1, but lemma A.2 suffices to generalize to any width.

Theorem A.5 (Solution in the PCA Basis). Let $\tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^Ty$ be as described in theorem A.3, for $\bar{K}$ the average image patch matrix in the PCA basis and $\Lambda = \Lambda(\bar{K}^T\bar{K}, t, \eta)$. Denote by $\hat{\mu}$ the empirical mean projection onto the PCA basis and by $\hat{\Sigma}$ the uncentered data covariance in the PCA basis. If the labels are drawn randomly from a Bernoulli distribution, then in expectation $\tilde{w}$ can be calculated at any iteration $t$ and for any step size $\eta$ with the following formula:

$$\mathbb{E}_{y\sim Bernoulli(\frac{1}{2})}[\tilde{w}] \propto \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1 + \hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}\hat{\mu} \quad (22)$$

with $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$.

Proof. Following the notation from before, denote by $K \in \mathbb{R}^{N \times c \cdot d^2}$ the average patch matrix and by $\bar{K}$ the same matrix in PCA coordinates. From theorem A.3, $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$. Solving the linear ridge regression problem in this coordinate system, as described in theorem A.3:

$$\mathcal{L}(w; \bar{K}, y) = \frac{1}{2}\left\|\bar{K}w - y\right\|^2 + \frac{1}{2}w^T\Lambda w \;\Rightarrow\; \tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^Ty \quad (23)$$

In expectation over a random $y$, as described in theorem A.4, $\mathbb{E}[y] = \frac{1}{2}\mathbf{1}$, and therefore $\mathbb{E}[\bar{K}^Ty] = \frac{N}{2}\hat{\mu}$. As mentioned before, $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$. Define $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$, a matrix summing the PCA variances and the regularization coefficients. Now, using the Woodbury matrix identity:

$$\left(\hat{\Sigma}' + \hat{\mu}\hat{\mu}^T\right)^{-1} = \hat{\Sigma}'^{-1} - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T\hat{\Sigma}'^{-1}}{1 + \hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}} = \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1 + \hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}$$

and we get:

$$\tilde{w} \propto \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1 + \hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}\hat{\mu}$$

A.2 CORRELATION FIGURES

As mentioned in section 4, the energy profiles of linear CNNs trained with SGD on true and random labels are far more similar to each other than to their random initialization or to the optimal solution of the corresponding linear regression problem. To complement figure 4, table 3 displays the mean and standard deviation of the correlation coefficients between the mentioned energy profiles. Again, it is clear that there is a high similarity between the energy profiles obtained when training a linear CNN with SGD on true and on random labels.
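Both building blocks of the proofs above are easy to check numerically. The sketch below is our own sanity check, not code from the paper: the first part verifies lemma A.1, i.e. that a linear convolution followed by a global average pool over channels and locations equals a dot product between the average filter and the average patch; the second verifies the unrolled GD form of equation (9) from the proof of theorem A.3 on synthetic data.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Lemma A.1: conv + global average pool == <average filter, average patch>.
c, d, k = 3, 5, 8
x = torch.randn(1, c, 32, 32)                     # one image
w = torch.randn(k, c, d, d)                       # k filters
lhs = F.conv2d(x, w).mean()                       # average over k, w', h'
avg_patch = F.unfold(x, kernel_size=d).mean(dim=2).squeeze(0)
avg_filter = w.reshape(k, -1).mean(dim=0)
print(torch.allclose(lhs, avg_patch @ avg_filter, atol=1e-5))   # True

# Equation (9): GD on ||Kw - y||^2 / 2 from w = 0 matches the unrolled form
# w_t = eta * sum_{j < t} (I - eta K^T K)^j K^T y.
N, p, eta, T = 400, 27, 1e-3, 100
K = torch.randn(N, p)                             # average patch matrix
y = torch.randint(0, 2, (N,)).float()
w = torch.zeros(p)
for _ in range(T):
    w = w - eta * K.T @ (K @ w - y)
S, acc, P = K.T @ K, torch.zeros(p, p), torch.eye(p)
for _ in range(T):
    acc, P = acc + P, P @ (torch.eye(p) - eta * S)   # acc = sum of (I - eta S)^j
print(torch.allclose(w, eta * acc @ (K.T @ y), atol=1e-4))      # True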
APPENDIX B EXPANDED RESULTS ON SIMILARITY BETWEEN PRETRAINED MODELS

B.1 ACCURACY OF NETWORKS TRAINED WITH AND WITHOUT A FROZEN FIRST LAYER

As shown in figure 3, networks of different depths converge to the same minimal loss value whether or not their first layer is trained. To complement this, we present the accuracies of these models below (fig. 9), echoing this result.

B.2 COMPARISON OF PRETRAINED CNNS ON CIFAR AND IMAGENET

To expand on the similarity between the first layers of different architectures, we present correlation plots emphasizing the difference between a random initialization and the learned weights of different networks on different datasets. Presented are figures comparing pretrained models on ImageNet (figs. 10 and 11), CIFAR10 (fig. 12), CIFAR100 (fig. 13), and ResNets trained on different datasets (fig. 14). All models were downloaded through the PyTorch Model Hub. Although it might seem odd that the correlation on ImageNet is much higher than on the CIFAR datasets, we believe this is due to resolution: while on the CIFAR datasets the correlation is calculated over an energy profile in $\mathbb{R}^{27}$, the ImageNet profiles live in $\mathbb{R}^{147}$, making the calculated correlation smoother and less sensitive to noise. This is demonstrated in fig. 15, which presents the correlation between 27 components of the ImageNet profiles. When the comparison is restricted to these 27 components, the correlation coefficients between the different models drop and are roughly equal to those between the different models on the CIFAR datasets.

B.3 COMPARISON OF VGG WITH DIFFERENT LOSSES

Although theorem A.4 and all other theorems are proved for a linear network with MSE loss (as is customary in theoretical works on linear networks, e.g. Hacohen & Weinshall (2022); LeCun et al. (1991)), in practice most CNNs for multi-class classification are trained with cross-entropy loss. To test the effect on the energy profile of a real network, we trained VGG with both cross-entropy and MSE loss, with true and with random labels; the results are displayed in figure 16 and the correlations in table 4. As can be seen in the figure, even in this case the networks' energy profiles are highly correlated, supporting our hypothesis that the main difference between the formula of theorem A.5 and the pretrained networks is due to the oversimplification of the linear model, and not, for example, to the loss used in theory versus practice.

B.4 FULL FIGURES ON TRUE AND RANDOM LABELS

B.5 EXPERIMENTAL DETAILS

All models, linear and non-linear, were trained with SGD and a constant learning rate of 0.1. No preprocessing was applied to the data except when stated otherwise. All models were trained for 150 epochs with minibatches of size 256. All results are averaged over at least 3 different random seeds. When referring to models "trained with random labels", we trained the models until they overfit the training data, as both ResNet and VGG can reach 99% train accuracy on CIFAR10 with random labels. All models in the main text were trained by us, except those depicted in figure 1. All pretrained models in figure 1 and appendix B.2 were downloaded from the PyTorch Model Hub.
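For concreteness, this setup corresponds to a training loop of roughly the following form. This is a minimal PyTorch reconstruction under the stated hyperparameters, not the authors' released code; the frozen-first-layer variant of section 3 is indicated in the comments.

import torch
import torchvision
from torchvision import transforms

# Constant-LR SGD, batch size 256, 150 epochs, no augmentation (section B.5).
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)

model = torchvision.models.vgg11(num_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

# Frozen-first-layer variant (section 3): keep the random filters fixed.
# for p in model.features[0].parameters():
#     p.requires_grad = False

for epoch in range(150):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()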
1. What is the focus of the paper regarding the analysis of CNN representations?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental results and conclusions?
3. Do you have any concerns or questions about the paper's methodology, notation, and theoretical setting?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any issues with the correlation computation and definition used in the paper?
6. Can you explain Equation 11 and why the two matrices commute?
7. Is there a problem with Corollary 4.4 regarding the order of expectation in the random label setting?
Summary Of The Paper
The paper aims to analyze the representation learned in the first layer of CNNs. Existing work identifies that such representations are highly consistent across architectures and initializations. This paper introduces a functional called the energy profile, aimed at quantifying this consistency, and reaches the conclusion that the consistency is mainly owing to the implicit bias of SGD (the learning order is from low to high frequency). The authors also provide several experimental results to support their claim.

Strengths And Weaknesses
Strengths: The problem is interesting and well motivated. The experiment showing that the networks can get comparable performance with the first layer frozen is interesting. Some of the results seem somewhat (but not totally) convincing, e.g., that the consistency of the learned representation is partially owing to the implicit bias and the spectrum structure of the patches in the images.
Weaknesses: I have several concerns about the paper, which are detailed in the next section. Please see below.

Clarity, Quality, Novelty And Reproducibility
What do the filters look like if trained on random labels, compared with figure 1? Can you also display the eigenfunctions $p_i$? Are they as interpretable as figure 1(b, c)?
Important: Please explain how the correlation is computed/defined in figure 2; by eyeballing the plots in figure 2, I can see there is a correlation between the green/orange curves, but the reported correlation of 0.98 seems too high, as there is still quite a lot of variance between green and orange. In addition, looking at figure 4, I cannot agree that the green (SGD + random labels) and red (SGD + true labels) profiles are similar and highly correlated, except that both have low energy in the high-frequency modes. The latter is not surprising, since high frequencies (low eigenvalues) are very hard to learn by SGD in the linear model setting. As such, I have doubts about the definition of correlation used in the paper, which may place too much weight on the uninformative high-frequency components.
The notation is unclear. Please add more details about what $c$, $h$, $w$ represent (in appendix lemma A.1). In the equation below lemma A.1, the sum is over $k, w', h'$, but the summand does not depend on $k, w', h'$; how is the convolution defined in this equation? The simple linear convolution is also very unconventional: I have never seen taking the average over all channels, which is equivalent to having only one channel in the linear convolutional network. In addition, the theoretical setting is overly simple, being basically linear regression on the mean of the patches. I am not convinced the results can be extrapolated to a more complicated/practical setting.
I also have doubts about the sufficiency of the energy profile proposed by this paper for studying the learned representation in the first layer. Note that the energy profile characterizes only one functional (moment) of the distribution of the learned filters; how can we know that there aren't other functionals (or combinations of them) that are better suited, e.g., to distinguish learned representations between true labels and random labels (if such differences exist)? Though the energy profile does roughly capture the so-called implicit bias of GD (learning order from low to high frequency?).
Can you explain equation 11? Why do the two matrices $(I - \eta\hat{\Sigma})$ and $\hat{\mu}\hat{\mu}^T$ commute? Please clarify.
Corollary 4.4 seems problematic for random labels. There is an issue with the order of expectation.
The authors compute the mean of the prediction over a random label (Bernoulli) and then compute the energy profile. I disagree with this order. The correct approach should compute the energy profile first and then take the expectation over the random label. In addition, the authors' analysis in the paper relies on the mean of the random label being 1/2 and fails if the mean is zero (+1/−1 labels). The results seem specific to the settings designed by the authors.
ICLR
Title On Representation Learning in the First Layer of Deep CNNs and the Dynamics of Gradient Descent Abstract It has previously been reported that the representation that is learned in the first layer of deep CNNs is very different from the initial representation and highly consistent across initialization and architecture. In this work, we quantify this consistency by considering the set of filters as a filter bank and measuring its energy distribution. We find that the energy distribution is remarkably consistent and try to determine the source of this consistency. We show that this consistency cannot be explained by the fact that CNNs learn a representation that is useful for recognition and that CNNs trained with fixed, random filters in the first layer yield comparable recognition performance to full learning. We then show that similar behavior occurs in simple, linear CNNs and obtain an analytical characterization of the energy profile of linear CNNs trained with gradient descent. Our analysis shows that the energy profile is determined by two factors (1) the correlation of the average patch and the class label and (2) an implicit bias given the dynamics of gradient descent. Finally, we show that in commonly used image recognition datasets the correlation between the average patch and the class label is very low and it is the implicit bias that best explains the consistency of representations observed in real-world CNNs. 1 INTRODUCTION The remarkable success of Convolutional Neural Networks (CNNs) on a wide variety of image recognition tasks is often attributed to the fact that they learn a good representation of images. Support for this view comes from the fact that very different CNNs tend to learn similar representations and that features of CNNs that are trained for one task are often useful in very different tasks (Yosinski et al., 2014; Doimo et al., 2020; Gidaris et al., 2018). A natural starting point for investigating representation learning in deep CNNs is the very first layer. Studying this representation is somewhat easier than studying more general representation learning since each neuron can be characterized by a single linear filter which can be easily visualized as an image. Figure 1 shows examples of visualizations of the learned filters: unlike the initial filters which are random and devoid of structure, the trained filters resemble Gabor filters (Krizhevsky et al., 2012) and are visually similar for different trained networks. In addition to the qualitative similarity of filters that can be seen in figure 1, there have also been some reports that the filters are quantitatively similar. For example, Li et al. (2015) showed that one can often find a good match for filters learned by one CNN in the set of filters learned by another CNN. In this work we introduce a new measure for qualitatively measuring consistency of representations in the very first layer of a CNN. Using this measure, we show a remarkably high degree of consistency (correlation coefficient close to 1) between the representations that are learned by different CNNs, regardless of initializations, architectures and training sets. 
The fact that these filters are so different from the initialization is interesting in the context of the theory of deep networks which indicates that under certain conditions they can be trained in a ”lazy” regime (Chizat et al., 2019) - the representations in all intermediate layers hardly differing from their initialization and only the last output layer has weights that differ from initialization. Figure 1 clearly shows that ”lazy training” does not occur in the first layer of deep CNNs and that consistent representation learning occurs instead. A natural explanation for the learning of consistent filters in the first layer is that these filters are optimal in some sense for solving the recognition task. Indeed, Gabor filters and similar oriented filters were often used as a representation of images in the days of ”handcrafted” features for computer vision (Dalal & Triggs, 2005). Similarly, under this explanation, the networks have simply learned that in order to minimize the training loss the first layer of deep CNNs must have filters that resemble Gabors. In this paper we present empirical and theoretical results that are inconsistent with this explanation. We show that CNNs with commonly used architectures can be trained with fixed, random filters in the first layer and still yield comparable performance to full learning. We then show that consistent representation learning in the first layer also occurs in simple, linear CNNs and prove that for these CNNs the dynamics of gradient descent learning together with the statistics of natural image patches introduce an implicit bias towards certain filter distributions. We then show that in real-world CNNs trained on commonly used datasets, a highly consistent representation is learned in the first layer when the true labels are replaced with random labels and therefore that it is the implicit bias that best explains the consistency of representations observed in real-world CNNs. 2 QUANTIFYING CONSISTENCY USING ENERGY PROFILES The visual similarity of the filters that are learned in the first layer of CNNs (figure 1) is easy to see, but we wish to quantify the similarity of representations and go beyond the qualitative similarity. Recent works (Kornblith et al., 2019; Nguyen et al., 2021) suggest comparing two representations based on the distance between the distribution over patches induced by the two representations. But estimating this distance in high dimensions is nontrivial and two very different networks might give similar distributions over patches when the input distribution is highly skewed Ding et al. (2021). In this paper we propose a new method which avoids these shortcomings and is especially relevant for the first layer of a CNN, in which the representation is a linear function of the input patch. Given two patches x1, x2 and a linear transformation A whose rows are the filters, the squared distance between the transformed patches is ∥Ax1 −Ax2∥2 or alternatively (x1 − x2)TATA(x1 − x2). Thus a natural way to understand how distances are transformed when going from x1 to Ax1 is to look at the eigendecomposition of ATA: the ith eigenvalue of ATA measures how much distances in the direction of the ith eigenvector are increased or decreased by the transformation. 
The eigenvectors of ATA are simply the principal components of the filters, and if we assume translation invariance of the filters, they will have the same principal components as those of natural image patches: namely sines and cosines of different spatial frequencies (Aapo Hyvärinen & Hoyer, 2009). Thus the transformation of similarities is mostly driven by the eigenvalues of ATA and we focus on these to define the consistency of learned filters. Denote p1, ..., pk the PCA components computed from the training images’ patches and A the weights of the first layer of some model trained on these images (where each row of A, ATj is a filter). We define the energy w.r.t each component pi: ei = ∥Api∥ = √∑ j (ATj pi) 2 (1) The energy profile of a set of filters is simply the vector e = (e1..ek) and we measure consistency of two different sets of filters by measuring the correlation coefficient between their energy profiles. Note that this consistency measure is invariant to a rescaling of the filters, to a permutation of the filters and to any orthogonal transformation of the filters. This way of comparing linear representation is equivalent to considering the set of filters as a filter bank and measuring the sensitivity of the filter bank to different spatial frequencies. Figures figs. 2a to 2c show that different models trained with gradient descent are remarkably consistent using our proposed measure. Regardless of architecture or the particular dataset that they were trained on, different CNNs have very similar energy profiles that are less sensitive to very high spatial frequencies and very low ones and the peak sensitivity is for intermediate spatial frequencies (qualitatively similar to the sensitivity pattern of the human visual system which is also sensitive to intermediate spatial frequencies, as shown in figure 2d). Table 1 quantifies this similarity. The correlation coefficient between energy profiles with different random initializations and architecture is remarkably high (over 0.98 in many cases) and the correlation between the learned profiles and the random initialization is close to zero. An expansive set of experiments on various models and datasets can be found in B.2. Thus the use of our new measure allows us to quantitatively show that deep CNNs trained with gradient descent using standard parameters do not exhibit ”lazy” training in the first layer and that highly consistent representation learning takes place. We now ask: what determines this consistency? 3 IS CONSISTENCY DUE TO CNNS LEARNING SEMANTICALLY MEANINGFUL FEATURES? A natural explanation for the remarkable consistency of the learned representation in the first layer is that CNNs learn a representation that is good for object recognition. In particular, high spatial frequencies are often noisy while very low spatial frequencies are often influenced by illumination conditions. Thus learning a representation that is mostly sensitive to intermediate spatial frequencies makes sense if the goal is to recognize objects. Similarly, human vision is also mostly sensitive to intermediate spatial frequencies (Owsley, 2003) (see figure 2d), presumably for the same reasons. In order to test this hypothesis we asked if training modern CNNs while freezing the first layer will result in a decrease in performance. If indeed, Gabors of intermediate frequencies are optimal for object recognition, we would expect performance to suffer if we froze the first layer to have random filters with equal energy in all frequencies. 
Figure 3 shows that there is almost no change in the performance of modern CNNs when the weights in the first layer are frozen. This is true when measuring training accuracy, training loss or validation accuracy and validation loss. Apparently the networks learn to compensate for the random filters in the first layer by learning different weights in the subsequent layers. In other words, if we were to train modern CNNs using some discrete search over weights (e.g. genetic programming) to minimize the training loss, there is no reason to expect that consistent Gabors of intermediate frequencies would be found. Equally good training loss can be obtained with random filters in the first layer. To summarize, while quantitatively highly consistent representations are learned in the first layer of commonly used CNNs, this cannot be explained by the networks minimization of the training loss, This motivates us to analyze representation learning in much simpler CNNs. 4 SIMPLE, LINEAR CNN In order to understand the consistency that we observe among energy profiles in the first layer of trained CNNs, we turn to analyzing a very simple model: a linear CNN with one hidden layer. Specifically, in this simple model, the first layer includes convolutions with W different filters and the output is given by a global average pool of the filters over all locations. This model is clearly very different from real-world CNNs but we use it because it allows closed form-analysis and also exhibits some of the same consistency behaviors that we found in real-world CNNs. Specifically we have found that: • The energy profiles of simple, linear CNNs are highly consistent across initializations and widths and are very different from the energy profiles of the initial conditions. • The energy profile of simple, linear CNNs trained with gradient descent is different from the energy profile of the filters that globally optimize the loss. • The energy profile of simple, linear CNNs are highly consistent when the true labels are replaced with random labels. These properties are all displayed in figure 4 where we show results of training a linear model on binary tasks from CIFAR10. In all cases, the energy profile that is learned with true labels (red) is different from the initial conditions and is sensitive mostly to intermediate frequencies while the optimal energy profile (shown in blue) is quite different and shows a high sensitivity to high spatial frequencies. Training these networks with random labels give an energy profile (in green) that is similar to that of the true labels. The following theorems show the same behaviors analytically. Theorem 4.1. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss. The energy profile for the filters that globally minimize the loss is given by µ2opt with: µopt = (K TK)−1KT y Where K is the matrix of average patches in each image (in the PCA basis) and y is the vector of labels. Proof. This simply follows from the model being equivalent to linear model in the average image patch (lemmas A.1 and A.2), and solving the suitable linear regression problem. Theorem 4.2. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero mean filters and covariance σ2I . 
The squared energy profile for the filters at any iteration is given by µ2GD + σ 2 with: µGD = (K TK + Λ)−1KT y With the same K and y as in theorem 4.1, and Λ is a spectral regularizer that depends on the learning rate, number of iterations and KTK. Proof. The full proof is given in the appendix and uses a similar technique that was used to derive spectral biases in gradient descent learning of fully connected networks LeCun et al. (1991). See A.3 for a full proof. Theorem 4.3. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero mean filters and covariance σ2I . If the label y is uncorrelated with the average patch in each image then the squared energy profile for the filters at any iteration is given by µ20 + σ 2 with: µ0 ∝ (KTK + Λ)−1KT 1 (2) Where K is the matrix of average patches in each image and 1 is a vector of all ones and Λ is a spectral regularizer that depends on the learning rate and number of iterations. Proof. This follows from theorem 4.2 and the fact that the quantity Ky is proportional to the empirical expectation of the product between each average image patch and it’s label. When the average patch is uncorrelated with the label, this expectation is the product of the expected average patch and expected label, so it is proportional to KT 1 which is the expectation of the average patch. Corollary 4.4. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero mean filters and covariance σ2I . If the average patch is the same in the two classes, then training with the true labels and training with random labels will give the same energy profile given by: µ0 ∝ (KTK + Λ)−1KT 1 Proof. Follows from KT y being a sum of all average image patches with yi = 1. If both average class patches are equal, then in expectation over a random (binary) y, the sums of all average patches will be equal. Full description of µ0 can be found in theorem A.5. These theorems show that in the case of simple, linear CNNs different networks (initial conditions, widths) will learn the same representation in the first layer but despite the fact that the loss is convex, the learned representation will not be the representation that globally optimizes the training loss with any finite number of training steps. Rather, the use of gradient descent will introduce an implicit bias that favors certain energy profiles that depend on the number of training steps and the learning rate (with the form of the bias given explicitly by equation 2). This implicit bias causes the learned profiles to be highly consistent yet very different from the optimal one. 5 IMPLICIT BIAS IN NONLINEAR CNNS The theoretical analysis of linear CNNs shows that if the true labels are uncorrelated with the average patch in an image, the learned energy profile will converge to a consistent profile that is determined by the dynamics of gradient descent and the statistics of the input patches. We therefore ask: is the consistent energy profile that we find in real-world CNNs also due to the dynamics of SGD? According to our analysis, the implicit bias is strongest when the label is uncorrelated with the average patch. 
We measured this correlation in commonly used image classification datasets by computing the correlation coefficient between the average PCA value in an image and its label for different tasks (a binary classification of one class versus the rest). The results are shown in figure 5. For almost all tasks and coefficients, the correlation coefficient is close to zero. Given the small amount of correlation, we would expect a similar energy profile when we train with random labels and true labels. Maennel et al. (2020) have already shown that when CNNs are trained with random labels, the representations that are learned in the first layers are still useful for other tasks. Here, we ask a more quantitative question: are the energy profiles the same? As shown in figures 6 and table 2 the answer is clearly yes. Correlations above 0.9 are consistently observed even when the true labels are replaced with random labels and the representations that are learned are still mostly sensitive to intermediate spatial frequencies. This is true both when trained on multiclass recognition problems (e.g. CIFAR10, CIFAR100, CelebA) and when trained on smaller, 2-class problems for which we’ve already seen consistency of linear CNNs (fig. 4). As an additional test of the hypothesis that the energy profiles we see in real-world CNNs are mostly due to the implicit bias, we created new datasets in which we artificially created strong correlations between the label and particular PCA components and trained VGG on them. The image labels were determined by the average patch projection onto some PCA component, such that the 5,000 images with the largest magnitude of projection were labeled with 1 and so on. As can be seen in fig. 7, once changing the average patch of each class manually the correlation between the true and random profiles decreases from the original 0.9 ± 0.02 to as low as −0.24 ± 0.02, depending on the component changed and the learned energy profile no longer resemble the human sensitivity function. 6 RELATED WORK The fact that different CNNs tend to learn similar filters in the first layer has been reported previously (e.g. Yosinski et al. (2014); Sarwar et al. (2017); Luan et al. (2017); Alekseev & Bobe (2019)), and follows from a line of work of visualizing representations in deep CNNs Zeiler & Fergus (2013); Girshick et al. (2013). Our work extends this finding by showing that the overall representation in the first layer is not only qualitatively but also is quantitatively similar - different CNNs not only learn to recognize spatial frequencies in their first layer but also the same distribution of frequencies. This consistency is then expanded to networks trained with true and random labels. Prior works have also studied the ability of neural networks to overfit random labels (Arpit et al., 2017), and use representations learned in this regime for transfer learning. Maennel et al. (2020) hypothesised that the ability of networks trained on random labels to transfer to new tasks is due to the fact that under certain conditions, the first layers of networks trained with random labels have aligned covariances with input image patches. We expand on this hypothesis by showing that networks trained with true labels, for which there is no alignment guarantee, display the same energy profile as networks trained with random labels. We show this quantitatively for VGG11 and ResNet and theoretically for linear CNNs with a single hidden layer. 
The fact that gradient descent training biases towards certain solutions has been known for many years, and proven mainly for linear predictors and separable data. Studies on linear networks (Soudry et al., 2018) and linear CNNs (Gunasekar et al., 2018) found that under certain conditions, gradient descent causes the effective linear predictor to be biased towards sparsity (in Fourier space in the case of CNNs) or minimal norm or max-margin (Chizat & Bach, 2020). Similar works have also shown that deep non-linear networks are biased towards learning lower frequencies first (Rahaman et al., 2019). Our work follows this line, focusing on the features learned in the first layer as a result of this bias, and that of the input image statistics. In its theoretical part, our analysis methods follow closely on the methods used by (LeCun et al., 1991; Hacohen & Weinshall, 2022) which analyze the dynamics of weights in a fully connected network during learning with gradient descent. We use a similar technique but our focus is on the first layer of a CNN. Additionally, we rely on linear networks to gain insight into the behavior of nonlinear networks, following previous works (Hacohen & Weinshall, 2022; ?; Gissin et al., 2019). In the same manner, we support our simplified theoretical claims by quantitatively showing consistency of the theory in real-world CNNs such as VGG. As a result of the consistency of Gabors being learned in the first layers of CNNs such as ResNet, GoogLeNet and DenseNet, each a state-of-the-art at its time - some lines of work attempted building CNNs with learnable Gabors Sarwar et al. (2017); Luan et al. (2017); Alekseev & Bobe (2019) in the first layer. Nevertheless, these failed to reach the high level of performance on benchmark tasks such as the ”vanilla” architectures. Our work expands on this contradiction by showing that not only do the networks consistently learn Gabor filters but also a specific distribution of their frequencies. The distribution mentioned above, was portrayed in our work using the energy profile of the first layers. This measure follows a line of many works in the field of measuring and visualizing similarities between representations Csiszárik et al. (2021); Kornblith et al. (2019); Nguyen et al. (2021); Li et al. (2015); Doimo et al. (2020), varying between comparing the output of transformations induced by the neurons or the neurons themselves. The energy profile is yet another method in this line, while allowing for semantically meaningful (as PCA components correspond to spatial frequencies) visualization of the representation, without any need for dimensionality reduction. 7 DISCUSSION The dramatic success of CNNs in computer vision has lead to increased interest in the representations that they learn. In this paper we have focused on the representation that CNNs learn in the very first layer and presented a high degree of quantitative consistency between the energy profiles learned by different networks using different initializations and architectures. We have examined the hypothesis that this consistency is due to networks learning a representation that is useful for object recognition and presented results that are inconsistent with that hypothesis. 
By analyzing simple linear CNNs, we have shown that such networks provably converge to a consistent energy profile under many conditions, but that this profile may have nothing to do with the labels: it is instead determined by an implicit bias arising from the dynamics of gradient descent and the statistics of the input patches.

APPENDIX A LINEAR CONVOLUTIONAL NETWORKS

A.1 PROOFS OF THEOREMS ON LINEAR CONVOLUTIONAL NETWORKS

We first begin with a basic claim on the model composed of a hidden convolutional layer followed by a global average pool.

Lemma A.1. A linear CNN of depth 1 (followed by a global average pool) trained with MSE loss is equivalent to linear regression on the average image patch.

Proof. Let $\{X_i\}_{i=1}^N$ with $X_i \in \mathbb{R}^{c \times w \times h}$ be the set of training images, $\{y_i\}_{i=1}^N$ their binary labels ($y_i \in \{0,1\}$), and let the weights of the first layer be $W \in \mathbb{R}^{k \times c \times d \times d}$: $k$ filters of dimension $d \times d$. Denoting the output dimensions of the convolution as $w', h'$:

$$L\left(W; \{(X_i, y_i)\}_{i=1}^N\right) = \frac{1}{N} \sum_{i=1}^N \frac{1}{2} \left\| \left( \frac{1}{k \cdot w' \cdot h'} \sum_{k, w', h'} X_i * W \right) - y_i \right\|^2$$

Summing over the output dimensions is equivalent to summing over dot products of the patches with a single filter. Therefore, denoting by $K_i \in \mathbb{R}^{w'h' \times cd^2}$ the patch matrix of the $i$'th image and by $\tilde{W} \in \mathbb{R}^{cd^2 \times k}$ the reshaped weight matrix:

$$L\left(W; \{(X_i, y_i)\}_{i=1}^N\right) = \frac{1}{N} \sum_{i=1}^N \frac{1}{2} \left\| \left( \frac{1}{w'h'} \mathbf{1} \right)^T K_i \tilde{W} \left( \frac{1}{k} \mathbf{1} \right) - y_i \right\|^2$$

Noting that $\left( \frac{1}{w'h'} \mathbf{1} \right)^T K_i$ is the average patch of the $i$'th image and $\tilde{W} \left( \frac{1}{k} \mathbf{1} \right)$ the average filter concludes the proof.

The next lemma states that during training of the linear CNN model only the average filter changes, while the covariance of the filters remains as at initialization; proofs for a single filter therefore extend easily to multiple filters.

Lemma A.2. In a linear CNN of depth 1 followed by a global average pool, of any width, trained with GD and MSE loss, the average filter changes over the iterations while the covariance of the filters remains as at initialization.

Proof. Following the notation of lemma A.1, denote by $K \in \mathbb{R}^{N \times cd^2}$ the average-image-patch matrix, whose $i$'th row is the average patch of the image $X_i$, and let the network consist of filters $w_1, \dots, w_m$, trained with the loss

$$L(w_1 \dots w_m; K, y) = \left\| \frac{1}{m} \sum_{i=1}^m K w_i - y \right\|^2 = \left\| K \left( \frac{1}{m} \sum_{i=1}^m w_i \right) - y \right\|^2 = \left\| K \bar{w} - y \right\|^2 \qquad (3\text{-}4)$$

where $\bar{w}$ is the average filter. The dynamics of a single filter in this layer are

$$\frac{\partial L}{\partial w_j} = \frac{1}{2m} K^T \left( K \left( \frac{1}{m} \sum_{i=1}^m w_i \right) - y \right) = \frac{1}{2m} K^T \left( K \bar{w} - y \right) \qquad (5)$$

meaning that the gradients with respect to all filters are equal and depend only on the average filter at the current iteration. By recursion, the change in the average filter is, for learning rate $\eta$:

$$\bar{w}^t = \frac{1}{m} \sum_{i=1}^m \left( w_i^{t-1} - \eta \nabla L^{t-1}(w_i) \right) = \frac{1}{m} \sum_{i=1}^m \left( w_i^{t-1} - \eta \nabla L^{t-1}(\bar{w}) \right) = \left( \frac{1}{m} \sum_{i=1}^m w_i^{t-1} \right) - \eta \nabla L^{t-1}(\bar{w}) = \bar{w}^{t-1} - \eta \nabla L^{t-1}(\bar{w}) \qquad (6\text{-}7)$$

We conclude that all filters receive identical gradients that depend only on the average filter; hence the filters all move by the same amount and their covariance is preserved.
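Lemma A.1 is easy to check numerically. Below is a minimal sketch (the shapes and random inputs are our own choices): it computes a valid cross-correlation, as deep learning libraries do, followed by a global average pool over filters and positions, and verifies that the result equals the dot product between the average patch and the average filter.

```python
# Numerical sanity check of lemma A.1 (a sketch; shapes are assumptions):
# a linear conv layer followed by a global average pool over space and
# channels gives the same output as a dot product between the average
# image patch and the average filter.
import numpy as np

rng = np.random.default_rng(0)
c, h, w, d, k = 3, 8, 8, 3, 4      # channels, image size, filter size, #filters
X = rng.normal(size=(c, h, w))
W = rng.normal(size=(k, c, d, d))

hp, wp = h - d + 1, w - d + 1      # "valid" output dimensions h', w'
out = np.zeros((k, hp, wp))
patches = np.zeros((hp * wp, c * d * d))
for i in range(hp):
    for j in range(wp):
        patch = X[:, i:i + d, j:j + d].reshape(-1)
        patches[i * wp + j] = patch
        for f in range(k):
            out[f, i, j] = patch @ W[f].reshape(-1)

gap = out.mean()                                   # pool over k, h', w'
avg_patch = patches.mean(axis=0)                   # average image patch
avg_filter = W.reshape(k, -1).mean(axis=0)         # average filter
assert np.allclose(gap, avg_patch @ avg_filter)    # two sides of lemma A.1
```

The assertion holds for any input because the pooled output is bilinear in the patches and the filters, so both averages factor out.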
Theorem A.3. Let $K$ be a matrix whose $i$'th row is the average image patch of the $i$'th image, let $y$ be the vector of labels of all images, and let $\bar{K} = KU$ be the same matrix in the PC basis (with $U$ the PCA eigenvector matrix). The squared energy profile of the weights of a linear CNN, initialized with random weights sampled with zero mean and covariance $\sigma^2 I$ and trained with GD, is equal to

$$e_i^2 := \frac{1}{M} \sum_{j=1}^M \langle f_j, p_i \rangle^2 = \tilde{w}_i^2 + \sigma^2 \qquad (8)$$

where $\tilde{w} = \left( \bar{K}^T \bar{K} + \Lambda \right)^{-1} \bar{K}^T y$ is the solution of a regularized regression problem in the PC basis that regresses the average patch of an image against its label, and $\Lambda = \Lambda(K^T K, t, \eta)$ is a matrix depending on the eigenvalues of $K^T K$, the GD iteration $t$, and the step size $\eta$.

Proof. It follows from lemma A.2 that during training all filters change by the average filter. We show that a single filter at iteration $t$ of GD corresponds to the solution of a ridge regression problem with some matrix $\Lambda = \Lambda(t, \eta, K^T K)$, where $\eta$ is the step size. Unrolling the GD updates, assuming $w$ is initialized at $w = 0$:

$$w_t = w_{t-1} - \eta \bar{K}^T \left( \bar{K} w_{t-1} - y \right) = w_{t-1} - \eta \bar{K}^T \bar{K} w_{t-1} + \eta \bar{K}^T y$$

$$w_t = \eta \sum_{j=0}^{t-1} \left( I - \eta \bar{K}^T \bar{K} \right)^j \bar{K}^T y \qquad (9)$$

In this coordinate system, $\bar{K}^T \bar{K}$ is a diagonal matrix with the empirical variances $\hat{\sigma}_i^2$ on the diagonal if the data is centered. If it is not centered, then $\bar{K}^T \bar{K} = \hat{\Sigma} + \hat{\mu} \hat{\mu}^T$, where $\hat{\Sigma}$ is a diagonal matrix with the empirical variances on the diagonal and $\hat{\mu}_i$ is the empirical mean estimating $\mathbb{E}_x \left[ \langle x, p_i \rangle \right]$. This is because in PCA coordinates $\bar{K} = KU$, where $U$ contains the eigenvectors as columns. Since $K$ is not centered, write $K = K_0 + \mathbf{1} K_{avg}^T$ with $K_0$ zero-mean and $K_{avg}$ the average row. Therefore

$$\bar{K}^T \bar{K} = \left( K_0 U + \mathbf{1} K_{avg}^T U \right)^T \left( K_0 U + \mathbf{1} K_{avg}^T U \right) = \hat{\Sigma} + \hat{\mu} \hat{\mu}^T,$$

where the cross term $(K_0 U)^T (\mathbf{1} K_{avg}^T U)$ vanishes since $K_0$ has zero mean. Therefore:

$$w_t = \eta \sum_{j=0}^{t-1} \left( I - \eta \bar{K}^T \bar{K} \right)^j \bar{K}^T y = \eta \sum_{j=0}^{t-1} \left( I - \eta \left( \hat{\Sigma} + \hat{\mu} \hat{\mu}^T \right) \right)^j \bar{K}^T y \qquad (10)$$

Notice that $\left( I - \eta \left( \hat{\Sigma} + \hat{\mu} \hat{\mu}^T \right) \right)^j$ can be decomposed using the binomial theorem:

$$\left( I - \eta \left( \hat{\Sigma} + \hat{\mu} \hat{\mu}^T \right) \right)^j = \left( I - \eta \hat{\Sigma} \right)^j + \sum_{k=1}^j \binom{j}{k} (-\eta)^k \|\hat{\mu}\|^{2(k-1)} \left( I - \eta \hat{\Sigma} \right)^{j-k} \hat{\mu} \hat{\mu}^T \qquad (11)$$

Substituting back into equation 10:

$$w_t = \eta \sum_{j=0}^{t-1} \left[ \left( I - \eta \hat{\Sigma} \right)^j + \sum_{k=1}^j \binom{j}{k} (-\eta)^k \|\hat{\mu}\|^{2(k-1)} \left( I - \eta \hat{\Sigma} \right)^{j-k} \hat{\mu} \hat{\mu}^T \right] \bar{K}^T y \qquad (12)$$

Looking at the $i$'th coordinate, with $\lambda_i$ the $i$'th eigenvalue on the diagonal of $\hat{\Sigma}$:

$$w_t(i) = (\bar{K}^T y)(i) \sum_{j=0}^{t-1} (1 - \eta \lambda_i)^j + (\hat{\mu} \hat{\mu}^T \bar{K}^T y)(i) \sum_{j=1}^{t-1} \frac{1}{\|\hat{\mu}\|^2} \left( \sum_{k=1}^j \binom{j}{k} \left( -\eta \|\hat{\mu}\|^2 \right)^k (1 - \eta \lambda_i)^{j-k} \right) \qquad (13)$$

After some algebra:

$$w_t(i) = \left[ \frac{1 - \left( 1 - \eta (\lambda_i + \|\hat{\mu}\|^2) \right)^t}{\|\hat{\mu}\|^2 \left( \lambda_i + \|\hat{\mu}\|^2 \right)} - \frac{1 - (1 - \eta \lambda_i)^t}{\lambda_i} \right] (\hat{\mu} \hat{\mu}^T \bar{K}^T y)(i) + \frac{1 - (1 - \eta \lambda_i)^t}{\lambda_i} (\bar{K}^T y)(i) \qquad (14)$$

In matrix notation, define the diagonal matrices $A$ and $B$ with $A_{ii} = \frac{1 - (1 - \eta \lambda_i)^t}{\lambda_i}$ and $B_{ii} = \frac{1 - \left( 1 - \eta (\lambda_i + \|\hat{\mu}\|^2) \right)^t}{\|\hat{\mu}\|^2 \left( \lambda_i + \|\hat{\mu}\|^2 \right)}$, so that

$$w_t = (B - A) \hat{\mu} \hat{\mu}^T \bar{K}^T y + A \bar{K}^T y \qquad (15)$$

Solving

$$w_t = \left( \bar{K}^T \bar{K} + \Lambda \right)^{-1} \bar{K}^T y = \left( \hat{\Sigma} + \hat{\mu} \hat{\mu}^T + \Lambda \right)^{-1} \bar{K}^T y \qquad (16)$$

for $\Lambda$ yields

$$\Lambda = \left( A + (B - A) \hat{\mu} \hat{\mu}^T \right)^{-1} - \hat{\Sigma} - \hat{\mu} \hat{\mu}^T \qquad (17)$$

which defines the regularization matrix. Since the filter covariance stays constant throughout training (lemma A.2), treating the filters as random variables initialized with covariance $\sigma^2 I$ (in the PCA basis) means that their empirical second moment equals the squared mean plus the variance. Therefore, denoting the filters in the PCA basis by $\tilde{f}_j$, we get in the $i$'th coordinate:

$$\frac{1}{M} \sum_{j=1}^M \langle f_j, p_i \rangle^2 = \left( \frac{1}{M} \sum_{j=1}^M \langle f_j, p_i \rangle \right)^2 + \frac{1}{M} \sum_{j=1}^M \left( \langle f_j, p_i \rangle - \frac{1}{M} \sum_{j=1}^M \langle f_j, p_i \rangle \right)^2 = \tilde{w}^2(i) + \sigma^2 \qquad (18)$$
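Two steps of this proof can be verified numerically. The sketch below (all dimensions, the step size, and the initialization scale are our assumptions) checks that the unrolled geometric series of eq. (9) matches explicit GD iterates starting from $w = 0$, and that the empirical squared energy profile of a population of filters matches $\tilde{w}_i^2 + \sigma^2$ as in eq. (8).

```python
# Minimal check of two steps in the proof of theorem A.3 (sizes, step size,
# and sigma are our assumptions). (i) eq. (9): unrolled GD from w = 0 equals
# a geometric series in (I - eta K^T K). (ii) eq. (8): the squared energy
# profile of M filters equals the squared mean filter plus the init variance.
import numpy as np

rng = np.random.default_rng(1)
N, p, eta, T = 200, 10, 1e-3, 500
K = rng.normal(size=(N, p))                  # average-patch matrix (PC basis)
y = rng.integers(0, 2, size=N).astype(float)

# (i) explicit GD on 0.5 * ||Kw - y||^2, starting from w = 0.
w = np.zeros(p)
for _ in range(T):
    w -= eta * K.T @ (K @ w - y)
A = np.eye(p) - eta * K.T @ K
closed = eta * sum(np.linalg.matrix_power(A, j) for j in range(T)) @ K.T @ y
assert np.allclose(w, closed)

# (ii) by lemma A.2 every filter receives the same update, so filter j at
# time T equals its initialization plus the mean trajectory w.
M, sigma = 100_000, 0.1
F0 = rng.normal(scale=sigma, size=(M, p))
FT = F0 - F0.mean(axis=0) + w                # center: mean filter is exactly w
energy_sq = (FT ** 2).mean(axis=0)           # e_i^2 = (1/M) sum_j <f_j, p_i>^2
print(np.abs(energy_sq - (w ** 2 + sigma ** 2)).max())  # small, O(1/sqrt(M))
```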
Theorem A.4 (Effect of Labels). Let $W_{True}^t$ be the weights of the first layer of a linear CNN with a single hidden layer and any width, trained for $t$ steps on a binary classification task with MSE loss and gradient descent, and let $W_{Random}^t$ be the weights of the first layer of the same CNN trained with random labels drawn from a Bernoulli distribution. If the average patch of both classes is identical and the dataset is balanced between them, then at any training iteration:

$$\mathbb{E}_{y \sim Bernoulli(\frac{1}{2})} \left[ W_{Random}^t \right] = W_{True}^t \qquad (19)$$

Proof. Let $K \in \mathbb{R}^{N \times cd^2}$ be the average image patch matrix and $y \in \{0,1\}^N$ the image labels. From lemma A.1, training a linear CNN with one layer followed by a global average pool is equivalent to solving the following linear regression problem for a weight matrix $W \in \mathbb{R}^{cd^2 \times 1}$:

$$L(W; K, y) = \frac{1}{N} \cdot \frac{1}{2} \left\| KW - y \right\|^2$$

Using gradient descent with learning rate $\eta$, the update rule for $W$ is:

$$W_t = W_{t-1} - \frac{\eta}{N} K^T \left( K W_{t-1} - y \right) = \left( I - \frac{\eta}{N} K^T K \right) W_{t-1} + \frac{\eta}{N} K^T y \qquad (20)$$

Notice that in expectation $\mathbb{E}_{y \sim Bernoulli(\frac{1}{2})}[y] = \frac{1}{2} \mathbf{1}$, so $\mathbb{E}_{y \sim Bernoulli(\frac{1}{2})} \left[ K^T y \right]$ is half the sum of all average image patches. By assumption, the average patch is equal between the two classes; denote it by $z$. Since $K$ is the average patch matrix and the dataset is balanced, $z = \frac{2}{N} K^T y$. Combining these observations:

$$\mathbb{E}_{y \sim Bernoulli(\frac{1}{2})} \left[ \frac{\eta}{N} K^T y \right] = \frac{\eta}{N} \cdot \frac{1}{2} K^T \mathbf{1} = \frac{\eta}{2N} \cdot N z = \frac{\eta}{N} K^T y \qquad (21)$$

which concludes the proof. Note that we assumed the CNN has width 1, but lemma A.2 suffices to generalize the argument to any width.

Theorem A.5 (Solution in PCA Basis). Let $\tilde{w} = \left( \bar{K}^T \bar{K} + \Lambda \right)^{-1} \bar{K}^T y$ be as described in theorem A.3, with $\bar{K}$ the average image patch matrix in the PCA basis and $\Lambda = \Lambda(\bar{K}^T \bar{K}, t, \eta)$. Denote by $\hat{\mu}$ the empirical mean projection onto the PCA basis and by $\hat{\Sigma}$ the uncentered data covariance in the PCA basis. If the labels are drawn randomly from a Bernoulli distribution, then in expectation $\tilde{w}$ can be calculated at any iteration $t$ and for any step size $\eta$ by:

$$\mathbb{E}_{y \sim Bernoulli(\frac{1}{2})} \left[ \tilde{w} \right] \propto \left( I - \frac{\hat{\Sigma}'^{-1} \hat{\mu} \hat{\mu}^T}{1 + \hat{\mu}^T \hat{\Sigma}'^{-1} \hat{\mu}} \right) \hat{\Sigma}'^{-1} \hat{\mu} \qquad (22)$$

with $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$.

Proof. Following the notation from before, denote by $K \in \mathbb{R}^{N \times cd^2}$ the average patch matrix and by $\bar{K}$ the same matrix in PCA coordinates. From theorem A.3, $\bar{K}^T \bar{K} = \hat{\Sigma} + \hat{\mu} \hat{\mu}^T$. Solving the linear ridge regression problem in this coordinate system, as described in theorem A.3:

$$L(w; \bar{K}, y) = \frac{1}{2} \left\| \bar{K} w - y \right\|^2 + \frac{1}{2} w^T \Lambda w \;\Rightarrow\; \tilde{w} = \left( \bar{K}^T \bar{K} + \Lambda \right)^{-1} \bar{K}^T y \qquad (23)$$

In expectation over random $y$, as described in theorem A.4, $\mathbb{E}[y] = \frac{1}{2} \mathbf{1}$, therefore $\mathbb{E} \left[ \bar{K}^T y \right] = \frac{N}{2} \hat{\mu}$. As mentioned before, $\bar{K}^T \bar{K} = \hat{\Sigma} + \hat{\mu} \hat{\mu}^T$. Define $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$, the matrix summing the PCA variances and the regularization coefficients. By the Woodbury matrix identity:

$$\left( \hat{\Sigma}' + \hat{\mu} \hat{\mu}^T \right)^{-1} = \hat{\Sigma}'^{-1} - \frac{\hat{\Sigma}'^{-1} \hat{\mu} \hat{\mu}^T \hat{\Sigma}'^{-1}}{1 + \hat{\mu}^T \hat{\Sigma}'^{-1} \hat{\mu}} = \left( I - \frac{\hat{\Sigma}'^{-1} \hat{\mu} \hat{\mu}^T}{1 + \hat{\mu}^T \hat{\Sigma}'^{-1} \hat{\mu}} \right) \hat{\Sigma}'^{-1}$$

and we get:

$$\tilde{w} \propto \left( I - \frac{\hat{\Sigma}'^{-1} \hat{\mu} \hat{\mu}^T}{1 + \hat{\mu}^T \hat{\Sigma}'^{-1} \hat{\mu}} \right) \hat{\Sigma}'^{-1} \hat{\mu}$$
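The Woodbury step above is easy to check numerically. The following sketch (the dimension and the random diagonal $\hat{\Sigma}'$ are arbitrary choices of ours) verifies that applying the inverse of a diagonal-plus-rank-one matrix to $\hat{\mu}$ matches the closed form of eq. (22).

```python
# Minimal numerical check of the Woodbury step in theorem A.5 (dimension
# and the random Sigma' are arbitrary choices for illustration).
import numpy as np

rng = np.random.default_rng(2)
p = 12
Sigma_p = np.diag(rng.uniform(0.5, 2.0, size=p))   # Sigma' = Sigma + Lambda
mu = rng.normal(size=(p, 1))                       # empirical mean projection

lhs = np.linalg.inv(Sigma_p + mu @ mu.T) @ mu      # (Sigma' + mu mu^T)^{-1} mu
Si = np.linalg.inv(Sigma_p)
s = float(mu.T @ Si @ mu)
rhs = (np.eye(p) - (Si @ mu @ mu.T) / (1.0 + s)) @ Si @ mu
assert np.allclose(lhs, rhs)                       # matches the form of eq. (22)
print("max abs diff:", np.abs(lhs - rhs).max())
```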
A.2 CORRELATION FIGURES

As mentioned in section 4, the energy profiles of linear CNNs trained with SGD on true and on random labels were far more similar to each other than to either their random initialization or the optimal solution of the corresponding linear regression problem. To complement fig. 4, table 3 displays the mean and standard deviation of the correlation coefficients between these energy profiles. Again, the energy profiles obtained by training a linear CNN with SGD on true and on random labels are highly similar.

APPENDIX B EXPANDED RESULTS ON SIMILARITY BETWEEN PRETRAINED NETWORKS

B.1 ACCURACY OF NETWORKS TRAINED WITH AND WITHOUT A FROZEN FIRST LAYER

As shown in fig. 3, networks of different depths converge to the same minimal loss value whether their first layer is trained or frozen. To complement this, we present the accuracies of these models below (fig. 9), echoing the result.

B.2 COMPARISON OF PRETRAINED CNNS ON CIFAR AND IMAGENET

To expand on the similarity between the first layers of different architectures, we present correlation plots emphasizing the difference between a random initialization and the learned weights of different networks on different datasets: pretrained models on ImageNet (figs. 10 and 11), CIFAR10 (fig. 12), CIFAR100 (fig. 13), and ResNets trained on different datasets (fig. 14). All models were downloaded through the PyTorch Model Hub. Although it might seem odd that the correlations on ImageNet are much higher than on the CIFAR datasets, we believe this is due to resolution: on the CIFAR datasets the correlation is computed over an energy profile in $\mathbb{R}^{27}$, whereas the ImageNet profiles live in $\mathbb{R}^{147}$, making the calculated correlation smoother and less sensitive to noise. This is demonstrated in fig. 15, which presents the correlation between 27 components of the ImageNet profiles; when the profiles are compared at this resolution, the correlation coefficients between the different models drop and are roughly equal to those between the different models on the CIFAR datasets.

B.3 COMPARISON OF VGG WITH DIFFERENT LOSSES

Although theorem A.4 and all other theorems are proved for a linear network with MSE loss (as is customary in theoretical works on linear networks, e.g. Hacohen & Weinshall (2022); LeCun et al. (1991)), in practice most CNNs for multi-class classification are trained with cross-entropy loss. To test the effect on the energy profile of a real network, we trained VGG with both cross-entropy and MSE losses, and with true and random labels; the results are displayed in fig. 16 and the correlations in table 4. Even in this case the networks' energy profiles are highly correlated, supporting our hypothesis that the remaining difference between the formula of theorem A.5 and the pretrained networks is due to the oversimplification of the linear model, and not, for example, to the loss used in theory versus practice.

B.4 FULL FIGURES ON TRUE AND RANDOM LABELS

B.5 EXPERIMENTAL DETAILS

All models, linear and non-linear, were trained with SGD with a constant learning rate of 0.1. No preprocessing was applied to the data except where stated otherwise. All models were trained for 150 epochs with minibatches of size 256. All results are averaged over at least 3 random seeds. When referring to models "trained with random labels", we trained the models until they overfit the training data; both ResNet and VGG can reach 99% train accuracy on CIFAR10 with random labels. All models in the main text were trained by us, except those depicted in fig. 1. All pretrained models in fig. 1 and appendix B.2 were downloaded from the PyTorch Model Hub.
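For completeness, the following sketch shows one way an energy profile of a pretrained first layer can be computed and compared across models. The use of torchvision's pretrained VGG weights, CIFAR10 patches, and a 3x3 PCA basis are our assumptions for illustration; the paper's exact pipeline may differ.

```python
# Minimal sketch: energy profile of a pretrained first layer in the PCA
# basis of image patches, and the correlation between two models' profiles.
# Model and dataset choices here are illustrative assumptions.
import numpy as np
import torchvision

def energy_profile(conv_weight, pcs):
    # conv_weight: (M, c, d, d) filters; pcs: rows are PCA components p_i.
    F = conv_weight.reshape(conv_weight.shape[0], -1)
    proj = F @ pcs.T                              # <f_j, p_i> for all j, i
    return np.sqrt((proj ** 2).mean(axis=0))      # e_i as defined in eq. (8)

# PCA basis from 3x3 CIFAR10 patches (channels-first, matching conv layout).
data = torchvision.datasets.CIFAR10(root="./data", download=True).data[:2000]
data = data.astype(np.float32) / 255.0
patches = np.stack([im[i:i + 3, j:j + 3].transpose(2, 0, 1).reshape(-1)
                    for im in data
                    for i in range(0, 30, 3) for j in range(0, 30, 3)])
patches -= patches.mean(axis=0)
_, _, pcs = np.linalg.svd(patches, full_matrices=False)   # (27, 27)

# Two pretrained models whose first conv layers are both 3x3.
m1 = torchvision.models.vgg11(weights="IMAGENET1K_V1")
m2 = torchvision.models.vgg13(weights="IMAGENET1K_V1")
e1 = energy_profile(m1.features[0].weight.detach().numpy(), pcs)
e2 = energy_profile(m2.features[0].weight.detach().numpy(), pcs)
print("profile correlation:", np.corrcoef(e1, e2)[0, 1])   # typically high
```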
Summary Of The Paper

The paper shows that even when the first layer is kept random, CNNs are able to learn with the remaining layers to results comparable with full training, and that a highly consistent representation is learned in the first layer even when the true labels are replaced with random labels.

Strengths And Weaknesses

Strengths:
- Studies aimed at understanding the patterns of filter learning are interesting for improving our understanding of deep networks.
- As far as I know, this is the first study showing that randomly labeled data leads to similar filters in the first layers.

Weaknesses:
- The intuition relating Gabor-like features to first layers dates back more than 10 years, and there is much more than this interpretation in the literature.
- In the fixed-random experiments, I wonder whether the layers following the fixed ones would exhibit similar patterns.
- The conclusions are limited to the empirical findings, which hardly generalize to practical scenarios.
- Some claims are not well supported, such as "presented results that are inconsistent with useful representation hypothesis", since even with random labels (or surrogate labels, as other papers have demonstrated) such basic filters represent general low-level image patterns.
- The paper presentation could be improved.

Clarity, Quality, Novelty And Reproducibility

The paper is clear in general, but some ideas are hard to follow, and others disregard more recent developments in the interpretation of filter learning. The paper studies only one optimization method, and the results and conclusions could change with the optimization algorithm. In terms of presentation: there is a broken reference (?), the math notation is not standardized throughout the paper, and the figures should be improved; in particular, the text is too small in many figures.