NIPS
Title: Online Decision Based Visual Tracking via Reinforcement Learning

Abstract

A deep visual tracker is typically based on either object detection or template matching, while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within the different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposals. Extensive results show that our DTNet achieves state-of-the-art tracking performance as well as a good balance between accuracy and efficiency. The project website is available at https://vsislab.github.io/DTNet/.

1 Introduction

As a fundamental task in computer vision, visual tracking aims to estimate the trajectory of a specified object in a sequence of images. Inspired by the success of deep learning in general computer vision tasks, recent visual tracking algorithms mostly use deep networks, particularly CNNs, which extract deep representations for various scenes. Among these deep trackers are two dominant tracking schemes. The first treats tracking as a detection task and typically builds a deep network to distinguish the foreground target from the background [5, 25, 39]. The second regards tracking as a template matching task and addresses it via a matching network such as a Siamese network, which learns a general similarity function to obtain the image patch best matching the target [11, 15, 29].

The detection tracker continuously updates the network online with the image patch it detects as the target. The diverse appearances of these patches give the tracker good adaptability, while the continuous update is inefficient for real-world tracking. Also, albeit occasionally, an incorrect detection in a frame, which represents a noisy appearance of the target, could mislead the tracker.
The template tracker utilizes the initial appearance of the target as a fixed template to conduct the matching operation, which runs efficiently at the cost of adaptability. Either the detection or the template tracker is merely suitable for a particular group of scenes. For instance, as shown in the top row of Fig. 1, due to the temporal occlusion within a frame, the detection tracker incorrectly captures the bicycle as the target in that frame and cannot recover from it in the succeeding frame. By contrast, the template tracker is robust to the temporal occlusion as it always looks back to the real target in the initial frame for delivering the matching. On the other hand, it is easy to understand that, as shown in the bottom row of Fig. 1, the template tracker is not reliable under the temporal deformation of the target while the detection tracker works well with it.

Some recent works investigated various fusion schemes to pursue better performance [2, 17, 20, 34, 40]. However, directly fusing the two types of trackers together is not wise as they follow different tracking principles and thus cannot converge to each individual optimum simultaneously during training. Hence, it might be better to make them co-exist for handling different scenes alternatively. Differing from previous fusion-based methods, this paper presents a framework of decision learning for the ensemble of the two types of trackers, where we explore how to automatically and intelligently switch between them for tracking in different scenes. Specifically, our method makes the two trackers compete with each other through a hierarchical reinforcement learning (HRL) framework so that it can make a proper online decision to choose the tracker which captures the target better in the current scene. This idea is based on the common observation, as shown in Fig. 1, that different types of trackers are merely good at tracking the targets in a particular group of frames.

We name the ensemble framework DTNet as it comprises a decision module and a tracker module, as illustrated in Fig. 2. The decision module starts with a switch network that encodes the image patch inherited from the previous frame and the target in the initial frame to decide whether the detection or the template tracker should be selected for the current frame. It is followed by a termination network which evaluates the output of the tracker to generate a probability of terminating the current tracker. The switch and the termination networks in fact form an “Actor-Critic” structure [21]. Such intelligent switching between the two trackers repeats until all frames of the video are processed. We provide a specifically designed scheme for jointly training the decision and the tracker modules end-to-end via HRL. Furthermore, to improve the detection tracker, a fully-convolutional classifier is learned to differentiate the target from the distracting content. Since it does not rely on a number of candidate proposals to predict the bounding boxes of the target, it avoids the issue of incorrect proposal predictions that could mislead the tracker.

The contributions of this paper are summarized as follows.

• We propose an ensemble framework which learns an online decision for visual tracking based on HRL, where the detection and the template trackers compete with each other to substantiate a switching strategy.
• We develop a novel proposal-free detection tracker, which does not require proposals of candidate bounding boxes of the target and thus makes the discriminating course flexible.

• Our method demonstrates state-of-the-art performance on several benchmarks. The ablation studies show that the decision mechanism composed of the switch and the termination networks can effectively select the proper tracker for different scenes.

2 Related Work

Detection trackers. Trackers based on object detection in each video frame usually learn a classifier to pick up the positive candidate patches wrapped around the previous observation. Nam and Han [25] proposed a lightweight CNN to learn generic feature representations through shared convolutional layers to detect the target object. Han et al. [14] selected a random subset of branches for model update to diversify the learned target appearance models. Fan and Ling [13] took self-structural information into account to learn a discriminative appearance model. Song et al. [30] integrated adversarial learning into a tracking-by-detection framework to reduce overfitting on single frames. However, an occasional incorrect detection in a frame is still prone to contaminate and mislead the target appearance models.

Template trackers. Trackers based on template matching have recently gained popularity due to their efficiency; they learn a similarity function to match the target template with the image patch in the search region of each frame. Tao et al. [31] utilized a Siamese network in an offline manner to learn a matching function from a large set of sequences, and then used the fixed matching function to search for the target in a local region. Bertinetto et al. [4] introduced a fully convolutional Siamese network (SiamFC) for tracking by measuring the region-wise feature similarity between the target object and the candidate. Wang et al. [36] incorporated an attention mechanism into the Siamese network to enhance its discriminative capacity and adaptability. However, these trackers are prone to drift when the target suffers variations such as shape deformation and color change in appearance, due to the fixed appearance of the template without an online update.

Fusion-based trackers. There exist some trackers adopting fusion strategies. The MEEM algorithm [40] proposed a multi-expert tracking framework with an entropy-regularized restoration scheme, and Li et al. [20] introduced a discrete graph optimization into the framework to handle the tracker drift problem. Wang et al. proposed MCCT [35], which selected the reliable outputs from multiple features to refine the tracking results. Bertinetto et al. [3] combined two image patch representations that are sensitive to complementary factors to learn a model robust to colour changes and deformations. However, it is not easy to fuse multiple trackers that differ significantly in principle, as they can hardly converge to each individual optimum simultaneously during training. Unlike the fusion-based methods above, our method aims to learn an online strategy to decide which tracker should be used for each individual frame.

3 Method

As shown in Fig. 2, the proposed framework consists of two modules: the decision module and the tracker module. As the key component of the entire framework, the former contains the switch network and the termination network, which work together to alternately select the template or the detection tracker; the two trackers compete with each other in the tracking task and jointly form the tracker module. A minimal sketch of this per-frame control flow is given below.
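To make the overall control flow concrete before detailing each component, the following is a minimal sketch of the per-frame decision loop. The `trackers`, `switch_net`, `termination_net`, and `crop` interfaces are hypothetical placeholders standing in for the modules described in this section, not the authors' actual implementation.

```python
def track_sequence(frames, template, trackers, switch_net, termination_net, crop):
    """Sketch of DTNet's per-frame decision loop; all interfaces are hypothetical."""
    state = template                                 # observation inherited across frames
    omega = switch_net.select(state, template)       # pick one of the candidate trackers
    boxes = []
    for frame in frames:
        box = trackers[omega].predict(frame, state)  # selected tracker localizes the target
        boxes.append(box)
        state = crop(frame, box)                     # patch handed on to the next frame
        # The termination network may end the current option; the switch network then
        # re-selects from *both* trackers rather than blindly toggling to the other one.
        if termination_net.should_terminate(state):
            omega = switch_net.select(state, template)
    return boxes
```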
In the decision module, the switch network encodes an image patch Φt inherited from the previous frame I_{t−1} and the initial template Φ*, and then outputs a binary signal to select a tracker. The selected tracker estimates the location of the target for the current frame I_t. The termination network evaluates the output of the tracker and generates a probability to decide whether the framework should keep using the current tracker or terminate it, which keeps the decision module from oscillating between the two trackers, especially when they have similar accuracy. Note that a decision to terminate merely indicates that the current tracker does not work well; it does not necessarily mean that the other tracker can perform better. Thus, in this case, the switch network still selects a new tracker from the two candidates instead of blindly switching to the tracker currently not in use. Fig. 2 illustrates all four possible switching situations of the framework.

3.1 Decision Module

Given a set of states S and actions A, a Markovian option ω ∈ Ω consists of three components [1]: an intra-option policy π : S × A → [0, 1], a termination condition β : S⁺ → [0, 1], and an initiation set I ⊆ S. Here we assume that ∀s ∈ S, ∀ω ∈ Ω : s ∈ I (i.e., both trackers are available in all states). If an option ω is taken, then actions are selected according to π_ω until the option terminates stochastically according to β_ω. To control the switching of trackers in an HRL manner, the decision module utilizes the termination policy together with the policy over options corresponding to the trackers.

Let Q_Ω denote the switch network, viewed as a value function over options parameterized by its network weights θ, and let β_{Ω,ν} denote the termination network. The termination probability, which decides whether the current tracker in use should be terminated, is estimated by β_{Ω,ν} depending on the option and its network weights ν. Specifically, we define Q_Ω as below to evaluate the value of option ω in a hierarchical reinforcement learning manner:

Q_Ω(s, ω; θ) = r(s, ω) + γ U(s′, ω),    (1)

where r(s, ω) denotes the reward that the agent receives after executing ω, the option for selecting a particular tracker, and γ is the discount factor. U(s′, ω) is the value of executing ω on a new state s′; it depends on the termination probability β_{ω,ν} and is computed by combining the outputs of the switch and the termination networks:

U(s′, ω) = (1 − β_{ω,ν}(s′)) Q_Ω(s′, ω) + β_{ω,ν}(s′) V_Ω(s′),    (2)

where β_{ω,ν}(s′) is the termination probability at state s′, and V_Ω(s′) is the optimum of the switch function, found by maximizing Q_Ω over options:

V_Ω(s′) = max_ω Q_Ω(s′, ω).    (3)

Let ω_good denote the option with the maximal value at s′. If the current option is ω_good, the agent will not terminate it, so β_{ω,ν} is close to 0 and, by Eq. (2), U(s′, ω_good) ≈ Q_Ω(s′, ω_good). If the current option ω is not a good one, Eq. (3) gives V_Ω(s′) = Q_Ω(s′, ω_good); the agent then tends to terminate the current option, so β_{ω,ν} is close to 1 and, again by Eq. (2), U(s′, ω) ≈ Q_Ω(s′, ω_good) as desired.

Note that U(s, ω) is differentiable. Its gradient with respect to the weights ν of the termination network is

∂U(s′, ω)/∂ν = −(∂β_{ω,ν}(s′)/∂ν) (Q_Ω(s′, ω) − V_Ω(s′)) + (1 − β_{ω,ν}(s′)) ∂U(s″, ω′)/∂ν.    (4)

A similar form as in Eq. (4) can be derived by expanding ∂U(s″, ω′)/∂ν recursively.
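As a concrete illustration of Eqs. (2) and (3), here is a minimal PyTorch sketch of the option value U(s′, ω); the tensor shapes and the function name are assumptions for illustration.

```python
import torch

def option_value(q_values: torch.Tensor, beta: torch.Tensor, omega: int) -> torch.Tensor:
    """U(s', omega) of Eq. (2), using V_Omega(s') from Eq. (3).

    q_values: (num_options,) switch-network outputs Q_Omega(s', .)
    beta: scalar termination probability beta_{omega,nu}(s') for the current option
    """
    v = q_values.max()                                # Eq. (3): V = max_w Q(s', w)
    return (1.0 - beta) * q_values[omega] + beta * v  # Eq. (2)
```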
Only the state-option pairs (s, ω) of one time step are involved in the calculation. As shown in Fig. 2, the switch network Q_Ω(s, ω), acting as the ‘Critic’, evaluates the value of options and provides the updating gradients for the termination network β_{ω,ν}, which essentially acts as the ‘Actor’: it evaluates the performance of the tracker in use to decide whether it should be terminated in the current frame, so that the agent can optionally switch to the other tracker for the next frame. The weights θ of the switch network are learned via the Bellman equation; the details are given in Section 3.3.

3.2 Tracker Module

Template tracker. We adopt SiamFC [4] as the template tracker. The standard Siamese architecture takes as input an image pair containing an exemplar image z and a candidate image x. The image z represents the object of interest (e.g., an image patch centered on the target object in the first video frame), while x is typically larger and represents the search area in the subsequent video frames. The features of z and x are extracted by the same CNN ϕ parameterized with τ and are cross-correlated as

f_τ(z, x) = ϕ_τ(z) ⋆ ϕ_τ(x) + b,    (5)

where b denotes a bias term which takes the value b ∈ R at every location and ⋆ represents the cross-correlation operation. Eq. (5) performs an exhaustive search for the pattern z over the image x. The goal is to match the maximum value in the response map f to the target location.

Detection tracker. To build a tracker based on object detection while avoiding the expensive process of proposal generation, we adopt a fully convolutional tracker, named FCT, as shown in Fig. 3, which includes a classification branch and a regression branch. The classification branch predicts the location of the target, while the regression branch predicts a 4D vector indicating the distances from the center of the target to the edges of its bounding box. Given the feature map F ∈ R^{H×W×C} of a backbone CNN and the total stride s of all previous layers, each location (x, y) in F corresponds to (⌊s/2⌋ + xs, ⌊s/2⌋ + ys) in the image, and we directly predict the class label and the regressed distances for each location in F [32].

It is possible that the same class of objects is considered as the target in one sequence but as background in another. Due to such variations and inconsistencies, simply using a typical classifier to assign “1” to the target and “0” to the background for all sequences is likely to cause conflicts across sequences [25]. Therefore, the proposed classification branch separates domain-independent information from the last domain-specific layer to capture shared representations via shared layers. Specifically, in each domain the location (x, y) is considered a positive sample with class label c* = 1 if it falls into the ground-truth box; otherwise, it is a negative sample (i.e., background) and c* = 0. The regression branch outputs a 4D vector re* = (l*, t*, r*, b*), where l*, t*, r* and b* denote the distances from the location of the target to the four edges of its bounding box, as shown in Fig. 3. The tracker finally outputs the classification score map c and the regression values re. The loss function for training is

L(c, re) = (1/N) Σ_{i=1}^{N} L_cls(c_i, c*_i) + (λ/N) Σ_{i=1}^{N} 1{c*_i > 0} · L_reg(re_i, re*_i),    (6)

where N denotes the total number of the video frames for training and 1{c*_i > 0} is the indicator function that restricts the regression loss to positive samples.
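For reference, the cross-correlation of Eq. (5) can be realized in PyTorch by using the exemplar features as a correlation kernel over the search features; this is a minimal single-sample sketch under assumed tensor shapes, not the authors' code. The feature-to-image mapping of the FCT is also shown.

```python
import torch
import torch.nn.functional as F

def xcorr(z_feat: torch.Tensor, x_feat: torch.Tensor, b: float = 0.0) -> torch.Tensor:
    """Eq. (5): f(z, x) = phi(z) ⋆ phi(x) + b for one exemplar/search pair.

    z_feat: (C, Hz, Wz) exemplar features; x_feat: (C, Hx, Wx) search features.
    Returns the (Hx-Hz+1, Wx-Wz+1) response map; its peak indicates the target.
    """
    response = F.conv2d(x_feat.unsqueeze(0),   # input  (1, C, Hx, Wx)
                        z_feat.unsqueeze(0))   # kernel (1, C, Hz, Wz)
    return response.squeeze(0).squeeze(0) + b

def feature_to_image(x: int, y: int, s: int) -> tuple:
    """Map feature-map location (x, y) to image coordinates given total stride s."""
    return s // 2 + x * s, s // 2 + y * s
```

Note that PyTorch's `conv2d` computes cross-correlation rather than a flipped convolution, so it matches the ⋆ operator of Eq. (5) directly.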
3.3 Joint Training of Decision and Tracker Modules

In this section, we detail the joint training procedure of the DTNet, in which the decision and the tracker modules are trained end-to-end. Given K training sequences, for the j-th one we randomly extract a piece of the training sequence I_j = {I_{1j}, I_{2j}, ..., I_{Tj}} with the corresponding ground truth G_j = {G_{1j}, G_{2j}, ..., G_{Tj}} in order, and each pair of adjacent frames is subject to a skip of n (0 ≤ n ≤ 5) frames with some probability. The initial target is sampled randomly around the ground truth in the first frame and regarded as the template. The switch network evaluates the features encoded in the template and the observation inherited from the previous frame and then selects a tracker. The reward during the switching process is defined as

r_t(s, ω) = η_L · D_IoU, if P_t > th_hi and P*_t < th_lo;
            η_L · D_IoU, if P_t < th_lo and P*_t > th_hi;
            η_M · D_IoU, if P_t > th_hi and P*_t > th_hi;
            η_S · D_IoU, if P_t < th_lo and P*_t < th_lo,    (7)

where P_t is the intersection-over-union (IoU) between the predicted bounding box B_t from the selected tracker and the ground truth G_t, P*_t is the IoU corresponding to the unselected tracker, and D_IoU is the difference between them. The above setup of the reward distinguishes three cases: (1) one tracker succeeds while the other fails; (2) both succeed; (3) both fail. Accordingly, three scaling coefficients are assigned in descending order, which encourages selecting the tracker with higher accuracy while guiding the competition between the trackers. The samples collected by the unselected tracker are used to update its corresponding network; in other words, we keep training the worse tracker to maintain the competitive relationship between the two. A new state s′ is then obtained for the current frame from the prediction, and the agent terminates the previous option with probability β_{ω,ν}(s′) and re-evaluates the value of the options.

For the switch module, the ‘Critic’ model Q_Ω(s, ω) can be learned using the Bellman equation [22] by minimizing the following loss:

L = (1/N) Σ_{i=1}^{N} (y_i − Q_Ω(s_i, ω_i; θ))²,    (8)

where y_i = r(s_i, ω_i) + γ[(1 − β_{ω_i,ν}(s′_i)) Q_Ω(s′_i, ω_i) + β_{ω_i,ν}(s′_i) V_Ω(s′_i)]. The ‘Actor’ module β_{ω,ν} updates as

ν ← ν − α_ν (∂β_{ω,ν}(s′)/∂ν) (Q_Ω(s′, ω) − V_Ω(s′)).    (9)

Please refer to Algorithm 1 in the supplementary material, available at the website mentioned in the abstract, for the details of the whole training process.
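To ground Eqs. (7)-(9), here is a minimal single-transition sketch in PyTorch. The threshold and coefficient values, the use of an absolute IoU difference for D_IoU, and the treatment of IoU values falling between the two thresholds are illustrative assumptions, not values taken from the paper.

```python
import torch

def switch_reward(p, p_star, th_hi=0.6, th_lo=0.3,
                  eta_l=3.0, eta_m=2.0, eta_s=1.0):
    """Eq. (7); thresholds/coefficients are hypothetical, with eta_l > eta_m > eta_s."""
    d_iou = abs(p - p_star)
    if (p > th_hi and p_star < th_lo) or (p < th_lo and p_star > th_hi):
        return eta_l * d_iou          # one tracker succeeds while the other fails
    if p > th_hi and p_star > th_hi:
        return eta_m * d_iou          # both succeed
    return eta_s * d_iou              # both fail (intermediate cases folded in here)

def decision_losses(q_s, q_s_next, beta_next, omega, r, gamma):
    """Critic loss of Eq. (8) and a surrogate whose gradient matches Eq. (9).

    q_s, q_s_next: (num_options,) switch-network outputs at s and s'
    beta_next: scalar termination probability beta_{omega,nu}(s')
    """
    v_next = q_s_next.max()
    u_next = (1.0 - beta_next) * q_s_next[omega] + beta_next * v_next
    y = (r + gamma * u_next).detach()               # Bellman target of Eq. (8)
    critic_loss = (y - q_s[omega]) ** 2
    advantage = (q_s_next[omega] - v_next).detach()
    termination_loss = beta_next * advantage        # d/dnu recovers Eq. (9)'s update
    return critic_loss, termination_loss
```

Differentiating `termination_loss` with respect to the termination network's weights yields ∂β/∂ν · (Q − V), so a gradient-descent step reproduces the update direction of Eq. (9).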
4 Experimental Results

In this section, we conduct comparative evaluations on benchmarks including OTB-2013 [37], OTB-50 [38], OTB-100 [38], LaSOT [12], TrackingNet [24], UAV123 [23] and VOT18 [18] with three considerations: 1) we compare the proposed DTNet with state-of-the-art trackers; 2) to demonstrate the effectiveness of the switch module, we compare the DTNet with some of its variants employing different trackers; 3) we further compare our method with trackers fused at the feature level to demonstrate the advantage of the decision-based strategy. Apart from the experimental results shown in this section, please refer to the website mentioned in the abstract for supplementary results, including the online visualization of the decision module of the proposed DTNet and the comparison with state-of-the-art tracking methods.

Implementation details. We build the switch and the termination networks with three convolutional layers and two fully connected layers, which receive an image patch of 84×84 as input. The sequences from the VID [28] and Youtube_BB [27] datasets are used to train the DTNet, including the decision and the tracker modules, for 6×10^5 episodes with the Adam optimizer. We set the capacity of the replay buffer to 5000, the learning rate to 0.0001, the discount factor γ in Eq. (1) to 0.2, the batch size to 128, and n_κ to 3×10^5. For the ε-greedy algorithm, ε is set to 1 and decays gradually to 0.1. The experiments were implemented in PyTorch on a computer with a 3.70GHz Intel Core i7-8700K CPU and two NVIDIA GTX 1080Ti GPUs. The average tracking speed is 36 FPS.

Comparison with state-of-the-art trackers. We compare the DTNet (with FCT+SiamFC in this version) with state-of-the-art trackers including CNN_SVM [16], SiamFC [4], DSST [9], ECO [8], SRDCF [10], SCT [6], HDT [26] and Staple [3]. In Fig. 4, we can observe that our DTNet achieves state-of-the-art performance in terms of success rate and precision on the OTB-2013, OTB-50 and OTB-100 datasets. It is noteworthy that although DTNet performs slightly worse than ECO, it is much more efficient. The high performance of DTNet can be attributed to two aspects. First, the decision module intelligently selects a proper tracker for each frame instead of fusing two trackers that could conflict with each other. Second, we improve the original detection tracker by considering domain knowledge, and we make the discriminating course more flexible by eliminating the candidate boxes.

Comparison with variants. We conduct an ablation study to investigate the effectiveness of the DTNet. By comparing the quantitative results listed in the top half of Table 1 with those in its bottom half, we can see that the DTNet, which combines two trackers, always outperforms its ablated version that merely uses one tracker, no matter which single tracker is used. To further validate the effectiveness of the decision module of the DTNet, we have included a manually designed rule-based decision module for comparison. It is implemented by picking a particular tracker based on the tracking confidence score subject to manually set thresholds. The results are given in the eighth row of Table 1. Apparently, our automated decision module significantly outperforms such a handcrafted one, which relies on handcrafted thresholds for tracker selection. Besides, our method is more efficient as it only runs the selected tracker once per frame in the decision-making process, while the handcrafted module has to carry out both trackers and use their output confidence scores for the decision.

We also compare the DTNet with its variants by exploring different combinations of trackers including ACT [5], FCT, ATOM [7], SiamFC [4], CFNet [33] and SiamRPN++ [19]. It is noteworthy that ACT, FCT and ATOM are detection trackers while SiamFC, CFNet and SiamRPN++ are template trackers. We always combine a detection tracker and a template tracker to form a variant of the DTNet for comparison. The results show that the DTNet consistently outperforms each individual tracker in terms of AUC and precision on different benchmarks, which demonstrates the effectiveness of the decision module. Table 1 also clearly shows that the DTNet strikes a good balance between performance and efficiency compared with its variants that use different combinations of trackers. Furthermore, our framework can easily be extended to more trackers. For instance, the results of using three trackers, namely ACT, FCT and SiamFC, are shown in the penultimate row of Table 1.
Considering both accuracy and efficiency, we use two trackers in the proposed DTNet.

Visualization of the decision module. Fig. 5 visualizes the decision module during training, where the outputs of the switch network, i.e., the Q values for the SiamFC and the FCT trackers estimated via HRL (see Eq. (1)), are displayed on top of each frame. It can be seen that the template tracker SiamFC works well in the first frame, while the detection tracker FCT outperforms it in the second frame according to the Q value. This leads to a high probability of terminating the current template tracker and switching to the detection tracker. In the third frame, since the detection tracker outperforms the template tracker again, the termination probability remains low and the detection tracker stays in use, as desired.

Comparison with fusion-based trackers. We further compare our DTNet with some trackers based on the fusion strategy. According to the quantitative results listed in Table 2, our method exhibits the best performance among real-time trackers on all four datasets. By associating Table 1 with Table 2, we find that although either FCT or SiamFC alone is outperformed by some state-of-the-art fusion-based trackers such as HSME (on OTB-2013) and MCCT-H (on OTB-2013 and OTB-100), the DTNet that combines them in a switching manner through the decision module performs significantly better than both. This finding demonstrates that the switching-based combination delivered by the decision module of the proposed DTNet is superior to the fusion-based combination broadly adopted by existing state-of-the-art trackers.

5 Conclusions

In this paper, we proposed an ensemble framework, named DTNet, composed of a decision module and a tracker module for visual tracking. Via HRL, the decision module enables the detection tracker and the template tracker that form the tracker module to compete with each other, so that the DTNet can switch between them for different scenes. Differing from fusion-based methods, the DTNet learns an online decision to pick a particular tracker for a particular scene. Besides, we presented a new proposal-free detection tracker, which does not require proposals of candidate bounding boxes of the target and thus makes the discriminating course flexible. Extensive results on several benchmarks demonstrated the superiority of the proposed DTNet over existing methods.

Broader Impact

In this paper, the authors introduce DTNet, which learns an online decision for switching to a proper tracker to conduct visual tracking in the current video frame. Although this paper only validates the efficacy of the decision learning framework in the specific scenario of visual tracking, it can actually be extended to other video-based computer vision tasks such as person re-identification, motion capture and action recognition, by defining a reward for the specific task and replacing the two trackers used in this paper with other algorithms. To this end, the proposed DTNet could be of broad interest in different fields such as the transportation, film and sport industries. As a method for visual tracking, the DTNet can inevitably be used for monitoring and security purposes. As a learning-based method, what the DTNet can track, a person or a pet, essentially depends on the training data.
Therefore, the risk of applying our method to tasks that could raise ethical issues can be mitigated by imposing strict and secure data protection regulations such as the GDPR. Without a sufficiently large amount of high-quality data containing the particular target, the DTNet cannot deliver good tracking for that task.

Acknowledgements

We acknowledge the support of the National Key Research and Development Plan of China under Grant 2017YFB1300205, the National Natural Science Foundation of China under Grants 61991411 and U1913204, the Shandong Major Scientific and Technological Innovation Project 2018CXGC1503, the Young Taishan Scholars Program of Shandong Province No. tsqn201909029, and the Qilu Young Scholars Program of Shandong University No. 31400082063101.
1. What is the focus and contribution of the paper regarding visual tracking?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and effectiveness?
3. Do you have any concerns or questions regarding the design of the DTNet, especially regarding the termination network and switch network?
4. Can the method be extended to select a tracker from more than two candidate trackers?
5. How does the reviewer assess the efficiency and real-time performance of the proposed method?
Summary and Contributions

The paper proposes a reinforcement learning framework (DTNet) for visual tracking. Unlike existing methods that either use a single tracker or fuse the outcomes of multiple trackers for each frame, the proposed method is essentially a selection scheme that learns online decisions to select an appropriate tracker for each individual frame. According to the results, the decisions are made wisely and thus, as expected, the two trackers work jointly in a complementary manner. The idea of the online decision module for selecting a frame-specific tracker delivered by RL is interesting and forms the main contribution of this work.

Strengths

The motivation of the work is well explained in the introduction. The decision module of the DTNet is conceptually novel as it brings in a way to synergistically utilise multiple trackers that potentially have complementary performance over various frames for visual tracking. The idea of implementing the decision module as a termination network followed by a switch network also makes sense, and the effectiveness of such an implementation through RL is well demonstrated by the experiments. The paper also presents an improved detection tracker in the context of the selection scheme and successfully incorporates it into the RL framework. Given the details provided in the paper, I believe the paper is technically sound as well. In addition, the DTNet seems efficient enough to deliver visual tracking at a real-time rate according to the testing results. Therefore, this work could be of broad interest in the visual tracking community.

Weaknesses

Although the paper is technically sound, I have two issues about the design of the DTNet and would like to see the response from the authors.
1. To deliver the decision module, why not directly use the switch network to make a decision on the selection of trackers? What is the point of introducing the termination network? Is it redundant or really useful?
2. This work only explores how to select a tracker for each frame out of two trackers. I am curious if this method can be extended to select a tracker from more candidate trackers. For example, if three trackers are included, can the method still work with further improved performance?
NIPS
Title Online Decision Based Visual Tracking via Reinforcement Learning Abstract A deep visual tracker is typically based on either object detection or template matching while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposal. Extensive results show that our DTNet achieves stateof-the-art tracking performance as well as a good balance between accuracy and efficiency. The project website is available at https://vsislab.github. io/DTNet/. N/A A deep visual tracker is typically based on either object detection or template matching while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposal. Extensive results show that our DTNet achieves stateof-the-art tracking performance as well as a good balance between accuracy and efficiency. The project website is available at https://vsislab.github. io/DTNet/. 1 Introduction As a fundamental task in computer vision, visual tracking aims to estimate the trajectory of a specified object in a sequence of images. Inspired by the success of deep learning in general computer vision tasks, recent visual tracking algorithms mostly used deep networks, particularly CNNs which extract deep representations for various scenes. Among these deep trackers are two dominant tracking schemes. The first one treats tracking as a detection task, which typically builds a deep network to distinguish the foreground target from the background [5, 25, 39]. The second one regards tracking as a template matching task and addresses it via a matching network such as Siamese network, which learns a general similarity function to obtain the image patch best matching the target [11, 15, 29]. The detection tracker continuously updates the network online with the image patch detected as the target by itself. The diverse appearances of the patches lead to a good adaptability of the tracker while the continuous update is inefficient for real-world tracking. Also, albeit occasionally, an incorrect detection in a frame which represents a noisy appearance of the target could mislead the tracker. 
The template tracker utilizes the initial appearance of the target as a fixed template to conduct the matching operation, which runs efficiently at the cost of adaptability. Either the detection or the template tracker is merely suitable for a particular group of scenes. For instance, as shown in the top row of Fig. 1, due to the temporal occlusion within a frame, the detection tracker incorrectly captures the bicycle as the target in that frame and cannot recover from it in the succeeding frame. By contrast, the template tracker is robust to the temporal occlusion as it always looks back to the real target in the initial frame for delivering the matching. On the other hand, it ∗Corresponding author 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. is easy to understand that, as shown in the bottom row of Fig. 1, the template tracker is not reliable with the temporal deformation of the target while the detection tracker works well with it. Some recent works investigated various fusion schemes to pursue better performance [2, 17, 20, 34, 40]. However, directly fusing the two types of trackers together is not wise as they follow different tracking principles and thus cannot converge to each individual optimum simultaneously during training. Hence, it might be better to make them co-exist for handling different scenes alternatively. Differing from previous fusion-based methods, this paper presents a framework of decision learning for the ensemble of the two types of trackers where we explore how to automatically and intelligently switch between them for tracking in different scenes. Specifically, our method makes the two trackers compete with each other through a hierarchical reinforcement learning (HRL) framework so that it can make a proper online decision to choose the tracker which captures the target better in the current scene. This idea is based on the common observation as shown in Fig. 1 that different types of trackers are merely good at tracking the targets in a particular group of frames. We name the ensemble framework DTNet as it comprises a decision module and a tracker module as illustrated in Fig. 2. The decision module starts with a switch network that encodes the image patch inheriting from the previous frame and the target in the initial frame to decide whether the detection or the template tracker should be selected for the current frame. It is followed by a termination network which estimates the output of the tracker to generate a probability of terminating the current tracker. The switch and the termination networks in fact form a “Actor-Critic” structure [21]. Such intelligent switching between the two trackers repeats till all frames of the video are processed. We provide a specifically designed scheme for jointly training the decision and the tracker modules end-to-end via HRL. Furthermore, to improve the detection tracker, a fully-convolutional classifier is learned to differentiate the target from the distracting content, Since it does not rely on a number of candidate proposals to predict the bounding boxes of the target, it actually avoids the issue of the incorrect prediction of such proposals that could mislead the tracker. The contributions of this paper are summarized as follows. • We propose an ensemble framework which learns an online decision for visual tracking based on HRL where the detection and the template trackers compete with each other to substantiate a switching strategy. 
• We develop a novel proposal-free detection tracker, which does not require the proposal of candidate bounding boxes of the target and thus make the discriminating course flexible. • Our method demonstrates the state-of-the-art performance on several benchmarks. The ablation studies show that the decision mechanism composed of the switch and the termination networks can effectively select the proper trackers for different scenes. 2 Related Work Detection trackers. Trackers based on object detection in each video frame usually learn a classifier to pick up the positive candidate patches wrapping around previous observation. Nam and Han [25] proposed a lightweight CNN to learn generic feature representations by shared convolutional layers to detect the target object. Han et al. [14] selected a random subset of branches for model update to diversify learned target appearance models. Fan and Ling [13] took into account self-structural information to learn a discriminative appearance model. Song et al. [30] integrated adversarial learning into a tracking-by-detection framework to reduce overfitting on single frames. However, the occasional incorrect detection in a frame is still prone to contaminate and mislead the target appearance models. Template trackers. Trackers based on template matching have recently gained popularity due to its efficiency, which learns a similarity function to match the target template with the image patch in the searching region of each frame. Tao et al. [31] utilized Siamese network in an offline manner to learn a matching function from a large set of sequences, and then used the fixed matching function to search for the target in a local region. Bertinetto et al. [4] introduced a fully convolutional Siamese network (SiamFC) for tracking by measuring the region-wise feature similarity between the target object and the candidate. Wang et al. [36] incorporated an attention mechanism into Siamese network to enhances its discriminative capacity and adaptability. However, these trackers are prone to drift when the target suffers the variations such as shape deformation and color change in appearance due to the fixed appearance of the template without an online update. Fusion-based trackers. There exist some trackers adapting fusion strategies. The MEEM algorithm [40] proposed a multi-expert tracking framework with an entorpy regularized restoration scheme. And Li et al. [20] introduced a discrete graph optimization into the framework to handle the tracker drift problem. Wang et al. proposed MCCT [35] which selected the reliable outputs from multiple feature to refine the tracking results. Bertinetto et al. [3] combine two image patch representations that are sensitive to complementary factors to learn a model robust to colour changes and deformations. However, it is not easy to fuse multiple trackers significantly different in principle, as they can hardly converge to each individual optimum simultaneously during training. Unlike the fusion-based methods above, our method aims to learn an online strategy to decide which tracker should be used for each individual frame. 3 Method As shown in Fig. 2, the proposed framework consists of two modules: the decision module and the tracker module. As the key component of the entire framework, the former contains the switch network and the termination network, which work together to alternatively select the template or the detection tracker which compete with each other in the tracking task and jointly form the tracker module. 
In the decision module, the switch network encodes an image patch Φt inheriting from the previous frame It−1 and the initial template Φ∗, and then outputs a binary signal to select a tracker. A tracker can estimate the location of the target for the current frame It. The termination network estimates the output of the tracker and generates a probability to decide if the framework should keep using the current tracker or terminate it, which makes the decision module avoid oscillating between the two trackers especially when they have similar accuracy. Note that if the termination network decides to terminate, it merely indicates that the current tracker in use does not work well while it does not necessarily means that the other tracker can performs better. Thus in this case, the switch network will still select a new tracker from the two candidate trackers instead of blindly switching to the other tracker currently not in use. Fig. 2 illustrates all of the 4 possible switching situations of the framework. 3.1 Decision Module Given a set of states S and actions A, the Markovian options w ∈ Ω consist of three components [1]: an intra-option policy π : S × A→ [0, 1], a termination condition β : S+ → [0, 1], and an initiation set I ⊆ S. Here we assume that ∀s ∈ S, ∀w ∈ Ω : s ∈ I (i.e., the trackers are both available in all states). If an option ω is taken, then actions are selected according to πω until the option terminates stochastically according to βω . For controlling the switch of tracker in an HRL manner, the decision module utilizes the termination policy together with the policy over options corresponding to the trackers. Let QΩ denote the switch network which can be viewed as a function subject to option Ω parameterized with its network weights θ and the termination network βΩ,ν . A termination probability which decides if the current tracker in use should be terminated is estimated by βΩ,ν depending on option Ω and its network weights ν. Specifically, we define QΩ as below to evaluate the value of option ω in a manner of hierarchical reinforcement learning: QΩ(s, w; θ) = r(s, w) + γU(s ′, ω), (1) where r(s, w) denotes the reward that the agent receives after implementing ω representing the option for selecting a particular tracker. γ is the discount factor and U(s′, ω) is the value of executing ω on a new state s′ related to the termination probability βω,ν , which is computed by combining the outputs of the switch and the termination networks: U(s′, ω) = (1− βω,ν(s′))QΩ(s′, w) + βω,ν(s′)VΩ(s′), (2) where βω,ν(s′) is the termination probability on the state s′, and VΩ is the optimal of the switch function which can be found by searching for the maximum of the switch function QΩ over option ω: VΩ = max ω (QΩ(s ′, ω)). (3) If the current option, expressed as ωgood, works well, the agent will not terminate it, which means that βω,ν is close to 0. Thus based on Eq. (2), we have U(s′, ωgood) ≈ QΩ(s′, ωgood). If it is not a good option, according to Eq. (3), we have VΩ = QΩ(s′, ωgood). In this case, the agent tends to terminate the current option, which means that βω,ν is close to 1. Thus according to Eq. (2), we also have U(s′, ωgood) ≈ QΩ(s′, ωgood) as desired. Note that U(s, ω) is differentiable. Its gradient with respect to the weights ν of the termination network is expressed as: ∂U(s′, ω) ∂ν = −∂βω,ν(s ′) ∂ν (QΩ(s ′, ω)− VΩ(s′)) + (1− βω,ν(s′)) ∂U(s′′, ω′) ∂ν . (4) A similar form as in Eq. (4) can be derived by expanding U(s ′′,ω′) ∂ν recursively. 
Here the state-option pairs (s, ω) in one time step is involved in the calculation. As shown in Fig. 2, the switch network QΩ(s, ω) acting as the ‘Critic’ evaluates the value of options and provides the updating gradients for the network termination network βω,ν , which essentially acts as the ‘Actor’ and evaluates the performance of the tracker in use to decide if it should be terminated in the current frame so that the agent could optionally switch to the other tracker for the next frame. The weights θ of the switch network are learned the Bellman equation and the details will be given in Section 3.3. 3.2 Tracker Module Template tracker. We adopt SiamFC [4] as the template tracker. The standard Siamese architecture takes as input an image pair containing an exemplar image z and a candidate image x. The image z represents the object of interest (e.g., an image patch centered on the target object in the first video frame), while x is typically larger and represents the searching area in the subsequent video frames. The features of z and x are extracted by the same CNN ϕ parametrized with τ , which are cross-correlated as: fτ (z, x) = ϕτ (z) ? ϕτ (x) + b (5) where b denotes a bias term which takes the value b∈R at every location, ? represents the operation of convolution. Eq. (5) performs an exhaustive search for the pattern z over the image x. The goal is to match the maximum value in the response map f to the target location. Detection tracker. To build a tracker based on object detection while avoiding the expensive process of proposal generation, we adopt a fully convolutional tracker, namely FCT, as shown in Fig. 3 which includes a classification branch and a regression branch. The classification branch predicts the location of the target and while the regression branch a 4D vector indicating the distances from the center of the target to the edges of its bounding box. Given the feature map F∈RH×W×C of a backbone CNN and the sum s of all strides applied in previous layers, each location (x, y) in F corresponds to (b s2c+ xs, b s 2c+ ys) in image. And we directly predict the class label and the regressed distances for each location in F [32]. It is possible that the same class of objects are considered as targets in one sequence but background objects in another one. Due to such variations and inconsistencies, only using a typical classifier to simply assign “1” to the target and “0” to the background for all sequences is likely to cause conflicts across sequences [25]. Therefore, the proposed classification branch separates domain-independent information from the last domain-specific layer to capture shared representations via shared layers. Specifically, in each domain the location (x, y) is considered as a positive sample if it falls into the groundtruth box and the class label c∗ is assigned 1. Otherwise, it is a negative sample (i.e. background) and the class label c∗ is set to 0. The regression branch outputs a 4D vector re∗ = (l∗, t∗, r∗, b∗) where l∗, t∗, r∗ and b∗ denote the distances from the location of the target to the four edges of its bounding box as shown in Fig. 3. The tracker finally outputs the classification score map c and the regression value re. The loss function for training is given as below: L(c, r) = 1 N N∑ i=1 Lcls(ci, c ∗ i ) + λ N [Where{c∗>0}] N∑ i=1 Lreg(rei, re ∗ i ), (6) where N denotes the total number of the video frames for training. 
3.3 Joint Training of Decision and Tracker Modules In this section, we detailed the joint training procedure of the DTNet, in which the decision and the tracker modules are trained end-to-end. Given K training sequences, for the j-th one we randomly extract a piece of training sequences Ij = {I1j , I2j , ..., ITj} with the corresponding ground truth Gj = {G1j , G2j , ..., GTj} in order, and each pair of adjacent frames is subject to a skip of n(0 6 n 6 5) frames with some probability. The initial target is sampled around the ground truth randomly in the first frame and regarded as the template. The switch network optionally evaluates the features encoded in the template and the observation inheriting from the previous frame and then selects a tracker. The reward during the switching process is defined as: rt(s, ω) = ηL ·DIoU , IF (Pt > thhi and P ∗t < thlo) ηL ·DIoU , IF (Pt < thlo and P ∗t > thhi) ηM ·DIoU , IF (Pt > thhi and P ∗t > thhi) ηS ·DIoU , IF (Pt < thlo and P ∗t < thlo) (7) where Pt is the intersection-over-union (IoU) between the predicted bounding box Bt from the selected tracker and the ground truth Gt and P ∗t is the IoU corresponding to the unselected tracker. DIoU is the difference value between them. Actually, three cases are divided by the above setup of reward: (1) One succeeds while the other fails; (2) Both succeed; (3) Both fail. Accordingly, three enlarger coefficients are assigned in descending order, which leads to select the agent with higher accuracy while guides the tracking competition. The samples are collected by the unselected tracker respectively to update the corresponding network. In other words, we keep on training the worse one to maintain the competitive relationship between the two trackers. A new state s′ is updated for the current frame by the prediction. Then, the agent takes the probability of βω,ν(s′) to terminate the previous option and re-evaluate the value of options. For the switch module, the ‘Critic’ model QΩ(s, ω) can be learned using the Bellman equation [22], the learning process is achieved by minimizing the following loss: L = 1 N N∑ i=1 (yi −QΩ(si, ωi; θ))2 (8) where yi = r(si, wi)+γ(1−βωi,ν(s′i)QΩ(s′i, ωi))+βωi,ν(s′i)VΩ(s′i). And the ‘Actor’ module βω,ν updates as follows: ν = ν − αν ∂βω,ν(s ′) ∂ν (QΩ(s ′, ω)− VΩ(s′)). (9) Please refer to Algorithm 1 in the supplementary material available at the website mentioned in the abstract for the details of the whole training process. 4 Experimental Results In this section, we conduct comparative evaluations on the benchmarks including OTB-2013 [37], OTB-50 [38], OTB-100 [38], LaSOT [12], TrackingNet [24], UAV123 [23] and VOT18 [18] with three considerations: 1) We compare the proposed DTNet with state-of-the-art trackers; 2) To demonstrate the effectiveness of the switch module, we compare the DTNet with some of its variants by employing different rackers; 3) We further compare our method with the trackers fused at the feature level to demonstrate the advantage of the decision-based strategy. Apart from the experimental results shown in this section, please refer to the website mentioned in the abstract for the supplementary results including the online visualization of the decision module of the proposed DTNet and the comparison with the state-of-the-art tracking methods. Implementation details. We build the switch and the termination networks by three convolutional layers and two fully connected layers, which receive the image patch of 84×84 as input. 
The sequences from VID [28] and Youtube_BB [27] datasets are used to train the DTNet including the decision and the tracker modules for 6 ×105 episodes with Adam optimizer. We set the capacity of the replay buffer to 5000, the learning rate to 0.0001, the discount factor γ in Equ. 1 to 0.2, the batch size to 128 and nκ is set to 3 ×105. For the -greedy algorithm, is set to 1 and decays to 0.1 gradually. The experiments were implemented in PyTorch on a computer with a 3.70GHz Intel Core i7-8700K CPU and two NVIDIA GTX 1080Ti GPUs. The average tracking speed is 36 FPS. Comparison with state-of-the-art trackers. We compare the DTNet (with FCT+SiamFC in this version) with the state-of-the-art trackers including CNN_SVM [16], SiamFC [4], DSST [9], ECO [8], SRDCF [10], SCT [6], HDT [26] and Staple [3]. In Fig. 4, we can observe that our DTNet achieves state-of-the-art performance in terms of the success rate and the precision on the OTB-2013, OTB-50 and OTB-100 datasets. It is noteworthy that although DTNet performs slightly worse than ECO, it is much more efficient than ECO. The high performance of DTNet can be attributed to two aspects. First, the decision module intelligently selects a proper tracker for each frame instead of fusing two trackers that could conflict with each other. Second, we improve the original detection tracker by considering the domain knowledge, and makes the discriminating course more flexible through eliminating the candidate boxes. Comparison with variants. We conduct ablation study to investigate the effectiveness of the DTNet. By comparing the quantitative results listed in the top half of Table 1 with those in its bottom half, we can see that the DTNet which combines two trackers always outperforms its ablated version which merely uses one tracker no matter what a single tracker is used. To further validate the effectiveness of the decision module of the DTNet, We have included a manually designed rule based decision module for comparison. It is implemented by picking a particular tracker based on the confidence score of tracking subject to the thresholds set manually. The results are given in the eighth row of Table 1. Apparently, our automated decision module significantly outperforms such a handcrafted one which relies on handcrafted thresholds for tracker selection. Besides, our method is more efficient as it only performs each tracker once in the decisionmaking process while the handcrafted module has to carry out both trackers and use their output confidence scores for decision. We also compare the DTNet with its variants by exploring different combinations of trackers including ACT [5], FCT, ATOM [7], SiamFC [4], CFNet [33] and SiamRPN++ [19]. It is noteworthy that ACT, FCT and ATOM are detection trackers while SiamFC, CFNet and SiamRPN++ are template trackers. We always combine a detection tracker and a template tracker to form a variant of the DTNet for comparison. The results show that the DTNet constantly outperforms each individual tracker in terms of AUC and precision on different benchmarks, which demonstrate the effectiveness of the decision module. Table 1 also clearly shows that the DTNet makes a good balance between the performance and the efficiency compared with its variants which have different combinations of trackers. Furthermore, our framework can be easily extended to more trackers. For instance, the results of using three trackers including ACT, FCT and SiamFC are shown in the penultimate row of the Table 1. 
Considering both accuracy and efficiency, we use two trackers in the proposed DTNet. Visualization of the decision module. Fig. 5 shows the visualization of the decision module during training where the outputs of the switch network, i.e. the Q values for the SiamFC and the FCT trackers estimated via the HRL (see Eq. 1), are displayed on top of each frame. It can be seen that in the first frame, the template tracker SiamFC works well while the detection tracker FCT outperforms it in the second frame according to the Q value. Thus it leads to a high probability to terminate the current template tracker and switch to the detection tracker. And in the third frame, since the detection tracker outperforms the template tracker again, the termination probability remains low and thus the detection tracker is still in use as desired. Comparison with fusion-based trackers. We further compare our DTNet with some trackers based on the fusion strategy. According to the quantitative results listed in Table 2, our method exhibits the best performance among real-time trackers on all four datasets. By associating Table 1 with Table 2, we find that although either FCT or SiamFC alone is outperformed by some state-of-the-art fusion-based trackers such as HSME (on OTB-2013) and MCCT-H (on OTB-2013 and OTB-100), the DTNet that combines them in a switching manner through the decision module performs significantly better than them. Such a finding demonstrates that the switching-based combination delivered by the decision module of the proposed DTNet is superior to the fusion-based combination that is broadly adopted by the existing state-of-the-art trackers. 5 Conclusions In this paper, we proposed an ensemble framework, namely DTNet, composed of a decision module and a tracker module for visual tracking. By HRL, the decision module enables the detection tracker and the template trackers that form the tracker module to compete with each other so that the DTNet can switch between them for different scenes. Differing from the fusion-based methods, the DTNet could learn an online decision to pick a particular tracker for a particular scene. Besides, we presented a new proposal-free detection tracker, which does not require the proposal of candidate bounding boxes of the target and thus makes the discriminating course flexible. Extensive results on several benchmarks demonstrated the superiority of the proposed DTNet over existing methods. Broader Impact In this paper, the authors introduce DTNet which learns an online decision for switching to a proper tracker to conduct visual tracking in the current video frame. Although this paper only validates the efficacy of the decision learning framework in the specific scenario of visual tracking, it can actually be extended to other video-based computer vision tasks such as person re-identification, motion caption and action recognition, etc. It can be applied by defining a reward concerning the specific task and replacing the two trackers used in this paper with some other algorithms. To this end, the proposed DTNet could be of broad interest in different fields such as transportation industry, film industry, sport industry, etc. As a method for visual tracking, the DTNet can inevitably be used for monitoring and security purpose. As a learning-based method, what the DTNet can track, a person or a pet, essentially depends on the training data. 
Therefore, the risk of applying our method to tasks that could raise ethical issues can be mitigated by imposing strict and secure data protection regulations such as the GDPR. Without a sufficiently large amount of high-quality data containing the particular target, the DTNet cannot deliver good tracking for that task.

Acknowledgements

We acknowledge the support of the National Key Research and Development Plan of China under Grant 2017YFB1300205, the National Natural Science Foundation of China under Grants 61991411 and U1913204, the Shandong Major Scientific and Technological Innovation Project 2018CXGC1503, the Young Taishan Scholars Program of Shandong Province No. tsqn201909029 and the Qilu Young Scholars Program of Shandong University No. 31400082063101.
Review 1

Summary and Contributions
This paper addresses the visual tracking task in the generic setting. The authors introduce a technique to combine a detection-based and a template-based tracker. This is performed by learning a decision module via reinforcement learning, which decides which tracker to use through a switch network and a termination network. The authors also introduce a fully convolutional detection-based tracker. Experiments are performed on the OTB and LaSOT datasets.

Strengths
- The paper is well written in terms of language.

Weaknesses
1) I do not think that the authors are able to motivate or demonstrate the usefulness of their contributions. The proposed tracker performs far from current state-of-the-art trackers, such as SiamRPN++ and DiMP. I do not believe the proposed component would be effective in such more advanced trackers, since they integrate the advantages of both template-based and detection-based methods in a more unified manner. In order to demonstrate the usefulness, the authors should attempt to improve modern SOTA trackers, which is not done here.
2) The experiments are lacking in several aspects. Only two datasets are used, OTB and LaSOT (both OTB-2013 and OTB-50 are subsets of OTB-100 and should therefore not be considered separately). OTB is small and considered obsolete since results on it are saturated. In addition to LaSOT, the authors should experiment with large-scale datasets such as TrackingNet, UAV123, or GOT-10k. Moreover, the authors only compare with outdated trackers. The claim of state-of-the-art performance is wrong.
3) No deeper analysis or ablation of the method itself is performed; only the combination of different trackers is evaluated.
4) The motivation and method description are not that clear. Design choices are not motivated for the most part.
5) I could not find significant novelty in the proposed detection tracker FCT. It essentially seems to correspond to a fully-convolutional version of MDNet.
Review 2

Summary and Contributions
This paper proposes an online decision-based visual tracking framework and a proposal-free detection tracker. The organization and writing of this paper are unsatisfactory. The proposed method is ad hoc and lacks novelty. The methods compared in the experiment section are slightly out of date.

Strengths
The major contribution of this paper is an online decision-based visual tracking framework with reinforcement learning. Its key idea is to adaptively ensemble a detection tracker and a template tracker. The proposed method is intuitive and technically correct. The topic of this paper is vital for real-world machine learning tasks, but the proposed method is slightly ad hoc and lacks novelty.

Weaknesses
1. In my opinion, "using different kinds of trackers to pursue more reliable tracking is not wise as they follow different tracking principles" is the cornerstone of this paper. However, the authors fail to demonstrate why this is not wise. In my opinion, combining different kinds of cues is expected to make the tracker stronger, especially when these cues are based on different principles.
2. Does the proposed online decision mechanism outperform a manually designed rule-based decision module? More comparisons are needed and essential for demonstrating the effectiveness of the proposed method.
3. Several claims are not well supported by experiments or sufficient demonstration, such as the claim in lines 37-39.
4. The comparisons in this paper are not entirely fair. The previous methods in the experiment section are out of date.
5. Although the paper tackles the online decision framework and proposal-free detection problems simultaneously, both of the proposed methods lack novelty, making the entire approach ad hoc.
NIPS
Title Online Decision Based Visual Tracking via Reinforcement Learning Abstract A deep visual tracker is typically based on either object detection or template matching while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposal. Extensive results show that our DTNet achieves stateof-the-art tracking performance as well as a good balance between accuracy and efficiency. The project website is available at https://vsislab.github. io/DTNet/. N/A A deep visual tracker is typically based on either object detection or template matching while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposal. Extensive results show that our DTNet achieves stateof-the-art tracking performance as well as a good balance between accuracy and efficiency. The project website is available at https://vsislab.github. io/DTNet/. 1 Introduction As a fundamental task in computer vision, visual tracking aims to estimate the trajectory of a specified object in a sequence of images. Inspired by the success of deep learning in general computer vision tasks, recent visual tracking algorithms mostly used deep networks, particularly CNNs which extract deep representations for various scenes. Among these deep trackers are two dominant tracking schemes. The first one treats tracking as a detection task, which typically builds a deep network to distinguish the foreground target from the background [5, 25, 39]. The second one regards tracking as a template matching task and addresses it via a matching network such as Siamese network, which learns a general similarity function to obtain the image patch best matching the target [11, 15, 29]. The detection tracker continuously updates the network online with the image patch detected as the target by itself. The diverse appearances of the patches lead to a good adaptability of the tracker while the continuous update is inefficient for real-world tracking. Also, albeit occasionally, an incorrect detection in a frame which represents a noisy appearance of the target could mislead the tracker. 
The template tracker utilizes the initial appearance of the target as a fixed template to conduct the matching operation, which runs efficiently at the cost of adaptability. Either the detection or the template tracker is merely suitable for a particular group of scenes. For instance, as shown in the top row of Fig. 1, due to the temporal occlusion within a frame, the detection tracker incorrectly captures the bicycle as the target in that frame and cannot recover from it in the succeeding frame. By contrast, the template tracker is robust to the temporal occlusion as it always looks back to the real target in the initial frame for delivering the matching. On the other hand, it ∗Corresponding author 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. is easy to understand that, as shown in the bottom row of Fig. 1, the template tracker is not reliable with the temporal deformation of the target while the detection tracker works well with it. Some recent works investigated various fusion schemes to pursue better performance [2, 17, 20, 34, 40]. However, directly fusing the two types of trackers together is not wise as they follow different tracking principles and thus cannot converge to each individual optimum simultaneously during training. Hence, it might be better to make them co-exist for handling different scenes alternatively. Differing from previous fusion-based methods, this paper presents a framework of decision learning for the ensemble of the two types of trackers where we explore how to automatically and intelligently switch between them for tracking in different scenes. Specifically, our method makes the two trackers compete with each other through a hierarchical reinforcement learning (HRL) framework so that it can make a proper online decision to choose the tracker which captures the target better in the current scene. This idea is based on the common observation as shown in Fig. 1 that different types of trackers are merely good at tracking the targets in a particular group of frames. We name the ensemble framework DTNet as it comprises a decision module and a tracker module as illustrated in Fig. 2. The decision module starts with a switch network that encodes the image patch inheriting from the previous frame and the target in the initial frame to decide whether the detection or the template tracker should be selected for the current frame. It is followed by a termination network which estimates the output of the tracker to generate a probability of terminating the current tracker. The switch and the termination networks in fact form a “Actor-Critic” structure [21]. Such intelligent switching between the two trackers repeats till all frames of the video are processed. We provide a specifically designed scheme for jointly training the decision and the tracker modules end-to-end via HRL. Furthermore, to improve the detection tracker, a fully-convolutional classifier is learned to differentiate the target from the distracting content, Since it does not rely on a number of candidate proposals to predict the bounding boxes of the target, it actually avoids the issue of the incorrect prediction of such proposals that could mislead the tracker. The contributions of this paper are summarized as follows. • We propose an ensemble framework which learns an online decision for visual tracking based on HRL where the detection and the template trackers compete with each other to substantiate a switching strategy. 
• We develop a novel proposal-free detection tracker, which does not require the proposal of candidate bounding boxes of the target and thus make the discriminating course flexible. • Our method demonstrates the state-of-the-art performance on several benchmarks. The ablation studies show that the decision mechanism composed of the switch and the termination networks can effectively select the proper trackers for different scenes. 2 Related Work Detection trackers. Trackers based on object detection in each video frame usually learn a classifier to pick up the positive candidate patches wrapping around previous observation. Nam and Han [25] proposed a lightweight CNN to learn generic feature representations by shared convolutional layers to detect the target object. Han et al. [14] selected a random subset of branches for model update to diversify learned target appearance models. Fan and Ling [13] took into account self-structural information to learn a discriminative appearance model. Song et al. [30] integrated adversarial learning into a tracking-by-detection framework to reduce overfitting on single frames. However, the occasional incorrect detection in a frame is still prone to contaminate and mislead the target appearance models. Template trackers. Trackers based on template matching have recently gained popularity due to its efficiency, which learns a similarity function to match the target template with the image patch in the searching region of each frame. Tao et al. [31] utilized Siamese network in an offline manner to learn a matching function from a large set of sequences, and then used the fixed matching function to search for the target in a local region. Bertinetto et al. [4] introduced a fully convolutional Siamese network (SiamFC) for tracking by measuring the region-wise feature similarity between the target object and the candidate. Wang et al. [36] incorporated an attention mechanism into Siamese network to enhances its discriminative capacity and adaptability. However, these trackers are prone to drift when the target suffers the variations such as shape deformation and color change in appearance due to the fixed appearance of the template without an online update. Fusion-based trackers. There exist some trackers adapting fusion strategies. The MEEM algorithm [40] proposed a multi-expert tracking framework with an entorpy regularized restoration scheme. And Li et al. [20] introduced a discrete graph optimization into the framework to handle the tracker drift problem. Wang et al. proposed MCCT [35] which selected the reliable outputs from multiple feature to refine the tracking results. Bertinetto et al. [3] combine two image patch representations that are sensitive to complementary factors to learn a model robust to colour changes and deformations. However, it is not easy to fuse multiple trackers significantly different in principle, as they can hardly converge to each individual optimum simultaneously during training. Unlike the fusion-based methods above, our method aims to learn an online strategy to decide which tracker should be used for each individual frame. 3 Method As shown in Fig. 2, the proposed framework consists of two modules: the decision module and the tracker module. As the key component of the entire framework, the former contains the switch network and the termination network, which work together to alternatively select the template or the detection tracker which compete with each other in the tracking task and jointly form the tracker module. 
In the decision module, the switch network encodes an image patch Φt inheriting from the previous frame It−1 and the initial template Φ∗, and then outputs a binary signal to select a tracker. A tracker can estimate the location of the target for the current frame It. The termination network estimates the output of the tracker and generates a probability to decide if the framework should keep using the current tracker or terminate it, which makes the decision module avoid oscillating between the two trackers especially when they have similar accuracy. Note that if the termination network decides to terminate, it merely indicates that the current tracker in use does not work well while it does not necessarily means that the other tracker can performs better. Thus in this case, the switch network will still select a new tracker from the two candidate trackers instead of blindly switching to the other tracker currently not in use. Fig. 2 illustrates all of the 4 possible switching situations of the framework. 3.1 Decision Module Given a set of states S and actions A, the Markovian options w ∈ Ω consist of three components [1]: an intra-option policy π : S × A→ [0, 1], a termination condition β : S+ → [0, 1], and an initiation set I ⊆ S. Here we assume that ∀s ∈ S, ∀w ∈ Ω : s ∈ I (i.e., the trackers are both available in all states). If an option ω is taken, then actions are selected according to πω until the option terminates stochastically according to βω . For controlling the switch of tracker in an HRL manner, the decision module utilizes the termination policy together with the policy over options corresponding to the trackers. Let QΩ denote the switch network which can be viewed as a function subject to option Ω parameterized with its network weights θ and the termination network βΩ,ν . A termination probability which decides if the current tracker in use should be terminated is estimated by βΩ,ν depending on option Ω and its network weights ν. Specifically, we define QΩ as below to evaluate the value of option ω in a manner of hierarchical reinforcement learning: QΩ(s, w; θ) = r(s, w) + γU(s ′, ω), (1) where r(s, w) denotes the reward that the agent receives after implementing ω representing the option for selecting a particular tracker. γ is the discount factor and U(s′, ω) is the value of executing ω on a new state s′ related to the termination probability βω,ν , which is computed by combining the outputs of the switch and the termination networks: U(s′, ω) = (1− βω,ν(s′))QΩ(s′, w) + βω,ν(s′)VΩ(s′), (2) where βω,ν(s′) is the termination probability on the state s′, and VΩ is the optimal of the switch function which can be found by searching for the maximum of the switch function QΩ over option ω: VΩ = max ω (QΩ(s ′, ω)). (3) If the current option, expressed as ωgood, works well, the agent will not terminate it, which means that βω,ν is close to 0. Thus based on Eq. (2), we have U(s′, ωgood) ≈ QΩ(s′, ωgood). If it is not a good option, according to Eq. (3), we have VΩ = QΩ(s′, ωgood). In this case, the agent tends to terminate the current option, which means that βω,ν is close to 1. Thus according to Eq. (2), we also have U(s′, ωgood) ≈ QΩ(s′, ωgood) as desired. Note that U(s, ω) is differentiable. Its gradient with respect to the weights ν of the termination network is expressed as: ∂U(s′, ω) ∂ν = −∂βω,ν(s ′) ∂ν (QΩ(s ′, ω)− VΩ(s′)) + (1− βω,ν(s′)) ∂U(s′′, ω′) ∂ν . (4) A similar form as in Eq. (4) can be derived by expanding U(s ′′,ω′) ∂ν recursively. 
Here the state-option pairs (s, ω) in one time step is involved in the calculation. As shown in Fig. 2, the switch network QΩ(s, ω) acting as the ‘Critic’ evaluates the value of options and provides the updating gradients for the network termination network βω,ν , which essentially acts as the ‘Actor’ and evaluates the performance of the tracker in use to decide if it should be terminated in the current frame so that the agent could optionally switch to the other tracker for the next frame. The weights θ of the switch network are learned the Bellman equation and the details will be given in Section 3.3. 3.2 Tracker Module Template tracker. We adopt SiamFC [4] as the template tracker. The standard Siamese architecture takes as input an image pair containing an exemplar image z and a candidate image x. The image z represents the object of interest (e.g., an image patch centered on the target object in the first video frame), while x is typically larger and represents the searching area in the subsequent video frames. The features of z and x are extracted by the same CNN ϕ parametrized with τ , which are cross-correlated as: fτ (z, x) = ϕτ (z) ? ϕτ (x) + b (5) where b denotes a bias term which takes the value b∈R at every location, ? represents the operation of convolution. Eq. (5) performs an exhaustive search for the pattern z over the image x. The goal is to match the maximum value in the response map f to the target location. Detection tracker. To build a tracker based on object detection while avoiding the expensive process of proposal generation, we adopt a fully convolutional tracker, namely FCT, as shown in Fig. 3 which includes a classification branch and a regression branch. The classification branch predicts the location of the target and while the regression branch a 4D vector indicating the distances from the center of the target to the edges of its bounding box. Given the feature map F∈RH×W×C of a backbone CNN and the sum s of all strides applied in previous layers, each location (x, y) in F corresponds to (b s2c+ xs, b s 2c+ ys) in image. And we directly predict the class label and the regressed distances for each location in F [32]. It is possible that the same class of objects are considered as targets in one sequence but background objects in another one. Due to such variations and inconsistencies, only using a typical classifier to simply assign “1” to the target and “0” to the background for all sequences is likely to cause conflicts across sequences [25]. Therefore, the proposed classification branch separates domain-independent information from the last domain-specific layer to capture shared representations via shared layers. Specifically, in each domain the location (x, y) is considered as a positive sample if it falls into the groundtruth box and the class label c∗ is assigned 1. Otherwise, it is a negative sample (i.e. background) and the class label c∗ is set to 0. The regression branch outputs a 4D vector re∗ = (l∗, t∗, r∗, b∗) where l∗, t∗, r∗ and b∗ denote the distances from the location of the target to the four edges of its bounding box as shown in Fig. 3. The tracker finally outputs the classification score map c and the regression value re. The loss function for training is given as below: L(c, r) = 1 N N∑ i=1 Lcls(ci, c ∗ i ) + λ N [Where{c∗>0}] N∑ i=1 Lreg(rei, re ∗ i ), (6) where N denotes the total number of the video frames for training. 
3.3 Joint Training of Decision and Tracker Modules In this section, we detailed the joint training procedure of the DTNet, in which the decision and the tracker modules are trained end-to-end. Given K training sequences, for the j-th one we randomly extract a piece of training sequences Ij = {I1j , I2j , ..., ITj} with the corresponding ground truth Gj = {G1j , G2j , ..., GTj} in order, and each pair of adjacent frames is subject to a skip of n(0 6 n 6 5) frames with some probability. The initial target is sampled around the ground truth randomly in the first frame and regarded as the template. The switch network optionally evaluates the features encoded in the template and the observation inheriting from the previous frame and then selects a tracker. The reward during the switching process is defined as: rt(s, ω) = ηL ·DIoU , IF (Pt > thhi and P ∗t < thlo) ηL ·DIoU , IF (Pt < thlo and P ∗t > thhi) ηM ·DIoU , IF (Pt > thhi and P ∗t > thhi) ηS ·DIoU , IF (Pt < thlo and P ∗t < thlo) (7) where Pt is the intersection-over-union (IoU) between the predicted bounding box Bt from the selected tracker and the ground truth Gt and P ∗t is the IoU corresponding to the unselected tracker. DIoU is the difference value between them. Actually, three cases are divided by the above setup of reward: (1) One succeeds while the other fails; (2) Both succeed; (3) Both fail. Accordingly, three enlarger coefficients are assigned in descending order, which leads to select the agent with higher accuracy while guides the tracking competition. The samples are collected by the unselected tracker respectively to update the corresponding network. In other words, we keep on training the worse one to maintain the competitive relationship between the two trackers. A new state s′ is updated for the current frame by the prediction. Then, the agent takes the probability of βω,ν(s′) to terminate the previous option and re-evaluate the value of options. For the switch module, the ‘Critic’ model QΩ(s, ω) can be learned using the Bellman equation [22], the learning process is achieved by minimizing the following loss: L = 1 N N∑ i=1 (yi −QΩ(si, ωi; θ))2 (8) where yi = r(si, wi)+γ(1−βωi,ν(s′i)QΩ(s′i, ωi))+βωi,ν(s′i)VΩ(s′i). And the ‘Actor’ module βω,ν updates as follows: ν = ν − αν ∂βω,ν(s ′) ∂ν (QΩ(s ′, ω)− VΩ(s′)). (9) Please refer to Algorithm 1 in the supplementary material available at the website mentioned in the abstract for the details of the whole training process. 4 Experimental Results In this section, we conduct comparative evaluations on the benchmarks including OTB-2013 [37], OTB-50 [38], OTB-100 [38], LaSOT [12], TrackingNet [24], UAV123 [23] and VOT18 [18] with three considerations: 1) We compare the proposed DTNet with state-of-the-art trackers; 2) To demonstrate the effectiveness of the switch module, we compare the DTNet with some of its variants by employing different rackers; 3) We further compare our method with the trackers fused at the feature level to demonstrate the advantage of the decision-based strategy. Apart from the experimental results shown in this section, please refer to the website mentioned in the abstract for the supplementary results including the online visualization of the decision module of the proposed DTNet and the comparison with the state-of-the-art tracking methods. Implementation details. We build the switch and the termination networks by three convolutional layers and two fully connected layers, which receive the image patch of 84×84 as input. 
The sequences from VID [28] and Youtube_BB [27] datasets are used to train the DTNet including the decision and the tracker modules for 6 ×105 episodes with Adam optimizer. We set the capacity of the replay buffer to 5000, the learning rate to 0.0001, the discount factor γ in Equ. 1 to 0.2, the batch size to 128 and nκ is set to 3 ×105. For the -greedy algorithm, is set to 1 and decays to 0.1 gradually. The experiments were implemented in PyTorch on a computer with a 3.70GHz Intel Core i7-8700K CPU and two NVIDIA GTX 1080Ti GPUs. The average tracking speed is 36 FPS. Comparison with state-of-the-art trackers. We compare the DTNet (with FCT+SiamFC in this version) with the state-of-the-art trackers including CNN_SVM [16], SiamFC [4], DSST [9], ECO [8], SRDCF [10], SCT [6], HDT [26] and Staple [3]. In Fig. 4, we can observe that our DTNet achieves state-of-the-art performance in terms of the success rate and the precision on the OTB-2013, OTB-50 and OTB-100 datasets. It is noteworthy that although DTNet performs slightly worse than ECO, it is much more efficient than ECO. The high performance of DTNet can be attributed to two aspects. First, the decision module intelligently selects a proper tracker for each frame instead of fusing two trackers that could conflict with each other. Second, we improve the original detection tracker by considering the domain knowledge, and makes the discriminating course more flexible through eliminating the candidate boxes. Comparison with variants. We conduct ablation study to investigate the effectiveness of the DTNet. By comparing the quantitative results listed in the top half of Table 1 with those in its bottom half, we can see that the DTNet which combines two trackers always outperforms its ablated version which merely uses one tracker no matter what a single tracker is used. To further validate the effectiveness of the decision module of the DTNet, We have included a manually designed rule based decision module for comparison. It is implemented by picking a particular tracker based on the confidence score of tracking subject to the thresholds set manually. The results are given in the eighth row of Table 1. Apparently, our automated decision module significantly outperforms such a handcrafted one which relies on handcrafted thresholds for tracker selection. Besides, our method is more efficient as it only performs each tracker once in the decisionmaking process while the handcrafted module has to carry out both trackers and use their output confidence scores for decision. We also compare the DTNet with its variants by exploring different combinations of trackers including ACT [5], FCT, ATOM [7], SiamFC [4], CFNet [33] and SiamRPN++ [19]. It is noteworthy that ACT, FCT and ATOM are detection trackers while SiamFC, CFNet and SiamRPN++ are template trackers. We always combine a detection tracker and a template tracker to form a variant of the DTNet for comparison. The results show that the DTNet constantly outperforms each individual tracker in terms of AUC and precision on different benchmarks, which demonstrate the effectiveness of the decision module. Table 1 also clearly shows that the DTNet makes a good balance between the performance and the efficiency compared with its variants which have different combinations of trackers. Furthermore, our framework can be easily extended to more trackers. For instance, the results of using three trackers including ACT, FCT and SiamFC are shown in the penultimate row of the Table 1. 
Considering both accuracy and efficiency, we use two trackers in the proposed DTNet. Visualization of the decision module. Fig. 5 shows the visualization of the decision module during training, where the outputs of the switch network, i.e. the Q values for the SiamFC and the FCT trackers estimated via the HRL (see Eq. 1), are displayed on top of each frame. It can be seen that in the first frame, the template tracker SiamFC works well, while the detection tracker FCT outperforms it in the second frame according to the Q value. This leads to a high probability of terminating the current template tracker and switching to the detection tracker. In the third frame, since the detection tracker outperforms the template tracker again, the termination probability remains low and thus the detection tracker remains in use, as desired. Comparison with fusion-based trackers. We further compare our DTNet with some trackers based on the fusion strategy. According to the quantitative results listed in Table 2, our method exhibits the best performance among real-time trackers on all four datasets. By associating Table 1 with Table 2, we find that although either FCT or SiamFC alone is outperformed by some state-of-the-art fusion-based trackers such as HSME (on OTB-2013) and MCCT-H (on OTB-2013 and OTB-100), the DTNet that combines them in a switching manner through the decision module performs significantly better than both. Such a finding demonstrates that the switching-based combination delivered by the decision module of the proposed DTNet is superior to the fusion-based combination broadly adopted by existing state-of-the-art trackers. 5 Conclusions In this paper, we proposed an ensemble framework, namely DTNet, composed of a decision module and a tracker module for visual tracking. Via HRL, the decision module enables the detection and the template trackers that form the tracker module to compete with each other so that the DTNet can switch between them for different scenes. Differing from the fusion-based methods, the DTNet learns an online decision to pick a particular tracker for a particular scene. Besides, we presented a new proposal-free detection tracker, which does not require candidate bounding box proposals for the target and thus makes the discriminating process flexible. Extensive results on several benchmarks demonstrated the superiority of the proposed DTNet over existing methods. Broader Impact In this paper, the authors introduce DTNet, which learns an online decision for switching to a proper tracker to conduct visual tracking in the current video frame. Although this paper only validates the efficacy of the decision learning framework in the specific scenario of visual tracking, it can actually be extended to other video-based computer vision tasks such as person re-identification, motion capture and action recognition, etc. It can be applied by defining a reward concerning the specific task and replacing the two trackers used in this paper with some other algorithms. To this end, the proposed DTNet could be of broad interest in different fields such as the transportation industry, film industry, sport industry, etc. As a method for visual tracking, the DTNet can inevitably be used for monitoring and security purposes. As a learning-based method, what the DTNet can track, a person or a pet, essentially depends on the training data.
Therefore, the risk of applying our method to tasks that could raise ethical issues can be mitigated by imposing strict and secure data protection regulations such as the GDPR. Without a sufficiently large amount of high-quality data containing the particular target, the DTNet cannot deliver good tracking in the particular task. Acknowledgements We acknowledge the support of the National Key Research and Development Plan of China under Grant 2017YFB1300205, the National Natural Science Foundation of China under Grants 61991411 and U1913204, the Shandong Major Scientific and Technological Innovation Project 2018CXGC1503, the Young Taishan Scholars Program of Shandong Province No.tsqn201909029 and the Qilu Young Scholars Program of Shandong University No.31400082063101.
1. What is the primary contribution of the paper regarding tracking methods? 2. What are the strengths of the proposed ensemble framework and detection tracker? 3. What are the weaknesses of the paper regarding its experimental evaluations and comparisons with other works? 4. How does the reviewer assess the novelty and effectiveness of the proposed "predict-and-evaluate" tactic through Hierarchical Reinforcement Learning? 5. What additional evaluations or visual plots does the reviewer suggest for improving the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The primary contribution of this paper is the ensemble framework which can switch between a template tracker and a detection tracker based on the tracking accuracy. This is driven by the observation that the two types of trackers are good at tracking targets in different cases. Besides, a detection tracker without the need of generating proposals is presented. Experimental results show that the proposed method improved the tracking performance by making good use of the advantages of both the detection and the template trackers. Strengths Overall, I am positive about the paper due to the following aspects of the work. 1) Prior methods (fusion-based methods) involving multiple trackers typically try to combine information extracted at the feature level, while the proposed method seeks information gathering at the tracker level via a competing mechanism. The experimental results show that the proposed scheme of combination at the tracker level outperforms the combination at the feature level. 2) The proposed framework represents a novel ensemble strategy which essentially carries out a “predict-and-evaluate” tactic through Hierarchical Reinforcement Learning (HRL). The visualization of the decision module shows that this design is reasonable and effective. 3) The authors build a fully convolutional tracker based on object detection which does not rely on the step of proposal generation, leading to an accurate and efficient detection of the bounding box of the target. Weaknesses The following concerns should be addressed to improve the paper. 1) The authors have validated the effectiveness of the proposed framework on the OTB-50, OTB-100, OTB-2015 and LASOT datasets. Such an evaluation is good, but all of these benchmarks evaluate the competing methods by the metrics of AUC and precision. So I suggest the authors perform an additional evaluation on the VOT dataset to compare the methods in terms of different metrics like Accuracy, Robustness, and EAO. 2) It is nice to see the visualization of the decision module provided in Fig. 5. I am also interested in some informative visual plot with regard to the cases listed in Eq. 7. In particular, how often does the reward estimated via Eq. 7 during the switching process fall into the third case (i.e. neither tracker is able to track the target)?
NIPS
Title Adversarial Attacks on Graph Classifiers via Bayesian Optimisation Abstract Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models. An open-source implementation is available at https://github.com/xingchenwan/grabnel. 1 Introduction Graphs are a general-purpose data structure consisting of entities represented by nodes and edges which encode pairwise relationships. Graph-based machine learning models have been widely used in a variety of important applications such as semi-supervised learning, link prediction, community detection and graph classification [3, 51, 14]. Despite the growing interest in graph-based machine learning, it has been shown that, like many other machine learning models, graph-based models are vulnerable to adversarial attacks [33, 17]. If we want to deploy such models in environments where the risk and costs associated with a model failure are high, e.g. in social networks, it is crucial to understand and assess the model stability and vulnerability by simulating adversarial attacks. Adversarial attacks on graphs can be aimed at different learning tasks. This paper focuses on graph-level classification, where given an input graph (potentially with node and edge attributes), we wish to learn a function that predicts a property of interest related to the graph. Graph classification is an important task with many real-life applications, especially in bioinformatics and chemistry [24, 25]. For example, the task may be to accurately classify whether a molecule, modelled as a graph whereby nodes represent atoms and edges model bonds, inhibits HIV replication or not. Although there are a few attempts at performing adversarial attacks on graph classification [10, 23], they all operate under unrealistic assumptions such as the need to query the target model a large number of times or to access a portion of the test set to train the attacking agent. To address these limitations, we formulate the adversarial attack on graph classification as a black-box optimisation problem and solve it with Bayesian optimisation (BO), a query-efficient state-of-the-art zeroth-order black-box optimiser. Unlike existing work, our method is query-efficient, parsimonious in perturbations and does not require policy training on a separate labelled dataset to effectively attack a new sample.
Another benefit of our method is that it can be easily adapted to perform various modes of attack such as deleting or rewiring edges and node injection. Furthermore, we investigate the topological properties of the successful adversarial examples found by our method and offer valuable insights into the connection between graph topology changes and model robustness. The main contributions of our paper are as follows. First, we introduce a novel black-box attack for graph classification, GRABNEL¹, which is both query-efficient and parsimonious. We believe this is the first work on using BO for adversarial attacks on graph data. Second, we analyse the generated adversarial examples to link the vulnerability of graph-based machine learning models to the topological properties of the perturbed graph, an important step towards interpretable adversarial examples that has been overlooked by the majority of the literature. Finally, we evaluate our method on a range of real-world datasets and scenarios including detecting the spread of fake news on Twitter, which to the best of our knowledge is the first analysis of this kind in the literature. 2 Proposed Method: GRABNEL Problem Setup A graph $G = (\mathcal{V}, \mathcal{E})$ is defined by a set of nodes $\mathcal{V} = \{v_i\}_{i=1}^n$ and edges $\mathcal{E} = \{e_i\}_{i=1}^m$, where each edge $e_k = \{v_i, v_j\}$ connects nodes $v_i$ and $v_j$. The overall topology can be represented by the adjacency matrix $A \in \{0, 1\}^{n \times n}$, where $A_{ij} = 1$ if the edge $\{v_i, v_j\}$ is present². The attack objective in our case is to degrade the predictive performance of the trained victim graph classifier $f_\theta$ by finding a graph $G'$ perturbed from the original test graph $G$ (ideally with the minimum amount of perturbation) such that $f_\theta$ produces an incorrect class label for $G$. In this paper, we consider the black-box evasion attack setting, where the adversary agent cannot access/modify the victim model $f_\theta$ (i.e. network architecture, weights $\theta$ or gradients) or its training data $\{(G_i, y_i)\}_{i=1}^L$; the adversary can only interact with $f_\theta$ by querying it with an input graph $G'$ and observing the model output $f_\theta(G')$ as pseudo-probabilities over all classes in a $C$-dimensional standard simplex. Additionally, we assume that sample efficiency is highly valued: we aim to find adversarial examples with the minimum number of queries to the victim model. We believe that this is a practical and difficult setup that accounts for the prohibitive monetary, logistic and/or opportunity costs of repeatedly querying a (possibly huge and complicated) real-life victim model. With a high query count, the attacker may also run a higher risk of getting detected. Formally, the objective function of our BO attack agent can be formulated as a black-box maximisation problem:

$$\max_{G' \in \Psi(G)} \mathcal{L}_{\mathrm{attack}}\big(f_\theta(G'), y\big) \quad \text{s.t.} \quad y = \arg\max f_\theta(G) \quad (1)$$

¹Stands for Graph Adversarial attack via BayesiaN Efficient Loss-minimisation.
²We discuss unweighted graphs for simplicity; our method may also handle other graph types.

where $f_\theta$ is the pretrained victim model that remains fixed in the evasion attack setup and $y$ is the correct label of the original input $G$. Denoting the output logit for class $y$ as $f_\theta(G)_y$, the attack loss $\mathcal{L}_{\mathrm{attack}}$ can be defined as:

$$\mathcal{L}_{\mathrm{attack}}\big(f_\theta(G'), y\big) = \begin{cases} \max_{t \in \mathcal{Y},\, t \neq y} \log f_\theta(G')_t - \log f_\theta(G')_y & \text{(untargeted attack)} \\ \log f_\theta(G')_t - \log f_\theta(G')_y & \text{(targeted attack on class } t\text{)} \end{cases} \quad (2)$$

where $f_\theta(\cdot)_t$ denotes the logit output for class $t$.
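For illustration, a minimal PyTorch sketch of the attack loss in Eq. (2) is given below; the function name is ours, and we assume the victim returns raw logits from which the log pseudo-probabilities are computed.

```python
import torch

def attack_loss(logits, y, target=None):
    """Attack loss of Eq. (2). logits: victim output of shape (C,);
    y: true label; target: target class for a targeted attack, else None."""
    log_probs = torch.log_softmax(logits, dim=-1)  # log pseudo-probabilities
    if target is None:
        # Untargeted: best wrong class vs. the true class
        wrong = torch.cat([log_probs[:y], log_probs[y + 1:]])
        return wrong.max() - log_probs[y]
    return log_probs[target] - log_probs[y]        # targeted on class `target`

# A positive value means the perturbed graph G' is already misclassified.
logits = torch.tensor([1.2, 0.3, -0.5])
print(attack_loss(logits, y=0))                    # untargeted
print(attack_loss(logits, y=0, target=2))          # targeted on class 2
```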
Such an attack loss definition is commonly used in both the traditional image attack and the graph attack literature [4, 52], although our method is compatible with any choice of loss function. Furthermore, $\Psi(G)$ refers to the set of possible $G'$ generated from perturbing $G$. In this work, we experiment with a diverse set of attack modes to show that our attack method can be generalised to different setups:

• creating/removing an edge: we create perturbed graphs by flipping the connection of a small set of node pairs $\delta A = \{\{u_i, v_i\}\}_{i=1}^{\Delta}$ of $G$ following previous works [52, 10];
• rewiring or swapping edges: similar to [23], we select a triplet $(u, v, s)$ where we either rewire the edge $(u \to v)$ to $(u \to s)$ (rewire), or exchange the edge weights $w(u, v)$ and $w(u, s)$ (swap);
• node injection: we create new nodes together with their attributes and connections in the graph.

The overall routine of our proposed GRABNEL is presented in Fig. 1 (and in pseudo-code form in App. A), and we now elaborate on each of its key components. Surrogate model The success of BO hinges upon the surrogate model choice. Specifically, such a surrogate model needs to 1) be flexible and expressive enough to locally learn the latent mapping from a perturbed graph $G'$ to its attack loss $\mathcal{L}_{\mathrm{attack}}(f_\theta(G'), y)$ (note that this is different from, and generally easier than, learning $G' \to y$, which is the goal of the classifier $f_\theta$), 2) admit a probabilistic interpretation of uncertainty – this is key for the exploration-exploitation trade-off in BO, yet also 3) be simple enough that the said mapping can be learned with a small number of queries to $f_\theta$ to preserve sample efficiency. Furthermore, given the combinatorial nature of the graph search space, it also needs to 4) be capable of scaling to large graphs (e.g. in the order of $10^3$ nodes or more) typical of common graph classification tasks with reasonable run-time efficiency. Additionally, given that BO has been predominantly studied in the continuous domain, which is significantly different from the present setup, the design of an appropriate surrogate is highly non-trivial. To handle this set of conflicting desiderata, we propose to first use a Weisfeiler-Lehman (WL) feature extractor to extract a vector space representation of $G$, followed by a sparse Bayesian linear regression which balances performance with efficiency and gives a probabilistic output. With reference to Fig. 1, given a perturbed graph $G'$ as a proposed adversarial sample, the WL feature extractor first extracts a vector representation $\phi(G')$ in line with the WL subtree kernel procedure (but without the final kernel computation) [30]. For the case where the node features are discrete, let $x_0(v)$ be the initial node feature of node $v \in \mathcal{V}$ (note that the node features can be either scalars or vectors); we iteratively aggregate and hash the features of $v$ with those of its neighbours $\{u_i\}_{i=1}^{\deg(v)}$ using the original WL procedure at all nodes to transform them into discrete labels:

$$x_{h+1}(v) = \mathrm{hash}\big(x_h(v), x_h(u_1), \ldots, x_h(u_{\deg(v)})\big), \quad \forall h \in \{0, 1, \ldots, H-1\}, \quad (3)$$

where $H$ is the total number of WL iterations, a hyperparameter of the procedure. At each level $h$, we compute the feature vector $\phi_h(G') = [c(G', \mathcal{X}_{h1}), \ldots, c(G', \mathcal{X}_{h|\mathcal{X}_h|})]^\top$, where $\mathcal{X}_h$ is the set of distinct node features $x_h$ that occur in all input graphs at the current level and $c(G', x_h)$ is the counting function that counts the number of times a particular node feature $x_h$ appears in $G'$.
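A minimal sketch of the discrete-feature WL extraction of Eq. (3) is shown below; the hash choice and data structures are our own assumptions for illustration, not the authors' implementation. In practice, the label vocabulary must be shared across all queried graphs so that the per-level count vectors align into the feature matrix Φ.

```python
from collections import Counter

def wl_feature_counts(adj_list, node_labels, H=2):
    """WL subtree features (Eq. 3): iteratively hash each node's label together
    with its (sorted) neighbour labels, then count label occurrences per level.
    adj_list: {node: [neighbours]}; node_labels: {node: hashable label}."""
    labels = dict(node_labels)
    counts = [Counter(labels.values())]          # level-0 counts
    for _ in range(H):
        labels = {
            v: hash((labels[v], tuple(sorted(labels[u] for u in adj_list[v]))))
            for v in adj_list
        }
        counts.append(Counter(labels.values()))  # phi_h(G'): level-h label counts
    return counts                                # concatenating levels gives phi(G')

# Toy graph: a path 0-1-2 with initial labels "a", "a", "b"
adj = {0: [1], 1: [0, 2], 2: [1]}
print(wl_feature_counts(adj, {0: "a", 1: "a", 2: "b"}, H=1))
```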
For the case with continuous node features and/or weighted edges, we instead use the modified WL procedure proposed in [36]:

$$x_{h+1}(v) = \frac{1}{2}\left(x_h(v) + \frac{1}{\deg(v)} \sum_{i=1}^{\deg(v)} w(v, u_i)\, x_h(u_i)\right), \quad \forall h \in \{0, 1, \ldots, H-1\}, \quad (4)$$

where $w(v, u_i)$ denotes the (non-negative) weight of edge $e_{\{v, u_i\}}$ (1 if the graph is unweighted), and the feature at level $h$ is simply $\phi_h(G') = \mathrm{vec}(X_h)$, where $X_h = [x_h(v_1), \ldots, x_h(v_n)]$ is the feature matrix of graph $G'$ at level $h$ obtained by collecting the features at each node and $\mathrm{vec}(\cdot)$ denotes the vectorisation operator. In both cases, at the end of $H$ WL iterations we obtain the final feature vector $\phi(G') = \mathrm{concat}\big(\phi_1(G'), \ldots, \phi_H(G')\big)$ for each of the $n_{G'}$ training graphs, forming the feature matrix $\Phi = [\phi(G'_1), \ldots, \phi(G'_{n_{G'}})]^\top \in \mathbb{R}^{n_{G'} \times D}$ to be passed to the Bayesian regressor. It is particularly worth noting that the training graphs here denote inputs used to train the surrogate model of the attack agent and are typically perturbed versions of a test graph $G$ of the victim model; they are not the graphs used to train the victim model itself: in an evasion attack setup, the model is considered frozen and the training inputs cannot be accessed by the attack agent at any point in the pipeline. The WL iterations capture both information related to individual nodes and topological information (via neighbourhood aggregation), and have been shown to have comparable distinguishing power to some Graph Neural Network (GNN) models [26]; hence the procedure is expressive. Alternative surrogate choices could be, for example, GNNs with the final fully-connected layer replaced by a probabilistic linear regression layer such as the one proposed in [31]. However, in contrast to these, our extraction process $G' \to \phi(G')$ requires no learning from data (we only need to learn the Bayesian linear regression weights) and therefore should lead to better sample efficiency. Alternatively, we may also use a Gaussian Process (GP) surrogate, such as the Gaussian Process with Weisfeiler-Lehman Kernel (GPWL) model proposed in [29] that directly uses a GP model together with a WL kernel. Nonetheless, while GPs are theoretically more expressive (although we empirically show in App. D.1 that in most cases their predictive performances are comparable), they are also much more expensive, with a cubic scaling w.r.t. the number of training inputs. Furthermore, GPWL is designed specifically for neural architecture search, which features small, directed graphs with discrete node features only; the GRABNEL surrogate, on the other hand, covers a much wider scope of applications. When we select a large $H$, or if there are many training inputs and/or the input graph(s) have a large number of nodes/edges, there will likely be many unique WL features and the resulting feature matrix will be very high-dimensional, which would lead to high-variance estimates of the regression coefficients $\boldsymbol{\alpha}$ if $n_{G'}$ (the number of graphs used to train the surrogate of the attack agent) is comparatively small. To attain a good predictive performance in such a case, we employ a Bayesian regression surrogate with the Automatic Relevance Determination (ARD) prior to learn the mapping $\Phi \to \mathcal{L}_{\mathrm{attack}}(f_\theta(G'), y)$, which regularises the weights and encourages sparsity in $\boldsymbol{\alpha}$ [42]:

$$\mathcal{L}_{\mathrm{attack}} \mid \Phi, \boldsymbol{\alpha}, \sigma_n^2 \sim \mathcal{N}(\boldsymbol{\alpha}^\top \Phi, \sigma_n^2 I), \quad (5)$$
$$\boldsymbol{\alpha} \mid \boldsymbol{\lambda} \sim \mathcal{N}(\mathbf{0}, \Lambda), \quad \mathrm{diag}(\Lambda) = \boldsymbol{\lambda}^{-1} = \{\lambda_1^{-1}, \ldots, \lambda_D^{-1}\}, \quad (6)$$
$$\lambda_i \sim \mathrm{Gamma}(k, \theta) \quad \forall i \in [1, D], \quad (7)$$

where $\Lambda$ is a diagonal covariance matrix. To estimate $\boldsymbol{\alpha}$ and the noise variance $\sigma_n^2$, we optimise the model marginal log-likelihood.
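As an illustration, the ARD-regularised Bayesian linear regression of Eqs. (5)–(7) can be approximated with an off-the-shelf implementation; the sketch below uses scikit-learn's ARDRegression as a stand-in, whose inference details and hyperpriors may differ from the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Phi: (n_graphs, D) WL feature matrix; losses: observed attack losses.
# Synthetic data: high-dimensional features, few samples, sparse true weights.
rng = np.random.default_rng(0)
Phi = rng.random((50, 200))
losses = Phi[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

surrogate = ARDRegression()      # ARD prior prunes irrelevant WL features
surrogate.fit(Phi, losses)

# Predictive mean and std for a new perturbed graph's features,
# which feed the BO acquisition function.
mean, std = surrogate.predict(Phi[:1], return_std=True)
print(mean, std)
```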
Overall, the WL routine scales as $O(Hm)$ and the Bayesian linear regression has a linear runtime scaling w.r.t. the number of queries; these ensure the surrogate is scalable to both larger graphs and/or a large number of graphs, both of which are commonly encountered in graph classification (see App. D.6 for a detailed empirical runtime analysis). Sequential perturbation selection In the default structural perturbation setting, given an attack budget of $\Delta$ (i.e. we are allowed to flip up to $\Delta$ edges of $G$), finding exactly the set of perturbations $\delta A$ that leads to the largest increase in $\mathcal{L}_{\mathrm{attack}}$ entails a combinatorial optimisation over $\binom{n^2}{\Delta}$ candidates. This is a huge search space in which it is difficult for the surrogate to learn meaningful patterns in a sample-efficient way, even for modestly-sized graphs. To tackle this challenge, we adopt the strategy illustrated in Fig. 2: given the query budget $B$ (i.e. the total number of times we are allowed to query $f_\theta$ for a given $G$), we assume $B \geq \Delta$, amortise $B$ into $\Delta$ stages, and focus on selecting one edge perturbation at each stage. While this strategy is greedy in the sense that it always commits the perturbation leading to the largest increase in loss at each stage, it is worth noting that we do not treat previously modified edges differently, and the agent can, and occasionally does as we observe empirically, “correct” previous modifications by flipping edges back: this is possible as the effect of edge selection is permutation-invariant. Another benefit of this strategy is that it can potentially make full use of the entire attack budget $\Delta$ while remaining parsimonious w.r.t. the amount of perturbation introduced, as it only progresses to the next stage and modifies $G$ further when it fails to find a successful adversarial example in the current stage. Optimisation of acquisition function At each BO iteration, the acquisition function $\alpha(\cdot)$ is optimised to select the next point(s) at which to query the victim model $f_\theta$. However, commonly used gradient-based optimisers cannot be applied to the discrete graph search space; a naïve strategy would be to randomly generate many perturbed graphs, evaluate $\alpha$ on all of them, and choose the maximiser(s) to query $f_\theta$ next. While potentially effective on modestly-sized $G$, especially with our sequential selection strategy, this strategy nevertheless discards any known information about the search space. Inspired by recent advances in BO in non-continuous domains [8, 38], we optimise $\alpha$ via an adapted version of the Genetic algorithm (GA) in [10], which is well-suited for our purpose but is not particularly sample-efficient, since many evolution cycles could be required for convergence. However, the latter is not a serious issue here as we only use the GA for acquisition optimisation, where we only query the surrogate instead of the victim model, a subroutine of BO that does not require sample efficiency. We outline its ingredients below: • Initialisation: While a GA typically starts with random sampling in the search space to fill the initial population, in our case we are not totally ignorant about the search space, as we may have already queried and observed $f_\theta$ with a few different perturbed graphs $G'$. A smoothness assumption on the search space would be that if a $G'$ with an edge $(u, v)$ flipped from $G$ led to a large $\mathcal{L}_{\mathrm{attack}}$, then another $G'$ with $(u, s)$, $s \notin \{u, v\}$, flipped is more likely to do so too.
To reflect this, we fill the initial population by mutating the top-$k$ queried $G'$s leading to the largest $\mathcal{L}_{\mathrm{attack}}$ seen so far in the current stage, where for a $G'$ with $(u, v)$ flipped from the base graph we 1) randomly choose an end node ($u$ or $v$) and 2) change that node to another node in the graph except $u$ or $v$, such that the perturbed edges in all children share one common end node with the parent. • Evolution: After the initial population is built, we follow the standard evolution routine by evaluating the acquisition function value of each member as its fitness, selecting the top-$k$ performing members as the breeding population, and repeating the mutation procedure from the initialisation step for a fixed number of rounds. At termination, we simply query $f_\theta$ with the graph(s) with the largest acquisition function value(s) seen during the GA (i.e. computing the loss in Fig. 2). 3 Related Works Adversarial attack on graph-based models There has been increasing attention on the study of adversarial attacks in the context of GNNs [33, 17]. One of the earliest models, Nettack, attacks a Graph Convolution Network (GCN) node classifier by optimising the attack loss of a surrogate model using a greedy algorithm [52]. Using a simple heuristic, DICE attacks node classifiers by adding edges between nodes of different classes and deleting edges connecting nodes of the same class [41]. However, these methods cannot be straightforwardly transferred to graph classification: for Nettack, unlike in node classification tasks, we have no access to the training input graphs or labels of the victim model at test time to train a similar surrogate in graph classification; for DICE (and also more recent works like [39]), node labels do not exist in graph classification (we only have a single label for the entire graph). We nonetheless acknowledge the other contributions of these works, such as the introduction of constraints to improve imperceptibility, in our experiments in Sec. 4. The first methods that do extend to graph classification include [10, 23]: [10] propose a number of techniques, including RL-S2V, which uses reinforcement learning to attack both node and graph classifiers in a black-box manner, and the GA-based attack, which we adapt into our BO acquisition optimisation. However, [10] primarily focus on the S2V victim model, do not emphasise sample efficiency, and, to train a policy that attacks in a one-shot manner on the test graphs, RL-S2V has to query repeatedly on a separate validation set. We empirically compare against it in App. D.2. Another related work is ReWatt [23], which similarly uses reinforcement learning but through rewiring. Compared to both these methods, GRABNEL does not require an additional validation set and is much more query-efficient. Other black-box methods without surrogate models have also been proposed that could potentially be applied to graph classification: [22] exploit common GNN structural biases to attack node features, while [5] relate graph embedding to graph signal processing and construct tailored attack objectives for different GNNs. In comparison to these works, which exploit the characteristics of existing architectures to varying degrees, we argue that the optimisation-based method proposed in our work is more flexible, agnostic to architecture choices, and should be more generalisable to new architectures.
Nonetheless, in cases where some architectural information is available, we believe there could be combinable benefits: for example, the importance scores proposed in [22] could be used as sampling weights to bias GRABNEL towards selecting more vulnerable nodes. We defer detailed investigations of such possibilities to future work. Finally, there have also been various previous works that focus on a different setup than ours: a white-box optimisation strategy (alternating direction method of multipliers) is proposed in [16]; [48, 44, 2] propose back-door attacks that involve poisoning of the training data before training and/or the test data at inference; [35] attack hierarchical graph pooling networks, but, similar to [52], the method requires access to training inputs/targets. Ultimately, a number of factors, including but not limited to 1) the existence/strictness of the query budget, 2) the strictness of the perturbation budget, 3) attacker capabilities and 4) the sizes of the graphs, decide which algorithm/setup is more appropriate and should be adopted in a problem-specific way. Nonetheless, we argue that our setup is both challenging and highly significant, as it resembles the capabilities a real-life attacker might have (no access to training data, no access to model parameters/gradients, and limited query/perturbation budgets). Adversarial attacks using BO BO as a means to find adversarial examples in the black-box evasion setting has been successfully proposed for classification models on tabular [34] and image data [28, 50, 32, 27]. However, we address the problem for graph classification models, which operate on structurally and topologically fundamentally different inputs. This implies several non-trivial challenges that require our method to go beyond the vanilla usage of BO: for example, the inputs cannot be readily represented as vectors as for tabular or image data, and the perturbations that we consider for such inputs are defined not on a continuous but on a discrete domain. 4 Experiments We validate the performance of the proposed method on a wide range of graph classification tasks with varying graph properties, including but not limited to the typical TU datasets considered in previous works [10, 23]. As a demonstration of the versatility of the proposed method, instead of considering a single mode of attack, which is often impossible in real life, we select the attack mode specific to each task. All additional details, including the statistics of the datasets used and the implementation details of the victim models and attack methods, are presented in App. C. TU Datasets We first conduct experiments on four common TU datasets [25], namely (in ascending order of average graph size) IMDB-M, PROTEINS, COLLAB and REDDIT-MULTI-5K. In all cases, unless specified otherwise, we define the attack budget $\Delta$ in terms of the maximum structural perturbation ratio $r$ defined in [7], where $\Delta \leq rn^2$. We similarly link the maximum number of queries $B$ allowed for individual graphs to their sizes as $B = 40\Delta$, thereby giving larger graphs, and thus potentially more difficult instances, higher attack³ and query budgets, similar to the conventional image adversarial attack literature [28]. In this work, unless otherwise specified, we set $r = 0.03$ for all experiments, and for comparison we consider a number of baselines, including random search and the GA introduced in [10]⁴.
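As a small worked example of this budget setup, the sketch below simply encodes $\Delta \leq rn^2$, $B = 40\Delta$ and the $2 \times 10^4$ per-graph query cap stated in the footnote; it is our own illustration, not the authors' code.

```python
import math

def budgets(n, r=0.03, queries_per_perturbation=40, query_cap=20_000):
    """Per-graph budgets: perturbation budget Delta <= r * n^2 and
    query budget B = 40 * Delta, capped at 2e4 queries per graph."""
    delta = max(1, math.floor(r * n * n))
    B = min(queries_per_perturbation * delta, query_cap)
    return delta, B

print(budgets(30))   # a small IMDB-M-sized graph
print(budgets(500))  # a larger COLLAB/REDDIT-sized graph hits the query cap
```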
On some task/victim model combinations, we also consider an additional simple gradient-based method which greedily adds or deletes edges based on the magnitude of the computed input gradient, similar to the gradient-based method described in [10] (note that this method is white-box, as access to parameter weights and gradients is required), and also similar in spirit to methods like Nettack [52]. To verify whether the proposed attack method can be used for a variety of classifier architectures, we also consider various victim models: we first use GCN [19] and the Graph Isomorphism Network (GIN) [45], which are the most commonly used in related works [33]. Considering the strong performance of hierarchical models in graph classification [12, 46], we also conduct some experiments on the Graph U-Net [12] as a representative of such architectures.

³Due to computational constraints, we cap the maximum number of queries at 2 × 10⁴ on each graph.
⁴The original implementation of RL-S2V, the primary algorithm in [10], primarily focuses on an S2V-based victim model [9]. We compare GRABNEL against it on the same dataset considered in [10] in App. D.2.

We show the classification performance of both victim models before and after attacks using various methods in Table 1, and we show the Attack Success Rate (ASR) against the (normalised) number of queries in Fig. 3. It is worth noting that, consistent with the image attack literature, we launch and consider attacks on the graphs that were originally classified correctly, and statistics such as the ASR are also computed on that basis. We report additional statistics, such as the evolution of the attack losses as a function of the number of queries for selected individual data points, in App. D.3. The results generally show that the attack method is effective against both GCN and GIN models, with GRABNEL typically leading to the largest degradation in victim predictions on all tasks, often performing on par with or better than Gradient-based, a white-box method. It is worth noting that although Gradient-based often performs strongly, there is no guarantee that it always does so: first, for general edge-flipping problems, Gradient-based computes gradients w.r.t. all possible edges (including those that do not currently exist), and an accurate estimation of such high-dimensional gradients can be highly difficult. Second, gradients only capture local information, and they are not necessarily accurate when used to extrapolate function values beyond that neighbourhood. However, relying on gradients to select edge perturbations constitutes such an extrapolation, as edge addition/deletion is binary and discrete. Lastly, on the tasks with larger graphs (e.g. COLLAB on GCN and GIN), due to the huge search spaces, we find that neither random search nor the GA could flip predictions effectively, except for some “easy” samples already lying close to the decision boundary; GRABNEL nonetheless performs well thanks to the effective constraining of the search space by the sequential selection of edge perturbations, which is typically more significant on the larger graphs. We report the results on the Graph U-Net victim model in Table 2: as expected, Graph U-Net performs better in terms of clean classification accuracy compared to the GCN and GIN models considered above, and it also seems more robust to all types of adversarial attacks on the PROTEINS dataset.
Nonetheless, in terms of relative performance margin, GRABNEL still outperforms both baselines considerably, demonstrating its flexibility and capability to conduct effective attacks even on more complicated and realistic victim models. As discussed, in real life, adversarial agents might encounter additional constraints beyond the number of queries to the victim model or the amount of perturbation introduced. To demonstrate that our framework can handle such constraints, we further carry out attacks on victim models using protocols identical to the above but with a variety of additional constraints considered in several previous works. Specifically, the scenarios considered, in ascending order of restrictiveness, are:

• Base: The base scenario is identical to the setup in Table 1 and Fig. 3;
• 2-hop: Edge addition between nodes $(u, v)$ is only permitted if $v$ is within 2-hop distance of $u$;
• 2-hop+rewire [23]: Instead of flipping edges, the adversarial agent is only allowed to rewire from nodes $(u, v)$ (where an edge exists) to $(u, w)$ (where no edge currently exists). Node $w$ must be within 2-hop distance of $u$.

We test on the PROTEINS dataset and show the results in Fig. 4: interestingly, the imposition of the 2-hop constraint itself leads to no worsening of performance – in fact, as we elaborate in Sec. 5, we find the phenomenon of adversarial edges remaining clustered within a relatively small neighbourhood to be a general pattern in many tasks. This implies that the 2-hop condition, which constrains the spatial relations of the adversarial edges, might already hold even without explicit specification, thereby explaining the marginal difference between the base and the 2-hop constrained cases in Fig. 4. While the additional rewiring constraint leads to (slightly) lower attack success rates, the performance of GRABNEL remains relatively robust in all scenarios considered. Image Classification Beyond the typical “edge flipping” setup on which existing research has mainly focused, we now consider a different setup involving attacks on the MNIST-75sp dataset [21, 20], which consists of weighted graphs with continuous attributes. The dataset is generated by first partitioning each MNIST image into around 75 superpixels with SLIC [1, 11] as the graph nodes (with the average superpixel intensity as the node attribute). The pairwise distances between the superpixels form the edge weights. We use the pre-trained ChebyGIN with attention model released by the original authors [20] (with an average validation classification accuracy of around 95%) as the victim model. Given that the edge values are no longer binary, simply flipping the edges (equivalent to setting edge weights to 0 and 1) is no longer appropriate. To generalise the sparse perturbation setup, and inspired by the edge rewiring studied in previous literature, we instead adopt an attack mode via swapping edges: each perturbation is defined by 3 end nodes $(u, v, s)$, where the edge weight $w(u, v)$ is swapped with the edge weight $w(u, s)$. We show the results in Fig. 5: GRABNEL-u and Random-u denote GRABNEL and random search under the untargeted attack, respectively, whereas GRABNEL-t denotes GRABNEL under the targeted attack, with each line denoting 1 of the 9 possible target classes in MNIST. We find that GRABNEL is surprisingly effective in attacking this victim model, almost completely degrading the victim (Fig. 5) with very few swapping operations (Fig. 6), even in the more challenging targeted setup.
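To make the swap operation concrete, here is a minimal sketch on a dense symmetric weighted adjacency matrix; the representation and the function name are our own assumptions.

```python
import numpy as np

def swap_edge_weights(A, u, v, s):
    """Swap mode: exchange the weights of edges (u, v) and (u, s) in a
    symmetric weighted adjacency matrix A, returning a perturbed copy."""
    A = A.copy()
    A[u, v], A[u, s] = A[u, s], A[u, v]
    A[v, u] = A[u, v]   # keep the matrix symmetric
    A[s, u] = A[u, s]
    return A

A = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.4],
              [0.1, 0.4, 0.0]])
print(swap_edge_weights(A, u=0, v=1, s=2))  # weights of (0,1) and (0,2) exchanged
```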
This near-complete degradation with so few swaps seems to suggest that, at least for the data considered, the victim model is very brittle towards carefully crafted edge swapping, with its predictive power seemingly hinging upon a very small number of key edges. We believe a thorough analysis of this phenomenon is of independent interest, which we defer to future work. Fake news detection As a final experiment, we consider the real-life task of attacking a GCN-based fake news detector trained on a labelled dataset from [37]. Each discussion cascade (i.e. a chain of tweets, replies and retweets) is represented as an undirected graph, where each node represents a Twitter account (with node features being key properties of the account such as age and number of followers/followees; see App. C for details) and each edge represents a reply/retweet. As a reflection of what a real-life adversary may and may not do, we note that modifying the connections or properties of existing nodes, which corresponds to modifying existing accounts and tweets, is considered impractical and is prohibited. Instead, we consider a node-injection attack mode (i.e. creating new malicious nodes and connecting them to existing ones): injecting nodes is equivalent to creating new Twitter accounts, and connecting them to the rest of the graph is equivalent to retweeting/replying to existing accounts. We limit the maximum number of injected nodes to 0.05N, and the maximum number of new edges that may be created per new node is set to the average number of edges an existing node has – in this context, this limits the number of retweets and replies the new accounts may make, to avoid easy detection. For each injected node, we initialise its node features in a way that reflects the characteristics of a new Twitter user (we outline the detailed procedure in App. C). We show the results in Fig. 8, where GRABNEL is capable of reducing the effectiveness of a GCN-based fake news classifier by a third. In this case, Random also performs reasonably well, as the discussion cascades are typically small, allowing any adversarial examples to eventually be found exhaustively. Ablation Studies GRABNEL benefits from a number of design choices, and it is important to understand the relative contribution of each to the performance. We find that in some tasks GRABNEL without the surrogate (i.e. random search with sequential perturbation selection; we term this variant SequentialRandom) is a very strong baseline in terms of final ASR, although the full GRABNEL is much better in terms of overall performance, sample efficiency and the ability to produce successful examples with few perturbations. The reader is referred to our ablation studies in App. D.5. Runtime Analysis Given the setup we consider (sample-efficient black-box attack with a minimal amount of perturbation), the cost of the algorithm should not be considered solely from the viewpoint of the computational runtime of the attack algorithm itself, and this is a primary reason why we use the (normalised) number of queries as the main cost criterion. Nonetheless, a runtime analysis is still informative, and we provide one in App. D.6. We find that GRABNEL maintains a reasonable overhead even on, e.g., graphs with ∼10³ nodes/edges, which are larger than most graphs in typical graph classification tasks.
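Before moving on, the node-injection mode used in the fake news experiment can be sketched with networkx as follows; the feature initialisation for a "new user" is a placeholder assumption (the paper details its scheme in App. C).

```python
import networkx as nx

def inject_node(G, neighbours, new_user_features):
    """Node injection: add one new account and connect it to a chosen set of
    existing nodes (i.e. retweets/replies), respecting the per-node edge budget."""
    new_id = max(G.nodes) + 1
    G.add_node(new_id, **new_user_features)
    for u in neighbours:  # len(neighbours) <= average degree of existing nodes
        G.add_edge(new_id, u)
    return new_id

G = nx.path_graph(5)  # toy stand-in for a discussion cascade
# Placeholder features mimicking a freshly created account
nid = inject_node(G, neighbours=[3, 4],
                  new_user_features={"age_days": 1, "followers": 0, "followees": 10})
print(nid, list(G.edges(nid)))
```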
5 Attack Analysis Having established the effectiveness of our method, in this section we provide a qualitative analysis of the common interpretable patterns behind the adversarial samples found, which provides further insight into the robustness of graph classification models against structural attacks. We believe such analysis is especially valuable, as it may facilitate the development of even more effective attack methods, and may provide insights useful for identifying real-life vulnerabilities and mounting more effective defences. We show examples of the adversarial samples in Fig. 7 (and Fig. 13 in App. D.3). We summarise some key findings below.

• Adversarial edges tend to cluster closely together: We find the distribution of the adversarial edges (either removals or additions) in a graph to be highly uneven, with many adversarial edges often sharing common end nodes or having small spatial distance to each other. This is empirically consistent with recent theoretical findings on the stability of spectral graph filters in [18]. From an attacker's point of view, this may provide a “prior” on the attack to constrain the search space, as the regions around existing perturbations should be exploited more; we leave a practical investigation of leveraging this to enhance attack performance to future work.

• Adversarial edges often attempt to destroy or modify community structures: For example, the original graphs in the IMDB-M dataset can be seen to have community structure, a graph-level topological property distinct from those studied in existing works analysing attack patterns on node-level tasks [43, 53]. When the GCN model is attacked, the attack tends to flip the edges between the communities, thereby destroying the structure by either merging communities or deleting edges within a cluster. On the other hand, the GIN examples tend to strengthen the community structures by adding edges within clusters and deleting edges between them. With similar observations also present in, for example, the PROTEINS dataset, this may suggest that the models are fragile to modifications of the community structure.

• Beware the low-degree nodes! While low-degree nodes are deemed less important in terms of degree centrality, we find some victim models to be vulnerable to manipulations on such nodes. Most prominently, in the Twitter fake news example, the malicious nodes almost never connect directly to the central node (the original tweet) but instead to a peripheral node. This finding corroborates the theoretical argument in [18], which shows that spectral graph filters are more robust towards edge flipping involving high-degree nodes than otherwise, and is also consistent with observations on node-level tasks [53], with the explanation that lower-degree nodes have larger influence in the neighbourhood aggregation of GCNs. Nonetheless, we note that changes at a higher-degree node are likely to cascade to more nodes in the graph than changes at low-degree nodes, and since graph classifiers aggregate across all nodes in the readout layer, the indirect changes of node representations also matter. Therefore, we argue that this phenomenon in graph classification is still non-trivial.

6 Conclusion Summary This work proposes a novel and flexible black-box method to attack graph classifiers using Bayesian optimisation. We demonstrate the effectiveness and query efficiency of the method empirically. Unlike many existing works, we qualitatively analyse the adversarial examples generated.
We believe such analysis is important to the understanding of the adversarial robustness of graph-based learning models. Finally, we would like to point out that a potential negative societal impact of our work is that bad actors might use our method to attack real-world systems such as fake news detection systems on social media platforms. Nevertheless, we believe that the experiments in our paper only serve as a proof of concept, and the benefit of raising awareness of the vulnerabilities of graph classification systems largely outweighs the risk. Limitations and Future Work Firstly, the current work only considers topological attacks, although the surrogate used is also compatible with attacks on node/edge features or hybrid attacks. Secondly, while we have evaluated several mainstream victim models, it would also be interesting to explore defences against adversarial attacks and to test GRABNEL in robust GNN setups such as those with advanced graph augmentations [47], randomised smoothing [48, 13] and adversarial detection [6]. Lastly, the current work is specific to graph classification; we believe it is possible to adapt it to attack other graph tasks by suitably modifying the loss function. We leave these for future work. Acknowledgement and Funding Disclosure The authors would like to acknowledge the following sources of funding in direct support of this work: XW and BR are supported by the Clarendon Scholarship at the University of Oxford; HK is supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems EP/L015897/1; AB thanks the Konrad-Adenauer-Stiftung and the Oxford-Man Institute of Quantitative Finance for their support. The authors would also like to thank the Oxford-Man Institute of Quantitative Finance for providing the computing resources necessary for this project. The authors declare no conflict of interest.
1. What is the main contribution of the paper regarding graph-level classifiers? 2. What are the strengths of the proposed approach, particularly in its use of Bayesian Optimization? 3. What are the weaknesses or concerns regarding the paper, such as the lack of detail in certain aspects of the proposed method? 4. How does the reviewer assess the computational efficiency of the proposed approach? 5. Are there any inconsistencies or errors in the paper that need to be addressed?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose GRABNEL, a black-box adversarial attack against graph-level classifiers. The intuition is that by leveraging Bayesian Optimization (BO) techniques, one can reduce the number of queries needed. The authors propose to use a Weisfeiler-Lehman (WL) feature extractor to extract vector representations of the input graphs. After that, a sparse Bayesian linear regression surrogate model is used to approximate the adversarial loss. The query budget is divided into different stages. For each stage, the attacker modifies the graphs and queries the target model before it runs out of query budget. The modified graphs, as well as the observed predictions, are fed into a genetic algorithm to generate the perturbation for the current stage. After that, the attacker queries the target model and updates the surrogate model accordingly. Experiments show that the proposed approach achieves a higher success rate than the baselines. Review Strengths The idea of introducing BO to graph adversarial attacks is interesting The authors provide a detailed analysis Comments The proposed approach achieves a higher success rate than the baselines, and the idea of introducing BO to graph adversarial attacks is interesting. However, there are some concerns about the paper. Sparse Bayesian linear regression models are used to estimate the adversarial loss of the target model. However, the details of these models are unclear. For example, what is the architecture of these models? How do they interact with the other parts of the algorithm? The paper would be more concrete if the authors could provide these details. Besides, I think it would be better to introduce Bayesian Optimization as background knowledge, as it would make the paper more comprehensive. Instead of training a separate shadow model, the proposed approach uses a sparse Bayesian linear regression surrogate model to approximate the adversarial loss. Since the approximation is done for a particular input sample, this process has to be repeated if the attacker wishes to generate more adversarial samples. This leads to the question of whether the proposed approach is computationally efficient. It would be better if the authors could elaborate on this. Some inconsistencies found in the paper: No reference to Fig. 6 is found. 'implementation details... are presented in App. C' (Line 211), which should be App. D. 'In fact, as we elaborate in Sec. 6...' (Line 247), which should be Sec. 5.
NIPS
Title Adversarial Attacks on Graph Classifiers via Bayesian Optimisation Abstract Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models. An open-source implementation is available at https://github.com/xingchenwan/grabnel. 1 Introduction Graphs are a general-purpose data structure consisting of entities represented by nodes and edges which encode pairwise relationships. Graph-based machine learning models has been widely used in a variety of important applications such as semi-supervised learning, link prediction, community detection and graph classification [3, 51, 14]. Despite the growing interest in graph-based machine learning, it has been shown that, like many other machine learning models, graph-based models are vulnerable to adversarial attacks [33, 17]. If we want to deploy such models in environments where the risk and costs associated with a model failure are high e.g. in social networks, it would be crucial to understand and assess the model stability and vulnerability by simulating adversarial attacks. Adversarial attacks on graphs can be aimed at different learning tasks. This paper focuses on graphlevel classification, where given an input graph (potentially with node and edge attributes), we wish to learn a function that predicts a property of interest related to the graph. Graph classification is an important task with many real-life applications, especially in bioinformatics and chemistry [24, 25]. For example, the task may be to accurately classify if a molecule, modelled as a graph whereby nodes represent atoms and edges model bonds, inhibits HIV replication or not. Although there are a few attempts on performing adversarial attacks on graph classification [10, 23], they all operate under unrealistic assumptions such as the need to query the target model a large number of times or access a portion of the test set to train the attacking agent. To address these limitations, we formulate the adversarial attack on graph classification as a black-box optimisation problem and solve it with Bayesian optimisation (BO), a query-efficient state-of-the-art zeroth-order black-box optimiser. Unlike existing work, our method is query-efficient, parsimonious in perturbations and 35th Conference on Neural Information Processing Systems (NeurIPS 2021) does not require policy training on a separate labelled dataset to effectively attack a new sample. 
Another benefit of our method is that it can be easily adapted to perform various modes of attacks such as deleting or rewiring edges and node injection. Furthermore, we investigate the topological properties of the successful adversarial examples found by our method and offer valuable insights on the connection between the graph topology change and the model robustness. The main contributions of our paper are as follows. First, we introduce a novel black-box attack for graph classification, GRABNEL1, which is both query efficient and parsimonious. We believe this is the first work on using BO for adversarial attacks on graph data. Second, we analyse the generated adversarial examples to link the vulnerability of graph-based machine learning models to the topological properties of the perturbed graph, an important step towards interpretable adversarial examples that has been overlooked by the majority of the literature. Finally, we evaluate our method on a range of real-world datasets and scenarios including detecting the spread of fake news on Twitter, which to the best of our knowledge is the first analysis of this kind in the literature. 2 Proposed Method: GRABNEL Problem Setup A graph G = (V, E) is defined by a set of nodes V = {vi}ni=1 and edges E = {ei}mi=1 where each edge ek = {vi, vj} connects between nodes vi and vj . The overall topology can be represented by the adjacency matrix A ∈ {0, 1}n×n where Aij = 1 if the edge {vi, vj} is present2. The attack objective in our case is to degrade the predictive performance of the trained victim graph classifier fθ by finding a graph G′ perturbed from the original test graph G (ideally with the minimum amount of perturbation) such that fθ produces an incorrect class label for G. In this paper, we consider the black-box evasion attack setting, where the adversary agent cannot access/modify the the victim model fθ (i.e. network architecture, weights θ or gradients) or its training data {(Gi, yi)}Li=1; the adversary can only interact with fθ by querying it with an input graph G′ and observe the model output fθ(G′) as pseudo-probabilities over all classes in a C-dimensional standard simplex. Additionally, we assume that sample efficiency is highly valued: we aim to find adversarial examples with the minimum number of queries to the victim model. We believe that this is a practical and difficult setup that accounts for the prohibitive monetary, logistic and/or opportunity costs of repeatedly querying a (possibly huge and complicated) real-life victim model. With a high query count, the attacker may also run a higher risk of getting detected. Formally, the objective function of our BO attack agent can be formulated as a black-box maximisation problem: max G′∈Ψ(G) Lattack ( fθ(G′), y ) s.t. y = arg max fθ(G) (1) 1Stands for Graph Adversarial attack via BayesiaN Efficient Loss-minimisation. 2We discuss the unweighted graphs for simplicity; our method may also handle other graph types. where fθ is the pretrained victim model that remains fixed in the evasion attack setup and y is the correct label of the original input G. Denote the output logit for the class y as fθ(G)y , the attack loss Lattack can be defined as: Lattack ( fθ(G′), y ) = { maxt∈Y,t6=y log fθ(G′)t − log fθ(G′)y (untargeted attack) log fθ(G′)t − log fθ(G′)y (targeted attack on class t), (2) where fθ(·)t denotes the logit output for class t. 
Such an attack loss definition is commonly used in both the traditional image attack and the graph attack literature [4, 52], although our method is compatible with any choice of loss function. Furthermore, $\Psi(G)$ refers to the set of possible $G'$ generated from perturbing $G$. In this work, we experiment with diverse modes of attack to show that our attack method generalises to different setups:

• creating/removing an edge: we create perturbed graphs by flipping the connection of a small set of node pairs $\delta A = \{\{u_i, v_i\}\}_{i=1}^{\Delta}$ of $G$, following previous works [52, 10];
• rewiring or swapping edges: similar to [23], we select a triplet $(u, v, s)$ where we either rewire the edge $(u \to v)$ to $(u \to s)$ (rewire), or exchange the edge weights $w(u, v)$ and $w(u, s)$ (swap);
• node injection: we create new nodes together with their attributes and connections in the graph.

The overall routine of our proposed GRABNEL is presented in Fig. 1 (and in pseudo-code form in App. A), and we now elaborate each of its key components.

Surrogate model The success of BO hinges upon the choice of surrogate model. Specifically, such a surrogate model needs to 1) be flexible and expressive enough to locally learn the latent mapping from a perturbed graph $G'$ to its attack loss $\mathcal{L}_{\mathrm{attack}}(f_\theta(G'), y)$ (note that this is different from, and generally easier than, learning $G' \to y$, which is the goal of the classifier $f_\theta$); 2) admit a probabilistic interpretation of uncertainty, which is key for the exploration-exploitation trade-off in BO; yet also 3) be simple enough that the said mapping can be learned with a small number of queries to $f_\theta$ to preserve sample efficiency. Furthermore, given the combinatorial nature of the graph search space, it also needs to 4) be capable of scaling to large graphs (e.g. in the order of $10^3$ nodes or more) typical of common graph classification tasks with reasonable run-time efficiency. Additionally, since BO has been predominantly studied in the continuous domain, which differs significantly from the present setup, the design of an appropriate surrogate is highly non-trivial. To handle this set of conflicting desiderata, we propose to first use a Weisfeiler-Lehman (WL) feature extractor to extract a vector-space representation of $G$, followed by a sparse Bayesian linear regression which balances performance with efficiency and gives a probabilistic output. With reference to Fig. 1, given a perturbed graph $G'$ as a proposed adversarial sample, the WL feature extractor first extracts a vector representation $\phi(G')$ in line with the WL subtree kernel procedure (but without the final kernel computation) [30]. For the case where the node features are discrete, let $x_0(v)$ be the initial node feature of node $v \in \mathcal{V}$ (note that the node features can be either scalars or vectors); we iteratively aggregate and hash the features of $v$ with those of its neighbours $\{u_i\}_{i=1}^{\deg(v)}$, using the original WL procedure at all nodes, to transform them into discrete labels:

$$x_{h+1}(v) = \mathrm{hash}\big(x_h(v), x_h(u_1), \ldots, x_h(u_{\deg(v)})\big), \quad \forall h \in \{0, 1, \ldots, H-1\}, \tag{3}$$

where $H$ is the total number of WL iterations, a hyperparameter of the procedure. At each level $h$, we compute the feature vector $\phi_h(G') = [c(G', \mathcal{X}_{h1}), \ldots, c(G', \mathcal{X}_{h|\mathcal{X}_h|})]^{\top}$, where $\mathcal{X}_h$ is the set of distinct node features $x_h$ that occur in all input graphs at the current level and $c(G', x_h)$ is the counting function that counts the number of times a particular node feature $x_h$ appears in $G'$.
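A minimal sketch of this discrete-feature case is given below. Note that Python's built-in hash is only a stand-in for the WL label-compression step; a practical implementation would share a consistent label dictionary across all graphs so that counts remain comparable between them.

```python
from collections import Counter

def wl_features(adj, labels0, H=2):
    """Sketch of the discrete WL feature extraction of Eq. (3).

    adj:     dict mapping each node to a list of its neighbours.
    labels0: dict mapping each node to its initial discrete label x_0(v).
    Returns a Counter over (level, label) pairs: a sparse version of the
    concatenated feature vectors phi_1(G'), ..., phi_H(G').
    """
    labels = dict(labels0)
    feats = Counter()
    for h in range(1, H + 1):
        new_labels = {}
        for v, neigh in adj.items():
            # aggregate own label with (sorted) neighbour labels, then hash
            new_labels[v] = hash((labels[v], tuple(sorted(labels[u] for u in neigh))))
        labels = new_labels
        feats.update((h, lab) for lab in labels.values())
    return feats

# Toy example: a triangle with one pendant node
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(wl_features(adj, {0: 'a', 1: 'a', 2: 'a', 3: 'b'}))
```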
For the case with continuous node features and/or weighted edges, we instead use the modified WL procedure proposed in [36]:

$$x_{h+1}(v) = \frac{1}{2}\left(x_h(v) + \frac{1}{\deg(v)} \sum_{i=1}^{\deg(v)} w(v, u_i)\, x_h(u_i)\right), \quad \forall h \in \{0, 1, \ldots, H-1\}, \tag{4}$$

where $w(v, u_i)$ denotes the (non-negative) weight of edge $e_{\{v, u_i\}}$ (1 if the graph is unweighted), and the feature at level $h$ is simply $\phi_h(G') = \mathrm{vec}(X_h)$, where $X_h = [x_h(1), \ldots, x_h(n)]$ is the feature matrix of graph $G'$ at level $h$, obtained by collecting the features at each node, and $\mathrm{vec}(\cdot)$ denotes the vectorisation operator. In both cases, at the end of $H$ WL iterations we obtain the final feature vector $\phi(G') = \mathrm{concat}\big(\phi_1(G'), \ldots, \phi_H(G')\big)$ for each training graph, and we stack these to form the feature matrix $\Phi = [\phi(G'_1), \ldots, \phi(G'_{n_{G'}})]^{\top} \in \mathbb{R}^{n_{G'} \times D}$ to be passed to the Bayesian regressor. It is particularly worth noting that the training graphs here denote inputs used to train the surrogate model of the attack agent, and they are typically perturbed versions of a test graph $G$ of the victim model; they are not the graphs used to train the victim model itself: in an evasion attack setup, the model is considered frozen and its training inputs cannot be accessed by the attack agent at any point in the pipeline. The WL iterations capture both information related to individual nodes and topological information (via neighbourhood aggregation), and have been shown to have distinguishing power comparable to some Graph Neural Network (GNN) models [26]; hence the procedure is expressive. Alternative surrogate choices could be, for example, GNNs with the final fully-connected layer replaced by a probabilistic linear regression layer, such as the one proposed in [31]. However, in contrast to these, our extraction process $G' \to \phi(G')$ requires no learning from data (we only need to learn the Bayesian linear regression weights) and therefore should lead to better sample efficiency. Alternatively, we could use a Gaussian Process (GP) surrogate, such as the Gaussian Process with Weisfeiler-Lehman Kernel (GPWL) model proposed in [29], which directly uses a GP together with a WL kernel. Nonetheless, while GPs are theoretically more expressive (although we empirically show in App. D.1 that in most cases their predictive performances are comparable), they are also much more expensive, with cubic scaling w.r.t. the number of training inputs. Furthermore, GPWL is designed specifically for neural architecture search, which features small, directed graphs with discrete node features only; the GRABNEL surrogate, on the other hand, covers a much wider scope of applications. When we select a large $H$, or when there are many training inputs and/or the input graph(s) have a large number of nodes/edges, there will likely be many unique WL features and the resulting feature matrix will be very high-dimensional, leading to high-variance estimates of the regression coefficients $\alpha$ if $n_{G'}$ (the number of graphs used to train the surrogate of the attack agent) is comparatively small. To attain good predictive performance in such a case, we employ a Bayesian regression surrogate with the Automatic Relevance Determination (ARD) prior to learn the mapping $\Phi \to \mathcal{L}_{\mathrm{attack}}(f_\theta(G'), y)$, which regularises the weights and encourages sparsity in $\alpha$ [42]:

$$\mathcal{L}_{\mathrm{attack}} \mid \Phi, \alpha, \sigma_n^2 \sim \mathcal{N}(\alpha^{\top} \Phi, \sigma_n^2 I), \tag{5}$$
$$\alpha \mid \lambda \sim \mathcal{N}(0, \Lambda), \quad \mathrm{diag}(\Lambda) = \lambda^{-1} = \{\lambda_1^{-1}, \ldots, \lambda_D^{-1}\}, \tag{6}$$
$$\lambda_i \sim \mathrm{Gamma}(k, \theta) \quad \forall i \in [1, D], \tag{7}$$

where $\Lambda$ is a diagonal covariance matrix. To estimate $\alpha$ and the noise variance $\sigma_n^2$, we optimise the model marginal log-likelihood.
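As a sketch of how such a surrogate can be fitted in practice, the snippet below uses scikit-learn's ARDRegression, whose Gamma hyperpriors over the weight precisions play the role of Eqs. (5)-(7); the data here are synthetic placeholders standing in for the real quantities, and this is an assumed substitute rather than the released implementation.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Placeholders: Phi would hold the WL feature vectors of the perturbed graphs
# queried so far, and `losses` the observed attack losses L_attack.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 200))
true_w = rng.standard_normal(200) * (rng.random(200) < 0.05)  # sparse ground truth
losses = Phi @ true_w + 0.01 * rng.standard_normal(32)

surrogate = ARDRegression()  # ARD prior shrinks irrelevant WL features towards zero
surrogate.fit(Phi, losses)

# Probabilistic prediction on a candidate perturbation's features; the
# (mean, std) pair is what the BO acquisition function consumes.
phi_new = rng.standard_normal((1, 200))
mean, std = surrogate.predict(phi_new, return_std=True)
```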
Overall, the WL routine scales as $O(Hm)$ and the Bayesian linear regression has linear runtime scaling w.r.t. the number of queries; together these ensure the surrogate scales to larger graphs and/or a large number of graphs, both of which are commonly encountered in graph classification (see App. D.6 for a detailed empirical runtime analysis).

Sequential perturbation selection In the default structural perturbation setting, given an attack budget of $\Delta$ (i.e. we are allowed to flip up to $\Delta$ edges of $G$), finding exactly the set of perturbations $\delta A$ that leads to the largest increase in $\mathcal{L}_{\mathrm{attack}}$ entails a combinatorial optimisation over $\binom{n^2}{\Delta}$ candidates. This search space is so huge that it is difficult for the surrogate to learn meaningful patterns in a sample-efficient way, even for modestly-sized graphs. To tackle this challenge, we adopt the strategy illustrated in Fig. 2: given the query budget $B$ (i.e. the total number of times we are allowed to query $f_\theta$ for a given $G$), we assume $B \geq \Delta$, amortise $B$ into $\Delta$ stages, and focus on selecting one edge perturbation at each stage. While this strategy is greedy in the sense that it always commits the perturbation leading to the largest increase in loss at each stage, it is worth noting that we do not treat previously modified edges differently, and the agent can, and occasionally does as we observe empirically, "correct" previous modifications by flipping edges back: this is possible because the effect of edge selection is permutation invariant. Another benefit of this strategy is that it can potentially make full use of the entire attack budget $\Delta$ while remaining parsimonious w.r.t. the amount of perturbation introduced, as it only progresses to the next stage and modifies $G$ further when it fails to find a successful adversarial example in the current stage.

Optimisation of acquisition function At each BO iteration, the acquisition function $\alpha(\cdot)$ is optimised to select the next point(s) at which to query the victim model $f_\theta$. However, commonly used gradient-based optimisers cannot be applied on the discrete graph search space; a naïve strategy would be to randomly generate many perturbed graphs, evaluate $\alpha$ on all of them, and choose the maximiser(s) to query $f_\theta$ next. While potentially effective on modestly-sized $G$, especially with our sequential selection strategy, this strategy nevertheless discards any known information about the search space. Inspired by recent advances in BO in non-continuous domains [8, 38], we instead optimise $\alpha$ via an adapted version of the genetic algorithm (GA) in [10], which is well-suited for our purpose but not particularly sample-efficient, since many evolution cycles could be required for convergence. The latter is not a serious issue here, however, as GA is used only for acquisition optimisation, a subroutine of BO in which we query the surrogate rather than the victim model and which therefore does not require sample efficiency. We outline its ingredients below:

• Initialisation: While GA typically starts with random sampling in the search space to fill the initial population, in our case we are not totally ignorant about the search space, as we may already have queried and observed $f_\theta$ with a few different perturbed graphs $G'$. A smoothness assumption on the search space would be that if a $G'$ with an edge $(u, v)$ flipped from $G$ led to a large $\mathcal{L}_{\mathrm{attack}}$, then another $G'$ with $(u, s)$, $s \notin \{u, v\}$, flipped is more likely to do so too.
To reflect this, we fill the initial population by mutating the top-$k$ queried $G'$s that led to the largest $\mathcal{L}_{\mathrm{attack}}$ seen so far in the current stage, where for a $G'$ with $(u, v)$ flipped from the base graph we 1) randomly choose an end node ($u$ or $v$) and 2) change that node to another node in the graph other than $u$ or $v$, so that the perturbed edge in each child shares one common end node with the parent.
• Evolution: After the initial population is built, we follow the standard evolution routine by evaluating the acquisition function value of each member as its fitness, selecting the top-$k$ performing members as the breeding population, and repeating the mutation procedure from the initialisation for a fixed number of rounds. At termination, we simply query $f_\theta$ (i.e. compute the loss in Fig. 2) with the graph(s) attaining the largest acquisition function value(s) seen during the GA run.

3 Related Works

Adversarial attacks on graph-based models There has been increasing attention on the study of adversarial attacks in the context of GNNs [33, 17]. One of the earliest models, Nettack, attacks a Graph Convolutional Network (GCN) node classifier by optimising the attack loss of a surrogate model using a greedy algorithm [52]. Using a simple heuristic, DICE attacks node classifiers by adding edges between nodes of different classes and deleting edges connecting nodes of the same class [41]. However, these methods cannot be straightforwardly transferred to graph classification: for Nettack, unlike in node classification tasks, we have no access to the victim model's training input graphs or labels at test time to train a similar surrogate in graph classification; for DICE (and also more recent works like [39]), node labels do not exist in graph classification (we only have a single label for the entire graph). We nonetheless adopt other contributions of these works, such as the introduction of constraints to improve imperceptibility, in our experiments in Sec. 4. The first methods that do extend to graph classification include [10, 23]: [10] propose a number of techniques, including RL-S2V, which uses reinforcement learning to attack both node and graph classifiers in a black-box manner, and a GA-based attack, which we adapt into our BO acquisition optimisation. However, [10] primarily focus on the S2V victim model, do not emphasise sample efficiency, and, to train a policy that attacks in a one-shot manner on the test graphs, RL-S2V has to query repeatedly on a separate validation set. We empirically compare against it in App. D.2. Another related work is ReWatt [23], which similarly uses reinforcement learning but attacks through rewiring. Compared to both these methods, GRABNEL does not require an additional validation set and is much more query-efficient. Other black-box methods without surrogate models have also been proposed that could potentially be applied to graph classification: [22] exploit common GNN structural bias to attack node features, while [5] relate graph embedding to graph signal processing and construct tailored attack objectives for different GNNs. In comparison to these works, which exploit the characteristics of existing architectures to varying degrees, we argue that the optimisation-based method proposed in our work is more flexible, agnostic to architecture choices, and should generalise better to new architectures.
Nonetheless, in cases where some architectural information is available, we believe there could be complementary benefits: for example, the importance scores proposed in [22] could be used as sampling-weight priors to bias GRABNEL towards selecting more vulnerable nodes. We defer detailed investigations of such possibilities to future work. Finally, there have also been various previous works that focus on setups different from ours: a white-box optimisation strategy (alternating direction method of multipliers) is proposed in [16]; [48, 44, 2] propose back-door attacks that involve poisoning of the training data before training and/or the test data at inference; [35] attack hierarchical graph pooling networks, but, similar to [52], the method requires access to training inputs/targets. Ultimately, a number of factors, including but not limited to 1) the existence/strictness of the query budget, 2) the strictness of the perturbation budget, 3) the attacker's capabilities and 4) the sizes of the graphs, decide which algorithm/setup is more appropriate, and this should be judged in a problem-specific way. Nonetheless, we argue that our setup is both challenging and highly significant, as it resembles the capabilities a real-life attacker might have (no access to training data, no access to model parameters/gradients, and limited query/perturbation budgets).

Adversarial attacks using BO BO as a means to find adversarial examples in the black-box evasion setting has been successfully proposed for classification models on tabular [34] and image data [28, 50, 32, 27]. However, we address the problem for graph classification models, which operate on inputs that are fundamentally different in structure and topology. This implies several non-trivial challenges that require our method to go beyond the vanilla usage of BO: for example, the inputs cannot readily be represented as vectors as for tabular or image data, and the perturbations we consider for such inputs are defined not on a continuous but on a discrete domain.

4 Experiments

We validate the performance of the proposed method on a wide range of graph classification tasks with varying graph properties, including but not limited to the typical TU datasets considered in previous works [10, 23]. As a demonstration of the versatility of the proposed method, instead of considering a single mode of attack, which is often impossible in real life, we select the attack mode specific to each task. All additional details, including the statistics of the datasets used and the implementation details of the victim models and attack methods, are presented in App. C.

TU Datasets We first conduct experiments on four common TU datasets [25], namely (in ascending order of average graph size) IMDB-M, PROTEINS, COLLAB and REDDIT-MULTI-5K. In all cases, unless specified otherwise, we define the attack budget $\Delta$ in terms of the maximum structural perturbation ratio $r$ defined in [7], where $\Delta \leq r n^2$. We similarly link the maximum number of queries $B$ allowed for individual graphs to their sizes via $B = 40\Delta$, thereby giving larger graphs, and thus potentially more difficult instances, higher attack and query budgets, similar to the conventional image adversarial attack literature [28] (due to computational constraints, we cap the maximum number of queries at $2 \times 10^4$ per graph). In this work, unless otherwise specified, we set $r = 0.03$ for all experiments; for comparison we consider a number of baselines, including random search and the GA introduced in [10] (the original implementation of RL-S2V, the primary algorithm in [10], focuses on an S2V-based victim model [9]; we compare GRABNEL against it on the same dataset considered in [10] in App. D.2).
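For concreteness, the budget rules above can be read as the small helper below; the function and its defaults are our illustrative reading of the setup, including the query cap just mentioned.

```python
import math

def budgets(n, r=0.03, queries_per_flip=40, query_cap=20_000):
    """Illustrative helper (names are ours) for the budget rules of Sec. 4."""
    delta = math.floor(r * n * n)                  # attack budget: Delta <= r * n^2
    b = min(queries_per_flip * delta, query_cap)   # B = 40 * Delta, capped
    return delta, b

print(budgets(40))   # -> (48, 1920): 48 edge flips, 1920 queries
print(budgets(200))  # -> (1200, 20000): the query cap binds for large graphs
```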
On some task/victim-model combinations, we also consider an additional simple gradient-based method which greedily adds or deletes edges based on the magnitude of the computed input gradient, similar to the gradient-based method described in [10] (note that this method is white-box, as access to parameter weights and gradients is required) and also similar in spirit to methods like Nettack [52]. To verify that the proposed attack method can be used for a variety of classifier architectures, we also consider various victim models: we first use GCN [19] and the Graph Isomorphism Network (GIN) [45], which are the most commonly used in related works [33]. Considering the strong performance of hierarchical models in graph classification [12, 46], we also conduct some experiments on the Graph U-Net [12] as a representative of such architectures. We show the classification performance of both victim models before and after attacks using the various methods in Table 1, and we show the Attack Success Rate (ASR) against the (normalised) number of queries in Fig. 3. It is worth noting that, in consistency with the image attack literature, we launch and consider attacks only on the graphs that were originally classified correctly, and statistics such as the ASR are also computed on that basis. We report additional statistics, such as the evolution of the attack losses as a function of the number of queries for selected individual data points, in App. D.3. The results generally show that the attack method is effective against both GCN and GIN models, with GRABNEL typically leading to the largest degradation in victim predictions in all tasks, often performing on par with or better than Gradient-based, a white-box method. It is worth noting that although Gradient-based often performs strongly, there is no guarantee that it always does so: first, for general edge-flipping problems, Gradient-based computes gradients w.r.t. all possible edges (including those that do not currently exist), and an accurate estimation of such high-dimensional gradients can be highly difficult. Second, gradients only capture local information, and they are not necessarily accurate when used to extrapolate function values beyond that neighbourhood; yet relying on gradients to select edge perturbations constitutes exactly such an extrapolation, as edge addition/deletion is binary and discrete. Lastly, on the tasks with larger graphs (e.g. COLLAB on GCN and GIN), due to the huge search spaces, we find that neither random search nor GA can flip predictions effectively except for some "easy" samples already lying close to the decision boundary; GRABNEL nonetheless performs well, thanks to the effective constraining of the search space by the sequential selection of edge perturbations, which matters most on the larger graphs. We report the results on the Graph U-Net victim model in Table 2: as expected, Graph U-Net performs better in terms of clean classification accuracy compared to the GCN and GIN models considered above, and it also seems more robust to all types of adversarial attacks on the PROTEINS dataset.
Nonetheless, in terms of relative performance margin, GRABNEL still outperforms both baselines considerably, demonstrating its flexibility and its capability to conduct effective attacks even on more complicated and realistic victim models. As discussed, in real life, adversarial agents may encounter additional constraints beyond the number of queries to the victim model or the amount of perturbation introduced. To demonstrate that our framework can handle such constraints, we further carry out attacks on victim models using protocols identical to those above but with a variety of additional constraints considered in several previous works. Specifically, the scenarios considered, in ascending order of restrictiveness, are:

• Base: The base scenario is identical to the setup in Table 1 and Fig. 3;
• 2-hop: Edge addition between nodes $(u, v)$ is only permitted if $v$ is within 2-hop distance of $u$;
• 2-hop+rewire [23]: Instead of flipping edges, the adversarial agent is only allowed to rewire from nodes $(u, v)$ (where an edge exists) to $(u, w)$ (where no edge currently exists); node $w$ must be within 2-hop distance of $u$.

We test on the PROTEINS dataset and show the results in Fig. 4: interestingly, the imposition of the 2-hop constraint itself leads to no worsening of performance. In fact, as we elaborate in Sec. 5, we find the phenomenon of adversarial edges remaining clustered within a relatively small neighbourhood to be a general pattern across many tasks. This implies that the 2-hop condition, which constrains the spatial relations of the adversarial edges, may already hold even without explicit specification, thereby explaining the marginal difference between the base and the 2-hop constrained cases in Fig. 4. While the additional rewiring constraint leads to (slightly) lower attack success rates, the performance of GRABNEL remains relatively robust in all scenarios considered.

Image Classification Beyond the typical "edge flipping" setup on which existing research has mainly focused, we now consider a different setup involving attacks on the MNIST-75sp dataset [21, 20], which features weighted graphs with continuous attributes. The dataset is generated by first partitioning each MNIST image into around 75 superpixels with SLIC [1, 11] as the graph nodes (with average superpixel intensity as the node attribute); the pairwise distances between the superpixels form the edge weights. We use the pre-trained ChebyGIN-with-attention model released by the original authors [20] (with an average validation classification accuracy of around 95%) as the victim model. Given that the edge values are no longer binary, simply flipping edges (equivalent to setting edge weights to 0 or 1) is no longer appropriate. To generalise the sparse perturbation setup, and inspired by the edge rewiring studied in previous literature, we instead adopt an attack mode based on swapping edges: each perturbation is defined by 3 end nodes $(u, v, s)$, where the edge weight $w(u, v)$ is swapped with the edge weight $w(u, s)$ (a minimal sketch of this operation is given below). We show the results in Fig. 5: GRABNEL-u and Random-u denote GRABNEL and random search under the untargeted attack, respectively, whereas GRABNEL-t denotes GRABNEL under the targeted attack, with each line denoting one of the 9 possible target classes in MNIST. We find that GRABNEL is surprisingly effective in attacking this victim model, almost completely degrading the victim (Fig. 5) with very few swapping operations (Fig. 6), even in the more challenging targeted setup.
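The referenced sketch of the edge-swap perturbation on a dense adjacency representation follows; the helper name is ours.

```python
import numpy as np

def swap_edge_weights(W, u, v, s):
    """One edge-swap perturbation (u, v, s) on a symmetric weighted
    adjacency matrix W: exchange w(u, v) and w(u, s)."""
    W = W.copy()                        # perturb a copy, keep the original graph
    W[u, v], W[u, s] = W[u, s], W[u, v]
    W[v, u] = W[u, v]                   # restore symmetry (undirected graph)
    W[s, u] = W[u, s]
    return W
```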
These results seem to suggest that, at least for the data considered, the victim model is very brittle towards carefully crafted edge swaps, with its predictive power seemingly hinging upon a small number of key edges. We believe a thorough analysis of this phenomenon is of independent interest, and we defer it to future work.

Fake news detection As a final experiment, we consider the real-life task of attacking a GCN-based fake news detector trained on a labelled dataset from [37]. Each discussion cascade (i.e. a chain of tweets, replies and retweets) is represented as an undirected graph, where each node represents a Twitter account (with node features being key properties of the account, such as age and number of followers/followees; see App. C for details) and each edge represents a reply/retweet. As a reflection of what a real-life adversary may and may not do, we note that modifying the connections or properties of existing nodes, which corresponds to modifying existing accounts and tweets, is considered impractical and is prohibited. Instead, we consider a node-injection attack mode (i.e. creating new malicious nodes and connecting them to existing ones): injecting nodes is equivalent to creating new Twitter accounts, and connecting them to the rest of the graph is equivalent to retweeting/replying to existing accounts. We limit the maximum number of injected nodes to $0.05N$, and the maximum number of new edges that may be created per new node is set to the average number of edges an existing node has; in this context, this limits the number of retweets and replies the new accounts may make, to avoid easy detection. For each injected node, we initialise its node features in a way that reflects the characteristics of a new Twitter user (the detailed procedure is outlined in App. C). We show the results in Fig. 8, where GRABNEL is capable of reducing the effectiveness of the GCN-based fake news classifier by a third. In this case Random also performs reasonably well, as the discussion cascades are typically small, allowing adversarial examples to eventually be found exhaustively.

Ablation Studies GRABNEL benefits from a number of design choices, and it is important to understand the relative contribution of each to the overall performance. We find that in some tasks GRABNEL without the surrogate (i.e. random search with sequential perturbation selection; we term this variant SequentialRandom) is a very strong baseline in terms of final ASR, although the full GRABNEL is much better in terms of overall performance, sample efficiency and the ability to produce successful examples with few perturbations. The reader is referred to our ablation studies in App. D.5.

Runtime Analysis Given the setup we consider (a sample-efficient black-box attack with a minimal amount of perturbation), the cost of the algorithm should not be judged by the computational runtime of the attack algorithm alone, and this is the primary reason why we use the (normalised) number of queries as the main cost criterion. Nonetheless, a runtime analysis is still informative, and we provide one in App. D.6. We find that GRABNEL maintains a reasonable overhead even on, e.g., graphs with $\sim 10^3$ nodes/edges, which are larger than most graphs in typical graph classification tasks.
5 Attack Analysis

Having established the effectiveness of our method, in this section we provide a qualitative analysis of the common interpretable patterns behind the adversarial samples found, which offers further insight into the robustness of graph classification models against structural attacks. We believe such analysis is especially valuable, as it may facilitate the development of even more effective attack methods, and may provide insights useful for identifying real-life vulnerabilities and building more effective defences. We show examples of the adversarial samples in Fig. 7 (and Fig. 13 in App. D.3). We summarise some key findings below.

• Adversarial edges tend to cluster closely together: We find the distribution of the adversarial edges (whether removals or additions) in a graph to be highly uneven, with many adversarial edges often sharing common end-nodes or lying at small spatial distance to each other. This is empirically consistent with recent theoretical findings on the stability of spectral graph filters in [18]. From an attacker's point of view, this may provide a "prior" for the attack to constrain the search space, as the regions around existing perturbations should be exploited more; we leave a practical investigation of leveraging this to enhance attack performance to future work.

• Adversarial edges often attempt to destroy or modify community structures: For example, the original graphs in the IMDB-M dataset can be seen to have community structure, a graph-level topological property distinct from those examined in existing works analysing attack patterns on node-level tasks [43, 53]. When the GCN model is attacked, the attack tends to flip the edges between the communities, thereby destroying the structure by either merging communities or deleting edges within a cluster. The GIN examples, on the other hand, tend to strengthen the community structures by adding edges within clusters and deleting edges between them. With similar observations also present in, for example, the PROTEINS dataset, this may suggest that the models are fragile to modifications of the community structure.

• Beware the low-degree nodes! While low-degree nodes may seem unimportant in terms of degree centrality, we find some victim models to be vulnerable to manipulations of such nodes. Most prominently, in the Twitter fake news example, the malicious nodes almost never connect directly to the central node (the original tweet) but instead to a peripheral node. This finding corroborates the theoretical argument in [18], which shows that spectral graph filters are more robust towards edge flips involving high-degree nodes than otherwise, and is also consistent with observations on node-level tasks [53], where the explanation is that lower-degree nodes have larger influence in the neighbourhood aggregation of GCN. Nonetheless, we note that changes at a higher-degree node are likely to cascade to more nodes in the graph than changes at a low-degree node, and since graph classifiers aggregate across all nodes in the readout layer, such indirect changes of node representations also matter. Therefore, we argue that this phenomenon in graph classification is still non-trivial.

6 Conclusion

Summary This work proposes a novel and flexible black-box method to attack graph classifiers using Bayesian optimisation. We demonstrate the effectiveness and query efficiency of the method empirically. Unlike many existing works, we also qualitatively analyse the adversarial examples generated.
We believe such analysis is important to the understanding of the adversarial robustness of graph-based learning models. Finally, we would like to point out that a potential negative social impact of our work is that bad actors might use our method to attack real-world systems, such as a fake news detection system on a social media platform. Nevertheless, we believe that the experiments in our paper serve only as a proof of concept, and that the benefit of raising awareness of the vulnerabilities of graph classification systems largely outweighs the risk.

Limitations and Future Work Firstly, the current work only considers topological attacks, although the surrogate used is also compatible with attacks on node/edge features or hybrid attacks. Secondly, while we have evaluated several mainstream victim models, it would also be interesting to explore defences against adversarial attacks and to test GRABNEL in robust GNN setups, such as those with advanced graph augmentations [47], randomised smoothing [48, 13] and adversarial detection [6]. Lastly, the current work is specific to graph classification; we believe it is possible to adapt it to attack other graph tasks by suitably modifying the loss function. We leave these for future work.

Acknowledgement and Funding Disclosure The authors would like to acknowledge the following sources of funding in direct support of this work: XW and BR are supported by the Clarendon Scholarship at the University of Oxford; HK is supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/L015897/1); AB thanks the Konrad-Adenauer-Stiftung and the Oxford-Man Institute of Quantitative Finance for their support. The authors would also like to thank the Oxford-Man Institute of Quantitative Finance for providing the computing resources necessary for this project. The authors declare no conflict of interest.
Summary Of The Paper The paper addresses the task of designing adversarial attacks that perturb the structure of a graph to induce errors in graph classification. The approach is based on Bayesian optimization: the authors employ a surrogate model that couples a WL feature extractor with sparse Bayesian linear regression to learn a model of the attack loss. The algorithm operates in the black-box evasion setting and aims to limit the number of queries to the model. A genetic algorithm is employed to derive graph proposals, which are subsequently evaluated using the learned surrogate model. Numerical experiments demonstrate that the proposed method outperforms the best existing methods and significantly reduces the number of required model queries.

Review The paper is well-written and, for the most part, the algorithm is clearly described. It is always difficult to manage space constraints, but I thought more information could have been included in the actual paper about the WL feature extractor. The paper also seems to rush through the Bayesian optimization component of the algorithm (for example, the use of EI is mentioned only parenthetically and there is no justification of this choice). Since this is a focus of the contribution, I feel more description could have been allocated to it. For example, considerable space is devoted to the genetic algorithm, which seems to have been borrowed directly from [7] – the authors don't highlight the modifications they have made.

Strengths: (1) The paper provides a novel adversarial attack procedure. The approach is well-motivated and clearly described, and there is an innovative combination of methods to improve performance. (2) Numerical experiments demonstrate that the developed approach outperforms existing methods for some victim models.

Weaknesses: (1) Novelty: Overall, I believe there is sufficient novelty to warrant acceptance. To the best of my knowledge this is the first application of Bayesian optimization for adversarial attacks on graph data, which is a positive. On the other hand, most of the ingredients used in the algorithm are relatively minor adaptations of previous work. For example, the Bayesian optimization used to search over graphs in [23] is closely related to the core of the technique presented in this paper, and while the adaptation from (Gaussian process + WL kernel) to (WL feature extractor + linear regression) is important for reducing computation time, it is not a major innovation. The algorithm also employs (a variant of) the genetic algorithm from [7] to generate graph proposals. So the paper consists of combinations and adaptations of existing methods to address a problem that has not received a significant amount of attention. (2) The paper only considers GCN and GIN as the victim models (and ChebyGIN). These are far from the state of the art in graph classification, and the authors do not really explain why they have only explored these models; this raises concerns that the results may not carry over to the improved methods. (3) The authors do not examine the efficacy of the approach in settings where robust GNNs are used (e.g. [R1], but there are multiple others). Many of these robust GNNs involve training over perturbations of the observed graphs, which may well substantially reduce how effective the proposed method is. (4) The authors do not discuss or explore the possibility of some form of adversarial defence.
Although most of the proposed defences do not target the black-box evasion setting, something like the GCN-Jaccard defence in [R2], which simply pre-processes the graph by removing dubious edges, could easily be adapted to this setting. In general, when reading a paper that proposes attacks, one expects to see (i) an investigation of the efficacy with respect to multiple victim models; (ii) a discussion and exploration of the effectiveness against robust models; and (iii) a discussion and exploration of the ability to navigate existing defences (or, if these do not exist, baseline defences that are simple to propose).

[R1] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph Structure Learning for Robust Graph Neural Networks. In Proc. ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20), 2020.
[R2] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial Examples for Graph Data: Deep Insights into Attack and Defense. In International Joint Conference on Artificial Intelligence (IJCAI), pp. 4816–4823, 2019.
Title Adversarial Attacks on Graph Classifiers via Bayesian Optimisation Abstract Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models. An open-source implementation is available at https://github.com/xingchenwan/grabnel. 1 Introduction Graphs are a general-purpose data structure consisting of entities represented by nodes and edges which encode pairwise relationships. Graph-based machine learning models has been widely used in a variety of important applications such as semi-supervised learning, link prediction, community detection and graph classification [3, 51, 14]. Despite the growing interest in graph-based machine learning, it has been shown that, like many other machine learning models, graph-based models are vulnerable to adversarial attacks [33, 17]. If we want to deploy such models in environments where the risk and costs associated with a model failure are high e.g. in social networks, it would be crucial to understand and assess the model stability and vulnerability by simulating adversarial attacks. Adversarial attacks on graphs can be aimed at different learning tasks. This paper focuses on graphlevel classification, where given an input graph (potentially with node and edge attributes), we wish to learn a function that predicts a property of interest related to the graph. Graph classification is an important task with many real-life applications, especially in bioinformatics and chemistry [24, 25]. For example, the task may be to accurately classify if a molecule, modelled as a graph whereby nodes represent atoms and edges model bonds, inhibits HIV replication or not. Although there are a few attempts on performing adversarial attacks on graph classification [10, 23], they all operate under unrealistic assumptions such as the need to query the target model a large number of times or access a portion of the test set to train the attacking agent. To address these limitations, we formulate the adversarial attack on graph classification as a black-box optimisation problem and solve it with Bayesian optimisation (BO), a query-efficient state-of-the-art zeroth-order black-box optimiser. Unlike existing work, our method is query-efficient, parsimonious in perturbations and 35th Conference on Neural Information Processing Systems (NeurIPS 2021) does not require policy training on a separate labelled dataset to effectively attack a new sample. 
Another benefit of our method is that it can be easily adapted to perform various modes of attacks such as deleting or rewiring edges and node injection. Furthermore, we investigate the topological properties of the successful adversarial examples found by our method and offer valuable insights on the connection between the graph topology change and the model robustness. The main contributions of our paper are as follows. First, we introduce a novel black-box attack for graph classification, GRABNEL1, which is both query efficient and parsimonious. We believe this is the first work on using BO for adversarial attacks on graph data. Second, we analyse the generated adversarial examples to link the vulnerability of graph-based machine learning models to the topological properties of the perturbed graph, an important step towards interpretable adversarial examples that has been overlooked by the majority of the literature. Finally, we evaluate our method on a range of real-world datasets and scenarios including detecting the spread of fake news on Twitter, which to the best of our knowledge is the first analysis of this kind in the literature. 2 Proposed Method: GRABNEL Problem Setup A graph G = (V, E) is defined by a set of nodes V = {vi}ni=1 and edges E = {ei}mi=1 where each edge ek = {vi, vj} connects between nodes vi and vj . The overall topology can be represented by the adjacency matrix A ∈ {0, 1}n×n where Aij = 1 if the edge {vi, vj} is present2. The attack objective in our case is to degrade the predictive performance of the trained victim graph classifier fθ by finding a graph G′ perturbed from the original test graph G (ideally with the minimum amount of perturbation) such that fθ produces an incorrect class label for G. In this paper, we consider the black-box evasion attack setting, where the adversary agent cannot access/modify the the victim model fθ (i.e. network architecture, weights θ or gradients) or its training data {(Gi, yi)}Li=1; the adversary can only interact with fθ by querying it with an input graph G′ and observe the model output fθ(G′) as pseudo-probabilities over all classes in a C-dimensional standard simplex. Additionally, we assume that sample efficiency is highly valued: we aim to find adversarial examples with the minimum number of queries to the victim model. We believe that this is a practical and difficult setup that accounts for the prohibitive monetary, logistic and/or opportunity costs of repeatedly querying a (possibly huge and complicated) real-life victim model. With a high query count, the attacker may also run a higher risk of getting detected. Formally, the objective function of our BO attack agent can be formulated as a black-box maximisation problem: max G′∈Ψ(G) Lattack ( fθ(G′), y ) s.t. y = arg max fθ(G) (1) 1Stands for Graph Adversarial attack via BayesiaN Efficient Loss-minimisation. 2We discuss the unweighted graphs for simplicity; our method may also handle other graph types. where fθ is the pretrained victim model that remains fixed in the evasion attack setup and y is the correct label of the original input G. Denote the output logit for the class y as fθ(G)y , the attack loss Lattack can be defined as: Lattack ( fθ(G′), y ) = { maxt∈Y,t6=y log fθ(G′)t − log fθ(G′)y (untargeted attack) log fθ(G′)t − log fθ(G′)y (targeted attack on class t), (2) where fθ(·)t denotes the logit output for class t. 
Such an attack loss definition is commonly used both in the traditional image attack and the graph attack literature [4, 52] although our method is compatible with any choice of loss function. Furthermore, Ψ(G) refers to the set of possible G′ generated from perturbing G. In this work, we experiment with a diverse modes of attacks to show that our attack method can be generalised to different set-ups: • creating/removing an edge: we create perturbed graphs by flipping the connection of a small set of node pairs δA = {{ui, vi}}∆i=1 of G following previous works [52, 10]; • rewiring or swapping edges: similar to [23], we select a triplet (u, v, s) where we either rewire the edge (u→ v) to (u→ s) (rewire), or exchange the edge weights w(u, v) and w(u, s) (swap); • node injection: we create new nodes together with their attributes and connections in the graph. The overall routine of our proposed GRABNEL is presented in Fig 1 (and in pseudo-code form in App A), and we now elaborate each of its key components. Surrogate model The success of BO hinges upon the surrogate model choice. Specifically, such a surrogate model needs to 1) be flexible and expressive enough to locally learn the latent mapping from a perturbed graph G′ to its attack lossLattack(fθ(G′), y) (note that this is different and generally easier than learning G′ → y, which is the goal of the classifier fθ), 2) admit a probabilistic interpretation of uncertainty – this is key for the exploration-exploitation trade-off in BO, yet also 3) be simple enough such that the said mapping can be learned with a small number of queries to fθ to preserve sample efficiency. Furthermore, given the combinatorial nature of the graph search space, it also needs to 4) be capable of scaling to large graphs (e.g. in the order of 103 nodes or more) typical of common graph classification tasks with reasonable run-time efficiency. Additionally, given the fact that BO has been predominantly studied in the continuous domain which is significantly different from the present setup, the design of a appropriate surrogate is highly non-trivial. To handle this set of conflicting desiderata, we propose to first use a Weisfeiler-Lehman (WL) feature extractor to extracts a vector space representation of G, followed by a sparse Bayesian linear regression which balances performance with efficiency and gives an probabilistic output. With reference to Fig. 1, given a perturbation graph G′ as a proposed adversarial sample, the WL feature extractor first extracts a vector representation φ(G′) in line with the WL subtree kernel procedure (but without the final kernel computation) [30]. For the case where the node features are discrete, let x0(v) be the initial node feature of node v ∈ V (note that the node features can be either scalars or vectors) , we iteratively aggregate and hash the features of v with its neighbours, {ui}deg(v)i=1 , using the original WL procedure at all nodes to transform them into discrete labels: xh+1(v) = hash ( xh(v), xh(u1), ..., xh(udeg(v)) ) , ∀h ∈ {0, 1, . . . ,H − 1}, (3) where H is the total number of WL iterations, a hyperparameter of the procedure. At each level h, we compute the feature vector φh(G′) = [c(G′,Xh1), ..., c(G′,Xh|Xh|)]>, where Xh is the set of distinct node features xh that occur in all input graphs at the current level and c(G′, xh) is the counting function that counts the number of times a particular node feature xh appears in G′. 
For the case with continuous node features and/or weighted edges, we instead use the modified WL procedure proprosed in [36]: xh+1(v) = 12 ( xh(v) + 1deg(v) deg(v)∑ i=1 w(v, ui)xh(ui) ) , ∀h ∈ {0, 1, . . . ,H − 1}, (4) where w(v, ui) denotes the (non-negative) weight of edge e{v,ui} (1 if the graph is unweighted) and we simply have feature at level h φh(G′) = vec(Xh), where Xh is the feature matrix of graph G′ at level h by collecting the features at each node Xh = [ xh(1), ...xh(v) ] and vec(·) denotes the vectorisation operator. In both cases, at the end of H WL iterations we obtain the final feature vector φ(G′) = concat ( φ1(G′), ...,φH(G′) ) for each training graph in [1, nG′ ] to form the feature matrix Φ = [φ(G′1), ...φ(G′|nG′ |)] > ∈ R|nG′ |×D to be passed to the Bayesian regressor – it is particularly worth noting that the training graphs here denote inputs to train the surrogate model of the attack agent and are typically perturbed versions of a test graph G of the victim model; they are not the graphs that are used to train the victim model itself: in an evasion attack setup, the model is considered frozen and the training inputs cannot be accessed by the attack agent any point in the pipeline. The WL iterations capture both information related to individual nodes and topological information (via neighbourhood aggregation), and have been shown to have comparable distinguishing power to some Graph Neural Network (GNN) models [26], and hence the procedure is expressive. Alternative surrogate choices could be, for example, GNNs with the final fully-connected layer replaced by a probabilistic linear regression layer such as the one proposed in [31]. However, in contrast to these, our extraction process G′ → φ(G′) requires no learning from data (we only need to learn the Bayesian linear regression weights) and therefore should lead to better sample efficiency. Alternatively, we may also use a Gaussian Process (GP) surrogate, such as the Gaussian Process with Weisfeiler-Lehman Kernel (GPWL) model proposed in [29] that directly uses a GP model together with a WL kernel. Nonetheless, while GPs are theoretically more expressive (although we empirically show in App. D.1 that in most of the cases their predictive performances are comparable), they are also much more expensive with a cubic scaling w.r.t the number of training inputs. Furthermore, GPWL is designed specifically for neural architecture search, which features small, directed graphs with discrete node features only; on the other hand, the GRABNEL surrogate covers a much wider scope of applications When we select a large H or if there are many training inputs and/or input graph(s) have a large number of nodes/edges, there will likely be many unique WL features and the resulting feature matrix will be very high-dimensional, which would lead to high-variance regression coefficients α being estimated if nG′ (number of graphs to train the surrogate of the attack agent) is comparatively few. To attain a good predictive performance in such a case, we employ Bayesian regression surrogate with the Automatic Relevance Determination (ARD) prior to learn the mapping Φ→ Lattack(fθ(G′), y), which regularises weights and encourages sparsity in α [42]: Lattack|Φ,α, σ2n ∼ N (α>Φ, σ2nI), (5) α|λ ∼ N (0,Λ), diag(Λ) = λ−1 = {λ−11 , ..., λ −1 D }, (6) λi ∼ Gamma(k, θ) ∀i ∈ [1, D], (7) where Λ is a diagonal covariance matrix. To estimate α and noise variance σ2n, we optimise the model marginal log-likelihood. 
Overall, the WL routines scales as O(Hm) and Bayesian linear regression has a linear runtime scaling w.r.t. the number of queries; these ensure the surrogate is scalable to both larger graphs and/or a large number of graphs, both of which are commonly encountered in graph classification (See App D.6 for a detailed empirical runtime analysis). Sequential perturbation selection In the default structural perturbation setting, given an attack budget of ∆ (i.e. we are allowed to flip up to ∆ edges from G), finding exactly the set of perturbations δA that leads to the largest increase in Lattack entails an combinatorial optimisation over ( n2 ∆ ) candidates. This is a huge search space that is difficult for the surrogate to learn meaningful patterns in a sample-efficient way even for modestly-sized graphs. To tackle this challenge, we adopt the strategy illustrated in Fig. 2: given the query budget B (i.e. the total number of times we are allowed to query fθ for a given G), we assume B ≥ ∆ and amortise B into ∆ stages and focus on selecting one edge perturbation at each stage. While this strategy is greedy in the sense that it always commits the perturbation leading to the largest increase in loss at each stage, it is worth noting that we do not treat the previously modified edges differently, and the agent can, and does occasionally as we observe empirically, “correct” previous modifications by flipping edges back: this is possible as the effect of edge selection is permutation invariant. Another benefit of this strategy is that it can potentially make full use of the entire attack budget ∆ while remaining parsimonious w.r.t. the amount of perturbation introduced, as it only progresses to the next stage and modifies the G further when it fails to find a successful adversarial example in the current stage. Optimisation of acquisition function At each BO iteration, acquisition function α(·) is optimised to select the next point(s) to query the victim model fθ. However, commonly used gradient-based optimisers cannot be used on the discrete graph search space; a naïve strategy would be to randomly generate many perturbed graphs, evaluate α on all of them, and choose the maximiser(s) to query fθ next. While potentially effective on modestly-sized G especially with our sequential selection strategy, this strategy nevertheless discards any known information about the search space. Inspired by recent advances in BO in non-continuous domains [8, 38], we optimise α via an adapted version of the Genetic algorithm (GA) in [10], which is well-suited for our purpose but is not particularly sample efficient since many evolution cycles could be required for convergence. However, the latter is not a serious issue here as we only use GA for acquisition optimisation where we only query the surrogate instead of the victim model, a subroutine of BO that does not require sample efficiency. We outline its ingredients below: • Initialisation: While GA typically starts with random sampling in the search space to fill the initial population, in our case we are not totally ignorant about the search space as we could have already queried and observed fθ with a few different perturbed graphs G′. A smoothness assumption on the search space would be that if a G′ with an edge (u, v) flipped from G led to a large Lattack, then another G′ with (u, s), s 6∈ {u, v} flipped is more likely to do so too. 
To reflect this, we fill the initial population by mutating the top-k queried G′s leading to the largest Lattack seen so far in the current stage, where for G′ with (u, v) flipped from the base graph we 1) randomly choose an end node (u or v) and 2) change that node to another node in the graph except u or v such that the perturbed edges in all children shares one common end node with the parent. • Evolution: After the initial population is built, we follow the standard evolution routine by evaluating the acquisition function value for each member as its fitness, selecting the top-k performing members as the breeding population and repeating the mutation procedure in initialisation for a fixed number of rounds. At termination, we simply query fθ with the graph(s) seen so far (i.e. computing the loss in Fig. 2) with the largest acquisition function value(s) seen during GA. 3 Related Works Adversarial attack on graph-based models There has been an increasing attention in the study of adversarial attacks in the context of GNNs [33, 17]. One of the earliest models, Nettack, attacks a Graph Convolution Network (GCN) node classifier by optimising the attack loss of a surrogate model using a greedy algorithm [52]. Using a simple heuristic, DICE attacks node classifiers by adding edges between nodes of different classes and deleting edges connecting nodes of the same class [41]. However, they cannot be straightforwardly transferred graph classification: for Nettack, unlike node classification tasks, we have no access to training input graphs or labels for the victim model during test time to train a similar surrogate in graph classification; for DICE (and also more recent works like [39]), node labels do not exist in graph classification (we only have a single label for the entire graph). We nonetheless acknowledge the other contributions in these works, such as the introduction of constraints to improve imperceptibility, in our experiments in Sec 4. First methods that do extend to graph classification include [10, 23]: [10] propose a number of techniques, including RL-S2V, which uses reinforcement learning to attack both node and graph classifiers in a black-box manner, and the GA-based attack, which we adapt into our BO acquisition optimisation. However, [10] primarily focus on the S2V victim model, do not emphasise on sample efficiency, and to train a policy that attacks in an one-shot manner on the test graphs, RL-S2V has to query repeatedly on a separate validation set. We empirically compare against it in App. D.2. Another related work is ReWatt [23], which similarly uses reinforcement learning but through rewiring. Compared to both these methods, GRABNEL does not require an additional validation set and is much more query efficient. Other black-box methods without surrogate models have also been proposed that could be potentially be applied to graph classification: [22] exploit common GNN structural bias to attack node features, while [5] relate graph embedding to graph signal processing and construct tailored attack objectives in different GNNs. In comparison to these works that exploit the characteristics of existing architectures to varying degrees, we argue that the optimisation-based method proposed in the our work is more flexible and agnostic to architecture choices, and should be more generalisable to new architectures. 
Nonetheless, in cases where some architectural information is available, we believe there could be combinable benefits: for example, the importance scores proposed in [22] could be used as sampling weights as priors to bias GRABNEL towards selecting more vulnerable nodes. We defer detailed investigations of such possibilities to future work. Finally, there have also been various previous works that focus on a different setup than ours: a white-box optimisation strategy (alternating direction method of multipliers) is proposed in [16]. [48, 44, 2] propose back-door attacks that involve poisoning of the training data before training and/or the test data at inference. [35] attack hierarchical graph pooling networks, but similar to [52] the method requires access to training inputs/targets. Ultimately, a number of factors, including but not limited to 1) the existence/strictness of the query budget, 2) the strictness of the perturbation budget, 3) attacker capabilities and 4) the sizes of the graphs, decide which algorithm/setup is more appropriate, and this should be determined in a problem-specific way. Nonetheless, we argue that our setup is both challenging and highly significant, as it resembles the capabilities a real-life attacker might have (no access to training data; no access to model parameters/gradients; limited query/perturbation budgets). Adversarial attacks using BO BO as a means to find adversarial examples in the black-box evasion setting has been successfully proposed for classification models on tabular [34] and image data [28, 50, 32, 27]. However, we address the problem for graph classification models, which operate on structurally and topologically fundamentally different inputs. This implies several nontrivial challenges that require our method to go beyond the vanilla usage of BO: for example, the inputs cannot readily be represented as vectors like tabular or image data, and the perturbations that we consider for such inputs are defined not on a continuous but on a discrete domain. 4 Experiments We validate the performance of the proposed method on a wide range of graph classification tasks with varying graph properties, including but not limited to the typical TU datasets considered in previous works [10, 23]. As a demonstration of the versatility of the proposed method, instead of considering a single mode of attack, which is often impossible in real life, we select the attack mode specific to each task. All additional details, including the statistics of the datasets used and the implementation details of the victim models and attack methods, are presented in App. C. TU Datasets We first conduct experiments on four common TU datasets [25], namely (in ascending order of average graph size) IMDB-M, PROTEINS, COLLAB and REDDIT-MULTI-5K. In all cases, unless specified otherwise, we define the attack budget ∆ in terms of the maximum structural perturbation ratio r defined in [7], where ∆ ≤ rn². We similarly link the maximum number of queries B allowed for individual graphs to their sizes as B = 40∆, thereby giving larger graphs, and thus potentially more difficult instances, higher attack and query budgets, similar to the conventional image adversarial attack literature [28] (due to computational constraints, we cap the maximum number of queries at 2 × 10⁴ per graph). In this work, unless otherwise specified, we set r = 0.03 for all experiments, and for comparison we consider a number of baselines, including random search and the GA introduced in [10] (the original implementation of RL-S2V, the primary algorithm in [10], focuses on an S2V-based victim model [9]; we compare GRABNEL against it on the same dataset considered in [10] in App. D.2). 
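For concreteness, the budget specification above amounts to the small computation below; the function name and the flooring behaviour are our assumptions, as the paper does not state the exact rounding used:

```python
import math

def attack_budgets(n_nodes, r=0.03, queries_per_flip=40, query_cap=20_000):
    """Per-graph budgets: up to Delta <= r * n^2 edge flips and
    B = 40 * Delta victim-model queries, capped at 2e4 queries."""
    delta = max(1, math.floor(r * n_nodes ** 2))
    n_queries = min(queries_per_flip * delta, query_cap)
    return delta, n_queries

# e.g. a 50-node graph gets Delta = 75 edge flips and B = 3000 queries
print(attack_budgets(50))
```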
On some task/victim model combinations, we also consider an additional simple gradient-based method which greedily adds or deletes edges based on the magnitude of the computed input gradient, similar to the gradient-based method described in [10] (note that this method is white-box, as access to parameter weights and gradients is required) and also similar in spirit to methods like Nettack [52]. To verify that the proposed attack method can be used for a variety of classifier architectures, we also consider various victim models: we first use GCN [19] and the Graph Isomorphism Network (GIN) [45], which are most commonly used in related works [33]. Considering the strong performance of hierarchical models in graph classification [12, 46], we also conduct some experiments on the Graph U-Net [12] as a representative of such architectures. We show the classification performance of both victim models before and after attacks using various methods in Table 1, and we show the Attack Success Rate (ASR) against the (normalised) number of queries in Fig. 3. It is worth noting that, in consistency with the image attack literature, we launch and consider attacks only on the graphs that were originally classified correctly, and statistics such as the ASR are also computed on that basis. We report additional statistics, such as the evolution of the attack losses as a function of the number of queries for selected individual data points, in App. D.3. The results generally show that the attack method is effective against both GCN and GIN models, with GRABNEL typically leading to the largest degradation in victim predictions in all tasks, often performing on par with or better than Gradient-based, a white-box method. It is worth noting that although Gradient-based often performs strongly, there is no guarantee that it always does so: first, for general edge-flipping problems, Gradient-based computes gradients w.r.t. all possible edges (including those that do not currently exist), and an accurate estimation of such high-dimensional gradients can be highly difficult. Second, gradients only capture local information, and they are not necessarily accurate when used to extrapolate function values beyond that neighbourhood. However, relying on gradients to select edge perturbations constitutes exactly such an extrapolation, as edge addition/deletion is binary and discrete. Lastly, on the tasks with larger graphs (e.g. COLLAB on GCN and GIN), due to the huge search spaces, we find that neither random search nor GA could flip predictions effectively except for some “easy” samples already lying close to the decision boundary; GRABNEL nonetheless performs well thanks to the effective constraint of the search space from the sequential selection of edge perturbations, which is typically more significant on the larger graphs. We report the results on the Graph U-Net victim model in Table 2: as expected, Graph U-Net performs better in terms of clean classification accuracy compared to the GCN and GIN models considered above, and it also seems more robust to all types of adversarial attacks on the PROTEINS dataset. 
Nonetheless, in terms of relative performance margin, GRABNEL still outperforms both baselines considerably, demonstrating its flexibility and capability to conduct effective attacks even on more complicated and realistic victim models. As discussed, in real life adversarial agents might encounter additional constraints beyond the number of queries to the victim model or the amount of perturbation introduced. To demonstrate that our framework can handle such constraints, we further carry out attacks on victim models using protocols identical to the above but with a variety of additional constraints considered in several previous works. Specifically, the scenarios considered, in ascending order of restrictiveness, are: • Base: The base scenario is identical to the setup in Table 1 and Fig. 3; • 2-hop: Edge addition between nodes (u, v) is only permitted if v is within 2-hop distance of u; • 2-hop+rewire [23]: Instead of flipping edges, the adversarial agent is only allowed to rewire from nodes (u, v) (where an edge exists) to (u,w) (where no edge currently exists). Node w must be within 2-hop distance of u. We test on the PROTEINS dataset and show the results in Fig. 4: interestingly, the imposition of the 2-hop constraint itself leads to no worsening of performance – in fact, as we elaborate in Sec. 5, we find the phenomenon of adversarial edges remaining clustered within a relatively small neighbourhood to be a general pattern in many tasks. This implies that the 2-hop condition, which constrains the spatial relations of the adversarial edges, might already hold even without explicit specification, thereby explaining the marginal difference between the base and the 2-hop constrained cases in Fig. 4. While the additional rewiring constraint leads to (slightly) lower attack success rates, the performance of GRABNEL remains relatively robust in all scenarios considered. Image Classification Beyond the typical “edge flipping” setup on which existing research has mainly focused, we now consider a different setup involving attacks on the MNIST-75sp dataset [21, 20], which contains weighted graphs with continuous node attributes. The dataset is generated by first partitioning each MNIST image into around 75 superpixels with SLIC [1, 11] as the graph nodes (with average superpixel intensity as node attributes). The pairwise distances between the superpixels form the edge weights. We use the pre-trained ChebyGIN-with-attention model released by the original authors [20] (with an average validation classification accuracy of around 95%) as the victim model. Given that the edge values are no longer binary, simply flipping the edges (equivalent to setting edge weights to 0 and 1) is no longer appropriate. To generalise the sparse perturbation setup, and inspired by the edge rewiring studied in previous literature, we instead adopt an attack mode based on swapping edges: each perturbation is defined by 3 end nodes (u, v, s), where the edge weight w(u, v) is swapped with the edge weight w(u, s); a minimal sketch of this operation is given below. We show the results in Fig. 5: GRABNEL-u and Random-u denote GRABNEL and random search under the untargeted attack, respectively, whereas GRABNEL-t denotes GRABNEL under the targeted attack, with each line denoting 1 of the 9 possible target classes in MNIST. We find that GRABNEL is surprisingly effective in attacking this victim model, almost completely degrading the victim (Fig. 5) with very few swapping operations (Fig. 6), even in the more challenging targeted setup. 
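The edge-swap perturbation admits a compact implementation; the sketch below operates on a dense symmetric weight matrix, which, together with the function name, is our simplification:

```python
import numpy as np

def swap_edge_weights(W, u, v, s):
    """One edge-swap perturbation (u, v, s): exchange the weights
    w(u, v) and w(u, s) in a symmetric weighted adjacency matrix W,
    returning a perturbed copy and leaving W itself untouched."""
    W_new = W.copy()
    W_new[u, v], W_new[u, s] = W[u, s], W[u, v]
    # mirror the changes to keep the matrix symmetric (undirected graph)
    W_new[v, u], W_new[s, u] = W_new[u, v], W_new[u, s]
    return W_new
```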
These results seem to suggest that, at least for the data considered, the victim model is very brittle towards carefully crafted edge swapping, with its predictive power seemingly hinging upon a very small number of key edges. We believe a thorough analysis of this phenomenon is of independent interest, which we defer to future work. Fake news detection As a final experiment, we consider a real-life task of attacking a GCN-based fake news detector trained on a labelled dataset from [37]. Each discussion cascade (i.e. a chain of tweets, replies and retweets) is represented as an undirected graph, where each node represents a Twitter account (with node features being the key properties of the account, such as age and number of followers/followees; see App. C for details) and each edge represents a reply/retweet. As a reflection of what a real-life adversary may and may not do, we note that modifying the connections or properties of the existing nodes, which corresponds to modifying existing accounts and tweets, is considered impractical and is prohibited. Instead, we consider a node injection attack mode (i.e. creating new malicious nodes and connecting them to existing ones): injecting a node is equivalent to creating a new Twitter account, and connecting it to the rest of the graph is equivalent to retweeting/replying to existing accounts. We limit the maximum number of injected nodes to 0.05N, and the maximum number of new edges that may be created per new node is set to the average number of edges an existing node has – in this context, this limits the number of retweets and replies the new accounts may make, to avoid easy detection. For each injected node, we initialise its node features in a way that reflects the characteristics of a new Twitter user (we outline the detailed way to do so in App. C). We show the results in Fig. 8, where GRABNEL is capable of reducing the effectiveness of a GCN-based fake news classifier by a third; a minimal sketch of the injection operation is given at the end of this section. In this case Random also performs reasonably well, as the discussion cascades are typically small, allowing adversarial examples to be found eventually by exhaustive search. Ablation Studies GRABNEL benefits from a number of design choices, and it is important to understand the relative contribution of each to the performance. We find that in some tasks GRABNEL without the surrogate (i.e. random search with sequential perturbation selection, a variant we term SequentialRandom) is a very strong baseline in terms of final ASR, although the full GRABNEL is much better in terms of overall performance, sample efficiency and the ability to produce successful examples with few perturbations. The readers are referred to our ablation studies in App. D.5. Runtime Analysis Given the setup we consider (sample-efficient black-box attack with a minimal amount of perturbation), the cost of the algorithm should not be judged from the viewpoint of the computational runtime of the attack algorithm alone, and this is a primary reason why we use the (normalised) number of queries as the main cost criterion. Nonetheless, a runtime analysis is still informative, and we provide one in App. D.6. We find that GRABNEL maintains a reasonable overhead even on, e.g., graphs with ∼10³ nodes/edges, which are larger than most graphs in typical graph classification tasks. 
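As referenced above, a minimal sketch of the node-injection perturbation used in the fake-news experiment is given below; the exact feature initialisation is described in App. C, so `new_features` is a placeholder supplied by the caller, and the function name is ours:

```python
import numpy as np

def inject_node(A, X, new_features, targets, max_new_edges):
    """Inject one malicious node: grow the adjacency matrix A and the
    node-feature matrix X by one row (and one column, for A), then
    connect the new node to up to `max_new_edges` existing nodes in
    `targets` (each new edge corresponds to a retweet/reply)."""
    n = A.shape[0]
    A_new = np.zeros((n + 1, n + 1), dtype=A.dtype)
    A_new[:n, :n] = A
    for t in targets[:max_new_edges]:  # respect the per-node edge budget
        A_new[n, t] = A_new[t, n] = 1
    X_new = np.vstack([X, new_features])
    return A_new, X_new
```

Here `max_new_edges` corresponds to the average degree of an existing node, as specified above.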
5 Attack Analysis Having established the effectiveness of our method, in this section we provide a qualitative analysis of the common interpretable patterns behind the adversarial samples found, which offers further insights into the robustness of graph classification models against structural attacks. We believe such analysis is especially valuable, as it may facilitate the development of even more effective attack methods, and may provide insights useful for identifying real-life vulnerabilities for more effective defence. We show examples of the adversarial samples in Fig. 7 (and Fig. 13 in App. D.3). We summarise some key findings below. • Adversarial edges tend to cluster closely together: We find the distribution of the adversarial edges (either removals or additions) in a graph to be highly uneven, with many adversarial edges often sharing common end-nodes or having small spatial distance to each other. This is empirically consistent with recent theoretical findings on the stability of spectral graph filters in [18]. From an attacker's point of view, this may provide a “prior” on the attack to constrain the search space, as the regions around existing perturbations should be exploited more; we leave a practical investigation of leveraging this to enhance attack performance to future work. • Adversarial edges often attempt to destroy or modify community structures: For example, the original graphs in the IMDB-M dataset can be seen to have community structure, a graph-level topological property not considered by existing works analysing attack patterns on node-level tasks [43, 53]. When the GCN model is attacked, the attack tends to flip the edges between the communities, thereby destroying the structure by either merging communities or deleting edges within a cluster. On the other hand, the GIN examples tend to strengthen the community structures by adding edges within clusters and deleting edges between them. With similar observations also present in, for example, the PROTEINS dataset, this suggests that the models may be fragile to modifications of the community structure. • Beware the low-degree nodes! While low-degree nodes are deemed less important in terms of degree centrality, we find some victim models to be vulnerable to manipulations of such nodes. Most prominently, in the Twitter fake news example, the malicious nodes almost never connect directly to the central node (the original tweet) but instead to a peripheral node. This finding corroborates the theoretical argument in [18], which shows that spectral graph filters are more robust towards edge flipping involving high-degree nodes than otherwise, and is also consistent with observations on node-level tasks [53], with the explanation being that lower-degree nodes have larger influence in the neighbourhood aggregation of GCN. Nonetheless, we note that changes in a higher-degree node are likely to cascade to more nodes in the graph than those in a low-degree node, and since graph classifiers aggregate across all nodes in the readout layer, the indirect changes of node representations also matter. Therefore, we argue that this phenomenon in graph classification is still non-trivial. 6 Conclusion Summary This work proposes a novel and flexible black-box method to attack graph classifiers using Bayesian optimisation. We demonstrate the effectiveness and query efficiency of the method empirically. Unlike many existing works, we also qualitatively analyse the adversarial examples generated. 
We believe such analysis is important to the understanding of the adversarial robustness of graph-based learning models. Finally, we would like to point out that a potential negative social impact of our work is that bad actors might use our method to attack real-world systems, such as a fake news detection system on a social media platform. Nevertheless, we believe that the experiments in our paper only serve as a proof of concept, and the benefit of raising awareness of the vulnerabilities of graph classification systems largely outweighs the risk. Limitations and Future Work Firstly, the current work only considers topological attacks, although the surrogate used is also compatible with attacks on node/edge features or hybrid attacks. Secondly, while we have evaluated several mainstream victim models, it would also be interesting to explore defences against adversarial attacks and to test GRABNEL in robust GNN setups, such as those with advanced graph augmentations [47], randomised smoothing [48, 13] and adversarial detection [6]. Lastly, the current work is specific to graph classification; we believe it is possible to adapt it to attack other graph tasks by suitably modifying the loss function. We leave these for future work. Acknowledgement and Funding Disclosure The authors would like to acknowledge the following sources of funding in direct support of this work: XW and BR are supported by the Clarendon Scholarship at the University of Oxford; HK is supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems EP/L015897/1; AB thanks the Konrad-Adenauer-Stiftung and the Oxford-Man Institute of Quantitative Finance for their support. The authors would also like to thank the Oxford-Man Institute of Quantitative Finance for providing the computing resources necessary for this project. The authors declare no conflict of interest.
1. What is the reason for rejecting the paper? 2. Is the paper a duplicate submission? 3. Where was the similar version of the paper accepted? 4. How does the reviewer know that the authors are the same? 5. Does the reviewer think the paper has any other issues besides being a duplicate submission?
Summary Of The Paper Review
Summary Of The Paper A very similar version has been accepted into the ICML 2021 Workshop, probably from the same authors. See the link https://openreview.net/forum?id=7oziDfK4Fs for details. Hence, I have to reject this paper, due to the duplicate submission. Review A very similar version has been accepted into the ICML 2021 Workshop, probably from the same authors. See the link https://openreview.net/forum?id=7oziDfK4Fs for details. Hence, I have to reject this paper, due to the duplicate submission.
NIPS
Overall, the WL routines scales as O(Hm) and Bayesian linear regression has a linear runtime scaling w.r.t. the number of queries; these ensure the surrogate is scalable to both larger graphs and/or a large number of graphs, both of which are commonly encountered in graph classification (See App D.6 for a detailed empirical runtime analysis). Sequential perturbation selection In the default structural perturbation setting, given an attack budget of ∆ (i.e. we are allowed to flip up to ∆ edges from G), finding exactly the set of perturbations δA that leads to the largest increase in Lattack entails an combinatorial optimisation over ( n2 ∆ ) candidates. This is a huge search space that is difficult for the surrogate to learn meaningful patterns in a sample-efficient way even for modestly-sized graphs. To tackle this challenge, we adopt the strategy illustrated in Fig. 2: given the query budget B (i.e. the total number of times we are allowed to query fθ for a given G), we assume B ≥ ∆ and amortise B into ∆ stages and focus on selecting one edge perturbation at each stage. While this strategy is greedy in the sense that it always commits the perturbation leading to the largest increase in loss at each stage, it is worth noting that we do not treat the previously modified edges differently, and the agent can, and does occasionally as we observe empirically, “correct” previous modifications by flipping edges back: this is possible as the effect of edge selection is permutation invariant. Another benefit of this strategy is that it can potentially make full use of the entire attack budget ∆ while remaining parsimonious w.r.t. the amount of perturbation introduced, as it only progresses to the next stage and modifies the G further when it fails to find a successful adversarial example in the current stage. Optimisation of acquisition function At each BO iteration, acquisition function α(·) is optimised to select the next point(s) to query the victim model fθ. However, commonly used gradient-based optimisers cannot be used on the discrete graph search space; a naïve strategy would be to randomly generate many perturbed graphs, evaluate α on all of them, and choose the maximiser(s) to query fθ next. While potentially effective on modestly-sized G especially with our sequential selection strategy, this strategy nevertheless discards any known information about the search space. Inspired by recent advances in BO in non-continuous domains [8, 38], we optimise α via an adapted version of the Genetic algorithm (GA) in [10], which is well-suited for our purpose but is not particularly sample efficient since many evolution cycles could be required for convergence. However, the latter is not a serious issue here as we only use GA for acquisition optimisation where we only query the surrogate instead of the victim model, a subroutine of BO that does not require sample efficiency. We outline its ingredients below: • Initialisation: While GA typically starts with random sampling in the search space to fill the initial population, in our case we are not totally ignorant about the search space as we could have already queried and observed fθ with a few different perturbed graphs G′. A smoothness assumption on the search space would be that if a G′ with an edge (u, v) flipped from G led to a large Lattack, then another G′ with (u, s), s 6∈ {u, v} flipped is more likely to do so too. 
To reflect this, we fill the initial population by mutating the top-k queried G′s leading to the largest Lattack seen so far in the current stage, where for G′ with (u, v) flipped from the base graph we 1) randomly choose an end node (u or v) and 2) change that node to another node in the graph except u or v such that the perturbed edges in all children shares one common end node with the parent. • Evolution: After the initial population is built, we follow the standard evolution routine by evaluating the acquisition function value for each member as its fitness, selecting the top-k performing members as the breeding population and repeating the mutation procedure in initialisation for a fixed number of rounds. At termination, we simply query fθ with the graph(s) seen so far (i.e. computing the loss in Fig. 2) with the largest acquisition function value(s) seen during GA. 3 Related Works Adversarial attack on graph-based models There has been an increasing attention in the study of adversarial attacks in the context of GNNs [33, 17]. One of the earliest models, Nettack, attacks a Graph Convolution Network (GCN) node classifier by optimising the attack loss of a surrogate model using a greedy algorithm [52]. Using a simple heuristic, DICE attacks node classifiers by adding edges between nodes of different classes and deleting edges connecting nodes of the same class [41]. However, they cannot be straightforwardly transferred graph classification: for Nettack, unlike node classification tasks, we have no access to training input graphs or labels for the victim model during test time to train a similar surrogate in graph classification; for DICE (and also more recent works like [39]), node labels do not exist in graph classification (we only have a single label for the entire graph). We nonetheless acknowledge the other contributions in these works, such as the introduction of constraints to improve imperceptibility, in our experiments in Sec 4. First methods that do extend to graph classification include [10, 23]: [10] propose a number of techniques, including RL-S2V, which uses reinforcement learning to attack both node and graph classifiers in a black-box manner, and the GA-based attack, which we adapt into our BO acquisition optimisation. However, [10] primarily focus on the S2V victim model, do not emphasise on sample efficiency, and to train a policy that attacks in an one-shot manner on the test graphs, RL-S2V has to query repeatedly on a separate validation set. We empirically compare against it in App. D.2. Another related work is ReWatt [23], which similarly uses reinforcement learning but through rewiring. Compared to both these methods, GRABNEL does not require an additional validation set and is much more query efficient. Other black-box methods without surrogate models have also been proposed that could be potentially be applied to graph classification: [22] exploit common GNN structural bias to attack node features, while [5] relate graph embedding to graph signal processing and construct tailored attack objectives in different GNNs. In comparison to these works that exploit the characteristics of existing architectures to varying degrees, we argue that the optimisation-based method proposed in the our work is more flexible and agnostic to architecture choices, and should be more generalisable to new architectures. 
Nonetheless, in cases where some architectural information is available, we believe there could be combinable benefits: for example, the importance scores proposed in [22] could be used as sampling weights as priors to bias GRABNEL towards selecting more vulnerable nodes. We defer detailed investigations of such possibilities to a future work. Finally, there have also been various previous works that focus on a different setup than ours: A white-box optimisation strategy (alternating direction method of multipliers) is proposed in [16]. [48, 44, 2] propose back-door attacks that involve poisoning of the training data before training and/or the test data at inference. [35] attack hierarchical graph pooling networks, but similar to [52] the method requires access to training input/targets. Ultimately, a number of factors, including but not limited to 1) existence/strictness of the query budget, 2) strictness of the perturbation budget, 3) attacker capabilities and 4) sizes of the graphs, would decide which algorithm/setup is more appropriate and should be adopted in a problem-specific way. Nonetheless, we argue that our setup is both challenging and highly significant as it resembles the capabilities a real-life attacker might have (no access to training data; no access to model parameter/gradients and limited query/perturbation budgets). Adversarial attacks using BO BO as a means to find adversarial examples in the black-box evasion setting has been successfully proposed for classification models on tabular [34] and image data [28, 50, 32, 27]. However, we address the problem for graph classification models, which work on structurally and topologically fundamentally different inputs. This implies several nontrivial challenges that require our method to go beyond the vanilla usage of BO: for example, the inputs cannot be readily represented as vectors like for tabular or image data and the perturbations that we consider for such inputs are not defined on a continuous, but on a discrete domain. 4 Experiments We validate the performance of the proposed method in a wide range of graph classification tasks with varying graph properties, including but not limited to the typical TU datasets considered in previous works [10, 23]. As a demonstration of the versatility of the proposed method, instead of considering a single mode of attack which is often impossible in real-life, we also select the attack mode specific to each task. All additional details, including the statistics of the datasets used and implementation details of the victim models and attack methods, are presented in App. C. TU Datasets We first conduct experiments on four common TU datasets [25], namely (in ascending order of average graph sizes in the dataset) IMDB-M, PROTEINS, COLLAB and REDDIT-MULTI-5K. In all cases, unless specified otherwise, we define the attack budget ∆ in terms of the maximum structural perturbation ratio r defined in [7] where ∆ ≤ rn2. We similarly link the maximum numbers of queries B allowed for individual graphs to their sizes as B = 40∆, thereby giving larger graphs and thus potentially more difficult instances higher attack3 and query budgets similar to the conventional image adversarial attack literature [28]. In this work, unless otherwise specified we set r = 0.03 for all experiments, and for comparison we consider a number of baselines, including random search, GA introduced in [10]4. 
On some task/victim model combinations, we also consider an additional simple gradient-based method which greedily adds or delete edges based on the magnitude computed input gradient similar to the gradient based method described in [10] (note that this method is white-box as access to parameter weights and gradients is required), which is also similar in spirit to methods like Nettack [52]. To verify whether the proposed attack method can be used for a variety of classifier architectures we also consider various victim models: we first use GCN [19] and Graph Isomorphism Network (GIN) [45], which are most commonly used in related works [33]. Considering 3Due to computational constraints, we cap the maximum number of queries to be 2 × 104 on each graph. 4The original implementation of RL-S2V, the primary algorithm in [10], primarily focus on a S2V-based victim model [9]. We compare GRABNEL against it in the same dataset considered in [10] in App. D.2. the strong performance of hierarchical models in graph classification [12, 46], we also conduct some experiments on the Graph-U-Net [12] as a representative of such architectures. We show the classification performance of both victim models before and after attacks using various methods in Table 1, and we show the Attack Success Rate (ASR) against the (normalised) number of queries in Fig. 3. It is worth noting that in consistency with the image attack literature, we launch and consider attacks on the graphs that were originally classified correctly, and statistics, such as the ASR, are also computed on that basis. We report additional statistics, such as the evolution of the attack losses as a function of number of queries of selected individual data points in App. D.3. The results generally show that the attack method is effective against both GCN and GIN models with GRABNEL typically leading to the largest degradation in victim predictions in all tasks, often performing on par or better than Gradient-based, a white-box method. It is worth noting that although Gradient-based often performs strongly, there is no guarantee that it always does so: first, for general edge flipping problems, Gradient-based computes gradients w.r.t. all possible edges (including those that do not currently exist) and an accurate estimation of such high dimensional gradients can be highly difficult. Second, gradients only capture local information and they are not necessarily accurate when used to extrapolate function value beyond that neighbourhood. However, relying on gradients to select edge perturbations constitutes such an extrapolation, as edge addition/deletion is binary and discrete. Lastly, on the tasks with larger graphs (e.g. COLLAB on GCN and GIN), due to the huge search spaces, we find neither random search nor GA could flip predictions effectively except for some “easy” samples already lying close to the decision boundary; GRABNEL nonetheless performs well thanks to the effective constraint of the search space from the sequential selection of edge perturbation, which is typically more significant on the larger graphs. We report the results on the Graph U-net victim model in Table 2: as expected, Graph U-net performs better in terms of clean classification accuracy compared to the GCN and GIN models considered above, and it also seem more robust to all types adversarial attacks on the PROTEINS dataset. 
Nonetheless, in terms of relative performance margin, GRABNEL still outperforms both baselines considerably, demonstrating the flexibility and capability for it to conduct effective attacks even on the more complicated and realistic victim models. As discussed, in real life, adversarial agents might encounter additional constraints other than the number of queries to the victim model or the amount of perturbation introduced. To demonstrate that our framework can handle such constraints, we further carry out attacks on victim models using identical protocols as above but with a variety of additional constraints considered in several previous works. Specifically, the scenarios considered, in the ascending order of restrictiveness, are: • Base: The base scenario is identical to the setup in Table 1 and Fig 3; • 2-hop: Edge addition between nodes (u, v) is only permitted if v is within 2-hop distance of u; • 2-hop+rewire [23]: Instead of flipping edges, the adversarial agent is only allowed to rewire from nodes (u, v) (where an edge exists) to (u,w) (where no edge currently exists). Node w must be within 2-hop distance of u; We test on the PROTEINS dataset, and show the results in Fig. 4: interestingly, the imposition of the 2-hop constraint itself leads to no worsening of performance – in fact, as we elaborate in Sec. 5, we find the phenomenon of adversarial edges remaining relatively clustered within a relatively small neighbourhood is a general pattern in many tasks. This implies that the 2-hop condition, which constrains the spatial relations of the adversarial edges, might already hold even without explicit specification, thereby explaining the marginal difference between the base and the 2-hop constrained cases in Fig. 4. While the additional rewiring constraint leads to (slightly) lower attack success rates, the performance of GRABNEL remains relatively robust in all scenarios considered. Image Classification Beyond the typical “edge flipping” setup on which existing research has been mainly focused, we now consider a different setup involving attacks on the MNIST-75sp dataset [21, 20] with weighted graphs with continuous attributes – note that . The dataset is generated by first partitioning MNIST image into around 75 superpixels with SLIC [1, 11] as the graph nodes (with average superpixel intensity as node attributes). The pairwise distances between the superpixels form the edge weights. We use the pre-trained ChebyGIN with attention model released by the original authors [20] (with an average validation classification accuracy of around 95%) as the victim model. Given that the edge values are no longer binary, simply flipping the edges (equivalent to setting edge weights to 0 and 1) is no longer appropriate. To generalise the sparse perturbation setup and inspired by edge rewiring studied by previous literature, we instead adopt an attack mode via swapping edges: each perturbation can be defined by 3 end nodes (u, v, s) where edge weights wuv is swapped with edge weight w(u, s). We show the results in Fig. 5: GRABNEL-u and Random-u denotes the GRABNEL and random search under the untargeted attack, respectively, whereas GRABNEL-t denotes GRABNEL under the targeted attack with each line denoting 1 of the 9 possible target classes in MNIST. We find that GRABNEL is surprisingly effective in attacking this victim model, almost completely degrading the victim (Fig. 5) with very few swapping operations (Fig. 6) even in the more challenging targeted setup. 
This seems to suggest that, at least for the data considered, the victim model is very brittle towards carefully crafted edge swapping, with its predictive power seemingly hinged upon a very small number of key edges. We believe a thorough analysis of this phenomenon is of an independent interest, which we defer to a future work. Fake news detection As a final experiment, we consider a real-life task of attacking a GCNbased fake news detector trained on a labelled dataset in [37]. Each discussion cascade (i.e. a chain of tweets, replies and retweets) is represented as an undirected graph, where each node represents a Twitter account (with node features being the key properties of the account such as age and number of followers/followees; see App. C for details) and each edge represents a reply/retweet. As a reflection of what a real-life adversary may and may not do, we note that modifying the connections or properties of the existing nodes, which correspond to modifying existing accounts and tweets, is considered impractical and prohibited. Instead, we consider a node injection attack mode (i.e. creating new malicious nodes and connect them to existing ones): injecting nodes is equivalent to creating new Twitter accounts and connecting them to the rest of the graph is equivalent to retweeting/replying existing accounts. We limit the maximum number of injected nodes to be 0.05N and the maximum number of new edges that may be created per each new node is set to the average number of edges an existing node has – in this context, this limits the number of re-tweets and replies the new accounts may have to avoid easy detection. For the injected node, we initialise its node features in a way that reflects the characteristics of a new Twitter user (we outline the detailed way to do so in App. C). We show the result in Fig. 8, where GRABNEL is capable of reducing the effectiveness of a GCN-based fake news classifier by a third. In this case Random also performs reasonably well, as the discussion cascade is typically small, allowing any adversarial examples to be exhaustively found eventually. Ablation Studies GRABNEL benefits from a number of design choices and it is important to understand the relative contribution of each to the performance. We find that in some tasks GRABNEL without surrogate (i.e. random search with sequential perturbation selection. We term this variant SequentialRandom) is a very strong baseline in terms of final ASR, although the full GRABNEL is much better in terms of overall performance, sample efficiency and the ability to produce successful examples with few perturbations. The readers are referred to our ablation studies in App. D.5. Runtime Analysis Given the setup we consider (sample- efficient black-box attack with minimal amount of perturbation), the cost of the algorithm should not only be considered from the viewpoint of the computational runtime of the attack algorithm itself alone, and this is a primary reason why we use the (normalised) number of queries as the main cost criterion. Nonetheless, a runtime analysis is still informative which we provide in App. D.6. We find that GRABNEL maintains a reasonable overhead even on, e.g., graphs with ∼ 103 nodes/edges that are larger than most graphs in typical graph classification tasks. 
5 Attack Analysis Having established the effectiveness of our method, in this section we provide a qualitative analysis on the common interpretable patterns behind the adversarial samples found, which provides further insights into the robustness of graph classification models against structural attacks. We believe such analysis is especially valuable, as it may facilitate the development of even more effective attack methods, and may provide insights that could be useful for identification of real-life vulnerabilities for more effective defence. We show examples of the adversarial samples in Fig. 7 (and Fig. 13 in App. D.3). We summarise some key findings below. • Adversarial edges tend to cluster closely together: We find the distribution of the adversarial edges (either removal or addition) in a graph to be highly uneven, with many adversarial edges often sharing common end-nodes or having small spatial distance to each other. This is empirically consistent with recent theoretical findings on the stability of spectral graph filters in [18]. From an attacker point of view, this may provide a “prior” on the attack to constrain the search space, as the regions around existing perturbations should be exploited more; we leave a practical investigation of the possibility of leveraging this to enhance attack performance to a future work. • Adversarial edges often attempt to destroy or modify community structures: for example, the original graphs in the IMDB-M dataset can be seen to have community structure, a graph-level topological property that is distinct from the existing works analysing attack patterns on node-level tasks [43, 53]. When the GCN model is attacked, the attack tends to flip the edges between the communities, and thereby destroying the structure by either merging communities or deleting edges within a cluster. On the other hand, the GIN examples tend to strengthen the community structures by adding edges within clusters and deleting edges between them. With similar observations also present in, for example, PROTEINS dataset, this may suggest that the models may be fragile to modification of the community structure. • Beware the low-degree nodes! While low-degree nodes are important in terms of degree centrality, we find some victim models are vulnerable to manipulations on such nodes. Most prominently, in the Twitter fake news example, the malicious nodes almost never connect directly to the central node (original tweet) but instead to a peripheral node. This finding corroborates the theoretical argument in [18] which shows that spectral graph filters are more robust towards edge flipping involving high-degree nodes than otherwise, and is also consistent with observations on node-level tasks [53] with the explanation being lower-degree nodes having larger influence in the neighbourhood aggregation in GCN. Nonetheless, we note that changes in a higher-degree node are likely to cascade to more nodes in the graph than low degree nodes, and since graph classifiers aggregate across all nodes in the readout layer the indirect change of node representations also matter. Therefore, we argue that this phenomenon in graph classification is still non-trivial. 6 Conclusion Summary This work proposes a novel and flexible black-box method to attack graph classifiers using Bayesian optimisation. We demonstrate the effectiveness and query efficiency of the method empirically. Unlike many existing works, we qualitatively analyse the adversarial examples generated. 
6 Conclusion

Summary This work proposes a novel and flexible black-box method to attack graph classifiers using Bayesian optimisation. We demonstrate the effectiveness and query efficiency of the method empirically. Unlike many existing works, we qualitatively analyse the adversarial examples generated. We believe such analysis is important to the understanding of the adversarial robustness of graph-based learning models. Finally, we would like to point out that a potential negative social impact of our work is that bad actors might use our method to attack real-world systems, such as a fake news detection system on social media platforms. Nevertheless, we believe that the experiment in our paper only serves as a proof-of-concept, and that the benefit of raising awareness of the vulnerabilities of graph classification systems largely outweighs the risk.

Limitations and Future Work Firstly, the current work only considers topological attacks, although the surrogate used is also compatible with attacks on node/edge features or hybrid attacks. Secondly, while we have evaluated several mainstream victim models, it would also be interesting to explore defences against adversarial attacks and to test GRABNEL in robust GNN setups, such as those with advanced graph augmentations [47], randomised smoothing [48, 13] and adversarial detection [6]. Lastly, the current work is specific to graph classification; we believe it is possible to adapt it to attack other graph tasks by suitably modifying the loss function. We leave these for future work.

Acknowledgement and Funding Disclosure The authors would like to acknowledge the following sources of funding in direct support of this work: XW and BR are supported by the Clarendon Scholarship at the University of Oxford; HK is supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems EP/L015897/1; AB thanks the Konrad-Adenauer-Stiftung and the Oxford-Man Institute of Quantitative Finance for their support. The authors would also like to thank the Oxford-Man Institute of Quantitative Finance for providing the computing resources necessary for this project. The authors declare no conflict of interest.
1. What are the strengths and weaknesses of the paper regarding its contribution to studying adversarial attacks on graph neural networks?
2. What are the questions or concerns regarding the proposed Bayesian optimization-based attack, particularly in terms of its flowchart and motivation?
3. How does the reviewer assess the novelty and effectiveness of the proposed attack compared to other works in the field?
4. What are the suggestions provided by the reviewer to improve the paper, including adding an overview, making the contributions clearer, comparing with existing attacks, and discussing defenses?
Summary Of The Paper Review
Summary Of The Paper
The paper studies adversarial attacks on graph neural networks for graph classification. The authors design a Bayesian optimization-based attack that is black-box, query-efficient, and parsimonious with respect to the perturbation. The proposed attack is evaluated on three benchmark graph datasets and shows its effectiveness.

Strengths
- The studied problem is important
- The proposed attack is evaluated on multiple application domains
- The attack analysis is interesting

Weaknesses
- Novelty is unclear
- The proposed attack is hard to follow
- Evaluation is insufficient
- Missing important references
- No discussion on defense

Review
The flowchart of the proposed Bayesian optimization-based attack is unclear to me. I suggest the authors add an overview to show, at a high level, the motivation and the attack flowchart. As I do not fully understand the motivation and details of the attack, I also cannot quite catch the key contributions. For example, why is the proposed Bayesian optimization suitable for attacking GNNs? Why does GRABNEL outperform the white-box gradient-based attack? Why does the gradient-based attack perform poorly on PROTEINS with GCN, while performing the best on the other datasets?

GRABNEL and the random attack obtain comparable attack performance against the fake news detector. This may reveal that GRABNEL is not effective enough. The paper also lacks comparison with (2) and (3) below.

The authors should cite and discuss the following references on adversarial attacks to graphs/graph neural networks:
(1) Attacking Graph-based Classification via Manipulating the Graph Structure
(2) Black-box adversarial attacks on graph neural networks with limited node access
(3) Node injection attacks on graphs via reinforcement learning
(4) Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
(5) Evasion Attacks to Graph Neural Networks via Influence Function
(6) Adversarial examples on graph data: Deep insights into attack and defense
(7) Topology attack and defense for graph neural networks: An optimization perspective

The authors also lack a discussion of defenses against adversarial attacks on graph neural networks.

Other comments:
- [17] is not a reinforcement learning-based technique
- a appropriate surrogate => an appropriate surrogate
- “we note that modifying ...., which correspond to modifying existing accounts and tweets, is considered impractical and prohibited.” => I do not think this claim is correct.

Suggestions:
- Add an overview to well motivate the proposed attack
- Make the contributions more clear
- Compare with the existing attacks
- Add a discussion on defenses
NIPS
Title Independent mechanism analysis, a new concept? Abstract Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation. 1 Introduction One of the goals of unsupervised learning is to uncover properties of the data generating process, such as latent structures giving rise to the observed data. Identifiability [55] formalises this desideratum: under suitable assumptions, a model learnt from observations should match the ground truth, up to well-defined ambiguities. Within representation learning, identifiability has been studied mostly in the context of independent component analysis (ICA) [17, 40], which assumes that the observed data x results from mixing unobserved independent random variables si referred to as sources. The aim is to recover the sources based on the observed mixtures alone, also termed blind source separation (BSS). A major obstacle to BSS is that, in the nonlinear case, independent component estimation does not necessarily correspond to recovering the true sources: it is possible to give counterexamples where the observations are transformed into components yi which are independent, yet still mixed with respect to the true sources si [20, 39, 98]. In other words, nonlinear ICA is not identifiable. In order to achieve identifiability, a growing body of research postulates additional supervision or structure in the data generating process, often in the form of auxiliary variables [28, 30, 37, 38, 41]. In the present work, we investigate a different route to identifiability by drawing inspiration from the field of causal inference [71, 78] which has provided useful insights for a number of machine learning tasks, including semi-supervised [87, 103], transfer [6, 23, 27, 31, 61, 72, 84, 85, 97, 102, 107], reinforcement [7, 14, 22, 26, 53, 59, 60, 106], and unsupervised [9, 10, 54, 70, 88, 91, 104, 105] learning. To this end, we interpret the ICA mixing as a causal process and apply the principle of independent causal mechanisms (ICM) which postulates that the generative process consists of independent modules which do not share information [43, 78, 87]. In this context, “independent” does not refer to statistical independence of random variables, but rather to the notion that the distributions and functions composing the generative process are chosen independently by Nature [43, 48]. 
While a formalisation of ICM [43, 57] in terms of algorithmic (Kolmogorov) complexity [51] exists, it is not computable, and hence applying ICM in practice requires assessing such non-statistical independence with suitable domain-specific criteria [96]. The goal of our work is thus to constrain the nonlinear ICA problem, in particular the mixing function, via suitable ICM measures, thereby ruling out common counterexamples to identifiability which intuitively violate the ICM principle. Traditionally, ICM criteria have been developed for causal discovery, where both cause and effect are observed [18, 45, 46, 110]. They enforce an independence between (i) the cause (source) distribution and (ii) the conditional or mechanism (mixing function) generating the effect (observations), and thus rely on the fact that the observed cause distribution is informative. As we will show, this renders them insufficient for nonlinear ICA, since the constraints they impose are satisfied by common counterexamples to identifiability. With this in mind, we introduce a new way to characterise or refine the ICM principle for unsupervised representation learning tasks such as nonlinear ICA.

∗Equal contribution. Code available at: https://github.com/lgresele/independent-mechanism-analysis

Motivating example. To build intuition, we turn to a famous example of ICA and BSS: the cocktail party problem, illustrated in Fig. 1 (Left). Here, a number of conversations are happening in parallel, and the task is to recover the individual voices s_i from the recorded mixtures x_i. The mixing or recording process f is primarily determined by the room acoustics and the locations at which microphones are placed. Moreover, each speaker influences the recording through their positioning in the room, and we may think of this influence as ∂f/∂s_i. Our independence postulate then amounts to stating that the speakers' positions are not fine-tuned to the room acoustics and microphone placement, or to each other, i.e., the contributions ∂f/∂s_i should be independent (in a non-statistical sense).1

Our approach. We formalise this notion of independence between the contributions ∂f/∂s_i of each source to the mixing process (i.e., the columns of the Jacobian matrix J_f of partial derivatives) as an orthogonality condition, see Fig. 1 (Right). Specifically, the absolute value of the determinant |J_f|, which describes the local change in infinitesimal volume induced by mixing the sources, should factorise or decompose as the product of the norms of its columns. This can be seen as a decoupling of the local influence of each partial derivative in the pushforward operation (mixing function) mapping the source distribution to the observed one, and gives rise to a novel framework which we term independent mechanism analysis (IMA). IMA can be understood as a refinement of the ICM principle that applies the idea of independence of mechanisms at the level of the mixing function.

Contributions.
The structure and contributions of this paper can be summarised as follows:

• we review well-known obstacles to identifiability of nonlinear ICA (§ 2.1), as well as existing ICM criteria (§ 2.2), and show that the latter do not sufficiently constrain nonlinear ICA (§ 3);

• we propose a more suitable ICM criterion for unsupervised representation learning which gives rise to a new framework that we term independent mechanism analysis (IMA) (§ 4); we provide geometric and information-theoretic interpretations of IMA (§ 4.1), introduce an IMA contrast function which is invariant to the inherent ambiguities of nonlinear ICA (§ 4.2), and show that it rules out a large class of counterexamples and is consistent with existing identifiability results (§ 4.3);

• we experimentally validate our theoretical claims and propose a regularised maximum-likelihood learning approach based on the IMA contrast which outperforms the unregularised baseline (§ 5); additionally, we introduce a method to learn nonlinear ICA solutions with triangular Jacobian and a metric to assess BSS which can be of independent interest for the nonlinear ICA community.

1 For additional intuition and possible violations in the context of the cocktail party problem, see Appendix B.4.

2 Background and preliminaries

Our work builds on and connects related literature from the fields of independent component analysis (§ 2.1) and causal inference (§ 2.2). We review the most important concepts below.

2.1 Independent component analysis (ICA)

Assume the following data-generating process for independent component analysis (ICA):

x = f(s),  p_s(s) = ∏_{i=1}^n p_{s_i}(s_i),  (1)

where the observed mixtures x ∈ R^n result from applying a smooth and invertible mixing function f : R^n → R^n to a set of unobserved, independent signals or sources s ∈ R^n with smooth, factorised density p_s with connected support (see the illustration in Fig. 2b). The goal of ICA is to learn an unmixing function g : R^n → R^n such that y = g(x) has independent components. Blind source separation (BSS), on the other hand, aims to recover the true unmixing f^{-1} and thus the true sources s (up to tolerable ambiguities, see below). Whether performing ICA corresponds to solving BSS is related to the concept of identifiability of the model class. Intuitively, identifiability is the desirable property that all models which give rise to the same mixture distribution should be “equivalent” up to certain ambiguities, formally defined as follows.

Definition 2.1 (∼-identifiability). Let F be the set of all smooth, invertible functions f : R^n → R^n, and P be the set of all smooth, factorised densities p_s with connected support on R^n. Let M ⊆ F × P be a subspace of models and let ∼ be an equivalence relation on M. Denote by f_* p_s the push-forward density of p_s via f. Then the generative process (1) is said to be ∼-identifiable on M if

∀ (f, p_s), (f̃, p_s̃) ∈ M : f_* p_s = f̃_* p_s̃ ⟹ (f, p_s) ∼ (f̃, p_s̃).  (2)

If the true model belongs to the model class M, then ∼-identifiability ensures that any model in M learnt from (infinite amounts of) data will be ∼-equivalent to the true one. An example is linear ICA, which is identifiable up to permutation and rescaling of the sources on the subspace M_LIN of pairs of (i) invertible matrices (constraint on F) and (ii) factorising densities for which at most one s_i is Gaussian (constraint on P) [17, 21, 93]; see Appendix A for a more detailed account. In the nonlinear case (i.e., without constraints on F), identifiability is much more challenging.
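For concreteness, the generative process (1) can be simulated directly; the two-dimensional mixing below is a toy choice of ours, used only to illustrate the setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10_000, 2
s = rng.uniform(0.0, 1.0, size=(N, n))   # independent, factorised sources

def f(s: np.ndarray) -> np.ndarray:
    """A smooth, invertible toy mixing on R^2 (triangular with unit diagonal)."""
    x0 = s[:, 0] + 0.5 * np.tanh(s[:, 1])
    x1 = s[:, 1]
    return np.stack([x0, x1], axis=1)

x = f(s)   # observed mixtures; ICA looks for g such that g(x) has independent parts
```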
If s_i and s_j are independent, then so are h_i(s_i) and h_j(s_j) for any functions h_i and h_j. In addition to the permutation-ambiguity, such element-wise h(s) = (h_1(s_1), ..., h_n(s_n)) can therefore not be resolved either. We thus define the desired form of identifiability for nonlinear BSS as follows.

Definition 2.2 (∼BSS). The equivalence relation ∼BSS on F × P defined as in Defn. 2.1 is given by

(f, p_s) ∼BSS (f̃, p_s̃) ⟺ ∃ P, h s.t. (f, p_s) = (f̃ ∘ h^{-1} ∘ P^{-1}, (P ∘ h)_* p_s̃),  (3)

where P is a permutation and h(s) = (h_1(s_1), ..., h_n(s_n)) is an invertible, element-wise function.

A fundamental obstacle—and a crucial difference to the linear problem—is that in the nonlinear case, different mixtures of s_i and s_j can be independent, i.e., solving ICA is not equivalent to solving BSS. A prominent example of this is given by the Darmois construction [20, 39].

Definition 2.3 (Darmois construction). The Darmois construction g^D : R^n → (0, 1)^n is obtained by recursively applying the conditional cumulative distribution function (CDF) transform:

g^D_i(x_{1:i}) := P(X_i ≤ x_i | x_{1:i-1}) = ∫_{-∞}^{x_i} p(x'_i | x_{1:i-1}) dx'_i  (i = 1, ..., n).  (4)

The resulting estimated sources y^D = g^D(x) are mutually independent uniform r.v.s by construction, see Fig. 2a for an illustration. However, they need not be meaningfully related to the true sources s, and will, in general, still be a nonlinear mixing thereof [39].2 Denoting the mixing function corresponding to (4) by f^D = (g^D)^{-1} and the uniform density on (0, 1)^n by p_u, the Darmois solution (f^D, p_u) thus allows construction of counterexamples to ∼BSS-identifiability on F × P.3

Remark 2.4. g^D has lower-triangular Jacobian, i.e., ∂g^D_i/∂x_j = 0 for i < j. Since the order of the x_i is arbitrary, applying g^D after a permutation yields a different Darmois solution. Moreover, (4) yields independent components y^D even if the sources s_i were not independent to begin with.4

2 Consider, e.g., a mixing f with full Jacobian which yields a contradiction to Defn. 2.2, due to Remark 2.4.
3 By applying a change of variables, we can see that the transformed variables in (4) are uniformly distributed in the open unit cube, thereby corresponding to independent components [69, § 2.2].
4 This has broad implications for unsupervised learning, as it shows that, for i.i.d. observations, not only factorised priors, but any unconditional prior is insufficient for identifiability (see, e.g., [49], Appendix D.2).

Another well-known obstacle to identifiability is given by measure-preserving automorphisms (MPAs) of the source distribution p_s: these are functions a which map the source space to itself without affecting its distribution, i.e., a_* p_s = p_s [39]. A particularly instructive class of MPAs is the following [49, 58].

Definition 2.5 (“Rotated-Gaussian” MPA). Let R ∈ O(n) be an orthogonal matrix, and denote by F_s(s) = (F_{s_1}(s_1), ..., F_{s_n}(s_n)) and Φ(z) = (Φ(z_1), ..., Φ(z_n)) the element-wise CDFs of a smooth, factorised density p_s and of a Gaussian, respectively. Then the “rotated-Gaussian” MPA a_R(p_s) is

a_R(p_s) = F_s^{-1} ∘ Φ ∘ R ∘ Φ^{-1} ∘ F_s.  (5)

a_R(p_s) first maps to the (rotationally invariant) standard isotropic Gaussian (via Φ^{-1} ∘ F_s), then applies a rotation, and finally maps back, without affecting the distribution of the estimated sources. Hence, if (f̃, p_s̃) is a valid solution, then so is (f̃ ∘ a_R(p_s̃), p_s̃) for any R ∈ O(n). Unless R is a permutation, this constitutes another common counterexample to ∼BSS-identifiability on F × P.
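Definition 2.5 is easy to implement numerically. The sketch below (our own illustration) specialises to uniform sources on (0, 1)^n, for which the element-wise CDF F_s is the identity:

```python
import numpy as np
from scipy.stats import norm

def rotated_gaussian_mpa(s: np.ndarray, R: np.ndarray) -> np.ndarray:
    """a_R(p_s) = F_s^{-1} ∘ Φ ∘ R ∘ Φ^{-1} ∘ F_s, here with F_s = id."""
    z = norm.ppf(s)          # Φ^{-1}: map uniform sources to i.i.d. standard Gaussians
    z_rot = z @ R.T          # rotate; the isotropic Gaussian is rotation-invariant
    return norm.cdf(z_rot)   # Φ: map back, leaving the joint distribution unchanged

theta = 0.3                  # any non-permutation rotation yields a spurious solution
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
s = np.random.default_rng(1).uniform(1e-6, 1 - 1e-6, size=(10_000, 2))
s_spurious = rotated_gaussian_mpa(s, R)   # same distribution, different point-wise map
```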
Identifiability results for nonlinear ICA have recently been established for settings where an auxiliary variable u (e.g., environment index, time stamp, class label) renders the sources conditionally independent [37, 38, 41, 49]. The assumption on p_s in (1) is replaced with p_{s|u}(s|u) = ∏_{i=1}^n p_{s_i|u}(s_i|u), thus restricting P in Defn. 2.1. In most cases, u is assumed to be observed, though [30] is a notable exception. Similar results exist given access to a second noisy view x̃ [28].

2.2 Causal inference and the principle of independent causal mechanisms (ICM)

Rather than relying only on additional assumptions on P (e.g., via auxiliary variables), we seek to further constrain (1) by also placing assumptions on the set F of mixing functions f. To this end, we draw inspiration from the field of causal inference [71, 78]. Of central importance to our approach is the Principle of Independent Causal Mechanisms (ICM) [43, 56, 87].

Principle 2.6 (ICM principle [78]). The causal generative process of a system's variables is composed of autonomous modules that do not inform or influence each other.

These “modules” are typically thought of as the conditional distributions of each variable given its direct causes. Intuitively, the principle then states that these causal conditionals correspond to independent mechanisms of nature which do not share information. Crucially, here “independent” does not refer to statistical independence of random variables, but rather to independence of the underlying distributions as algorithmic objects. For a bivariate system comprising a cause c and an effect e, this idea reduces to an independence of cause and mechanism, see Fig. 2c.

One way to formalise ICM uses Kolmogorov complexity K(·) [51] as a measure of algorithmic information [43]. However, since Kolmogorov complexity is not computable, using ICM in practice requires assessing Principle 2.6 with other suitable proxy criteria [9, 11, 34, 42, 45, 65, 75–78, 90, 110].5 Allowing for deterministic relations between cause (sources) and effect (observations), the criterion which is most closely related to the ICA setting in (1) is information-geometric causal inference (IGCI) [18, 46].6 IGCI assumes a nonlinear relation e = f(c) and formulates a notion of independence between the cause distribution p_c and the deterministic mechanism f (which we think of as a degenerate conditional p_{e|c}) via the following condition (in practice, assumed to hold approximately):

C_IGCI(f, p_c) := ∫ log |J_f(c)| p_c(c) dc − ∫ log |J_f(c)| dc = 0,  (6)

where (J_f(c))_{ij} = ∂f_i/∂c_j(c) is the Jacobian matrix and |·| the absolute value of the determinant. C_IGCI can be understood as the covariance between p_c and log |J_f| (viewed as r.v.s on the unit cube w.r.t. the Lebesgue measure), so that C_IGCI = 0 rules out a form of fine-tuning between p_c and |J_f|. As its name suggests, IGCI can, from an information-geometric perspective, also be seen as an orthogonality condition between cause and mechanism in the space of probability distributions [46]; see Appendix B.2, particularly eq. (19), for further details.

5 “This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on assumptions on the underlying distribution” [43].
6 For a similar criterion which assumes linearity [45, 110] and its relation to linear ICA, see Appendix B.1.
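Assuming, as in the discussion above, that the cause lives on the unit cube (so that the second integral in (6) is an expectation under the uniform density), a Monte-Carlo estimate of C_IGCI can be sketched as follows; this is our own illustration, not a reference implementation of [18, 46]:

```python
import jax
import jax.numpy as jnp

def log_abs_det_jacobian(f, c: jnp.ndarray) -> jnp.ndarray:
    # log |det J_f(c)| via automatic differentiation
    return jnp.linalg.slogdet(jax.jacobian(f)(c))[1]

def igci_contrast(f, cause_samples: jnp.ndarray, uniform_samples: jnp.ndarray):
    """MC estimate of (6): E_{p_c}[log|J_f|] - E_{uniform}[log|J_f|]."""
    lad = jax.vmap(lambda c: log_abs_det_jacobian(f, c))
    return jnp.mean(lad(cause_samples)) - jnp.mean(lad(uniform_samples))
```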
3 Existing ICM measures are insufficient for nonlinear ICA

Our aim is to use the ICM Principle 2.6 to further constrain the space of models M ⊆ F × P and rule out common counterexamples to identifiability such as those presented in § 2.1. Intuitively, both the Darmois construction (4) and the rotated-Gaussian MPA (5) give rise to “non-generic” solutions which should violate ICM: the former, (f^D, p_u), due to the triangular Jacobian of f^D (see Remark 2.4), meaning that each observation x_i = f^D_i(y_{1:i}) only depends on a subset of the inferred independent components y_{1:i}; and the latter, (f ∘ a_R(p_s), p_s), due to the dependence of f ∘ a_R(p_s) on p_s (5). However, the ICM criteria described in § 2.2 were developed for the task of cause-effect inference, where both variables are observed. In contrast, in this work, we consider an unsupervised representation learning task where only the effects (mixtures x) are observed, but the causes (sources s) are not. It turns out that this renders existing ICM criteria insufficient for BSS: they can easily be satisfied by spurious solutions which are not equivalent to the true one. We can show this for IGCI. Denote by M_IGCI = {(f, p_s) ∈ F × P : C_IGCI(f, p_s) = 0} ⊂ F × P the class of nonlinear ICA models satisfying IGCI (6). Then the following negative result holds.

Proposition 3.1 (IGCI is insufficient for ∼BSS-identifiability). (1) is not ∼BSS-identifiable on M_IGCI.

Proof. IGCI (6) is satisfied when p_s is uniform. However, the Darmois construction (4) yields uniform sources, see Fig. 2a. This means that (f^D ∘ a_R(p_u), p_u) ∈ M_IGCI, so IGCI can be satisfied by solutions which do not separate the sources in the sense of Defn. 2.2, see footnote 2 and [39].

As illustrated in Fig. 2c, condition (6) and other similar criteria enforce a notion of “genericity” or “decoupling” of the mechanism w.r.t. the observed input distribution.7 They thus rely on the fact that the cause (source) distribution is informative, and are generally not invariant to reparametrisation of the cause variables. In the (nonlinear) ICA setting, on the other hand, the learnt source distribution may be fairly uninformative. This poses a challenge for existing ICM criteria, since any mechanism is generic w.r.t. an uninformative (uniform) input distribution.

4 Independent mechanism analysis (IMA)

As argued in § 3, enforcing independence between the input distribution and the mechanism (Fig. 2c), as existing ICM criteria do, is insufficient for ruling out spurious solutions to nonlinear ICA. We therefore propose a new ICM-inspired framework which is more suitable for BSS and which we term independent mechanism analysis (IMA).8 All proofs are provided in Appendix C.

4.1 Intuition behind IMA

As motivated using the cocktail party example in § 1 and Fig. 1 (Left), our main idea is to enforce a notion of independence between the contributions or influences of the different sources s_i on the observations x = f(s), as illustrated in Fig. 2d—as opposed to between the source distribution and mixing function, cf. Fig. 2c. These contributions or influences are captured by the vectors of partial derivatives ∂f/∂s_i. IMA can thus be understood as a refinement of ICM at the level of the mixing f: in addition to statistically independent components s_i, we look for a mixing with contributions ∂f/∂s_i which are independent, in a non-statistical sense which we formalise as follows.

Principle 4.1 (IMA).
The mechanisms by which each source s_i influences the observed distribution, as captured by the partial derivatives ∂f/∂s_i, are independent of each other in the sense that for all s:

log |J_f(s)| = Σ_{i=1}^n log ‖∂f/∂s_i (s)‖.  (7)

7 In fact, many ICM criteria can be phrased as special cases of a unifying group-invariance framework [9].
8 The title of the present work is thus a reverence to Pierre Comon's seminal 1994 paper [17].

Geometric interpretation. Geometrically, the IMA principle can be understood as an orthogonality condition, as illustrated for n = 2 in Fig. 1 (Right). First, the vectors of partial derivatives ∂f/∂s_i, for which the IMA principle postulates independence, are the columns of J_f. |J_f| thus measures the volume of the n-dimensional parallelepiped spanned by these columns, as shown on the right. The product of their norms, on the other hand, corresponds to the volume of an n-dimensional box, or rectangular parallelepiped, with side lengths ‖∂f/∂s_i‖, as shown on the left. The two volumes are equal if and only if all columns ∂f/∂s_i of J_f are orthogonal. Note that (7) is trivially satisfied for n = 1, i.e., if there is no mixing, further highlighting its difference from ICM for causal discovery.

Independent influences and orthogonality. In a high-dimensional setting (large n), this orthogonality can be intuitively interpreted from the ICM perspective as Nature choosing the direction of the influence of each source component in the observation space independently and from an isotropic prior. Indeed, it can be shown that the scalar product of two independent isotropic random vectors in R^n vanishes as the dimensionality n increases (equivalently: two high-dimensional isotropic vectors are typically orthogonal). This property was previously exploited in other linear ICM-based criteria (see [44, Lemma 5] and [45, Lemma 1 & Thm. 1]).9 The principle in (7) can be seen as a constraint on the function space, enforcing such orthogonality between the columns of the Jacobian of f at all points in the source domain, thus approximating the high-dimensional behavior described above.10

Information-geometric interpretation and comparison to IGCI. The additive contribution of the sources' influences ∂f/∂s_i in (7) suggests their local decoupling at the level of the mechanism f. Note that IGCI (6), on the other hand, postulates a different type of decoupling: one between log |J_f| and p_s. There, dependence between cause and mechanism can be conceived as a fine-tuning between the derivative of the mechanism and the input density. The IMA principle leads to a complementary, non-statistical measure of independence between the influences ∂f/∂s_i of the individual sources on the vector of observations. Both the IGCI and IMA postulates have an information-geometric interpretation related to the influence of (“non-statistically”) independent modules on the observations: both lead to an additive decomposition of a KL-divergence between the effect distribution and a reference distribution. For IGCI, the independent modules correspond to the cause distribution and the mechanism mapping the cause to the effect (see (19) in Appendix B.2). For IMA, on the other hand, these are the influences of each source component on the observations in an interventional setting (under soft interventions on individual sources), as measured by the KL-divergences between the original and intervened distributions. See Appendix B.3, and especially (22), for a more detailed account.
We finally remark that while recent work based on the ICM principle has mostly used the term “mechanism” to refer to causal Markov kernels p(X_i | PA_i) or structural equations [78], we employ it in line with the broader use of this concept in the philosophical literature.11 To highlight just two examples, [86] states that “Causal processes, causal interactions, and causal laws provide the mechanisms by which the world works; to understand why certain things happen, we need to see how they are produced by these mechanisms”; and [99] states that “Mechanisms are events that alter relations among some specified set of elements”. Following this perspective, we argue that a causal mechanism can more generally denote any process that describes the way in which causes influence their effects: the partial derivative ∂f/∂s_i thus reflects a causal mechanism in the sense that it describes the infinitesimal changes in the observations x when an infinitesimal perturbation is applied to s_i.

4.2 Definition and useful properties of the IMA contrast

We now introduce a contrast function based on the IMA principle (7) and show that it possesses several desirable properties in the context of nonlinear ICA. First, we define a local contrast as the difference between the two integrands of (7) for a particular value of the sources s.

Definition 4.2 (Local IMA contrast). The local IMA contrast c_IMA(f, s) of f at a point s is given by

c_IMA(f, s) = Σ_{i=1}^n log ‖∂f/∂s_i (s)‖ − log |J_f(s)|.  (8)

Remark 4.3. This corresponds to the left KL measure of diagonality [2] for √(J_f(s)^⊤ J_f(s)).

9 This has also been used as a “leading intuition” [sic] to interpret IGCI in [46].
10 To provide additional intuition on how IMA differs from existing principles of independence of cause and mechanism, we give examples, both technical and pictorial, of violations of both in Appendix B.4.
11 See Table 1 in [62] for a long list of definitions from the literature.
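The local IMA contrast (8) translates directly into code via automatic differentiation; the following JAX sketch is our own illustration:

```python
import jax
import jax.numpy as jnp

def c_ima(f, s: jnp.ndarray) -> jnp.ndarray:
    """Local IMA contrast (8): sum of log column norms minus log |det J_f(s)|."""
    J = jax.jacobian(f)(s)                    # J[i, j] = ∂f_i/∂s_j
    col_norms = jnp.linalg.norm(J, axis=0)    # ‖∂f/∂s_j‖, one per source
    logabsdet = jnp.linalg.slogdet(J)[1]
    return jnp.sum(jnp.log(col_norms)) - logabsdet
```

By Hadamard's inequality, this quantity is non-negative and vanishes exactly when the columns of J_f(s) are orthogonal, matching Prop. 4.4 (i) below.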
The local IMA contrast c_IMA(f, s) quantifies the extent to which the IMA principle is violated at a given point s. We summarise some of its properties in the following proposition.

Proposition 4.4 (Properties of c_IMA(f, s)). The local IMA contrast c_IMA(f, s) defined in (8) satisfies:
(i) c_IMA(f, s) ≥ 0, with equality if and only if all columns ∂f/∂s_i(s) of J_f(s) are orthogonal.
(ii) c_IMA(f, s) is invariant to left multiplication of J_f(s) by an orthogonal matrix and to right multiplication by permutation and diagonal matrices.

Property (i) formalises the geometric interpretation of IMA as an orthogonality condition on the columns of the Jacobian from § 4.1, and property (ii) intuitively states that changes of orthonormal basis and permutations or rescalings of the columns of J_f do not affect their orthogonality. Next, we define a global IMA contrast w.r.t. a source distribution p_s as the expected local IMA contrast.

Definition 4.5 (Global IMA contrast). The global IMA contrast C_IMA(f, p_s) of f w.r.t. p_s is given by

C_IMA(f, p_s) = E_{s∼p_s}[c_IMA(f, s)] = ∫ c_IMA(f, s) p_s(s) ds.  (9)

The global IMA contrast C_IMA(f, p_s) thus quantifies the extent to which the IMA principle is violated for a particular solution (f, p_s) to the nonlinear ICA problem. We summarise its properties as follows.

Proposition 4.6 (Properties of C_IMA(f, p_s)). The global IMA contrast C_IMA(f, p_s) from (9) satisfies:
(i) C_IMA(f, p_s) ≥ 0, with equality iff J_f(s) = O(s)D(s) almost surely w.r.t. p_s, where O(s), D(s) ∈ R^{n×n} are orthogonal and diagonal matrices, respectively;
(ii) C_IMA(f, p_s) = C_IMA(f̃, p_s̃) for any f̃ = f ∘ h^{-1} ∘ P^{-1} and s̃ = Ph(s), where P ∈ R^{n×n} is a permutation and h(s) = (h_1(s_1), ..., h_n(s_n)) an invertible element-wise function.

Figure 3: An example of a (non-conformal) orthogonal coordinate transformation from polar (left) to Cartesian (right) coordinates.

Property (i) is the distribution-level analogue to (i) of Prop. 4.4 and only allows for orthogonality violations on sets of measure zero w.r.t. p_s. This means that C_IMA can only be zero if f is an orthogonal coordinate transformation almost everywhere [19, 52, 66], see Fig. 3 for an example. We particularly stress property (ii), as it precisely matches the inherent indeterminacy of nonlinear ICA: C_IMA is blind to reparametrisation of the sources by permutation and element-wise transformation.

4.3 Theoretical analysis and justification of C_IMA

We now show that, under suitable assumptions on the generative model (1), a large class of spurious solutions—such as those based on the Darmois construction (4) or measure-preserving automorphisms such as a_R from (5), as described in § 2.1—exhibit nonzero IMA contrast. Denote the class of nonlinear ICA models satisfying (7) (IMA) by M_IMA = {(f, p_s) ∈ F × P : C_IMA(f, p_s) = 0} ⊂ F × P. Our first main theoretical result is that, under mild assumptions on the observations, Darmois solutions will have strictly positive C_IMA, making them distinguishable from those in M_IMA.

Theorem 4.7. Assume the data generating process in (1) and assume that x_i ⊥̸⊥ x_j for some i ≠ j. Then any Darmois solution (f^D, p_u) based on g^D as defined in (4) satisfies C_IMA(f^D, p_u) > 0. Thus a solution satisfying C_IMA(f, p_s) = 0 can be distinguished from (f^D, p_u) based on the contrast C_IMA.

The proof is based on the fact that the Jacobian of g^D is triangular (see Remark 2.4) and on the specific form of (4). A specific example of a mixing process satisfying the IMA assumption is the case where f is a conformal (angle-preserving) map.

Definition 4.8 (Conformal map). A smooth map f : R^n → R^n is conformal if J_f(s) = O(s)λ(s) ∀s, where λ : R^n → R is a scalar field and O ∈ O(n) is an orthogonal matrix.

Corollary 4.9. Under the assumptions of Thm. 4.7, if additionally f is a conformal map, then (f, p_s) ∈ M_IMA for any p_s ∈ P due to Prop. 4.6 (i), see Defn. 4.8. Based on Thm. 4.7, (f, p_s) is thus distinguishable from Darmois solutions (f^D, p_u).

This is consistent with a result that proves identifiability of conformal maps for n = 2 and conjectures it in general [39].12 However, conformal maps are only a small subset of all maps for which C_IMA = 0, as is apparent from the more flexible condition of Prop. 4.6 (i), compared to the stricter Defn. 4.8.

12 Note that Corollary 4.9 holds for any dimensionality n.

Example 4.10 (Polar to Cartesian coordinate transform). Consider the non-conformal transformation from polar to Cartesian coordinates (see Fig. 3), defined as (x, y) = f(r, θ) := (r cos(θ), r sin(θ)), with independent sources s = (r, θ), with r ∼ U(0, R) and θ ∼ U(0, 2π).13 Then C_IMA(f, p_s) = 0 and C_IMA(f^D, p_u) > 0 for any Darmois solution (f^D, p_u); see Appendix D for details.

13 For different p_s, (x, y) can be made to have independent Gaussian components ([98], II.B), and C_IMA-identifiability is lost; this shows that the assumption of Thm. 4.7 that x_i ⊥̸⊥ x_j for some i ≠ j is crucial.
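Example 4.10 can be verified numerically in a few lines (a self-contained version of the local contrast sketch above; again our own illustration):

```python
import jax
import jax.numpy as jnp

def polar_to_cartesian(s: jnp.ndarray) -> jnp.ndarray:
    r, theta = s
    return jnp.array([r * jnp.cos(theta), r * jnp.sin(theta)])

def c_ima(f, s):
    J = jax.jacobian(f)(s)
    return jnp.sum(jnp.log(jnp.linalg.norm(J, axis=0))) - jnp.linalg.slogdet(J)[1]

print(c_ima(polar_to_cartesian, jnp.array([1.3, 0.7])))   # ≈ 0 (orthogonal columns)
```

Here the Jacobian columns (cos θ, sin θ) and (−r sin θ, r cos θ) are orthogonal with norms 1 and r, so the sum of log column norms equals log |det J_f| = log r and the contrast vanishes, as claimed.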
Finally, for the case in which the true mixing is linear, we obtain the following result.

Corollary 4.11. Consider a linear ICA model, x = As, with E[ss^⊤] = I, and A ∈ O(n) an orthogonal, non-trivial mixing matrix, i.e., not the product of a diagonal and a permutation matrix DP. If at most one of the s_i is Gaussian, then C_IMA(A, p_s) = 0 and C_IMA(f^D, p_u) > 0.

In a “blind” setting, we may not know a priori whether the true mixing is linear or not, and thus choose to learn a nonlinear unmixing. Corollary 4.11 shows that, in this case, Darmois solutions are still distinguishable from the true mixing via C_IMA. Note that, unlike in Corollary 4.9, the assumption that x_i ⊥̸⊥ x_j for some i ≠ j is not required for Corollary 4.11. In fact, due to Theorem 11 of [17], it follows from the assumed linear ICA model with non-Gaussian sources and the fact that the mixing matrix is not the product of a diagonal and a permutation matrix (see also Appendix A).

Having shown that the IMA principle allows us to distinguish a class of models (including, but not limited to, conformal maps) from Darmois solutions, we next turn to a second well-known counterexample to identifiability: the “rotated-Gaussian” MPA a_R(p_s) (5) from Defn. 2.5. Our second main theoretical result is that, under suitable assumptions, this class of MPAs can also be ruled out for “non-trivial” R.

Theorem 4.12. Let (f, p_s) ∈ M_IMA and assume that f is a conformal map. Given R ∈ O(n), assume additionally that there exists at least one non-Gaussian s_i whose associated canonical basis vector e_i is not transformed by R^{-1} = R^⊤ into another canonical basis vector e_j. Then C_IMA(f ∘ a_R(p_s), p_s) > 0.

Thm. 4.12 states that for conformal maps, applying the a_R(p_s) transformation at the level of the sources leads to an increase in C_IMA, except for very specific rotations R that are “fine-tuned” to p_s in the sense that they permute all non-Gaussian sources s_i with another s_j. Interestingly, as in the linear case, non-Gaussianity again plays an important role in the proof of Thm. 4.12.

5 Experiments

Our theoretical results from § 4 suggest that C_IMA is a promising contrast function for nonlinear blind source separation. We test this empirically by evaluating the C_IMA of spurious nonlinear ICA solutions (§ 5.1), and by using it as a learning objective to recover the true solution (§ 5.2). We sample the ground truth sources from a uniform distribution in [0, 1]^n; the reconstructed sources are also mapped to the uniform hypercube as a reference measure via the CDF transform. Unless otherwise specified, the ground truth mixing f is a Möbius transformation [81] (i.e., a conformal map) with randomly sampled parameters, thereby satisfying Principle 4.1. In all of our experiments, we use JAX [12] and Distrax [13]. For additional technical details, equations and plots, see Appendix E. The code to reproduce our experiments is available at this link.

5.1 Numerical evaluation of the C_IMA contrast for spurious nonlinear ICA solutions

Learning the Darmois construction. To learn the Darmois construction from data, we use normalising flows, see [35, 69]. Since Darmois solutions have triangular Jacobian (Remark 2.4), we use an architecture based on residual flows [16], which we constrain such that the Jacobian of the full model is triangular. This yields an expressive model which we train effectively via maximum likelihood.
C_IMA of Darmois solutions. To check whether Darmois solutions (learnt from finite data) can be distinguished from the true one, as predicted by Thm. 4.7, we generate 1000 random mixing functions for n = 2, compute the C_IMA values of the learnt solutions, and find that all values are indeed significantly larger than zero, see Fig. 4 (a). The same holds for higher dimensions, see Fig. 4 (b) for results with 50 random mixings for n ∈ {2, 3, 5, 10}: with higher dimensionality, both the mean and the variance of the C_IMA distribution for the learnt Darmois solutions generally attain higher values.14 We confirmed these findings for mappings which are not conformal, while still satisfying (7), in Appendix E.5.

C_IMA of MPAs. We also investigate the effect on C_IMA of applying an MPA a_R(·) from (5) to the true solution or a learnt Darmois solution. Results for n = 2, for different rotation matrices R (parametrised by the angle θ), are shown in Fig. 4 (c). As expected, the behavior is periodic in θ, and vanishes for the true solution (blue) at multiples of π/2, i.e., when R is a permutation matrix, as predicted by Thm. 4.12. For the learnt Darmois solution (red, dashed), C_IMA remains larger than zero.

C_IMA values for random MLPs. Lastly, we study the behavior of spurious solutions based on the Darmois construction under deviations from our assumption of C_IMA = 0 for the true mixing function. To this end, we use invertible MLPs with orthogonal weight initialisation and leaky_tanh activations [29] as mixing functions; the more layers L are added to the mixing MLP, the larger a deviation from our assumptions is expected. We compare the true mixing and learnt Darmois solutions over 20 realisations for each L ∈ {2, 3, 4}, n = 5. Results are shown in Fig. 4 (d): the C_IMA of the mixing MLPs grows with L; still, that of the Darmois solution is typically higher.

Summary. We verify that spurious solutions can be distinguished from the true one based on C_IMA.

14 The latter is possibly due to the increased difficulty of the learning task for larger n.

5.2 Learning nonlinear ICA solutions with C_IMA-regularised maximum likelihood

Experimental setup. To use C_IMA as a learning signal, we consider a regularised maximum-likelihood approach with the following objective: L(g) = E_x[log p_g(x)] − λ C_IMA(g^{-1}, p_y), where g denotes the learnt unmixing, y = g(x) the reconstructed sources, and λ ≥ 0 a Lagrange multiplier. For λ = 0, this corresponds to standard maximum likelihood estimation, whereas for λ > 0, L lower-bounds the likelihood, and recovers it exactly iff (g^{-1}, p_y) ∈ M_IMA. We train a residual flow g (with full Jacobian) to maximise L. For evaluation, we compute (i) the KL divergence to the true data likelihood, as a measure of goodness of fit of the learnt flow model; and (ii) the mean correlation coefficient (MCC) between ground truth and reconstructed sources [37, 49]. We also introduce (iii) a nonlinear extension of the Amari distance [5] between the true mixing and the learnt unmixing, which is larger than or equal to zero, with equality iff the learnt model belongs to the BSS equivalence class (Defn. 2.2) of the true solution; see Appendix E.5 for details.
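A hedged sketch of this objective is given below; the affine “flow” with a standard-normal base density is a stand-in for the residual flow actually used in the paper, so only the structure of the loss is faithful to the text:

```python
import jax
import jax.numpy as jnp

def g(params, x):                            # toy invertible model: affine map
    W, b = params
    return W @ x + b

def per_sample_objective(params, x, lam=1.0):
    Jg = jax.jacobian(lambda u: g(params, u))(x)
    y = g(params, x)
    # log p_g(x) via change of variables with a standard-normal base density
    log_px = (-0.5 * jnp.sum(y**2) - 0.5 * y.size * jnp.log(2 * jnp.pi)
              + jnp.linalg.slogdet(Jg)[1])
    Jf = jnp.linalg.inv(Jg)                  # Jacobian of the learnt mixing g^{-1}
    c_ima = jnp.sum(jnp.log(jnp.linalg.norm(Jf, axis=0))) - jnp.linalg.slogdet(Jf)[1]
    return log_px - lam * c_ima              # L(g) for one sample

loss = lambda params, xs, lam=1.0: -jnp.mean(
    jax.vmap(lambda x: per_sample_objective(params, x, lam))(xs))
grads = jax.grad(loss)                       # plug into any optimiser, e.g. optax
```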
Results. In Fig. 4 (Top), we show an example of the distortion induced by different spurious solutions for n = 2, and contrast it with a solution learnt using our proposed objective (rightmost plot). Visually, we find that the C_IMA-regularised solution (with λ = 1) recovers the true sources most faithfully. Quantitative results for 50 learnt models for each λ ∈ {0.0, 0.5, 1.0} and n ∈ {5, 7} are summarised in Fig. 5 (see Appendix E for additional plots). As indicated by the KL divergence values (left), most trained models achieve a good fit to the data across all values of λ.15 We observe that using C_IMA (i.e., λ > 0) is beneficial for BSS, both in terms of our nonlinear Amari distance (center, lower is better) and MCC (right, higher is better), though we do not observe a substantial difference between λ = 0.5 and λ = 1.16

Summary. C_IMA can be a useful learning signal to recover the true solution.

15 Models with n = 7 have high outlier KL values, seemingly less pronounced for nonzero values of λ.
16 In Appendix E.5, we also show that our method is superior to a linear ICA baseline, FastICA [36].

6 Discussion

Assumptions on the mixing function. Instead of relying on weak supervision in the form of auxiliary variables [28, 30, 37, 38, 41, 49], our IMA approach places additional constraints on the functional form of the mixing process. In a similar vein, the minimal nonlinear distortion principle [108] proposes to favor solutions that are as close to linear as possible. Another example is the post-nonlinear model [98, 109], which assumes an element-wise nonlinearity applied after a linear mixing. IMA is different in that it still allows for strongly nonlinear mixings (see, e.g., Fig. 3), provided that the columns of their Jacobians are (close to) orthogonal. In the related field of disentanglement [8, 58], a line of work that focuses on image generation with adversarial networks [24] similarly proposes to constrain the “generator” function via regularisation of its Jacobian [82] or Hessian [74], though mostly from an empirically-driven perspective, rather than from an identifiability perspective as in the present work.

Towards identifiability with C_IMA. The IMA principle rules out a large class of spurious solutions to nonlinear ICA. While we do not present a full identifiability result, our experiments show that C_IMA can be used to recover the BSS equivalence class, suggesting that identifiability might indeed hold, possibly under additional assumptions—e.g., for conformal maps [39].

IMA and independence of cause and mechanism. While inspired by measures of independence of cause and mechanism as traditionally used for cause-effect inference [18, 45, 46, 110], we view the IMA principle as addressing a different question, in the sense that they evaluate independence between different elements of the causal model. Any nonlinear ICA solution that satisfies the IMA Principle 4.1 can be turned into one with uniform reconstructed sources—thus satisfying IGCI as argued in § 3—through composition with an element-wise transformation which, according to Prop. 4.6 (ii), leaves the C_IMA value unchanged. Both IGCI (6) and IMA (7) can therefore be fulfilled simultaneously, while the former on its own is inconsequential for BSS, as shown in Prop. 3.1.

BSS through algorithmic information. Algorithmic information theory has previously been proposed as a unifying framework for identifiable approaches to linear BSS [67, 68], in the sense that commonly-used contrast functions could, under suitable assumptions, be interpreted as proxies for the total complexity of the mixing and the reconstructed sources. However, to the best of our knowledge, the problem of specifying suitable proxies for the complexity of nonlinear mixing functions has not yet been addressed.
We conjecture that our framework could be linked to this view, based on the additional assumption of algorithmic independence of causal mechanisms [43], thus potentially representing an approach to nonlinear BSS by minimisation of algorithmic complexity.

ICA for causal inference & causality for ICA. Past advances in ICA have inspired novel causal discovery methods [50, 64, 92]. The present work constitutes, to the best of our knowledge, the first effort to use ideas from causality (specifically ICM) for BSS. An application of the IMA principle to causal discovery or causal representation learning [88] is an interesting direction for future work.

Conclusion. We introduce IMA, a path to nonlinear BSS inspired by concepts from causality. We postulate that the influences of different sources on the observed distribution should be approximately independent, and formalise this as an orthogonality condition on the columns of the Jacobian. We prove that this constraint is generally violated by well-known spurious nonlinear ICA solutions, and propose a regularised maximum likelihood approach which we empirically demonstrate to be effective in recovering the true solution. Our IMA principle holds exactly for orthogonal coordinate transformations, and is thus of potential interest for learning spatial representations [33], robot dynamics [63], or physics problems where orthogonal reference frames are common [66].

Acknowledgements The authors thank Aapo Hyvärinen, Adrián Javaloy Bornás, Dominik Janzing, Giambattista Parascandolo, Giancarlo Fissore, Nasim Rahaman, Patrick Burauel, Patrik Reizinger, Paul Rubenstein, Shubhangi Ghosh, and the anonymous reviewers for helpful comments and discussions.

Funding Transparency Statement This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
1. What is the focus of the paper regarding non-linear ICA and its connection to Independence of Causal Mechanisms? 2. What are the strengths of the proposed approach, particularly in its ability to eliminate known counterexamples to identifiability? 3. What are the weaknesses of the paper, specifically regarding the discussion of ICM as motivation for IMA? 4. How does the reviewer suggest improving the clarity and concreteness of the paper's discussion of algorithmic independence? 5. Can you provide examples that would help readers better understand when this functional restriction may or may not be practical?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a criterion for non-linear ICA as applied to Blind Source Separation, inspired by Independence of Causal Mechanisms in the causal discovery literature. The general idea is to constrain the gradients of the output with respect to the input sources to be orthogonal to each other; this is linked to the notion that the mixing mechanism is in some sense independent of (or not "too fine-tuned" to) the input sources. The authors show that this constraint eliminates a number of known counterexamples to identifiability in non-linear ICA, and test it experimentally to show that it helps to characterize true solutions (both as an evaluation metric and as a regularizer). The authors conclude with a discussion putting their contributions into context.

Review
Overall, the paper is clearly written and the technical properties and limitations of the method are well-discussed. The experiments are well-designed to support the claims in the paper. My main concern with the paper is that the discussion of ICM as motivation for IMA is a bit long and imprecise. While I do appreciate the review of ICM ideas, the notion of "algorithmic independence" is still rather vague (this is, IMO, often a problem in a lot of the ICM literature). Although two different measurements of the concept are given (in terms of Kolmogorov complexity and IGCI), a concrete counterexample of what it would mean to have "fine-tuning" between the cause and the conditional would be helpful. In particular, is there an example that could be given (e.g., a modification of the cocktail party example) where this independence would not hold? Perhaps an operator might have changed some properties of the recording to better capture certain parts of the input sources? A concrete example of this type would also give a better sense of when this functional restriction may or may not be appropriate in practice. When specifying identifying assumptions, this kind of grounding is generally useful because, by definition, there is no way to test these assumptions in practice.
NIPS
Title Independent mechanism analysis, a new concept? Abstract Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation. 1 Introduction One of the goals of unsupervised learning is to uncover properties of the data generating process, such as latent structures giving rise to the observed data. Identifiability [55] formalises this desideratum: under suitable assumptions, a model learnt from observations should match the ground truth, up to well-defined ambiguities. Within representation learning, identifiability has been studied mostly in the context of independent component analysis (ICA) [17, 40], which assumes that the observed data x results from mixing unobserved independent random variables si referred to as sources. The aim is to recover the sources based on the observed mixtures alone, also termed blind source separation (BSS). A major obstacle to BSS is that, in the nonlinear case, independent component estimation does not necessarily correspond to recovering the true sources: it is possible to give counterexamples where the observations are transformed into components yi which are independent, yet still mixed with respect to the true sources si [20, 39, 98]. In other words, nonlinear ICA is not identifiable. In order to achieve identifiability, a growing body of research postulates additional supervision or structure in the data generating process, often in the form of auxiliary variables [28, 30, 37, 38, 41]. In the present work, we investigate a different route to identifiability by drawing inspiration from the field of causal inference [71, 78] which has provided useful insights for a number of machine learning tasks, including semi-supervised [87, 103], transfer [6, 23, 27, 31, 61, 72, 84, 85, 97, 102, 107], reinforcement [7, 14, 22, 26, 53, 59, 60, 106], and unsupervised [9, 10, 54, 70, 88, 91, 104, 105] learning. To this end, we interpret the ICA mixing as a causal process and apply the principle of independent causal mechanisms (ICM) which postulates that the generative process consists of independent modules which do not share information [43, 78, 87]. In this context, “independent” does not refer to statistical independence of random variables, but rather to the notion that the distributions and functions composing the generative process are chosen independently by Nature [43, 48]. 
While a formalisation of ICM [43, 57] in terms of algorithmic (Kolmogorov) complexity [51] exists, it is not computable, and hence applying ICM in practice requires assessing such non-statistical independence ∗Equal contribution. Code available at: https://github.com/lgresele/independent-mechanism-analysis 35th Conference on Neural Information Processing Systems (NeurIPS 2021). with suitable domain specific criteria [96]. The goal of our work is thus to constrain the nonlinear ICA problem, in particular the mixing function, via suitable ICM measures, thereby ruling out common counterexamples to identifiability which intuitively violate the ICM principle. Traditionally, ICM criteria have been developed for causal discovery, where both cause and effect are observed [18, 45, 46, 110]. They enforce an independence between (i) the cause (source) distribution and (ii) the conditional or mechanism (mixing function) generating the effect (observations), and thus rely on the fact that the observed cause distribution is informative. As we will show, this renders them insufficient for nonlinear ICA, since the constraints they impose are satisfied by common counterexamples to identifiability. With this in mind, we introduce a new way to characterise or refine the ICM principle for unsupervised representation learning tasks such as nonlinear ICA. Motivating example. To build intuition, we turn to a famous example of ICA and BSS: the cocktail party problem, illustrated in Fig. 1 (Left). Here, a number of conversations are happening in parallel, and the task is to recover the individual voices si from the recorded mixtures xi. The mixing or recording process f is primarily determined by the room acoustics and the locations at which microphones are placed. Moreover, each speaker influences the recording through their positioning in the room, and we may think of this influence as ∂f/∂si. Our independence postulate then amounts to stating that the speakers’ positions are not fine-tuned to the room acoustics and microphone placement, or to each other, i.e., the contributions ∂f/∂si should be independent (in a non-statistical sense).1 Our approach. We formalise this notion of independence between the contributions ∂f/∂si of each source to the mixing process (i.e., the columns of the Jacobian matrix Jf of partial derivatives) as an orthogonality condition, see Fig. 1 (Right). Specifically, the absolute value of the determinant |Jf |, which describes the local change in infinitesimal volume induced by mixing the sources, should factorise or decompose as the product of the norms of its columns. This can be seen as a decoupling of the local influence of each partial derivative in the pushforward operation (mixing function) mapping the source distribution to the observed one, and gives rise to a novel framework which we term independent mechanism analysis (IMA). IMA can be understood as a refinement of the ICM principle that applies the idea of independence of mechanisms at the level of the mixing function. Contributions. 
Contributions. The structure and contributions of this paper can be summarised as follows:
• we review well-known obstacles to identifiability of nonlinear ICA (§ 2.1), as well as existing ICM criteria (§ 2.2), and show that the latter do not sufficiently constrain nonlinear ICA (§ 3);
• we propose a more suitable ICM criterion for unsupervised representation learning which gives rise to a new framework that we term independent mechanism analysis (IMA) (§ 4); we provide geometric and information-theoretic interpretations of IMA (§ 4.1), introduce an IMA contrast function which is invariant to the inherent ambiguities of nonlinear ICA (§ 4.2), and show that it rules out a large class of counterexamples and is consistent with existing identifiability results (§ 4.3);
• we experimentally validate our theoretical claims and propose a regularised maximum-likelihood learning approach based on the IMA contrast which outperforms the unregularised baseline (§ 5); additionally, we introduce a method to learn nonlinear ICA solutions with triangular Jacobian and a metric to assess BSS which can be of independent interest for the nonlinear ICA community.
1 For additional intuition and possible violations in the context of the cocktail party problem, see Appendix B.4.

2 Background and preliminaries
Our work builds on and connects related literature from the fields of independent component analysis (§ 2.1) and causal inference (§ 2.2). We review the most important concepts below.

2.1 Independent component analysis (ICA)
Assume the following data-generating process for independent component analysis (ICA):
$x = f(s)\,, \qquad p_s(s) = \prod_{i=1}^{n} p_{s_i}(s_i)\,,$ (1)
where the observed mixtures x ∈ Rn result from applying a smooth and invertible mixing function f : Rn → Rn to a set of unobserved, independent signals or sources s ∈ Rn with smooth, factorised density ps with connected support (see illustration Fig. 2b). The goal of ICA is to learn an unmixing function g : Rn → Rn such that y = g(x) has independent components. Blind source separation (BSS), on the other hand, aims to recover the true unmixing f−1 and thus the true sources s (up to tolerable ambiguities, see below). Whether performing ICA corresponds to solving BSS is related to the concept of identifiability of the model class. Intuitively, identifiability is the desirable property that all models which give rise to the same mixture distribution should be “equivalent” up to certain ambiguities, formally defined as follows.

Definition 2.1 (∼-identifiability). Let F be the set of all smooth, invertible functions f : Rn → Rn, and P be the set of all smooth, factorised densities ps with connected support on Rn. Let M ⊆ F×P be a subspace of models and let ∼ be an equivalence relation on M. Denote by f∗ps the push-forward density of ps via f . Then the generative process (1) is said to be ∼-identifiable on M if
$\forall (f, p_s), (\tilde{f}, p_{\tilde{s}}) \in \mathcal{M}: \quad f_* p_s = \tilde{f}_* p_{\tilde{s}} \implies (f, p_s) \sim (\tilde{f}, p_{\tilde{s}})\,.$ (2)
If the true model belongs to the model class M, then ∼-identifiability ensures that any model in M learnt from (infinite amounts of) data will be ∼-equivalent to the true one. An example is linear ICA, which is identifiable up to permutation and rescaling of the sources on the subspace MLIN of pairs of (i) invertible matrices (constraint on F) and (ii) factorising densities for which at most one si is Gaussian (constraint on P) [17, 21, 93], see Appendix A for a more detailed account. In the nonlinear case (i.e., without constraints on F), identifiability is much more challenging.
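To make the push-forward density f∗ps concrete, here is a minimal JAX sketch (ours, not from the paper) evaluating it via the change-of-variables formula for a hypothetical toy mixing; `mixing`, `unmixing`, and `log_ps` are illustrative names, not notation from the paper:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def mixing(s):
    # A toy smooth, invertible mixing f: R^2 -> R^2 (hypothetical example).
    return jnp.array([s[0] + 0.5 * jnp.tanh(s[1]), s[1]])

def unmixing(x):
    # Its exact inverse g = f^{-1}.
    return jnp.array([x[0] - 0.5 * jnp.tanh(x[1]), x[1]])

def log_pushforward_density(x, log_ps):
    # Change of variables: log (f_* p_s)(x) = log p_s(g(x)) + log |J_g(x)|.
    s = unmixing(x)
    jac = jax.jacfwd(unmixing)(x)
    _, logabsdet = jnp.linalg.slogdet(jac)
    return log_ps(s) + logabsdet

# Factorised (standard normal) source density, as in (1).
log_ps = lambda s: norm.logpdf(s).sum()
print(log_pushforward_density(jnp.array([0.3, -1.2]), log_ps))
```

The log-determinant term here is exactly the local volume change on which both IGCI (6) and the IMA principle (7) below place assumptions.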
If si and sj are independent, then so are hi(si) and hj(sj) for any functions hi and hj . In addition to permutation-ambiguity, such element-wise h(s) = (h1(s1), ..., hn(sn)) can therefore not be resolved either. We thus define the desired form of identifiability for nonlinear BSS as follows.

Definition 2.2 (∼BSS). The equivalence relation ∼BSS on F × P defined as in Defn. 2.1 is given by
$(f, p_s) \sim_{\mathrm{BSS}} (\tilde{f}, p_{\tilde{s}}) \iff \exists\, \mathbf{P}, h \ \text{s.t.}\ (f, p_s) = (\tilde{f} \circ h^{-1} \circ \mathbf{P}^{-1},\ (\mathbf{P} \circ h)_*\, p_{\tilde{s}})$ (3)
where P is a permutation and h(s) = (h1(s1), ..., hn(sn)) is an invertible, element-wise function.

A fundamental obstacle—and a crucial difference to the linear problem—is that in the nonlinear case, different mixtures of si and sj can be independent, i.e., solving ICA is not equivalent to solving BSS. A prominent example of this is given by the Darmois construction [20, 39].

Definition 2.3 (Darmois construction). The Darmois construction gD : Rn → (0, 1)n is obtained by recursively applying the conditional cumulative distribution function (CDF) transform:
$g^D_i(x_{1:i}) := \mathbb{P}(X_i \leq x_i \mid x_{1:i-1}) = \int_{-\infty}^{x_i} p(x'_i \mid x_{1:i-1})\, dx'_i \qquad (i = 1, \ldots, n).$ (4)
The resulting estimated sources yD = gD(x) are mutually-independent uniform r.v.s by construction, see Fig. 2a for an illustration. However, they need not be meaningfully related to the true sources s, and will, in general, still be a nonlinear mixing thereof [39].2 Denoting the mixing function corresponding to (4) by fD = (gD)−1 and the uniform density on (0, 1)n by pu, the Darmois solution (fD, pu) thus allows construction of counterexamples to ∼BSS-identifiability on F × P .3

Remark 2.4. gD has lower-triangular Jacobian, i.e., $\partial g^D_i / \partial x_j = 0$ for i < j. Since the order of the xi is arbitrary, applying gD after a permutation yields a different Darmois solution. Moreover, (4) yields independent components yD even if the sources si were not independent to begin with.4
2 Consider, e.g., a mixing f with full Jacobian which yields a contradiction to Defn. 2.2, due to Remark 2.4.
3 By applying a change of variables, we can see that the transformed variables in (4) are uniformly distributed in the open unit cube, thereby corresponding to independent components [69, § 2.2].
4 This has broad implications for unsupervised learning, as it shows that, for i.i.d. observations, not only factorised priors, but any unconditional prior is insufficient for identifiability (see, e.g., [49], Appendix D.2).

Another well-known obstacle to identifiability are measure-preserving automorphisms (MPAs) of the source distribution ps: these are functions a which map the source space to itself without affecting its distribution, i.e., a∗ps = ps [39]. A particularly instructive class of MPAs is the following [49, 58].

Definition 2.5 (“Rotated-Gaussian” MPA). Let R ∈ O(n) be an orthogonal matrix, and denote by $F_s(s) = (F_{s_1}(s_1), \ldots, F_{s_n}(s_n))$ and $\Phi(z) = (\Phi(z_1), \ldots, \Phi(z_n))$ the element-wise CDFs of a smooth, factorised density ps and of a Gaussian, respectively. Then the “rotated-Gaussian” MPA aR(ps) is
$a_{\mathbf{R}}(p_s) = F_s^{-1} \circ \Phi \circ \mathbf{R} \circ \Phi^{-1} \circ F_s\,.$ (5)
aR(ps) first maps to the (rotationally invariant) standard isotropic Gaussian (via Φ−1 ◦ Fs), then applies a rotation, and finally maps back, without affecting the distribution of the estimated sources. Hence, if (f̃ , ps̃) is a valid solution, then so is (f̃ ◦ aR(ps̃), ps̃) for any R ∈ O(n). Unless R is a permutation, this constitutes another common counterexample to ∼BSS-identifiability on F × P .
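To make Defn. 2.5 concrete, the following is a minimal JAX sketch (ours) of the rotated-Gaussian MPA for uniform sources on (0, 1)², where the outer CDF maps Fs are the identity; we assume the standard-normal CDF ndtr and its inverse ndtri from jax.scipy.special, and parametrise the 2D rotation by an angle θ:

```python
import jax.numpy as jnp
from jax.scipy.special import ndtr, ndtri  # standard-normal CDF and inverse

def rotated_gaussian_mpa(s, theta):
    # a_R(p_s) = F_s^{-1} o Phi o R o Phi^{-1} o F_s, eq. (5); for uniform
    # sources on (0, 1)^2 the outer CDF maps F_s are the identity.
    R = jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
                   [jnp.sin(theta),  jnp.cos(theta)]])
    z = ndtri(s)        # map to the standard isotropic Gaussian
    return ndtr(R @ z)  # rotate, then map back to the unit square

# The output is again uniform on (0, 1)^2 but, unless theta is a multiple
# of pi/2 (R a signed permutation), it is a nonlinear mixing of the input.
print(rotated_gaussian_mpa(jnp.array([0.2, 0.7]), jnp.pi / 4))
```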
Identifiability results for nonlinear ICA have recently been established for settings where an auxiliary variable u (e.g., environment index, time stamp, class label) renders the sources conditionally independent [37, 38, 41, 49]. The assumption on ps in (1) is replaced with $p_{s|u}(s|u) = \prod_{i=1}^{n} p_{s_i|u}(s_i|u)$, thus restricting P in Defn. 2.1. In most cases, u is assumed to be observed, though [30] is a notable exception. Similar results exist given access to a second noisy view x̃ [28].

2.2 Causal inference and the principle of independent causal mechanisms (ICM)
Rather than relying only on additional assumptions on P (e.g., via auxiliary variables), we seek to further constrain (1) by also placing assumptions on the set F of mixing functions f . To this end, we draw inspiration from the field of causal inference [71, 78]. Of central importance to our approach is the Principle of Independent Causal Mechanisms (ICM) [43, 56, 87].

Principle 2.6 (ICM principle [78]). The causal generative process of a system’s variables is composed of autonomous modules that do not inform or influence each other.

These “modules” are typically thought of as the conditional distributions of each variable given its direct causes. Intuitively, the principle then states that these causal conditionals correspond to independent mechanisms of nature which do not share information. Crucially, here “independent” does not refer to statistical independence of random variables, but rather to independence of the underlying distributions as algorithmic objects. For a bivariate system comprising a cause c and an effect e, this idea reduces to an independence of cause and mechanism, see Fig. 2c. One way to formalise ICM uses Kolmogorov complexity K(·) [51] as a measure of algorithmic information [43]. However, since Kolmogorov complexity is not computable, using ICM in practice requires assessing Principle 2.6 with other suitable proxy criteria [9, 11, 34, 42, 45, 65, 75–78, 90, 110].5

Allowing for deterministic relations between cause (sources) and effect (observations), the criterion which is most closely related to the ICA setting in (1) is information-geometric causal inference (IGCI) [18, 46].6 IGCI assumes a nonlinear relation e = f(c) and formulates a notion of independence between the cause distribution pc and the deterministic mechanism f (which we think of as a degenerate conditional pe|c) via the following condition (in practice, assumed to hold approximately),
$C_{\mathrm{IGCI}}(f, p_c) := \int \log |J_f(c)|\, p_c(c)\, dc - \int \log |J_f(c)|\, dc = 0\,,$ (6)
where $(J_f(c))_{ij} = \partial f_i / \partial c_j(c)$ is the Jacobian matrix and | · | the absolute value of the determinant. CIGCI can be understood as the covariance between pc and log |Jf | (viewed as r.v.s on the unit cube w.r.t. the Lebesgue measure), so that CIGCI = 0 rules out a form of fine-tuning between pc and |Jf |. As its name suggests, IGCI can, from an information-geometric perspective, also be seen as an orthogonality condition between cause and mechanism in the space of probability distributions [46], see Appendix B.2, particularly eq. (19) for further details.
5 “This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on assumptions on the underlying distribution” [43].
6 For a similar criterion which assumes linearity [45, 110] and its relation to linear ICA, see Appendix B.1.
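As a concrete illustration of (6), here is a minimal Monte Carlo sketch (ours) of the IGCI contrast, assuming a cause supported on the unit cube so that the Lebesgue reference term becomes an expectation under a uniform distribution:

```python
import jax
import jax.numpy as jnp

def log_abs_det_jacobian(f, c):
    # log |J_f(c)| via forward-mode autodiff.
    _, logabsdet = jnp.linalg.slogdet(jax.jacfwd(f)(c))
    return logabsdet

def igci_contrast(f, cause_samples, key, n_uniform=10_000):
    # C_IGCI = E_{p_c}[log |J_f|] - E_{uniform}[log |J_f|], cf. (6), with
    # both integrals estimated by Monte Carlo on the unit cube.
    n = cause_samples.shape[1]
    lad = jax.vmap(lambda c: log_abs_det_jacobian(f, c))
    uniform_samples = jax.random.uniform(key, (n_uniform, n))
    return lad(cause_samples).mean() - lad(uniform_samples).mean()
```

Note that for a uniform pc the two terms coincide and the contrast vanishes regardless of f, which is precisely the loophole exploited in § 3 below.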
3 Existing ICM measures are insufficient for nonlinear ICA
Our aim is to use the ICM Principle 2.6 to further constrain the space of models M ⊆ F ×P and rule out common counterexamples to identifiability such as those presented in § 2.1. Intuitively, both the Darmois construction (4) and the rotated Gaussian MPA (5) give rise to “non-generic” solutions which should violate ICM: the former, (fD, pu), due to the triangular Jacobian of fD (see Remark 2.4), meaning that each observation xi = fDi (y1:i) only depends on a subset of the inferred independent components y1:i, and the latter, (f ◦ aR(ps), ps), due to the dependence of f ◦ aR(ps) on ps (5). However, the ICM criteria described in § 2.2 were developed for the task of cause-effect inference where both variables are observed. In contrast, in this work, we consider an unsupervised representation learning task where only the effects (mixtures x) are observed, but the causes (sources s) are not. It turns out that this renders existing ICM criteria insufficient for BSS: they can easily be satisfied by spurious solutions which are not equivalent to the true one. We can show this for IGCI. Denote by MIGCI = {(f , ps) ∈ F × P : CIGCI(f , ps) = 0} ⊂ F × P the class of nonlinear ICA models satisfying IGCI (6). Then the following negative result holds.

Proposition 3.1 (IGCI is insufficient for ∼BSS-identifiability). (1) is not ∼BSS-identifiable on MIGCI.

Proof. IGCI (6) is satisfied when ps is uniform. However, the Darmois construction (4) yields uniform sources, see Fig. 2a. This means that (fD ◦ aR(pu), pu) ∈ MIGCI, so IGCI can be satisfied by solutions which do not separate the sources in the sense of Defn. 2.2, see footnote 2 and [39].

As illustrated in Fig. 2c, condition (6) and other similar criteria enforce a notion of “genericity” or “decoupling” of the mechanism w.r.t. the observed input distribution.7 They thus rely on the fact that the cause (source) distribution is informative, and are generally not invariant to reparametrisation of the cause variables. In the (nonlinear) ICA setting, on the other hand, the learnt source distribution may be fairly uninformative. This poses a challenge for existing ICM criteria since any mechanism is generic w.r.t. an uninformative (uniform) input distribution.

4 Independent mechanism analysis (IMA)
As argued in § 3, enforcing independence between the input distribution and the mechanism (Fig. 2c), as existing ICM criteria do, is insufficient for ruling out spurious solutions to nonlinear ICA. We therefore propose a new ICM-inspired framework which is more suitable for BSS and which we term independent mechanism analysis (IMA).8 All proofs are provided in Appendix C.

4.1 Intuition behind IMA
As motivated using the cocktail party example in § 1 and Fig. 1 (Left), our main idea is to enforce a notion of independence between the contributions or influences of the different sources si on the observations x = f(s) as illustrated in Fig. 2d—as opposed to between the source distribution and mixing function, cf. Fig. 2c. These contributions or influences are captured by the vectors of partial derivatives ∂f/∂si. IMA can thus be understood as a refinement of ICM at the level of the mixing f : in addition to statistically independent components si, we look for a mixing with contributions ∂f/∂si which are independent, in a non-statistical sense which we formalise as follows.
Principle 4.1 (IMA). The mechanisms by which each source si influences the observed distribution, as captured by the partial derivatives ∂f/∂si, are independent of each other in the sense that for all s:
$\log |J_f(s)| = \sum_{i=1}^{n} \log \left\| \frac{\partial f}{\partial s_i}(s) \right\|$ (7)
7 In fact, many ICM criteria can be phrased as special cases of a unifying group-invariance framework [9].
8 The title of the present work is thus a reverence to Pierre Comon’s seminal 1994 paper [17].

Geometric interpretation. Geometrically, the IMA principle can be understood as an orthogonality condition, as illustrated for n = 2 in Fig. 1 (Right). First, the vectors of partial derivatives ∂f/∂si, for which the IMA principle postulates independence, are the columns of Jf . |Jf | thus measures the volume of the n-dimensional parallelepiped spanned by these columns, as shown on the right. The product of their norms, on the other hand, corresponds to the volume of an n-dimensional box, or rectangular parallelepiped with side lengths ∥∂f/∂si∥, as shown on the left. The two volumes are equal if and only if all columns ∂f/∂si of Jf are orthogonal. Note that (7) is trivially satisfied for n = 1, i.e., if there is no mixing, further highlighting its difference from ICM for causal discovery.

Independent influences and orthogonality. In a high dimensional setting (large n), this orthogonality can be intuitively interpreted from the ICM perspective as Nature choosing the direction of the influence of each source component in the observation space independently and from an isotropic prior. Indeed, it can be shown that the scalar product of two independent isotropic random vectors in Rn vanishes as the dimensionality n increases (equivalently: two high-dimensional isotropic vectors are typically orthogonal). This property was previously exploited in other linear ICM-based criteria (see [44, Lemma 5] and [45, Lemma 1 & Thm. 1]).9 The principle in (7) can be seen as a constraint on the function space, enforcing such orthogonality between the columns of the Jacobian of f at all points in the source domain, thus approximating the high-dimensional behavior described above.10

Information-geometric interpretation and comparison to IGCI. The additive contribution of the sources’ influences ∂f/∂si in (7) suggests their local decoupling at the level of the mechanism f . Note that IGCI (6), on the other hand, postulates a different type of decoupling: one between log |Jf | and ps. There, dependence between cause and mechanism can be conceived as a fine tuning between the derivative of the mechanism and the input density. The IMA principle leads to a complementary, non-statistical measure of independence between the influences ∂f/∂si of the individual sources on the vector of observations. Both the IGCI and IMA postulates have an information-geometric interpretation related to the influence of (“non-statistically”) independent modules on the observations: both lead to an additive decomposition of a KL-divergence between the effect distribution and a reference distribution. For IGCI, independent modules correspond to the cause distribution and the mechanism mapping the cause to the effect (see (19) in Appendix B.2). For IMA, on the other hand, these are the influences of each source component on the observations in an interventional setting (under soft interventions on individual sources), as measured by the KL-divergences between the original and intervened distributions. See Appendix B.3, and especially (22), for a more detailed account.
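In code, the gap between the two sides of (7) (formalised as the local IMA contrast in Defn. 4.2 below) can be checked directly; the following minimal JAX sketch (ours) evaluates it for the polar-to-Cartesian map of Fig. 3:

```python
import jax
import jax.numpy as jnp

def local_ima_contrast(f, s):
    # c_IMA(f, s) = sum_i log ||df/ds_i(s)|| - log |J_f(s)|, cf. (7)/(8);
    # zero iff the columns of the Jacobian are orthogonal at s.
    J = jax.jacfwd(f)(s)
    col_norms = jnp.linalg.norm(J, axis=0)  # ||df/ds_i||, columns of J_f
    _, logabsdet = jnp.linalg.slogdet(J)
    return jnp.sum(jnp.log(col_norms)) - logabsdet

# Polar-to-Cartesian map: Jacobian columns are orthogonal everywhere.
f_polar = lambda s: jnp.array([s[0] * jnp.cos(s[1]), s[0] * jnp.sin(s[1])])
print(local_ima_contrast(f_polar, jnp.array([1.5, 0.3])))  # ~0.0
```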
We finally remark that while recent work based on the ICM principle has mostly used the term “mechanism” to refer to causal Markov kernels p(Xi|PAi) or structural equations [78], we employ it in line with the broader use of this concept in the philosophical literature.11 To highlight just two examples, [86] states that “Causal processes, causal interactions, and causal laws provide the mechanisms by which the world works; to understand why certain things happen, we need to see how they are produced by these mechanisms”; and [99] states that “Mechanisms are events that alter relations among some specified set of elements”. Following this perspective, we argue that a causal mechanism can more generally denote any process that describes the way in which causes influence their effects: the partial derivative ∂f/∂si thus reflects a causal mechanism in the sense that it describes the infinitesimal changes in the observations x, when an infinitesimal perturbation is applied to si.

4.2 Definition and useful properties of the IMA contrast
We now introduce a contrast function based on the IMA principle (7) and show that it possesses several desirable properties in the context of nonlinear ICA. First, we define a local contrast as the difference between the two integrands of (7) for a particular value of the sources s.

Definition 4.2 (Local IMA contrast). The local IMA contrast cIMA(f , s) of f at a point s is given by
$c_{\mathrm{IMA}}(f, s) = \sum_{i=1}^{n} \log \left\| \frac{\partial f}{\partial s_i}(s) \right\| - \log |J_f(s)|\,.$ (8)

Remark 4.3. This corresponds to the left KL measure of diagonality [2] for $\sqrt{J_f(s)^\top J_f(s)}$.
9 This has also been used as a “leading intuition” [sic] to interpret IGCI in [46].
10 To provide additional intuition on how IMA differs from existing principles of independence of cause and mechanism, we give examples, both technical and pictorial, of violations of both in Appendix B.4.
11 See Table 1 in [62] for a long list of definitions from the literature.

The local IMA contrast cIMA(f , s) quantifies the extent to which the IMA principle is violated at a given point s. We summarise some of its properties in the following proposition.

Proposition 4.4 (Properties of cIMA(f , s)). The local IMA contrast cIMA(f , s) defined in (8) satisfies: (i) cIMA(f , s) ≥ 0, with equality if and only if all columns ∂f/∂si(s) of Jf (s) are orthogonal. (ii) cIMA(f , s) is invariant to left multiplication of Jf (s) by an orthogonal matrix and to right multiplication by permutation and diagonal matrices.

Property (i) formalises the geometric interpretation of IMA as an orthogonality condition on the columns of the Jacobian from § 4.1, and property (ii) intuitively states that changes of orthonormal basis and permutations or rescalings of the columns of Jf do not affect their orthogonality. Next, we define a global IMA contrast w.r.t. a source distribution ps as the expected local IMA contrast.

Definition 4.5 (Global IMA contrast). The global IMA contrast CIMA(f , ps) of f w.r.t. ps is given by
$C_{\mathrm{IMA}}(f, p_s) = \mathbb{E}_{s \sim p_s}[c_{\mathrm{IMA}}(f, s)] = \int c_{\mathrm{IMA}}(f, s)\, p_s(s)\, ds\,.$ (9)
The global IMA contrast CIMA(f , ps) thus quantifies the extent to which the IMA principle is violated for a particular solution (f , ps) to the nonlinear ICA problem. We summarise its properties as follows.

Proposition 4.6 (Properties of CIMA(f , ps)). The global IMA contrast CIMA(f , ps) from (9) satisfies: (i) CIMA(f , ps) ≥ 0, with equality iff. Jf (s) = O(s)D(s) almost surely w.r.t. ps, where O(s),D(s) ∈ Rn×n are orthogonal and diagonal matrices, respectively; (ii) CIMA(f , ps) = CIMA(f̃ , ps̃) for any f̃ = f ◦ h−1 ◦P−1 and s̃ = Ph(s), where P ∈ Rn×n is a permutation and h(s) = (h1(s1), ..., hn(sn)) an invertible element-wise function.

[Figure 3: An example of a (non-conformal) orthogonal coordinate transformation from polar (left) to Cartesian (right) coordinates.]

Property (i) is the distribution-level analogue to (i) of Prop. 4.4 and only allows for orthogonality violations on sets of measure zero w.r.t. ps. This means that CIMA can only be zero if f is an orthogonal coordinate transformation almost everywhere [19, 52, 66], see Fig. 3 for an example. We particularly stress property (ii), as it precisely matches the inherent indeterminacy of nonlinear ICA: CIMA is blind to reparametrisation of the sources by permutation and element-wise transformation.

4.3 Theoretical analysis and justification of CIMA
We now show that, under suitable assumptions on the generative model (1), a large class of spurious solutions—such as those based on the Darmois construction (4) or measure preserving automorphisms such as aR from (5) as described in § 2.1—exhibit nonzero IMA contrast. Denote the class of nonlinear ICA models satisfying (7) (IMA) by MIMA = {(f , ps) ∈ F × P : CIMA(f , ps) = 0} ⊂ F × P . Our first main theoretical result is that, under mild assumptions on the observations, Darmois solutions will have strictly positive CIMA, making them distinguishable from those in MIMA.

Theorem 4.7. Assume the data generating process in (1) and assume that xi ⊥̸⊥ xj for some i ̸= j. Then any Darmois solution (fD, pu) based on gD as defined in (4) satisfies CIMA(fD, pu) > 0. Thus a solution satisfying CIMA(f , ps) = 0 can be distinguished from (fD, pu) based on the contrast CIMA.

The proof is based on the fact that the Jacobian of gD is triangular (see Remark 2.4) and on the specific form of (4). A specific example of a mixing process satisfying the IMA assumption is the case where f is a conformal (angle-preserving) map.

Definition 4.8 (Conformal map). A smooth map f : Rn → Rn is conformal if Jf (s) = O(s)λ(s) ∀s, where λ : Rn → R is a scalar field, and O ∈ O(n) is an orthogonal matrix.

Corollary 4.9. Under assumptions of Thm. 4.7, if additionally f is a conformal map, then (f , ps) ∈ MIMA for any ps ∈ P due to Prop. 4.6 (i), see Defn. 4.8. Based on Thm. 4.7, (f , ps) is thus distinguishable from Darmois solutions (fD, pu).

This is consistent with a result that proves identifiability of conformal maps for n = 2 and conjectures it in general [39].12 However, conformal maps are only a small subset of all maps for which CIMA = 0, as is apparent from the more flexible condition of Prop. 4.6 (i), compared to the stricter Defn. 4.8.
12 Note that Corollary 4.9 holds for any dimensionality n.

Example 4.10 (Polar to Cartesian coordinate transform). Consider the non-conformal transformation from polar to Cartesian coordinates (see Fig. 3), defined as (x, y) = f(r, θ) := (r cos(θ), r sin(θ)) with independent sources s = (r, θ), with r ∼ U(0, R) and θ ∼ U(0, 2π).13 Then, CIMA(f , ps) = 0 and CIMA(fD, pu) > 0 for any Darmois solution (fD, pu)—see Appendix D for details.

Finally, for the case in which the true mixing is linear, we obtain the following result.
Corollary 4.11. Consider a linear ICA model, x = As, with E[s⊤s] = I, and A ∈ O(n) an orthogonal, non-trivial mixing matrix, i.e., not the product of a diagonal and a permutation matrix DP. If at most one of the si is Gaussian, then CIMA(A, ps) = 0 and CIMA(fD, pu) > 0.

In a “blind” setting, we may not know a priori whether the true mixing is linear or not, and thus choose to learn a nonlinear unmixing. Corollary 4.11 shows that, in this case, Darmois solutions are still distinguishable from the true mixing via CIMA. Note that unlike in Corollary 4.9, the assumption that xi ⊥̸⊥ xj for some i ̸= j is not required for Corollary 4.11. In fact, due to Theorem 11 of [17], it follows from the assumed linear ICA model with non-Gaussian sources, and the fact that the mixing matrix is not the product of a diagonal and a permutation matrix (see also Appendix A). Having shown that the IMA principle allows to distinguish a class of models (including, but not limited to conformal maps) from Darmois solutions, we next turn to a second well-known counterexample to identifiability: the “rotated-Gaussian” MPA aR(ps) (5) from Defn. 2.5. Our second main theoretical result is that, under suitable assumptions, this class of MPAs can also be ruled out for “non-trivial” R.

Theorem 4.12. Let (f , ps) ∈ MIMA and assume that f is a conformal map. Given R ∈ O(n), assume additionally that ∃ at least one non-Gaussian si whose associated canonical basis vector ei is not transformed by R−1 = R⊤ into another canonical basis vector ej . Then CIMA(f ◦ aR(ps), ps) > 0.

Thm. 4.12 states that for conformal maps, applying the aR(ps) transformation at the level of the sources leads to an increase in CIMA except for very specific rotations R that are “fine-tuned” to ps in the sense that they permute all non-Gaussian sources si with another sj . Interestingly, as for the linear case, non-Gaussianity again plays an important role in the proof of Thm. 4.12.

5 Experiments
Our theoretical results from § 4 suggest that CIMA is a promising contrast function for nonlinear blind source separation. We test this empirically by evaluating the CIMA of spurious nonlinear ICA solutions (§ 5.1), and using it as a learning objective to recover the true solution (§ 5.2). We sample the ground truth sources from a uniform distribution in [0, 1]n; the reconstructed sources are also mapped to the uniform hypercube as a reference measure via the CDF transform. Unless otherwise specified, the ground truth mixing f is a Möbius transformation [81] (i.e., a conformal map) with randomly sampled parameters, thereby satisfying Principle 4.1. In all of our experiments, we use JAX [12] and Distrax [13]. For additional technical details, equations and plots see Appendix E. The code to reproduce our experiments is available at this link.
13 For different ps, (x, y) can be made to have independent Gaussian components ([98], II.B), and CIMA-identifiability is lost; this shows that the assumption of Thm. 4.7 that xi ⊥̸⊥ xj for some i ̸= j is crucial.

5.1 Numerical evaluation of the CIMA contrast for spurious nonlinear ICA solutions
Learning the Darmois construction. To learn the Darmois construction from data, we use normalising flows, see [35, 69]. Since Darmois solutions have triangular Jacobian (Remark 2.4), we use an architecture based on residual flows [16] which we constrain such that the Jacobian of the full model is triangular. This yields an expressive model which we train effectively via maximum likelihood.
CIMA of Darmois solutions. To check whether Darmois solutions (learnt from finite data) can be distinguished from the true one, as predicted by Thm. 4.7, we generate 1000 random mixing functions for n = 2, compute the CIMA values of learnt solutions, and find that all values are indeed significantly larger than zero, see Fig. 4 (a). The same holds for higher dimensions, see Fig. 4 (b) for results with 50 random mixings for n ∈ {2, 3, 5, 10}: with higher dimensionality, both the mean and variance of the CIMA distribution for the learnt Darmois solutions generally attain higher values.14 We confirmed these findings for mappings which are not conformal, while still satisfying (7), in Appendix E.5.

CIMA of MPAs. We also investigate the effect on CIMA of applying an MPA aR(·) from (5) to the true solution or a learnt Darmois solution. Results for n = 2 dim. for different rotation matrices R (parametrised by the angle θ) are shown in Fig. 4 (c). As expected, the behavior is periodic in θ, and vanishes for the true solution (blue) at multiples of π/2, i.e., when R is a permutation matrix, as predicted by Thm. 4.12. For the learnt Darmois solution (red, dashed) CIMA remains larger than zero.

CIMA values for random MLPs. Lastly, we study the behavior of spurious solutions based on the Darmois construction under deviations from our assumption of CIMA = 0 for the true mixing function. To this end, we use invertible MLPs with orthogonal weight initialisation and leaky_tanh activations [29] as mixing functions; the more layers L are added to the mixing MLP, the larger a deviation from our assumptions is expected. We compare the true mixing and learnt Darmois solutions over 20 realisations for each L ∈ {2, 3, 4}, n = 5. Results are shown in Fig. 4 (d): the CIMA of the mixing MLPs grows with L; still, the one of the Darmois solution is typically higher.

Summary. We verify that spurious solutions can be distinguished from the true one based on CIMA.

5.2 Learning nonlinear ICA solutions with CIMA-regularised maximum likelihood
Experimental setup. To use CIMA as a learning signal, we consider a regularised maximum-likelihood approach, with the following objective: $\mathcal{L}(g) = \mathbb{E}_x[\log p_g(x)] - \lambda\, C_{\mathrm{IMA}}(g^{-1}, p_y)$, where g denotes the learnt unmixing, y = g(x) the reconstructed sources, and λ ≥ 0 a Lagrange multiplier (see the sketch below). For λ = 0, this corresponds to standard maximum likelihood estimation, whereas for λ > 0, L lower-bounds the likelihood, and recovers it exactly iff. (g−1, py) ∈ MIMA. We train a residual flow g (with full Jacobian) to maximise L. For evaluation, we compute (i) the KL divergence to the true data likelihood, as a measure of goodness of fit for the learnt flow model; and (ii) the mean correlation coefficient (MCC) between ground truth and reconstructed sources [37, 49]. We also introduce (iii) a nonlinear extension of the Amari distance [5] between the true mixing and the learnt unmixing, which is larger than or equal to zero, with equality iff. the learnt model belongs to the BSS equivalence class (Defn. 2.2) of the true solution, see Appendix E.5 for details.

Results. In Fig. 4 (Top), we show an example of the distortion induced by different spurious solutions for n = 2, and contrast it with a solution learnt using our proposed objective (rightmost plot). Visually, we find that the CIMA-regularised solution (with λ = 1) recovers the true sources most faithfully. Quantitative results for 50 learnt models for each λ ∈ {0.0, 0.5, 1.0} and n ∈ {5, 7} are summarised in Fig. 5 (see Appendix E for additional plots).
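As an implementation aside, here is a minimal sketch (ours) of the regularised objective above; `log_prob` and `unmix` are hypothetical stand-ins for the trained flow's log-density and unmixing g, and the Jacobian of f = g−1 is obtained from that of g via the inverse function theorem:

```python
import jax
import jax.numpy as jnp

def cima_penalty(unmix, x_batch):
    # Monte Carlo estimate of C_IMA(g^{-1}, p_y): at y = g(x), the Jacobian
    # of f = g^{-1} equals the inverse of J_g(x), so we evaluate the local
    # contrast (8) directly on J_g(x)^{-1} over a batch of observations.
    def local(x):
        Jg = jax.jacfwd(unmix)(x)
        Jf = jnp.linalg.inv(Jg)  # Jacobian of f = g^{-1} at y = g(x)
        _, logabsdet = jnp.linalg.slogdet(Jf)
        return jnp.sum(jnp.log(jnp.linalg.norm(Jf, axis=0))) - logabsdet
    return jax.vmap(local)(x_batch).mean()

def objective(log_prob, unmix, x_batch, lam=1.0):
    # L(g) = E_x[log p_g(x)] - lambda * C_IMA(g^{-1}, p_y)
    return jax.vmap(log_prob)(x_batch).mean() - lam * cima_penalty(unmix, x_batch)
```

This only illustrates the Monte Carlo estimate of the penalty over a data batch; the paper itself trains a residual flow on the objective.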
As indicated by the KL divergence values (left), most trained models achieve a good fit to the data across all values of λ.15 We observe that using CIMA (i.e., λ > 0) is beneficial for BSS, both in terms of our nonlinear Amari distance (center, lower is better) and MCC (right, higher is better), though we do not observe a substantial difference between λ = 0.5 and λ = 1.16

Summary: CIMA can be a useful learning signal to recover the true solution.
14 The latter possibly due to the increased difficulty of the learning task for larger n.
15 Models with n = 7 have high outlier KL values, seemingly less pronounced for nonzero values of λ.
16 In Appendix E.5, we also show that our method is superior to a linear ICA baseline, FastICA [36].

6 Discussion
Assumptions on the mixing function. Instead of relying on weak supervision in the form of auxiliary variables [28, 30, 37, 38, 41, 49], our IMA approach places additional constraints on the functional form of the mixing process. In a similar vein, the minimal nonlinear distortion principle [108] proposes to favor solutions that are as close to linear as possible. Another example is the post-nonlinear model [98, 109], which assumes an element-wise nonlinearity applied after a linear mixing. IMA is different in that it still allows for strongly nonlinear mixings (see, e.g., Fig. 3) provided that the columns of their Jacobians are (close to) orthogonal. In the related field of disentanglement [8, 58], a line of work that focuses on image generation with adversarial networks [24] similarly proposes to constrain the “generator” function via regularisation of its Jacobian [82] or Hessian [74], though mostly from an empirically-driven, rather than from an identifiability perspective as in the present work.

Towards identifiability with CIMA. The IMA principle rules out a large class of spurious solutions to nonlinear ICA. While we do not present a full identifiability result, our experiments show that CIMA can be used to recover the BSS equivalence class, suggesting that identifiability might indeed hold, possibly under additional assumptions—e.g., for conformal maps [39].

IMA and independence of cause and mechanism. While inspired by measures of independence of cause and mechanism as traditionally used for cause-effect inference [18, 45, 46, 110], we view the IMA principle as addressing a different question, in the sense that they evaluate independence between different elements of the causal model. Any nonlinear ICA solution that satisfies the IMA Principle 4.1 can be turned into one with uniform reconstructed sources—thus satisfying IGCI as argued in § 3—through composition with an element-wise transformation which, according to Prop. 4.6 (ii), leaves the CIMA value unchanged. Both IGCI (6) and IMA (7) can therefore be fulfilled simultaneously, while the former on its own is inconsequential for BSS as shown in Prop. 3.1.

BSS through algorithmic information. Algorithmic information theory has previously been proposed as a unifying framework for identifiable approaches to linear BSS [67, 68], in the sense that commonly-used contrast functions could, under suitable assumptions, be interpreted as proxies for the total complexity of the mixing and the reconstructed sources. However, to the best of our knowledge, the problem of specifying suitable proxies for the complexity of nonlinear mixing functions has not yet been addressed.
We conjecture that our framework could be linked to this view, based on the additional assumption of algorithmic independence of causal mechanisms [43], thus potentially representing an approach to nonlinear BSS by minimisation of algorithmic complexity.

ICA for causal inference & causality for ICA. Past advances in ICA have inspired novel causal discovery methods [50, 64, 92]. The present work constitutes, to the best of our knowledge, the first effort to use ideas from causality (specifically ICM) for BSS. An application of the IMA principle to causal discovery or causal representation learning [88] is an interesting direction for future work.

Conclusion. We introduce IMA, a path to nonlinear BSS inspired by concepts from causality. We postulate that the influences of different sources on the observed distribution should be approximately independent, and formalise this as an orthogonality condition on the columns of the Jacobian. We prove that this constraint is generally violated by well-known spurious nonlinear ICA solutions, and propose a regularised maximum likelihood approach which we empirically demonstrate to be effective in recovering the true solution. Our IMA principle holds exactly for orthogonal coordinate transformations, and is thus of potential interest for learning spatial representations [33], robot dynamics [63], or physics problems where orthogonal reference frames are common [66].

Acknowledgements
The authors thank Aapo Hyvärinen, Adrián Javaloy Bornás, Dominik Janzing, Giambattista Parascandolo, Giancarlo Fissore, Nasim Rahaman, Patrick Burauel, Patrik Reizinger, Paul Rubenstein, Shubhangi Ghosh, and the anonymous reviewers for helpful comments and discussions.

Funding Transparency Statement
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
1. What is the main contribution of the paper regarding independent causal mechanisms and blind source separation?
2. What are the strengths of the paper, particularly in its theoretical analysis and identifiability condition?
3. What are the weaknesses of the paper, especially regarding experimentation and comparison with other ICA approaches?
4. How does the reviewer assess the significance of the class of spurious solutions referenced in the paper?
5. Are there any questions or concerns regarding the paper's content, such as its relevance to real-world obstacles or the necessity of additional experiments?
Summary Of The Paper
The paper proposes applying the theory of independent causal mechanisms (ICM), from causal discovery, to nonlinear ICA approaches to the blind source separation problem (BSS). ICM is based on the idea that variables in a system are algorithmically independent or do not share information with each other. The paper shows, however, that ICM is not sufficient for BSS as spurious solutions exist which are not equivalent to the truth. To address these difficulties, the paper proposes a new condition for identifiability which places an orthogonality condition on the columns of the Jacobian and provides information-theoretic and geometric interpretations. The paper then shows theoretically that a large class of spurious solutions are not admitted under the IMA formulation. Experiments on toy examples demonstrate that the model is identifiable under these spurious solutions.

Review
In general the paper is well written (though a bit difficult to parse for a non-specialist) and the theoretical contributions are rigorous. I did not check proofs. I am familiar with the related causal inference literature but less so the ICA literature, so it is difficult for me to assess the significance of the class of spurious solutions referenced in the paper and whether these correspond to real-world obstacles or are more of general theoretical interest. If the authors could clarify this, it may influence my final score, as it seems this is sort of the crux of whether the paper constitutes a significant advance, since IMA is not compared to other ICA approaches in any other way. I found the experimental results section to be particularly weak as no baselines are included, only the spurious cases are investigated, and there are no results on real data. This leaves the reader unclear on several points: (i) whether IMA works similarly in practice to existing ICA approaches (despite the theoretical results), (ii) whether IMA is competitive enough with other ICA approaches on the non-spurious cases to justify the support for the spurious cases, (iii) whether the spurious cases are of practical significance in any real-world datasets. The paper would be significantly improved if other ICA baselines were included, as well as experiments on the non-spurious cases.

=== I increased my score to 7 based on the discussions and other reviews. See discussion below.
NIPS
Title Independent mechanism analysis, a new concept? Abstract Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation. 1 Introduction One of the goals of unsupervised learning is to uncover properties of the data generating process, such as latent structures giving rise to the observed data. Identifiability [55] formalises this desideratum: under suitable assumptions, a model learnt from observations should match the ground truth, up to well-defined ambiguities. Within representation learning, identifiability has been studied mostly in the context of independent component analysis (ICA) [17, 40], which assumes that the observed data x results from mixing unobserved independent random variables si referred to as sources. The aim is to recover the sources based on the observed mixtures alone, also termed blind source separation (BSS). A major obstacle to BSS is that, in the nonlinear case, independent component estimation does not necessarily correspond to recovering the true sources: it is possible to give counterexamples where the observations are transformed into components yi which are independent, yet still mixed with respect to the true sources si [20, 39, 98]. In other words, nonlinear ICA is not identifiable. In order to achieve identifiability, a growing body of research postulates additional supervision or structure in the data generating process, often in the form of auxiliary variables [28, 30, 37, 38, 41]. In the present work, we investigate a different route to identifiability by drawing inspiration from the field of causal inference [71, 78] which has provided useful insights for a number of machine learning tasks, including semi-supervised [87, 103], transfer [6, 23, 27, 31, 61, 72, 84, 85, 97, 102, 107], reinforcement [7, 14, 22, 26, 53, 59, 60, 106], and unsupervised [9, 10, 54, 70, 88, 91, 104, 105] learning. To this end, we interpret the ICA mixing as a causal process and apply the principle of independent causal mechanisms (ICM) which postulates that the generative process consists of independent modules which do not share information [43, 78, 87]. In this context, “independent” does not refer to statistical independence of random variables, but rather to the notion that the distributions and functions composing the generative process are chosen independently by Nature [43, 48]. 
While a formalisation of ICM [43, 57] in terms of algorithmic (Kolmogorov) complexity [51] exists, it is not computable, and hence applying ICM in practice requires assessing such non-statistical independence ∗Equal contribution. Code available at: https://github.com/lgresele/independent-mechanism-analysis 35th Conference on Neural Information Processing Systems (NeurIPS 2021). with suitable domain specific criteria [96]. The goal of our work is thus to constrain the nonlinear ICA problem, in particular the mixing function, via suitable ICM measures, thereby ruling out common counterexamples to identifiability which intuitively violate the ICM principle. Traditionally, ICM criteria have been developed for causal discovery, where both cause and effect are observed [18, 45, 46, 110]. They enforce an independence between (i) the cause (source) distribution and (ii) the conditional or mechanism (mixing function) generating the effect (observations), and thus rely on the fact that the observed cause distribution is informative. As we will show, this renders them insufficient for nonlinear ICA, since the constraints they impose are satisfied by common counterexamples to identifiability. With this in mind, we introduce a new way to characterise or refine the ICM principle for unsupervised representation learning tasks such as nonlinear ICA. Motivating example. To build intuition, we turn to a famous example of ICA and BSS: the cocktail party problem, illustrated in Fig. 1 (Left). Here, a number of conversations are happening in parallel, and the task is to recover the individual voices si from the recorded mixtures xi. The mixing or recording process f is primarily determined by the room acoustics and the locations at which microphones are placed. Moreover, each speaker influences the recording through their positioning in the room, and we may think of this influence as ∂f/∂si. Our independence postulate then amounts to stating that the speakers’ positions are not fine-tuned to the room acoustics and microphone placement, or to each other, i.e., the contributions ∂f/∂si should be independent (in a non-statistical sense).1 Our approach. We formalise this notion of independence between the contributions ∂f/∂si of each source to the mixing process (i.e., the columns of the Jacobian matrix Jf of partial derivatives) as an orthogonality condition, see Fig. 1 (Right). Specifically, the absolute value of the determinant |Jf |, which describes the local change in infinitesimal volume induced by mixing the sources, should factorise or decompose as the product of the norms of its columns. This can be seen as a decoupling of the local influence of each partial derivative in the pushforward operation (mixing function) mapping the source distribution to the observed one, and gives rise to a novel framework which we term independent mechanism analysis (IMA). IMA can be understood as a refinement of the ICM principle that applies the idea of independence of mechanisms at the level of the mixing function. Contributions. 
The structure and contributions of this paper can be summarised as follows: • we review well-known obstacles to identifiability of nonlinear ICA (§ 2.1), as well as existing ICM criteria (§ 2.2), and show that the latter do not sufficiently constrain nonlinear ICA (§ 3); • we propose a more suitable ICM criterion for unsupervised representation learning which gives rise to a new framework that we term independent mechanism analysis (IMA) (§ 4); we provide geometric and information-theoretic interpretations of IMA (§ 4.1), introduce an IMA contrast function which is invariant to the inherent ambiguities of nonlinear ICA (§ 4.2), and show that it rules out a large class of counterexamples and is consistent with existing identifiability results (§ 4.3); • we experimentally validate our theoretical claims and propose a regularised maximum-likelihood learning approach based on the IMA constrast which outperforms the unregularised baseline (§ 5); additionally, we introduce a method to learn nonlinear ICA solutions with triangular Jacobian and a metric to assess BSS which can be of independent interest for the nonlinear ICA community. 1For additional intuition and possible violations in the context of the cocktail party problem, see Appendix B.4. 2 Background and preliminaries Our work builds on and connects related literature from the fields of independent component analysis (§ 2.1) and causal inference (§ 2.2). We review the most important concepts below. 2.1 Independent component analysis (ICA) Assume the following data-generating process for independent component analysis (ICA) x = f(s) , ps(s) = ∏n i=1 psi(si) , (1) where the observed mixtures x ∈ Rn result from applying a smooth and invertible mixing function f : Rn → Rn to a set of unobserved, independent signals or sources s ∈ Rn with smooth, factorised density ps with connected support (see illustration Fig. 2b). The goal of ICA is to learn an unmixing function g : Rn → Rn such that y = g(x) has independent components. Blind source separation (BSS), on the other hand, aims to recover the true unmixing f−1 and thus the true sources s (up to tolerable ambiguities, see below). Whether performing ICA corresponds to solving BSS is related to the concept of identifiability of the model class. Intuitively, identifiability is the desirable property that all models which give rise to the same mixture distribution should be “equivalent” up to certain ambiguities, formally defined as follows. Definition 2.1 (∼-identifiability). Let F be the set of all smooth, invertible functions f : Rn → Rn, and P be the set of all smooth, factorised densities ps with connected support on Rn. Let M ⊆ F×P be a subspace of models and let ∼ be an equivalence relation on M. Denote by f∗ps the push-forward density of ps via f . Then the generative process (1) is said to be ∼-identifiable on M if ∀(f , ps), (f̃ , ps̃) ∈ M : f∗ps = f̃∗ps̃ =⇒ (f , ps) ∼ (f̃ , ps̃) . (2) If the true model belongs to the model class M, then ∼-identifiability ensures that any model in M learnt from (infinite amounts of) data will be ∼-equivalent to the true one. An example is linear ICA which is identifiable up to permutation and rescaling of the sources on the subspace MLIN of pairs of (i) invertible matrices (constraint on F) and (ii) factorizing densities for which at most one si is Gaussian (constraint on P) [17, 21, 93], see Appendix A for a more detailed account. In the nonlinear case (i.e., without constraints on F), identifiability is much more challenging. 
If si and sj are independent, then so are hi(si) and hj(sj) for any functions hi and hj . In addition to permutation-ambiguity, such element-wise h(s) = (h1(s1), ..., hn(sn)) can therefore not be resolved either. We thus define the desired form of identifiability for nonlinear BSS as follows. Definition 2.2 (∼BSS). The equivalence relation ∼BSS on F × P defined as in Defn. 2.1 is given by (f , ps) ∼BSS (f̃ , ps̃) ⇐⇒ ∃P,h s.t. (f , ps) = (f̃ ◦ h−1 ◦P−1, (P ◦ h)∗ps̃) (3) where P is a permutation and h(s) = (h1(s1), ..., hn(sn)) is an invertible, element-wise function. A fundamental obstacle—and a crucial difference to the linear problem—is that in the nonlinear case, different mixtures of si and sj can be independent, i.e., solving ICA is not equivalent to solving BSS. A prominent example of this is given by the Darmois construction [20, 39]. Definition 2.3 (Darmois construction). The Darmois construction gD : Rn → (0, 1)n is obtained by recursively applying the conditional cumulative distribution function (CDF) transform: gDi (x1:i) := P(Xi ≤ xi|x1:i−1) = ∫ xi −∞ p(x ′ i|x1:i−1)dx′i (i = 1, ..., n). (4) The resulting estimated sources yD = gD(x) are mutually-independent uniform r.v.s by construction, see Fig. 2a for an illustration. However, they need not be meaningfully related to the true sources s, and will, in general, still be a nonlinear mixing thereof [39].2 Denoting the mixing function corresponding to (4) by fD = (gD)−1 and the uniform density on (0, 1)n by pu, the Darmois solution (fD, pu) thus allows construction of counterexamples to ∼BSS-identifiability on F × P .3 Remark 2.4. gD has lower-triangular Jacobian, i.e., ∂gDi /∂xj = 0 for i < j. Since the order of the xi is arbitrary, applying gD after a permutation yields a different Darmois solution. Moreover, (4) yields independent components yD even if the sources si were not independent to begin with.4 2Consider, e.g., a mixing f with full Jacobian which yields a contradiction to Defn. 2.2, due to Remark 2.4. 3By applying a change of variables, we can see that the transformed variables in (4) are uniformly distributed in the open unit cube, thereby corresponding to independent components [69, § 2.2]. 4This has broad implications for unsupervised learning, as it shows that, for i.i.d. observations, not only factorised priors, but any unconditional prior is insufficient for identifiability (see, e.g., [49], Appendix D.2). Another well-known obstacle to identifiability are measure-preserving automorphisms (MPAs) of the source distribution ps: these are functions a which map the source space to itself without affecting its distribution, i.e., a∗ps = ps [39]. A particularly instructive class of MPAs is the following [49, 58]. Definition 2.5 (“Rotated-Gaussian” MPA). Let R ∈ O(n) be an orthogonal matrix, and denote by Fs(s) = (Fs1(s1), ..., Fsn(sn)) and Φ(z) = (Φ(z1), ...,Φ(zn)) the element-wise CDFs of a smooth, factorised density ps and of a Gaussian, respectively. Then the “rotated-Gaussian” MPA aR(ps) is aR(ps) = F −1 s ◦Φ ◦R ◦Φ−1 ◦ Fs . (5) aR(ps) first maps to the (rotationally invariant) standard isotropic Gaussian (via Φ−1 ◦ Fs), then applies a rotation, and finally maps back, without affecting the distribution of the estimated sources. Hence, if (f̃ , ps̃) is a valid solution, then so is (f̃ ◦ aR(ps̃), ps̃) for any R ∈ O(n). Unless R is a permutation, this constitutes another common counterexample to ∼BSS-identifiability on F × P . 
Identifiability results for nonlinear ICA have recently been established for settings where an auxiliary variable u (e.g., environment index, time stamp, class label) renders the sources conditionally independent [37, 38, 41, 49]. The assumption on ps in (1) is replaced with ps|u(s|u) = ∏n i=1 psi|u(si|u), thus restricting P in Defn. 2.1. In most cases, u is assumed to be observed, though [30] is a notable exception. Similar results exist given access to a second noisy view x̃ [28]. 2.2 Causal inference and the principle of independent causal mechanisms (ICM) Rather than relying only on additional assumptions on P (e.g., via auxiliary variables), we seek to further constrain (1) by also placing assumptions on the set F of mixing functions f . To this end, we draw inspiration from the field of causal inference [71, 78]. Of central importance to our approach is the Principle of Independent Causal Mechanisms (ICM) [43, 56, 87]. Principle 2.6 (ICM principle [78]). The causal generative process of a system’s variables is composed of autonomous modules that do not inform or influence each other. These “modules” are typically thought of as the conditional distributions of each variable given its direct causes. Intuitively, the principle then states that these causal conditionals correspond to independent mechanisms of nature which do not share information. Crucially, here “independent” does not refer to statistical independence of random variables, but rather to independence of the underlying distributions as algorithmic objects. For a bivariate system comprising a cause c and an effect e, this idea reduces to an independence of cause and mechanism, see Fig. 2c. One way to formalise ICM uses Kolmogorov complexity K(·) [51] as a measure of algorithmic information [43]. However, since Kolmogorov complexity is is not computable, using ICM in practice requires assessing Principle 2.6 with other suitable proxy criteria [9, 11, 34, 42, 45, 65, 75–78, 90, 110].5 Allowing for deterministic relations between cause (sources) and effect (observations), the criterion which is most closely related to the ICA setting in (1) is information-geometric causal inference (IGCI) [18, 46].6 IGCI assumes a nonlinear relation e = f(c) and formulates a notion of indepen- 5“This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on assumptions on the underlying distribution” [43]. 6For a similar criterion which assumes linearity [45, 110] and its relation to linear ICA, see Appendix B.1. dence between the cause distribution pc and the deterministic mechanism f (which we think of as a degenerate conditional pe|c) via the following condition (in practice, assumed to hold approximately), CIGCI(f , pc) := ∫ log |Jf (c)| pc(c)dc− ∫ log |Jf (c)| dc = 0 , (6) where (Jf (c))ij = ∂fi/∂cj(c) is the Jacobian matrix and | · | the absolute value of the determinant. CIGCI can be understood as the covariance between pc and log |Jf | (viewed as r.v.s on the unit cube w.r.t. the Lebesgue measure), so that CIGCI = 0 rules out a form of fine-tuning between pc and |Jf |. As its name suggests, IGCI can, from an information-geometric perspective, also be seen as an orthogonality condition between cause and mechanism in the space of probability distributions [46], see Appendix B.2, particularly eq. (19) for further details. 
3 Existing ICM measures are insufficient for nonlinear ICA Our aim is to use the ICM Principle 2.6 to further constrain the space of models M ⊆ F ×P and rule out common counterexamples to identifiability such as those presented in § 2.1. Intuitively, both the Darmois construction (4) and the rotated Gaussian MPA (5) give rise to “non-generic” solutions which should violate ICM: the former, (fD, pu), due the triangular Jacobian of fD (see Remark 2.4), meaning that each observation xi = fDi (y1:i) only depends on a subset of the inferred independent components y1:i, and the latter, (f ◦ aR(ps), ps), due to the dependence of f ◦ aR(ps) on ps (5). However, the ICM criteria described in § 2.2 were developed for the task of cause-effect inference where both variables are observed. In contrast, in this work, we consider an unsupervised representation learning task where only the effects (mixtures x) are observed, but the causes (sources s) are not. It turns out that this renders existing ICM criteria insufficient for BSS: they can easily be satisfied by spurious solutions which are not equivalent to the true one. We can show this for IGCI. Denote by MIGCI = {(f , ps) ∈ F × P : CIGCI(f , ps) = 0} ⊂ F × P the class of nonlinear ICA models satisfying IGCI (6). Then the following negative result holds. Proposition 3.1 (IGCI is insufficient for ∼BSS-identifiability). (1) is not ∼BSS-identifiable on MIGCI. Proof. IGCI (6) is satisfied when ps is uniform. However, the Darmois construction (4) yields uniform sources, see Fig. 2a. This means that (fD ◦ aR(pu), pu) ∈ MIGCI, so IGCI can be satisfied by solutions which do not separate the sources in the sense of Defn. 2.2, see footnote 2 and [39]. As illustrated in Fig. 2c, condition (6) and other similar criteria enforce a notion of “genericity” or “decoupling” of the mechanism w.r.t. the observed input distribution.7 They thus rely on the fact that the cause (source) distribution is informative, and are generally not invariant to reparametrisation of the cause variables. In the (nonlinear) ICA setting, on the other hand, the learnt source distribution may be fairly uninformative. This poses a challenge for existing ICM criteria since any mechanism is generic w.r.t. an uninformative (uniform) input distribution. 4 Independent mechanism analysis (IMA) As argued in § 3, enforcing independence between the input distribution and the mechanism (Fig. 2c), as existing ICM criteria do, is insufficient for ruling out spurious solutions to nonlinear ICA. We therefore propose a new ICM-inspired framework which is more suitable for BSS and which we term independent mechanism analysis (IMA).8 All proofs are provided in Appendix C. 4.1 Intuition behind IMA As motivated using the cocktail party example in § 1 and Fig. 1 (Left), our main idea is to enforce a notion of independence between the contributions or influences of the different sources si on the observations x = f(s) as illustrated in Fig. 2d—as opposed to between the source distribution and mixing function, cf. Fig. 2c. These contributions or influences are captured by the vectors of partial derivatives ∂f/∂si. IMA can thus be understood as a refinement of ICM at the level of the mixing f : in addition to statistically independent components si, we look for a mixing with contributions ∂f/∂si which are independent, in a non-statistical sense which we formalise as follows. Principle 4.1 (IMA). 
The mechanisms by which each source si influences the observed distribution, as captured by the partial derivatives ∂f/∂si, are independent of each other in the sense that for all s:

$$\log |J_f(s)| = \sum_{i=1}^{n} \log \left\| \frac{\partial f}{\partial s_i}(s) \right\| \qquad (7)$$

Geometric interpretation. Geometrically, the IMA principle can be understood as an orthogonality condition, as illustrated for n = 2 in Fig. 1 (Right). First, the vectors of partial derivatives ∂f/∂si, for which the IMA principle postulates independence, are the columns of Jf. |Jf| thus measures the volume of the n-dimensional parallelepiped spanned by these columns, as shown on the right. The product of their norms, on the other hand, corresponds to the volume of an n-dimensional box, or rectangular parallelepiped, with side lengths ∥∂f/∂si∥, as shown on the left. The two volumes are equal if and only if all columns ∂f/∂si of Jf are orthogonal. Note that (7) is trivially satisfied for n = 1, i.e., if there is no mixing, further highlighting its difference from ICM for causal discovery.

Independent influences and orthogonality. In a high-dimensional setting (large n), this orthogonality can be intuitively interpreted from the ICM perspective as Nature choosing the direction of the influence of each source component in the observation space independently and from an isotropic prior. Indeed, it can be shown that the scalar product of two independent isotropic random vectors in Rn vanishes as the dimensionality n increases (equivalently: two high-dimensional isotropic vectors are typically orthogonal). This property was previously exploited in other linear ICM-based criteria (see [44, Lemma 5] and [45, Lemma 1 & Thm. 1]).9 The principle in (7) can be seen as a constraint on the function space, enforcing such orthogonality between the columns of the Jacobian of f at all points in the source domain, thus approximating the high-dimensional behavior described above.10

Information-geometric interpretation and comparison to IGCI. The additive contribution of the sources' influences ∂f/∂si in (7) suggests their local decoupling at the level of the mechanism f. Note that IGCI (6), on the other hand, postulates a different type of decoupling: one between log |Jf| and ps. There, dependence between cause and mechanism can be conceived as a fine-tuning between the derivative of the mechanism and the input density. The IMA principle leads to a complementary, non-statistical measure of independence between the influences ∂f/∂si of the individual sources on the vector of observations. Both the IGCI and IMA postulates have an information-geometric interpretation related to the influence of ("non-statistically") independent modules on the observations: both lead to an additive decomposition of a KL-divergence between the effect distribution and a reference distribution. For IGCI, the independent modules correspond to the cause distribution and the mechanism mapping the cause to the effect (see (19) in Appendix B.2). For IMA, on the other hand, these are the influences of each source component on the observations in an interventional setting (under soft interventions on individual sources), as measured by the KL-divergences between the original and intervened distributions. See Appendix B.3, and especially (22), for a more detailed account.

7 In fact, many ICM criteria can be phrased as special cases of a unifying group-invariance framework [9].
8 The title of the present work is thus a reverence to Pierre Comon's seminal 1994 paper [17].
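The equality in (7) is easy to verify numerically. Below is a minimal JAX check (a sketch of ours, not the paper's code) using the polar-to-Cartesian transform discussed as Example 4.10 and Fig. 3 below: its Jacobian has orthogonal columns, so log|det J| coincides with the sum of the log column norms.

```python
import jax
import jax.numpy as jnp

def f_polar_to_cartesian(s):
    r, theta = s
    return jnp.array([r * jnp.cos(theta), r * jnp.sin(theta)])

s = jnp.array([1.3, 0.7])                # an arbitrary point (r, theta)
J = jax.jacfwd(f_polar_to_cartesian)(s)  # columns are df/dr and df/dtheta

lhs = jnp.linalg.slogdet(J)[1]                      # log|det J_f(s)|
rhs = jnp.sum(jnp.log(jnp.linalg.norm(J, axis=0)))  # sum_i log ||df/ds_i||

print(lhs, rhs)           # both equal log(r) ~ 0.2624: (7) holds here
print(J[:, 0] @ J[:, 1])  # ~ 0: the columns are indeed orthogonal
```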
We finally remark that while recent work based on the ICM principle has mostly used the term "mechanism" to refer to causal Markov kernels p(Xi|PAi) or structural equations [78], we employ it in line with the broader use of this concept in the philosophical literature.11 To highlight just two examples, [86] states that "Causal processes, causal interactions, and causal laws provide the mechanisms by which the world works; to understand why certain things happen, we need to see how they are produced by these mechanisms"; and [99] states that "Mechanisms are events that alter relations among some specified set of elements". Following this perspective, we argue that a causal mechanism can more generally denote any process that describes the way in which causes influence their effects: the partial derivative ∂f/∂si thus reflects a causal mechanism in the sense that it describes the infinitesimal changes in the observations x when an infinitesimal perturbation is applied to si.

4.2 Definition and useful properties of the IMA contrast

We now introduce a contrast function based on the IMA principle (7) and show that it possesses several desirable properties in the context of nonlinear ICA. First, we define a local contrast as the difference between the two integrands of (7) for a particular value of the sources s.

Definition 4.2 (Local IMA contrast). The local IMA contrast cIMA(f, s) of f at a point s is given by

$$c_{\mathrm{IMA}}(f, s) = \sum_{i=1}^{n} \log \left\| \frac{\partial f}{\partial s_i}(s) \right\| - \log |J_f(s)| \,. \qquad (8)$$

Remark 4.3. This corresponds to the left KL measure of diagonality [2] for $\sqrt{J_f(s)^\top J_f(s)}$.

9 This has also been used as a "leading intuition" [sic] to interpret IGCI in [46].
10 To provide additional intuition on how IMA differs from existing principles of independence of cause and mechanism, we give examples, both technical and pictorial, of violations of both in Appendix B.4.
11 See Table 1 in [62] for a long list of definitions from the literature.

The local IMA contrast cIMA(f, s) quantifies the extent to which the IMA principle is violated at a given point s. We summarise some of its properties in the following proposition.

Proposition 4.4 (Properties of cIMA(f, s)). The local IMA contrast cIMA(f, s) defined in (8) satisfies: (i) cIMA(f, s) ≥ 0, with equality if and only if all columns ∂f/∂si(s) of Jf(s) are orthogonal. (ii) cIMA(f, s) is invariant to left multiplication of Jf(s) by an orthogonal matrix and to right multiplication by permutation and diagonal matrices.

Property (i) formalises the geometric interpretation of IMA as an orthogonality condition on the columns of the Jacobian from § 4.1, and property (ii) intuitively states that changes of orthonormal basis and permutations or rescalings of the columns of Jf do not affect their orthogonality. Next, we define a global IMA contrast w.r.t. a source distribution ps as the expected local IMA contrast.

Definition 4.5 (Global IMA contrast). The global IMA contrast CIMA(f, ps) of f w.r.t. ps is given by

$$C_{\mathrm{IMA}}(f, p_s) = \mathbb{E}_{s \sim p_s}[c_{\mathrm{IMA}}(f, s)] = \int c_{\mathrm{IMA}}(f, s)\, p_s(s)\, ds \,. \qquad (9)$$

The global IMA contrast CIMA(f, ps) thus quantifies the extent to which the IMA principle is violated for a particular solution (f, ps) to the nonlinear ICA problem. We summarise its properties as follows.

Proposition 4.6 (Properties of CIMA(f, ps)). The global IMA contrast CIMA(f, ps) from (9) satisfies: (i) CIMA(f, ps) ≥ 0, with equality iff. Jf(s) = O(s)D(s) almost surely w.r.t.
ps, where O(s), D(s) ∈ Rn×n are orthogonal and diagonal matrices, respectively; (ii) CIMA(f, ps) = CIMA(f̃, ps̃) for any f̃ = f ◦ h−1 ◦ P−1 and s̃ = Ph(s), where P ∈ Rn×n is a permutation and h(s) = (h1(s1), ..., hn(sn)) an invertible element-wise function.

Figure 3: An example of a (non-conformal) orthogonal coordinate transformation from polar (left) to Cartesian (right) coordinates.

Property (i) is the distribution-level analogue of (i) in Prop. 4.4 and only allows for orthogonality violations on sets of measure zero w.r.t. ps. This means that CIMA can only be zero if f is an orthogonal coordinate transformation almost everywhere [19, 52, 66]; see Fig. 3 for an example. We particularly stress property (ii), as it precisely matches the inherent indeterminacy of nonlinear ICA: CIMA is blind to reparametrisation of the sources by permutation and element-wise transformation.

4.3 Theoretical analysis and justification of CIMA

We now show that, under suitable assumptions on the generative model (1), a large class of spurious solutions—such as those based on the Darmois construction (4) or measure-preserving automorphisms such as aR from (5), as described in § 2.1—exhibit nonzero IMA contrast. Denote the class of nonlinear ICA models satisfying (7) (IMA) by $M_{\mathrm{IMA}} = \{(f, p_s) \in F \times P : C_{\mathrm{IMA}}(f, p_s) = 0\} \subset F \times P$. Our first main theoretical result is that, under mild assumptions on the observations, Darmois solutions will have strictly positive CIMA, making them distinguishable from those in MIMA.

Theorem 4.7. Assume the data generating process in (1) and assume that xi ⊥̸⊥ xj for some i ̸= j. Then any Darmois solution (fD, pu) based on gD as defined in (4) satisfies CIMA(fD, pu) > 0. Thus a solution satisfying CIMA(f, ps) = 0 can be distinguished from (fD, pu) based on the contrast CIMA.

The proof is based on the fact that the Jacobian of gD is triangular (see Remark 2.4) and on the specific form of (4). A specific example of a mixing process satisfying the IMA assumption is the case where f is a conformal (angle-preserving) map.

Definition 4.8 (Conformal map). A smooth map f : Rn → Rn is conformal if Jf(s) = O(s)λ(s) ∀s, where λ : Rn → R is a scalar field and O(s) ∈ O(n) is an orthogonal matrix.

Corollary 4.9. Under the assumptions of Thm. 4.7, if additionally f is a conformal map, then (f, ps) ∈ MIMA for any ps ∈ P due to Prop. 4.6 (i), see Defn. 4.8. Based on Thm. 4.7, (f, ps) is thus distinguishable from Darmois solutions (fD, pu).

This is consistent with a result that proves identifiability of conformal maps for n = 2 and conjectures it in general [39].12 However, conformal maps are only a small subset of all maps for which CIMA = 0, as is apparent from the more flexible condition of Prop. 4.6 (i) compared to the stricter Defn. 4.8.

12 Note that Corollary 4.9 holds for any dimensionality n.

Example 4.10 (Polar to Cartesian coordinate transform). Consider the non-conformal transformation from polar to Cartesian coordinates (see Fig. 3), defined as (x, y) = f(r, θ) := (r cos(θ), r sin(θ)) with independent sources s = (r, θ), with r ∼ U(0, R) and θ ∼ U(0, 2π).13 Then CIMA(f, ps) = 0 and CIMA(fD, pu) > 0 for any Darmois solution (fD, pu); see Appendix D for details.

Finally, for the case in which the true mixing is linear, we obtain the following result.

Corollary 4.11.
Consider a linear ICA model, x = As, with $\mathbb{E}[ss^\top] = I$, and A ∈ O(n) an orthogonal, non-trivial mixing matrix, i.e., not the product of a diagonal and a permutation matrix DP. If at most one of the si is Gaussian, then CIMA(A, ps) = 0 and CIMA(fD, pu) > 0.

In a "blind" setting, we may not know a priori whether the true mixing is linear or not, and may thus choose to learn a nonlinear unmixing. Corollary 4.11 shows that, in this case, Darmois solutions are still distinguishable from the true mixing via CIMA. Note that, unlike in Corollary 4.9, the assumption that xi ⊥̸⊥ xj for some i ̸= j is not required for Corollary 4.11. In fact, due to Theorem 11 of [17], it follows from the assumed linear ICA model with non-Gaussian sources and the fact that the mixing matrix is not the product of a diagonal and a permutation matrix (see also Appendix A).

Having shown that the IMA principle allows us to distinguish a class of models (including, but not limited to, conformal maps) from Darmois solutions, we next turn to a second well-known counterexample to identifiability: the "rotated-Gaussian" MPA aR(ps) (5) from Defn. 2.5. Our second main theoretical result is that, under suitable assumptions, this class of MPAs can also be ruled out for "non-trivial" R.

Theorem 4.12. Let (f, ps) ∈ MIMA and assume that f is a conformal map. Given R ∈ O(n), assume additionally that there exists at least one non-Gaussian si whose associated canonical basis vector ei is not transformed by R−1 = R⊤ into another canonical basis vector ej. Then CIMA(f ◦ aR(ps), ps) > 0.

Thm. 4.12 states that for conformal maps, applying the aR(ps) transformation at the level of the sources leads to an increase in CIMA, except for very specific rotations R that are "fine-tuned" to ps in the sense that they permute all non-Gaussian sources si with another sj. Interestingly, as in the linear case, non-Gaussianity again plays an important role in the proof of Thm. 4.12.

5 Experiments

Our theoretical results from § 4 suggest that CIMA is a promising contrast function for nonlinear blind source separation. We test this empirically by evaluating the CIMA of spurious nonlinear ICA solutions (§ 5.1), and using it as a learning objective to recover the true solution (§ 5.2). We sample the ground truth sources from a uniform distribution in [0, 1]n; the reconstructed sources are also mapped to the uniform hypercube as a reference measure via the CDF transform. Unless otherwise specified, the ground truth mixing f is a Möbius transformation [81] (i.e., a conformal map) with randomly sampled parameters, thereby satisfying Principle 4.1. In all of our experiments, we use JAX [12] and Distrax [13]. For additional technical details, equations and plots, see Appendix E. The code to reproduce our experiments is available at this link.

13 For different ps, (x, y) can be made to have independent Gaussian components ([98], II.B), and CIMA-identifiability is lost; this shows that the assumption of Thm. 4.7 that xi ⊥̸⊥ xj for some i ̸= j is crucial.

5.1 Numerical evaluation of the CIMA contrast for spurious nonlinear ICA solutions

Learning the Darmois construction. To learn the Darmois construction from data, we use normalising flows, see [35, 69]. Since Darmois solutions have triangular Jacobian (Remark 2.4), we use an architecture based on residual flows [16] which we constrain such that the Jacobian of the full model is triangular. This yields an expressive model which we train effectively via maximum likelihood.

CIMA of Darmois solutions.
To check whether Darmois solutions (learnt from finite data) can be distinguished from the true one, as predicted by Thm. 4.7, we generate 1000 random mixing functions for n = 2, compute the CIMA values of the learnt solutions, and find that all values are indeed significantly larger than zero, see Fig. 4 (a). The same holds for higher dimensions, see Fig. 4 (b) for results with 50 random mixings for n ∈ {2, 3, 5, 10}: with higher dimensionality, both the mean and variance of the CIMA distribution for the learnt Darmois solutions generally attain higher values.14 We confirmed these findings for mappings which are not conformal, while still satisfying (7), in Appendix E.5.

CIMA of MPAs. We also investigate the effect on CIMA of applying an MPA aR(·) from (5) to the true solution or a learnt Darmois solution. Results for n = 2 for different rotation matrices R (parametrised by the angle θ) are shown in Fig. 4 (c). As expected, CIMA is periodic in θ and vanishes for the true solution (blue) at multiples of π/2, i.e., when R is a permutation matrix, as predicted by Thm. 4.12. For the learnt Darmois solution (red, dashed), CIMA remains larger than zero.

CIMA values for random MLPs. Lastly, we study the behavior of spurious solutions based on the Darmois construction under deviations from our assumption of CIMA = 0 for the true mixing function. To this end, we use invertible MLPs with orthogonal weight initialisation and leaky_tanh activations [29] as mixing functions; the more layers L are added to the mixing MLP, the larger a deviation from our assumptions is expected. We compare the true mixing and learnt Darmois solutions over 20 realisations for each L ∈ {2, 3, 4}, with n = 5. Results are shown in Fig. 4 (d): the CIMA of the mixing MLPs grows with L; still, that of the Darmois solution is typically higher.

Summary. We verify that spurious solutions can be distinguished from the true one based on CIMA.

5.2 Learning nonlinear ICA solutions with CIMA-regularised maximum likelihood

Experimental setup. To use CIMA as a learning signal, we consider a regularised maximum-likelihood approach with the following objective: $\mathcal{L}(g) = \mathbb{E}_x[\log p_g(x)] - \lambda\, C_{\mathrm{IMA}}(g^{-1}, p_y)$, where g denotes the learnt unmixing, y = g(x) the reconstructed sources, and λ ≥ 0 a Lagrange multiplier. For λ = 0, this corresponds to standard maximum likelihood estimation, whereas for λ > 0, L lower-bounds the likelihood and recovers it exactly iff. (g−1, py) ∈ MIMA. We train a residual flow g (with full Jacobian) to maximise L. For evaluation, we compute (i) the KL divergence to the true data likelihood, as a measure of goodness of fit for the learnt flow model; and (ii) the mean correlation coefficient (MCC) between ground truth and reconstructed sources [37, 49]. We also introduce (iii) a nonlinear extension of the Amari distance [5] between the true mixing and the learnt unmixing, which is larger than or equal to zero, with equality iff. the learnt model belongs to the BSS equivalence class (Defn. 2.2) of the true solution; see Appendix E.5 for details.

Results. In Fig. 4 (Top), we show an example of the distortion induced by different spurious solutions for n = 2, and contrast it with a solution learnt using our proposed objective (rightmost plot). Visually, we find that the CIMA-regularised solution (with λ = 1) recovers the true sources most faithfully. Quantitative results for 50 learnt models for each λ ∈ {0.0, 0.5, 1.0} and n ∈ {5, 7} are summarised in Fig. 5 (see Appendix E for additional plots).
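The regularised objective above can be written schematically as follows; this is a sketch of ours, not the authors' residual-flow implementation, and assumes a learnt unmixing g and its model log-density log_prob_g are supplied by some flow library (both names are placeholders). It uses that with f = g⁻¹ and s = g(x), the Jacobian of the mixing is the inverse of the Jacobian of the unmixing.

```python
import jax
import jax.numpy as jnp

def c_ima_of_unmixing(g, x):
    """Local IMA contrast (8) of the mixing f = g^{-1}, evaluated at
    s = g(x), using J_f(g(x)) = J_g(x)^{-1}."""
    Jg = jax.jacfwd(g)(x)
    Jf = jnp.linalg.inv(Jg)                 # Jacobian of the mixing
    col_norms = jnp.linalg.norm(Jf, axis=0)
    return jnp.sum(jnp.log(col_norms)) - jnp.linalg.slogdet(Jf)[1]

def objective(g, log_prob_g, xs, lam=1.0):
    """Batch estimate of L(g) = E_x[log p_g(x)] - lam * C_IMA(g^{-1}, p_y),
    to be maximised; xs is a batch of observed mixtures."""
    log_lik = jnp.mean(jax.vmap(log_prob_g)(xs))
    c_ima = jnp.mean(jax.vmap(lambda x: c_ima_of_unmixing(g, x))(xs))
    return log_lik - lam * c_ima
```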
As indicated by the KL divergence values (left), most trained models achieve a good fit to the data across all values of λ.15 We observe that using CIMA (i.e., λ > 0) is beneficial for BSS, both in terms of our nonlinear Amari distance (center, lower is better) and MCC (right, higher is better), though we do not observe a substantial difference between λ = 0.5 and λ = 1.16

Summary: CIMA can be a useful learning signal to recover the true solution.

14 The latter is possibly due to the increased difficulty of the learning task for larger n.
15 Models with n = 7 have high outlier KL values, seemingly less pronounced for nonzero values of λ.
16 In Appendix E.5, we also show that our method is superior to a linear ICA baseline, FastICA [36].

6 Discussion

Assumptions on the mixing function. Instead of relying on weak supervision in the form of auxiliary variables [28, 30, 37, 38, 41, 49], our IMA approach places additional constraints on the functional form of the mixing process. In a similar vein, the minimal nonlinear distortion principle [108] proposes to favor solutions that are as close to linear as possible. Another example is the post-nonlinear model [98, 109], which assumes an element-wise nonlinearity applied after a linear mixing. IMA is different in that it still allows for strongly nonlinear mixings (see, e.g., Fig. 3), provided that the columns of their Jacobians are (close to) orthogonal. In the related field of disentanglement [8, 58], a line of work that focuses on image generation with adversarial networks [24] similarly proposes to constrain the "generator" function via regularisation of its Jacobian [82] or Hessian [74], though mostly from an empirically driven perspective, rather than from an identifiability perspective as in the present work.

Towards identifiability with CIMA. The IMA principle rules out a large class of spurious solutions to nonlinear ICA. While we do not present a full identifiability result, our experiments show that CIMA can be used to recover the BSS equivalence class, suggesting that identifiability might indeed hold, possibly under additional assumptions—e.g., for conformal maps [39].

IMA and independence of cause and mechanism. While inspired by measures of independence of cause and mechanism as traditionally used for cause-effect inference [18, 45, 46, 110], we view the IMA principle as addressing a different question, in the sense that they evaluate independence between different elements of the causal model. Any nonlinear ICA solution that satisfies the IMA Principle 4.1 can be turned into one with uniform reconstructed sources—thus satisfying IGCI as argued in § 3—through composition with an element-wise transformation which, according to Prop. 4.6 (ii), leaves the CIMA value unchanged. Both IGCI (6) and IMA (7) can therefore be fulfilled simultaneously, while the former on its own is inconsequential for BSS, as shown in Prop. 3.1.

BSS through algorithmic information. Algorithmic information theory has previously been proposed as a unifying framework for identifiable approaches to linear BSS [67, 68], in the sense that commonly used contrast functions could, under suitable assumptions, be interpreted as proxies for the total complexity of the mixing and the reconstructed sources. However, to the best of our knowledge, the problem of specifying suitable proxies for the complexity of nonlinear mixing functions has not yet been addressed.
We conjecture that our framework could be linked to this view, based on the additional assumption of algorithmic independence of causal mechanisms [43], thus potentially representing an approach to nonlinear BSS by minimisation of algorithmic complexity.

ICA for causal inference & causality for ICA. Past advances in ICA have inspired novel causal discovery methods [50, 64, 92]. The present work constitutes, to the best of our knowledge, the first effort to use ideas from causality (specifically ICM) for BSS. An application of the IMA principle to causal discovery or causal representation learning [88] is an interesting direction for future work.

Conclusion. We introduce IMA, a path to nonlinear BSS inspired by concepts from causality. We postulate that the influences of different sources on the observed distribution should be approximately independent, and formalise this as an orthogonality condition on the columns of the Jacobian. We prove that this constraint is generally violated by well-known spurious nonlinear ICA solutions, and propose a regularised maximum likelihood approach which we empirically demonstrate to be effective in recovering the true solution. Our IMA principle holds exactly for orthogonal coordinate transformations, and is thus of potential interest for learning spatial representations [33], robot dynamics [63], or physics problems where orthogonal reference frames are common [66].

Acknowledgements

The authors thank Aapo Hyvärinen, Adrián Javaloy Bornás, Dominik Janzing, Giambattista Parascandolo, Giancarlo Fissore, Nasim Rahaman, Patrick Burauel, Patrik Reizinger, Paul Rubenstein, Shubhangi Ghosh, and the anonymous reviewers for helpful comments and discussions.

Funding Transparency Statement

This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
1. What is the main contribution of the paper regarding nonlinear blind source separation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its connection to independent causal mechanisms and algorithmic independence?
3. Do you have any concerns or suggestions regarding the framing and presentation of the paper, including terminology usage and clarity?
4. How does the reviewer assess the significance and potential impact of the work, especially considering its limitations and lack of real-world experiments?
Summary Of The Paper Review
Summary Of The Paper
This work proposes a new regularisation scheme to improve identifiability in the nonlinear blind source separation problem (nonlinear ICA), inspired by the notion of independence of mechanisms coming from causal inference. The idea is to consider only mixing functions whose Jacobian matrix has orthogonal columns everywhere. The authors show how this extra assumption allows one to exclude two classical types of degeneracies in nonlinear ICA, although without proving identifiability. The theoretical claims are validated by synthetic experiments.

Review
Originality: Recent works on nonlinear ICA have focused on imposing structure on the distribution of the sources via auxiliary variables. Instead, this work proposes to put restrictions on the mixing function f. Although different assumptions on f have been explored in the past (cited by the authors), their independence of mechanism assumption (IMA) hasn't (to the best of my knowledge).

Quality: I have not reviewed the proofs; however, the various theoretical claims seem reasonable and are clearly presented. I was happy to read about new findings in that interesting direction. However, I have some concerns with the overall framing of the paper. The inspiration for the authors' approach is the principle of independent causal mechanisms (ICM). However, it is not clear to me what the link is between ICM and the orthogonality of the columns of the Jacobian of the mixing function. The authors seem to suggest a connection, but it felt somewhat vague. ICM is about algorithmic independence of mechanisms in a structural causal model (SCM), and here we regularize for orthogonality of the columns of the Jacobian of the mixing function. In causality, the term "mechanism" is typically reserved for the conditional distribution of a variable given its direct causal parents (or its associated structural equation). In this work, "mechanism" is used to denote a partial derivative of f w.r.t. some source s_i. This is an unnecessary and confusing clash of terminology in my opinion. Consequently, I do not think independent mechanism analysis is a good name for this contribution. The authors also seem to suggest a connection between the orthogonality of partial derivatives and their algorithmic independence (lines 192-200), but again, this is a bit vague and unjustified (or maybe I missed an important point). The above points can be summarized as follows: I am not convinced that ICM and algorithmic independence is the correct language to describe the authors' contribution, unless more justification is added to the paper.

Section 3 shows how "regularizing" for algorithmic independence between p(s) and f via a proxy (IGCI) is not sufficient for identifiability. It is always interesting to have negative results, but, following up on the above paragraph, the connection to ICM feels a bit forced. Instead of showing identifiability for the model class they consider, the authors show how it excludes two types of degeneracies, namely the Darmois construction and the "rotated-Gaussian measure preserving automorphisms". This is not sufficient to show the model class is identifiable, although it is a good step in that direction. The authors acknowledge this fact at the very end of the paper (in the discussion section); I believe this should've been stressed earlier in the paper as well as near the theorem statements, since I believe it is not a typical thing to do in this literature (usually, there's a theorem showing identifiability).
In Corollary 4.11, it would be nice to have an intuition for why we need non-Gaussianity: can't we just apply Corollary 4.9 directly (since A is a conformal map)? What am I missing? It would be good to have experiments with non-conformal maps satisfying the IMA assumption (I didn't see any).

Minor: It would be great to have an intuition for why the Darmois construction yields independent sources. The presentation of information-geometric causal inference is very fast and I could not understand its meaning. Line 332: It would be good to have a brief description of the MCC and the Amari metrics.

Clarity: I enjoyed reading the paper overall.

Significance: I believe this work has average significance, given how strong the conditions are on the mixing function and the fact that we do not know if it holds in practice (even partially) due to the lack of real-world experiments. That being said, I believe imposing constraints on the mixing function to improve identifiability is an interesting direction with potential significance.

After reading the authors' rebuttal, I decided to raise my score from 5 to 6. See my comment above.
NIPS
Title
Independent mechanism analysis, a new concept?

Abstract
Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.

1 Introduction
One of the goals of unsupervised learning is to uncover properties of the data generating process, such as latent structures giving rise to the observed data. Identifiability [55] formalises this desideratum: under suitable assumptions, a model learnt from observations should match the ground truth, up to well-defined ambiguities. Within representation learning, identifiability has been studied mostly in the context of independent component analysis (ICA) [17, 40], which assumes that the observed data x results from mixing unobserved independent random variables si referred to as sources. The aim is to recover the sources based on the observed mixtures alone, also termed blind source separation (BSS). A major obstacle to BSS is that, in the nonlinear case, independent component estimation does not necessarily correspond to recovering the true sources: it is possible to give counterexamples where the observations are transformed into components yi which are independent, yet still mixed with respect to the true sources si [20, 39, 98]. In other words, nonlinear ICA is not identifiable. In order to achieve identifiability, a growing body of research postulates additional supervision or structure in the data generating process, often in the form of auxiliary variables [28, 30, 37, 38, 41]. In the present work, we investigate a different route to identifiability by drawing inspiration from the field of causal inference [71, 78], which has provided useful insights for a number of machine learning tasks, including semi-supervised [87, 103], transfer [6, 23, 27, 31, 61, 72, 84, 85, 97, 102, 107], reinforcement [7, 14, 22, 26, 53, 59, 60, 106], and unsupervised [9, 10, 54, 70, 88, 91, 104, 105] learning. To this end, we interpret the ICA mixing as a causal process and apply the principle of independent causal mechanisms (ICM), which postulates that the generative process consists of independent modules which do not share information [43, 78, 87]. In this context, "independent" does not refer to statistical independence of random variables, but rather to the notion that the distributions and functions composing the generative process are chosen independently by Nature [43, 48].
While a formalisation of ICM [43, 57] in terms of algorithmic (Kolmogorov) complexity [51] exists, it is not computable, and hence applying ICM in practice requires assessing such non-statistical independence with suitable domain-specific criteria [96]. The goal of our work is thus to constrain the nonlinear ICA problem, in particular the mixing function, via suitable ICM measures, thereby ruling out common counterexamples to identifiability which intuitively violate the ICM principle. Traditionally, ICM criteria have been developed for causal discovery, where both cause and effect are observed [18, 45, 46, 110]. They enforce an independence between (i) the cause (source) distribution and (ii) the conditional or mechanism (mixing function) generating the effect (observations), and thus rely on the fact that the observed cause distribution is informative. As we will show, this renders them insufficient for nonlinear ICA, since the constraints they impose are satisfied by common counterexamples to identifiability. With this in mind, we introduce a new way to characterise or refine the ICM principle for unsupervised representation learning tasks such as nonlinear ICA.

∗ Equal contribution. Code available at: https://github.com/lgresele/independent-mechanism-analysis

Motivating example. To build intuition, we turn to a famous example of ICA and BSS: the cocktail party problem, illustrated in Fig. 1 (Left). Here, a number of conversations are happening in parallel, and the task is to recover the individual voices si from the recorded mixtures xi. The mixing or recording process f is primarily determined by the room acoustics and the locations at which microphones are placed. Moreover, each speaker influences the recording through their positioning in the room, and we may think of this influence as ∂f/∂si. Our independence postulate then amounts to stating that the speakers' positions are not fine-tuned to the room acoustics and microphone placement, or to each other, i.e., the contributions ∂f/∂si should be independent (in a non-statistical sense).1

Our approach. We formalise this notion of independence between the contributions ∂f/∂si of each source to the mixing process (i.e., the columns of the Jacobian matrix Jf of partial derivatives) as an orthogonality condition, see Fig. 1 (Right). Specifically, the absolute value of the determinant |Jf|, which describes the local change in infinitesimal volume induced by mixing the sources, should factorise or decompose as the product of the norms of its columns. This can be seen as a decoupling of the local influence of each partial derivative in the pushforward operation (mixing function) mapping the source distribution to the observed one, and gives rise to a novel framework which we term independent mechanism analysis (IMA). IMA can be understood as a refinement of the ICM principle that applies the idea of independence of mechanisms at the level of the mixing function.

Contributions.
The structure and contributions of this paper can be summarised as follows:
• we review well-known obstacles to identifiability of nonlinear ICA (§ 2.1), as well as existing ICM criteria (§ 2.2), and show that the latter do not sufficiently constrain nonlinear ICA (§ 3);
• we propose a more suitable ICM criterion for unsupervised representation learning which gives rise to a new framework that we term independent mechanism analysis (IMA) (§ 4); we provide geometric and information-theoretic interpretations of IMA (§ 4.1), introduce an IMA contrast function which is invariant to the inherent ambiguities of nonlinear ICA (§ 4.2), and show that it rules out a large class of counterexamples and is consistent with existing identifiability results (§ 4.3);
• we experimentally validate our theoretical claims and propose a regularised maximum-likelihood learning approach based on the IMA contrast which outperforms the unregularised baseline (§ 5); additionally, we introduce a method to learn nonlinear ICA solutions with triangular Jacobian and a metric to assess BSS, which can be of independent interest for the nonlinear ICA community.

1 For additional intuition and possible violations in the context of the cocktail party problem, see Appendix B.4.

2 Background and preliminaries

Our work builds on and connects related literature from the fields of independent component analysis (§ 2.1) and causal inference (§ 2.2). We review the most important concepts below.

2.1 Independent component analysis (ICA)

Assume the following data-generating process for independent component analysis (ICA):

$$x = f(s) \,, \qquad p_s(s) = \prod_{i=1}^{n} p_{s_i}(s_i) \,, \qquad (1)$$

where the observed mixtures x ∈ Rn result from applying a smooth and invertible mixing function f : Rn → Rn to a set of unobserved, independent signals or sources s ∈ Rn with smooth, factorised density ps with connected support (see the illustration in Fig. 2b). The goal of ICA is to learn an unmixing function g : Rn → Rn such that y = g(x) has independent components. Blind source separation (BSS), on the other hand, aims to recover the true unmixing f−1 and thus the true sources s (up to tolerable ambiguities, see below). Whether performing ICA corresponds to solving BSS is related to the concept of identifiability of the model class. Intuitively, identifiability is the desirable property that all models which give rise to the same mixture distribution should be "equivalent" up to certain ambiguities, formally defined as follows.

Definition 2.1 (∼-identifiability). Let F be the set of all smooth, invertible functions f : Rn → Rn, and P be the set of all smooth, factorised densities ps with connected support on Rn. Let M ⊆ F × P be a subspace of models and let ∼ be an equivalence relation on M. Denote by f∗ps the push-forward density of ps via f. Then the generative process (1) is said to be ∼-identifiable on M if

$$\forall (f, p_s), (\tilde{f}, p_{\tilde{s}}) \in M: \quad f_* p_s = \tilde{f}_* p_{\tilde{s}} \implies (f, p_s) \sim (\tilde{f}, p_{\tilde{s}}) \,. \qquad (2)$$

If the true model belongs to the model class M, then ∼-identifiability ensures that any model in M learnt from (infinite amounts of) data will be ∼-equivalent to the true one. An example is linear ICA, which is identifiable up to permutation and rescaling of the sources on the subspace MLIN of pairs of (i) invertible matrices (constraint on F) and (ii) factorised densities for which at most one si is Gaussian (constraint on P) [17, 21, 93]; see Appendix A for a more detailed account. In the nonlinear case (i.e., without constraints on F), identifiability is much more challenging.
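For concreteness, a toy instance of the generative process (1) can be sampled as follows; this is a sketch of ours, and both the source distribution and the particular f below are illustrative choices, not those used in the paper.

```python
import jax
import jax.numpy as jnp

# A toy instance of (1): independent sources passed through a smooth,
# invertible mixing (an element-wise monotone map followed by a rotation).
theta = 0.5
R = jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
               [jnp.sin(theta),  jnp.cos(theta)]])

def f(s):
    h = s + 0.3 * jnp.tanh(s)   # element-wise, strictly increasing (invertible)
    return R @ h                # composed with a rotation, so f is invertible

key = jax.random.PRNGKey(0)
s = jax.random.uniform(key, (1000, 2))   # s ~ prod_i p_{s_i}, here U(0,1)^2
x = jax.vmap(f)(s)                       # observed mixtures x = f(s)
```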
If si and sj are independent, then so are hi(si) and hj(sj) for any functions hi and hj. In addition to the permutation ambiguity, such element-wise h(s) = (h1(s1), ..., hn(sn)) can therefore not be resolved either. We thus define the desired form of identifiability for nonlinear BSS as follows.

Definition 2.2 (∼BSS). The equivalence relation ∼BSS on F × P defined as in Defn. 2.1 is given by

$$(f, p_s) \sim_{\mathrm{BSS}} (\tilde{f}, p_{\tilde{s}}) \iff \exists\, P, h \ \text{s.t.} \ (f, p_s) = (\tilde{f} \circ h^{-1} \circ P^{-1}, (P \circ h)_* p_{\tilde{s}}) \,, \qquad (3)$$

where P is a permutation and h(s) = (h1(s1), ..., hn(sn)) is an invertible, element-wise function.

A fundamental obstacle—and a crucial difference to the linear problem—is that in the nonlinear case, different mixtures of si and sj can be independent, i.e., solving ICA is not equivalent to solving BSS. A prominent example of this is given by the Darmois construction [20, 39].

Definition 2.3 (Darmois construction). The Darmois construction gD : Rn → (0, 1)n is obtained by recursively applying the conditional cumulative distribution function (CDF) transform:

$$g^D_i(x_{1:i}) := \mathbb{P}(X_i \leq x_i \,|\, x_{1:i-1}) = \int_{-\infty}^{x_i} p(x'_i \,|\, x_{1:i-1})\, dx'_i \qquad (i = 1, ..., n). \qquad (4)$$

The resulting estimated sources yD = gD(x) are mutually independent uniform r.v.s by construction, see Fig. 2a for an illustration. However, they need not be meaningfully related to the true sources s, and will, in general, still be a nonlinear mixing thereof [39].2 Denoting the mixing function corresponding to (4) by fD = (gD)−1 and the uniform density on (0, 1)n by pu, the Darmois solution (fD, pu) thus allows construction of counterexamples to ∼BSS-identifiability on F × P.3

Remark 2.4. gD has lower-triangular Jacobian, i.e., ∂gDi/∂xj = 0 for i < j. Since the order of the xi is arbitrary, applying gD after a permutation yields a different Darmois solution. Moreover, (4) yields independent components yD even if the sources si were not independent to begin with.4

2 Consider, e.g., a mixing f with full Jacobian, which yields a contradiction to Defn. 2.2 due to Remark 2.4.
3 By applying a change of variables, we can see that the transformed variables in (4) are uniformly distributed in the open unit cube, thereby corresponding to independent components [69, § 2.2].
4 This has broad implications for unsupervised learning, as it shows that, for i.i.d. observations, not only factorised priors, but any unconditional prior is insufficient for identifiability (see, e.g., [49], Appendix D.2).

Another well-known obstacle to identifiability is given by measure-preserving automorphisms (MPAs) of the source distribution ps: these are functions a which map the source space to itself without affecting its distribution, i.e., a∗ps = ps [39]. A particularly instructive class of MPAs is the following [49, 58].

Definition 2.5 ("Rotated-Gaussian" MPA). Let R ∈ O(n) be an orthogonal matrix, and denote by Fs(s) = (Fs1(s1), ..., Fsn(sn)) and Φ(z) = (Φ(z1), ..., Φ(zn)) the element-wise CDFs of a smooth, factorised density ps and of a standard Gaussian, respectively. Then the "rotated-Gaussian" MPA aR(ps) is

$$a_R(p_s) = F_s^{-1} \circ \Phi \circ R \circ \Phi^{-1} \circ F_s \,. \qquad (5)$$

aR(ps) first maps to the (rotationally invariant) standard isotropic Gaussian (via Φ−1 ◦ Fs), then applies a rotation, and finally maps back, without affecting the distribution of the estimated sources. Hence, if (f̃, ps̃) is a valid solution, then so is (f̃ ◦ aR(ps̃), ps̃) for any R ∈ O(n). Unless R is a permutation, this constitutes another common counterexample to ∼BSS-identifiability on F × P.
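Both counterexamples are easy to instantiate numerically for n = 2. The sketch below (ours, not the paper's code) assumes a correlated bivariate Gaussian mixture, for which the conditional CDFs in (4) are available in closed form; for the uniform Darmois outputs, the MPA (5) reduces to Φ ∘ R ∘ Φ⁻¹, since the uniform CDF on (0, 1) is the identity.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import ndtr, ndtri  # standard normal CDF and its inverse

# Darmois construction (4) for x = (x1, x2) jointly Gaussian with zero
# mean, unit variances and correlation rho: conditional CDFs are Gaussian.
rho = 0.8

def g_darmois(x):
    y1 = ndtr(x[0])                                        # P(X1 <= x1)
    y2 = ndtr((x[1] - rho * x[0]) / jnp.sqrt(1 - rho**2))  # P(X2 <= x2 | x1)
    return jnp.array([y1, y2])

# "Rotated-Gaussian" MPA (5) for uniform sources on (0,1)^2.
theta = jnp.pi / 6
R = jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
               [jnp.sin(theta),  jnp.cos(theta)]])

def a_R(y):
    return ndtr(R @ ndtri(y))

# The Darmois outputs are independent uniforms, and a_R preserves their
# distribution, yet neither recovers the true sources in general.
key = jax.random.PRNGKey(0)
L = jnp.linalg.cholesky(jnp.array([[1.0, rho], [rho, 1.0]]))
x = jax.random.normal(key, (5000, 2)) @ L.T    # correlated observations
y = jax.vmap(g_darmois)(x)                     # ~ U((0,1)^2), independent
z = jax.vmap(a_R)(y)                           # same distribution as y
print(jnp.corrcoef(y.T), jnp.corrcoef(z.T))    # both ~ identity matrices
```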
As indicated by the KL divergence values (left), most trained models achieve a good fit to the data across all values of λ.15 We observe that using CIMA (i.e., λ > 0) is beneficial for BSS, both in terms of our nonlinear Amari distance (center, lower is better) and MCC (right, higher is better), though we do not observe a substantial difference between λ = 0.5 and λ = 1.16 Summary: CIMA can be a useful learning signal to recover the true solution. 14the latter possibly due to the increased difficulty of the learning task for larger n 15models with n = 7 have high outlier KL values, seemingly less pronounced for nonzero values of λ 16In Appendix E.5, we also show that our method is superior to a linear ICA baseline, FastICA [36]. 6 Discussion Assumptions on the mixing function. Instead of relying on weak supervision in the form of auxiliary variables [28, 30, 37, 38, 41, 49], our IMA approach places additional constraints on the functional form of the mixing process. In a similar vein, the minimal nonlinear distortion principle [108] proposes to favor solutions that are as close to linear as possible. Another example is the post-nonlinear model [98, 109], which assumes an element-wise nonlinearity applied after a linear mixing. IMA is different in that it still allows for strongly nonlinear mixings (see, e.g., Fig. 3) provided that the columns of their Jacobians are (close to) orthogonal. In the related field of disentanglement [8, 58], a line of work that focuses on image generation with adversarial networks [24] similarly proposes to constrain the “generator” function via regularisation of its Jacobian [82] or Hessian [74], though mostly from an empirically-driven, rather than from an identifiability perspective as in the present work. Towards identifiability with CIMA. The IMA principle rules out a large class of spurious solutions to nonlinear ICA. While we do not present a full identifiability result, our experiments show that CIMA can be used to recover the BSS equivalence class, suggesting that identifiability might indeed hold, possibly under additional assumptions—e.g., for conformal maps [39]. IMA and independence of cause and mechanism. While inspired by measures of independence of cause and mechanism as traditionally used for cause-effect inference [18, 45, 46, 110], we view the IMA principle as addressing a different question, in the sense that they evaluate independence between different elements of the causal model. Any nonlinear ICA solution that satisfies the IMA Principle 4.1 can be turned into one with uniform reconstructed sources—thus satisfying IGCI as argued in § 3— through composition with an element-wise transformation which, according to Prop. 4.6 (ii), leaves the CIMA value unchanged. Both IGCI (6) and IMA (7) can therefore be fulfilled simultaneosly, while the former on its own is inconsequential for BSS as shown in Prop. 3.1. BSS through algorithmic information. Algorithmic information theory has previously been proposed as a unifying framework for identifiable approaches to linear BSS [67, 68], in the sense that commonly-used contrast functions could, under suitable assumptions, be interpreted as proxies for the total complexity of the mixing and the reconstructed sources. However, to the best of our knowledge, the problem of specifying suitable proxies for the complexity of nonlinear mixing functions has not yet been addressed. 
We conjecture that our framework could be linked to this view, based on the additional assumption of algorithmic independence of causal mechanisms [43], thus potentially representing an approach to nonlinear BSS by minimisation of algorithmic complexity. ICA for causal inference & causality for ICA. Past advances in ICA have inspired novel causal discovery methods [50, 64, 92]. The present work constitutes, to the best of our knowledge, the first effort to use ideas from causality (specifically ICM) for BSS. An application of the IMA principle to causal discovery or causal representation learning [88] is an interesting direction for future work. Conclusion. We introduce IMA, a path to nonlinear BSS inspired by concepts from causality. We postulate that the influences of different sources on the observed distribution should be approximately independent, and formalise this as an orthogonality condition on the columns of the Jacobian. We prove that this constraint is generally violated by well-known spurious nonlinear ICA solutions, and propose a regularised maximum likelihood approach which we empirically demonstrate to be effective in recovering the true solution. Our IMA principle holds exactly for orthogonal coordinate transformations, and is thus of potential interest for learning spatial representations [33], robot dynamics [63], or physics problems where orthogonal reference frames are common [66]. Acknowledgements The authors thank Aapo Hyvärinen, Adrián Javaloy Bornás, Dominik Janzing, Giambattista Parascandolo, Giancarlo Fissore, Nasim Rahaman, Patrick Burauel, Patrik Reizinger, Paul Rubenstein, Shubhangi Ghosh, and the anonymous reviewers for helpful comments and discussions. Funding Transparency Statement This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
1. What is the focus and contribution of the paper on independent mechanism analysis?
2. What are the strengths of the proposed approach, particularly in its novelty and ability to provide non-spurious solutions to nonlinear blind source separation?
3. How does the reviewer assess the clarity and quality of the paper's content, particularly in its introduction and theoretical analysis?
4. Are there any concerns or suggestions regarding the paper's contributions and potential impact on unsupervised learning?
Summary Of The Paper Review
Summary Of The Paper
The authors propose to borrow concepts from causality, in particular "independent causal mechanisms", to develop a novel framework termed "independent mechanism analysis" (IMA), which can provide non-spurious solutions to nonlinear blind source separation. IMA can be seen as a restricted nonlinear ICA model, where the contributions of the latent variables z_i to the nonlinear mixing f are independent, i.e., the columns of the Jacobian J_f are orthogonal.
Review
The proposed IMA framework is novel, and provides an alternative solution to the nonlinear ICA (equivalently, nonlinear BSS) problem. It is by now known that nonlinear ICA is not possible when dealing with general nonlinear mixing functions if one only assumes independence of the latent components. Recent approaches usually involve using inductive biases in the form of auxiliary information to condition the latent distribution, without constraining the mixing. The authors propose to take a different path, by constraining the mixing instead. The paper is very well written, and was a joy to read. Section 2 does a very good job of laying the ground for the IMA framework. Readers without knowledge about nonlinear ICA or the ICM framework will find this section very helpful. The theoretical results are sound, and the assumptions are well justified. The new IMA contrast function is very useful, and can be used in practice to rule out a lot of spurious solutions to the nonlinear ICA problem. Overall, this work is original, and of great significance. Learning identifiable representations is an important topic in unsupervised learning, and the provided framework is an important step in this direction. I don't have much to criticize about the paper. I would have liked to see a full identifiability theory, but I believe it will follow in future work, and I am satisfied with the quality and quantity of the current contribution.
NIPS
Title Independent mechanism analysis, a new concept?
Abstract Independent component analysis provides a principled framework for unsupervised representation learning, with solid theory on the identifiability of the latent code that generated the data, given only observations of mixtures thereof. Unfortunately, when the mixing is nonlinear, the model is provably nonidentifiable, since statistical independence alone does not sufficiently constrain the problem. Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process. We investigate an alternative path and consider instead including assumptions reflecting the principle of independent causal mechanisms exploited in the field of causality. Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process. This gives rise to a framework which we term independent mechanism analysis. We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
1 Introduction
One of the goals of unsupervised learning is to uncover properties of the data generating process, such as latent structures giving rise to the observed data. Identifiability [55] formalises this desideratum: under suitable assumptions, a model learnt from observations should match the ground truth, up to well-defined ambiguities. Within representation learning, identifiability has been studied mostly in the context of independent component analysis (ICA) [17, 40], which assumes that the observed data x results from mixing unobserved independent random variables si referred to as sources. The aim is to recover the sources based on the observed mixtures alone, also termed blind source separation (BSS). A major obstacle to BSS is that, in the nonlinear case, independent component estimation does not necessarily correspond to recovering the true sources: it is possible to give counterexamples where the observations are transformed into components yi which are independent, yet still mixed with respect to the true sources si [20, 39, 98]. In other words, nonlinear ICA is not identifiable. In order to achieve identifiability, a growing body of research postulates additional supervision or structure in the data generating process, often in the form of auxiliary variables [28, 30, 37, 38, 41].
In the present work, we investigate a different route to identifiability by drawing inspiration from the field of causal inference [71, 78] which has provided useful insights for a number of machine learning tasks, including semi-supervised [87, 103], transfer [6, 23, 27, 31, 61, 72, 84, 85, 97, 102, 107], reinforcement [7, 14, 22, 26, 53, 59, 60, 106], and unsupervised [9, 10, 54, 70, 88, 91, 104, 105] learning. To this end, we interpret the ICA mixing as a causal process and apply the principle of independent causal mechanisms (ICM) which postulates that the generative process consists of independent modules which do not share information [43, 78, 87]. In this context, "independent" does not refer to statistical independence of random variables, but rather to the notion that the distributions and functions composing the generative process are chosen independently by Nature [43, 48].
While a formalisation of ICM [43, 57] in terms of algorithmic (Kolmogorov) complexity [51] exists, it is not computable, and hence applying ICM in practice requires assessing such non-statistical independence with suitable domain-specific criteria [96]. The goal of our work is thus to constrain the nonlinear ICA problem, in particular the mixing function, via suitable ICM measures, thereby ruling out common counterexamples to identifiability which intuitively violate the ICM principle. Traditionally, ICM criteria have been developed for causal discovery, where both cause and effect are observed [18, 45, 46, 110]. They enforce an independence between (i) the cause (source) distribution and (ii) the conditional or mechanism (mixing function) generating the effect (observations), and thus rely on the fact that the observed cause distribution is informative. As we will show, this renders them insufficient for nonlinear ICA, since the constraints they impose are satisfied by common counterexamples to identifiability. With this in mind, we introduce a new way to characterise or refine the ICM principle for unsupervised representation learning tasks such as nonlinear ICA.
∗ Equal contribution. Code available at: https://github.com/lgresele/independent-mechanism-analysis
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Motivating example. To build intuition, we turn to a famous example of ICA and BSS: the cocktail party problem, illustrated in Fig. 1 (Left). Here, a number of conversations are happening in parallel, and the task is to recover the individual voices si from the recorded mixtures xi. The mixing or recording process f is primarily determined by the room acoustics and the locations at which microphones are placed. Moreover, each speaker influences the recording through their positioning in the room, and we may think of this influence as ∂f/∂si. Our independence postulate then amounts to stating that the speakers' positions are not fine-tuned to the room acoustics and microphone placement, or to each other, i.e., the contributions ∂f/∂si should be independent (in a non-statistical sense).1
Our approach. We formalise this notion of independence between the contributions ∂f/∂si of each source to the mixing process (i.e., the columns of the Jacobian matrix Jf of partial derivatives) as an orthogonality condition, see Fig. 1 (Right). Specifically, the absolute value of the determinant |Jf|, which describes the local change in infinitesimal volume induced by mixing the sources, should factorise or decompose as the product of the norms of its columns. This can be seen as a decoupling of the local influence of each partial derivative in the pushforward operation (mixing function) mapping the source distribution to the observed one, and gives rise to a novel framework which we term independent mechanism analysis (IMA). IMA can be understood as a refinement of the ICM principle that applies the idea of independence of mechanisms at the level of the mixing function.
Contributions.
The structure and contributions of this paper can be summarised as follows:
• we review well-known obstacles to identifiability of nonlinear ICA (§ 2.1), as well as existing ICM criteria (§ 2.2), and show that the latter do not sufficiently constrain nonlinear ICA (§ 3);
• we propose a more suitable ICM criterion for unsupervised representation learning which gives rise to a new framework that we term independent mechanism analysis (IMA) (§ 4); we provide geometric and information-theoretic interpretations of IMA (§ 4.1), introduce an IMA contrast function which is invariant to the inherent ambiguities of nonlinear ICA (§ 4.2), and show that it rules out a large class of counterexamples and is consistent with existing identifiability results (§ 4.3);
• we experimentally validate our theoretical claims and propose a regularised maximum-likelihood learning approach based on the IMA contrast which outperforms the unregularised baseline (§ 5); additionally, we introduce a method to learn nonlinear ICA solutions with triangular Jacobian and a metric to assess BSS which can be of independent interest for the nonlinear ICA community.
Footnote 1: For additional intuition and possible violations in the context of the cocktail party problem, see Appendix B.4.
2 Background and preliminaries
Our work builds on and connects related literature from the fields of independent component analysis (§ 2.1) and causal inference (§ 2.2). We review the most important concepts below.
2.1 Independent component analysis (ICA)
Assume the following data-generating process for independent component analysis (ICA)
x = f(s) , p_s(s) = ∏_{i=1}^n p_{s_i}(s_i) , (1)
where the observed mixtures x ∈ Rn result from applying a smooth and invertible mixing function f : Rn → Rn to a set of unobserved, independent signals or sources s ∈ Rn with smooth, factorised density ps with connected support (see the illustration in Fig. 2b). The goal of ICA is to learn an unmixing function g : Rn → Rn such that y = g(x) has independent components. Blind source separation (BSS), on the other hand, aims to recover the true unmixing f−1 and thus the true sources s (up to tolerable ambiguities, see below). Whether performing ICA corresponds to solving BSS is related to the concept of identifiability of the model class. Intuitively, identifiability is the desirable property that all models which give rise to the same mixture distribution should be "equivalent" up to certain ambiguities, formally defined as follows.
Definition 2.1 (∼-identifiability). Let F be the set of all smooth, invertible functions f : Rn → Rn, and P be the set of all smooth, factorised densities ps with connected support on Rn. Let M ⊆ F × P be a subspace of models and let ∼ be an equivalence relation on M. Denote by f_*p_s the push-forward density of ps via f. Then the generative process (1) is said to be ∼-identifiable on M if
∀(f, p_s), (f̃, p_s̃) ∈ M : f_* p_s = f̃_* p_s̃ ⟹ (f, p_s) ∼ (f̃, p_s̃) . (2)
If the true model belongs to the model class M, then ∼-identifiability ensures that any model in M learnt from (infinite amounts of) data will be ∼-equivalent to the true one. An example is linear ICA which is identifiable up to permutation and rescaling of the sources on the subspace MLIN of pairs of (i) invertible matrices (constraint on F) and (ii) factorising densities for which at most one si is Gaussian (constraint on P) [17, 21, 93], see Appendix A for a more detailed account. In the nonlinear case (i.e., without constraints on F), identifiability is much more challenging.
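Since everything that follows (pushforward densities, the Darmois construction, and the maximum-likelihood training in § 5) rests on the change of variables induced by (1), a small worked sketch may help. The following is our own illustrative toy, not code from the paper: the mixing f below is an arbitrary smooth invertible map chosen purely for demonstration, and Gaussian sources stand in for a generic factorised p_s.

```python
# Minimal sketch (our toy, not the authors' code) of the generative process (1)
# and the change-of-variables likelihood it induces:
#   log p_x(x) = log p_s(f^{-1}(x)) + log |det J_{f^{-1}}(x)|.
import jax
import jax.numpy as jnp

def f(s):
    # A smooth, invertible mixing R^2 -> R^2 (illustrative choice only).
    return jnp.array([s[0] + jnp.tanh(s[1]), 2.0 * s[1]])

def f_inv(x):
    # Closed-form inverse of the toy mixing above.
    return jnp.array([x[0] - jnp.tanh(x[1] / 2.0), x[1] / 2.0])

def log_px(x):
    s = f_inv(x)
    log_ps = jnp.sum(jax.scipy.stats.norm.logpdf(s))  # factorised source density
    J_inv = jax.jacfwd(f_inv)(x)
    return log_ps + jnp.log(jnp.abs(jnp.linalg.det(J_inv)))

key = jax.random.PRNGKey(0)
s = jax.random.normal(key, (2,))
print(log_px(f(s)))  # density of the observed mixture at x = f(s)
```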
If si and sj are independent, then so are hi(si) and hj(sj) for any functions hi and hj. In addition to permutation-ambiguity, such element-wise h(s) = (h1(s1), ..., hn(sn)) can therefore not be resolved either. We thus define the desired form of identifiability for nonlinear BSS as follows.
Definition 2.2 (∼BSS). The equivalence relation ∼BSS on F × P defined as in Defn. 2.1 is given by
(f, p_s) ∼BSS (f̃, p_s̃) ⟺ ∃ P, h s.t. (f, p_s) = (f̃ ∘ h^{−1} ∘ P^{−1}, (P ∘ h)_* p_s̃) (3)
where P is a permutation and h(s) = (h1(s1), ..., hn(sn)) is an invertible, element-wise function.
A fundamental obstacle—and a crucial difference to the linear problem—is that in the nonlinear case, different mixtures of si and sj can be independent, i.e., solving ICA is not equivalent to solving BSS. A prominent example of this is given by the Darmois construction [20, 39].
Definition 2.3 (Darmois construction). The Darmois construction gD : Rn → (0, 1)n is obtained by recursively applying the conditional cumulative distribution function (CDF) transform:
g^D_i(x_{1:i}) := P(X_i ≤ x_i | x_{1:i−1}) = ∫_{−∞}^{x_i} p(x′_i | x_{1:i−1}) dx′_i (i = 1, ..., n). (4)
The resulting estimated sources yD = gD(x) are mutually-independent uniform r.v.s by construction, see Fig. 2a for an illustration. However, they need not be meaningfully related to the true sources s, and will, in general, still be a nonlinear mixing thereof [39].2 Denoting the mixing function corresponding to (4) by fD = (gD)−1 and the uniform density on (0, 1)n by pu, the Darmois solution (fD, pu) thus allows construction of counterexamples to ∼BSS-identifiability on F × P.3
Remark 2.4. gD has lower-triangular Jacobian, i.e., ∂gDi/∂xj = 0 for i < j. Since the order of the xi is arbitrary, applying gD after a permutation yields a different Darmois solution. Moreover, (4) yields independent components yD even if the sources si were not independent to begin with.4
Footnote 2: Consider, e.g., a mixing f with full Jacobian which yields a contradiction to Defn. 2.2, due to Remark 2.4.
Footnote 3: By applying a change of variables, we can see that the transformed variables in (4) are uniformly distributed in the open unit cube, thereby corresponding to independent components [69, § 2.2].
Footnote 4: This has broad implications for unsupervised learning, as it shows that, for i.i.d. observations, not only factorised priors, but any unconditional prior is insufficient for identifiability (see, e.g., [49], Appendix D.2).
Another well-known obstacle to identifiability is given by measure-preserving automorphisms (MPAs) of the source distribution ps: these are functions a which map the source space to itself without affecting its distribution, i.e., a_*p_s = p_s [39]. A particularly instructive class of MPAs is the following [49, 58].
Definition 2.5 ("Rotated-Gaussian" MPA). Let R ∈ O(n) be an orthogonal matrix, and denote by Fs(s) = (Fs1(s1), ..., Fsn(sn)) and Φ(z) = (Φ(z1), ..., Φ(zn)) the element-wise CDFs of a smooth, factorised density ps and of a Gaussian, respectively. Then the "rotated-Gaussian" MPA aR(ps) is
a_R(p_s) = F_s^{−1} ∘ Φ ∘ R ∘ Φ^{−1} ∘ F_s . (5)
aR(ps) first maps to the (rotationally invariant) standard isotropic Gaussian (via Φ^{−1} ∘ F_s), then applies a rotation, and finally maps back, without affecting the distribution of the estimated sources. Hence, if (f̃, p_s̃) is a valid solution, then so is (f̃ ∘ a_R(p_s̃), p_s̃) for any R ∈ O(n). Unless R is a permutation, this constitutes another common counterexample to ∼BSS-identifiability on F × P.
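The Darmois construction (4) can be written down in closed form for simple families of mixtures. As a minimal sketch (our own toy example, not from the paper), take a bivariate Gaussian with standardised marginals and correlation ρ: the conditional CDFs are known exactly, and the resulting components are independent uniforms even though each one remains a mixture of the underlying sources.

```python
# Minimal sketch (our toy example, not from the paper) of the Darmois
# construction (4) for a bivariate Gaussian with correlation rho. Note the
# lower-triangular structure of g^D (Remark 2.4): y1 depends only on x1.
import jax
import jax.numpy as jnp
from jax.scipy.special import ndtr  # standard normal CDF

rho = 0.8

def g_darmois(x):
    y1 = ndtr(x[0])                                          # P(X1 <= x1)
    y2 = ndtr((x[1] - rho * x[0]) / jnp.sqrt(1.0 - rho**2))  # P(X2 <= x2 | x1)
    return jnp.array([y1, y2])

# Sample correlated observations and check that the outputs are uncorrelated
# uniforms, despite not recovering the original independent sources.
key = jax.random.PRNGKey(0)
z = jax.random.normal(key, (10000, 2))
x = jnp.stack([z[:, 0], rho * z[:, 0] + jnp.sqrt(1.0 - rho**2) * z[:, 1]], axis=1)
y = jax.vmap(g_darmois)(x)
print(jnp.corrcoef(y.T))  # approximately the 2x2 identity matrix
```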
Identifiability results for nonlinear ICA have recently been established for settings where an auxiliary variable u (e.g., environment index, time stamp, class label) renders the sources conditionally independent [37, 38, 41, 49]. The assumption on ps in (1) is replaced with p_{s|u}(s|u) = ∏_{i=1}^n p_{s_i|u}(s_i|u), thus restricting P in Defn. 2.1. In most cases, u is assumed to be observed, though [30] is a notable exception. Similar results exist given access to a second noisy view x̃ [28].
2.2 Causal inference and the principle of independent causal mechanisms (ICM)
Rather than relying only on additional assumptions on P (e.g., via auxiliary variables), we seek to further constrain (1) by also placing assumptions on the set F of mixing functions f. To this end, we draw inspiration from the field of causal inference [71, 78]. Of central importance to our approach is the Principle of Independent Causal Mechanisms (ICM) [43, 56, 87].
Principle 2.6 (ICM principle [78]). The causal generative process of a system's variables is composed of autonomous modules that do not inform or influence each other.
These "modules" are typically thought of as the conditional distributions of each variable given its direct causes. Intuitively, the principle then states that these causal conditionals correspond to independent mechanisms of nature which do not share information. Crucially, here "independent" does not refer to statistical independence of random variables, but rather to independence of the underlying distributions as algorithmic objects. For a bivariate system comprising a cause c and an effect e, this idea reduces to an independence of cause and mechanism, see Fig. 2c.
One way to formalise ICM uses Kolmogorov complexity K(·) [51] as a measure of algorithmic information [43]. However, since Kolmogorov complexity is not computable, using ICM in practice requires assessing Principle 2.6 with other suitable proxy criteria [9, 11, 34, 42, 45, 65, 75–78, 90, 110].5 Allowing for deterministic relations between cause (sources) and effect (observations), the criterion which is most closely related to the ICA setting in (1) is information-geometric causal inference (IGCI) [18, 46].6 IGCI assumes a nonlinear relation e = f(c) and formulates a notion of independence between the cause distribution pc and the deterministic mechanism f (which we think of as a degenerate conditional pe|c) via the following condition (in practice, assumed to hold approximately),
C_IGCI(f, p_c) := ∫ log |J_f(c)| p_c(c) dc − ∫ log |J_f(c)| dc = 0 , (6)
where (J_f(c))_ij = ∂f_i/∂c_j(c) is the Jacobian matrix and |·| the absolute value of the determinant. CIGCI can be understood as the covariance between pc and log |Jf| (viewed as r.v.s on the unit cube w.r.t. the Lebesgue measure), so that CIGCI = 0 rules out a form of fine-tuning between pc and |Jf|. As its name suggests, IGCI can, from an information-geometric perspective, also be seen as an orthogonality condition between cause and mechanism in the space of probability distributions [46], see Appendix B.2, particularly eq. (19) for further details.
Footnote 5: "This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on assumptions on the underlying distribution" [43].
Footnote 6: For a similar criterion which assumes linearity [45, 110] and its relation to linear ICA, see Appendix B.1.
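To make (6) concrete, here is a one-dimensional Monte Carlo sketch (our own toy, not from the paper): C_IGCI is the gap between the average of log |f′| under the cause distribution and its plain average over the unit interval, so it vanishes for a uniform cause and is nonzero when p_c and log |f′| are correlated — which foreshadows the argument of § 3 below.

```python
# 1-D Monte Carlo sketch (our toy, not from the paper) of the IGCI condition (6):
#   C_IGCI = E_{p_c}[log |f'(c)|] - integral_0^1 log |f'(c)| dc.
import jax
import jax.numpy as jnp

def f(c):
    return c ** 3  # toy mechanism on (0, 1); f'(c) = 3 c^2

fprime = jax.vmap(jax.grad(f))

key1, key2, key3 = jax.random.split(jax.random.PRNGKey(0), 3)
c_ref = jax.random.uniform(key1, (100000,), minval=1e-6)   # uniform reference measure
c_unif = jax.random.uniform(key2, (100000,), minval=1e-6)  # a uniform cause distribution
c_beta = jax.random.beta(key3, 5.0, 1.0, (100000,))        # an informative cause distribution

def c_igci(c_samples):
    return (jnp.mean(jnp.log(jnp.abs(fprime(c_samples))))
            - jnp.mean(jnp.log(jnp.abs(fprime(c_ref)))))

print(c_igci(c_unif))  # ~0: any mechanism is "generic" w.r.t. a uniform cause
print(c_igci(c_beta))  # > 0 here: p_c and log |f'| are fine-tuned to each other
```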
3 Existing ICM measures are insufficient for nonlinear ICA
Our aim is to use the ICM Principle 2.6 to further constrain the space of models M ⊆ F × P and rule out common counterexamples to identifiability such as those presented in § 2.1. Intuitively, both the Darmois construction (4) and the rotated-Gaussian MPA (5) give rise to "non-generic" solutions which should violate ICM: the former, (fD, pu), due to the triangular Jacobian of fD (see Remark 2.4), meaning that each observation xi = fDi(y1:i) only depends on a subset of the inferred independent components y1:i, and the latter, (f ∘ aR(ps), ps), due to the dependence of f ∘ aR(ps) on ps (5).
However, the ICM criteria described in § 2.2 were developed for the task of cause-effect inference where both variables are observed. In contrast, in this work, we consider an unsupervised representation learning task where only the effects (mixtures x) are observed, but the causes (sources s) are not. It turns out that this renders existing ICM criteria insufficient for BSS: they can easily be satisfied by spurious solutions which are not equivalent to the true one. We can show this for IGCI. Denote by MIGCI = {(f, p_s) ∈ F × P : C_IGCI(f, p_s) = 0} ⊂ F × P the class of nonlinear ICA models satisfying IGCI (6). Then the following negative result holds.
Proposition 3.1 (IGCI is insufficient for ∼BSS-identifiability). (1) is not ∼BSS-identifiable on MIGCI.
Proof. IGCI (6) is satisfied when ps is uniform. However, the Darmois construction (4) yields uniform sources, see Fig. 2a. This means that (fD ∘ aR(pu), pu) ∈ MIGCI, so IGCI can be satisfied by solutions which do not separate the sources in the sense of Defn. 2.2, see footnote 2 and [39].
As illustrated in Fig. 2c, condition (6) and other similar criteria enforce a notion of "genericity" or "decoupling" of the mechanism w.r.t. the observed input distribution.7 They thus rely on the fact that the cause (source) distribution is informative, and are generally not invariant to reparametrisation of the cause variables. In the (nonlinear) ICA setting, on the other hand, the learnt source distribution may be fairly uninformative. This poses a challenge for existing ICM criteria since any mechanism is generic w.r.t. an uninformative (uniform) input distribution.
4 Independent mechanism analysis (IMA)
As argued in § 3, enforcing independence between the input distribution and the mechanism (Fig. 2c), as existing ICM criteria do, is insufficient for ruling out spurious solutions to nonlinear ICA. We therefore propose a new ICM-inspired framework which is more suitable for BSS and which we term independent mechanism analysis (IMA).8 All proofs are provided in Appendix C.
4.1 Intuition behind IMA
As motivated using the cocktail party example in § 1 and Fig. 1 (Left), our main idea is to enforce a notion of independence between the contributions or influences of the different sources si on the observations x = f(s) as illustrated in Fig. 2d—as opposed to between the source distribution and mixing function, cf. Fig. 2c. These contributions or influences are captured by the vectors of partial derivatives ∂f/∂si. IMA can thus be understood as a refinement of ICM at the level of the mixing f: in addition to statistically independent components si, we look for a mixing with contributions ∂f/∂si which are independent, in a non-statistical sense which we formalise as follows.
Principle 4.1 (IMA).
The mechanisms by which each source si influences the observed distribution, as captured by the partial derivatives ∂f/∂si, are independent of each other in the sense that for all s:
log |J_f(s)| = ∑_{i=1}^n log ‖∂f/∂s_i(s)‖ (7)
Geometric interpretation. Geometrically, the IMA principle can be understood as an orthogonality condition, as illustrated for n = 2 in Fig. 1 (Right). First, the vectors of partial derivatives ∂f/∂si, for which the IMA principle postulates independence, are the columns of Jf. |Jf| thus measures the volume of the n-dimensional parallelepiped spanned by these columns, as shown on the right. The product of their norms, on the other hand, corresponds to the volume of an n-dimensional box, or rectangular parallelepiped with side lengths ‖∂f/∂si‖, as shown on the left. The two volumes are equal if and only if all columns ∂f/∂si of Jf are orthogonal. Note that (7) is trivially satisfied for n = 1, i.e., if there is no mixing, further highlighting its difference from ICM for causal discovery.
Independent influences and orthogonality. In a high-dimensional setting (large n), this orthogonality can be intuitively interpreted from the ICM perspective as Nature choosing the direction of the influence of each source component in the observation space independently and from an isotropic prior. Indeed, it can be shown that the scalar product of two independent isotropic random vectors in Rn vanishes as the dimensionality n increases (equivalently: two high-dimensional isotropic vectors are typically orthogonal). This property was previously exploited in other linear ICM-based criteria (see [44, Lemma 5] and [45, Lemma 1 & Thm. 1]).9 The principle in (7) can be seen as a constraint on the function space, enforcing such orthogonality between the columns of the Jacobian of f at all points in the source domain, thus approximating the high-dimensional behavior described above.10
Information-geometric interpretation and comparison to IGCI. The additive contribution of the sources' influences ∂f/∂si in (7) suggests their local decoupling at the level of the mechanism f. Note that IGCI (6), on the other hand, postulates a different type of decoupling: one between log |Jf| and ps. There, dependence between cause and mechanism can be conceived as a fine-tuning between the derivative of the mechanism and the input density. The IMA principle leads to a complementary, non-statistical measure of independence between the influences ∂f/∂si of the individual sources on the vector of observations. Both the IGCI and IMA postulates have an information-geometric interpretation related to the influence of ("non-statistically") independent modules on the observations: both lead to an additive decomposition of a KL-divergence between the effect distribution and a reference distribution. For IGCI, independent modules correspond to the cause distribution and the mechanism mapping the cause to the effect (see (19) in Appendix B.2). For IMA, on the other hand, these are the influences of each source component on the observations in an interventional setting (under soft interventions on individual sources), as measured by the KL-divergences between the original and intervened distributions. See Appendix B.3, and especially (22), for a more detailed account.
Footnote 7: In fact, many ICM criteria can be phrased as special cases of a unifying group-invariance framework [9].
Footnote 8: The title of the present work is thus a reverence to Pierre Comon's seminal 1994 paper [17].
Footnote 9: This has also been used as a "leading intuition" [sic] to interpret IGCI in [46].
Footnote 10: To provide additional intuition on how IMA differs from existing principles of independence of cause and mechanism, we give examples, both technical and pictorial, of violations of both in Appendix B.4.
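The orthogonality reading of (7) is easy to check numerically. Below is a small sketch (ours, not the authors' code) that evaluates the gap between the two sides of (7) — which § 4.2 turns into the local IMA contrast — using jax.jacfwd; the polar-to-Cartesian map satisfies the equality exactly, while a sheared map violates it.

```python
# Numerical check (ours, not the authors' code) of the IMA equality (7): the gap
#   sum_i log ||df/ds_i(s)|| - log |det J_f(s)|
# vanishes iff the columns of the Jacobian are orthogonal at s.
import jax
import jax.numpy as jnp

def ima_gap(f, s):
    J = jax.jacfwd(f)(s)                   # column i is the influence df/ds_i
    col_norms = jnp.linalg.norm(J, axis=0)
    return jnp.sum(jnp.log(col_norms)) - jnp.log(jnp.abs(jnp.linalg.det(J)))

polar = lambda s: jnp.array([s[0] * jnp.cos(s[1]), s[0] * jnp.sin(s[1])])
shear = lambda s: jnp.array([s[0] + s[1], s[1]])  # non-orthogonal columns

s = jnp.array([1.3, 0.7])
print(ima_gap(polar, s))  # ~0: columns (cos t, sin t) and (-r sin t, r cos t) are orthogonal
print(ima_gap(shear, s))  # log(sqrt(2)) > 0: the equality (7) is violated
```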
We finally remark that while recent work based on the ICM principle has mostly used the term "mechanism" to refer to causal Markov kernels p(Xi|PAi) or structural equations [78], we employ it in line with the broader use of this concept in the philosophical literature.11 To highlight just two examples, [86] states that "Causal processes, causal interactions, and causal laws provide the mechanisms by which the world works; to understand why certain things happen, we need to see how they are produced by these mechanisms"; and [99] states that "Mechanisms are events that alter relations among some specified set of elements". Following this perspective, we argue that a causal mechanism can more generally denote any process that describes the way in which causes influence their effects: the partial derivative ∂f/∂si thus reflects a causal mechanism in the sense that it describes the infinitesimal changes in the observations x, when an infinitesimal perturbation is applied to si.
Footnote 11: See Table 1 in [62] for a long list of definitions from the literature.
4.2 Definition and useful properties of the IMA contrast
We now introduce a contrast function based on the IMA principle (7) and show that it possesses several desirable properties in the context of nonlinear ICA. First, we define a local contrast as the difference between the two integrands of (7) for a particular value of the sources s.
Definition 4.2 (Local IMA contrast). The local IMA contrast cIMA(f, s) of f at a point s is given by
c_IMA(f, s) = ∑_{i=1}^n log ‖∂f/∂s_i(s)‖ − log |J_f(s)| . (8)
Remark 4.3. This corresponds to the left KL measure of diagonality [2] for √(J_f(s)^⊤ J_f(s)).
The local IMA contrast cIMA(f, s) quantifies the extent to which the IMA principle is violated at a given point s. We summarise some of its properties in the following proposition.
Proposition 4.4 (Properties of cIMA(f, s)). The local IMA contrast cIMA(f, s) defined in (8) satisfies: (i) cIMA(f, s) ≥ 0, with equality if and only if all columns ∂f/∂si(s) of Jf(s) are orthogonal. (ii) cIMA(f, s) is invariant to left multiplication of Jf(s) by an orthogonal matrix and to right multiplication by permutation and diagonal matrices.
Property (i) formalises the geometric interpretation of IMA as an orthogonality condition on the columns of the Jacobian from § 4.1, and property (ii) intuitively states that changes of orthonormal basis and permutations or rescalings of the columns of Jf do not affect their orthogonality. Next, we define a global IMA contrast w.r.t. a source distribution ps as the expected local IMA contrast.
Definition 4.5 (Global IMA contrast). The global IMA contrast CIMA(f, ps) of f w.r.t. ps is given by
C_IMA(f, p_s) = E_{s∼p_s}[c_IMA(f, s)] = ∫ c_IMA(f, s) p_s(s) ds . (9)
The global IMA contrast CIMA(f, ps) thus quantifies the extent to which the IMA principle is violated for a particular solution (f, ps) to the nonlinear ICA problem. We summarise its properties as follows.
Proposition 4.6 (Properties of CIMA(f, ps)). The global IMA contrast CIMA(f, ps) from (9) satisfies: (i) CIMA(f, ps) ≥ 0, with equality iff. J_f(s) = O(s)D(s) almost surely w.r.t. ps, where O(s), D(s) ∈ Rn×n are orthogonal and diagonal matrices, respectively; (ii) CIMA(f, ps) = CIMA(f̃, ps̃) for any f̃ = f ∘ h^{−1} ∘ P^{−1} and s̃ = Ph(s), where P ∈ Rn×n is a permutation and h(s) = (h1(s1), ..., hn(sn)) an invertible element-wise function.
[Figure 3: An example of a (non-conformal) orthogonal coordinate transformation from polar (left) to Cartesian (right) coordinates.]
Property (i) is the distribution-level analogue to (i) of Prop. 4.4 and only allows for orthogonality violations on sets of measure zero w.r.t. ps. This means that CIMA can only be zero if f is an orthogonal coordinate transformation almost everywhere [19, 52, 66], see Fig. 3 for an example. We particularly stress property (ii), as it precisely matches the inherent indeterminacy of nonlinear ICA: CIMA is blind to reparametrisation of the sources by permutation and element-wise transformation.
4.3 Theoretical analysis and justification of CIMA
We now show that, under suitable assumptions on the generative model (1), a large class of spurious solutions—such as those based on the Darmois construction (4) or measure-preserving automorphisms such as aR from (5) as described in § 2.1—exhibit nonzero IMA contrast. Denote the class of nonlinear ICA models satisfying (7) (IMA) by MIMA = {(f, p_s) ∈ F × P : C_IMA(f, p_s) = 0} ⊂ F × P. Our first main theoretical result is that, under mild assumptions on the observations, Darmois solutions will have strictly positive CIMA, making them distinguishable from those in MIMA.
Theorem 4.7. Assume the data generating process in (1) and assume that xi ⊥̸⊥ xj for some i ̸= j. Then any Darmois solution (fD, pu) based on gD as defined in (4) satisfies CIMA(fD, pu) > 0. Thus a solution satisfying CIMA(f, ps) = 0 can be distinguished from (fD, pu) based on the contrast CIMA.
The proof is based on the fact that the Jacobian of gD is triangular (see Remark 2.4) and on the specific form of (4). A specific example of a mixing process satisfying the IMA assumption is the case where f is a conformal (angle-preserving) map.
Definition 4.8 (Conformal map). A smooth map f : Rn → Rn is conformal if J_f(s) = O(s)λ(s) ∀s, where λ : Rn → R is a scalar field, and O ∈ O(n) is an orthogonal matrix.
Corollary 4.9. Under the assumptions of Thm. 4.7, if additionally f is a conformal map, then (f, ps) ∈ MIMA for any ps ∈ P due to Prop. 4.6 (i), see Defn. 4.8. Based on Thm. 4.7, (f, ps) is thus distinguishable from Darmois solutions (fD, pu).
This is consistent with a result that proves identifiability of conformal maps for n = 2 and conjectures it in general [39].12 However, conformal maps are only a small subset of all maps for which CIMA = 0, as is apparent from the more flexible condition of Prop. 4.6 (i), compared to the stricter Defn. 4.8.
Footnote 12: Note that Corollary 4.9 holds for any dimensionality n.
Example 4.10 (Polar to Cartesian coordinate transform). Consider the non-conformal transformation from polar to Cartesian coordinates (see Fig. 3), defined as (x, y) = f(r, θ) := (r cos(θ), r sin(θ)) with independent sources s = (r, θ), with r ∼ U(0, R) and θ ∼ U(0, 2π).13 Then, CIMA(f, ps) = 0 and CIMA(fD, pu) > 0 for any Darmois solution (fD, pu)—see Appendix D for details.
Finally, for the case in which the true mixing is linear, we obtain the following result.
Corollary 4.11.
Consider a linear ICA model, x = As, with E[s⊤s] = I, and A ∈ O(n) an orthogonal, non-trivial mixing matrix, i.e., not the product of a diagonal and a permutation matrix DP. If at most one of the si is Gaussian, then CIMA(A, ps) = 0 and CIMA(fD, pu) > 0.
In a "blind" setting, we may not know a priori whether the true mixing is linear or not, and thus choose to learn a nonlinear unmixing. Corollary 4.11 shows that, in this case, Darmois solutions are still distinguishable from the true mixing via CIMA. Note that unlike in Corollary 4.9, the assumption that xi ⊥̸⊥ xj for some i ̸= j is not required for Corollary 4.11. In fact, due to Theorem 11 of [17], it follows from the assumed linear ICA model with non-Gaussian sources, and the fact that the mixing matrix is not the product of a diagonal and a permutation matrix (see also Appendix A).
Having shown that the IMA principle allows one to distinguish a class of models (including, but not limited to conformal maps) from Darmois solutions, we next turn to a second well-known counterexample to identifiability: the "rotated-Gaussian" MPA aR(ps) (5) from Defn. 2.5. Our second main theoretical result is that, under suitable assumptions, this class of MPAs can also be ruled out for "non-trivial" R.
Theorem 4.12. Let (f, ps) ∈ MIMA and assume that f is a conformal map. Given R ∈ O(n), assume additionally that ∃ at least one non-Gaussian si whose associated canonical basis vector ei is not transformed by R−1 = R⊤ into another canonical basis vector ej. Then CIMA(f ∘ aR(ps), ps) > 0.
Thm. 4.12 states that for conformal maps, applying the aR(ps) transformation at the level of the sources leads to an increase in CIMA except for very specific rotations R that are "fine-tuned" to ps in the sense that they permute all non-Gaussian sources si with another sj. Interestingly, as for the linear case, non-Gaussianity again plays an important role in the proof of Thm. 4.12.
5 Experiments
Our theoretical results from § 4 suggest that CIMA is a promising contrast function for nonlinear blind source separation. We test this empirically by evaluating the CIMA of spurious nonlinear ICA solutions (§ 5.1), and using it as a learning objective to recover the true solution (§ 5.2). We sample the ground truth sources from a uniform distribution in [0, 1]^n; the reconstructed sources are also mapped to the uniform hypercube as a reference measure via the CDF transform. Unless otherwise specified, the ground truth mixing f is a Möbius transformation [81] (i.e., a conformal map) with randomly sampled parameters, thereby satisfying Principle 4.1. In all of our experiments, we use JAX [12] and Distrax [13]. For additional technical details, equations and plots, see Appendix E. The code to reproduce our experiments is available at this link.
Footnote 13: For different ps, (x, y) can be made to have independent Gaussian components ([98], II.B), and CIMA-identifiability is lost; this shows that the assumption of Thm. 4.7 that xi ⊥̸⊥ xj for some i ̸= j is crucial.
5.1 Numerical evaluation of the CIMA contrast for spurious nonlinear ICA solutions
Learning the Darmois construction. To learn the Darmois construction from data, we use normalising flows, see [35, 69]. Since Darmois solutions have triangular Jacobian (Remark 2.4), we use an architecture based on residual flows [16] which we constrain such that the Jacobian of the full model is triangular. This yields an expressive model which we train effectively via maximum likelihood.
CIMA of Darmois solutions.
To check whether Darmois solutions (learnt from finite data) can be distinguished from the true one, as predicted by Thm. 4.7, we generate 1000 random mixing functions for n = 2, compute the CIMA values of learnt solutions, and find that all values are indeed significantly larger than zero, see Fig. 4 (a). The same holds for higher dimensions, see Fig. 4 (b) for results with 50 random mixings for n ∈ {2, 3, 5, 10}: with higher dimensionality, both the mean and variance of the CIMA distribution for the learnt Darmois solutions generally attain higher values.14 We confirmed these findings for mappings which are not conformal, while still satisfying (7), in Appendix E.5.
CIMA of MPAs. We also investigate the effect on CIMA of applying an MPA aR(·) from (5) to the true solution or a learnt Darmois solution. Results in n = 2 dimensions for different rotation matrices R (parametrised by the angle θ) are shown in Fig. 4 (c). As expected, the behavior is periodic in θ, and vanishes for the true solution (blue) at multiples of π/2, i.e., when R is a permutation matrix, as predicted by Thm. 4.12. For the learnt Darmois solution (red, dashed) CIMA remains larger than zero.
CIMA values for random MLPs. Lastly, we study the behavior of spurious solutions based on the Darmois construction under deviations from our assumption of CIMA = 0 for the true mixing function. To this end, we use invertible MLPs with orthogonal weight initialisation and leaky_tanh activations [29] as mixing functions; the more layers L are added to the mixing MLP, the larger a deviation from our assumptions is expected. We compare the true mixing and learnt Darmois solutions over 20 realisations for each L ∈ {2, 3, 4}, n = 5. Results are shown in Fig. 4 (d): the CIMA of the mixing MLPs grows with L; still, the one of the Darmois solution is typically higher.
Summary. We verify that spurious solutions can be distinguished from the true one based on CIMA.
5.2 Learning nonlinear ICA solutions with CIMA-regularised maximum likelihood
Experimental setup. To use CIMA as a learning signal, we consider a regularised maximum-likelihood approach, with the following objective: L(g) = E_x[log p_g(x)] − λ C_IMA(g^{−1}, p_y), where g denotes the learnt unmixing, y = g(x) the reconstructed sources, and λ ≥ 0 a Lagrange multiplier. For λ = 0, this corresponds to standard maximum likelihood estimation, whereas for λ > 0, L lower-bounds the likelihood, and recovers it exactly iff. (g^{−1}, p_y) ∈ MIMA. We train a residual flow g (with full Jacobian) to maximise L (a simplified code sketch of this objective is given below). For evaluation, we compute (i) the KL divergence to the true data likelihood, as a measure of goodness of fit for the learnt flow model; and (ii) the mean correlation coefficient (MCC) between ground truth and reconstructed sources [37, 49]. We also introduce (iii) a nonlinear extension of the Amari distance [5] between the true mixing and the learnt unmixing, which is larger than or equal to zero, with equality iff. the learnt model belongs to the BSS equivalence class (Defn. 2.2) of the true solution, see Appendix E.5 for details.
Results. In Fig. 4 (Top), we show an example of the distortion induced by different spurious solutions for n = 2, and contrast it with a solution learnt using our proposed objective (rightmost plot). Visually, we find that the CIMA-regularised solution (with λ = 1) recovers the true sources most faithfully. Quantitative results for 50 learnt models for each λ ∈ {0.0, 0.5, 1.0} and n ∈ {5, 7} are summarised in Fig. 5 (see Appendix E for additional plots).
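Before turning to the quantitative discussion, here is a concrete reading of the objective from the Experimental setup paragraph above. The sketch is a deliberately simplified stand-in — a toy linear unmixing instead of the residual flow — so all names and shapes are ours, not the authors' implementation.

```python
# Simplified sketch (toy linear unmixing, not the authors' residual flow) of the
# CIMA-regularised objective from Section 5.2:
#   L(g) = E_x[log p_g(x)] - lambda * C_IMA(g^{-1}, p_y).
import jax
import jax.numpy as jnp

def local_cima(J):
    # Local IMA contrast (8) of a map whose Jacobian is J (columns = df/ds_i).
    return jnp.sum(jnp.log(jnp.linalg.norm(J, axis=0))) - jnp.log(jnp.abs(jnp.linalg.det(J)))

def objective(W, x_batch, lam=1.0):
    # Toy unmixing y = g(x) = W x with a Gaussian base density on y.
    def log_px(x):
        y = W @ x
        return jnp.sum(jax.scipy.stats.norm.logpdf(y)) + jnp.log(jnp.abs(jnp.linalg.det(W)))
    # The learnt mixing is g^{-1}(y) = W^{-1} y, so its Jacobian is W^{-1}
    # (constant here; for a flow it would be evaluated at each reconstructed y).
    penalty = local_cima(jnp.linalg.inv(W))
    return jnp.mean(jax.vmap(log_px)(x_batch)) - lam * penalty

key = jax.random.PRNGKey(0)
x_batch = jax.random.normal(key, (256, 2))
W = jnp.eye(2) + 0.1 * jax.random.normal(key, (2, 2))
print(objective(W, x_batch))
print(jax.grad(objective)(W, x_batch).shape)  # ascent direction for training W
```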
As indicated by the KL divergence values (left), most trained models achieve a good fit to the data across all values of λ.15 We observe that using CIMA (i.e., λ > 0) is beneficial for BSS, both in terms of our nonlinear Amari distance (center, lower is better) and MCC (right, higher is better), though we do not observe a substantial difference between λ = 0.5 and λ = 1.16
Summary: CIMA can be a useful learning signal to recover the true solution.
Footnote 14: The latter is possibly due to the increased difficulty of the learning task for larger n.
Footnote 15: Models with n = 7 have high outlier KL values, seemingly less pronounced for nonzero values of λ.
Footnote 16: In Appendix E.5, we also show that our method is superior to a linear ICA baseline, FastICA [36].
6 Discussion
Assumptions on the mixing function. Instead of relying on weak supervision in the form of auxiliary variables [28, 30, 37, 38, 41, 49], our IMA approach places additional constraints on the functional form of the mixing process. In a similar vein, the minimal nonlinear distortion principle [108] proposes to favor solutions that are as close to linear as possible. Another example is the post-nonlinear model [98, 109], which assumes an element-wise nonlinearity applied after a linear mixing. IMA is different in that it still allows for strongly nonlinear mixings (see, e.g., Fig. 3) provided that the columns of their Jacobians are (close to) orthogonal. In the related field of disentanglement [8, 58], a line of work that focuses on image generation with adversarial networks [24] similarly proposes to constrain the "generator" function via regularisation of its Jacobian [82] or Hessian [74], though mostly from an empirically-driven rather than an identifiability perspective, as in the present work.
Towards identifiability with CIMA. The IMA principle rules out a large class of spurious solutions to nonlinear ICA. While we do not present a full identifiability result, our experiments show that CIMA can be used to recover the BSS equivalence class, suggesting that identifiability might indeed hold, possibly under additional assumptions—e.g., for conformal maps [39].
IMA and independence of cause and mechanism. While inspired by measures of independence of cause and mechanism as traditionally used for cause-effect inference [18, 45, 46, 110], we view the IMA principle as addressing a different question, in the sense that the two evaluate independence between different elements of the causal model. Any nonlinear ICA solution that satisfies the IMA Principle 4.1 can be turned into one with uniform reconstructed sources—thus satisfying IGCI as argued in § 3—through composition with an element-wise transformation which, according to Prop. 4.6 (ii), leaves the CIMA value unchanged. Both IGCI (6) and IMA (7) can therefore be fulfilled simultaneously, while the former on its own is inconsequential for BSS as shown in Prop. 3.1.
BSS through algorithmic information. Algorithmic information theory has previously been proposed as a unifying framework for identifiable approaches to linear BSS [67, 68], in the sense that commonly-used contrast functions could, under suitable assumptions, be interpreted as proxies for the total complexity of the mixing and the reconstructed sources. However, to the best of our knowledge, the problem of specifying suitable proxies for the complexity of nonlinear mixing functions has not yet been addressed.
We conjecture that our framework could be linked to this view, based on the additional assumption of algorithmic independence of causal mechanisms [43], thus potentially representing an approach to nonlinear BSS by minimisation of algorithmic complexity.
ICA for causal inference & causality for ICA. Past advances in ICA have inspired novel causal discovery methods [50, 64, 92]. The present work constitutes, to the best of our knowledge, the first effort to use ideas from causality (specifically ICM) for BSS. An application of the IMA principle to causal discovery or causal representation learning [88] is an interesting direction for future work.
Conclusion. We introduce IMA, a path to nonlinear BSS inspired by concepts from causality. We postulate that the influences of different sources on the observed distribution should be approximately independent, and formalise this as an orthogonality condition on the columns of the Jacobian. We prove that this constraint is generally violated by well-known spurious nonlinear ICA solutions, and propose a regularised maximum likelihood approach which we empirically demonstrate to be effective in recovering the true solution. Our IMA principle holds exactly for orthogonal coordinate transformations, and is thus of potential interest for learning spatial representations [33], robot dynamics [63], or physics problems where orthogonal reference frames are common [66].
Acknowledgements
The authors thank Aapo Hyvärinen, Adrián Javaloy Bornás, Dominik Janzing, Giambattista Parascandolo, Giancarlo Fissore, Nasim Rahaman, Patrick Burauel, Patrik Reizinger, Paul Rubenstein, Shubhangi Ghosh, and the anonymous reviewers for helpful comments and discussions.
Funding Transparency Statement
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
1. What is the focus and contribution of the paper regarding nonlinear ICA?
2. What is the proposed principle or constraint in the paper, and how does it relate to the existing literature on causal discovery?
3. How does the paper address identifiability in nonlinear ICA, and what are the results?
4. Are there any concerns or limitations regarding the proposed approach, particularly with regard to the IMA in line 199?
5. Can you provide more explanations or interpretations of the IMA, especially its information-theoretic and geometric interpretations?
Summary Of The Paper Review
Summary Of The Paper The paper aims at the identifiability of nonlinear ICA. It proposes a principle as a constraint on mixing functions, based on a modified independent causal mechanism principle from the causal discovery literature; investigates the identifiability of nonlinear ICA under such an assumption; and then applies it to nonlinear ICA. It also shows that the identifiability results can cover some non-identifiable cases in the existing literature. Review The idea of the work is interesting: it adapts the ICM principle from the related field of causal discovery to nonlinear ICA. The paper is in general well written and properly introduces the method. The theoretical analysis and examples are appreciated. My minor concern is how restrictive the proposed IMA in line 199 is, as it imposes constraints on the determinant of the Jacobian matrix. Although the information-theoretic and geometric interpretations attempt to further explain IMA, I could not benefit from the given information without further details and explanation.
NIPS
Title Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID

Abstract Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain to tackle the open-class re-identification problems. Although state-of-the-art pseudo-label-based methods [11, 54, 10, 55, 14] have achieved great success, they did not make full use of all valuable information because of the domain gap and unsatisfying clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, and target-domain clusters and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-of-the-arts on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version on unsupervised object re-ID surpasses state-of-the-art algorithms by considerable margins of 16.7% and 7.9% on the Market-1501 and MSMT17 benchmarks†.

1 Introduction

Unsupervised domain adaptation (UDA) for object re-identification (re-ID) aims at transferring the learned knowledge from the labeled source domain (dataset) to properly measure the inter-instance affinities in the unlabeled target domain (dataset). Common object re-ID problems include person re-ID and vehicle re-ID, where the source-domain and target-domain data do not share the same identities (classes). Existing UDA methods on object re-ID [38, 11, 54, 10, 55, 45] generally tackled this problem following a two-stage training scheme: (1) supervised pre-training on the source domain, and (2) unsupervised fine-tuning on the target domain. For stage-2 unsupervised fine-tuning, a pseudo-label-based strategy was found effective in state-of-the-art methods [11, 54, 10, 55], which alternates between generating pseudo classes by clustering target-domain instances and training the network with generated pseudo classes. In this way, the source-domain pre-trained network can be adapted to capture the inter-sample relations in the target domain with noisy pseudo-class labels. Although the pseudo-label-based methods have led to great performance advances, we argue that there exist two major limitations that hinder their further improvements (Figure 1 (a)). (1) During the target-domain fine-tuning, the source-domain images were either not considered [11, 54, 10, 55] or were even found harmful to the final performance [14] because of the limitations of their methodology designs. The accurate source-domain ground-truth labels are valuable but were ignored during target-domain training. (2) Since the clustering process might result in individual outliers, to ensure the reliability of the generated pseudo labels, existing methods [11, 10, 55, 14] simply discarded the outliers from being used for training.

*Dapeng Chen is the corresponding author. †Code is available at https://github.com/yxgeee/SpCL. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
However, such outliers might actually be difficult but valuable samples in the target domain, and there are generally many outliers, especially in early epochs. Simply abandoning them might critically hurt the final performance. To overcome the problems, we propose a hybrid memory to encode all available information from both source and target domains for feature learning. For the source-domain data, their ground-truth class labels can naturally provide valuable supervisions. For the target-domain data, clustering can be conducted to obtain relatively confident clusters as well as un-clustered outliers. All the source-domain class centroids, target-domain cluster centroids, and target-domain un-clustered instance features from the hybrid memory can provide supervisory signals for jointly learning discriminative feature representations across the two domains (Figure 1 (b)). A unified framework is developed for dynamically updating and distinguishing different entries in the proposed hybrid memory.

Specifically, since all the target-domain clusters and un-clustered instances are equally treated as independent classes, the clustering reliability would significantly impact the learned representations. We thus propose a self-paced contrastive learning strategy, which initializes the learning process by using the hybrid memory with the most reliable target-domain clusters. Trained with such reliable clusters, the discriminativeness of feature representations can be gradually improved and additional reliable clusters can be formed by incorporating more un-clustered instances into the new clusters. Such a strategy can effectively mitigate the effects of noisy pseudo labels and boost the feature learning process. To properly measure the cluster reliability, a novel multi-scale clustering reliability criterion is proposed, based on which only reliable clusters are preserved and other confusing clusters are disassembled back to un-clustered instances. In this way, our self-paced learning strategy gradually creates more reliable clusters to dynamically refine the hybrid memory and learning targets.

Our contributions are summarized as three-fold. (1) We propose a unified contrastive learning framework to incorporate all available information from both source and target domains for joint feature learning. It dynamically updates the hybrid memory to provide class-level, cluster-level and instance-level supervisions. (2) We design a self-paced contrastive learning strategy with a novel clustering reliability criterion to prevent training error amplification caused by noisy pseudo-class labels. It gradually generates more reliable target-domain clusters for learning better features in the hybrid memory, which in turn improves clustering. (3) Our method significantly outperforms state-of-the-arts [11, 54, 10, 55, 45] on multiple domain adaptation tasks of object re-ID with up to 5.0% mAP gains. The proposed unified framework could even boost the performance on the source domain with large margins (6.6%) by jointly training with un-annotated target-domain data, while most existing UDA methods “forget” the source domain after fine-tuning on the target domain. Our unsupervised version without labeled source-domain data on the object re-ID task significantly outperforms state-of-the-arts [26, 45, 53] by 16.7% and 7.9% in terms of mAP on the Market-1501 and MSMT17 benchmarks.

2 Related Works

Unsupervised domain adaptation (UDA) for object re-ID.
Existing UDA methods for object re-ID can be divided into two main categories, including pseudo-label-based methods [38, 10, 55, 11, 54, 62, 52, 45] and domain translation-based methods [8, 46, 5, 14]. This paper follows the former one, since the pseudo labels were found more effective in capturing the target-domain distributions. Though driven by different motivations, previous pseudo-label-based methods generally adopted a two-stage training scheme: (1) pre-training on the source domain with ground-truth IDs, and (2) adapting to the target domain with pseudo labels. The pseudo labels can be generated by either clustering instance features [38, 10, 55, 11, 54] or measuring similarities with exemplar features [62, 52, 45], where the clustering-based pipeline maintains state-of-the-art performance to date. The major challenges faced by clustering-based methods are how to improve the precision of pseudo labels and how to mitigate the effects caused by noisy pseudo labels. SSG [10] adopted human local features to assign multi-scale pseudo labels. PAST [55] proposed to utilize multiple regularizations alternately. MMT [11] proposed to generate more robust soft labels via mutual mean-teaching. AD-Cluster [54] incorporated style-translated images to improve the discriminativeness of instance features. Although various attempts along this direction have led to great performance advances, they failed to fully exploit all valuable information across the two domains, which limits their further improvements; i.e., they simply discarded both the source-domain labeled images and target-domain un-clustered outliers when fine-tuning the model on the target domain with pseudo labels.

Contrastive learning. State-of-the-art methods on unsupervised visual representation learning [33, 48, 19, 44, 65, 17, 4] are based on contrastive learning. Cast as either a dictionary look-up task [48, 17] or a consistency learning task [44, 4], a contrastive loss was adopted to learn instance-discriminative representations by treating each unlabeled sample as a distinct class. Although the instance-level contrastive loss can be used to train embeddings that generalize well to downstream tasks with fine-tuning, it does not perform well on domain adaptive object re-ID tasks, which require correctly measuring the inter-class affinities on the unsupervised target domain.

Self-paced learning. The “easy-to-hard” training scheme is at the core of self-paced learning [21], which was originally found effective in supervised learning methods, especially with noisy labels [15, 20, 24, 13]. Recently, some methods [41, 16, 6, 56, 67] incorporated the concept of self-paced learning into unsupervised learning tasks by starting the training process with the most confident pseudo labels. However, the self-paced policies designed in these methods were all based on closed-set problems with pre-defined classes, which cannot be generalized to our open-set object re-ID task with completely unknown classes on the target domain. Moreover, they did not consider how to plausibly train with hard samples that cannot be assigned confident pseudo labels all the time.

3 Methodology

To tackle the challenges in unsupervised domain adaptation (UDA) on object re-ID, we propose a self-paced contrastive learning framework (Figure 2 (a)), which consists of a CNN [22]-based encoder $f_\theta$ and a novel hybrid memory.
The key innovation of the proposed framework lies in jointly training the encoder with all the source-domain class-level, target-domain cluster-level and target-domain un-clustered instance-level supervisions, which are dynamically updated in the hybrid memory to gradually provide more confident learning targets. In order to avoid training error amplification caused by noisy clusters, the self-paced learning strategy initializes the training process with the most reliable clusters and gradually incorporates more un-clustered instances to form new reliable clusters. A novel reliability criterion is introduced to measure the quality of clusters (Figure 2 (b)). (‡Throughout this paper, the term independence is used in its idiomatic sense rather than the statistical sense.) Our training scheme alternates between two steps: (1) grouping the target-domain samples into clusters and un-clustered instances by clustering the target-domain instance features in the hybrid memory with the self-paced strategy (Section 3.2), and (2) optimizing the encoder $f_\theta$ with a unified contrastive loss and dynamically updating the hybrid memory with the encoded features (Section 3.1).

3.1 Constructing and Updating Hybrid Memory for Contrastive Learning

Given the target-domain training samples $X^t$ without any ground-truth label, we employ the self-paced clustering strategy (Section 3.2) to group the samples into clusters and un-clustered outliers. The whole training set of both domains can therefore be divided into three parts, including the source-domain samples $X^s$ with ground-truth identity labels, the target-domain pseudo-labeled data $X^t_c$ within clusters, and the target-domain instances $X^t_o$ not belonging to any cluster, i.e., $X^t = X^t_c \cup X^t_o$. State-of-the-art UDA methods [11, 54, 10, 55] simply abandon all source-domain data and target-domain un-clustered instances, and utilize only the target-domain pseudo labels for adapting the network to the target domain, which, in our opinion, is a sub-optimal solution. Instead, we design a novel contrastive loss to fully exploit the available data by treating all the source-domain classes, target-domain clusters and target-domain un-clustered instances as independent classes.

3.1.1 Unified Contrastive Learning

Given a general feature vector $f = f_\theta(x)$, $x \in X^s \cup X^t_c \cup X^t_o$, our unified contrastive loss is

$$\mathcal{L}_f = -\log \frac{\exp(\langle f, z^+ \rangle / \tau)}{\sum_{k=1}^{n^s} \exp(\langle f, w_k \rangle / \tau) + \sum_{k=1}^{n^t_c} \exp(\langle f, c_k \rangle / \tau) + \sum_{k=1}^{n^t_o} \exp(\langle f, v_k \rangle / \tau)}, \quad (1)$$

where $z^+$ indicates the positive class prototype corresponding to $f$, the temperature $\tau$ is empirically set as 0.05, and $\langle \cdot, \cdot \rangle$ denotes the inner product between two feature vectors to measure their similarity. $n^s$ is the number of source-domain classes, $n^t_c$ is the number of target-domain clusters and $n^t_o$ is the number of target-domain un-clustered instances. More specifically, if $f$ is a source-domain feature, $z^+ = w_k$ is the centroid of the source-domain class $k$ that $f$ belongs to. If $f$ belongs to the $k$-th target-domain cluster, $z^+ = c_k$ is the $k$-th cluster centroid. If $f$ is a target-domain un-clustered outlier, we would have $z^+ = v_k$ as the outlier instance feature corresponding to $f$. Intuitively, the above joint contrastive loss encourages the encoded feature vector to approach its assigned class, cluster or instance. Note that we utilize class centroids $\{w\}$ instead of learnable class weights for encoding source-domain classes, to match their semantics to those of the clusters’ or outliers’ centroids.
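As a concrete reading of Eq. (1), the sketch below scores a batch of features against all three prototype banks in a single softmax; since each prototype type contributes its own block of logits, selecting the positive reduces to a cross-entropy with an offset index. Function and argument names are ours, not the authors' released code; the paper's hybrid memory would supply w, c and v.

```python
import torch
import torch.nn.functional as F

def unified_contrastive_loss(f, w, c, v, pos_type, pos_idx, tau=0.05):
    """Eq. (1): one softmax over source class centroids w (n_s x d),
    target cluster centroids c (n_tc x d) and un-clustered instance
    features v (n_to x d); (pos_type, pos_idx) locate the positive z+."""
    logits = torch.cat([f @ w.t(), f @ c.t(), f @ v.t()], dim=1) / tau
    offset = {"class": 0, "cluster": w.size(0), "instance": w.size(0) + c.size(0)}
    target = torch.tensor([offset[t] + i for t, i in zip(pos_type, pos_idx)],
                          device=f.device)
    return F.cross_entropy(logits, target)  # mean of -log softmax at z+
```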
Our experiments (Section 4.4) show that, if the semantics of class-level, cluster-level and instance-level supervisions do not match, the performance drops significantly.

Discussion. The most significant difference between our unified contrastive loss (Eq. (1)) and previous contrastive losses [48, 17, 4, 33] is that ours jointly distinguishes classes, clusters, and un-clustered instances, while previous ones only focus on separating instances without considering any ground-truth classes or pseudo-class labels as our method does. They target the instance discrimination task but fail to properly model intra-/inter-class affinities on domain adaptive re-ID tasks.

3.1.2 Hybrid Memory

As the cluster number $n^t_c$ and the outlier instance number $n^t_o$ may change during training with the alternate clustering strategy, the class prototypes for the unified contrastive loss (Eq. (1)) are built in a non-parametric and dynamic manner. We propose a novel hybrid memory to provide the source-domain class centroids $\{w_1, \cdots, w_{n^s}\}$, target-domain cluster centroids $\{c_1, \cdots, c_{n^t_c}\}$ and target-domain un-clustered instance features $\{v_1, \cdots, v_{n^t_o}\}$. For continuously storing and updating the above three types of entries, we propose to cache the source-domain class centroids $\{w_1, \cdots, w_{n^s}\}$ and all the target-domain instance features $\{v_1, \cdots, v_{n^t}\}$ simultaneously in the hybrid memory, where $n^t$ is the number of all the target-domain instances and $n^t \neq n^t_c + n^t_o$. Without loss of generality, we assume that un-clustered features in $\{v\}$ have indices $\{1, \cdots, n^t_o\}$, while the other, clustered features in $\{v\}$ have indices from $n^t_o + 1$ to $n^t$. In other words, $\{v_{n^t_o+1}, \cdots, v_{n^t}\}$ dynamically form the cluster centroids $\{c\}$ while $\{v_1, \cdots, v_{n^t_o}\}$ remain un-clustered instances.

Memory initialization. The hybrid memory is initialized with the extracted features by performing forward computation of $f_\theta$: the initial source-domain class centroids $\{w\}$ can be obtained as the mean feature vectors of each class, while the initial target-domain instance features $\{v\}$ are directly encoded by $f_\theta$. After that, the target-domain cluster centroids $\{c\}$ are initialized with the mean feature vectors of each cluster from $\{v\}$, i.e.,

$$c_k = \frac{1}{|\mathcal{I}_k|} \sum_{v_i \in \mathcal{I}_k} v_i, \quad (2)$$

where $\mathcal{I}_k$ denotes the $k$-th cluster set that contains all the feature vectors within cluster $k$ and $|\cdot|$ denotes the number of features in the set. Note that the source-domain class centroids $\{w\}$ and the target-domain instance features $\{v\}$ are only initialized once by performing the forward computation at the beginning of the learning algorithm, and can then be continuously updated during training.

Memory update. At each iteration, the encoded feature vectors in each mini-batch are involved in updating the hybrid memory. For the source-domain class centroids $\{w\}$, the $k$-th centroid $w_k$ is updated by the mean of the encoded features belonging to class $k$ in the mini-batch as

$$w_k \leftarrow m^s w_k + (1 - m^s) \cdot \frac{1}{|\mathcal{B}_k|} \sum_{f^s_i \in \mathcal{B}_k} f^s_i, \quad (3)$$

where $\mathcal{B}_k$ denotes the feature set belonging to source-domain class $k$ in the current mini-batch and $m^s \in [0, 1]$ is a momentum coefficient for updating the source-domain class centroids. $m^s$ is empirically set as 0.2. The target-domain cluster centroids cannot be stored and updated in the same way as the source-domain class centroids, since the clustered set $X^t_c$ and un-clustered set $X^t_o$ are constantly changing.
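A minimal sketch of these update rules, assuming the memory is held in plain tensors (all function and variable names are ours, not the authors' code): Eq. (2) builds a cluster centroid from its cached members, Eq. (3) momentum-averages a source class centroid with the mini-batch class mean, and the instance-level rule of Eq. (4), given in the next paragraph, refreshes cached target features the same way.

```python
import torch

def init_cluster_centroid(v, member_idx):
    """Eq. (2): centroid of cluster k = mean of its cached member features."""
    return v[member_idx].mean(dim=0)

def update_class_centroids(w, batch_feats, batch_labels, m_s=0.2):
    """Eq. (3): momentum-average each source class centroid with the
    mini-batch mean of that class (m^s = 0.2 in the paper)."""
    for k in batch_labels.unique():
        mean_k = batch_feats[batch_labels == k].mean(dim=0)
        w[k] = m_s * w[k] + (1.0 - m_s) * mean_k
    return w

def update_instance_memory(v, batch_feats, batch_idx, m_t=0.2):
    """Eq. (4): momentum-update the cached target-domain instance entries
    touched by this mini-batch; affected cluster centroids are then
    recomputed with Eq. (2)."""
    v[batch_idx] = m_t * v[batch_idx] + (1.0 - m_t) * batch_feats
    return v
```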
As the hybrid memory caches all the target-domain features $\{v\}$, each encoded feature vector $f^t_i$ in the mini-batch is utilized to update its corresponding instance entry $v_i$ by

$$v_i \leftarrow m^t v_i + (1 - m^t) f^t_i, \quad (4)$$

where $m^t \in [0, 1]$ is the momentum coefficient for updating the target-domain instance features and is set as 0.2 in our experiments. Given the updated instance memory $v_i$, if $f^t_i$ belongs to cluster $k$, the corresponding centroid $c_k$ needs to be updated with Eq. (2).

Discussion. The hybrid memory has two main differences from the memory used in [48, 17]: (1) Our hybrid memory caches prototypes for both centroids and instances, while the memory in [48, 17] only provides instance-level prototypes. Beyond the centroids, we for the first time treat clusters and instances as equal classes; (2) The cluster/instance learning targets provided by our hybrid memory are gradually updated and refined, while the previous memory [48, 17] only supports fixed instance-level targets. Note that our self-paced strategy (to be discussed in Section 3.2) dynamically determines confident clusters and un-clustered instances. The momentum updating strategy is inspired by [17, 43], and we further introduce how to update the hybrid prototypes, i.e., centroids and instances. Note that we employ different updating strategies for class centroids (Eq. (3)) and cluster centroids (Eqs. (4) & (2)), since source-domain classes are fixed while target-domain clusters change dynamically.

3.2 Self-paced Learning with Reliable Clusters

A simple way to split the target-domain data into clusters $X^t_c$ and un-clustered outliers $X^t_o$ is to cluster the target-domain instance features $\{v_1, \cdots, v_{n^t}\}$ from the hybrid memory with a certain algorithm (e.g., DBSCAN [9]). Since all the target-domain clusters and un-clustered outlier instances are treated as distinct classes in Eq. (1), the clustering reliability significantly impacts the learned representations. If the clustering were perfect, merging all the instances into their true clusters would no doubt improve the final performance (denoted as “oracle” in Table 5). However, in practice, merging an instance into a wrong cluster does more harm than good. A self-paced learning strategy is therefore introduced: in the re-clustering step before each epoch, only the most reliable clusters are preserved and the unreliable clusters are disassembled back into un-clustered instances. A reliability criterion is proposed to identify unreliable clusters by measuring their independence and compactness.

Independence of clusters. A reliable cluster should be independent from other clusters and individual samples. Intuitively, if a cluster is far away from other samples, it can be considered highly independent. However, due to the uneven density in the latent space, we cannot naïvely use the distances between the cluster centroid and outside-cluster samples to measure cluster independence. Generally, the clustering results can be tuned by altering certain hyper-parameters of the clustering criterion. One can loosen the clustering criterion to possibly include more samples in each cluster, or tighten it to possibly include fewer samples in each cluster. We denote the samples within the same cluster as $f^t_i$ by $\mathcal{I}(f^t_i)$.
We propose the following metric to measure cluster independence, formulated as an intersection-over-union (IoU) score:

$$\mathcal{R}_{\text{indep}}(f^t_i) = \frac{|\mathcal{I}(f^t_i) \cap \mathcal{I}_{\text{loose}}(f^t_i)|}{|\mathcal{I}(f^t_i) \cup \mathcal{I}_{\text{loose}}(f^t_i)|} \in [0, 1], \quad (5)$$

where $\mathcal{I}_{\text{loose}}(f^t_i)$ is the cluster set containing $f^t_i$ when the clustering criterion becomes looser. A larger $\mathcal{R}_{\text{indep}}(f^t_i)$ indicates a more independent cluster for $f^t_i$, i.e., even if one loosens the clustering criterion, no more samples would be included into the new cluster $\mathcal{I}_{\text{loose}}(f^t_i)$. Samples within the same cluster set (e.g., $\mathcal{I}(f^t_i)$) generally have the same independence score.

Compactness of clusters. A reliable cluster should also be compact, i.e., the samples within the same cluster should have small inter-sample distances. In an extreme case, when a cluster is most compact, all the samples in the cluster have zero inter-sample distances, and its samples would not be split into different clusters even when the clustering criterion is tightened. Based on this assumption, we can define the following metric to determine the compactness of the clustered point $f^t_i$:

$$\mathcal{R}_{\text{comp}}(f^t_i) = \frac{|\mathcal{I}(f^t_i) \cap \mathcal{I}_{\text{tight}}(f^t_i)|}{|\mathcal{I}(f^t_i) \cup \mathcal{I}_{\text{tight}}(f^t_i)|} \in [0, 1], \quad (6)$$

where $\mathcal{I}_{\text{tight}}(f^t_i)$ is the cluster set containing $f^t_i$ when tightening the criterion. A larger $\mathcal{R}_{\text{comp}}(f^t_i)$ indicates smaller inter-sample distances around $f^t_i$ within $\mathcal{I}(f^t_i)$, since a cluster with larger inter-sample distances is more likely to include fewer points when a tightened criterion is adopted. The same cluster’s data points may have different compactness scores due to the uneven density.

Given the above metrics for measuring cluster reliability, we can compute the independence and compactness scores for each data point within clusters. We set up $\alpha, \beta \in [0, 1]$ as the independence and compactness thresholds for determining reliable clusters. Specifically, we preserve independent clusters with compact data points whose $\mathcal{R}_{\text{indep}} > \alpha$ and $\mathcal{R}_{\text{comp}} > \beta$, while the remaining data are treated as un-clustered outlier instances. With the update of the encoder $f_\theta$ and the target-domain instance features $\{v\}$ from the hybrid memory, more reliable clusters can be gradually created to further improve the feature learning. The overall algorithm is detailed in Alg. 1 of Appendix A.
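One way to read Eqs. (5)-(6) in code: run the clustering at a default, a loosened, and a tightened setting, and score each sample by set IoU. This is a minimal sketch; the eps values are illustrative placeholders, not the paper's tuned DBSCAN settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def reliability_scores(feats, eps=0.6, delta=0.02):
    """Per-sample R_indep / R_comp (Eqs. (5)-(6)) from three DBSCAN runs."""
    base = DBSCAN(eps=eps).fit_predict(feats)
    loose = DBSCAN(eps=eps + delta).fit_predict(feats)   # looser criterion
    tight = DBSCAN(eps=eps - delta).fit_predict(feats)   # tighter criterion

    def members(labels, i):
        # cluster set containing sample i (a noise point is its own set)
        return set(np.flatnonzero(labels == labels[i])) if labels[i] != -1 else {i}

    def iou(a, b):
        return len(a & b) / len(a | b)

    r_indep = np.array([iou(members(base, i), members(loose, i)) for i in range(len(feats))])
    r_comp = np.array([iou(members(base, i), members(tight, i)) for i in range(len(feats))])
    return r_indep, r_comp  # keep samples with r_indep > alpha and r_comp > beta
```

Samples failing either threshold fall back to un-clustered outlier instances, exactly the disassembling step described above.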
4 Experiments

4.1 Datasets and Evaluation Protocol

We evaluate our proposed method on both the mainstream real→real adaptation tasks and the more challenging synthetic→real adaptation tasks in person re-ID and vehicle re-ID problems. As shown in Table 1, two real-world person datasets and one synthetic person dataset, as well as two real-world vehicle datasets and one synthetic vehicle dataset, are adopted in our experiments.

Person re-ID datasets†. The Market-1501 and MSMT17 are widely used real-world person image datasets in domain adaptive tasks, among which MSMT17 has the most images and is the most challenging. The synthetic PersonX [39] is generated based on Unity [36] with manually designed obstacles, e.g., random occlusion, resolution and illumination differences, etc. (†The DukeMTMC-reID [37] dataset has been taken down and should no longer be used.)

Vehicle re-ID datasets. Although domain adaptive person re-ID has long been studied, the same task on vehicles has not been fully explored. We conduct experiments with the real-world VeRi-776 and VehicleID and the synthetic VehicleX datasets. VehicleX [32] is also generated by the Unity engine [51, 42] and further translated to have the real-world style by SPGAN [8].

Evaluation protocol. In the experiments, only ground-truth IDs on the source-domain datasets are provided for training. Mean average precision (mAP) and the cumulative matching characteristic (CMC), proposed in [58], are adopted to evaluate the methods’ performances on the target-domain datasets. No post-processing technique, e.g., re-ranking [60] or multi-query fusion [58], is adopted.

4.2 Implementation Details

We adopt an ImageNet-pretrained [7] ResNet-50 [18] as the backbone for the encoder $f_\theta$. Following the clustering-based UDA methods [11, 10, 38], we use DBSCAN [9] for clustering before each epoch. The maximum distance between neighbor points, which is the most important parameter in DBSCAN, is tuned to loosen or tighten the clustering in our proposed self-paced learning strategy. We use a constant threshold $\alpha$ and a dynamic threshold $\beta$ for identifying independent clusters with the most compact points by the reliability criterion. More details can be found in Appendix C.

4.3 Comparison with State-of-the-arts

UDA performance on the target domain. We compare our proposed framework with state-of-the-art UDA methods on multiple domain adaptation tasks in Table 2, including three real→real and three synthetic→real tasks. The tasks in Tables 2b & 2c were not surveyed by previous methods, so we implement the state-of-the-art MMT [11] on these datasets for comparison. Our method significantly outperforms all state-of-the-arts on both person and vehicle datasets with a plain ResNet-50 backbone, achieving 2-4% improvements in terms of mAP on the common real→real tasks and up to 5.0% increases on the challenging synthetic→real tasks. An inspiring discovery is that a synthetic→real task can achieve competitive performance with the real→real task on the same target-domain dataset (e.g., VeRi-776), which indicates that we are one step closer to no longer needing any manually annotated real-world images in the future.

Further improvements on the source domain. State-of-the-art UDA methods inevitably forget the source-domain knowledge after fine-tuning the pretrained networks on the target domain, as demonstrated by MMT [11] in Table 3. In contrast, our proposed unified framework can effectively model complex inter-sample relations across the two domains, boosting the source-domain performance by up to 6.6% mAP. Note that the experiments of “Encoder train/test on the source domain” adopt the same training objective (Eq. (1)) as our proposed method, except that only the source-domain class centroids $\{w\}$ are available. Our method also outperforms state-of-the-art supervised re-ID methods [12, 59, 64, 40] on the source domain without using either multiple losses or more complex networks. Such a phenomenon indicates that our method could be applied to improve supervised training by incorporating unlabeled data without extra human labor.

Unsupervised re-ID without any labeled training data. Another stream of research focuses on training the re-ID model without any labeled data, i.e., excluding source-domain data from the training set. Our method can be easily generalized to such a setting by discarding the source-domain class centroids $\{w\}$ from both the hybrid memory and the training objective (see Alg. 2 in Appendix A for details). As shown in Table 4, our method considerably outperforms state-of-the-arts by up to 16.7% improvements in terms of mAP.
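For reference, the mAP numbers above average a per-query score like the following simplified average precision (our own minimal version of the protocol from [58]; the official toolkits additionally handle junk images and same-camera filtering):

```python
import numpy as np

def average_precision(good_mask):
    """AP for one query: good_mask[r] is True if the gallery item at
    rank r shares the query identity."""
    hits = np.flatnonzero(good_mask)        # 0-based ranks of true matches
    if hits.size == 0:
        return 0.0
    precision_at_hit = (np.arange(hits.size) + 1) / (hits + 1)
    return float(precision_at_hit.mean())   # mAP = mean over all queries
```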
We also implement the state-of-the-art unsupervised method MoCo [17], which adopts the conventional contrastive loss, and, unfortunately, it is inapplicable to unsupervised re-ID tasks. MoCo [17] underperforms because it treats each instance as a single class, while the core of re-ID tasks is to encode and model intra-/inter-class variations. MoCo [17] is good at unsupervised pre-training, but its resulting networks need fine-tuning with (pseudo) class labels.

4.4 Ablation Studies

We analyse the effectiveness of our proposed unified contrastive loss with hybrid memory and the self-paced learning strategy in Table 5. The “oracle” experiment adopts the target-domain ground-truth IDs as cluster labels for training, reflecting the maximal performance achievable with our pipeline.

Unified contrastive learning mechanism. In order to verify the necessity of each type of class in the unified contrastive loss (Eq. (1)), we conduct experiments where any one of the source-domain class-level, target-domain cluster-level or un-clustered instance-level supervisions is removed (Table 5a). The baseline “Src. class” adopts only source-domain images with ground-truth IDs for training. “Src. class + tgt. instance” treats each target-domain sample as a distinct class. It totally fails, with even worse results than the baseline “Src. class”, showing that directly generalizing the conventional contrastive loss to UDA tasks is inapplicable. “Src. class + tgt. cluster” follows existing UDA methods [11, 10, 55, 14] by simply discarding un-clustered instances from training. Noticeable performance drops are observed, especially without the self-paced policy to constrain reliable clusters. Note that the only difference between “Src. class + tgt. cluster (w/ self-paced)” and “Ours (full)” is whether outliers are used for training, and the large performance gaps are due to the facts that: 1) there are many un-clustered outliers (> half of all samples), especially in early epochs; 2) outliers serve as difficult samples, and excluding them over-simplifies the training task; 3) “Src. class + tgt. cluster” does not update outliers in the memory, making them unsuitable for clustering in the later epochs.

As illustrated in Table 5b, we further verify the necessity of unified training in unsupervised object re-ID tasks. We observe the same trend as in the domain adaptive tasks: solving the problem via instance discrimination (“tgt. instance”) fails. What is different is that, even with our self-paced strategy, training with clusters alone (“tgt. cluster”) also fails. That is due to the fact that only a few samples take part in the training if the outliers are discarded, undoubtedly leading to training collapse. Note that previous unsupervised re-ID methods [25, 53] which abandoned outliers did not fail, since they did not utilize a memory bank that requires all the entries to be continuously updated.

We adopt the non-parametric class centroids to supervise the source-domain feature learning; however, conventional methods generally adopt a learnable classifier for supervised learning. “Src. class → Src. learnable weights” in Table 5a is therefore conducted to verify the necessity of using source-domain class centroids for training to match the semantics of the target-domain training supervisions. We also test the effect of not extending negative classes across the different types of contrasts. For instance, source-domain samples only treat non-corresponding source-domain classes as their negative classes.
“Ours w/o unified contrast” shows inferior performance in both Tables 5a and 5b. This indicates the effectiveness of the unified contrastive learning between all types of classes in Eq. (1).

Self-paced learning strategy. We propose the self-paced learning strategy to preserve the most reliable clusters for providing stronger supervisions. The intuition is to measure the stability of clusters via hierarchical structures, i.e., a reliable cluster should be consistent across clusterings at multiple levels. $\mathcal{R}_{\text{indep}}$ and $\mathcal{R}_{\text{comp}}$ are therefore proposed to measure the independence and compactness of clusters, respectively. To verify the effectiveness of this strategy, we evaluate our framework when removing either $\mathcal{R}_{\text{indep}}$ or $\mathcal{R}_{\text{comp}}$, or both of them. Obvious performance drops are observed under all these settings, e.g., a 4.9% mAP drop when removing both $\mathcal{R}_{\text{indep}}$ and $\mathcal{R}_{\text{comp}}$ in Table 5b. We illustrate the number of clusters and their corresponding Normalized Mutual Information (NMI) scores during training on MSMT17→Market-1501 in Figure 3. It can be observed that the quantity and quality of clusters are closer to the ground-truth IDs with the proposed self-paced learning strategy, regardless of the un-clustered instance-level contrast, indicating higher reliability of the clusters and the effectiveness of the self-paced strategy.

5 Discussion and Conclusion

Our method has shown considerable improvements over a variety of unsupervised or domain adaptive object re-ID tasks. The supervised performance can also be boosted without extra labeling effort by incorporating unlabeled data for training in our framework. The core lies in exploiting all available data for joint training with hybrid supervision. Positive as the results are, there still exists a gap from the oracle, suggesting that the pseudo-class labels may not be satisfactory enough even with the proposed self-paced strategy. Further studies are called for. Beyond the object re-ID task, our method has great potential in other unsupervised learning tasks, which remains to be explored.

Broader Impact

Our method can help to identify and track different types of objects (e.g., vehicles, cyclists, pedestrians, etc.) across different cameras (domains), thus boosting the development of smart retail, smart transportation, and smart security systems in the future metropolises. In addition, our proposed self-paced contrastive learning is quite general and not limited to the specific research field of object re-ID. It can be well extended to broader research areas, including unsupervised and semi-supervised representation learning. However, object re-ID systems, when applied to identify pedestrians and vehicles in surveillance systems, might give rise to the infringement of people’s privacy, since such re-ID systems often rely on non-consensual surveillance data for training, i.e., it is unlikely that all human subjects even knew they were being recorded. Therefore, governments and officials need to carefully establish strict regulations and laws to control the usage of re-ID technologies. Otherwise, re-ID technologies can potentially equip malicious actors with the ability to surveil pedestrians or vehicles through multiple CCTV cameras without their consent. The research community should also avoid using datasets with ethics issues; e.g., DukeMTMC [37], which has been taken down due to the violation of data collection terms, should no longer be used. We do not evaluate our method on DukeMTMC-related benchmarks either.
Furthermore, we should be cautious about misidentification by re-ID systems, to avoid possible disturbance. Also, note that the demographic makeup of the datasets used is not representative of the broader population.

Acknowledgements This work is supported in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants (Nos. CUHK14208417, CUHK14207319), in part by the Hong Kong Innovation and Technology Support Program (No. ITS/312/18FX), and in part by the CUHK Strategic Fund.
1. What is the main contribution of the paper in the field of domain adaptation for ReID?
2. What are the strengths of the proposed method, particularly in utilizing all data during domain adaptation?
3. Do you have any concerns or questions regarding the experimental results, such as the poor performance of reimplemented MoCo?
4. How does the reviewer assess the novelty and effectiveness of the proposed hybrid memory design and self-paced learning mechanism?
5. Are there any areas where the paper could be improved, such as providing clearer descriptions of certain setups and differences between variants of the method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a self-paced contrastive learning framework that utilises a hybrid memory to jointly distinguish source-domain classes, target-domain clusters and un-clustered instances. Significant improvements of the proposed method over the baselines are shown on multiple benchmarks.

Strengths
1. The paper proposes an innovative way to fully utilize all data during domain adaptation for ReID, while other methods discard source-domain knowledge and target-domain outliers.
2. To include different sources for training, the authors define a unified contrastive loss to jointly consider three sources of supervision, with appropriate adaptation on the source domain to match semantics.
3. The hybrid memory design provides centroids/instances for the unified contrastive loss, while live updates for the source domain and self-paced clustering for the target domain refresh the memory.
4. The self-paced learning mechanism helps form more reliable cluster centroids by introducing two metrics, independence and compactness, to make the clustering process self-adaptive.
5. Abundant studies show the effectiveness of the designed components, together with an oracle setup to reveal a possible upper bound.

Weaknesses
1. The poor result of the reimplemented MoCo in Section 4.3, Table 4 needs further explanation and reasoning.
2. A clearer description is needed of the difference between the setup of “Src. class + tgt. cluster (w/ self-paced)” in the ablation study (Table 5) and the full model. If the only difference is the target instances, where does the ~10-point difference in mAP come from?
NIPS
Title Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID Abstract Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain to tackle the open-class re-identification problems. Although state-of-the-art pseudo-label-based methods [11, 54, 10, 55, 14] have achieved great success, they did not make full use of all valuable information because of the domain gap and unsatisfying clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, and target-domain clusters and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-ofthe-arts on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version on unsupervised object re-ID surpasses state-of-the-art algorithms by considerable 16.7% and 7.9% on Market-1501 and MSMT17 benchmarks†. 1 Introduction Unsupervised domain adaptation (UDA) for object re-identification (re-ID) aims at transferring the learned knowledge from the labeled source domain (dataset) to properly measure the inter-instance affinities in the unlabeled target domain (dataset). Common object re-ID problems include person re-ID and vehicle re-ID, where the source-domain and target-domain data do not share the same identities (classes). Existing UDA methods on object re-ID [38, 11, 54, 10, 55, 45] generally tackled this problem following a two-stage training scheme: (1) supervised pre-training on the source domain, and (2) unsupervised fine-tuning on the target domain. For stage-2 unsupervised fine-tuning, a pseudo-label-based strategy was found effective in state-of-the-art methods [11, 54, 10, 55], which alternates between generating pseudo classes by clustering target-domain instances and training the network with generated pseudo classes. In this way, the source-domain pre-trained network can be adapted to capture the inter-sample relations in the target domain with noisy pseudo-class labels. Although the pseudo-label-based methods have led to great performance advances, we argue that there exist two major limitations that hinder their further improvements (Figure 1 (a)). (1) During the target-domain fine-tuning, the source-domain images were either not considered [11, 54, 10, 55] or were even found harmful to the final performance [14] because of the limitations of their methodology ⇤Dapeng Chen is the corresponding author. †Code is available at https://github.com/yxgeee/SpCL. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. designs. The accurate source-domain ground-truth labels are valuable but were ignored during target-domain training. (2) Since the clustering process might result in individual outliers, to ensure the reliability of the generated pseudo labels, existing methods [11, 10, 55, 14] simply discarded the outliers from being used for training. 
However, such outliers might actually be difficult but valuable samples in the target domain and there are generally many outliers especially in early epochs. Simply abandoning them might critically hurt the final performance. To overcome the problems, we propose a hybrid memory to encode all available information from both source and target domains for feature learning. For the source-domain data, their ground-truth class labels can naturally provide valuable supervisions. For the target-domain data, clustering can be conducted to obtain relatively confident clusters as well as un-clustered outliers. All the sourcedomain class centroids, target-domain cluster centroids, and target-domain un-clustered instance features from the hybrid memory can provide supervisory signals for jointly learning discriminative feature representations across the two domains (Figure 1 (b)). A unified framework is developed for dynamically updating and distinguishing different entries in the proposed hybrid memory. Specifically, since all the target-domain clusters and un-clustered instances are equally treated as independent classes, the clustering reliability would significantly impact the learned representations. We thus propose a self-paced contrastive learning strategy, which initializes the learning process by using the hybrid memory with the most reliable target-domain clusters. Trained with such reliable clusters, the discriminativeness of feature representations can be gradually improved and additional reliable clusters can be formed by incorporating more un-clustered instances into the new clusters. Such a strategy can effectively mitigate the effects of noisy pseudo labels and boost the feature learning process. To properly measure the cluster reliability, a novel multi-scale clustering reliability criterion is proposed, based on which only reliable clusters are preserved and other confusing clusters are disassembled back to un-clustered instances. In this way, our self-paced learning strategy gradually creates more reliable clusters to dynamically refine the hybrid memory and learning targets. Our contributions are summarized as three-fold. (1) We propose a unified contrastive learning framework to incorporate all available information from both source and target domains for joint feature learning. It dynamically updates the hybrid memory to provide class-level, cluster-level and instance-level supervisions. (2) We design a self-paced contrastive learning strategy with a novel clustering reliability criterion to prevent training error amplification caused by noisy pseudo-class labels. It gradually generates more reliable target-domain clusters for learning better features in the hybrid memory, which in turn, improves clustering. (3) Our method significantly outperforms state-of-the-arts [11, 54, 10, 55, 45] on multiple domain adaptation tasks of object re-ID with up to 5.0% mAP gains. The proposed unified framework could even boost the performance on the source domain with large margins (6.6%) by jointly training with un-annotated target-domain data, while most existing UDA methods “forget” the source domain after fine-tuning on the target domain. Our unsupervised version without labeled source-domain data on object re-ID task significantly outperforms state-of-the-arts [26, 45, 53] by 16.7% and 7.9% in terms of mAP on Market-1501 and MSMT17 benchmarks. 2 Related Works Unsupervised domain adaptation (UDA) for object re-ID. 
Existing UDA methods for object re-ID can be divided into two main categories, including pseudo-label-based methods [38, 10, 55, 11, 54, 62, 52, 45] and domain translation-based methods [8, 46, 5, 14]. This paper follows the former one since the pseudo labels were found more effective to capture the target-domain distributions. Though driven by different motivations, previous pseudo-label-based methods generally adopted a two-stage training scheme: (1) pre-training on the source domain with ground-truth IDs, and (2) adapting the target domain with pseudo labels. The pseudo labels can be generated by either clustering instance features [38, 10, 55, 11, 54] or measuring similarities with exemplar features [62, 52, 45], where the clustering-based pipeline maintains state-of-the-art performance to date. The major challenges faced by clustering-based methods is how to improve the precision of pseudo labels and how to mitigate the effects caused by noisy pseudo labels. SSG [10] adopted human local features to assign multi-scale pseudo labels. PAST [55] introduced to utilize multiple regularizations alternately. MMT [11] proposed to generate more robust soft labels via the mutual mean-teaching. AD-Cluster [54] incorporated style-translated images to improve the discriminativeness of instance features. Although various attempts along this direction have led to great performance advances, they ignored to fully exploit all valuable information across the two domains which limits their further improvements, i.e., they simply discarded both the source-domain labeled images and target-domain un-clustered outliers when fine-tuning the model on the target domain with pseudo labels. Contrastive learning. State-of-the-art methods on unsupervised visual representation learning [33, 48, 19, 44, 65, 17, 4] are based on the contrastive learning. Being cast as either the dictionary look-up task [48, 17] or the consistent learning task [44, 4], a contrastive loss was adopted to learn instance discriminative representations by treating each unlabeled sample as a distinct class. Although the instance-level contrastive loss could be used to train embeddings that can be generalized well to downstream tasks with fine-tuning, it does not perform well on the domain adaptive object re-ID tasks which require to correctly measure the inter-class affinities on the unsupervised target domain. Self-paced learning. The “easy-to-hard” training scheme is at the core of self-paced learning [21], which was originally found effective in supervised learning methods, especially with noisy labels [15, 20, 24, 13]. Recently, some methods [41, 16, 6, 56, 67] incorporated the conception of self-paced learning into unsupervised learning tasks by starting the training process with the most confident pseudo labels. However, the self-paced policies designed in these methods were all based on the close-set problems with pre-defined classes, which cannot be generalized to our open-set object re-ID task with completely unknown classes on the target domain. Moreover, they did not consider how to plausibly train with hard samples that cannot be assigned confident pseudo labels all the time. 3 Methodology To tackle the challenges in unsupervised domain adaptation (UDA) on object re-ID, we propose a self-paced contrastive learning framework (Figure 2 (a)), which consists of a CNN [22]-based encoder f✓ and a novel hybrid memory. 
The key innovation of the proposed framework lies in jointly training the encoder with all the source-domain class-level, target-domain cluster-level and target-domain un-clustered instance-level supervisions, which are dynamically updated in the hybrid memory to gradually provide more confident learning targets. In order to avoid training error amplification caused by noisy clusters, the self-paced learning strategy initializes the training process with the most ‡Throughout this paper, the term independence is used in its idiomatic sense rather than the statistical sense. reliable clusters and gradually incorporates more un-clustered instances to form new reliable clusters. A novel reliability criterion is introduced to measure the quality of clusters (Figure 2 (b)). Our training scheme alternates between two steps: (1) grouping the target-domain samples into clusters and un-clustered instances by clustering the target-domain instance features in the hybrid memory with the self-paced strategy (Section 3.2), and (2) optimizing the encoder f✓ with a unified contrastive loss and dynamically updating the hybrid memory with encoded features (Section 3.1). 3.1 Constructing and Updating Hybrid Memory for Contrastive Learning Given the target-domain training samples Xt without any ground-truth label, we employ the selfpaced clustering strategy (Section 3.2) to group the samples into clusters and the un-clustered outliers. The whole training set of both domains can therefore be divided into three parts, including the source-domain samples Xs with ground-truth identity labels, the target-domain pseudo-labeled data Xtc within clusters and the target-domain instances Xto not belonging to any cluster, i.e., Xt = Xtc[Xto. State-of-the-art UDA methods [11, 54, 10, 55] simply abandon all source-domain data and targetdomain un-clustered instances, and utilize only the target-domain pseudo labels for adapting the network to the target domain, which, in our opinion, is a sub-optimal solution. Instead, we design a novel contrastive loss to fully exploit available data by treating all the source-domain classes, target-domain clusters and target-domain un-clustered instances as independent classes. 3.1.1 Unified Contrastive Learning Given a general feature vector f = f✓(x), x 2 Xs [ Xtc [ Xto, our unified contrastive loss is Lf = log exp (hf , z+i/⌧) Pns k=1 exp (hf ,wki/⌧) + Pntc k=1 exp (hf , cki/⌧) + Pnto k=1 exp (hf ,vki/⌧) , (1) where z+ indicates the positive class prototype corresponding to f , the temperature ⌧ is empirically set as 0.05 and h·, ·i denotes the inner product between two feature vectors to measure their similarity. ns is the number of source-domain classes, ntc is the number of target-domain clusters and nto is the number of target-domain un-clustered instances. More specifically, if f is a source-domain feature, z+ = wk is the centroid of the source-domain class k that f belongs to. If f belongs to the k-th target-domain cluster, z+ = ck is the k-th cluster centroid. If f is a target-domain un-clustered outlier, we would have z+ = vk as the outlier instance feature corresponding to f . Intuitively, the above joint contrastive loss encourages the encoded feature vector to approach its assigned classes, clusters or instances. Note that we utilize class centroids {w} instead of learnable class weights for encoding source-domain classes to match their semantics to those of the clusters’ or outliers’ centroids. 
Our experiments (Section 4.4) show that, if the semantics of class-level, cluster-level and instance-level supervisions do not match, the performance drops significantly. Discussion. The most significant difference between our unified contrastive loss (Eq. (1)) and previous contrastive losses [48, 17, 4, 33] is that ours jointly distinguishes classes, clusters, and unclustered instances, while previous ones only focus on separating instances without considering any ground-truth classes or pseudo-class labels as our method does. They target at instance discrimination task but fail in properly modeling intra-/inter-class affinities on domain adaptive re-ID tasks. 3.1.2 Hybrid Memory As the cluster number ntc and outlier instance number nto may change during training with the alternate clustering strategy, the class prototypes for the unified contrastive loss (Eq. (1)) are built in a nonparametric and dynamic manner. We propose a novel hybrid memory to provide the source-domain class centroids {w1, · · · ,wns}, target-domain cluster centroids {c1, · · · , cntc} and target-domain un-clustered instance features {v1, · · · ,vnto}. For continuously storing and updating the above three types of entries, we propose to cache source-domain class centroids {w1, · · · ,wns} and all the target-domain instance features {v1, · · · ,vnt} simultaneously in the hybrid memory, where nt is the number of all the target-domain instances and nt 6= ntc + nto. Without loss of generality, we assume that un-clustered features in {v} have indices {1, · · · , nto}, while other clustered features in {v} have indices from nto + 1 to nt. In other words, {vnto+1, · · · ,vnt} dynamically form the cluster centroids {c} while {v1, · · · ,vnto} remain un-clustered instances. Memory initialization. The hybrid memory is initialized with the extracted features by performing forward computation of f✓: the initial source-domain class centroids {w} can be obtained as the mean feature vectors of each class, while the initial target-domain instance features {v} are directly encoded by f✓. After that, the target-domain cluster centroids {c} are initialized with the mean feature vectors of each cluster from {v}, i.e., ck = 1 |Ik| X vi2Ik vi, (2) where Ik denotes the k-th cluster set that contains all the feature vectors within cluster k and | · | denotes the number of features in the set. Note that the source-domain class centroids {w} and the target-domain instance features {v} are only initialized once by performing the forward computation at the beginning of the learning algorithm, and then can be continuously updated during training. Memory update. At each iteration, the encoded feature vectors in each mini-batch would be involved in hybrid memory updating. For the source-domain class centroids {w}, the k-th centroid wk is updated by the mean of the encoded features belonging to class k in the mini-batch as wk mswk + (1 ms) · 1 |Bk| X fsi 2Bk fsi , (3) where Bk denotes the feature set belonging to source-domain class k in the current mini-batch and ms 2 [0, 1] is a momentum coefficient for updating source-domain class centroids. ms is empirically set as 0.2. The target-domain cluster centroids cannot be stored and updated in the same way as the sourcedomain class centroids, since the clustered set Xtc and un-clustered set Xto are constantly changing. 
As the hybrid memory caches all the target-domain features {v}, each encoded feature vector f ti in the mini-batch is utilized to update its corresponding instance entry vi by vi mtvi + (1 mt)f ti , (4) where mt 2 [0, 1] is the momentum coefficient for update target-domain instance features and is set as 0.2 in our experiments. Given the updated instance memory vi, if f ti belongs to the cluster k, the corresponding centroid ck needs to be updated with Eq. (2). Discussion. The hybrid memory has two main differences from the memory used in [48, 17]: (1) Our hybrid memory caches prototypes for both the centroids and instances, while the memory in [48, 17] only provides instance-level prototypes. Other than the centroids, we for the first time treat clusters and instances as equal classes; (2) The cluster/instance learning targets provided by our hybrid memory are gradually updated and refined, while previous memory [48, 17] only supports fixed instance-level targets. Note that our self-paced strategy (will be discussed in Section 3.2) dynamically determines confident clusters and un-clustered instances. The momentum updating strategy is inspired by [17, 43], and we further introduce how to update hybrid prototypes, i.e., centroids and instances. Note that we employ different updating strategies for class centroids (Eq. (3)) and cluster centroids (Eq. (4)&(2)) since source-domain classes are fixed while target-domain clusters are dynamically changed. 3.2 Self-paced Learning with Reliable Clusters A simple way to split the target-domain data into clusters Xtc and un-clustered outliers Xto is to cluster the target-domain instance features {v1, · · · ,vnt} from the hybrid memory by a certain algorithm (e.g., DBSCAN [9]). Since all the target-domain clusters and un-clustered outlier instances are treated as distinct classes in Eq. (1), the clustering reliability would significantly impact the learned representations. If the clustering is perfect, merging all the instances into their true clusters would no doubt improve the final performance (denotes as “oracle” in Table 5). However, in practice, merging an instance into a wrong cluster does more harm than good. A self-paced learning strategy is therefore introduced, where in the re-clustering step before each epoch, only the most reliable clusters are preserved and the unreliable clusters are disassembled back to un-clustered instances. A reliability criterion is proposed to identify unreliable clusters by measuring the independence and compactness. Independence of clusters. A reliable cluster should be independent from other clusters and individual samples. Intuitively, if a cluster is far away from other samples, it can be considered as highly independent. However, due to the uneven density in the latent space, we cannot naïvely use the distances between the cluster centroid and outside-cluster samples to measure the cluster independence. Generally, the clustering results can be tuned by altering certain hyper-parameters of the clustering criterion. One can loosen the clustering criterion to possibly include more samples in each cluster or tighten the clustering criterion to possibly include fewer samples in each cluster. We denote the samples within the same cluster of f ti as I(f ti ). 
3.2 Self-paced Learning with Reliable Clusters

A simple way to split the target-domain data into clusters $\mathbb{X}_c^t$ and un-clustered outliers $\mathbb{X}_o^t$ is to cluster the target-domain instance features $\{v_1, \cdots, v_{n^t}\}$ from the hybrid memory with a certain algorithm (e.g., DBSCAN [9]). Since all the target-domain clusters and un-clustered outlier instances are treated as distinct classes in Eq. (1), the clustering reliability significantly impacts the learned representations. If the clustering were perfect, merging all the instances into their true clusters would no doubt improve the final performance (denoted as "oracle" in Table 5). In practice, however, merging an instance into a wrong cluster does more harm than good. A self-paced learning strategy is therefore introduced: in the re-clustering step before each epoch, only the most reliable clusters are preserved, and the unreliable clusters are disassembled back into un-clustered instances. A reliability criterion is proposed to identify unreliable clusters by measuring their independence and compactness.

Independence of clusters. A reliable cluster should be independent from other clusters and individual samples. Intuitively, if a cluster is far away from other samples, it can be considered highly independent. However, due to the uneven density in the latent space, we cannot naïvely use the distances between the cluster centroid and outside-cluster samples to measure the cluster independence. Generally, the clustering results can be tuned by altering certain hyper-parameters of the clustering criterion: one can loosen the criterion to possibly include more samples in each cluster, or tighten it to possibly include fewer samples. We denote the set of samples within the same cluster as $f_i^t$ by $\mathcal{I}(f_i^t)$. We propose the following metric to measure the cluster independence, formulated as an intersection-over-union (IoU) score,
$$\mathcal{R}_{indep}(f_i^t) = \frac{|\mathcal{I}(f_i^t) \cap \mathcal{I}_{loose}(f_i^t)|}{|\mathcal{I}(f_i^t) \cup \mathcal{I}_{loose}(f_i^t)|} \in [0, 1], \quad (5)$$
where $\mathcal{I}_{loose}(f_i^t)$ is the cluster set containing $f_i^t$ when the clustering criterion becomes looser. A larger $\mathcal{R}_{indep}(f_i^t)$ indicates a more independent cluster for $f_i^t$, i.e., even if one loosens the clustering criterion, no more samples would be absorbed into the new cluster $\mathcal{I}_{loose}(f_i^t)$. Samples within the same cluster set (e.g., $\mathcal{I}(f_i^t)$) generally have the same independence score.

Compactness of clusters. A reliable cluster should also be compact, i.e., the samples within the same cluster should have small inter-sample distances. In the extreme case where a cluster is most compact, all its samples have zero inter-sample distances, and they would not be split into different clusters even when the clustering criterion is tightened. Based on this assumption, we define the following metric to determine the compactness around a clustered point $f_i^t$,
$$\mathcal{R}_{comp}(f_i^t) = \frac{|\mathcal{I}(f_i^t) \cap \mathcal{I}_{tight}(f_i^t)|}{|\mathcal{I}(f_i^t) \cup \mathcal{I}_{tight}(f_i^t)|} \in [0, 1], \quad (6)$$
where $\mathcal{I}_{tight}(f_i^t)$ is the cluster set containing $f_i^t$ when the criterion is tightened. A larger $\mathcal{R}_{comp}(f_i^t)$ indicates smaller inter-sample distances around $f_i^t$ within $\mathcal{I}(f_i^t)$, since a cluster with larger inter-sample distances is more likely to include fewer points when a tightened criterion is adopted. Data points of the same cluster may have different compactness scores due to the uneven density.

Given the above metrics for measuring cluster reliability, we compute the independence and compactness scores for each data point within clusters. We set up $\alpha, \beta \in [0, 1]$ as the independence and compactness thresholds for determining reliable clusters. Specifically, we preserve independent clusters with compact data points whose $\mathcal{R}_{indep} > \alpha$ and $\mathcal{R}_{comp} > \beta$, while the remaining data are treated as un-clustered outlier instances. With the update of the encoder $f_\theta$ and the target-domain instance features $\{v\}$ in the hybrid memory, more reliable clusters can be gradually created to further improve the feature learning. The overall algorithm is detailed in Alg. 1 of Appendix A.
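Operationally, Eqs. (5) and (6) compare each sample's cluster under the working clustering criterion against its clusters under a loosened and a tightened criterion. The sketch below instantiates this with DBSCAN, where the neighborhood radius `eps` plays the role of the criterion being loosened or tightened; the `eps` offset, `min_samples`, the cosine distance, and the per-sample filtering rule are our illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_iou(la, lb, i):
    """IoU of the two cluster sets containing sample i (Eqs. (5)/(6));
    an outlier label (-1) is treated as a singleton set."""
    a = set(np.flatnonzero(la == la[i])) if la[i] != -1 else {i}
    b = set(np.flatnonzero(lb == lb[i])) if lb[i] != -1 else {i}
    return len(a & b) / len(a | b)

def self_paced_clusters(feats, eps=0.6, delta=0.02, alpha=0.9, beta=0.9):
    """Re-clustering before each epoch: keep only reliable clusters and
    return -1 for samples falling back to un-clustered outliers."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    dist = np.clip(1.0 - feats @ feats.T, 0.0, None)  # pairwise cosine distance
    run = lambda e: DBSCAN(eps=e, min_samples=4,
                           metric="precomputed").fit_predict(dist)
    labels, loose, tight = run(eps), run(eps + delta), run(eps - delta)

    keep = np.full(len(labels), -1)
    for i in np.flatnonzero(labels != -1):  # unoptimized per-sample check
        if (cluster_iou(labels, loose, i) > alpha
                and cluster_iou(labels, tight, i) > beta):
            keep[i] = labels[i]
    return keep
```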
4 Experiments

4.1 Datasets and Evaluation Protocol

We evaluate our proposed method on both the mainstream real→real adaptation tasks and the more challenging synthetic→real adaptation tasks in person re-ID and vehicle re-ID problems. As shown in Table 1, two real-world person datasets and one synthetic person dataset, as well as two real-world vehicle datasets and one synthetic vehicle dataset, are adopted in our experiments.

Person re-ID datasets†. Market-1501 and MSMT17 are widely used real-world person image datasets in domain adaptive tasks, among which MSMT17 has the most images and is the most challenging. The synthetic PersonX [39] is generated based on Unity [36] with manually designed obstacles, e.g., random occlusion, resolution and illumination differences, etc.

†The DukeMTMC-reID [37] dataset has been taken down and should no longer be used.

Vehicle re-ID datasets. Although domain adaptive person re-ID has long been studied, the same task on vehicles has not been fully explored. We conduct experiments with the real-world VeRi-776 and VehicleID datasets and the synthetic VehicleX dataset. VehicleX [32] is also generated by the Unity engine [51, 42] and further translated to the real-world style by SPGAN [8].

Evaluation protocol. In the experiments, only the ground-truth IDs of the source-domain datasets are provided for training. Mean average precision (mAP) and the cumulative matching characteristic (CMC), proposed in [58], are adopted to evaluate the methods' performance on the target-domain datasets. No post-processing technique, e.g., re-ranking [60] or multi-query fusion [58], is adopted.
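For reference, mAP and CMC can be computed per query as sketched below. This is a bare-bones sketch of the metrics only: it omits the junk-image and same-camera filtering of the standard protocol [58], and all names are ours.

```python
import numpy as np

def average_precision(scores, matches):
    """AP of one query; `matches` flags gallery items with the query's ID."""
    hits = matches[np.argsort(-scores)]
    if not hits.any():
        return 0.0
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision * hits).sum() / hits.sum())

def evaluate(sim, q_ids, g_ids, topk=(1, 5, 10)):
    """mAP and CMC@k over a (num_query, num_gallery) similarity matrix."""
    aps, cmc = [], np.zeros(max(topk))
    for i in range(sim.shape[0]):
        matches = g_ids == q_ids[i]
        aps.append(average_precision(sim[i], matches))
        ranked = matches[np.argsort(-sim[i])]
        if ranked.any():
            cmc[np.argmax(ranked):] += 1  # hit at this rank and all later ones
    return float(np.mean(aps)), {k: cmc[k - 1] / sim.shape[0] for k in topk}
```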
4.2 Implementation Details

We adopt an ImageNet-pretrained [7] ResNet-50 [18] as the backbone of the encoder $f_\theta$. Following the clustering-based UDA methods [11, 10, 38], we use DBSCAN [9] for clustering before each epoch. The maximum distance between neighboring points, which is the most important parameter of DBSCAN, is tuned to loosen or tighten the clustering in our proposed self-paced learning strategy. We use a constant threshold $\alpha$ and a dynamic threshold $\beta$ for identifying independent clusters with the most compact points by the reliability criterion. More details can be found in Appendix C.

4.3 Comparison with State-of-the-arts

UDA performance on the target domain. We compare our proposed framework with state-of-the-art UDA methods on multiple domain adaptation tasks in Table 2, including three real→real and three synthetic→real tasks. The tasks in Tables 2b & 2c were not surveyed by previous methods, so we implement the state-of-the-art MMT [11] on these datasets for comparison. Our method significantly outperforms all state-of-the-arts on both person and vehicle datasets with a plain ResNet-50 backbone, achieving 2-4% improvements in terms of mAP on the common real→real tasks and up to 5.0% increases on the challenging synthetic→real tasks. An inspiring discovery is that the synthetic→real tasks can achieve performance competitive with the real→real tasks given the same target-domain dataset (e.g., VeRi-776), which indicates that we are one step closer towards no longer needing any manually annotated real-world images in the future.

Further improvements on the source domain. State-of-the-art UDA methods inevitably forget the source-domain knowledge after fine-tuning the pretrained networks on the target domain, as demonstrated by MMT [11] in Table 3. In contrast, our proposed unified framework effectively models complex inter-sample relations across the two domains, boosting the source-domain performance by up to 6.6% mAP. Note that the experiments of "Encoder train/test on the source domain" adopt the same training objective (Eq. (1)) as our proposed method, except that only the source-domain class centroids $\{w\}$ are available. Our method also outperforms state-of-the-art supervised re-ID methods [12, 59, 64, 40] on the source domain without using either multiple losses or more complex networks. This phenomenon indicates that our method could be applied to improve supervised training by incorporating unlabeled data without extra human labor.

Unsupervised re-ID without any labeled training data. Another stream of research focuses on training the re-ID model without any labeled data, i.e., excluding the source-domain data from the training set. Our method can easily be generalized to such a setting by discarding the source-domain class centroids $\{w\}$ from both the hybrid memory and the training objective (see Alg. 2 in Appendix A for details). As shown in Table 4, our method considerably outperforms state-of-the-arts with up to 16.7% improvements in terms of mAP. We also implement the state-of-the-art unsupervised method MoCo [17], which adopts the conventional contrastive loss, and find that it is inapplicable to unsupervised re-ID tasks. MoCo [17] underperforms because it treats each instance as a single class, while the core of re-ID tasks is to encode and model intra-/inter-class variations. MoCo [17] is good at unsupervised pre-training, but its resulting networks need fine-tuning with (pseudo) class labels.

4.4 Ablation Studies

We analyse the effectiveness of our proposed unified contrastive loss with the hybrid memory and the self-paced learning strategy in Table 5. The "oracle" experiment adopts the target-domain ground-truth IDs as cluster labels for training, reflecting the maximal performance achievable with our pipeline.

Unified contrastive learning mechanism. To verify the necessity of each type of classes in the unified contrastive loss (Eq. (1)), we conduct experiments that remove any one of the source-domain class-level, target-domain cluster-level or un-clustered instance-level supervisions (Table 5a). The baseline "Src. class" adopts only source-domain images with ground-truth IDs for training. "Src. class + tgt. instance" treats each target-domain sample as a distinct class. It totally fails, with even worse results than the baseline "Src. class", showing that directly generalizing the conventional contrastive loss to UDA tasks is inapplicable. "Src. class + tgt. cluster" follows existing UDA methods [11, 10, 55, 14] by simply discarding un-clustered instances from training. Noticeable performance drops are observed, especially without the self-paced policy to constrain reliable clusters. Note that the only difference between "Src. class + tgt. cluster (w/ self-paced)" and "Ours (full)" is whether the outliers are used for training, and the large performance gaps are due to the following facts: 1) there are many un-clustered outliers (more than half of all samples), especially in early epochs; 2) outliers serve as difficult samples, and excluding them over-simplifies the training task; 3) "Src. class + tgt. cluster" does not update the outliers in the memory, making them unsuitable for clustering in later epochs.

As illustrated in Table 5b, we further verify the necessity of unified training in unsupervised object re-ID tasks. We observe the same trend as in the domain adaptive tasks: solving the problem via instance discrimination ("tgt. instance") fails. What differs is that, even with our self-paced strategy, training with clusters alone ("tgt. cluster") also fails. This is due to the fact that only a few samples take part in the training if the outliers are discarded, undoubtedly leading to training collapse. Note that previous unsupervised re-ID methods [25, 53] which abandoned outliers did not fail, since they did not utilize a memory bank that requires all entries to be continuously updated.

We adopt the non-parametric class centroids to supervise the source-domain feature learning, whereas conventional methods generally adopt a learnable classifier for supervised learning. "Src. class → Src. learnable weights" in Table 5a is therefore conducted to verify the necessity of using source-domain class centroids for training to match the semantics of the target-domain training supervisions. We also test the effect of not extending negative classes across the different types of contrasts; for instance, source-domain samples would only treat non-corresponding source-domain classes as their negative classes.
"Ours w/o unified contrast" shows inferior performance in both Tables 5a and 5b. This indicates the effectiveness of the unified contrastive learning across all types of classes in Eq. (1).

Self-paced learning strategy. We propose the self-paced learning strategy to preserve the most reliable clusters for providing stronger supervisions. The intuition is to measure the stability of clusters via hierarchical structures, i.e., a reliable cluster should be consistent across clusterings at multiple levels. $\mathcal{R}_{indep}$ and $\mathcal{R}_{comp}$ are therefore proposed to measure the independence and compactness of clusters, respectively. To verify the effectiveness of this strategy, we evaluate our framework when removing either $\mathcal{R}_{indep}$ or $\mathcal{R}_{comp}$, or both of them. Obvious performance drops are observed under all these settings, e.g., a 4.9% mAP drop when removing $\mathcal{R}_{indep}$ & $\mathcal{R}_{comp}$ in Table 5b. We illustrate the number of clusters and their corresponding Normalized Mutual Information (NMI) scores during training on MSMT17→Market-1501 in Figure 3. It can be observed that, with the proposed self-paced learning strategy, the quantity and quality of the clusters are closer to the ground-truth IDs regardless of the un-clustered instance-level contrast, indicating higher reliability of the clusters and the effectiveness of the self-paced strategy.

5 Discussion and Conclusion

Our method has shown considerable improvements over a variety of unsupervised and domain adaptive object re-ID tasks. The supervised performance can also be improved without extra labeling effort by incorporating unlabeled data for training in our framework. The core idea is to exploit all available data for joint training with hybrid supervision. Positive as the results are, there still exists a gap from the oracle, suggesting that the pseudo-class labels may not be satisfactory enough even with the proposed self-paced strategy. Further studies are called for. Beyond the object re-ID task, our method has great potential for other unsupervised learning tasks, which remains to be explored.

Broader Impact

Our method can help to identify and track different types of objects (e.g., vehicles, cyclists, pedestrians, etc.) across different cameras (domains), thus boosting the development of smart retail, smart transportation, and smart security systems in future metropolises. In addition, our proposed self-paced contrastive learning is quite general and not limited to the specific research field of object re-ID; it can be extended to broader research areas, including unsupervised and semi-supervised representation learning. However, object re-ID systems, when applied to identify pedestrians and vehicles in surveillance systems, might give rise to infringements of people's privacy, since such re-ID systems often rely on non-consensual surveillance data for training, i.e., it is unlikely that all human subjects even knew they were being recorded. Therefore, governments and officials need to carefully establish strict regulations and laws to control the usage of re-ID technologies. Otherwise, re-ID technologies can potentially equip malicious actors with the ability to surveil pedestrians or vehicles through multiple CCTV cameras without their consent. The research community should also avoid using datasets with ethics issues; e.g., DukeMTMC [37], which has been taken down due to the violation of its data collection terms, should no longer be used. We do not evaluate our method on DukeMTMC-related benchmarks either.
Furthermore, we should be cautious about misidentifications by re-ID systems to avoid possible harm. Also, note that the demographic makeup of the datasets used is not representative of the broader population.

Acknowledgements

This work is supported in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants Nos. CUHK14208417 and CUHK14207319, in part by the Hong Kong Innovation and Technology Support Program (No. ITS/312/18FX), and in part by the CUHK Strategic Fund.
1. What is the main contribution of the paper in object re-identification?
2. What are the strengths of the proposed approach, particularly in combining previous works?
3. What are the weaknesses of the method, especially regarding scalability and cluster reliability measures?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions

This paper proposes a self-paced contrastive learning framework for object re-identification. The authors learn feature representations by contrasting each feature against the source classes, the target clusters (unsupervised), and the remaining target samples (which in effect act as clusters with a single member). They also use a memory bank to keep prototype representations for each source class, target cluster and target outlier; these prototypes are updated through momentum updates. They claim state-of-the-art results on object re-identification benchmarks.

Strengths

+ Smoothly combines the ideas of [13] and [45] in a unified framework specifically targeting the object re-identification problem.
+ They present state-of-the-art results on several object re-identification benchmarks.
+ The authors provide a decent ablation of their components.

Weaknesses

- As the system keeps features for all the instances in the target domain in a memory bank, it may not scale well to a large number of unlabelled instances.
- The cluster reliability measures are a bit ad hoc and should be explained better. They also do not seem to have a large effect on the final results.
1. What is the focus and contribution of the paper on unsupervised domain adaptation?
2. What are the strengths of the proposed approach, particularly its novel combination of pseudo-label clustering and network training within a self-paced framework?
3. What are the weaknesses of the paper regarding its lack of theoretical novelty and the issue with the DukeMTMC dataset?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary and Contributions

The paper addresses the problem of unsupervised domain adaptation (UDA) with a strong emphasis on the task of re-identification. The contributions include: the extension of the contrastive softmax with a specific selection of class centroids from the labeled source domain along with cluster centroids and outlier instances of the target domain, and a hybrid memory, which can be considered a non-parametric model used to maintain and update the state of clusters and outliers between epochs. A form of self-paced learning is proposed which considers the cluster reliability of pseudo labels to remove difficult/noisy samples from the target domain each epoch, for smoother and more reliable training of the domain transfer. This is predominantly an empirical paper without theoretical novelty. That said, the novel combination of pseudo-label clustering and network training within a self-paced framework can be considered a contribution that leads the proposed method to outstanding results.

Strengths

The paper is written reasonably well, with clear structure and presentation. An extensive evaluation is given for several domain transfer scenarios across seven different datasets. In all experiments the proposed method significantly outperforms other UDA methods. The authors have provided links to their code. Despite not yet having run the code to check for reproducibility, the code appears to be complete (also see the DukeMTMC note in the weaknesses). The UDA re-ID problem is interesting from the perspective of learning representations, generalising across domains, and moving towards unsupervised learning.

Weaknesses

I wasn't able to download the DukeMTMC dataset used in most experiments; the link http://vision.cs.duke.edu/DukeMTMC given in the references no longer works. It would appear that this dataset has been taken down by Duke University since June 2019 due to a potential privacy infringement. This would make it difficult to ethically reproduce many of the experiments of this paper. There are enough other datasets used in this work to validate the proposed method without DukeMTMC. Given that DukeMTMC was taken down so long ago, I do not understand why it is used in the experiments, especially for Tables 3, 4, and 5, where MSMT17 could be used instead. For a NeurIPS paper I would expect more of a theoretical grounding for the various design choices. However, the paper does motivate several choices using common intuition, e.g., measuring cluster reliability is motivated by the uneven density in the latent space.
NIPS
Title Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID Abstract Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain to tackle the open-class re-identification problems. Although state-of-the-art pseudo-label-based methods [11, 54, 10, 55, 14] have achieved great success, they did not make full use of all valuable information because of the domain gap and unsatisfying clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, and target-domain clusters and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-ofthe-arts on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version on unsupervised object re-ID surpasses state-of-the-art algorithms by considerable 16.7% and 7.9% on Market-1501 and MSMT17 benchmarks†. 1 Introduction Unsupervised domain adaptation (UDA) for object re-identification (re-ID) aims at transferring the learned knowledge from the labeled source domain (dataset) to properly measure the inter-instance affinities in the unlabeled target domain (dataset). Common object re-ID problems include person re-ID and vehicle re-ID, where the source-domain and target-domain data do not share the same identities (classes). Existing UDA methods on object re-ID [38, 11, 54, 10, 55, 45] generally tackled this problem following a two-stage training scheme: (1) supervised pre-training on the source domain, and (2) unsupervised fine-tuning on the target domain. For stage-2 unsupervised fine-tuning, a pseudo-label-based strategy was found effective in state-of-the-art methods [11, 54, 10, 55], which alternates between generating pseudo classes by clustering target-domain instances and training the network with generated pseudo classes. In this way, the source-domain pre-trained network can be adapted to capture the inter-sample relations in the target domain with noisy pseudo-class labels. Although the pseudo-label-based methods have led to great performance advances, we argue that there exist two major limitations that hinder their further improvements (Figure 1 (a)). (1) During the target-domain fine-tuning, the source-domain images were either not considered [11, 54, 10, 55] or were even found harmful to the final performance [14] because of the limitations of their methodology ⇤Dapeng Chen is the corresponding author. †Code is available at https://github.com/yxgeee/SpCL. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. designs. The accurate source-domain ground-truth labels are valuable but were ignored during target-domain training. (2) Since the clustering process might result in individual outliers, to ensure the reliability of the generated pseudo labels, existing methods [11, 10, 55, 14] simply discarded the outliers from being used for training. 
However, such outliers might actually be difficult but valuable samples in the target domain, and there are generally many outliers, especially in early epochs. Simply abandoning them might critically hurt the final performance. To overcome these problems, we propose a hybrid memory to encode all available information from both source and target domains for feature learning. For the source-domain data, their ground-truth class labels can naturally provide valuable supervision. For the target-domain data, clustering can be conducted to obtain relatively confident clusters as well as un-clustered outliers. All the source-domain class centroids, target-domain cluster centroids, and target-domain un-clustered instance features from the hybrid memory can provide supervisory signals for jointly learning discriminative feature representations across the two domains (Figure 1 (b)). A unified framework is developed for dynamically updating and distinguishing different entries in the proposed hybrid memory. Specifically, since all the target-domain clusters and un-clustered instances are equally treated as independent classes, the clustering reliability significantly impacts the learned representations. We thus propose a self-paced contrastive learning strategy, which initializes the learning process by using the hybrid memory with the most reliable target-domain clusters. Trained with such reliable clusters, the discriminativeness of feature representations can be gradually improved, and additional reliable clusters can be formed by incorporating more un-clustered instances into new clusters. Such a strategy can effectively mitigate the effects of noisy pseudo labels and boost the feature learning process. To properly measure the cluster reliability, a novel multi-scale clustering reliability criterion is proposed, based on which only reliable clusters are preserved and other confusing clusters are disassembled back into un-clustered instances. In this way, our self-paced learning strategy gradually creates more reliable clusters to dynamically refine the hybrid memory and learning targets. Our contributions are summarized as three-fold. (1) We propose a unified contrastive learning framework to incorporate all available information from both source and target domains for joint feature learning. It dynamically updates the hybrid memory to provide class-level, cluster-level and instance-level supervision. (2) We design a self-paced contrastive learning strategy with a novel clustering reliability criterion to prevent training error amplification caused by noisy pseudo-class labels. It gradually generates more reliable target-domain clusters for learning better features in the hybrid memory, which, in turn, improves clustering. (3) Our method significantly outperforms the state of the art [11, 54, 10, 55, 45] on multiple domain adaptation tasks of object re-ID with up to 5.0% mAP gains. The proposed unified framework can even boost the performance on the source domain by large margins (6.6%) by jointly training with un-annotated target-domain data, while most existing UDA methods "forget" the source domain after fine-tuning on the target domain. Our unsupervised version without labeled source-domain data on the object re-ID task significantly outperforms the state of the art [26, 45, 53] by 16.7% and 7.9% in terms of mAP on the Market-1501 and MSMT17 benchmarks. 2 Related Works Unsupervised domain adaptation (UDA) for object re-ID.
Existing UDA methods for object re-ID can be divided into two main categories: pseudo-label-based methods [38, 10, 55, 11, 54, 62, 52, 45] and domain translation-based methods [8, 46, 5, 14]. This paper follows the former, since pseudo labels were found more effective at capturing the target-domain distributions. Though driven by different motivations, previous pseudo-label-based methods generally adopted a two-stage training scheme: (1) pre-training on the source domain with ground-truth IDs, and (2) adapting to the target domain with pseudo labels. The pseudo labels can be generated by either clustering instance features [38, 10, 55, 11, 54] or measuring similarities with exemplar features [62, 52, 45], where the clustering-based pipeline maintains state-of-the-art performance to date. The major challenges faced by clustering-based methods are how to improve the precision of pseudo labels and how to mitigate the effects caused by noisy pseudo labels. SSG [10] adopted human local features to assign multi-scale pseudo labels. PAST [55] proposed to utilize multiple regularizations alternately. MMT [11] proposed to generate more robust soft labels via mutual mean-teaching. AD-Cluster [54] incorporated style-translated images to improve the discriminativeness of instance features. Although various attempts along this direction have led to great performance advances, they failed to fully exploit all valuable information across the two domains, which limits their further improvements, i.e., they simply discarded both the source-domain labeled images and the target-domain un-clustered outliers when fine-tuning the model on the target domain with pseudo labels. Contrastive learning. State-of-the-art methods on unsupervised visual representation learning [33, 48, 19, 44, 65, 17, 4] are based on contrastive learning. Cast as either a dictionary look-up task [48, 17] or a consistency learning task [44, 4], a contrastive loss is adopted to learn instance-discriminative representations by treating each unlabeled sample as a distinct class. Although the instance-level contrastive loss can be used to train embeddings that generalize well to downstream tasks with fine-tuning, it does not perform well on domain adaptive object re-ID tasks, which require correctly measuring the inter-class affinities on the unsupervised target domain. Self-paced learning. The "easy-to-hard" training scheme is at the core of self-paced learning [21], which was originally found effective in supervised learning methods, especially with noisy labels [15, 20, 24, 13]. Recently, some methods [41, 16, 6, 56, 67] incorporated the conception of self-paced learning into unsupervised learning tasks by starting the training process with the most confident pseudo labels. However, the self-paced policies designed in these methods were all based on close-set problems with pre-defined classes, which cannot be generalized to our open-set object re-ID task with completely unknown classes on the target domain. Moreover, they did not consider how to plausibly train with hard samples that cannot be assigned confident pseudo labels all the time. 3 Methodology To tackle the challenges in unsupervised domain adaptation (UDA) on object re-ID, we propose a self-paced contrastive learning framework (Figure 2 (a)), which consists of a CNN [22]-based encoder f_θ and a novel hybrid memory.
The key innovation of the proposed framework lies in jointly training the encoder with all the source-domain class-level, target-domain cluster-level and target-domain un-clustered instance-level supervisions, which are dynamically updated in the hybrid memory to gradually provide more confident learning targets. In order to avoid training error amplification caused by noisy clusters, the self-paced learning strategy initializes the training process with the most reliable clusters and gradually incorporates more un-clustered instances to form new reliable clusters. A novel reliability criterion is introduced to measure the quality of clusters (Figure 2 (b)). (Throughout this paper, the term independence is used in its idiomatic sense rather than the statistical sense.) Our training scheme alternates between two steps: (1) grouping the target-domain samples into clusters and un-clustered instances by clustering the target-domain instance features in the hybrid memory with the self-paced strategy (Section 3.2), and (2) optimizing the encoder f_θ with a unified contrastive loss and dynamically updating the hybrid memory with encoded features (Section 3.1). 3.1 Constructing and Updating Hybrid Memory for Contrastive Learning Given the target-domain training samples X^t without any ground-truth label, we employ the self-paced clustering strategy (Section 3.2) to group the samples into clusters and un-clustered outliers. The whole training set of both domains can therefore be divided into three parts, including the source-domain samples X^s with ground-truth identity labels, the target-domain pseudo-labeled data X^t_c within clusters, and the target-domain instances X^t_o not belonging to any cluster, i.e., X^t = X^t_c \cup X^t_o. State-of-the-art UDA methods [11, 54, 10, 55] simply abandon all source-domain data and target-domain un-clustered instances, and utilize only the target-domain pseudo labels for adapting the network to the target domain, which, in our opinion, is a sub-optimal solution. Instead, we design a novel contrastive loss to fully exploit the available data by treating all the source-domain classes, target-domain clusters and target-domain un-clustered instances as independent classes. 3.1.1 Unified Contrastive Learning Given a general feature vector f = f_θ(x), x \in X^s \cup X^t_c \cup X^t_o, our unified contrastive loss is

L_f = -\log \frac{\exp(\langle f, z^+ \rangle / \tau)}{\sum_{k=1}^{n^s} \exp(\langle f, w_k \rangle / \tau) + \sum_{k=1}^{n^t_c} \exp(\langle f, c_k \rangle / \tau) + \sum_{k=1}^{n^t_o} \exp(\langle f, v_k \rangle / \tau)}, \qquad (1)

where z^+ indicates the positive class prototype corresponding to f, the temperature \tau is empirically set as 0.05, and \langle \cdot, \cdot \rangle denotes the inner product between two feature vectors to measure their similarity. n^s is the number of source-domain classes, n^t_c is the number of target-domain clusters and n^t_o is the number of target-domain un-clustered instances. More specifically, if f is a source-domain feature, z^+ = w_k is the centroid of the source-domain class k that f belongs to. If f belongs to the k-th target-domain cluster, z^+ = c_k is the k-th cluster centroid. If f is a target-domain un-clustered outlier, we would have z^+ = v_k as the outlier instance feature corresponding to f. Intuitively, the above joint contrastive loss encourages the encoded feature vector to approach its assigned class, cluster or instance. Note that we utilize class centroids {w} instead of learnable class weights for encoding source-domain classes to match their semantics to those of the clusters' or outliers' centroids.
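To make Eq. (1) concrete, the sketch below implements the unified contrastive loss for a single encoded feature against the three prototype banks. This is an illustrative PyTorch rendering under our own naming, not the authors' released code; features and prototypes are assumed L2-normalized so that the inner product matches the paper's similarity measure.

```python
import torch
import torch.nn.functional as F

def unified_contrastive_loss(f, pos_idx, class_centroids, cluster_centroids,
                             outlier_feats, tau=0.05):
    """Eq. (1) for one feature f of shape (d,), assumed L2-normalized.

    pos_idx indexes the positive prototype z+ inside the concatenation of
    source class centroids {w}, cluster centroids {c} and outlier features {v}.
    """
    prototypes = torch.cat([class_centroids, cluster_centroids, outlier_feats], dim=0)
    logits = prototypes @ f / tau  # <f, .>/tau over all n^s + n^t_c + n^t_o entries
    # -log softmax at the positive entry, i.e., the unified contrastive loss
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([pos_idx]))
```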
Our experiments (Section 4.4) show that, if the semantics of class-level, cluster-level and instance-level supervisions do not match, the performance drops significantly. Discussion. The most significant difference between our unified contrastive loss (Eq. (1)) and previous contrastive losses [48, 17, 4, 33] is that ours jointly distinguishes classes, clusters, and un-clustered instances, while previous ones only focus on separating instances without considering any ground-truth classes or pseudo-class labels as our method does. They target the instance discrimination task but fail to properly model intra-/inter-class affinities on domain adaptive re-ID tasks. 3.1.2 Hybrid Memory As the cluster number n^t_c and outlier instance number n^t_o may change during training with the alternate clustering strategy, the class prototypes for the unified contrastive loss (Eq. (1)) are built in a non-parametric and dynamic manner. We propose a novel hybrid memory to provide the source-domain class centroids {w_1, ..., w_{n^s}}, target-domain cluster centroids {c_1, ..., c_{n^t_c}} and target-domain un-clustered instance features {v_1, ..., v_{n^t_o}}. For continuously storing and updating the above three types of entries, we propose to cache the source-domain class centroids {w_1, ..., w_{n^s}} and all the target-domain instance features {v_1, ..., v_{n^t}} simultaneously in the hybrid memory, where n^t is the number of all the target-domain instances and n^t \neq n^t_c + n^t_o. Without loss of generality, we assume that un-clustered features in {v} have indices {1, ..., n^t_o}, while the clustered features in {v} have indices from n^t_o + 1 to n^t. In other words, {v_{n^t_o + 1}, ..., v_{n^t}} dynamically form the cluster centroids {c} while {v_1, ..., v_{n^t_o}} remain un-clustered instances. Memory initialization. The hybrid memory is initialized with the features extracted by performing forward computation of f_θ: the initial source-domain class centroids {w} are obtained as the mean feature vectors of each class, while the initial target-domain instance features {v} are directly encoded by f_θ. After that, the target-domain cluster centroids {c} are initialized with the mean feature vectors of each cluster from {v}, i.e.,

c_k = \frac{1}{|\mathcal{I}_k|} \sum_{v_i \in \mathcal{I}_k} v_i, \qquad (2)

where \mathcal{I}_k denotes the k-th cluster set that contains all the feature vectors within cluster k, and |\cdot| denotes the number of features in the set. Note that the source-domain class centroids {w} and the target-domain instance features {v} are only initialized once by performing the forward computation at the beginning of the learning algorithm, and are then continuously updated during training. Memory update. At each iteration, the encoded feature vectors in each mini-batch are involved in updating the hybrid memory. For the source-domain class centroids {w}, the k-th centroid w_k is updated by the mean of the encoded features belonging to class k in the mini-batch as

w_k \leftarrow m^s w_k + (1 - m^s) \cdot \frac{1}{|\mathcal{B}_k|} \sum_{f^s_i \in \mathcal{B}_k} f^s_i, \qquad (3)

where \mathcal{B}_k denotes the feature set belonging to source-domain class k in the current mini-batch and m^s \in [0, 1] is a momentum coefficient for updating the source-domain class centroids. m^s is empirically set as 0.2. The target-domain cluster centroids cannot be stored and updated in the same way as the source-domain class centroids, since the clustered set X^t_c and un-clustered set X^t_o are constantly changing.
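Before the paper turns to the target-domain update, here is a minimal sketch of the class-centroid momentum update of Eq. (3); the tensor layout and function name are our own assumptions.

```python
import torch

def update_class_centroids(W, feats, labels, m_s=0.2):
    """Eq. (3): momentum update of source-domain class centroids.

    W:      (n_s, d) memory of class centroids {w}
    feats:  (B, d) encoded source-domain features in the mini-batch
    labels: (B,) ground-truth class ids for the batch
    """
    with torch.no_grad():
        for k in labels.unique():
            batch_mean = feats[labels == k].mean(dim=0)    # mean over B_k
            W[k] = m_s * W[k] + (1.0 - m_s) * batch_mean   # w_k <- m^s w_k + (1-m^s)*mean
    return W
```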
As the hybrid memory caches all the target-domain features {v}, each encoded feature vector f^t_i in the mini-batch is utilized to update its corresponding instance entry v_i by

v_i \leftarrow m^t v_i + (1 - m^t) f^t_i, \qquad (4)

where m^t \in [0, 1] is the momentum coefficient for updating target-domain instance features and is set as 0.2 in our experiments. Given the updated instance memory v_i, if f^t_i belongs to cluster k, the corresponding centroid c_k needs to be updated with Eq. (2). Discussion. The hybrid memory has two main differences from the memory used in [48, 17]: (1) Our hybrid memory caches prototypes for both centroids and instances, while the memory in [48, 17] only provides instance-level prototypes. Beyond the centroids, we for the first time treat clusters and instances as equal classes; (2) The cluster/instance learning targets provided by our hybrid memory are gradually updated and refined, while the previous memory [48, 17] only supports fixed instance-level targets. Note that our self-paced strategy (discussed in Section 3.2) dynamically determines confident clusters and un-clustered instances. The momentum updating strategy is inspired by [17, 43], and we further introduce how to update hybrid prototypes, i.e., centroids and instances. Note that we employ different updating strategies for class centroids (Eq. (3)) and cluster centroids (Eqs. (4) & (2)), since source-domain classes are fixed while target-domain clusters change dynamically. 3.2 Self-paced Learning with Reliable Clusters A simple way to split the target-domain data into clusters X^t_c and un-clustered outliers X^t_o is to cluster the target-domain instance features {v_1, ..., v_{n^t}} from the hybrid memory with a certain algorithm (e.g., DBSCAN [9]). Since all the target-domain clusters and un-clustered outlier instances are treated as distinct classes in Eq. (1), the clustering reliability significantly impacts the learned representations. If the clustering were perfect, merging all the instances into their true clusters would no doubt improve the final performance (denoted as "oracle" in Table 5). However, in practice, merging an instance into a wrong cluster does more harm than good. A self-paced learning strategy is therefore introduced, where in the re-clustering step before each epoch, only the most reliable clusters are preserved and the unreliable clusters are disassembled back into un-clustered instances. A reliability criterion is proposed to identify unreliable clusters by measuring their independence and compactness. Independence of clusters. A reliable cluster should be independent from other clusters and individual samples. Intuitively, if a cluster is far away from other samples, it can be considered highly independent. However, due to the uneven density in the latent space, we cannot naïvely use the distances between the cluster centroid and outside-cluster samples to measure cluster independence. Generally, the clustering results can be tuned by altering certain hyper-parameters of the clustering criterion. One can loosen the clustering criterion to possibly include more samples in each cluster, or tighten it to possibly include fewer samples in each cluster. We denote the samples within the same cluster as f^t_i by \mathcal{I}(f^t_i).
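Picking up Eqs. (4) and (2) before the reliability criterion is formalized next, the sketch below updates an instance entry and refreshes the centroid of any cluster touched by the mini-batch; the bookkeeping structures (inst2cluster, cluster_members) are our own assumptions.

```python
import torch

def update_target_memory(V, feats, indices, inst2cluster, cluster_members, m_t=0.2):
    """Eq. (4) instance update plus Eq. (2) centroid refresh.

    V:               (n_t, d) memory of all target instance features {v}
    feats:           (B, d) encoded target features in the mini-batch
    indices:         (B,) memory indices of the batch samples
    inst2cluster:    dict instance index -> cluster id (outliers absent)
    cluster_members: dict cluster id -> list of member instance indices
    """
    with torch.no_grad():
        V[indices] = m_t * V[indices] + (1.0 - m_t) * feats        # v_i update, Eq. (4)
        touched = {inst2cluster[i] for i in indices.tolist() if i in inst2cluster}
        centroids = {k: V[cluster_members[k]].mean(dim=0)          # c_k refresh, Eq. (2)
                     for k in touched}
    return V, centroids
```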
We propose the following metric to measure cluster independence, formulated as an intersection-over-union (IoU) score,

R_{indep}(f^t_i) = \frac{|\mathcal{I}(f^t_i) \cap \mathcal{I}_{loose}(f^t_i)|}{|\mathcal{I}(f^t_i) \cup \mathcal{I}_{loose}(f^t_i)|} \in [0, 1], \qquad (5)

where \mathcal{I}_{loose}(f^t_i) is the cluster set containing f^t_i when the clustering criterion becomes looser. A larger R_{indep}(f^t_i) indicates a more independent cluster for f^t_i, i.e., even if one loosens the clustering criterion, no more samples would be included in the new cluster \mathcal{I}_{loose}(f^t_i). Samples within the same cluster set (e.g., \mathcal{I}(f^t_i)) generally have the same independence score. Compactness of clusters. A reliable cluster should also be compact, i.e., the samples within the same cluster should have small inter-sample distances. In an extreme case, when a cluster is most compact, all the samples in the cluster have zero inter-sample distances. Its samples would not be split into different clusters even when the clustering criterion is tightened. Based on this assumption, we define the following metric to determine the compactness of the clustered point f^t_i,

R_{comp}(f^t_i) = \frac{|\mathcal{I}(f^t_i) \cap \mathcal{I}_{tight}(f^t_i)|}{|\mathcal{I}(f^t_i) \cup \mathcal{I}_{tight}(f^t_i)|} \in [0, 1], \qquad (6)

where \mathcal{I}_{tight}(f^t_i) is the cluster set containing f^t_i when tightening the criterion. A larger R_{comp}(f^t_i) indicates smaller inter-sample distances around f^t_i within \mathcal{I}(f^t_i), since a cluster with larger inter-sample distances is more likely to include fewer points when a tightened criterion is adopted. The same cluster's data points may have different compactness scores due to the uneven density. Given the above metrics for measuring cluster reliability, we can compute the independence and compactness scores for each data point within clusters. We set α, β ∈ [0, 1] as independence and compactness thresholds for determining reliable clusters. Specifically, we preserve independent clusters with compact data points whose R_{indep} > α and R_{comp} > β, while the remaining data are treated as un-clustered outlier instances (a code sketch of this re-clustering step appears below). With the update of the encoder f_θ and the target-domain instance features {v} from the hybrid memory, more reliable clusters can be gradually created to further improve the feature learning. The overall algorithm is detailed in Alg. 1 of Appendix A. 4 Experiments 4.1 Datasets and Evaluation Protocol We evaluate our proposed method on both the mainstream real→real adaptation tasks and the more challenging synthetic→real adaptation tasks in person re-ID and vehicle re-ID problems. As shown in Table 1, two real-world person datasets and one synthetic person dataset, as well as two real-world vehicle datasets and one synthetic vehicle dataset, are adopted in our experiments. Person re-ID datasets†. Market-1501 and MSMT17 are widely used real-world person image datasets in domain adaptive tasks, among which MSMT17 has the most images and is the most challenging. The synthetic PersonX [39] is generated based on Unity [36] with manually designed obstacles, e.g., random occlusion, resolution and illumination differences, etc. (†The DukeMTMC-reID [37] dataset has been taken down and should no longer be used.) Vehicle re-ID datasets. Although domain adaptive person re-ID has long been studied, the same task on vehicles has not been fully explored. We conduct experiments with the real-world VeRi-776 and VehicleID datasets and the synthetic VehicleX dataset. VehicleX [32] is also generated by the Unity engine [51, 42] and further translated to have a real-world style by SPGAN [8]. Evaluation protocol.
In the experiments, only ground-truth IDs on the source-domain datasets are provided for training. Mean average precision (mAP) and the cumulative matching characteristic (CMC), proposed in [58], are adopted to evaluate the methods' performance on the target-domain datasets. No post-processing technique, e.g., re-ranking [60] or multi-query fusion [58], is adopted. 4.2 Implementation Details We adopt an ImageNet-pretrained [7] ResNet-50 [18] as the backbone for the encoder f_θ. Following the clustering-based UDA methods [11, 10, 38], we use DBSCAN [9] for clustering before each epoch. The maximum distance between neighboring points, which is the most important parameter in DBSCAN, is tuned to loosen or tighten the clustering in our proposed self-paced learning strategy. We use a constant threshold α and a dynamic threshold β for identifying independent clusters with the most compact points by the reliability criterion. More details can be found in Appendix C. 4.3 Comparison with State-of-the-arts UDA performance on the target domain. We compare our proposed framework with state-of-the-art UDA methods on multiple domain adaptation tasks in Table 2, including three real→real and three synthetic→real tasks. The tasks in Tables 2b & 2c were not surveyed by previous methods, so we implement the state-of-the-art MMT [11] on these datasets for comparison. Our method significantly outperforms all state-of-the-art methods on both person and vehicle datasets with a plain ResNet-50 backbone, achieving 2-4% improvements in terms of mAP on the common real→real tasks and up to 5.0% increases on the challenging synthetic→real tasks. An inspiring discovery is that a synthetic→real task can achieve performance competitive with a real→real task on the same target-domain dataset (e.g., VeRi-776), which indicates that we are one step closer to no longer needing any manually annotated real-world images in the future. Further improvements on the source domain. State-of-the-art UDA methods inevitably forget the source-domain knowledge after fine-tuning the pretrained networks on the target domain, as demonstrated by MMT [11] in Table 3. In contrast, our proposed unified framework can effectively model complex inter-sample relations across the two domains, boosting the source-domain performance by up to 6.6% mAP. Note that the experiments "Encoder train/test on the source domain" adopt the same training objective (Eq. (1)) as our proposed method, except that only the source-domain class centroids {w} are available. Our method also outperforms state-of-the-art supervised re-ID methods [12, 59, 64, 40] on the source domain without using either multiple losses or more complex networks. Such a phenomenon indicates that our method could be applied to improve supervised training by incorporating unlabeled data without extra human labor. Unsupervised re-ID without any labeled training data. Another stream of research focuses on training the re-ID model without any labeled data, i.e., excluding source-domain data from the training set. Our method can easily be generalized to such a setting by discarding the source-domain class centroids {w} from both the hybrid memory and the training objective (see Alg. 2 in Appendix A for details). As shown in Table 4, our method considerably outperforms the state of the art with up to 16.7% improvements in terms of mAP.
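The re-clustering step referenced in Section 3.2 can be sketched as follows: run DBSCAN at a default, a loosened, and a tightened maximum neighbor distance, score every clustered sample with the IoU criteria of Eqs. (5)-(6), and disassemble unreliable clusters back into outliers. The eps values, min_samples, and thresholds are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def cluster_set(labels, i):
    """Members of sample i's cluster; DBSCAN noise (-1) counts as a singleton."""
    if labels[i] == -1:
        return {i}
    return set(np.where(labels == labels[i])[0])

def iou(labels_a, labels_b, i):
    """IoU of sample i's cluster under two clustering criteria, Eqs. (5)-(6)."""
    A, B = cluster_set(labels_a, i), cluster_set(labels_b, i)
    return len(A & B) / len(A | B)

def self_paced_clustering(feats, eps=0.6, delta=0.02, min_samples=4,
                          alpha=0.9, beta=0.9):
    feats = normalize(feats)  # L2-normalize memory features {v}
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    loose = DBSCAN(eps=eps + delta, min_samples=min_samples).fit_predict(feats)
    tight = DBSCAN(eps=eps - delta, min_samples=min_samples).fit_predict(feats)
    for i in np.where(labels >= 0)[0]:
        r_indep = iou(labels, loose, i)   # R_indep, Eq. (5)
        r_comp = iou(labels, tight, i)    # R_comp,  Eq. (6)
        if r_indep <= alpha or r_comp <= beta:
            labels[i] = -1                # disassemble back to un-clustered instance
    return labels
```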
We also implement the state-of-the-art unsupervised method MoCo [17], which adopts the conventional contrastive loss, and find that, unfortunately, it is inapplicable to unsupervised re-ID tasks. MoCo [17] underperforms because it treats each instance as a single class, while the core of re-ID tasks is to encode and model intra-/inter-class variations. MoCo [17] is good at unsupervised pre-training, but its resulting networks need fine-tuning with (pseudo) class labels. 4.4 Ablation Studies We analyse the effectiveness of our proposed unified contrastive loss with hybrid memory and self-paced learning strategy in Table 5. The "oracle" experiment adopts the target-domain ground-truth IDs as cluster labels for training, reflecting the maximal performance achievable with our pipeline. Unified contrastive learning mechanism. In order to verify the necessity of each type of class in the unified contrastive loss (Eq. (1)), we conduct experiments in which any one of the source-domain class-level, target-domain cluster-level or un-clustered instance-level supervisions is removed (Table 5a). The baseline "Src. class" adopts only source-domain images with ground-truth IDs for training. "Src. class + tgt. instance" treats each target-domain sample as a distinct class. It fails entirely, with results even worse than the baseline "Src. class", showing that directly generalizing the conventional contrastive loss to UDA tasks is inapplicable. "Src. class + tgt. cluster" follows existing UDA methods [11, 10, 55, 14] by simply discarding un-clustered instances from training. Noticeable performance drops are observed, especially without the self-paced policy to constrain reliable clusters. Note that the only difference between "Src. class + tgt. cluster (w/ self-paced)" and "Ours (full)" is whether outliers are used for training, and the large performance gaps are due to the following facts: 1) There are many un-clustered outliers (more than half of all samples), especially in early epochs; 2) Outliers serve as difficult samples, and excluding them over-simplifies the training task; 3) "Src. class + tgt. cluster" does not update outliers in the memory, making them unsuitable for clustering in the later epochs. As illustrated in Table 5b, we further verify the necessity of unified training in unsupervised object re-ID tasks. We observe the same trend as in the domain adaptive tasks: solving the problem via instance discrimination ("tgt. instance") fails. What is different is that, even with our self-paced strategy, training with clusters alone ("tgt. cluster") also fails. That is due to the fact that only a few samples take part in the training if the outliers are discarded, undoubtedly leading to training collapse. Note that previous unsupervised re-ID methods [25, 53] which abandoned outliers did not fail, since they did not utilize a memory bank that requires all entries to be continuously updated. We adopt the non-parametric class centroids to supervise the source-domain feature learning; however, conventional methods generally adopt a learnable classifier for supervised learning. "Src. class → Src. learnable weights" in Table 5a is therefore conducted to verify the necessity of using source-domain class centroids for training to match the semantics of the target-domain training supervisions. We also test the effect of not extending negative classes across different types of contrasts. For instance, source-domain samples only treat non-corresponding source-domain classes as their negative classes.
“Ours w/o unified contrast” shows inferior performance in both Table 5a and 5b. This indicates the effectiveness of the unified contrastive learning between all types of classes in Eq. (1). Self-paced learning strategy. We propose the self-paced learning strategy to preserve the most reliable clusters for providing stronger supervision. The intuition is to measure the stability of clusters via hierarchical structures, i.e., a reliable cluster should be consistent across clusterings at multiple levels. R_indep and R_comp are therefore proposed to measure the independence and compactness of clusters, respectively. To verify the effectiveness of this strategy, we evaluate our framework when removing either R_indep or R_comp, or both of them. Obvious performance drops are observed under all these settings, e.g., a 4.9% mAP drop when removing R_indep & R_comp in Table 5b. We illustrate the number of clusters and their corresponding Normalized Mutual Information (NMI) scores during training on MSMT17→Market-1501 in Figure 3. It can be observed that the quantity and quality of the clusters are closer to the ground-truth IDs with the proposed self-paced learning strategy, regardless of the un-clustered instance-level contrast, indicating higher reliability of the clusters and the effectiveness of the self-paced strategy. 5 Discussion and Conclusion Our method has shown considerable improvements over a variety of unsupervised or domain adaptive object re-ID tasks. The supervised performance can also be boosted without extra labeling effort by incorporating unlabeled data for training in our framework. The core lies in exploiting all available data for joint training with hybrid supervision. Positive as the results are, there still exists a gap from the oracle, suggesting that the pseudo-class labels may not be satisfactory enough even with the proposed self-paced strategy. Further studies are called for. Beyond the object re-ID task, our method has great potential for other unsupervised learning tasks, which remains to be explored. Broader Impact Our method can help to identify and track different types of objects (e.g., vehicles, cyclists, pedestrians, etc.) across different cameras (domains), thus boosting the development of smart retail, smart transportation, and smart security systems in the future metropolises. In addition, our proposed self-paced contrastive learning is quite general and not limited to the specific research field of object re-ID. It can be well extended to broader research areas, including unsupervised and semi-supervised representation learning. However, object re-ID systems, when applied to identify pedestrians and vehicles in surveillance systems, might give rise to the infringement of people's privacy, since such re-ID systems often rely on non-consensual surveillance data for training, i.e., it is unlikely that all human subjects even knew they were being recorded. Therefore, governments and officials need to carefully establish strict regulations and laws to control the usage of re-ID technologies. Otherwise, re-ID technologies can potentially equip malicious actors with the ability to surveil pedestrians or vehicles through multiple CCTV cameras without their consent. The research community should also avoid using datasets with ethics issues; e.g., DukeMTMC [37], which has been taken down due to the violation of data collection terms, should no longer be used. We do not evaluate our method on DukeMTMC-related benchmarks either.
Furthermore, we should be cautious about misidentifications made by re-ID systems, so as to avoid the possible disturbance they may cause. Also, note that the demographic makeup of the datasets used is not representative of the broader population. Acknowledgements This work is supported in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants Nos. CUHK14208417 and CUHK14207319, in part by the Hong Kong Innovation and Technology Support Program (No. ITS/312/18FX), and in part by the CUHK Strategic Fund.
1. What is the main contribution of the paper regarding unsupervised domain adaptation for object re-ID? 2. What are the strengths and weaknesses of the proposed approach, particularly in its assumptions, methodology, and experimental design? 3. Do you have any questions or concerns about the clarity and sufficiency of the ablation studies presented in the paper? 4. How does the reviewer assess the novelty and generality of the proposed method compared to existing works in UDA and object re-ID? 5. Are there any suggestions or recommendations for improving the method's robustness, explanatory power, and comparisons with other approaches?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work addresses the task of unsupervised domain adaptation for object re-ID. It proposes to use a contrastive learning framework with source-domain class-level, target-domain cluster-level and target-domain instance-level supervision. It also defines two criteria of independence and compactness to help obtain reliable clusters for learning. Experiments are conducted on person and vehicle re-ID, and some ablation studies are also presented. Strengths + The task of unsupervised domain adaptation is interesting and challenging. + Multiple datasets are used for evaluations. + Related works are appropriately discussed and compared. Weaknesses - The main idea of this method is unified contrastive learning. However, the strategy of jointly learning the source and target domains is not new, although different methods implement it with different losses (e.g., in [57,58]). It is also natural that the performance on the source domain with joint learning of source and target domains is higher than fine-tuning with target data only. Besides, the form of non-parametric contrastive learning is widely used in general unsupervised visual representation learning methods (such as MoCo and SimCLR) and is not new in this method. - The assumption of the proposed unified contrastive learning is that the source domain has classes disjoint from the target domain, as it needs to collect cross-domain samples as negatives. This may hold for the current UDA benchmarks, but the generality of a method based on such an assumption is limited in real-world practical application scenarios where no prior knowledge is available on the target data. Existing methods which optimize the source and target domains separately thus show more advantages in this aspect. - It is not clear why optimizing class-level and instance-level contrastive losses simultaneously will work. Class-level supervision differs from instance-level supervision as an optimization target. The experiments of MoCo on UDA do not work, which also implies that instance-level supervision is not suitable for distinguishing semantic classes in object re-ID tasks. The paper lacks sufficient explanations and corresponding ablation studies to clarify this point, and the current content and experiments do not convince me why such a contrastive loss can work. - The ablation studies are not clear and sufficient enough. (1) What are the differences between "src class + tgt class (w/o self-paced)" and "ours w/o self-paced r_comp & r_indep"? It is not clear which algorithmic components self-paced learning contains, and the paper lacks necessary detailed descriptions of the settings of these experiments. (2) The ablation studies of different combinations of class-level, cluster-level and instance-level supervision are not presented. Since unified contrastive learning is the core idea of this method, these experiments are necessary but unfortunately missing. (3) I am also confused by the differences between w/o self-paced and Delta_d=0. (4) Why did using learnable classifiers perform worse than using class centroids for the source domain? The paper also lacks necessary theoretical analysis and explanations here. - The strategies of independence and compactness of clusters seem tricky and incremental. The strategies need multiple manually set parameters based on DBSCAN clustering. From Table 5, on the Market-to-Duke task, only 0.8% mAP drops w/o r_indep and only 1.3% mAP drops w/o r_comp. The results imply incremental contributions of these strategies. - In the unified contrastive learning (Eq. 1),
if f is a target-domain un-clustered outlier, it is not clear how to collect its corresponding positive samples. - Softmax-based losses and triplet losses are widely used in object re-ID tasks. It is necessary to compare them with the contrastive loss, but the comparisons and analysis are missing in this work. - The parameter analysis experiments show that tuning the temperature parameter has a large impact on the final re-ID performance, e.g., 68.8% with 0.05 vs. 57.4% with 0.09. Such a large gap (11.4% mAP) is even higher than that of other major algorithmic components. It may imply that this method is sensitive to this parameter and not robust enough to extend to other tasks. It also raises the concern of whether the improvement mainly comes from hyper-parameter tuning. - In Figure 3, the metric of cluster number is not good enough to show the quality of clustering. A better way is to use some quantitative metrics (e.g., NMI or F-measure) to check how good or bad the clusters a method obtains are.
NIPS
Title Scalable Spike Source Localization in Extracellular Recordings using Amortized Variational Inference Abstract Determining the positions of neurons in an extracellular recording is useful for investigating functional properties of the underlying neural circuitry. In this work, we present a Bayesian modelling approach for localizing the source of individual spikes on high-density microelectrode arrays. To allow for scalable inference, we implement our model as a variational autoencoder and perform amortized variational inference. We evaluate our method on both biophysically realistic simulated and real extracellular datasets, demonstrating that it is more accurate than heuristic localization methods such as center of mass and can improve spike sorting performance over them. 1 Introduction Extracellular recordings, which measure local potential changes due to ionic currents flowing through cell membranes, are an essential source of data in experimental and clinical neuroscience. The most prominent signals in these recordings originate from action potentials (spikes), the all-or-none events neurons produce in response to inputs and transmit as outputs to other neurons. Traditionally, a small number of electrodes (channels) are used to monitor spiking activity from a few neurons simultaneously. Recent progress in microfabrication now allows for extracellular recordings from thousands of neurons using microelectrode arrays (MEAs), which have thousands of closely spaced electrodes [13, 2, 14, 1, 36, 55, 32, 25, 12]. These recordings provide insights that cannot be obtained by pooling multiple single-electrode recordings [27]. This is a significant development, as it enables systematic investigations of large circuits of neurons to better understand their function and structure, as well as how they are affected by injury, disease, and pharmacological interventions [20]. On dense MEAs, each recording channel may record spikes from multiple nearby neurons, while each neuron may leave an extracellular footprint on multiple channels. Inferring the spiking activity of individual neurons, a task called spike sorting, is therefore a challenging blind source separation problem, complicated by the large volume of recorded data [46]. Despite the challenges presented by spike sorting large-scale recordings, its importance cannot be overstated, as it has been shown that isolating the activity of individual neurons is essential to understanding brain function [35]. Recent efforts have concentrated on providing scalable spike sorting algorithms for large-scale MEAs, and already several methods can be used for recordings taken from hundreds to thousands of channels [42, 31, 10, 54, 22, 26]. However, scalability, and in particular automation, of spike sorting pipelines remains challenging [8]. One strategy for spike sorting on dense MEAs is to spatially localize detected spikes before clustering. In theory, spikes from the same neuron should be localized to the same region of the recording area (near the cell body of the firing neuron), providing discriminatory, low-dimensional features for each spike that can be utilized with efficient density-based clustering algorithms to sort large data sets with tens of millions of detected spikes [22, 26].
These location estimates, while useful for spike sorting, can also be exploited in downstream analyses, for instance to register recorded neurons with anatomical information or to identify the same units from trial to trial [9, 22, 41]. Despite the potential benefits of localization, preexisting methods have a number of limitations. First, most methods are designed for low-channel-count recording devices, making them difficult to use with dense MEAs [9, 51, 3, 30, 29, 34, 33, 50]. Second, current methods for dense MEAs utilize cleaned extracellular action potentials (through spike-triggered averaging), disallowing their use before spike sorting [48, 6]. Third, all current model-based methods, to our knowledge, are non-Bayesian, relying primarily on numerical optimization to infer the underlying parameters. Given these limitations, the only localization methods used consistently before spike sorting are simple heuristics such as a center of mass calculation [38, 44, 22, 26]. In this paper, we present a scalable Bayesian modelling approach for spike localization on dense MEAs (less than ∼50 µm between channels) that can be performed before spike sorting. Our method consists of a generative model, a data augmentation scheme, and an amortized variational inference method implemented with a variational autoencoder (VAE) [11, 28, 47]. Amortized variational inference has been used in neuroscience for applications such as predicting action potentials from calcium imaging data [52] and recovering latent dynamics from single-trial neural spiking data [43]; however, to our knowledge, it has not been applied to extracellular recordings. After training, our method allows for localization of one million spikes (from high-density MEAs) in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets. To evaluate our method, we use biophysically realistic simulated data, demonstrating that our localization performance is significantly better than the center of mass baseline and can lead to higher-accuracy spike sorting results across multiple probe geometries and noise levels. We also show that our trained VAE can generalize to recordings on which it was not trained. To demonstrate the applicability of our method to real data, we assess our method qualitatively on real extracellular datasets from a Neuropixels [25] probe and from a BioCam4096 recording platform. To clarify, our contribution is not a full spike sorting solution. Although we envision that our method can be used to improve spike sorting algorithms that currently rely on center of mass location estimates, interfacing with and evaluating these algorithms was beyond the scope of this paper. 2 Background 2.1 Spike localization We start by introducing relevant notation. First, we define the identities and positions of neurons and channels. Let n := {n_i}_{i=1}^M be the set of M neurons in the recording and c := {c_j}_{j=1}^N the set of N channels on the MEA. The position of a neuron n_i can be defined as p_{n_i} := (x_{n_i}, y_{n_i}, z_{n_i}) ∈ R^3 and, similarly, the position of a channel c_j as p_{c_j} := (x_{c_j}, y_{c_j}, z_{c_j}) ∈ R^3. We further denote p_c := {p_{c_j}}_{j=1}^N, the positions of all N channels on the MEA. In our treatment of this problem, the neuron and channel positions are single points that represent the centers of the somas and the centers of the channels, respectively. These positions are relative to the origin, which we set to be the center of the MEA.
For the neuron n_i, let s_i := {s_{i,k}}_{k=1}^{K_i} be the set of spikes detected during the recording, where K_i is the total number of spikes fired by n_i. The recorded extracellular waveform of s_{i,k} on a channel c_j can then be defined as w_{i,k,j} := {r^{(0)}_{i,k,j}, r^{(1)}_{i,k,j}, ..., r^{(t)}_{i,k,j}, ..., r^{(T)}_{i,k,j}}, where r^{(t)}_{i,k,j} ∈ R and t = 0, ..., T. The set of waveforms recorded by all N channels of the MEA during the spike s_{i,k} is defined as w_{i,k} := {w_{i,k,j}}_{j=1}^N. Finally, for the spike s_{i,k}, the point source location can be defined as p_{s_{i,k}} := (x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}}) ∈ R^3. The problem we attempt to solve can now be stated as follows: localizing a spike s_{i,k} is the task of finding the corresponding point source location p_{s_{i,k}}, given the observed waveforms w_{i,k} and the channel positions p_c. We make the assumption that the point source location p_{s_{i,k}} is actually the location of the firing neuron's soma, p_{n_i}. Given the complex morphological structure of many neurons, this assumption may not always be correct, but it provides a simple way to assess localization performance and evaluate future models. 2.2 Center of mass Many modern spike sorting algorithms localize spikes on MEAs using the center of mass or barycenter method [44, 22, 26]. We summarize the traditional steps for localizing a spike s_{i,k} using this method. First, let us define α_{i,k,j} := min_t w_{i,k,j} to be the negative amplitude peak of the waveform w_{i,k,j} generated by s_{i,k} and recorded on channel c_j. We consider the negative peak amplitude as a matter of convention, since spikes are defined as inward currents. Then, let α_{i,k} := (α_{i,k,j})_{j=1}^N be the vector of all amplitudes generated by s_{i,k} and recorded by the N channels of the MEA. To find the center of mass of a spike s_{i,k}, the first step is to determine the central channel for the calculation. This central channel is set to be the channel which records the minimum amplitude for the spike, c_{j_min} := c_{argmin_j α_{i,k,j}}. The second and final step is to take the L closest channels to c_{j_min} and compute

\hat{x}_{s_{i,k}} = \frac{\sum_{j=1}^{L+1} x_{c_j} |\alpha_{i,k,j}|}{\sum_{j=1}^{L+1} |\alpha_{i,k,j}|}, \qquad \hat{y}_{s_{i,k}} = \frac{\sum_{j=1}^{L+1} y_{c_j} |\alpha_{i,k,j}|}{\sum_{j=1}^{L+1} |\alpha_{i,k,j}|},

where the positions and recorded amplitudes of all L + 1 channels contribute to the center of mass calculation. The center of mass method is inexpensive to compute and has been shown to give informative location estimates for spikes in both real and synthetic data [44, 37, 22, 26]. Center of mass, however, suffers from two main drawbacks: first, since the chosen channels form a convex hull, the center of mass location estimates must lie inside the channels' locations, negatively impacting location estimates for neurons outside of the MEA. Second, center of mass is biased towards the chosen central channel, potentially leading to artificial separation of location estimates for spikes from the same neuron [44]. 3 Method In this section, we introduce our scalable, model-based approach to spike localization. We describe the generative model, the data augmentation procedure, and the inference methods. 3.1 Model Our model uses the recorded amplitudes on each channel to determine the most likely source location of s_{i,k}. We assume that the peak signal from a spike decays exponentially with the distance from the source, r: a exp(br), where a, b ∈ R and r ∈ R_+. This assumption is well motivated by experimentally recorded extracellular potential decay in both a salamander and a mouse retina [49, 22], as well as in cat cortex [16].
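Section 2.2 translates directly into a few lines of code. The sketch below is our own minimal rendering of the center of mass computation for a single spike, assuming the amplitudes and 2D channel positions have already been extracted.

```python
import numpy as np

def center_of_mass(amps, channel_xy, L=6):
    """Center-of-mass localization of one spike.

    amps:       (N,) negative peak amplitudes alpha_{i,k,j} on all channels
    channel_xy: (N, 2) channel positions in the plane of the MEA
    L:          number of neighboring channels used besides the central one
    """
    c_min = np.argmin(amps)                          # channel with the most negative peak
    dists = np.linalg.norm(channel_xy - channel_xy[c_min], axis=1)
    idx = np.argsort(dists)[:L + 1]                  # the L + 1 closest channels
    w = np.abs(amps[idx])                            # |alpha| weights
    return (channel_xy[idx] * w[:, None]).sum(axis=0) / w.sum()
```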
It has also been further corroborated using realistic biophysical simulations [18]. We utilize this exponential assumption to infer the source location of a spike s_{i,k}, since localization is then equivalent to solving for s_{i,k}'s unknown parameters, θ_{s_{i,k}} := {a_{i,k}, b_{i,k}, x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}}}, given the observed amplitudes α_{i,k}. To allow for localization without knowing the identity of the firing neuron, we assume that each spike has individual exponential decay parameters a_{i,k}, b_{i,k} and an individual source location p_{s_{i,k}}. We find, however, that fixing b_{i,k} for all spikes to a constant equal to an empirical estimate from the literature (a decay length of ∼28 µm) works best across multiple probe geometries and noise levels, so we did not infer the value of b_{i,k} in our final method. We will refer to the fixed decay rate as b and exclude it from the unknown parameters moving forward. The generative process of our exponential model is as follows:

a_{i,k} ∼ N(μ_{a_{i,k}}, σ_a), x_{s_{i,k}} ∼ N(μ_{x_{s_{i,k}}}, σ_x), y_{s_{i,k}} ∼ N(μ_{y_{s_{i,k}}}, σ_y), z_{s_{i,k}} ∼ N(μ_{z_{s_{i,k}}}, σ_z),
\hat{r}_{i,k} = ‖(x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}}) − p_c‖_2, \quad α_{i,k} ∼ N(a_{i,k} \exp(b \hat{r}_{i,k}), I). \qquad (1)

In our observation model, the amplitudes are drawn from an isotropic Gaussian distribution with a variance of one. We chose this Gaussian observation model for computational simplicity and because it is convenient to work with when using VAEs. We discuss the limitations of our modeling assumptions in Section 5 and propose several extensions for future work. For our prior distributions, we were careful to set sensible parameter values. We found that inference, especially for a spike detected near the edge of the MEA, is sensitive to the mean of the prior distribution of a_{i,k}; therefore, we set μ_{a_{i,k}} = λ α_{i,k,j_min}, where α_{i,k,j_min} is the smallest negative amplitude peak of s_{i,k}. We chose this heuristic because the absolute value of α_{i,k,j_min} will always be smaller than the absolute value of the amplitude of the spike at the source location, due to potential decay. Therefore, scaling α_{i,k,j_min} by λ gives a sensible value for μ_{a_{i,k}}. We empirically chose λ = 2 for the final method after performing a grid search over λ = {1, 2, 3}. The parameter σ_a does not have a large effect on the inferred location, so we set it to be approximately the standard deviation of the α_{i,k,j_min} (50). The location prior means μ_{x_{s_{i,k}}}, μ_{y_{s_{i,k}}}, μ_{z_{s_{i,k}}} are set to the location of the minimum-amplitude channel, p_{c_{j_min}}, for the given spike. The location prior standard deviations σ_x, σ_y, σ_z are set to large constant values to flatten out the distributions, since we do not want the location estimate to be overly biased towards p_{c_{j_min}}. 3.2 Data Augmentation For localization to work well, the input channels should be centered around the peak spike, which is hard for spikes near the edges (edge spikes). To address this issue, we employ a two-step data augmentation. First, inputs for edge spikes are padded such that the channel with the largest amplitude is at the center of the inputs. Second, all channels are augmented with an indicator variable which provides a signal to distinguish them for the inference network. To be more specific, we introduce virtual channels outside of the MEA which have the same layout as the real, recording channels (see Appendix C). We refer to a virtual channel as an "unobserved" channel, c_{j_u}, and to a real channel on the MEA as an "observed" channel, c_{j_o}.
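Before the augmentation details continue below, the forward model of Eq. (1) can be sketched as follows; the ∼28 µm decay length and the unit noise variance come from the text, while the function name and the negative-amplitude convention for a are our own assumptions.

```python
import numpy as np

def simulate_amplitudes(src_xyz, a, channel_xyz, b=-1.0 / 28.0, rng=None):
    """Eq. (1) forward model: peak amplitude decays exponentially with distance.

    src_xyz:     (3,) candidate source location (x, y, z)
    a:           peak amplitude at the source (negative, by convention)
    channel_xyz: (N, 3) channel positions p_c
    b:           fixed decay rate, here corresponding to a ~28 um decay length
    """
    rng = rng or np.random.default_rng()
    r_hat = np.linalg.norm(channel_xyz - np.asarray(src_xyz), axis=1)
    mean = a * np.exp(b * r_hat)                          # exponential decay of the peak
    return mean + rng.standard_normal(len(channel_xyz))   # unit-variance Gaussian noise
```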
We define the amplitude on an unobserved channel, α_{i,k,j_u}, to be zero, since unobserved channels do not actually record any signals. We let the amplitude for an observed channel, α_{i,k,j_o}, be equal to min_t w_{i,k,j_o}, as before. Before defining the augmented dataset, we must first introduce an indicator function, 1_o : α → {0, 1}:

1_o(α) = 1 if α is from an observed channel, and 0 if α is from an unobserved channel,

where α is an amplitude from any channel, observed or unobserved. To construct the augmented dataset for a spike s_{i,k}, we take the set of L channels that lie within a bounding box of width W centered on the observed channel with the minimum recorded amplitude, c_{j_o,min}. We define our newly augmented observed data for s_{i,k} as

β_{i,k} := {(α_{i,k,j}, 1_o(α_{i,k,j}))}_{j=1}^L. \qquad (2)

So, for a single spike, we construct an L × 2 dimensional vector that contains amplitudes from L channels and indicators of whether the amplitudes came from observed or unobserved channels. Since the prior location for each spike is at the center of the subset of channels used for the observed data, the data augmentation puts the prior closer to the edge for edge spikes and is, therefore, more informative for localizing spikes near or off the edge of the array. Also, since edge spikes are typically seen on fewer channels, the data augmentation serves to ignore channels which are far away from the spike, which would otherwise be used if the augmentation were not employed. 3.3 Inference Now that we have defined the generative process and data augmentation procedure, we would like to compute the posterior distribution of the unknown parameters of a spike s_{i,k},

p(a_{i,k}, x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}} | β_{i,k}), \qquad (3)

given the augmented dataset β_{i,k}. To infer the posterior distribution for each spike, we utilize two methods of Bayesian inference: MCMC sampling and amortized variational inference. 3.3.1 MCMC sampling We use MCMC to assess the validity and applicability of our model to extracellular data. We implement our model in Turing [15], a probabilistic modeling language in Julia. We run Hamiltonian Monte Carlo (HMC) [39] for 10,000 iterations with a step size of 0.01 and a step number of 10. We use the posterior means of the location distributions as the estimated location. (The code for our MCMC implementation is provided in Appendix H.) Despite the ease of use of probabilistic programming and the asymptotically guaranteed inference quality of MCMC methods, the scalability of MCMC methods to large-scale datasets is limited. This leads us to implement our model as a VAE and to perform amortized variational inference for our final method. 3.3.2 Amortized variational inference To speed up inference of the spike parameters, we construct a VAE and use amortized variational inference to estimate posterior distributions for each spike. In variational inference, instead of sampling from the intractable target posterior distribution of interest, we construct a tractable variational distribution and minimize the Kullback–Leibler (KL) divergence between the variational posterior and the true posterior. Minimizing the KL divergence is equivalent to maximizing the evidence lower bound (ELBO) on the log marginal likelihood of the data. In VAEs, the parameters of the variational posterior are not optimized directly but are, instead, computed by an inference network.
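A minimal sketch of the augmentation of Eq. (2): collect the channels inside a W-wide box around the peak channel of an extended grid that includes the virtual channels, pairing each amplitude with its observed/unobserved indicator. The grid representation and helper name are our own assumptions.

```python
import numpy as np

def augment_spike(amps, channel_xy, grid_xy, W=20.0):
    """Build beta_{i,k} of Eq. (2) for one spike.

    amps:       (N,) peak amplitudes on the real (observed) channels
    channel_xy: (N, 2) positions of the real channels
    grid_xy:    (G, 2) positions of the extended layout, real plus virtual channels
    W:          width of the bounding box around the peak channel
    """
    center = channel_xy[np.argmin(amps)]              # channel with the minimum amplitude
    beta = []
    for pos in grid_xy:
        if np.max(np.abs(pos - center)) > W / 2:      # outside the bounding box
            continue
        match = np.where((channel_xy == pos).all(axis=1))[0]
        if len(match):                                # observed channel: (alpha, 1)
            beta.append((amps[match[0]], 1.0))
        else:                                         # virtual channel: (0, 0)
            beta.append((0.0, 0.0))
    return np.array(beta)                             # shape (L, 2)
```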
We define our variational posterior for x, y, z as a multivariate Normal with diagonal covariance, where the mean and the diagonal of the covariance matrix are computed by an inference network,

q_Φ(x, y, z) = N(μ_{φ_1}(f_{φ_0}(υ_{i,k})), σ^2_{φ_2}(f_{φ_0}(υ_{i,k}))). \qquad (4)

The inference network is implemented as a feed-forward deep neural network parameterized by Φ = {φ_0, φ_1, φ_2}. As one can see, the variational parameters are a function of the input υ. When using an inference network, the input can be any part of the dataset, so for our method we use υ_{i,k} as the input for each spike s_{i,k}, which is defined as follows:

υ_{i,k} := {(w_{i,k,j}, 1_o(α_{i,k,j}))}_{j=1}^L, \qquad (5)

where w_{i,k,j} is the waveform detected on the j-th channel (defined in Section 2.1). Similar to our previous augmentation, the waveform for an unobserved channel is set to all zeros. We choose to input the waveforms rather than the amplitudes because, empirically, this encourages the inferred location estimates for spikes from the same neuron to be better localized to the same region of the MEA. For both the real and simulated datasets, we used ∼2 ms of readings for each waveform. The decoder for our method reconstructs the amplitudes from the observed data rather than the waveforms. Since we assume an exponential decay for the amplitudes, the decoder is a simple Gaussian likelihood function: given the Euclidean distance vector \hat{r}_{i,k}, computed from samples of the variational posterior, the decoder reconstructs the mean value of the observed amplitudes with a fixed variance. The decoder is parameterized by the exponential parameters of the given spike s_{i,k}, so it reconstructs the amplitudes of the augmented data, β^{(0)}_{i,k}, with the following expression:

\hat{β}^{(0)}_{i,k} := a_{i,k} \exp(b \hat{r}_{i,k}) × β^{(1)}_{i,k},

where \hat{β}^{(0)}_{i,k} are the reconstructed observed amplitudes. By multiplying the reconstructed amplitude vector by β^{(1)}_{i,k}, which consists of zeros and ones (see Eq. (5)), the unobserved channels are reconstructed with amplitudes of zero and the observed channels with the exponential function. For our VAE, instead of estimating the distribution of a_{i,k}, we directly optimize a_{i,k} when maximizing the lower bound. We set the initial value of a_{i,k} to the mean of its prior. Thus, a_{i,k} can be read as a parameter of the decoder. Given our inference network and decoder, the ELBO we maximize for each spike s_{i,k} is given by

\log p(β_{i,k}; a_{i,k}) ≥ −KL[q_Φ(x, y, z) ‖ p_x p_y p_z] + E_{q_Φ}\left[ \sum_{l=1}^L \log N(β^{(0)}_{i,k,l} | a_{i,k} \exp(b \hat{r}_{i,k,l}), 1) \cdot β^{(1)}_{i,k,l} \right],

where KL is the KL divergence. The location priors p_x, p_y, p_z are normally distributed as described in Section 3.1, with means of zero (the position of the maximum-amplitude channel in the observed data) and variances of 80. For more information about the architecture and training, see Appendix F. 3.3.3 Stabilized Location Estimation In this model, the channel on which the input is centered can bias the estimate of the spike location, particularly when amplitudes are small. To reduce this bias, we can create multiple inputs for the same spike, where each input is centered on a different channel. During inference, we can average the inferred locations for each of these inputs, thus lowering the central-channel bias. To this end, we introduce a hyperparameter, amplitude jitter, where for each spike s_{i,k} we create multiple inputs centered on channels with peak amplitudes within a small voltage of the maximum amplitude α_{i,k,j}.
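Before the jitter settings below, here is a single-spike sketch of the ELBO above: one reparameterized sample of the location, the masked exponential decoder, and a closed-form KL against the N(0, 80) location priors. Shapes and names are our own assumptions; location coordinates are relative to the central channel, matching the zero prior means.

```python
import torch

def spike_elbo(amps, obs_mask, q_mu, q_logvar, a, channel_xyz, b=-1.0 / 28.0,
               prior_var=80.0):
    """One-sample ELBO for a single spike.

    amps:          (L,) augmented amplitudes beta^(0)
    obs_mask:      (L,) observed/unobserved indicators beta^(1)
    q_mu/q_logvar: (3,) inference-network outputs parameterizing q(x, y, z)
    a:             decoder parameter a_{i,k}, optimized directly
    channel_xyz:   (L, 3) channel positions relative to the central channel
    """
    eps = torch.randn_like(q_mu)
    loc = q_mu + torch.exp(0.5 * q_logvar) * eps             # reparameterized sample
    r_hat = torch.norm(channel_xyz - loc, dim=1)             # distances to channels
    recon = a * torch.exp(b * r_hat) * obs_mask              # masked exponential decoder
    log_lik = (-0.5 * (amps - recon) ** 2 * obs_mask).sum()  # unit-variance Gaussian
    # KL( N(mu, sigma^2) || N(0, prior_var) ), summed over x, y, z
    kl = 0.5 * (torch.exp(q_logvar) / prior_var + q_mu ** 2 / prior_var
                - 1.0 + torch.log(torch.tensor(prior_var)) - q_logvar).sum()
    return log_lik - kl
```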
3.3.3 Stabilized Location Estimation

In this model, the channel on which the input is centered can bias the estimate of the spike location, in particular when amplitudes are small. To reduce this bias, we can create multiple inputs for the same spike, where each input is centered on a different channel. During inference, we average the inferred locations over these inputs, thus lowering the central-channel bias. To this end, we introduce a hyperparameter, the amplitude jitter: for each spike, s_{i,k}, we create multiple inputs centered on channels whose peak amplitudes lie within a small voltage of the maximum amplitude, \alpha_{i,k,j}. We use two values for the amplitude jitter in our experiments: 0 µV and 10 µV. When the amplitude jitter is set to 0 µV, no averaging is performed; when it is set to 10 µV, all channels with peak amplitudes within 10 µV of \alpha_{i,k,j} are used as inputs to the VAE and the resulting locations are averaged during inference.

4 Experiments

4.1 Datasets

We simulate biophysically realistic ground-truth extracellular recordings to test our model against a variety of real-life complexities. The simulations are generated using the MEArec [4] package, which includes 13 layer-5 juvenile rat somatosensory cortex neuron models from the Neocortical Microcircuit Collaboration portal [45]. We simulate three recordings with increasing noise levels (ranging from 10 µV to 30 µV) for two probe geometries: a 10×10-channel square MEA with a 15 µm inter-channel distance, and 64 channels from a Neuropixels probe (∼25-40 µm inter-channel distance). Our simulations contain 40 excitatory cells and 10 inhibitory cells with random morphological subtypes, randomly distributed and rotated in 3D space around the probe (with a 20 µm minimum distance between somas). Each dataset has about 20,000 spikes in total (60-second duration). For more details on the simulation and noise model, see Appendix G.

For the real datasets, we use public data from a Neuropixels probe [32] and from a mouse retina recorded with the BioCam4096 platform [24]. The two datasets have 6 million and 2.2 million spikes, respectively. Spike detection and sorting (with our location estimates) are done using the HerdingSpikes2 software [22].

4.2 Evaluation

Before evaluating the localization methods, we must detect the spikes from each neuron in the simulated recordings. To avoid biasing our results by the choice of detection algorithm, we assume perfect detection, extracting waveforms from channels near each spiking neuron. Once the waveforms are extracted from the recordings, we perform the data augmentation. For the square MEA we use W = 20 and 40, which gives L = 4-9 and 9-25 real channels in the observed data, respectively. For the simulated Neuropixels probe, we use W = 35 and 45, which gives L = 3-6 and 8-14 real channels in the observed data, respectively. Once we have the augmented dataset, we generate location estimates for all the datasets using each localization method. For a straightforward comparison with center of mass, we only evaluate the 2D location estimates (in the plane of the recording device).

In the first evaluation, we assess the accuracy of each method by computing the Euclidean distance between the estimated spike locations and the associated firing neurons. We report the mean and standard deviation of the localization error for all spikes in each recording. In the second evaluation, we cluster the location estimates of each method using Gaussian mixture models (GMMs). The GMMs are fit with spherical covariances, with the number of mixture components ranging from 45 to 75 (in steps of 5). We report the true positive rate and accuracy for each number of mixture components when matched back to ground truth. To be clear, our use of GMMs is not a proposed spike sorting method for real data (the number of clusters is never known a priori), but rather a systematic way to evaluate whether our location estimates are more discriminable features than those of center of mass. In the third evaluation, we again use GMMs to cluster the location estimates, this time combined with two principal components from each spike.
We report the true positive rate and accuracy for each number of mixture components as before. Combining location estimates and principal components explicitly, to create a new, low-dimensional feature set, was introduced in Hilgen (2017). In that work, the principal components are whitened and then scaled by a hyperparameter, α. To remove any bias from choosing an α value in our evaluation, we conduct a grid search over α = {4, 6, 8, 10} and report the best metric scores for each method. In the fourth evaluation, we assess the generalization performance of the method by training a VAE on one extracellular dataset and then using it to infer the spike locations in another dataset in which the neuron locations differ but all other aspects are kept the same (10 µV noise level, square MEA). The localization and sorting performance is then compared to that of a VAE trained directly on the second dataset and to center of mass.

Taken together, the first evaluation demonstrates how useful each method is purely as a localization tool, the second demonstrates how useful the location estimates are for spike sorting immediately after localization, the third demonstrates how much the performance can improve given extra waveform information, and the fourth demonstrates how our method can be used across similar datasets without retraining. For all of our sorting analysis, we use SpikeInterface version 0.9.1 [5].
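For the GMM-based evaluations, the following is a minimal scikit-learn sketch of the clustering-and-scoring loop. It assumes a Hungarian one-to-one assignment to match clusters back to ground-truth neurons; the matching procedure is not spelled out above, so this step is our assumption, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

def sorting_accuracy(locations, true_ids, n_components):
    """Cluster 2D spike locations with a spherical GMM and score the result
    against ground truth after one-to-one matching of clusters to neurons."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="spherical", random_state=0)
    pred = gmm.fit_predict(locations)                     # (n_spikes,)
    neurons = np.unique(true_ids)
    # Contingency table: rows = predicted clusters, columns = true neurons.
    table = np.zeros((n_components, len(neurons)))
    for p, t in zip(pred, np.searchsorted(neurons, true_ids)):
        table[p, t] += 1
    rows, cols = linear_sum_assignment(-table)            # maximize matches
    return table[rows, cols].sum() / len(true_ids)

# Sweep over mixture sizes as in the evaluation above, e.g.:
# scores = {k: sorting_accuracy(locs, ids, k) for k in range(45, 80, 5)}
```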
4.3 Results

Table 1 reports the localization accuracy of the different localization methods for the square MEA at three different noise levels. Our model-based methods far outperform center of mass with any number of observed channels. As expected, introducing amplitude jitter helps lower the mean and standard deviation of the localization error. Using a small width of 20 µm when constructing the augmented data (4-9 observed channels) gives the highest performance for the square MEA.

Figure 2: Spike sorting performance on the square MEA (precision, recall, and accuracy versus the number of mixtures, at 10 µV, 20 µV, and 30 µV noise). We compare the sorting performance of the VAE localization method and the COM localization method, with and without principal components, across all noise levels. For the VAE, we include the results with 0 µV and 10 µV amplitude jitter and with different numbers of observed channels (4-9 and 9-25). For COM, we plot the highest sorting performance (25 observed channels). The test dataset has 50 neurons.

The location estimates for the square MEA are visualized in Figure 1. Recording channels are plotted as grey squares and the true soma locations are plotted as black stars. The estimated individual spike locations are colored according to their associated firing neuron identity. As can be seen in the plot, center of mass suffers both from artificial splitting of location estimates and from poor performance on neurons outside the array, two areas in which the model-based approaches excel. The MCMC and VAE methods produce very similar location estimates, highlighting the success of our variational inference in approximating the true posterior. See Appendix A for a location estimate plot when the VAE is trained and tested on simulated Neuropixels recordings.

In Figure 2, spike sorting performance on the square MEA is visualized for all localization methods (with and without waveform information). Here, we only show the sorting results for center of mass with 25 observed channels, where it performs at its best. Overall, the model-based approaches have significantly higher precision, recall, and accuracy than center of mass across all noise levels and all numbers of mixtures. This illustrates how model-based location estimates provide a much more discriminatory feature set than the location estimates from the center of mass approaches. We also find that the addition of waveform information (in the form of principal components) improves spike sorting performance for all localization methods. See Appendix A for a spike sorting performance plot when the VAE is trained and tested on simulated Neuropixels recordings.

As shown in Appendix D, when our method is trained on one simulated recording, it generalizes well to another simulated recording with different neuron locations. The localization accuracy and sorting performance are only slightly lower than those of a VAE trained directly on the new recording. Our method also still outperforms center of mass on the new dataset, even without training on it.

Figure 3 shows our localization method applied to two real, large-scale extracellular datasets. In these plots, we color the location estimates based on their unit identity after spike sorting with HerdingSpikes2. These extracellular recordings do not have ground-truth information, as current ground-truth recordings are limited to a few labeled neurons [56, 19, 21, 40, 54]. Therefore, to demonstrate that the units we find likely correspond to individual neurons, we visualize waveforms from a local grouping of sorted units on the Neuropixels probe. This analysis illustrates that our method can already be applied to large-scale, real extracellular recordings.

In Appendix E, we demonstrate that the inference time for the VAE is much faster than that of MCMC, highlighting the excellent scalability of our method. The inference speed of the VAE allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of large-scale extracellular datasets.

5 Discussion

Here, we introduce a Bayesian approach to spike localization using amortized variational inference. Our method significantly improves localization accuracy and spike sorting performance over the preexisting baseline while remaining scalable to the large volumes of data generated by MEAs. Scalability is particularly relevant for recordings from thousands of channels, where a single experiment may yield on the order of 100 million spikes. We validate the accuracy of our model assumptions and inference scheme using biophysically realistic ground-truth simulated recordings that capture much of the variability seen in real recordings.

Despite the realism of our simulated recordings, there are some factors that we did not account for, including bursting cells with event-amplitude fluctuations, electrode drift, and realistic intrinsic variability of recorded spike waveforms. As these factors are difficult to model, future analysis of real recordings or advances in modeling software will help to understand possible limitations of the method. Along with limitations of the simulated data, there are also limitations of our model. Although we assume a monopole current source, every part of the neuronal membrane can produce action potentials [7].
This means that a more complicated model, such as a dipole current [50], a line current source [50], or a modified ball-and-stick model [48], might be a better fit to the data. Since these models have only ever been used after spike sorting, however, the extent to which they can improve localization performance before spike sorting is unclear and is something we would like to explore in future work. Also, our model utilizes a Gaussian observation model for the spike amplitudes. In real recordings, the true noise distribution is often non-Gaussian and is better approximated by pink noise models (1/f noise) [53]. We plan to explore more realistic observation models in future work.

Since our method is Bayesian, we hope to make better use of the uncertainty of the location estimates in future work. Also, as our inference network is fully differentiable, we imagine that our method can be used as a submodule in a more complex, end-to-end method. Other work indicates there is scope for constructing more complicated models to perform event detection and classification [31], and to distinguish between different morphological neuron types based on their activity footprint on the array [6]. Our work is thus a first step towards using amortized variational inference methods for the unsupervised analysis of complex electrophysiological recordings.
1. What are the strengths and weaknesses of the proposed biologically inspired model for spike localization?
2. How does the reviewer assess the necessity and effectiveness of data augmentation in the model?
3. What are the issues with the spike sorting analysis, and how could it be improved with better comparisons and evaluation metrics?
4. How does the reviewer evaluate the scalability of the method, particularly regarding its ability to handle large numbers of channels?
5. Are there any concerns or suggestions regarding the potential applications and future developments of the proposed approach?
Review
The paper is fairly clear and proposes a novel, biologically inspired model for spike localization. Largely, it is well-done and provides new paths for exploring the link between individual neurons and electrophysiological properties. It could be used later on for identifying properties of subtypes of neurons and their biological role, for instance by matching multiple sensing techniques. However, there are a few issues.

1. To me, it's unclear why the data augmentation is truly necessary. Under the model, I feel like it would work without this step. An ablation analysis of what it actually accomplishes and a clear, precise description of why it is helpful would be beneficial.

2. The spike sorting analysis is frustrating for a number of reasons. First, the authors sweep over the number of clusters to report the results. The fact that the number of clusters is unknown a priori is one of the biggest issues, so this is unrealistic. Second, there is a complete lack of comparisons to state-of-the-art methods. Many, many methods with publicly available code are available for such datasets, including more useful evaluation metrics (e.g., [1]). The claim that combining location estimates and waveform estimation was introduced in 2017 is somewhat tenuous; this is implicit in nearly every dense MEA sorting method. Because of this, it is unclear to this reader whether the approach would actually contribute to a state-of-the-art sorting package.

3. The scalability here is through time, but does not appear to directly address scalability in channels. Specifically, a Neuropixels device is a good current dense MEA, but several research groups are building and evaluating devices with >10,000 electrodes. Since the current VAE takes all channels as inputs and the number of detections typically scales linearly with the number of channels, I estimate this method would be quadratic in the number of channels. While certainly not an issue for most devices today, it would be useful to comment on.

[1] Barnett, Alex H., Jeremy F. Magland, and Leslie F. Greengard. "Validation of neural spike sorting algorithms without ground-truth information." Journal of Neuroscience Methods 264 (2016): 65-77.

The author feedback was reasonable to address my criticisms, and I have revised my score appropriately.
NIPS
Title Scalable Spike Source Localization in Extracellular Recordings using Amortized Variational Inference
1. What is the main contribution of the paper in the field of microelectrode array recordings?
2. What are the strengths and weaknesses of the proposed generative model for predicting spike locations?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. What are the limitations of the paper regarding its comparisons with other works in the field?
5. Are there any concerns or suggestions for improving the paper, particularly in light of recent advancements in computational frameworks?
Review
The paper introduces a generative model for predicting the location of spikes from microelectrode array recordings. Inference is performed by reformulating the learning algorithm as a variational autoencoder. While the model is well defined, it bothers me that the amplitudes are modelled as Gaussian random variables. Spikes are essentially defined as non-Gaussian events on the potential. Even though spike detection is not addressed in this work, I would expect the amplitude of the spikes to be modeled as non-Gaussian in order to learn the appropriate structure.

The paper seems to deal with localisation in a much more sophisticated way than anything else that is currently available. However, the numerical comparisons do not provide sufficient comparison with other localisation approaches. The paper would improve considerably through a better comparison with other modern localization approaches. All in all, a good paper, but it needs more comparison with the state of the art.

After reviewing the rebuttal: the reply for the non-Gaussianity of spikes was not satisfactory. If the authors are going to use a variational inference approach, you should try to work on models with the appropriate prior distributions. Modern computational frameworks allow for more modelling flexibility than what is exposed in this work. I think the original score is sufficient for this work.
NIPS
Title Scalable Spike Source Localization in Extracellular Recordings using Amortized Variational Inference Abstract Determining the positions of neurons in an extracellular recording is useful for investigating functional properties of the underlying neural circuitry. In this work, we present a Bayesian modelling approach for localizing the source of individual spikes on high-density, microelectrode arrays. To allow for scalable inference, we implement our model as a variational autoencoder and perform amortized variational inference. We evaluate our method on both biophysically realistic simulated and real extracellular datasets, demonstrating that it is more accurate than and can improve spike sorting performance over heuristic localization methods such as center of mass. 1 Introduction Extracellular recordings, which measure local potential changes due to ionic currents flowing through cell membranes, are an essential source of data in experimental and clinical neuroscience. The most prominent signals in these recordings originate from action potentials (spikes), the all or none events neurons produce in response to inputs and transmit as outputs to other neurons. Traditionally, a small number of electrodes (channels) are used to monitor spiking activity from a few neurons simultaneously. Recent progress in microfabrication now allows for extracellular recordings from thousands of neurons using microelectrode arrays (MEAs), which have thousands of closely spaced electrodes [13, 2, 14, 1, 36, 55, 32, 25, 12]. These recordings provide insights that cannot be obtained by pooling multiple single-electrode recordings [27]. This is a significant development as it enables systematic investigations of large circuits of neurons to better understand their function and structure, as well as how they are affected by injury, disease, and pharmacological interventions [20]. On dense MEAs, each recording channel may record spikes from multiple, nearby neurons, while each neuron may leave an extracellular footprint on multiple channels. Inferring the spiking activity of individual neurons, a task called spike sorting, is therefore a challenging blind source separation problem, complicated by the large volume of recorded data [46]. Despite the challenges presented by 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. spike sorting large-scale recordings, its importance cannot be overstated as it has been shown that isolating the activity of individual neurons is essential to understanding brain function [35]. Recent efforts have concentrated on providing scalable spike sorting algorithms for large scale MEAs and already several methods can be used for recordings taken from hundreds to thousands of channels [42, 31, 10, 54, 22, 26]. However, scalability, and in particular automation, of spike sorting pipelines remains challenging [8]. One strategy for spike sorting on dense MEAs is to spatially localize detected spikes before clustering. In theory, spikes from the same neuron should be localized to the same region of the recording area (near the cell body of the firing neuron), providing discriminatory, low-dimensional features for each spike that can be utilized with efficient density-based clustering algorithms to sort large data sets with tens of millions of detected spikes [22, 26]. 
These location estimates, while useful for spike sorting, can also be exploited in downstream analyses, for instance to register recorded neurons with anatomical information or to identify the same units from trial to trial [9, 22, 41]. Despite the potential benefits of localization, preexisting methods have a number of limitations. First, most methods are designed for low-channel count recording devices, making them difficult to use with dense MEAs [9, 51, 3, 30, 29, 34, 33, 50]. Second, current methods for dense MEAs utilize cleaned extracellular action potentials (through spike-triggered averaging), disallowing their use before spike sorting [48, 6]. Third, all current model-based methods, to our knowledge, are non-Bayesian, relying primarily on numerical optimization methods to infer the underlying parameters. Given these current limitations, the only localization methods used consistently before spike sorting are simple heuristics such as a center of mass calculation [38, 44, 22, 26]. In this paper, we present a scalable Bayesian modelling approach for spike localization on dense MEAs (less than ∼ 50µm between channels) that can be performed before spike sorting. Our method consists of a generative model, a data augmentation scheme, and an amortized variational inference method implemented with a variational autoencoder (VAE) [11, 28, 47]. Amortized variational inference has been used in neuroscience for applications such as predicting action potentials from calcium imaging data [52] and recovering latent dynamics from single-trial neural spiking data [43], however, to our knowledge, it has not been used in applications to extracellular recordings. After training, our method allows for localization of one million spikes (from high-density MEAs) in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets. To evaluate our method, we use biophysically realistic simulated data, demonstrating that our localization performance is significantly better than the center of mass baseline and can lead to higher-accuracy spike sorting results across multiple probe geometries and noise levels. We also show that our trained VAE can generalize to recordings on which it was not trained. To demonstrate the applicability of our method to real data, we assess our method qualitatively on real extracellular datasets from a Neuropixels [25] probe and from a BioCam4096 recording platform. To clarify, our contribution is not full spike sorting solution. Although we envision that our method can be used to improve spike sorting algorithms that currently rely center of mass location estimates, interfacing with and evaluating these algorithms was beyond the scope of our paper. 2 Background 2.1 Spike localization We start with introducing relevant notation. First, we define the identities and positions of neurons and channels. Let n := {ni}Mi=1, be the set of M neurons in the recording and c := {cj}Nj=1, the set of N channels on the MEA. The position of a neuron, ni, can be defined as pni := (xniyni , zni) ∈ R3 and similarly the position of a channel, cj , pcj := (xcj , ycj , zcj ) ∈ R3. We further denote pc := {pcj}Nj=1 to be the position of all N channels on the MEA. In our treatment of this problem, the neuron and channel positions are single points that represent the centers of the somas and the centers of the channels, respectively. These positions are relative to the origin, which we set to be the center of the MEA. 
For the neuron, ni, let si := {si,k}Kik=1, be the set of spikes detected during the recording where Ki is the total number of spikes fired by ni. The recorded extracellular waveform of si,k on a channel, cj , can then be defined as wi,k,j := {r(0)i,k,j , r (1) i,k,j , ..., r (t) i,k,j , ..., r (T ) i,k,j} where r (t) i,k,j ∈ R and t = 0, . . . , T . The set of waveforms recorded by each of the N channels of the MEA during the spike, si,k, is defined as wi,k := {wi,k,j}Nj=1. Finally, for the spike, si,k, the point source location can be defined as psi,k := (xsi,k , ysi,k , zsi,k) ∈ R3. The problem we attempt to solve can now be stated as follows: Localizing a spike, si,k, is the task of finding the corresponding point source location, psi,k , given the observed waveforms wi,k and the channel positions, pc. We make the assumption that the point source location, psi,k is actually the location of the firing neuron’s soma, pni . Given the complex morphological structure of many neurons, this assumption may not always be correct, but it provides a simple way to assess localization performance and evaluate future models. 2.2 Center of mass Many modern spike sorting algorithms localize spikes on MEAs using the center of mass or barycenter method [44, 22, 26]. We summarize the traditional steps for localizing a spike, si,k using this method. First, let us define αi,k,j := mint wi,k,j to be the negative amplitude peak of the waveform, wi,k,j , generated by si,k and recorded on channel, cj . We consider the negative peak amplitude as a matter of convention since spikes are defined as inward currents. Then, let αi,k := (αi,k,j)Nj=1 be the vector of all amplitudes generated by si,k and recorded by all N channels on the MEA. To find the center of mass of a spike, si,k, the first step is to determine the central channel for the calculation. This central channel is set to be the channel which records the minimum amplitude for the spike, cjmin := cargminj αi,k,j The second and final step is to take the L closest channels to cjmin and compute, x̂si,k = ∑L+1 j=1 (xcj )|αi,k,j |∑L+1 j=1 |αi,k,j | , ŷsi,k = ∑L+1 j=1 (ycj )|αi,k,j |∑L+1 j=1 |αi,k,j | where all of the L+ 1 channels’ positions and recorded amplitudes contribute to the center of mass calculation. The center of mass method is inexpensive to compute and has been shown to give informative location estimates for spikes in both real and synthetic data [44, 37, 22, 26]. Center of mass, however, suffers from two main drawbacks: First, since the chosen channels will form a convex hull, the center of mass location estimates must lie inside the channels’ locations, negatively impacting location estimates for neurons outside of the MEA. Second, center of mass is biased towards the chosen central channel, potentially leading to artificial separation of location estimates for spikes from the same neuron [44]. 3 Method In this section, we introduce our scalable, model-based approach to spike localization. We describe the generative model, the data augmentation procedure, and the inference methods. 3.1 Model Our model uses the recorded amplitudes on each channel to determine the most likely source location of si,k. We assume that the peak signal from a spike decays exponentially with the distance from the source, r: a exp(br) where a, b ∈ R, r ∈ R+. This assumption is well-motivated by experimentally recorded extracellular potential decay in both a salamander and mouse retina [49, 22], as well as a cat cortex [16]. 
3 Method

In this section, we introduce our scalable, model-based approach to spike localization. We describe the generative model, the data augmentation procedure, and the inference methods.

3.1 Model

Our model uses the recorded amplitudes on each channel to determine the most likely source location of $s_{i,k}$. We assume that the peak signal from a spike decays exponentially with the distance from the source, $r$: $a\exp(br)$, where $a, b \in \mathbb{R}$ and $r \in \mathbb{R}^+$. This assumption is well-motivated by experimentally recorded extracellular potential decay in both a salamander and mouse retina [49, 22], as well as a cat cortex [16]. It has also been further corroborated using realistic biophysical simulations [18]. We utilize this exponential assumption to infer the source location of a spike $s_{i,k}$, since localization is then equivalent to solving for $s_{i,k}$'s unknown parameters, $\theta_{s_{i,k}} := \{a_{i,k}, b_{i,k}, x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}}\}$, given the observed amplitudes $\alpha_{i,k}$. To allow for localization without knowing the identity of the firing neuron, we assume that each spike has individual exponential decay parameters, $a_{i,k}, b_{i,k}$, and an individual source location $p_{s_{i,k}}$. We find, however, that fixing $b_{i,k}$ for all spikes to a constant equal to an empirical estimate from the literature (a decay length of ∼28µm) works best across multiple probe geometries and noise levels, so we did not infer the value of $b_{i,k}$ in our final method. We will refer to the fixed decay rate as $b$ and exclude it from the unknown parameters moving forward. The generative process of our exponential model is as follows:

$$a_{i,k} \sim \mathcal{N}(\mu_{a_{i,k}}, \sigma_a), \quad x_{s_{i,k}} \sim \mathcal{N}(\mu_{x_{s_{i,k}}}, \sigma_x), \quad y_{s_{i,k}} \sim \mathcal{N}(\mu_{y_{s_{i,k}}}, \sigma_y), \quad z_{s_{i,k}} \sim \mathcal{N}(\mu_{z_{s_{i,k}}}, \sigma_z),$$
$$\hat{r}_{i,k} = \lVert (x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}}) - p_c \rVert_2, \qquad \alpha_{i,k} \sim \mathcal{N}\big(a_{i,k}\exp(b\,\hat{r}_{i,k}),\, I\big) \tag{1}$$

In our observation model, the amplitudes are drawn from an isotropic Gaussian distribution with a variance of one. We chose this Gaussian observation model for computational simplicity and because it is convenient to work with when using VAEs. We discuss the limitations of our modeling assumptions in Section 5 and propose several extensions for future work. For our prior distributions, we were careful to set sensible parameter values. We found that inference, especially for a spike detected near the edge of the MEA, is sensitive to the mean of the prior distribution of $a_{i,k}$; therefore, we set $\mu_{a_{i,k}} = \lambda\,\alpha_{i,k,j_{\min}}$, where $\alpha_{i,k,j_{\min}}$ is the smallest negative amplitude peak of $s_{i,k}$. We choose this heuristic because the absolute value of $\alpha_{i,k,j_{\min}}$ will always be smaller than the absolute value of the amplitude of the spike at the source location, due to potential decay. Therefore, scaling $\alpha_{i,k,j_{\min}}$ by $\lambda$ gives a sensible value for $\mu_{a_{i,k}}$. We empirically chose $\lambda = 2$ for the final method after performing a grid search over $\lambda = \{1, 2, 3\}$. The parameter $\sigma_a$ does not have a large effect on the inferred location, so we set it to be approximately the standard deviation of $\alpha_{i,k,j_{\min}}$ (50). The location prior means, $\mu_{x_{s_{i,k}}}, \mu_{y_{s_{i,k}}}, \mu_{z_{s_{i,k}}}$, are set to the location of the minimum amplitude channel, $p_{c_{j_{\min}}}$, for the given spike. The location prior standard deviations, $\sigma_x, \sigma_y, \sigma_z$, are set to large constant values to flatten out the distributions, since we do not want the location estimate to be overly biased towards $p_{c_{j_{\min}}}$.
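A minimal NumPy sketch of this generative process follows. The fixed decay rate $b$ is taken here as the negative reciprocal of the ∼28µm decay length, and the remaining hyperparameter values match the ones stated above ($\lambda = 2$, $\sigma_a \approx 50$, a large location prior standard deviation); the function itself is our illustration under these assumptions rather than released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_amplitudes(p_c, alpha_min, b=-1.0 / 28.0, lam=2.0,
                            sigma_a=50.0, sigma_loc=80.0):
    """Draw one spike's amplitudes from the generative model of Eq. (1).

    p_c       : (N, 3) channel positions; the location prior is centered on the
                minimum-amplitude channel, placed at the origin here for brevity.
    alpha_min : smallest negative amplitude peak, giving mu_a = lam * alpha_min.
    """
    a = rng.normal(lam * alpha_min, sigma_a)        # a_{i,k} ~ N(mu_a, sigma_a)
    loc = rng.normal(0.0, sigma_loc, size=3)        # (x, y, z) source location
    r_hat = np.linalg.norm(loc - p_c, axis=1)       # distance to every channel
    mean = a * np.exp(b * r_hat)                    # exponential amplitude decay
    alpha = rng.normal(mean, 1.0)                   # isotropic unit-variance noise
    return alpha, loc
```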
3.2 Data Augmentation

For localization to work well, the input channels should be centered around the peak of the spike, which is hard for spikes near the edges of the array (edge spikes). To address this issue, we employ a two-step data augmentation. First, inputs for edge spikes are padded such that the channel with the largest amplitude is at the center of the inputs. Second, all channels are augmented with an indicator variable which provides a signal that distinguishes them for the inference network. To be more specific, we introduce virtual channels outside of the MEA which have the same layout as the real, recording channels (see Appendix C). We refer to a virtual channel as an "unobserved" channel, $c_{j_u}$, and to a real channel on the MEA as an "observed" channel, $c_{j_o}$. We define the amplitude on an unobserved channel, $\alpha_{i,k,j_u}$, to be zero, since unobserved channels do not actually record any signals. We let the amplitude for an observed channel, $\alpha_{i,k,j_o}$, be equal to $\min_t w_{i,k,j_o}$, as before. Before defining the augmented dataset, we must first introduce an indicator function $\mathbb{1}_o : \alpha \to \{0, 1\}$:

$$\mathbb{1}_o(\alpha) = \begin{cases} 1, & \text{if } \alpha \text{ is from an observed channel,} \\ 0, & \text{if } \alpha \text{ is from an unobserved channel,} \end{cases}$$

where $\alpha$ is an amplitude from any channel, observed or unobserved. To construct the augmented dataset for a spike $s_{i,k}$, we take the set of $L$ channels that lie within a bounding box of width $W$ centered on the observed channel with the minimum recorded amplitude, $c_{j_{o_{\min}}}$. We define our newly augmented observed data for $s_{i,k}$ as

$$\beta_{i,k} := \{(\alpha_{i,k,j}, \mathbb{1}_o(\alpha_{i,k,j}))\}_{j=1}^{L} \tag{2}$$

So, for a single spike, we construct an $L \times 2$ dimensional vector that contains amplitudes from $L$ channels and indicators specifying whether each amplitude came from an observed or an unobserved channel. Since the prior location for each spike is at the center of the subset of channels used for the observed data, for edge spikes the data augmentation puts the prior closer to the edge and is, therefore, more informative for localizing spikes near or off the edge of the array. Also, since edge spikes are typically seen on fewer channels, the data augmentation serves to ignore channels which are far from the spike and would otherwise be used if the augmentation were not employed.
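The augmentation amounts to padding with zero-amplitude virtual channels and attaching the observed/unobserved indicator. A simplified sketch, flattened to one spatial dimension for brevity (a real probe needs the same windowing in 2D), might look as follows; the names and the 1D layout are our assumptions:

```python
import numpy as np

def augment_spike(amps, observed_mask, half_width):
    """Build the augmented observation beta_{i,k} of Eq. (2).

    amps          : (N,) peak amplitudes, already padded with zeros on the
                    virtual channels outside the MEA.
    observed_mask : (N,) indicator 1_o: 1 for real channels, 0 for virtual ones.
    half_width    : half of the bounding-box width W, in channel indices.
    """
    # Center the window on the observed channel with the minimum amplitude.
    center = np.argmin(np.where(observed_mask == 1, amps, np.inf))
    idx = np.arange(center - half_width, center + half_width + 1)
    # L x 2 array: one (amplitude, indicator) pair per channel in the window.
    return np.stack([amps[idx], observed_mask[idx]], axis=1)
```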
3.3 Inference

Now that we have defined the generative process and the data augmentation procedure, we would like to compute the posterior distribution over the unknown parameters of a spike $s_{i,k}$,

$$p(a_{i,k}, x_{s_{i,k}}, y_{s_{i,k}}, z_{s_{i,k}} \mid \beta_{i,k}) \tag{3}$$

given the augmented dataset $\beta_{i,k}$. To infer the posterior distribution for each spike, we utilize two methods of Bayesian inference: MCMC sampling and amortized variational inference.

3.3.1 MCMC sampling

We use MCMC to assess the validity and applicability of our model to extracellular data. We implement our model in Turing [15], a probabilistic modeling language in Julia. We run Hamiltonian Monte Carlo (HMC) [39] for 10,000 iterations with a step size of 0.01 and a step number of 10. We use the posterior means of the location distributions as the estimated location. (The code for our MCMC implementation is provided in Appendix H.) Despite the ease of use of probabilistic programming and the asymptotically guaranteed inference quality of MCMC methods, the scalability of MCMC methods to large-scale datasets is limited. This leads us to implement our model as a VAE and to perform amortized variational inference for our final method.

3.3.2 Amortized variational inference

To speed up inference of the spike parameters, we construct a VAE and use amortized variational inference to estimate posterior distributions for each spike. In variational inference, instead of sampling from the intractable target posterior distribution of interest, we construct a variational distribution that is tractable and minimize the Kullback–Leibler (KL) divergence between the variational posterior and the true posterior. Minimizing the KL divergence is equivalent to maximizing the evidence lower bound (ELBO) on the log marginal likelihood of the data. In VAEs, the parameters of the variational posterior are not optimized directly but are, instead, computed by an inference network. We define our variational posterior for $x, y, z$ as a multivariate Normal with diagonal covariance, whose mean and covariance diagonal are computed by an inference network:

$$q_\Phi(x, y, z) = \mathcal{N}\big(\mu_{\phi_1}(f_{\phi_0}(\upsilon_{i,k})),\ \sigma^2_{\phi_2}(f_{\phi_0}(\upsilon_{i,k}))\big) \tag{4}$$

The inference network is implemented as a feed-forward, deep neural network parameterized by $\Phi = \{\phi_0, \phi_1, \phi_2\}$. As one can see, the variational parameters are a function of the input $\upsilon$. When using an inference network, the input can be any part of the dataset; for our method, we use $\upsilon_{i,k}$ as the input for each spike $s_{i,k}$, which is defined as follows:

$$\upsilon_{i,k} := \{(w_{i,k,j}, \mathbb{1}_o(\alpha_{i,k,j}))\}_{j=1}^{L} \tag{5}$$

where $w_{i,k,j}$ is the waveform detected on the $j$th channel (defined in Section 2.1). Similar to our previous augmentation, the waveform for an unobserved channel is set to all zeros. We choose to input the waveforms rather than the amplitudes because, empirically, it encourages the inferred location estimates for spikes from the same neuron to be better localized to the same region of the MEA. For both the real and simulated datasets, we used ∼2 ms of readings for each waveform.

The decoder for our method reconstructs the amplitudes from the observed data rather than the waveforms. Since we assume an exponential decay for the amplitudes, the decoder is a simple Gaussian likelihood function: given the Euclidean distance vector $\hat{r}_{i,k}$, computed from samples from the variational posterior, the decoder reconstructs the mean value of the observed amplitudes with a fixed variance. The decoder is parameterized by the exponential parameters of the given spike $s_{i,k}$, so it reconstructs the amplitudes of the augmented data, $\beta^{(0)}_{i,k}$, with the following expression:

$$\hat{\beta}^{(0)}_{i,k} := a_{i,k}\exp(b\,\hat{r}_{i,k}) \times \beta^{(1)}_{i,k}$$

where $\hat{\beta}^{(0)}_{i,k}$ is the vector of reconstructed observed amplitudes. By multiplying the reconstructed amplitude vector by $\beta^{(1)}_{i,k}$, which consists of either zeros or ones (see Eq. 5), the unobserved channels are reconstructed with amplitudes of zero and the observed channels are reconstructed with the exponential function. For our VAE, instead of estimating the distribution of $a_{i,k}$, we directly optimize $a_{i,k}$ when maximizing the lower bound. We set the initial value of $a_{i,k}$ to the mean of the prior. Thus, $a_{i,k}$ can be read as a parameter of the decoder. Given our inference network and decoder, the ELBO we maximize for each spike $s_{i,k}$ is given by

$$\log p(\beta_{i,k}; a_{i,k}) \geq -\mathrm{KL}\big[q_\Phi(x, y, z)\,\|\,p_x p_y p_z\big] + \mathbb{E}_{q_\Phi}\Big[\sum_{l=1}^{L} \mathcal{N}\big(\beta^{(0)}_{i,k,l} \mid a_{i,k}\exp(b\,\hat{r}_{i,k}),\, I\big)\,\beta^{(1)}_{i,k,l}\Big]$$

where KL is the KL divergence. The location priors, $p_x, p_y, p_z$, are normally distributed as described in Section 3.1, with means of zero (the position of the maximum amplitude channel in the observed data) and variances of 80. For more information about the architecture and training, see Appendix F.
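The pieces above (a reparameterized Gaussian posterior, the exponential decoder, a masked Gaussian likelihood, and a KL term against the flat location prior) combine into a per-batch loss. The following PyTorch sketch is our illustration of that objective under the stated assumptions (unit observation variance, fixed decay rate $b$, prior standard deviation 80), not the released implementation:

```python
import math
import torch

def negative_elbo(encoder_out, a, amps, mask, channel_xyz,
                  b=-1.0 / 28.0, prior_std=80.0):
    """Negative ELBO for a batch of spikes (minimal sketch).

    encoder_out : (mu, log_var), each (B, 3), from the inference network.
    a           : (B,) decoder amplitude parameters, optimized directly.
    amps, mask  : (B, L) observed amplitudes and 0/1 observed indicators.
    channel_xyz : (L, 3) channel positions in the input window.
    """
    mu, log_var = encoder_out
    std = torch.exp(0.5 * log_var)
    loc = mu + std * torch.randn_like(std)              # reparameterized sample
    # Distance from each sampled location to each channel: (B, L).
    r_hat = torch.linalg.norm(loc[:, None, :] - channel_xyz[None, :, :], dim=-1)
    recon = a[:, None] * torch.exp(b * r_hat) * mask    # zeros on virtual channels
    # Unit-variance Gaussian log-likelihood, counted only on observed channels.
    log_lik = (-0.5 * (amps - recon) ** 2 * mask).sum(dim=1)
    # KL between the diagonal-Gaussian posterior and the N(0, prior_std^2) prior.
    kl = 0.5 * ((std ** 2 + mu ** 2) / prior_std ** 2 - 1.0
                - log_var + 2.0 * math.log(prior_std)).sum(dim=1)
    return (kl - log_lik).mean()
```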
3.3.3 Stabilized Location Estimation

In this model, the channel on which the input is centered can bias the estimate of the spike location, in particular when amplitudes are small. To reduce this bias, we can create multiple inputs for the same spike, where each input is centered on a different channel. During inference, we can average the inferred locations for each of these inputs, thus lowering the central channel bias. To this end, we introduce a hyperparameter, amplitude jitter, where for each spike $s_{i,k}$ we create multiple inputs centered on channels with peak amplitudes within a small voltage of the maximum amplitude, $\alpha_{i,k,j}$. We use two values for the amplitude jitter in our experiments: 0µV and 10µV. When the amplitude jitter is set to 0µV, no averaging is performed; when it is set to 10µV, all channels that have peak amplitudes within 10µV of $\alpha_{i,k,j}$ are used as inputs to the VAE and the resulting estimates are averaged during inference.

4 Experiments

4.1 Datasets

We simulate biophysically realistic ground-truth extracellular recordings to test our model against a variety of real-life complexities. The simulations are generated using the MEArec [4] package, which includes 13 layer-5 juvenile rat somatosensory cortex neuron models from the neocortical microcircuit collaboration portal [45]. We simulate three recordings with increasing noise levels (ranging from 10µV to 30µV) for two probe geometries: a 10x10 channel square MEA with a 15µm inter-channel distance and 64 channels from a Neuropixels probe (∼25-40µm inter-channel distance). Our simulations contain 40 excitatory cells and 10 inhibitory cells with random morphological subtypes, randomly distributed and rotated in 3D space around the probe (with a 20µm minimum distance between somas). Each dataset has about 20,000 spikes in total (60 second duration). For more details on the simulation and noise model, see Appendix G. For the real datasets, we use public data from a Neuropixels probe [32] and from a mouse retina recorded with the BioCam4096 platform [24]. The two datasets have 6 million and 2.2 million spikes, respectively. Spike detection and sorting (with our location estimates) are done using the HerdingSpikes2 software [22].

4.2 Evaluation

Before evaluating the localization methods, we must detect the spikes from each neuron in the simulated recordings. To avoid biasing our results by our choice of detection algorithm, we assume perfect detection, extracting waveforms from channels near each spiking neuron. Once the waveforms are extracted from the recordings, we perform the data augmentation. For the square MEA, we use $W = 20, 40$, which gives $L$ = 4-9 and 9-25 real channels in the observed data, respectively. For the simulated Neuropixels probe, we use $W = 35, 45$, which gives $L$ = 3-6 and 8-14 real channels in the observed data, respectively. Once we have the augmented dataset, we generate location estimates for all the datasets using each localization method. For a straightforward comparison with center of mass, we only evaluate the 2D location estimates (in the plane of the recording device). In the first evaluation, we assess the accuracy of each method by computing the Euclidean distance between the estimated spike locations and the associated firing neurons. We report the mean and standard deviation of the localization error for all spikes in each recording. In the second evaluation, we cluster the location estimates of each method using Gaussian mixture models (GMMs). The GMMs are fit with spherical covariances, with the number of mixture components ranging from 45 to 75 (with a step size of 5). We report the true positive rate and accuracy for each number of mixture components when matched back to ground truth. To be clear, our use of GMMs is not a proposed spike sorting method for real data (the number of clusters is never known a priori), but rather a systematic way to evaluate whether our location estimates are more discriminable features than those of center of mass; a minimal version of this clustering step is sketched below.
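The clustering step can be written with scikit-learn's GaussianMixture using spherical covariances, as described; this is our sketch, not the evaluation code itself:

```python
from sklearn.mixture import GaussianMixture

def cluster_locations(est_xy, n_components, seed=0):
    """Cluster 2D spike location estimates with a spherical-covariance GMM."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="spherical", random_state=seed)
    return gmm.fit_predict(est_xy)

# Sweep the number of mixture components from 45 to 75 in steps of 5,
# then match each labeling back to ground truth to score it.
# labels_by_k = {k: cluster_locations(est_xy, k) for k in range(45, 80, 5)}
```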
In the third evaluation, we again use GMMs to cluster the location estimates, this time combined with two principal components from each spike. We report the true positive rate and accuracy for each number of mixture components as before. Combining location estimates and principal components explicitly, to create a new low-dimensional feature set, was introduced in Hilgen (2017). In that work, the principal components are whitened and then scaled with a hyperparameter, $\alpha$. To remove any bias from choosing an $\alpha$ value in our evaluation, we conduct a grid search over $\alpha = \{4, 6, 8, 10\}$ and report the best metric scores for each method. In the fourth evaluation, we assess the generalization performance of the method by training a VAE on an extracellular dataset and then trying to infer the spike locations in another dataset where the neuron locations are different, but all other aspects are kept the same (10µV noise level, square MEA). The localization and sorting performance is then compared to that of a VAE trained directly on the second dataset and to center of mass. Taken together, the first evaluation demonstrates how useful each method is purely as a localization tool, the second evaluation demonstrates how useful the location estimates are for spike sorting immediately after localizing, the third evaluation demonstrates how much the performance can improve given extra waveform information, and the fourth evaluation demonstrates how our method can be used across similar datasets without retraining. For all of our sorting analysis, we use SpikeInterface version 0.9.1 [5].

4.3 Results

Table 1 reports the localization accuracy of the different localization methods for the square MEA with three different noise levels. Our model-based methods far outperform center of mass with any number of observed channels. As expected, introducing amplitude jitter helps lower the mean and standard deviation of the spike location error. Using a small width of 20µm when constructing the augmented data (4-9 observed channels) gives the highest performance for the square MEA.

[Figure 2: Spike sorting performance on the square MEA. We compare the sorting performance of the VAE localization method and the COM localization method, with and without principal components, across all noise levels (10µV, 20µV, 30µV), reporting precision, recall, and accuracy against the number of mixtures. For the VAE, we include the results with 0µV and 10µV amplitude jitter and with different numbers of observed channels (4-9 and 9-25). For COM, we plot the highest sorting performance (25 observed channels). The test dataset has 50 neurons.]

The location estimates for the square MEA are visualized in Figure 1. Recording channels are plotted as grey squares and the true soma locations are plotted as black stars. The estimated individual spike locations are colored according to their associated firing neuron identity. As can be seen in the plot, center of mass suffers both from artificial splitting of location estimates and from poor performance on neurons outside the array, two areas in which the model-based approaches excel. The MCMC and VAE methods have very similar location estimates, highlighting the success of our variational inference in approximating the true posterior. See Appendix A for a location estimate plot when the VAE is trained and tested on simulated Neuropixels recordings.
In Figure 2, spike sorting performance on the square MEA is visualized for all localization methods (with and without waveform information). Here, we only show the sorting results for center of mass on 25 observed channels, where it performs at its best. Overall, the model-based approaches have significantly higher precision, recall, and accuracy than center of mass across all noise levels and all numbers of mixtures. This illustrates how model-based location estimates provide a much more discriminative feature set than the location estimates from the center of mass approaches. We also find that the addition of waveform information (in the form of principal components) improves spike sorting performance for all localization methods. See Appendix A for a spike sorting performance plot when the VAE is trained and tested on simulated Neuropixels recordings. As shown in Appendix D, when our method is trained on one simulated recording, it generalizes well to another simulated recording with different neuron locations. The localization accuracy and sorting performance are only slightly lower than those of a VAE trained directly on the new recording. Our method also still outperforms center of mass on the new dataset, even without training on it.

Figure 3 shows our localization method as applied to two real, large-scale extracellular datasets. In these plots, we color the location estimates based on their unit identity after spike sorting with HerdingSpikes2. These extracellular recordings do not have ground truth information, as current ground-truth recordings are limited to a few labeled neurons [56, 19, 21, 40, 54]. Therefore, to demonstrate that the units we find likely correspond to individual neurons, we visualize waveforms from a local grouping of sorted units on the Neuropixels probe. This analysis illustrates that our method can already be applied to large-scale, real extracellular recordings. In Appendix E, we demonstrate that the inference time for the VAE is much faster than that of MCMC, highlighting the excellent scalability of our method. The inference speed of the VAE allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of large-scale extracellular datasets.

5 Discussion

Here, we introduce a Bayesian approach to spike localization using amortized variational inference. Our method significantly improves localization accuracy and spike sorting performance over the preexisting baseline while remaining scalable to the large volumes of data generated by MEAs. Scalability is particularly relevant for recordings from thousands of channels, where a single experiment may yield on the order of 100 million spikes. We validate the accuracy of our model assumptions and inference scheme using biophysically realistic ground-truth simulated recordings that capture much of the variability seen in real recordings. Despite the realism of our simulated recordings, there are some factors that we did not account for, including bursting cells with event amplitude fluctuations, electrode drift, and realistic intrinsic variability of recorded spike waveforms. As these factors are difficult to model, future analysis of real recordings or advances in modeling software will help reveal possible limitations of the method. Along with limitations of the simulated data, there are also limitations of our model. Although we assume a monopole current source, every part of the neuronal membrane can produce action potentials [7].
This means that a more complicated model, such as a dipole current [50], line current-source [50], or modified ball-and-stick [48], might be a better fit to the data. Since these models have only ever been used after spike sorting, however, the extent to which they can improve localization performance before spike sorting is unclear, and it is something we would like to explore in future work. Also, our model utilizes a Gaussian observation model for the spike amplitudes. In real recordings, the true noise distribution is often non-Gaussian and is better approximated by pink noise models (1/f noise) [53]. We plan to explore more realistic observation models in future work. Since our method is Bayesian, we hope to better utilize the uncertainty of the location estimates in future work. Also, as our inference network is fully differentiable, we imagine that our method can be used as a submodule in a more complex, end-to-end method. Other work indicates there is scope for constructing more complicated models to perform event detection and classification [31], and to distinguish between different morphological neuron types based on their activity footprint on the array [6]. Our work is thus a first step towards using amortized variational inference methods for the unsupervised analysis of complex electrophysiological recordings.
1. What is the main contribution of the paper regarding spike localization?
2. What are the strengths of the proposed approach, particularly in contrast to previous methods?
3. Do you have any concerns or questions regarding the application and validation of the proposed method?
4. How does the reviewer assess the clarity and technical soundness of the paper?
5. Are there any minor issues or suggestions for improvement in the paper?
Review
Review

The authors develop an unsupervised, probabilistic, and scalable approach for spike localization from MEA recordings, in contrast to previous approaches which either required supervision, did not scale to large datasets, or relied on a simple heuristic (e.g., COM). Though the proposed model is relatively straightforward and the authors use standard approximate inference techniques to learn the desired posterior, the application to this domain and the empirical validation of the approach seem to be novel contributions.

The work is technically sound, with empirical results that demonstrate improved performance as compared to the COM heuristic for spike localization. However, there is no experimental validation of the proposed data augmentation scheme alone: it seems that this scheme is used in both the MCMC and VAE approaches, which makes it unclear what fraction of the performance improvement over COM is due to data augmentation versus the model-based posterior inference. A comparison to alternative spike localization methods besides COM would also strengthen the work, though I'm not sure if the works cited by the authors would even be feasible at the scale of the datasets being analyzed, and I therefore don't consider this a major shortcoming of the work.

Though the writing is clear overall, some minor details could be clarified: the description of the model in Section 3.1 describes a procedure for choosing the location prior means (lines 122-123), but the proposed inference methods are stated as using a location prior mean of zero, which seems to be a discrepancy. Section 3.2 describes a bounding box of width W and a number of channels L that are used in the data augmentation scheme, but the values for these used in the experiments are not explicitly stated (based on the captions of the figures/tables, it seems like values of W = 3, 5 and therefore L = 4-9, 9-25 were used, but this could be made more clear). However, these are more minor issues that could easily be fixed.

Update based on author feedback: Having read the authors' response, I feel that my concern about the effect of the data augmentation scheme was properly addressed, particularly if the authors commit to including an empirical analysis of the effect of the data augmentation on overall performance in the appendix, as they mention. However, the other reviewers' comments on the lack of comparison to state-of-the-art methods make me feel that this is more of a shortcoming of the submission than I had initially thought, and I'm not familiar enough with the alternative methods to know whether leaving out any comparison to them is justified. Overall, I would still lean more towards accepting the submission but don't feel confident enough to strongly recommend acceptance, and therefore maintain my original score.
NIPS
Title
Faster Deep Reinforcement Learning with Slower Online Network

Abstract
Deep reinforcement learning algorithms often use two networks for value function optimization: an online network, and a target network that tracks the online network with some delay. Using two separate networks enables the agent to hedge against issues that arise when performing bootstrapping. In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network. This improves the robustness of deep reinforcement learning in the presence of noisy updates. The resultant agents, called DQN Pro and Rainbow Pro, exhibit significant performance improvements over their original counterparts on the Atari benchmark, demonstrating the effectiveness of this simple idea in deep reinforcement learning. The code for our paper is available here: Github.com/amazon-research/fast-rl-with-slow-updates.

1 Introduction

An important competency of reinforcement-learning (RL) agents is learning in environments with large state spaces like those found in robotics (Kober et al., 2013), dialog systems (Williams et al., 2017), and games (Tesauro, 1994; Silver et al., 2017). Recent breakthroughs in deep RL have demonstrated that simple approaches such as Q-learning (Watkins & Dayan, 1992) can surpass human-level performance in challenging environments when equipped with deep neural networks for function approximation (Mnih et al., 2015).

Two components of a gradient-based deep RL agent are its objective function and optimization procedure. The optimization procedure takes estimates of the gradient of the objective with respect to network parameters and updates the parameters accordingly. In DQN (Mnih et al., 2015), for example, the objective function is the empirical expectation of the temporal difference (TD) error (Sutton, 1988) on a buffered set of environmental interactions (Lin, 1992), and variants of stochastic gradient descent are employed to best minimize this objective function.

A fundamental difficulty in this context stems from the use of bootstrapping. Here, bootstrapping refers to the dependence of the target of updates on the parameters of the neural network, which is itself continuously updated during training. Employing bootstrapping in RL stands in contrast to supervised-learning techniques and Monte-Carlo RL (Sutton & Barto, 2018), where the target of our gradient updates does not depend on the parameters of the neural network. Mnih et al. (2015) proposed a simple approach to hedging against issues that arise when using bootstrapping, namely to use a target network in value-function optimization. The target network is updated periodically, and tracks the online network with some delay. While this modification constituted a major step towards combating misbehavior in Q-learning (Lee & He, 2019; Kim et al., 2019; Zhang et al., 2021), optimization instability is still prevalent (van Hasselt et al., 2018).

Our primary contribution is to endow DQN and Rainbow (Hessel et al., 2018) with a term that ensures the parameters of the online-network component remain in the proximity of the parameters of the target network. Our theoretical and empirical results show that our simple proximal updates can remarkably increase robustness to noise without incurring additional computational or memory costs.
In particular, we present comprehensive experiments on the Atari benchmark (Bellemare et al., 2013) where proximal updates yield significant improvements, thus revealing the benefits of using this simple technique for deep RL.

2 Background and Notation

RL is the study of the interaction between an environment and an agent that learns to maximize reward through experience. The Markov Decision Process (Puterman, 1994), or MDP, is used to mathematically define the RL problem. An MDP is specified by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma \rangle$, where $\mathcal{S}$ is the set of states and $\mathcal{A}$ is the set of actions. The functions $\mathcal{R} : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ and $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ denote the reward and transition dynamics of the MDP. Finally, a discounting factor $\gamma$ is used to formalize the intuition that short-term rewards are more valuable than those received later.

The goal in the RL problem is to learn a policy, a mapping from states to a probability distribution over actions, $\pi : \mathcal{S} \to \mathbb{P}(\mathcal{A})$, that obtains high sums of future discounted rewards. An important concept in RL is the state value function. Formally, it denotes the expected discounted sum of future rewards when committing to a policy $\pi$ in a state $s$: $v_\pi(s) := \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t R_t \mid S_0 = s, \pi\big]$. We define the Bellman operator $T^\pi$ as follows:

$$[T^\pi v](s) := \sum_{a \in \mathcal{A}} \pi(a \mid s)\Big(\mathcal{R}(s, a) + \gamma \sum_{s' \in \mathcal{S}} \mathcal{P}(s, a, s')\, v(s')\Big),$$

which we can write compactly as $T^\pi v := R^\pi + \gamma P^\pi v$, where $[R^\pi](s) = \sum_{a \in \mathcal{A}} \pi(a \mid s)\,\mathcal{R}(s, a)$ and $[P^\pi v](s) = \sum_{a \in \mathcal{A}} \pi(a \mid s) \sum_{s' \in \mathcal{S}} \mathcal{P}(s, a, s')\, v(s')$. We also denote $(T^\pi)^n v := \underbrace{T^\pi \cdots T^\pi}_{n \text{ compositions}} v$. Notice that $v_\pi$ is the unique fixed point of $(T^\pi)^n$ for all natural numbers $n$, meaning that $v_\pi = (T^\pi)^n v_\pi$ for all $n$. Define $v^\star$ as the optimal value of a state, namely $v^\star(s) := \max_\pi v_\pi(s)$, and $\pi^\star$ as a policy that achieves $v^\star(s)$ for all states. We define the Bellman Optimality Operator $T^\star$:

$$[T^\star v](s) := \max_{a \in \mathcal{A}} \Big(\mathcal{R}(s, a) + \gamma \sum_{s' \in \mathcal{S}} \mathcal{P}(s, a, s')\, v(s')\Big),$$

whose fixed point is $v^\star$. These operators are at the heart of many planning and RL algorithms, including Value Iteration (Bellman, 1957) and Policy Iteration (Howard, 1960).

3 Proximal Bellman Operator

In this section, we introduce a new class of Bellman operators that ensure that the next iterate in planning and RL remains in the vicinity of the previous iterate. To this end, we define the Bregman divergence generated by a convex function $f$:

$$D_f(v', v) := f(v') - f(v) - \langle \nabla f(v), v' - v \rangle.$$

Examples include the $\ell_p$ norm generated by $f(v) = \frac{1}{2}\|v\|_p^2$ and the Mahalanobis distance generated by $f(v) = \frac{1}{2}\langle v, Qv \rangle$ for a positive semi-definite matrix $Q$. We now define the Proximal Bellman Operator $(T^\pi_{c,f})^n$:

$$(T^\pi_{c,f})^n v := \arg\min_{v'}\; \|v' - (T^\pi)^n v\|_2^2 + \frac{1}{c}\, D_f(v', v), \tag{1}$$

where $c \in (0, \infty)$. Intuitively, this operator encourages the next iterate to be in the proximity of the previous iterate, while also having a small difference relative to the point recommended by the original Bellman Operator. The parameter $c$ can, therefore, be thought of as a knob that controls the degree of gravitation towards the previous iterate.

Our goal is to understand the behavior of the Proximal Bellman Operator when used in conjunction with the Modified Policy Iteration (MPI) algorithm (Puterman, 1994; Scherrer et al., 2015). Define $\mathcal{G}v$ as the greedy policy with respect to $v$. At iteration $k$, Proximal Modified Policy Iteration (PMPI) proceeds as follows:

$$\pi_k \leftarrow \mathcal{G}v_{k-1}, \tag{2}$$
$$v_k \leftarrow (T^{\pi_k}_{c,f})^n v_{k-1}. \tag{3}$$

The pair of updates above generalizes existing algorithms.
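Before analyzing these updates, it may help to see the operators of Section 2 written out for a tabular MDP. The following NumPy sketch is a minimal illustration, assuming dense P, R, and pi arrays:

```python
import numpy as np

def bellman_policy_op(v, P, R, pi, gamma):
    """One application of T^pi for a tabular MDP.

    P  : (S, A, S) transition probabilities, R : (S, A) expected rewards,
    pi : (S, A) policy probabilities, v : (S,) value vector.
    """
    q = R + gamma * P @ v           # (S, A) one-step lookahead values
    return (pi * q).sum(axis=1)     # expectation under the policy

def bellman_optimality_op(v, P, R, gamma):
    """One application of T* for a tabular MDP."""
    return (R + gamma * P @ v).max(axis=1)
```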
Notably, with $c \to \infty$ and general $n$ we get MPI; with $c \to \infty$ and $n = 1$ the algorithm reduces to Value Iteration; and with $c \to \infty$ and $n = \infty$ we have a reduction to Policy Iteration. For finite $c$, the two extremes of $n$, namely $n = 1$ and $n = \infty$, can be thought of as the proximal versions of Value Iteration and Policy Iteration, respectively.

To analyze this approach, it is first natural to ask if each iteration of PMPI can be thought of as a contraction, so that we get sound and convergent behavior in planning and learning. For $n > 1$, Scherrer et al. (2015) constructed a contrived MDP demonstrating that one iteration of MPI can unfortunately expand. As PMPI is just a generalization of MPI, the same example from Scherrer et al. (2015) shows that PMPI can expand. In the case of $n = 1$, we can rewrite the pair of equations (2) and (3) as a single update: $v_k \leftarrow T^\star_{c,f}\, v_{k-1}$. When $c \to \infty$, standard proofs can be employed to show that the operator is a contraction (Littman & Szepesvári, 1996). We now show that $T^\star_{c,f}$ is a contraction for finite values of $c$. See our appendix for proofs.

Theorem 1. The Proximal Bellman Optimality Operator $T^\star_{c,f}$ is a contraction with fixed point $v^\star$.

Therefore, we get convergent behavior when using $T^\star_{c,f}$ in planning and RL. The addition of the proximal term fortunately does not change the fixed point, and thus does not negatively affect the final solution. It can be thought of as a form of regularization that vanishes in the limit; the algorithm converges to $v^\star$ even without decaying $1/c$.

Going back to the general $n \geq 1$ case, we cannot show contraction, but following previous work (Bertsekas & Tsitsiklis, 1996; Scherrer et al., 2015), we study error propagation in PMPI in the presence of additive noise, where we get a noisy sample of the original Bellman Operator, $(T^{\pi_k})^n v_{k-1} + \epsilon_k$. The noise can stem from a variety of sources, such as approximation or estimation error. For simplicity, we restrict the analysis to $D_f(v', v) = \|v' - v\|_2^2$, so we rewrite update (3) as:

$$v_k \leftarrow \arg\min_{v'}\; \big\|v' - \big((T^{\pi_k})^n v_{k-1} + \epsilon_k\big)\big\|_2^2 + \frac{1}{c}\,\|v' - v_{k-1}\|_2^2,$$

which can further be simplified to:

$$v_k \leftarrow \underbrace{(1 - \beta)(T^{\pi_k})^n v_{k-1} + \beta v_{k-1}}_{:=\,(T^{\pi_k}_\beta)^n v_{k-1}} + (1 - \beta)\,\epsilon_k, \qquad \text{where } \beta = \frac{1}{1 + c}.$$

This operator is a generalization of the operator proposed by Smirnova & Dohmatob (2020), who focused on the case of $n = 1$. To build some intuition, notice that the update multiplies the error $\epsilon_k$ by a term that is smaller than one, thus better hedging against large noise. While the update may slow progress when there is no noise, it is entirely conceivable that for large enough values of $\epsilon_k$ it is better to use non-zero values of $\beta$.
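To make the noisy proximal update concrete, here is a minimal NumPy sketch of the $n = 1$ case (proximal value iteration) with additive Gaussian noise standing in for the evaluation error; setting $\beta = 0$ recovers the standard noisy backup:

```python
import numpy as np

def proximal_value_iteration(P, R, gamma, beta, noise_std, iters=100, seed=0):
    """v_k = (1 - beta) * (T* v_{k-1} + eps_k) + beta * v_{k-1},
    with beta = 1 / (1 + c)."""
    rng = np.random.default_rng(seed)
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        tv = (R + gamma * P @ v).max(axis=1)              # Bellman optimality backup
        eps = rng.normal(0.0, noise_std, size=v.shape)    # additive evaluation noise
        v = (1.0 - beta) * (tv + eps) + beta * v          # proximal averaging
    return v
```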
In the following theorem we formalize this intuition. Our result leans on the theory provided by Scherrer et al. (2015) and can be thought of as a generalization of their theorem to non-zero $\beta$ values.

Theorem 2. Consider the PMPI algorithm specified by:

$$\pi_k \leftarrow \mathcal{G}_{\epsilon'_k} v_{k-1}, \tag{4}$$
$$v_k \leftarrow (T^{\pi_k}_\beta)^n v_{k-1} + (1 - \beta)\,\epsilon_k. \tag{5}$$

Define the Bellman residual $b_k := v_k - T^{\pi_{k+1}} v_k$, and the error terms $x_k := (I - \gamma P^{\pi_k})\,\epsilon_k$ and $y_k := \gamma P^{\pi^\star} \epsilon_k$. After $k$ steps:

$$v^\star - v^{\pi_k} = \underbrace{v^{\pi^\star} - (T^{\pi_{k+1}}_\beta)^n v_k}_{d_k} + \underbrace{(T^{\pi_{k+1}}_\beta)^n v_k - v^{\pi_k}}_{s_k},$$

where
- $d_k \leq \gamma P^{\pi^\star} d_{k-1} - (1 - \beta)\,y_{k-1} + \beta\, b_{k-1} + (1 - \beta)\sum_{j=1}^{n-1}(\gamma P^{\pi_k})^j\, b_{k-1} + \epsilon'_k$
- $s_k \leq \big((1 - \beta)(\gamma P^{\pi_k})^n + \beta I\big)\,(I - \gamma P^{\pi_k})^{-1}\, b_{k-1}$
- $b_k \leq \big((1 - \beta)(\gamma P^{\pi_k})^n + \beta I\big)\, b_{k-1} + (1 - \beta)\,x_k + \epsilon'_{k+1}$

The bound provides intuition as to how the Proximal Bellman Operator can accelerate convergence in the presence of high noise. For simplicity, we analyze only the effect of the $\epsilon$ noise term and ignore the $\epsilon'$ term. We first look at the Bellman residual, $b_k$. Given the Bellman residual at iteration $k-1$, $b_{k-1}$, the only influence of the noise term $\epsilon_k$ on $b_k$ is through the $(1-\beta)x_k$ term, and we see that $b_k$ decreases linearly with larger $\beta$. The analysis of $s_k$ is slightly more involved but follows similar logic. The bound for $s_k$ can be decomposed into a term proportional to $(1-\beta)\,b_{k-1}$ and a term proportional to $\beta\, b_{k-1}$, where both are multiplied by positive semi-definite matrices. Since $b_{k-1}$ itself decreases linearly with $\beta$, we conclude that larger $\beta$ decreases the bound quadratically. The effect of $\beta$ on the bound for $d_k$ is more complex. The terms $y_{k-1}$ and $\sum_{j=1}^{n-1}(\gamma P^{\pi_k})^j b_{k-1}$ introduce a linear decrease of the bound on $d_k$ with $\beta$, while the term $\big(I - \sum_{j=1}^{n-1}(\gamma P^{\pi_k})^j\big) b_{k-1}$ introduces a quadratic dependence whose curvature depends on $I - \sum_{j=1}^{n-1}(\gamma P^{\pi_k})^j$. This complex dependence on $\beta$ highlights the trade-off between noise reduction and the magnitude of updates.

To understand this trade-off better, we examine two extreme cases for the magnitude of the noise. When the noise is very large, we may set $\beta = 1$, equivalent to an infinitely strong proximal term. It is easy to see that for $\beta = 1$ the values of $d_k$ and $s_k$ remain unchanged, which is preferable to the increase they would suffer in the presence of very large noise. At the other extreme, when no noise is present, the $x_k$ and $y_k$ terms in Theorem 2 vanish, and the bounds on $d_k$ and $s_k$ can be minimized by setting $\beta = 0$; i.e., without noise the proximal term should not be used and the original Bellman update should be performed. Intermediate noise magnitudes thus require a value of $\beta$ that balances noise reduction and update size.

4 Deep Q-Network with Proximal Updates

We now endow DQN-style algorithms with proximal updates. Let $\langle s, a, r, s' \rangle$ denote a buffered tuple of interaction. Define the following objective function:

$$h(\theta, w) := \hat{\mathbb{E}}_{\langle s,a,r,s'\rangle}\Big[\big(r + \gamma \max_{a'} \hat{Q}(s', a'; \theta) - \hat{Q}(s, a; w)\big)^2\Big]. \tag{6}$$

Our proximal update is defined as follows:

$$w_{t+1} \leftarrow \arg\min_{w}\; h(w_t, w) + \frac{1}{2\tilde{c}}\,\|w - w_t\|_2^2. \tag{7}$$

This algorithm closely resembles the standard proximal-point algorithm (Rockafellar, 1976; Parikh & Boyd, 2014), with the important caveat that the function $h$ now takes two vectors as input. At each iteration, we hold the first input constant while optimizing over the second input. In the optimization literature, the proximal-point algorithm is well-studied in contexts where an analytical solution to (7) is available. With deep learning no closed-form solution exists, so we approximately solve (7) by taking a fixed number of descent steps using stochastic gradients. Specifically, starting each iteration with $w = w_t$, we perform multiple updates $w \leftarrow w - \alpha\big(\nabla_2 h(w_t, w) + \frac{1}{\tilde{c}}(w - w_t)\big)$. We end the iteration by setting $w_{t+1} \leftarrow w$. To make a connection to standard deep RL, the online weights $w$ can be thought of as the weights we maintain in the interim to solve (7) due to the lack of a closed-form solution. Also, what is commonly referred to as the target network can better be thought of as just the previous iterate in the above proximal-point algorithm. Observe that the update can be written as:

$$w \leftarrow (1 - \alpha/\tilde{c})\cdot w + (\alpha/\tilde{c})\cdot w_t - \alpha\,\nabla_2 h(w_t, w).$$

Notice the intuitively appealing form: we first compute a convex combination of $w_t$ and $w$, based on the hyper-parameters $\alpha$ and $\tilde{c}$, then add the gradient term to arrive at the next iterate of $w$. If $w_t$ and $w$ are close, the convex combination is close to $w$ itself, and so this DQN with proximal updates (DQN Pro) behaves similarly to the original DQN. However, when $w$ strays too far from $w_t$, taking the convex combination ensures that $w$ gravitates towards the previous iterate $w_t$. The gradient signal from minimizing the squared TD error (6) should then be strong enough to cancel this default gravitation towards $w_t$. The update includes standard DQN as a special case when $\tilde{c} \to \infty$. The pseudo-code for DQN Pro is presented in the Appendix. The difference between DQN and DQN Pro is minimal (shown in gray), and corresponds to a few lines of code in our implementation.
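Those few lines might look as follows in PyTorch. This is our hedged sketch of one DQN Pro gradient step, not the authors' released code; it implements the penalized objective of (7) directly, which is equivalent to folding the proximal gradient into the convex-combination form above:

```python
import torch
import torch.nn.functional as F

def dqn_pro_step(online, target, batch, optimizer, gamma, c_tilde):
    """One stochastic descent step on h(w_t, w) + ||w - w_t||^2 / (2 c~)."""
    s, a, r, s2, done = batch
    with torch.no_grad():  # bootstrap target uses the previous iterate w_t
        y = r + gamma * (1.0 - done) * target(s2).max(dim=1).values
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.mse_loss(q, y)
    # Proximal term tying the online weights w to the target weights w_t.
    prox = sum(((w - w_t.detach()) ** 2).sum()
               for w, w_t in zip(online.parameters(), target.parameters()))
    loss = td_loss + prox / (2.0 * c_tilde)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```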
5 Experiments

In this section, we empirically investigate the effectiveness of proximal updates in planning and reinforcement-learning algorithms. We begin by conducting experiments with PMPI in the context of approximate planning, and then move to large-scale RL experiments in Atari.

5.1 PMPI Experiments

We now focus on understanding the empirical impact of adding the proximal term on the performance of approximate PMPI. To this end, we use the pair of updates:

$$\pi_k \leftarrow \mathcal{G}v_{k-1}, \qquad v_k \leftarrow (1 - \beta)\big((T^{\pi_k})^n v_{k-1} + \epsilon_k\big) + \beta v_{k-1}.$$

For this experiment, we chose the toy 8×8 Frozen Lake environment from OpenAI Gym (Brockman et al., 2016), where the transition and reward model of the environment is available to the planner. Using a small environment allows us to understand the impact of the proximal term in the simplest and clearest setting. Note also that we arranged the experiment so that the policy greedification step $\mathcal{G}v_{k-1}$ is error-free for all $k$, so we can focus solely on the interplay between the proximal term and the error caused by imperfect policy evaluation. We applied 100 iterations of PMPI, then measured the quality of the resultant policy $\pi := \pi_{100}$, defined as the distance between its true value and that of the optimal policy, namely $\|V^\star - V^\pi\|_\infty$. We repeated the experiment with different magnitudes of error, as well as different values of the $\beta$ parameter. From Figure 1, it is clear that the final performance exhibits a U-shape with respect to the parameter $\beta$. It is also noticeable that the best-performing $\beta$ shifts to the right (larger values) as we increase the magnitude of the noise. This trend makes sense, and is consistent with what is predicted by Theorem 2: as the noise level rises, we have more incentive to use larger (but not too large) $\beta$ values to hedge against it.
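The tabular model needed by the planner can be extracted from the Gym environment. A sketch follows; the environment id and the `unwrapped.P` model attribute follow the classic Gym toy-text API and may differ across Gym versions:

```python
import gym
import numpy as np

env = gym.make("FrozenLake8x8-v1")
nS, nA = env.observation_space.n, env.action_space.n
P = np.zeros((nS, nA, nS))
R = np.zeros((nS, nA))
for s in range(nS):
    for a in range(nA):
        for prob, s2, rew, _ in env.unwrapped.P[s][a]:
            P[s, a, s2] += prob
            R[s, a] += prob * rew
# P and R can now drive the PMPI / proximal value iteration sketches above,
# sweeping beta and the noise magnitude to reproduce the U-shape qualitatively.
```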
5.2 Atari Experiments

In this section, we evaluate the proximal (or Pro) agents relative to their original DQN-style counterparts on the Atari benchmark (Bellemare et al., 2013), and show that endowing the agent with the proximal term can lead to significant improvements in the interim as well as in the final performance. We then investigate the utility of our proposed proximal term through further experiments. Please see the Appendix for a complete description of our experimental pipeline.

5.2.1 Setup

We used 55 Atari games (Bellemare et al., 2013) to conduct our experimental evaluations. Following Machado et al. (2018) and Castro et al. (2018), we used sticky actions to inject stochasticity into the otherwise deterministic Atari emulator. Our training and evaluation protocols and the hyper-parameter settings follow those of the Dopamine baseline (Castro et al., 2018). To report performance, we measured the undiscounted sum of rewards obtained by the learned policy during evaluation. We further report the learning curve for all experiments averaged across 5 random seeds. We reiterate that we used the exact same hyper-parameters for all agents to ensure a sound comparison.

Our Pro agents have a single additional hyper-parameter, $\tilde{c}$. We did a minimal random search on 6 games to tune $\tilde{c}$. Figure 2 visualizes the performance of the Pro agents as a function of $\tilde{c}$. In light of this result, we set $\tilde{c} = 0.2$ for DQN Pro and $\tilde{c} = 0.05$ for Rainbow Pro. We used these values of $\tilde{c}$ for all 55 games, and note that we performed no further hyper-parameter tuning at all.

5.2.2 Results

The first question is whether endowing the DQN agent with the proximal term can yield significant improvements over the original DQN. Figure 3 (top) shows a comparison between DQN and DQN Pro in terms of final performance. In particular, following standard practice (Wang et al., 2016; Dabney et al., 2018; van Seijen et al., 2019), for each game we compute:

$$\frac{\text{Score}_{\text{DQN Pro}} - \text{Score}_{\text{DQN}}}{\max(\text{Score}_{\text{DQN}},\ \text{Score}_{\text{Human}}) - \text{Score}_{\text{Random}}}.$$

Bars shown in red indicate the games in which we observed better final performance for DQN Pro relative to DQN, and bars in blue indicate the opposite. The height of a bar denotes the magnitude of this improvement for the corresponding benchmark; notice that the y-axis is scaled logarithmically. We took human and random scores from previous work (Nair et al., 2015; Dabney et al., 2018). It is clear that DQN Pro dramatically improves upon DQN. We defer to the Appendix for full learning curves on all games tested.

Can we fruitfully combine the proximal term with some of the existing algorithmic improvements to DQN? To answer this question, we build on the Rainbow algorithm of Hessel et al. (2018), who successfully combined numerous important algorithmic ideas from the value-based RL literature. We present this result in Figure 3 (bottom). Observe that the overall trend is for Rainbow Pro to yield large performance improvements over Rainbow.

Additionally, we measured the performance of our agents relative to human players. To this end, and again following previous work (Wang et al., 2016; Dabney et al., 2018; van Seijen et al., 2019), for each agent we compute the human-normalized score:

$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}}.$$

In Figure 4 (left), we show the median of this score for all agents, which Wang et al. (2016) and Hessel et al. (2018) argued is a sensible quantity to track. We also show per-game learning curves with standard errors in the Appendix. We make two key observations from this figure. First, the very basic DQN Pro agent is capable of achieving human-level performance (1.0 on the y-axis) after 120 million frames. Second, the Rainbow Pro agent achieves a 220 percent human-normalized score after only 120 million frames.
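Both normalization formulas are straightforward to compute; a small sketch for completeness:

```python
def improvement_score(score_pro, score_dqn, score_human, score_random):
    """Per-game relative improvement of DQN Pro over DQN."""
    return (score_pro - score_dqn) / (max(score_dqn, score_human) - score_random)

def human_normalized_score(score_agent, score_human, score_random):
    """Human-normalized score; 1.0 corresponds to human-level play."""
    return (score_agent - score_random) / (score_human - score_random)
```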
5.2.3 Additional Experiments

Our purpose in endowing the agent with the proximal term was to keep the online network in the vicinity of the target network, so it is natural to ask whether this desirable property manifests itself in practice when using the proximal term. In Figure 4, we answer this question affirmatively by plotting the magnitude of the update to the target network during synchronization. Notice that we periodically synchronize the online and target networks, so the proximity of the online and target networks should manifest itself in a low distance between two consecutive target networks. Indeed, the results demonstrate the success of the proximal term in obtaining the desired proximity of the online and target networks.

While using the proximal term leads to significant improvements, one may still wonder if the advantage of DQN Pro over DQN merely stems from a poorly chosen period hyper-parameter in the original DQN, as opposed to a truly more stable optimization in DQN Pro. To refute this hypothesis, we ran DQN with various settings of the period hyper-parameter, {2000, 4000, 8000, 12000}. This set included the default value of the hyper-parameter (8000) from the original paper (Mnih et al., 2015), but also covered a wider range of settings. Additionally, we tried an alternative update strategy for the target network, referred to as Polyak averaging, which was popularized in the context of continuous-action RL (Lillicrap et al., 2015): $\theta \leftarrow \tau w + (1 - \tau)\theta$. For this update strategy, too, we tried different settings of the $\tau$ hyper-parameter, namely {0.05, 0.005, 0.0005}, which includes the value 0.005 used in numerous papers (Lillicrap et al., 2015; Fujimoto et al., 2018; Asadi et al., 2021). Figure 5 presents a comparison between DQN Pro and DQN with periodic and Polyak target updates for various hyper-parameter settings of the period and $\tau$. It is clear that DQN Pro consistently outperforms the two alternatives regardless of the specific values of the period and $\tau$, demonstrating that the improvement stems from a more stable optimization procedure leading to a better interplay between the two networks.

Finally, an alternative approach to ensuring a lower distance between the online and the target network is to anneal the step size based on the number of updates performed on the online network since the last online-target synchronization. We performed this experiment on 4 games where we knew proximal updates provide improvements, based on our DQN Pro versus DQN results in Figure 3. In this case we linearly decreased the step size from the original DQN learning rate $\alpha$ to $\alpha_0 \ll \alpha$, where we tuned $\alpha_0$ using random search. Annealing indeed improves DQN, but DQN Pro outperforms the improved version of DQN. Our intuition is that Pro agents only perform small updates when the target network is far from the online network, whereas naively decaying the learning rate can harm progress when the two networks are in the vicinity of each other.

6 Discussion

In our experience, proximal updates in the parameter space were far superior to proximal updates in the value space. We believe this is because the parameter-space definition can enforce proximity globally, while in the value space one can only hope to obtain proximity locally and on a batch of samples. One may hope to use natural gradients to enforce value-space proximity in a more principled way, but doing so usually requires significantly more computational resources (Knight & Lerner, 2018). This is in contrast to our proximal updates, which add negligible computational cost in the simple form of taking a dimension-wise weighted average of two weight vectors. In addition, for a smooth (Lipschitz) Q function, performing parameter-space regularization guarantees function-space regularization. Concretely:

$$\forall s,\ \forall a:\quad |Q(s, a; \theta) - Q(s, a; \theta')| \leq L\,\|\theta - \theta'\|,$$

where $L$ is the Lipschitz constant of $Q$. Moreover, deep networks are Lipschitz (Neyshabur et al., 2015; Asadi et al., 2018), because they are constructed from compositions of Lipschitz functions (such as ReLU, convolutions, etc.), and the composition of Lipschitz functions is Lipschitz. So performing value-space updates may be overkill.
The Lipschitz property of deep networks has been successfully leveraged in other contexts, such as generative adversarial training (Arjovsky et al., 2017). A key selling point of our result is simplicity, because simple results are easy to understand, implement, and reproduce. We obtained significant performance improvements by adding just a few lines of code to the publicly available implementations of DQN and Rainbow (Castro et al., 2018).

7 Related Work

The introduction of proximal operators can be traced back to the seminal work of Moreau (1962, 1965), Martinet (1970), and Rockafellar (1976), and the use of proximal operators has since expanded into many areas of science, such as signal processing (Combettes & Pesquet, 2009), statistics and machine learning (Beck & Teboulle, 2009; Polson et al., 2015; Reddi et al., 2015), and convex optimization (Parikh & Boyd, 2014; Bertsekas, 2011b,a). In the context of RL, Mahadevan et al. (2014) introduced a proximal theory for deriving convergent off-policy algorithms with linear function approximation. One intriguing characteristic of their work is that they perform updates in the primal-dual space, a property that was leveraged in sample-complexity analysis (Liu et al., 2020) for the proximal counterparts of the gradient temporal-difference algorithm (Sutton et al., 2008). Proximal operators have also appeared in the deep RL literature. For instance, Fakoor et al. (2020b) used proximal operators for meta learning, and Maggipinto et al. (2020) improved TD3 (Fujimoto et al., 2018) by employing a stochastic proximal-point interpretation. The effect of the proximal term in our work is reminiscent of the use of trust regions in policy-gradient algorithms (Schulman et al., 2015, 2017; Wang et al., 2019; Fakoor et al., 2020a; Tomar et al., 2021). However, three factors differentiate our work: we define the proximal term using the value function, not the policy; we enforce the proximal term in the parameter space, as opposed to the function space; and we use the target network as the previous iterate in our proximal definition.

8 Conclusion and Future Work

We showed a clear advantage to using proximal terms to perform slower but more effective updates in approximate planning and reinforcement learning. Our results demonstrated that proximal updates lead to more robustness with respect to noise. Several improvements to proximal methods exist, such as the acceleration algorithm (Nesterov, 1983; Li & Lin, 2015), as well as using other proximal terms (Combettes & Pesquet, 2009), which we leave for future work.

9 Acknowledgment

We thank Lihong Li, Pratik Chaudhari, and Shoham Sabach for their valuable insights at different stages of this work.
1. What is the focus of the paper in the context of reinforcement learning?
2. What are the strengths of the proposed approach, particularly in addressing the issue of high noise in value function approximation?
3. What are the weaknesses of the paper, especially regarding the introduction of a new hyperparameter and its potential limitations in continuous action cases?
4. Do you have any questions or concerns about the experimental results presented in the paper?
5. Are there any suggestions or recommendations for improving the presentation or content of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper addresses the fundamental problem of the Bellman operator in RL. The authors propose a proximal Bellman operator with updates that incentivize the online network to remain in the vicinity of the target network. Theoretical analysis and toy-task results suggest that the proximal Bellman Operator can accelerate convergence in the presence of high noise in value function approximation. The proximal value update is then implemented on DQN and Rainbow agents and shows performance gains in many Atari games.

Strengths And Weaknesses
[Strengths]
- The work tackles a problem of general interest in deep RL concerning the target value network.
- The paper is well-written and easy to follow.
- Both theoretical analysis and experimental results are well-presented.
- The proposed method is simple and effective.

[Weaknesses]
- A new hyperparameter c~ is introduced and needs to be tuned to achieve the best performance.
- It is unclear how the proposed PMPI generalizes to the continuous-action case.

Questions
- I understand α as the learning rate, the same as in the original algorithm; is this correct?
- While the experiments are based on Atari, which consists of mostly deterministic environments, I wonder whether the proposed PMPI works even better in noisy environments. Could the authors provide some insights?
- In Fig. 1, is ϵ equal to ϵ_k?

Suggestions:
- Figs. 2 and 5 are not that friendly to color-blind readers. Some effort could be made to improve the plots.

Limitations
N/A
NIPS
Title Faster Deep Reinforcement Learning with Slower Online Network Abstract Deep reinforcement learning algorithms often use two networks for value function optimization: an online network, and a target network that tracks the online network with some delay. Using two separate networks enables the agent to hedge against issues that arise when performing bootstrapping. In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network. This improves the robustness of deep reinforcement learning in presence of noisy updates. The resultant agents, called DQN Pro and Rainbow Pro, exhibit significant performance improvements over their original counterparts on the Atari benchmark demonstrating the effectiveness of this simple idea in deep reinforcement learning. The code for our paper is available here: Github.com/amazon-research/fast-rl-with-slow-updates. 1 Introduction An important competency of reinforcement-learning (RL) agents is learning in environments with large state spaces like those found in robotics (Kober et al., 2013), dialog systems (Williams et al., 2017), and games (Tesauro, 1994; Silver et al., 2017). Recent breakthroughs in deep RL have demonstrated that simple approaches such as Q-learning (Watkins & Dayan, 1992) can surpass human-level performance in challenging environments when equipped with deep neural networks for function approximation (Mnih et al., 2015). Two components of a gradient-based deep RL agent are its objective function and optimization procedure. The optimization procedure takes estimates of the gradient of the objective with respect to network parameters and updates the parameters accordingly. In DQN (Mnih et al., 2015), for example, the objective function is the empirical expectation of the temporal difference (TD) error (Sutton, 1988) on a buffered set of environmental interactions (Lin, 1992), and variants of stochastic gradient descent are employed to best minimize this objective function. A fundamental difficulty in this context stems from the use of bootstrapping. Here, bootstrapping refers to the dependence of the target of updates on the parameters of the neural network, which is itself continuously updated during training. Employing bootstrapping in RL stands in contrast to supervised-learning techniques and Monte-Carlo RL (Sutton & Barto, 2018), where the target of our gradient updates does not depend on the parameters of the neural network. Mnih et al. (2015) proposed a simple approach to hedging against issues that arise when using bootstrapping, namely to use a target network in value-function optimization. The target network is updated periodically, and tracks the online network with some delay. While this modification 36th Conference on Neural Information Processing Systems (NeurIPS 2022). constituted a major step towards combating misbehavior in Q-learning (Lee & He, 2019; Kim et al., 2019; Zhang et al., 2021), optimization instability is still prevalent (van Hasselt et al., 2018). Our primary contribution is to endow DQN and Rainbow (Hessel et al., 2018) with a term that ensures the parameters of the online-network component remain in the proximity of the parameters of the target network. Our theoretical and empirical results show that our simple proximal updates can remarkably increase robustness to noise without incurring additional computational or memory costs. 
In particular, we present comprehensive experiments on the Atari benchmark (Bellemare et al., 2013) where proximal updates yield significant improvements, thus revealing the benefits of using this simple technique for deep RL.

2 Background and Notation
RL is the study of the interaction between an environment and an agent that learns to maximize reward through experience. The Markov Decision Process (Puterman, 1994), or MDP, is used to mathematically define the RL problem. An MDP is specified by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma \rangle$, where $\mathcal{S}$ is the set of states and $\mathcal{A}$ is the set of actions. The functions $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ and $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$ denote the reward and transition dynamics of the MDP. Finally, a discounting factor $\gamma$ is used to formalize the intuition that short-term rewards are more valuable than those received later.
The goal in the RL problem is to learn a policy, a mapping from states to a probability distribution over actions, $\pi: \mathcal{S} \to \mathbb{P}(\mathcal{A})$, that obtains high sums of future discounted rewards. An important concept in RL is the state value function. Formally, it denotes the expected discounted sum of future rewards when committing to a policy $\pi$ in a state $s$: $v^{\pi}(s) := \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^{t} R_{t} \mid S_{0}=s, \pi\big]$. We define the Bellman operator $T^{\pi}$ as follows:
$$[T^{\pi} v](s) := \sum_{a \in \mathcal{A}} \pi(a \mid s)\Big(\mathcal{R}(s,a) + \gamma \sum_{s' \in \mathcal{S}} \mathcal{P}(s,a,s')\, v(s')\Big),$$
which we can write compactly as $T^{\pi} v := R^{\pi} + \gamma P^{\pi} v$, where $[R^{\pi}](s) = \sum_{a \in \mathcal{A}} \pi(a \mid s)\, \mathcal{R}(s,a)$ and $[P^{\pi} v](s) = \sum_{a \in \mathcal{A}} \pi(a \mid s) \sum_{s' \in \mathcal{S}} \mathcal{P}(s,a,s')\, v(s')$. We also denote $(T^{\pi})^{n} v := \underbrace{T^{\pi} \cdots T^{\pi}}_{n\ \text{compositions}} v$. Notice that $v^{\pi}$ is the unique fixed point of $(T^{\pi})^{n}$ for all natural numbers $n$, meaning that $v^{\pi} = (T^{\pi})^{n} v^{\pi}$ for all $n$. Define $v^{\star}$ as the optimal value of a state, namely $v^{\star}(s) := \max_{\pi} v^{\pi}(s)$, and $\pi^{\star}$ as a policy that achieves $v^{\star}(s)$ for all states. We define the Bellman Optimality Operator $T^{\star}$:
$$[T^{\star} v](s) := \max_{a \in \mathcal{A}} \mathcal{R}(s,a) + \gamma \sum_{s' \in \mathcal{S}} \mathcal{P}(s,a,s')\, v(s'),$$
whose fixed point is $v^{\star}$. These operators are at the heart of many planning and RL algorithms including Value Iteration (Bellman, 1957) and Policy Iteration (Howard, 1960).

3 Proximal Bellman Operator
In this section, we introduce a new class of Bellman operators that ensure that the next iterate in planning and RL remains in the vicinity of the previous iterate. To this end, we define the Bregman Divergence generated by a convex function $f$:
$$D_{f}(v', v) := f(v') - f(v) - \langle \nabla f(v), v' - v \rangle.$$
Examples include the $\ell_{p}$ norm generated by $f(v) = \frac{1}{2}\|v\|_{p}^{2}$ and the Mahalanobis Distance generated by $f(v) = \frac{1}{2}\langle v, Qv \rangle$ for a positive semi-definite matrix $Q$. We now define the Proximal Bellman Operator $(T^{\pi}_{c,f})^{n}$:
$$(T^{\pi}_{c,f})^{n} v := \arg\min_{v'} \|v' - (T^{\pi})^{n} v\|_{2}^{2} + \frac{1}{c} D_{f}(v', v), \quad (1)$$
where $c \in (0, \infty)$. Intuitively, this operator encourages the next iterate to be in the proximity of the previous iterate, while also having a small difference relative to the point recommended by the original Bellman Operator. The parameter $c$ could, therefore, be thought of as a knob that controls the degree of gravitation towards the previous iterate.
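To make Eq. (1) concrete, the following is a minimal tabular sketch assuming the squared-L2 divergence $D_{f}(v', v) = \|v' - v\|_{2}^{2}$ (the same choice used in the error-propagation analysis below), for which the argmin has the closed form $(c\,(T^{\pi})^{n} v + v)/(c+1)$, a convex combination of the plain backup and the previous iterate. All function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def bellman_backup(v, R_pi, P_pi, gamma, n=1):
    """n compositions of T^pi: T^pi v = R^pi + gamma * P^pi v."""
    for _ in range(n):
        v = R_pi + gamma * P_pi @ v
    return v

def proximal_bellman(v, R_pi, P_pi, gamma, c, n=1):
    """Eq. (1) with D_f(v', v) = ||v' - v||_2^2.

    Setting the gradient of the objective to zero gives the closed form
    v' = (c * (T^pi)^n v + v) / (c + 1); c -> infinity recovers the
    ordinary backup (T^pi)^n v.
    """
    tv = bellman_backup(v, R_pi, P_pi, gamma, n)
    beta = 1.0 / (1.0 + c)   # mixing weight toward the previous iterate
    return (1.0 - beta) * tv + beta * v
```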
Our goal is to understand the behavior of the Proximal Bellman Operator when used in conjunction with the Modified Policy Iteration (MPI) algorithm (Puterman, 1994; Scherrer et al., 2015). Define $Gv$ as the greedy policy with respect to $v$. At a certain iteration $k$, Proximal Modified Policy Iteration (PMPI) proceeds as follows:
$$\pi_{k} \leftarrow G v_{k-1}, \quad (2)$$
$$v_{k} \leftarrow (T^{\pi_{k}}_{c,f})^{n} v_{k-1}. \quad (3)$$
The pair of updates above generalizes existing algorithms. Notably, with $c \to \infty$ and general $n$ we get MPI, with $c \to \infty$ and $n = 1$ the algorithm reduces to Value Iteration, and with $c \to \infty$ and $n = \infty$ we have a reduction to Policy Iteration. For finite $c$, the two extremes of $n$, namely $n = 1$ and $n = \infty$, could be thought of as the proximal versions of Value Iteration and Policy Iteration, respectively.
To analyze this approach, it is first natural to ask if each iteration of PMPI could be thought of as a contraction so we can get sound and convergent behavior in planning and learning. For $n > 1$, Scherrer et al. (2015) constructed a contrived MDP demonstrating that one iteration of MPI can unfortunately expand. As PMPI is just a generalization of MPI, the same example from Scherrer et al. (2015) shows that PMPI can expand. In the case of $n = 1$, we can rewrite the pair of equations (2) and (3) in a single update as follows: $v_{k} \leftarrow T^{\star}_{c,f} v_{k-1}$. When $c \to \infty$, standard proofs can be employed to show that the operator is a contraction (Littman & Szepesvári, 1996). We now show that $T^{\star}_{c,f}$ is a contraction for finite values of $c$. See our appendix for proofs.
Theorem 1. The Proximal Bellman Optimality Operator $T^{\star}_{c,f}$ is a contraction with fixed point $v^{\star}$.
Therefore, we get convergent behavior when using $T^{\star}_{c,f}$ in planning and RL. The addition of the proximal term is fortunately not changing the fixed point, thus not negatively affecting the final solution. This could be thought of as a form of regularization that vanishes in the limit; the algorithm converges to $v^{\star}$ even without decaying $1/c$.
Going back to the general $n \geq 1$ case, we cannot show contraction, but following previous work (Bertsekas & Tsitsiklis, 1996; Scherrer et al., 2015), we study error propagation in PMPI in presence of additive noise, where we get a noisy sample of the original Bellman Operator, $(T^{\pi_{k}})^{n} v_{k-1} + \epsilon_{k}$. The noise can stem from a variety of reasons, such as approximation or estimation error. For simplicity, we restrict the analysis to $D_{f}(v', v) = \|v' - v\|_{2}^{2}$, so we rewrite update (3) as:
$$v_{k} \leftarrow \arg\min_{v'} \big\|v' - \big((T^{\pi_{k}})^{n} v_{k-1} + \epsilon_{k}\big)\big\|_{2}^{2} + \frac{1}{c}\|v' - v_{k-1}\|_{2}^{2},$$
which can further be simplified to:
$$v_{k} \leftarrow \underbrace{(1-\beta)(T^{\pi_{k}})^{n} v_{k-1} + \beta v_{k-1}}_{:=\, (T^{\pi_{k}}_{\beta})^{n} v_{k-1}} + (1-\beta)\,\epsilon_{k},$$
where $\beta = \frac{1}{1+c}$. This operator is a generalization of the operator proposed by Smirnova & Dohmatob (2020), who focused on the case of $n = 1$. To build some intuition, notice that the update is multiplying the error $\epsilon_{k}$ by a term that is smaller than one, thus better hedging against large noise. While the update may slow progress when there is no noise, it is entirely conceivable that for large enough values of $\epsilon_{k}$, it is better to use non-zero $\beta$ values. In the following theorem we formalize this intuition. Our result leans on the theory provided by Scherrer et al. (2015) and could be thought of as a generalization of their theorem for non-zero $\beta$ values.
Theorem 2. Consider the PMPI algorithm specified by:
$$\pi_{k} \leftarrow G_{\epsilon'_{k}} v_{k-1}, \quad (4)$$
$$v_{k} \leftarrow (T^{\pi_{k}}_{\beta})^{n} v_{k-1} + (1-\beta)\,\epsilon_{k}. \quad (5)$$
Define the Bellman residual $b_{k} := v_{k} - T^{\pi_{k+1}} v_{k}$, and error terms $x_{k} := (I - \gamma P^{\pi_{k}})\,\epsilon_{k}$ and $y_{k} := \gamma P^{\pi^{\star}} \epsilon_{k}$. After $k$ steps:
$$v^{\star} - v^{\pi_{k}} = \underbrace{v^{\pi^{\star}} - (T^{\pi_{k+1}}_{\beta})^{n} v_{k}}_{d_{k}} + \underbrace{(T^{\pi_{k+1}}_{\beta})^{n} v_{k} - v^{\pi_{k}}}_{s_{k}},$$
where
• $d_{k} \leq \gamma P^{\pi^{\star}} d_{k-1} - (1-\beta)\, y_{k-1} + \beta\, b_{k-1} + (1-\beta) \sum_{j=1}^{n-1} (\gamma P^{\pi_{k}})^{j}\, b_{k-1} + \epsilon'_{k}$
• $s_{k} \leq \big((1-\beta)(\gamma P^{\pi_{k}})^{n} + \beta I\big)(I - \gamma P^{\pi_{k}})^{-1}\, b_{k-1}$
• $b_{k} \leq \big((1-\beta)(\gamma P^{\pi_{k}})^{n} + \beta I\big)\, b_{k-1} + (1-\beta)\, x_{k} + \epsilon'_{k+1}$
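A tabular sketch of one PMPI iteration in the noisy, squared-L2 form analyzed above follows; using $(T^{\pi_{k}}_{\beta})^{n} v + (1-\beta)\epsilon = (1-\beta)\big((T^{\pi_{k}})^{n} v + \epsilon\big) + \beta v$, the code applies the proximal mixing once after the n Bellman compositions. Array shapes and names are illustrative assumptions, not from the paper's code.

```python
import numpy as np

def greedy_policy(v, R, P, gamma):
    """G v: greedy policy w.r.t. v. R has shape [S, A]; P has shape [A, S, S]."""
    q = R + gamma * np.einsum('ast,t->sa', P, v)
    return q.argmax(axis=1)

def pmpi_step(v, R, P, gamma, beta, n=1, noise_std=0.0, rng=None):
    """One PMPI iteration with additive evaluation noise:
    pi_k <- G v_{k-1};  v_k <- (1-beta)*((T^pi_k)^n v_{k-1} + eps_k) + beta*v_{k-1}."""
    rng = rng if rng is not None else np.random.default_rng()
    S = len(v)
    pi = greedy_policy(v, R, P, gamma)
    R_pi = R[np.arange(S), pi]            # rewards under pi, shape [S]
    P_pi = P[pi, np.arange(S), :]         # transition matrix under pi, shape [S, S]
    tv = v
    for _ in range(n):                    # n compositions of T^pi
        tv = R_pi + gamma * P_pi @ tv
    eps = rng.normal(0.0, noise_std, size=S) if noise_std > 0 else 0.0
    return (1.0 - beta) * (tv + eps) + beta * v
```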
The bound provides intuition as to how the proximal Bellman Operator can accelerate convergence in the presence of high noise. For simplicity, we will only analyze the effect of the $\epsilon$ noise term, and ignore the $\epsilon'$ term. We first look at the Bellman residual, $b_{k}$. Given the Bellman residual in iteration $k-1$, $b_{k-1}$, the only influence of the noise term $\epsilon_{k}$ on $b_{k}$ is through the $(1-\beta)\,x_{k}$ term, and we see that $b_{k}$ decreases linearly with larger $\beta$. The analysis of $s_{k}$ is slightly more involved but follows similar logic. The bound for $s_{k}$ can be decomposed into a term proportional to $\beta\, b_{k-1}$ and a term proportional to $(1-\beta)\, b_{k-1}$, where both are multiplied with positive semi-definite matrices. Since $b_{k-1}$ itself linearly decreases with $\beta$, we conclude that larger $\beta$ decreases the bound quadratically. The effect of $\beta$ on the bound for $d_{k}$ is more complex. The terms $y_{k-1}$ and $\sum_{j=1}^{n-1} (\gamma P^{\pi_{k}})^{j}\, b_{k-1}$ introduce a linear decrease of the bound on $d_{k}$ with $\beta$, while the term $\big(I - \sum_{j=1}^{n-1} (\gamma P^{\pi_{k}})^{j}\big)\, b_{k-1}$ introduces a quadratic dependence whose curvature depends on $I - \sum_{j=1}^{n-1} (\gamma P^{\pi_{k}})^{j}$. This complex dependence on $\beta$ highlights the trade-off between noise reduction and magnitude of updates.
To understand this trade-off better, we examine two extreme cases for the magnitude of the noise. When the noise is very large, we may set $\beta = 1$, equivalent to an infinitely strong proximal term. It is easy to see that for $\beta = 1$, the values of $d_{k}$ and $s_{k}$ remain unchanged, which is preferable to the increase they would suffer in the presence of very large noise. On the other extreme, when no noise is present, the $x_{k}$ and $y_{k}$ terms in Theorem 2 vanish, and the bounds on $d_{k}$ and $s_{k}$ can be minimized by setting $\beta = 0$, i.e., without noise the proximal term should not be used and the original Bellman update performed. Intermediate noise magnitudes thus require a value of $\beta$ that balances the noise reduction and update size.

4 Deep Q-Network with Proximal Updates
We now endow DQN-style algorithms with proximal updates. Let $\langle s, a, r, s' \rangle$ denote a buffered tuple of interaction. Define the following objective function:
$$h(\theta, w) := \widehat{\mathbb{E}}_{\langle s,a,r,s' \rangle}\Big[\big(r + \gamma \max_{a'} \widehat{Q}(s', a'; \theta) - \widehat{Q}(s, a; w)\big)^{2}\Big]. \quad (6)$$
Our proximal update is defined as follows:
$$w_{t+1} \leftarrow \arg\min_{w}\; h(w_{t}, w) + \frac{1}{2\tilde{c}}\|w - w_{t}\|_{2}^{2}. \quad (7)$$
This algorithm closely resembles the standard proximal-point algorithm (Rockafellar, 1976; Parikh & Boyd, 2014) with the important caveat that the function $h$ is now taking two vectors as input. At each iteration, we hold the first input constant while optimizing over the second input. In the optimization literature, the proximal-point algorithm is well-studied in contexts where an analytical solution to (7) is available. With deep learning no closed-form solution exists, so we approximately solve (7) by taking a fixed number of descent steps using stochastic gradients. Specifically, starting each iteration with $w = w_{t}$, we perform multiple $w$ updates $w \leftarrow w - \alpha\big(\nabla_{2} h(w_{t}, w) + \frac{1}{\tilde{c}}(w - w_{t})\big)$. We end the iteration by setting $w_{t+1} \leftarrow w$.
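The inner descent step above amounts to one extra term per parameter tensor. Below is a minimal PyTorch-style sketch, assuming `online` and `target` are architecturally identical modules holding $w$ and $w_t$, and `td_loss` is the batch TD loss $h(w_t, w)$ of Eq. (6); the names and training-loop context are illustrative, not the paper's released code.

```python
import torch

def dqn_pro_inner_step(online, target, td_loss, alpha, c_tilde):
    """One inner descent step on Eq. (7):
    w <- w - alpha * (grad_2 h(w_t, w) + (w - w_t) / c_tilde)."""
    online.zero_grad()
    td_loss.backward()                       # populates w.grad with grad_2 h(w_t, w)
    with torch.no_grad():
        for w, w_t in zip(online.parameters(), target.parameters()):
            w -= alpha * (w.grad + (w - w_t) / c_tilde)
```

Taking $\tilde{c} \to \infty$ drops the $(w - w_t)/\tilde{c}$ term and recovers the standard DQN gradient step.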
To make a connection to standard deep RL, the online weights $w$ could be thought of as the weights we maintain in the interim to solve (7) due to lack of a closed-form solution. Also, what is commonly referred to as the target network could better be thought of as just the previous iterate in the above proximal-point algorithm. Observe that the update can be written as:
$$w \leftarrow \big(1 - (\alpha/\tilde{c})\big) \cdot w + (\alpha/\tilde{c}) \cdot w_{t} - \alpha\, \nabla_{2} h(w_{t}, w).$$
Notice the intuitively appealing form: we first compute a convex combination of $w_{t}$ and $w$, based on the hyper-parameters $\alpha$ and $\tilde{c}$, then add the gradient term to arrive at the next iterate of $w$. If $w_{t}$ and $w$ are close, the convex combination is close to $w$ itself and so this DQN with proximal update (DQN Pro) would behave similarly to the original DQN. However, when $w$ strays too far from $w_{t}$, taking the convex combination ensures that $w$ gravitates towards the previous iterate $w_{t}$. The gradient signal from minimizing the squared TD error (6) should then be strong enough to cancel this default gravitation towards $w_{t}$. The update includes standard DQN as a special case when $\tilde{c} \to \infty$. The pseudo-code for DQN is presented in the Appendix. The difference between DQN and DQN Pro is minimal (shown in gray), and corresponds with a few lines of code in our implementation.

5 Experiments
In this section, we empirically investigate the effectiveness of proximal updates in planning and reinforcement-learning algorithms. We begin by conducting experiments with PMPI in the context of approximate planning, and then move to large-scale RL experiments in Atari.

5.1 PMPI Experiments
We now focus on understanding the empirical impact of adding the proximal term on the performance of approximate PMPI. To this end, we use the pair of update equations:
$$\pi_{k} \leftarrow G v_{k-1}, \qquad v_{k} \leftarrow (1-\beta)\big((T^{\pi_{k}})^{n} v_{k-1} + \epsilon_{k}\big) + \beta\, v_{k-1}.$$
For this experiment, we chose the toy 8×8 Frozen Lake environment from Open AI Gym (Brockman et al., 2016), where the transition and reward model of the environment is available to the planner. Using a small environment allows us to understand the impact of the proximal term in the simplest and most clear setting. Note also that we arranged the experiment so that the policy greedification step $G v_{k-1}\ \forall k$ is error-free, so we can solely focus on the interplay between the proximal term and the error caused by imperfect policy evaluation. We applied 100 iterations of PMPI, then measured the quality of the resultant policy $\pi := \pi_{100}$ as defined by the distance between its true value and that of the optimal policy, namely $\|V^{\star} - V^{\pi}\|_{1}$. We repeated the experiment with different magnitudes of error, as well as different values of the parameter $\beta$. From Figure 1, it is clear that the final performance exhibits a U-shape with respect to the parameter $\beta$. It is also noticeable that the best-performing $\beta$ shifts to the right side (larger values) as we increase the magnitude of noise. This trend makes sense, and is consistent with what is predicted by Theorem 2: as the noise level rises, we have more incentive to use larger (but not too large) $\beta$ values to hedge against it.
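A self-contained sweep over $\beta$ and noise magnitude in the spirit of this protocol is sketched below; the paper plugs in the true 8×8 Frozen Lake model, whereas a random tabular MDP stands in here so the sketch runs as-is. It reuses `greedy_policy` and `pmpi_step` from the PMPI sketch above; all constants are illustrative.

```python
import numpy as np
# Reuses greedy_policy and pmpi_step from the PMPI sketch above.

rng = np.random.default_rng(0)
S, A, gamma = 64, 4, 0.95                      # 64 states, mirroring an 8x8 grid
P = rng.dirichlet(np.ones(S), size=(A, S))     # P[a, s, :] is a distribution over s'
R = rng.random((S, A))

def policy_value(pi):
    """Exact v^pi by solving (I - gamma * P^pi) v = R^pi."""
    R_pi = R[np.arange(S), pi]
    P_pi = P[pi, np.arange(S), :]
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

v = np.zeros(S)
for _ in range(2000):                          # noise-free value iteration for v*
    v = (R + gamma * np.einsum('ast,t->sa', P, v)).max(axis=1)
v_star = v

for noise_std in (0.0, 0.1, 1.0):
    for beta in (0.0, 0.2, 0.5, 0.8):
        v = np.zeros(S)
        for _ in range(100):                   # 100 PMPI iterations, as in the paper
            v = pmpi_step(v, R, P, gamma, beta, noise_std=noise_std, rng=rng)
        gap = np.abs(v_star - policy_value(greedy_policy(v, R, P, gamma))).sum()
        print(f"noise={noise_std:.1f}  beta={beta:.1f}  ||V* - V^pi||_1 = {gap:.3f}")
```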
5.2 Atari Experiments
In this section, we evaluate the proximal (or Pro) agents relative to their original DQN-style counterparts on the Atari benchmark (Bellemare et al., 2013), and show that endowing the agent with the proximal term can lead to significant improvements in the interim as well as in the final performance. We next investigate the utility of our proposed proximal term through further experiments. Please see the Appendix for a complete description of our experimental pipeline.

5.2.1 Setup
We used 55 Atari games (Bellemare et al., 2013) to conduct our experimental evaluations. Following Machado et al. (2018) and Castro et al. (2018), we used sticky actions to inject stochasticity into the otherwise deterministic Atari emulator. Our training and evaluation protocols and the hyper-parameter settings follow those of the Dopamine baseline (Castro et al., 2018). To report performance, we measured the undiscounted sum of rewards obtained by the learned policy during evaluation. We further report the learning curve for all experiments averaged across 5 random seeds. We reiterate that we used the exact same hyper-parameters for all agents to ensure a sound comparison. Our Pro agents have a single additional hyper-parameter $\tilde{c}$. We did a minimal random search on 6 games to tune $\tilde{c}$. Figure 2 visualizes the performance of Pro agents as a function of $\tilde{c}$. In light of this result, we set $\tilde{c} = 0.2$ for DQN Pro and $\tilde{c} = 0.05$ for Rainbow Pro. We used these values of $\tilde{c}$ for all 55 games, and note that we performed no further hyper-parameter tuning at all.

5.2.2 Results
The first question is whether endowing the DQN agent with the proximal term can yield significant improvements over the original DQN. Figure 3 (top) shows a comparison between DQN and DQN Pro in terms of the final performance. In particular, following standard practice (Wang et al., 2016; Dabney et al., 2018; van Seijen et al., 2019), for each game we compute:
$$\frac{\text{Score}_{\text{DQN Pro}} - \text{Score}_{\text{DQN}}}{\max(\text{Score}_{\text{DQN}}, \text{Score}_{\text{Human}}) - \text{Score}_{\text{Random}}}.$$
Bars shown in red indicate the games in which we observed better final performance for DQN Pro relative to DQN, and bars in blue indicate the opposite. The height of a bar denotes the magnitude of this improvement for the corresponding benchmark; notice that the y-axis is scaled logarithmically. We took human and random scores from previous work (Nair et al., 2015; Dabney et al., 2018). It is clear that DQN Pro dramatically improves upon DQN. We defer to the Appendix for full learning curves on all games tested.
Can we fruitfully combine the proximal term with some of the existing algorithmic improvements in DQN? To answer this question, we build on the Rainbow algorithm of Hessel et al. (2018), who successfully combined numerous important algorithmic ideas in the value-based RL literature. We present this result in Figure 3 (bottom). Observe that the overall trend is for Rainbow Pro to yield large performance improvements over Rainbow.
Additionally, we measured the performance of our agents relative to human players. To this end, and again following previous work (Wang et al., 2016; Dabney et al., 2018; van Seijen et al., 2019), for each agent we compute the human-normalized score:
$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}}.$$
In Figure 4 (left), we show the median of this score for all agents, which Wang et al. (2016) and Hessel et al. (2018) argued is a sensible quantity to track. We also show per-game learning curves with standard error in the Appendix. We make two key observations from this figure. First, the very basic DQN Pro agent is capable of achieving human-level performance (1.0 on the y-axis) after 120 million frames. Second, the Rainbow Pro agent achieves a 220 percent human-normalized score after only 120 million frames.
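The two scores above translate directly into code; a small sketch with illustrative function names follows.

```python
def improvement_score(score_pro, score_dqn, score_human, score_random):
    """Per-game improvement of DQN Pro over DQN, as plotted in Figure 3."""
    return (score_pro - score_dqn) / (max(score_dqn, score_human) - score_random)

def human_normalized_score(score_agent, score_human, score_random):
    """Human-normalized score: 1.0 corresponds to human-level performance."""
    return (score_agent - score_random) / (score_human - score_random)
```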
5.2.3 Additional Experiments
Our purpose in endowing the agent with the proximal term was to keep the online network in the vicinity of the target network, so it would be natural to ask if this desirable property can manifest itself in practice when using the proximal term. In Figure 4, we answer this question affirmatively by plotting the magnitude of the update to the target network during synchronization. Notice that we periodically synchronize online and target networks, so the proximity of the online and target network should manifest itself in a low distance between two consecutive target networks. Indeed, the results demonstrate the success of the proximal term in terms of obtaining the desired proximity of online and target networks.
While using the proximal term leads to significant improvements, one may still wonder if the advantage of DQN Pro over DQN is merely stemming from a poorly-chosen period hyper-parameter in the original DQN, as opposed to a truly more stable optimization in DQN Pro. To refute this hypothesis, we ran DQN with various settings of the period hyper-parameter {2000, 4000, 8000, 12000}. This set included the default value of the hyper-parameter (8000) from the original paper (Mnih et al., 2015), but also covered a wider set of settings. Additionally, we tried an alternative update strategy for the target network, referred to as Polyak averaging, which was popularized in the context of continuous-action RL (Lillicrap et al., 2015): $\theta \leftarrow \tau w + (1-\tau)\theta$. For this update strategy, too, we tried different settings of the $\tau$ hyper-parameter, namely {0.05, 0.005, 0.0005}, which includes the value 0.005 used in numerous papers (Lillicrap et al., 2015; Fujimoto et al., 2018; Asadi et al., 2021). Figure 5 presents a comparison between DQN Pro and DQN with periodic and Polyak target updates for various hyper-parameter settings of period and $\tau$. It is clear that DQN Pro consistently outperforms the two alternatives regardless of the specific values of period and $\tau$, thus clearly demonstrating that the improvement is stemming from a more stable optimization procedure leading to a better interplay between the two networks.
Finally, an alternative approach to ensuring a lower distance between the online and the target network is to anneal the step size based on the number of updates performed on the online network since the last online-target synchronization. We performed this experiment in 4 games where we knew proximal updates provide improvements based on our DQN Pro versus DQN results in Figure 3. In this case we linearly decreased the step size from the original DQN learning rate $\alpha$ to $\alpha' \ll \alpha$, where we tuned $\alpha'$ using random search. Annealing indeed improves DQN, but DQN Pro outperforms the improved version of DQN. Our intuition is that Pro agents only perform small updates when the target network is far from the online network, but naively decaying the learning rate can harm progress when the two networks are in the vicinity of each other.
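For reference, the two baseline target-update strategies swept in Figure 5 are each a few lines; a minimal PyTorch-style sketch follows, with names and defaults chosen for illustration. DQN Pro keeps the periodic hard sync and instead adds the proximal term to the online update, which is why neither baseline sweep matches it.

```python
import torch

def periodic_update(online, target, step, period=8000):
    """Hard sync of the target every `period` online updates (standard DQN)."""
    if step % period == 0:
        target.load_state_dict(online.state_dict())

def polyak_update(online, target, tau=0.005):
    """Soft update theta <- tau * w + (1 - tau) * theta after every online step."""
    with torch.no_grad():
        for w, theta in zip(online.parameters(), target.parameters()):
            theta.mul_(1.0 - tau).add_(w, alpha=tau)
```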
6 Discussion
In our experience, using proximal updates in the parameter space was far superior to proximal updates in the value space. We believe this is because the parameter-space definition can enforce the proximity globally, while in the value space one can only hope to obtain proximity locally and on a batch of samples. One may hope to use natural gradients to enforce value-space proximity in a more principled way, but doing so usually requires significantly more computational resources (Knight & Lerner, 2018). This is in contrast to our proximal updates, which add negligible computational cost in the simple form of taking a dimension-wise weighted average of two weight vectors. In addition, for a smooth (Lipschitz) Q function, performing parameter-space regularization guarantees function-space regularization. Concretely:
$$\forall s,\ \forall a \quad |Q(s, a; \theta) - Q(s, a; \theta')| \leq L\, \|\theta - \theta'\|,$$
where $L$ is the Lipschitz constant of $Q$. Moreover, deep networks are Lipschitz (Neyshabur et al., 2015; Asadi et al., 2018), because they are constructed using compositions of Lipschitz functions (such as ReLU, convolutions, etc.) and the composition of Lipschitz functions is Lipschitz. So performing value-space updates may be overkill. The Lipschitz property of deep networks has successfully been leveraged in other contexts, such as in generative adversarial training (Arjovsky et al., 2017).
A key selling point of our result is simplicity, because simple results are easy to understand, implement, and reproduce. We obtained significant performance improvements by adding just a few lines of code to the publicly available implementations of DQN and Rainbow (Castro et al., 2018).

7 Related Work
The introduction of proximal operators could be traced back to the seminal work of Moreau (1962, 1965), Martinet (1970) and Rockafellar (1976), and the use of proximal operators has since expanded into many areas of science such as signal processing (Combettes & Pesquet, 2009), statistics and machine learning (Beck & Teboulle, 2009; Polson et al., 2015; Reddi et al., 2015), and convex optimization (Parikh & Boyd, 2014; Bertsekas, 2011b,a). In the context of RL, Mahadevan et al. (2014) introduced a proximal theory for deriving convergent off-policy algorithms with linear function approximation. One intriguing characteristic of their work is that they perform updates in primal-dual space, a property that was leveraged in sample complexity analysis (Liu et al., 2020) for the proximal counterparts of the gradient temporal-difference algorithm (Sutton et al., 2008). Proximal operators have also appeared in the deep RL literature. For instance, Fakoor et al. (2020b) used proximal operators for meta learning, and Maggipinto et al. (2020) improved TD3 (Fujimoto et al., 2018) by employing a stochastic proximal-point interpretation. The effect of the proximal term in our work is reminiscent of the use of trust regions in policy-gradient algorithms (Schulman et al., 2015, 2017; Wang et al., 2019; Fakoor et al., 2020a; Tomar et al., 2021). However, three factors differentiate our work: we define the proximal term using the value function, not the policy; we enforce the proximal term in the parameter space, as opposed to the function space; and we use the target network as the previous iterate in our proximal definition.

8 Conclusion and Future work
We showed a clear advantage of using proximal terms to perform slower but more effective updates in approximate planning and reinforcement learning. Our results demonstrated that proximal updates lead to more robustness with respect to noise. Several improvements to proximal methods exist, such as the acceleration algorithm (Nesterov, 1983; Li & Lin, 2015), as well as using other proximal terms (Combettes & Pesquet, 2009), which we leave for future work.

9 Acknowledgment
We thank Lihong Li, Pratik Chaudhari, and Shoham Sabach for their valuable insights in different stages of this work.
1. What is the focus and contribution of the paper regarding reinforcement learning robustness?
2. What are the strengths of the proposed approach, particularly its simplicity and theoretical support?
3. What are the weaknesses of the paper, especially regarding novelty and the lack of explanation of how the proposed method differs from alternatives?
4. Do you have any questions or concerns about the experimental results, such as additional analysis on environments where the method did not perform well, or results for tasks with continuous action spaces?
5. Are there any minor issues or typos in the paper that could be improved?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors propose to improve reinforcement learning robustness by essentially adding a regularization term to the Q updates, so that the weights of the online Q network stay closer to the weights of the target Q network. The authors provide both theoretical and empirical results and show that when the proposed method is added to DQN and Rainbow baselines, it improves performance quite significantly on Atari tasks.
Strengths And Weaknesses
Strengths:
The simplicity of the method: as the authors discussed, the proposed method is quite simple and effective, and the simplicity brings a number of benefits.
Clean presentation and adequate technical details: the authors did a very good job presenting the method and the many technical details; overall, the results seem to be reliable.
Interesting ablations: very important ablations show that the effect of the proposed method is not the same as alternatives such as Polyak averaging and learning-rate decay.
Theoretical support: the theoretical results are good.
Significant results: although the proposed method did not improve on all tasks, overall the results show a significant improvement in performance, and they seem reliable given the ablations provided and the discussion on hyperparameters.
Weaknesses:
The novelty of the idea: the idea of making network weights stay close to the weights of an older version of the same network is not new. However, one would expect this particular simple design to have a similar effect to changing Polyak averaging or learning-rate decay, and the authors show that this is not the case; those ablations are very interesting. So this should probably not be considered a major weakness.
Would love to see some more discussion on how exactly the proposed regularization can have a superior effect compared to alternatives such as using Polyak averaging and smaller learning rates. Currently, there are empirical results that indicate the effects are very different, but why? For example, if we use Polyak averaging instead, will it perhaps... always produce a much larger shift in network weights or maybe the output Q values? I think it would be great if the authors could dig deeper and provide a hypothesis on what the fundamental difference is that allowed the proposed method to achieve much stronger performance than alternative regularizations.
Would love to see some additional analysis on environments where the proposed method did not do well; in these cases, is it because the Q networks are now updated much more slowly (due to the fact that a consistent hyperparameter is selected for all environments)? Then, for example, would increasing the learning rate in these environments recover the performance? Not a major point, but it would be good to see how that works.
Would be great if we could also see results for tasks with continuous action spaces, such as MuJoCo, where there seems to be a larger problem with bootstrapping. But I understand this might be significant extra work.
Very minor typos: line 195 "xCan".
Summary: Overall, I like this work; it is presented in a very clean fashion, with adequate details and good results. My current major concern is probably the lack of explanation of how maintaining vicinity in the weight parameter space gives a very different effect compared to alternative methods. I am willing to increase my score if the rebuttal can address my concern.
Questions
My main question is: How is the proposed method fundamentally different from the alternatives? Why is it that it can achieve a good performance improvement while others fail? (I also mentioned this in the previous section.) In the current version of the paper, I don't quite find a very clear and comprehensive explanation for this.
Limitations
I saw the authors indicate they have discussed the limitations of the work, but I did not seem to find where the discussion is; maybe I missed it? It would be great if you could point it out for me.
NIPS
1. What is the focus of the paper regarding deep reinforcement learning algorithms?
2. What are the strengths of the proposed approach, particularly in terms of regularization and stability?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any concerns or limitations regarding the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors consider the class of deep reinforcement learning algorithms that are based on an online network that is updated at every interaction and a target network that is updated periodically. This technique, in general, ensures that the slower-moving target network stabilizes the learning. The paper proposes a type of regularization which keeps the online network in the proximity of the target network (this technique is referred to as the "slowing down" in the title of the paper). The paper considers the DQN and Rainbow algorithms and modifies them with the proposed approach. Experiments on the OpenAI Gym Atari game benchmarks show a significant improvement compared to the baseline algorithms. Experiments show that the performance improvement generated by the technique depends on the level of noise in the reward function of the RL process.
Strengths And Weaknesses
The paper is written very clearly. Whenever feasible, theoretical motivations for the approach are given. The paper clearly points out the parts where the current level of understanding does not allow for theoretical proofs.
Questions
The paper provides sufficient information to make an informed opinion.
Limitations
The paper adequately addresses the limitations of the work. The theoretical nature of the paper does not raise issues of negative societal impact.
NIPS
Title
Learning Graph Models for Retrosynthesis Prediction

Abstract
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule. A key consideration in building neural models for this task is aligning model design with strategies adopted by chemists. Building on this viewpoint, this paper introduces a graph-based approach that capitalizes on the idea that the graph topology of precursor molecules is largely unaltered during a chemical reaction. The model first predicts the set of graph edits transforming the target into incomplete molecules called synthons. Next, the model learns to expand synthons into complete molecules by attaching relevant leaving groups. This decomposition simplifies the architecture, making its predictions more interpretable, and also amenable to manual correction. Our model achieves a top-1 accuracy of 53.7%, outperforming previous template-free and semi-template-based methods.

1 Introduction
Retrosynthesis prediction, first formalized by E. J. Corey [Corey, 1991], is a fundamental problem in organic synthesis that attempts to identify a series of chemical transformations for synthesizing a target molecule. In the single-step formulation, the task is to identify a set of reactant molecules given a target. Beyond simple reactions, many practical tasks involving complex organic molecules are difficult even for expert chemists. As a result, substantial experimental exploration is needed to cover for deficiencies of analytical approaches. This has motivated interest in computer-assisted retrosynthesis [Corey and Wipke, 1969], with a recent surge in machine learning methods [Chen et al., 2019, Coley et al., 2017b, Dai et al., 2019, Zheng et al., 2019, Genheden et al., 2020]. Computationally, the main challenge is how to explore the combinatorial space of reactions that can yield the target molecule. Largely, previous methods for retrosynthesis prediction can be divided into template-based [Coley et al., 2017b, Dai et al., 2019, Segler and Waller, 2017] and template-free [Chen et al., 2019, Zheng et al., 2019] approaches. Template-based methods match a target molecule against a large set of templates, which are molecular subgraph patterns that highlight changes during a chemical reaction. Despite their interpretability, these methods fail to generalize to new reactions. Template-free methods bypass templates by learning a direct mapping from the SMILES [Weininger, 1988] representations of the product to reactants.
Despite their greater generalization potential, these methods generate reactant SMILES character by character, increasing generation complexity.

Another important consideration in building retrosynthesis models is aligning model design with strategies adopted by expert chemists. These strategies are influenced by fundamental properties of chemical reactions, independent of complexity level: (i.) the product atoms are always a subset of the reactant atoms (ignoring impurities), and (ii.) the molecular graph topology is largely unaltered from products to reactants. For example, in the standard retrosynthesis dataset, only 6.3% of the atoms in the product undergo any change in connectivity. This consideration has received more attention in recent semi-template-based methods [Shi et al., 2020, Yan et al., 2020], which generate reactants from a product in two stages: (i.) first identify intermediate molecules called synthons, and (ii.) then complete the synthons into reactants by sequential generation of atoms or SMILES characters. Our model GRAPHRETRO also uses a similar workflow. However, we avoid sequential generation for completing synthons by instead selecting subgraphs called leaving groups from a precomputed vocabulary. This vocabulary is constructed during preprocessing by extracting subgraphs that differ between a synthon and the corresponding reactant. The vocabulary has a small size (170 for USPTO-50k), indicating remarkable redundancy, while covering 99.7% of the test set. Operating at the level of these subgraphs greatly reduces the complexity of reactant generation, with improved empirical performance. This formulation also simplifies our architecture, and makes our predictions more transparent, interpretable and amenable to manual correction.

The benchmark dataset for evaluating retrosynthesis models is USPTO-50k [Schneider et al., 2016], which consists of 50,000 reactions across 10 reaction classes. The dataset contains an unexpected shortcut towards predicting the edit, in that the product atom with atom-mapping 1 is part of the edit in 75% of the cases, allowing predictions that depend on the position of the atom to overestimate performance. We canonicalize the product SMILES and remap the existing dataset, thereby removing the shortcut. On this remapped dataset, GRAPHRETRO achieves a top-1 accuracy of 53.7% when the reaction class is not known, outperforming both template-free and semi-template-based methods.

2 Related Work

Retrosynthesis Prediction: Existing machine learning methods for retrosynthesis prediction can be divided into template-based, template-free and recent semi-template-based approaches.

Template-Based: Templates are either hand-crafted by experts [Hartenfeller et al., 2011, Szymkuć et al., 2016], or extracted algorithmically from large databases [Coley et al., 2017a, Law et al., 2009]. Exhaustively applying large template sets is expensive due to the involved subgraph matching procedure. Template-based methods therefore utilize different ways of prioritizing templates, by either learning a conditional distribution over the template set [Segler and Waller, 2017], ranking templates based on molecular similarities to precedent reactions [Coley et al., 2017b], or directly modelling the joint distribution of templates and reactants using logic variables [Dai et al., 2019]. Despite their interpretability, these methods fail to generalize outside their rule set.
Template-Free: Template-free methods [Liu et al., 2017, Zheng et al., 2019, Chen et al., 2019] learn a direct transformation from products to reactants using architectures from neural machine translation and a string-based representation of molecules called SMILES [Weininger, 1988]. Linearizing molecules as strings does not utilize the inherently rich chemical structure. In addition, the reactant SMILES are generated from scratch, character by character. Attempts have been made to improve validity by adding a syntax corrector [Zheng et al., 2019] and a mixture model to improve diversity of suggestions [Chen et al., 2019], but the performance remains worse than that of [Dai et al., 2019] on the standard retrosynthesis dataset. Sun et al. [2021] formulate retrosynthesis using energy-based models, with additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis) prediction.

Semi-Template-Based: Our work is closely related to recently proposed semi-template-based methods [Shi et al., 2020, Yan et al., 2020], which first identify synthons and then expand synthons into reactants through sequential generation using either a graph generative model [Shi et al., 2020] or a Transformer [Yan et al., 2020]. To reduce the complexity of reactant generation, we instead complete synthons using subgraphs called leaving groups selected from a precomputed vocabulary. This allows us to view synthon completion as a classification problem instead of a generative one. We also utilize the dependency graph between possible edits, and update edit predictions using a message passing network (MPN) [Gilmer et al., 2017] on this graph. Together, these two innovations yield performance improvements of 4.8% and 3.3%, respectively, over the two previous semi-template-based methods.

Reaction Center Identification: The reaction center covers a small number of participating atoms involved in the reaction. Our work is also related to models that predict reaction outcomes by learning to rank atom pairs based on their likelihood to be in the reaction center [Coley et al., 2019, Jin et al., 2017]. The task of identifying the reaction center is related to the step of deriving the synthons in our formulation. Our work departs from [Coley et al., 2019, Jin et al., 2017] in that we utilize the property that new bond formations occur rarely (~0.1%) from products to synthons, allowing us to predict a score only for existing bonds and atoms and to reduce prediction complexity from $O(N^2)$ to $O(N)$. We also utilize the dependency graph between possible edits, and update edit predictions using an MPN on this graph.

Utilizing Substructures: Substructures have been utilized in various tasks, from sentence generation by fusing phrases to molecule generation and optimization [Jin et al., 2018, 2020]. Our work is closely related to [Jin et al., 2020], which uses precomputed substructures as building blocks for property-conditioned molecule generation. However, instead of being precomputed, synthons (the analogous building blocks for reactants) are indirectly learnt during training.

3 Model Design

Our approach leverages the property that graph topology is largely unaltered from products to reactants. To achieve this, we first derive suitable building blocks from the product called synthons, and then complete them into valid reactants by adding specific functionalities called leaving groups. These derivations, called edits, are characterized by modifications to bonds or hydrogen counts on atoms.
We first train a neural network to predict a score for possible edits (Section 3.1). The edit with the highest score is then applied to the product to obtain synthons. Since the number of unique leaving groups is small, we model leaving group selection as a classification problem over a precomputed vocabulary (Section 3.2). To produce candidate reactants, we attach the predicted leaving group to the corresponding synthon through chemically constrained rules. The overall process is outlined in Figure 1. Before describing the two modules, we introduce relevant preliminaries that set the background for the remainder of the paper.

Retrosynthesis Prediction: A retrosynthesis pair $R$ is described by a pair of molecular graphs $(\mathcal{G}_p, \mathcal{G}_r)$, where $\mathcal{G}_p$ are the products and $\mathcal{G}_r$ the reactants. A molecular graph is described as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with atoms $\mathcal{V}$ as nodes and bonds $\mathcal{E}$ as edges. Prior work has focused on the single-product case, while reactants can have multiple connected components, i.e., $\mathcal{G}_r = \{\mathcal{G}_{r_c}\}_{c=1}^{C}$. Retrosynthesis pairs are atom-mapped so that each product atom has a unique corresponding reactant atom. The retrosynthesis task, then, is to infer $\{\mathcal{G}_{r_c}\}_{c=1}^{C}$ given $\mathcal{G}_p$.

Edits: Edits consist of (i.) atom pairs $\{(a_i, a_j)\}$ where the bond type changes from products to reactants, and (ii.) atoms $\{a_i\}$ where the number of hydrogens attached to the atom changes from products to reactants. We denote the set of edits by $E$. Since retrosynthesis pairs in the training set are atom-mapped, edits can be automatically identified by comparing the atoms and atom pairs in the product to their corresponding reactant counterparts.

Synthons and Leaving Groups: Applying edits $E$ to the product $\mathcal{G}_p$ results in incomplete molecules called synthons. Synthons are analogous to rationales or building blocks, which are expanded into valid reactants by adding specific functionalities called leaving groups that are responsible for their reactivity. We denote synthons by $\mathcal{G}_s$ and leaving groups by $\mathcal{G}_l$. We further assume that synthons and leaving groups have the same number of connected components as the reactants, i.e., $\mathcal{G}_s = \{\mathcal{G}_{s_c}\}_{c=1}^{C}$ and $\mathcal{G}_l = \{\mathcal{G}_{l_c}\}_{c=1}^{C}$. This assumption holds for 99.97% of reactions in the training set.

Formally, our model generates reactants by first predicting the set of edits $E$ that transform $\mathcal{G}_p$ into $\mathcal{G}_s$, followed by predicting a leaving group $\mathcal{G}_{l_c}$ to attach to each synthon $\mathcal{G}_{s_c}$. The model is defined as

$$P(\mathcal{G}_r \mid \mathcal{G}_p) = \sum_{E, \mathcal{G}_l} P(E \mid \mathcal{G}_p)\, P(\mathcal{G}_l \mid \mathcal{G}_p, \mathcal{G}_s), \qquad (1)$$

where $\mathcal{G}_s, \mathcal{G}_r$ are deterministic given $E$, $\mathcal{G}_l$, and $\mathcal{G}_p$.

3.1 Edit Prediction

For a given retrosynthesis pair $R = (\mathcal{G}_p, \mathcal{G}_r)$, we predict an edit score only for existing bonds and atoms, instead of every atom pair as in [Coley et al., 2019, Jin et al., 2017]. This choice is motivated by the low frequency (~0.1%) of new bond formations in the training set examples. Coupled with the sparsity of molecular graphs, this reduces the prediction complexity from $O(N^2)$ to $O(N)$ for a product with $N$ atoms. Our edit prediction model has variants tailored to single and multiple edit prediction. Since 95% of the training set consists of single-edit examples, the remainder of this section describes the setup for single edit prediction. A detailed description of our multiple edit prediction model can be found in the appendix.

Each bond $(u, v)$ in $\mathcal{G}_p$ is associated with a label $y_{uvk} \in \{0, 1\}$ indicating whether its bond type $k$ has changed from the products to reactants. Each atom $u$ is associated with a label $y_u \in \{0, 1\}$ indicating a change in hydrogen count.
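Because retrosynthesis pairs are atom-mapped, these labels can be derived mechanically during preprocessing. The following is a minimal sketch (not the authors' code) of such an extraction using RDKit; it assumes every atom in both SMILES carries a nonzero atom-map number, and all function names are illustrative.

```python
# Derive edit labels from an atom-mapped product/reactant pair, following the
# paper's definition: bonds whose type changes, and atoms whose hydrogen
# count changes, from products to reactants.
from rdkit import Chem

def bond_dict(mol):
    """Map a pair of atom-map numbers to the order of the bond joining them."""
    return {
        frozenset((b.GetBeginAtom().GetAtomMapNum(),
                   b.GetEndAtom().GetAtomMapNum())): b.GetBondTypeAsDouble()
        for b in mol.GetBonds()
    }

def hydrogen_dict(mol):
    """Map an atom-map number to the atom's total hydrogen count."""
    return {a.GetAtomMapNum(): a.GetTotalNumHs() for a in mol.GetAtoms()}

def extract_edits(product_smiles, reactant_smiles):
    prod = Chem.MolFromSmiles(product_smiles)
    reac = Chem.MolFromSmiles(reactant_smiles)
    p_bonds, r_bonds = bond_dict(prod), bond_dict(reac)
    # Bond edits: existing product bonds whose type differs in the reactants
    # (a broken bond shows up as a change to bond order 0).
    bond_edits = [(pair, p_bonds[pair], r_bonds.get(pair, 0.0))
                  for pair in p_bonds if p_bonds[pair] != r_bonds.get(pair, 0.0)]
    # Atom edits: product atoms whose hydrogen count changes.
    p_hs, r_hs = hydrogen_dict(prod), hydrogen_dict(reac)
    atom_edits = [m for m in p_hs if p_hs[m] != r_hs.get(m, p_hs[m])]
    return bond_edits, atom_edits
```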
We predict edit scores using representations that are learnt using a graph encoder.

Graph Encoder: To obtain atom representations, we use a variant of the message passing network (MPN) described in [Gilmer et al., 2017]. Each atom $u$ has a feature vector $x_u$ indicating its atom type, degree and other properties. Each bond $(u, v)$ has a feature vector $x_{uv}$ indicating its aromaticity, bond type and ring membership. For simplicity, we denote the encoding process by $\mathrm{MPN}(\cdot)$ and describe architectural details in the appendix. The MPN computes atom representations $\{c_u \mid u \in \mathcal{G}\}$ via

$$\{c_u\} = \mathrm{MPN}(\mathcal{G}, \{x_u\}, \{x_{uv}\}_{v \in \mathcal{N}(u)}), \qquad (2)$$

where $\mathcal{N}(u)$ denotes the neighbors of atom $u$. The graph representation $c_{\mathcal{G}}$ is an aggregation of atom representations, i.e., $c_{\mathcal{G}} = \sum_{u \in \mathcal{V}} c_u$. When $\mathcal{G}$ has connected components $\{\mathcal{G}_i\}$, we get a set of graph representations $\{c_{\mathcal{G}_i}\}$. For a bond $(u, v)$, we define its representation $c_{uv} = (\mathrm{ABS}(c_u, c_v) \,\|\, c_u + c_v)$, where $\mathrm{ABS}$ denotes absolute difference and $\|$ refers to concatenation. This ensures our representations are permutation invariant. These representations are then used to predict atom and bond edit scores using corresponding neural networks,

$$s_u = u_a^{T}\, \tau(W_a c_u + b), \qquad (3)$$
$$s_{uvk} = u_k^{T}\, \tau(W_k c_{uv} + b_k), \qquad (4)$$

where $\tau(\cdot)$ is the ReLU activation function.

Updating Bond Edit Scores: Unlike a typical classification problem where the labels are independent, edits can have possible dependencies between each other. For example, bonds that are part of a stable system such as an aromatic ring have a greater tendency to remain unchanged (label 0). We attempt to leverage such dependencies to update the initial edit scores. To this end, we build a graph with bonds $(u, v)$ as nodes, and introduce an edge between bonds sharing an atom. We use another $\mathrm{MPN}(\cdot)$ on this graph to learn aggregated neighborhood messages $m_{uv}$, and update the edit scores $s_{uvk}$ in a manner similar to how LSTMs update representations,

$$f_{uvk} = \sigma(W^{f}_{kx} x_{uv} + W^{f}_{km} m_{uv}), \qquad (5)$$
$$i_{uvk} = \sigma(W^{i}_{kx} x_{uv} + W^{i}_{km} m_{uv}), \qquad (6)$$
$$\tilde{m}_{uvk} = u_m\, \tau(W^{m}_{kx} x_{uv} + W^{m}_{km} m_{uv}), \qquad (7)$$
$$\tilde{s}_{uvk} = f_{uvk} \cdot s_{uvk} + i_{uvk} \cdot \tilde{m}_{uvk}. \qquad (8)$$

Training: We train by minimizing the cross-entropy loss over possible bond and atom edits,

$$\mathcal{L}_e = -\sum_{(\mathcal{G}_p, E)} \left[ \sum_{((u,v),k) \in E} y_{uvk} \log(\tilde{s}_{uvk}) + \sum_{u \in E} y_u \log(s_u) \right]. \qquad (9)$$

The cross-entropy loss enforces the model to learn a distribution over possible edits instead of reasoning about each edit independently, as with the binary cross-entropy loss used in [Jin et al., 2017, Coley et al., 2019].
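To make the scoring and update steps concrete, here is a minimal PyTorch sketch of a bond edit scoring head in the spirit of Eqs. (3)-(8). It is an illustrative stand-in, not the authors' implementation: the atom embeddings c_u, bond features x_uv and neighborhood messages m_uv are assumed to come from MPN encoders that are omitted here, and all layer shapes are placeholder choices.

```python
# Permutation-invariant bond representation (|c_u - c_v| || c_u + c_v),
# initial per-bond-type scores, and the LSTM-style gated score update.
import torch
import torch.nn as nn

class BondEditScorer(nn.Module):
    def __init__(self, atom_dim, bond_feat_dim, msg_dim, hidden_dim, n_bond_types):
        super().__init__()
        # Scoring MLP over the permutation-invariant bond representation (Eq. 4).
        self.score_mlp = nn.Sequential(
            nn.Linear(2 * atom_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_bond_types),
        )
        # Gates and candidate of the update step consume x_uv and m_uv.
        self.forget_gate = nn.Linear(bond_feat_dim + msg_dim, n_bond_types)
        self.input_gate = nn.Linear(bond_feat_dim + msg_dim, n_bond_types)
        self.candidate_inner = nn.Linear(bond_feat_dim + msg_dim, hidden_dim)
        self.candidate_outer = nn.Linear(hidden_dim, n_bond_types, bias=False)

    def forward(self, c_u, c_v, x_uv, m_uv):
        # Permutation-invariant bond representation.
        c_uv = torch.cat([(c_u - c_v).abs(), c_u + c_v], dim=-1)
        s_uvk = self.score_mlp(c_uv)                          # initial scores (Eq. 4)
        z = torch.cat([x_uv, m_uv], dim=-1)
        f = torch.sigmoid(self.forget_gate(z))                # forget gate (Eq. 5)
        i = torch.sigmoid(self.input_gate(z))                 # input gate (Eq. 6)
        m_tilde = self.candidate_outer(torch.relu(self.candidate_inner(z)))  # Eq. 7
        return f * s_uvk + i * m_tilde                        # updated scores (Eq. 8)

# Example usage with random tensors standing in for MPN outputs:
scorer = BondEditScorer(atom_dim=64, bond_feat_dim=16, msg_dim=32,
                        hidden_dim=128, n_bond_types=4)
c_u, c_v = torch.randn(10, 64), torch.randn(10, 64)           # a batch of 10 bonds
x_uv, m_uv = torch.randn(10, 16), torch.randn(10, 32)
scores = scorer(c_u, c_v, x_uv, m_uv)                          # shape (10, 4)
```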
3.2 Synthon Completion

Synthons are completed into valid reactants by adding specific functionalities called leaving groups. This involves two complementary tasks: (i.) selecting the appropriate leaving group, and (ii.) attaching the leaving group to the synthon. As ground truth leaving groups are not directly provided, we extract the leaving groups and construct a vocabulary $\mathcal{X}$ of unique leaving groups during preprocessing. The vocabulary has a limited size ($|\mathcal{X}| = 170$ for a standard dataset with 50,000 examples and ~72,000 synthons), indicating the redundancy of leaving groups used in accomplishing retrosynthetic transformations. This redundancy also allows us to formulate leaving group selection as a classification problem over $\mathcal{X}$, while retaining the ability to generate diverse reactants using different combinations of leaving groups.

Vocabulary Construction: Before constructing the vocabulary, we align connected components of synthon and reactant graphs by comparing atom-mapping overlaps. Using aligned pairs $\mathcal{G}_{s_c} = (\mathcal{V}_{s_c}, \mathcal{E}_{s_c})$ and $\mathcal{G}_{r_c} = (\mathcal{V}_{r_c}, \mathcal{E}_{r_c})$ as input, the leaving group vocabulary $\mathcal{X}$ is constructed by extracting subgraphs $\mathcal{G}_{l_c} = (\mathcal{V}_{l_c}, \mathcal{E}_{l_c})$ such that $\mathcal{V}_{l_c} = \mathcal{V}_{r_c} \setminus \mathcal{V}_{s_c}$. Atoms $\{a_i\}$ in the leaving groups that attach to synthons are marked with a special symbol. We also add three tokens to $\mathcal{X}$: START, which indicates the start of synthon completion; END, which indicates that there is no leaving group to add; and PAD, which is used to handle variable numbers of synthon components in a minibatch.

Leaving Group Selection: For synthon component $c \le C$, where $C$ is the number of connected components in the synthon graph, we use three inputs for leaving group selection: the product representation $c_{\mathcal{G}_p}$, the synthon component representation $c_{\mathcal{G}_{s_c}}$, and the leaving group representation for the previous synthon component, $e_{l_{c-1}}$. The product and synthon representations are learnt using the $\mathrm{MPN}(\cdot)$. For each $x_i \in \mathcal{X}$, representations can be learnt by either training independent embedding vectors (ind) or by treating each $x_i$ as a subgraph and using the $\mathrm{MPN}(\cdot)$ (shared). In the shared setting, we use the same $\mathrm{MPN}(\cdot)$ as for the products and synthons. The leaving group probabilities are then computed by combining $c_{\mathcal{G}_p}$, $c_{\mathcal{G}_{s_c}}$ and $e_{l_{c-1}}$ via a single-layer neural network and a softmax function,

$$\hat{q}_{l_c} = \mathrm{softmax}\left(U\, \tau\left(W_1 c_{\mathcal{G}_p} + W_2 c_{\mathcal{G}_{s_c}} + W_3 e_{l_{c-1}}\right)\right), \qquad (10)$$

where $\hat{q}_{l_c}$ is the distribution learnt over $\mathcal{X}$. Using the representation of the previous leaving group $e_{l_{c-1}}$ allows the model to understand combinations of leaving groups that generate the desired product from the reactants. We also include the product representation $c_{\mathcal{G}_p}$ as the synthon graphs are derived from the product graph.

Training: For step $c$, given the one-hot encoding of the true leaving group $q_{l_c}$, we minimize the cross-entropy loss

$$\mathcal{L}_s = \sum_{c=1}^{C} \mathcal{L}(\hat{q}_{l_c}, q_{l_c}). \qquad (11)$$

Training utilizes teacher forcing [Williams and Zipser, 1989] so that the model makes predictions given correct histories. During inference, at every step, we use the representation of the leaving group from the previous step with the highest predicted probability.

Leaving Group Attachment: Attaching leaving groups to synthons is a deterministic process and is not learnt during training. The task involves identifying the type of bonds to add between the attaching atoms in the leaving group (marked during vocabulary construction) and the atom(s) participating in the edit. These bonds can be inferred by applying the valency constraint, which determines the maximum number of neighbors for each atom. The attachment process does not modify any stereochemistry. Given synthons and leaving groups, the attachment process has 100% accuracy. The detailed procedure is described in the appendix.

3.3 Inference

Inference is performed using beam search with a log-likelihood scoring function. For a beam width $n$, we select the $n$ edits with the highest scores and apply them to the product to obtain $n$ synthons, where each synthon can consist of multiple connected components. The synthons form the nodes for beam search. Each node maintains a cumulative score by aggregating the log-likelihoods of the edit and the predicted leaving groups. Leaving group inference starts with a connected component for each synthon, and selects the $n$ leaving groups with the highest log-likelihoods. From the $n^2$ possibilities, we select the $n$ nodes with the highest cumulative scores. This process is repeated until all nodes have a leaving group predicted for each synthon component.
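The following is a schematic sketch of this beam search under stated assumptions: `edit_log_probs`, `lg_log_probs` and `apply_edit` are hypothetical stand-ins for the trained edit model, the trained leaving group model, and the deterministic application of an edit to the product, respectively.

```python
# Beam search over edits and per-component leaving groups, scoring each
# node by the sum of edit and leaving-group log-likelihoods (Section 3.3).
import heapq

def beam_search(product, n):
    # Seed the beam with the n highest-scoring edits.
    beam = []
    for edit, lp in heapq.nlargest(n, edit_log_probs(product), key=lambda x: x[1]):
        components = apply_edit(product, edit)        # synthon components
        beam.append({"components": components, "lgs": [], "score": lp})
    # Expand one synthon component per round, keeping the best n nodes.
    num_rounds = max(len(node["components"]) for node in beam)
    for c in range(num_rounds):
        candidates = []
        for node in beam:
            if c >= len(node["components"]):          # node already complete
                candidates.append(node)
                continue
            prev_lg = node["lgs"][-1] if node["lgs"] else "START"
            for lg, lp in heapq.nlargest(
                    n, lg_log_probs(product, node["components"], c, prev_lg),
                    key=lambda x: x[1]):
                candidates.append({"components": node["components"],
                                   "lgs": node["lgs"] + [lg],
                                   "score": node["score"] + lp})
        beam = heapq.nlargest(n, candidates, key=lambda x: x["score"])
    return beam  # each node: synthon components, leaving groups, total score
```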
4 Evaluation

Evaluating retrosynthesis models is challenging as multiple sets of reactants can be generated from the same product. To deal with this, previous works [Coley et al., 2017b, Dai et al., 2019] evaluate the ability of the model to recover the retrosynthetic strategies recorded in the dataset.

Data: We use the benchmark dataset USPTO-50k [Schneider et al., 2016] for all our experiments. The dataset contains 50,000 atom-mapped reactions across 10 reaction classes. We use the same dataset version and splits as provided by [Dai et al., 2019]. The USPTO-50k dataset contains a shortcut, in that the product atom with atom-mapping 1 is part of the edit in ~75% of the cases. If the product SMILES is not canonicalized, predictions utilizing operations that depend on the position of the atom or bond will be able to use the shortcut, and overestimate performance. We canonicalize the product SMILES, and reassign atom-mappings to the reactant atoms based on the canonical ordering, which removes the shortcut. Details on the remapping procedure can be found in the appendix.

Evaluation: We use the top-n accuracy (n = 1, 3, 5, 10) as our evaluation metric, defined as the fraction of examples where the recorded reactants are suggested by the model with rank ≤ n. Following prior work [Coley et al., 2017b, Zheng et al., 2019, Dai et al., 2019], we compute the accuracy by comparing the canonical SMILES of the predicted reactants to the ground truth. Atom-mapping is excluded from this comparison, but stereochemistry, which describes the relative orientation of atoms in the molecule, is retained. The evaluation is carried out for two settings, with the reaction class being known or unknown. A sketch of this accuracy computation follows.
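As a concrete illustration, a minimal top-n accuracy computation might look as follows. The RDKit canonicalization is standard; the `examples` data structure (ranked candidate reactant SMILES paired with the recorded ground truth) is an assumption of this sketch.

```python
# Predictions match the ground truth if their canonical SMILES agree after
# stripping atom-map numbers; stereochemistry is retained by default.
from rdkit import Chem

def canonicalize(smiles):
    """Canonical SMILES with atom-map numbers removed; None if unparsable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    for atom in mol.GetAtoms():
        atom.SetAtomMapNum(0)
    return Chem.MolToSmiles(mol)  # canonical by default, keeps stereochemistry

def top_n_accuracy(examples, n):
    """examples: list of (ranked_prediction_list, ground_truth_smiles) pairs."""
    hits = 0
    for predictions, truth in examples:
        truth_canon = canonicalize(truth)
        ranked = [canonicalize(p) for p in predictions[:n]]
        hits += truth_canon in ranked
    return hits / len(examples)
```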
Baselines: For evaluating overall performance, we compare GRAPHRETRO to nine baselines: four template-based, three template-free, and two semi-template-based methods. These include:

Template-Based: RETROSIM [Coley et al., 2017b] ranks templates for a given target molecule by computing molecular similarities to precedent reactions. NEURALSYM [Segler and Waller, 2017] trains a model to rank templates given a target molecule. GLN [Dai et al., 2019] models the joint distribution of templates and reactants in a hierarchical fashion using logic variables. DUALTB [Sun et al., 2021] uses an energy-based model formulation for retrosynthesis, with additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis prediction); inference is carried out using reactant candidates obtained by applying an extracted template set to the products.

Template-Free: SCROP [Zheng et al., 2019], LV-TRANSFORMER [Chen et al., 2019] and DUALTF [Sun et al., 2021] use the Transformer architecture [Vaswani et al., 2017] to output reactant SMILES given a product SMILES. To improve the validity of its suggestions, SCROP includes a second Transformer that functions as a syntax corrector. LV-TRANSFORMER uses a latent variable mixture model to improve the diversity of suggestions. DUALTF utilizes additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis prediction).

Semi-Template-Based: G2GS [Shi et al., 2020] and RETROXPERT [Yan et al., 2020] first identify synthons, and then expand the synthons into reactants by either sequential generation of atoms and bonds (G2GS) or using the Transformer architecture (RETROXPERT). The training dataset for the Transformer in [Yan et al., 2020] is augmented with incorrectly predicted synthons, with the goal of learning a correction mechanism.

Results for NEURALSYM are taken from [Dai et al., 2019]. The authors of [Yan et al., 2020] report their performance being affected by the dataset leakage (https://github.com/uta-smile/RetroXpert); we therefore use the most recent results from their website on the canonicalized dataset. For the remaining baselines, we directly use the values reported in their papers. For the synthon completion module, we use the ind configuration given its better empirical performance.

4.1 Overall Performance

Reaction class unknown: As shown in Table 1, when the reaction class is unknown, GRAPHRETRO outperforms G2GS by 4.8% and RETROXPERT by 3.3% in top-1 accuracy. Performance improvements are also seen for larger n, except for n = 5. Barring DUALTB, the top-1 accuracy is also better than that of the other template-free and template-based methods. For larger n, one reason for lower top-n accuracies than most template-based methods is that templates already contain combinations of leaving group patterns, whereas our model has to discover these during training. A second hypothesis is that simply adding the log-likelihood scores from the edit prediction and synthon completion models may be suboptimal and bias the beam search in the direction of the more dominating term. We leave it to future work to investigate scoring functions that rank the attachment.

Reaction class known: When the reaction class is known, GRAPHRETRO outperforms G2GS and RETROXPERT by margins of 3% and 2%, respectively, in top-1 accuracy. GRAPHRETRO also outperforms all the template-free methods in top-n accuracy, and its results are better than those of most template-based methods. When the reaction class is known, RETROSIM and GLN restrict the template sets to those corresponding to the reaction class, thus improving performance. The increased edit prediction performance of GRAPHRETRO (Section 4.2) helps outweigh this factor, achieving comparable or better performance up to n = 5.

4.2 Individual Module Performance

To gain more insight into the workings of GRAPHRETRO, we evaluate the top-n accuracy (n = 1, 2, 3, 5) of the edit prediction and synthon completion modules, along with corresponding ablation studies, with results shown in Table 2.

Edit Prediction: For the edit prediction module, we compare the true edit(s) to the top-n edits predicted by the model. We also consider two ablations: one where we directly use the initial edit scores without updating them, and one where we predict edits over atom pairs instead of existing bonds and atoms. Both of our design choices improve performance over their ablated counterparts, as shown in Table 2. We hypothesize that the larger improvement over atom-pair edit prediction is due to the easier optimization procedure, with less imbalance between labels 1 and 0.

Synthon Completion: For evaluating the synthon completion module, we first apply the true edits to obtain synthons, and compare the true leaving groups to the top-n leaving groups predicted by the model. We test the performance of both the ind and shared configurations. Both configurations perform similarly, and are able to identify ~97% (close to the upper bound of 99.7%) of the true leaving groups in their top-5 choices when the reaction class is known.
The top-1, 3 and 5 accuracies of synthon completion for unknown reaction classes are 61.1%, 81.5% and 86.7% respectively for G2GS, while ours are 75.6%, 92.5% and 96.1%, indicating a 10-14% performance improvement from using a classification formulation over the generative one adopted by G2GS.

4.3 Example Predictions

In Figure 2, we visualize the model predictions and the ground truth for three cases. Figure 2a shows an example where the model identifies both the edits and leaving groups correctly. In Figure 2b, the correct edit is identified but the predicted leaving groups are incorrect. We hypothesize this is due to the fact that in the training set, leaving groups attaching to the carbonyl carbon (C=O) are small (e.g. -OH, -NH2, halides). The true leaving group in this example, however, is large. The model is unable to reason about this and predicts the small leaving group -I. In Figure 2c, the model identifies the edit, and consequently the leaving group, incorrectly. This highlights a limitation of our model: if the edit is predicted incorrectly, the model cannot suggest the true precursors.

4.4 Limitations

The simplified and interpretable construction of GRAPHRETRO comes with certain limitations. First, the overall performance of the model is limited by the performance of the edit prediction step. If the predicted edit is incorrect, the true reactants cannot be salvaged. This limitation is partly remedied by our model design, which allows for user intervention to correct the edit. Second, our method relies on atom-mapping for extracting edits and leaving groups. Extracting edits directly based on substructure matching currently suffers from false positives, and heuristics to correct for these yield correct edits in only ~90% of the cases. Third, our formulation assumes that we have as many synthons as reactants, which is violated in some reactions. We leave it to future work to extend the model to realize a single reactant from multiple synthons, and to introduce more chemically meaningful edit correction mechanisms.

5 Conclusion

Previous methods for single-step retrosynthesis either restrict prediction to a template set, are insensitive to molecular graph structure, or generate molecules from scratch. We address these shortcomings by introducing a graph-based, semi-template-based model inspired by a chemist's workflow, enhancing the interpretability of retrosynthesis models. Given a target molecule, we first identify synthetic building blocks (synthons), which are then realized into valid reactants, thus avoiding molecule generation from scratch. Our model outperforms previous semi-template-based methods by significant margins on the benchmark dataset. Future work aims to extend the model to realize a single reactant from multiple synthons, and to introduce more chemically meaningful components to improve the synergy between such tools for retrosynthesis prediction and a practitioner's expertise.

Acknowledgements

This research was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium at MIT. V.R.S. was also supported by the Zeno Karl Schindler Foundation. C.B. was supported by the Swiss National Science Foundation under the National Center of Competence in Research (NCCR) Catalysis under grant agreement 51NF40 180544. We thank the Leonhard scientific computing cluster at ETH Zürich for providing computational resources.
1. What is the main contribution of the paper in retrosynthesis?
2. What are the limitations of the proposed approach regarding its generative capabilities and comparison with other works?
3. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper
Review
Summary Of The Paper
The paper proposes a retrosynthesis model which first predicts the set of graph edits transforming the target into incomplete molecules called synthons; the model then learns to expand synthons into complete molecules by attaching relevant leaving groups.

Review
The paper is overall well written and easy to follow. Points of criticism about this paper:

(1) Synthon completion boils down retrosynthesis (a generative problem) to a much simpler classification problem (classifying leaving groups). First, it is possible that leaving groups in the unseen test set are not included in the predefined leaving groups, i.e., this is not a generative model. Second, (Line 163) 50M samples are used to extract leaving group candidates; the test examples are included in these 50M samples, which is not allowed. Third, it is not fair to compare a classification approach against generative models (the other methods).

(2) The idea of "identifying the reaction center and then completing the synthons" is largely the same as in previous work, such as G2Gs and RetroXpert.

(3) The performance is not superior compared with the Dual model, etc.
NIPS
Title Learning Graph Models for Retrosynthesis Prediction Abstract Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule. A key consideration in building neural models for this task is aligning model design with strategies adopted by chemists. Building on this viewpoint, this paper introduces a graph-based approach that capitalizes on the idea that the graph topology of precursor molecules is largely unaltered during a chemical reaction. The model first predicts the set of graph edits transforming the target into incomplete molecules called synthons. Next, the model learns to expand synthons into complete molecules by attaching relevant leaving groups. This decomposition simplifies the architecture, making its predictions more interpretable, and also amenable to manual correction. Our model achieves a top-1 accuracy of 53.7%, outperforming previous template-free and semi-template-based methods. N/A Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule. A key consideration in building neural models for this task is aligning model design with strategies adopted by chemists. Building on this viewpoint, this paper introduces a graph-based approach that capitalizes on the idea that the graph topology of precursor molecules is largely unaltered during a chemical reaction. The model first predicts the set of graph edits transforming the target into incomplete molecules called synthons. Next, the model learns to expand synthons into complete molecules by attaching relevant leaving groups. This decomposition simplifies the architecture, making its predictions more interpretable, and also amenable to manual correction. Our model achieves a top-1 accuracy of 53.7%, outperforming previous template-free and semi-template-based methods. 1 Introduction Retrosynthesis prediction, first formalized by E. J. Corey [Corey, 1991] is a fundamental problem in organic synthesis that attempts to identify a series of chemical transformations for synthesizing a target molecule. In the single-step formulation, the task is to identify a set of reactant molecules given a target. Beyond simple reactions, many practical tasks involving complex organic molecules are difficult even for expert chemists. As a result, substantial experimental exploration is needed to cover for deficiencies of analytical approaches. This has motivated interest in computer-assisted retrosynthesis [Corey and Wipke, 1969], with a recent surge in machine learning methods [Chen et al., 2019, Coley et al., 2017b, Dai et al., 2019, Zheng et al., 2019, Genheden et al., 2020]. Computationally, the main challenge is how to explore the combinatorial space of reactions that can yield the target molecule. Largely, previous methods for retrosynthesis prediction can be divided into template-based [Coley et al., 2017b, Dai et al., 2019, Segler and Waller, 2017] and template-free [Chen et al., 2019, Zheng et al., 2019] approaches. Template-based methods match a target molecule against a large set of templates, which are molecular subgraph patterns that highlight changes during a chemical reaction. Despite their interpretability, these methods fail to generalize to new reactions. Template-free methods bypass templates by learning a direct mapping from the SMILES [Weininger, 1988] representations of the product to reactants. 
Despite their greater generalization potential, these methods generate reactant SMILES character by character, increasing generation complexity. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Another important consideration in building retrosynthesis models is aligning model design with strategies adopted by expert chemists. These strategies are influenced by fundamental properties of chemical reactions, independent of complexity level: (i.) the product atoms are always a subset of the reactant atoms1, and (ii.) the molecular graph topology is largely unaltered from products to reactants. For example, in the standard retrosynthesis dataset, only 6.3% of the atoms in the product undergo any change in connectivity. This consideration has received more attention in recent semi-template-based methods [Shi et al., 2020, Yan et al., 2020], that generate reactants from a product in two stages: (i.) first identify intermediate molecules called synthons, (ii.) and then complete synthons into reactants by sequential generation of atoms or SMILES characters.. Our model GRAPHRETRO also uses a similar workflow. However, we avoid sequential generation for completing synthons by instead selecting subgraphs called leaving groups from a precomputed vocabulary. This vocabulary is constructed during preprocessing by extracting subgraphs that differ between a synthon and the corresponding reactant. The vocabulary has a small size (170 for USPTO-50k) indicating remarkable redundancy, while covering 99.7% of the test set. Operating at the level of these subgraphs greatly reduces the complexity of reactant generation, with improved empirical performance. This formulation also simplifies our architecture, and makes our predictions more transparent, interpretable and amenable to manual correction. The benchmark dataset for evaluating retrosynthesis models is USPTO-50k [Schneider et al., 2016], which consists of 50000 reactions across 10 reaction classes. The dataset contains an unexpected shortcut towards predicting the edit, in that the product atom with atom-mapping 1 is part of the edit in 75% of the cases, allowing predictions that depend on the position of the atom to overestimate performance. We canonicalize the product SMILES and remap the existing dataset, thereby removing the shortcut. On this remapped dataset, GRAPHRETRO achieves a top-1 accuracy of 53.7% when the reaction class is not known, outperforming both template-free and semi-template-based methods. 2 Related Work Retrosynthesis Prediction Existing machine learning methods for retrosynthesis prediction can be divided into template-based, template-free and recent semi-template-based approaches. Template-Based: Templates are either hand-crafted by experts [Hartenfeller et al., 2011, Szymkuć et al., 2016], or extracted algorithmically from large databases Coley et al. [2017a], Law et al. [2009]. Exhaustively applying large template sets is expensive due to the involved subgraph 1ignoring impurities matching procedure. Template-based methods therefore utilize different ways of prioritizing templates, by either learning a conditional distribution over the template set [Segler and Waller, 2017], ranking templates based on molecular similarities to precedent reactions [Coley et al., 2017b] or directly modelling the joint distribution of templates and reactants using logic variables [Dai et al., 2019]. Despite their interpretability, these methods fail to generalize outside their rule set. 
Template-Free: Template-free methods [Liu et al., 2017, Zheng et al., 2019, Chen et al., 2019] learn a direct transformation from products to reactants using architectures from neural machine translation and a string based representation of molecules called SMILES [Weininger, 1988]. Linearizing molecules as strings does not utilize the inherently rich chemical structure. In addition, the reactant SMILES are generated from scratch, character by character. Attempts have been made to improve validity by adding a syntax correcter [Zheng et al., 2019] and a mixture model to improve diversity of suggestions [Chen et al., 2019], but the performance remains worse than [Dai et al., 2019] on the standard retrosynthesis dataset. Sun et al. [2021] formulate retrosynthesis using energy-based models, with additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis) prediction. Semi-Template-Based: Our work is closely related to recently proposed semi-template-based methods [Shi et al., 2020, Yan et al., 2020], which first identify synthons and then expand synthons into reactants through sequential generation using either a graph generative model [Shi et al., 2020] or a Transformer [Yan et al., 2020]. To reduce the complexity of reactant generation, we instead complete synthons using subgraphs called leaving groups selected from a precomputed vocabulary. This allows us to view synthon completion as a classification problem instead of a generative one. We also utilize the dependency graph between possible edits, and update edit predictions using a message passing network (MPN) [Gilmer et al., 2017] on this graph. Both innovations together yield a 4.8% and 3.3% performance improvement respectively over previous semi-template-based methods. Reaction Center Identification The reaction center covers a small number of participating atoms involved in the reaction. Our work is also related to models that predict reaction outcomes by learning to rank atom pairs based on their likelihood to be in the reaction center [Coley et al., 2019, Jin et al., 2017]. The task of identifying the reaction center is related to the step of deriving the synthons in our formulation. Our work departs from [Coley et al., 2019, Jin et al., 2017] as we utilize the property that new bond formations occur rarely (~0.1%) from products to synthons, allowing us to predict a score only for existing bonds and atoms and reduce prediction complexity from O(N2) to O(N). We also utilize the dependency graph between possible edits, and update edit predictions using a MPN on this graph. Utilizing Substructures Substructures have been utilized in various tasks from sentence generation by fusing phrases to molecule generation and optimization [Jin et al., 2018, 2020]. Our work is closely related to [Jin et al., 2020] which uses precomputed substructures as building blocks for property-conditioned molecule generation. However, instead of precomputing, synthons —analogous building blocks for reactants— are indirectly learnt during training. 3 Model Design Our approach leverages the property that graph topology is largely unaltered from products to reactants. To achieve this, we first derive suitable building blocks from the product called synthons, and then complete them into valid reactants by adding specific functionalities called leaving groups. These derivations, called edits, are characterized by modifications to bonds or hydrogen counts on atoms. 
We first train a neural network to predict a score for possible edits (Section 3.1). The edit with the highest score is then applied to the product to obtain synthons. Since the number of unique leaving groups are small, we model leaving group selection as a classification problem over a precomputed vocabulary (Section 3.2). To produce candidate reactants, we attach the predicted leaving group to the corresponding synthon through chemically constrained rules. The overall process is outlined in Figure 1. Before describing the two modules, we introduce relevant preliminaries that set the background for the remainder of the paper. Retrosynthesis Prediction A retrosynthesis pair R is described by a pair of molecular graphs (Gp,Gr), where Gp are the products and Gr the reactants. A molecular graph is described as G = (V, E) with atoms V as nodes and bonds E as edges. Prior work has focused on the single product case, while reactants can have multiple connected components, i.e. Gr = {Grc}Cc=1. Retrosynthesis pairs are atom-mapped so that each product atom has a unique corresponding reactant atom. The retrosynthesis task then, is to infer {Grc}Cc=1 given Gp. Edits Edits consist of (i.) atom pairs {(ai, aj)} where the bond type changes from products to reactants, and (ii.) atoms {ai} where the number of hydrogens attached to the atom change from products to reactants. We denote the set of edits by E. Since retrosynthesis pairs in the training set are atom-mapped, edits can be automatically identified by comparing the atoms and atom pairs in the product to their corresponding reactant counterparts. Synthons and Leaving Groups Applying editsE to the product Gp results in incomplete molecules called synthons. Synthons are analogous to rationales or building blocks, which are expanded into valid reactants by adding specific functionalities called leaving groups that are responsible for its reactivity. We denote synthons by Gs and leaving groups by Gl. We further assume that synthons and leaving groups have the same number of connected components as the reactants, i.e Gs = {Gsc}Cc=1 and Gl = {Glc}Cc=1. This assumption holds for 99.97% reactions in the training set. Formally, our model generates reactants by first predicting the set of edits E that transform Gp into Gs, followed by predicting a leaving group Glc to attach to each synthon Gsc . The model is defined as P (Gr|Gp) = ∑ E,Gl P (E|Gp)P (Gl|Gp,Gs), (1) where Gs,Gr are deterministic given E,Gl, and Gp. 3.1 Edit Prediction For a given retrosynthesis pair R = (Gp,Gr), we predict an edit score only for existing bonds and atoms, instead of every atom pair as in [Coley et al., 2019, Jin et al., 2017]. This choice is motivated by the low frequency (~0.1%) of new bond formations in the training set examples. Coupled with the sparsity of molecular graphs, this reduces the prediction complexity from O(N2) to O(N) for a product with N atoms. Our edit prediction model has variants tailored to single and multiple edit prediction. Since 95% of the training set consists of single edit examples, the remainder of this section describes the setup for single edit prediction. A detailed description of our multiple edit prediction model can be found in Appendix ??. Each bond (u, v) in Gp is associated with a label yuvk ∈ {0, 1} indicating whether its bond type k has changed from the products to reactants. Each atom u is associated with a label yu ∈ {0, 1} indicating a change in hydrogen count. 
We predict edit scores using representations that are learnt using a graph encoder. Graph Encoder To obtain atom representations, we use a variant of the message passing network (MPN) described in [Gilmer et al., 2017]. Each atom u has a feature vector xu indicating its atom type, degree and other properties. Each bond (u, v) has a feature vector xuv indicating its aromaticity, bond type and ring membership. For simplicity, we denote the encoding process by MPN(·) and describe architectural details in Appendix ??. The MPN computes atom representations {cu|u ∈ G} via {cu} = MPN(G, {xu}, {xuv}v∈N (u)), (2) where N (u) denotes the neighbors of atom u. The graph representation cG is an aggregation of atom representations, i.e. cG = ∑ u∈V cu. When G has connected components {Gi}, we get a set of graph representations {cGi}. For a bond (u, v), we define its representation cuv = (ABS(cu, cv)||cu+cv), where ABS denotes absolute difference and || refers to concatenation. This ensures our representations are permutation invariant. These representations are then used to predict atom and bond edit scores using corresponding neural networks, su = ua T τ(Wacu + b) (3) suvk = uk T τ(Wkcuv + bk), (4) where τ(·) is the ReLU activation function. Updating Bond Edit Scores Unlike a typical classification problem where the labels are independent, edits can have possible dependencies between each other. For example, bonds part of a stable system such as an aromatic ring have a greater tendency to remain unchanged (label 0). We attempt to leverage such dependencies to update initial edit scores. To this end, we build a graph with bonds (u, v) as nodes, and introduce an edge between bonds sharing an atom. We use another MPN(·) on this graph to learn aggregated neighborhood messages muv, and update the edit scores suvk in a manner similar to how LSTMs update representations, fuvk = σ(W f kxxuv +W f kmmuv) (5) iuvk = σ(W i kxxuv +W i kmmuv) (6) m̃uvk = umτ(W m kxxuv +W m kmmuv) (7) s̃uvk = fuvk · suvk + iuvk · m̃uvk. (8) Training We train by minimizing the cross-entropy loss over possible bond and atom edits Le = − ∑ (Gp,E) ∑ ((u,v),k)∈E yuvklog(s̃uvk) + ∑ u∈E yulog(su) . (9) The cross-entropy loss enforces the model to learn a distribution over possible edits instead of reasoning about each edit independently, as with the binary cross entropy loss used in [Jin et al., 2017, Coley et al., 2019]. 3.2 Synthon Completion Synthons are completed into valid reactants by adding specific functionalities called leaving groups. This involves two complementary tasks: (i.) selecting the appropriate leaving group, and (ii.) attaching the leaving group to the synthon. As ground truth leaving groups are not directly provided, we extract the leaving groups and construct a vocabulary X of unique leaving groups during preprocessing. The vocabulary has a limited size (|X | = 170 for a standard dataset with 50, 000 examples, and 72000 synthons) indicating the redundancy of leaving groups used in accomplishing retrosynthetic transformations. This redundancy also allows us to formulate leaving group selection as a classification problem over X , while retaining the ability to generate diverse reactants using different combinations of leaving groups. Vocabulary Construction Before constructing the vocabulary, we align connected components of synthon and reactant graphs by comparing atom mapping overlaps. 
Using aligned pairs Gsc = (Vsc , Esc) and Grc = (Vrc , Erc) as input, the leaving group vocabulary X is constructed by extracting subgraphs Glc = (Vlc , Elc) such that Vlc = Vrc \ Vsc . Atoms {ai} in the leaving groups that attach to synthons are marked with a special symbol. We also add three tokens to X namely START, which indicates the start of synthon completion, END, which indicates that there is no leaving group to add and PAD, which is used to handle variable numbers of synthon components in a minibatch. Leaving Group Selection For synthon component c ≤ C, where C is the number of connected components in the synthon graph, we use three inputs for leaving group selection – the product representation cGp , the synthon component representation cGsc , and the leaving group representation for the previous synthon component, elc−1 . The product and synthon representations are learnt using the MPN(·). For each xi ∈ X , representations can be learnt by either training independent embedding vectors (ind) or by treating each xi as a subgraph and using the MPN(·) (shared). In the shared setting, we use the same MPN(·) as the product and synthons. The leaving group probabilities are then computed by combining cGp , cGsc and elc−1 via a single layer neural network and softmax function q̂lc = softmax ( Uτ ( W1cGp +W2cGsc +W3el(c−1) )) , (10) where q̂lc is distribution learnt over X . Using the representation of the previous leaving group elc−1 allows the model to understand combinations of leaving groups that generate the desired product from the reactants. We also include the product representation cGp as the synthon graphs are derived from the product graph. Training For step c, given the one hot encoding of the true leaving group qlc , we minimize the cross-entropy loss Ls = C∑ c=1 L(q̂lc , qlc). (11) Training utilizes teacher-forcing [Williams and Zipser, 1989] so that the model makes predictions given correct histories. During inference, at every step, we use the representation of leaving group from the previous step with the highest predicted probability. Leaving Group Attachment Attaching leaving groups to synthons is a deterministic process and not learnt during training. The task involves identification of the type of bonds to add between attaching atoms in the leaving group (marked during vocabulary construction), and the atom(s) participating in the edit. These bonds can be inferred by applying the valency constraint, which determines the maximum number of neighbors for each atom. The attachment process does not modify any stereochemistry. Given synthons and leaving groups, the attachment process has a 100% accuracy. The detailed procedure is described in Appendix ??. 3.3 Inference Inference is performed using beam search with a log-likelihood scoring function. For a beam width n, we select n edits with highest scores and apply them to the product to obtain n synthons, where each synthon can consist of multiple connected components. The synthons form the nodes for beam search. Each node maintains a cumulative score by aggregating the log-likelihoods of the edit and predicted leaving groups. Leaving group inference starts with a connected component for each synthon, and selects n leaving groups with highest log-likelihoods. From the n2 possibilities, we select n nodes with the highest cumulative scores. This process is repeated until all nodes have a leaving group predicted for each synthon component. 
4 Evaluation Evaluating retrosynthesis models is challenging as multiple sets of reactants can be generated from the same product. To deal with this, previous works [Coley et al., 2017b, Dai et al., 2019] evaluate the ability of the model to recover retrosynthetic strategies recorded in the dataset. Data We use the benchmark dataset USPTO-50k [Schneider et al., 2016] for all our experiments. The dataset contains 50, 000 atom-mapped reactions across 10 reaction classes. We use the same dataset version and splits as provided by [Dai et al., 2019]. The USPTO-50k dataset contains a shortcut in that the product atom with atom-mapping 1 is part of the edit in ~75% of the cases. If the product SMILES is not canonicalized, predictions utilizing operations that depend on the position of the atom or bond will be able to use the shortcut, and overestimate performance. We canonicalize the product SMILES, and reassign atom-mappings to the reactant atoms based on the canonical ordering, which removes the shortcut. Details on the remapping procedure can be found in Appendix ??. Evaluation We use the top-n accuracy (n = 1, 3, 5, 10) as our evaluation metric, defined as the fraction of examples where the recorded reactants are suggested by the model with rank ≤ n. Following prior work [Coley et al., 2017b, Zheng et al., 2019, Dai et al., 2019], we compute the accuracy by comparing the canonical SMILES of predicted reactants to the ground truth. Atom-mapping is excluded from this comparison, but stereochemistry, which describes the relative orientation of atoms in the molecule, is retained. The evaluation is carried out for two settings, with the reaction class being known or unknown. Baselines For evaluating overall performance, we compare GRAPHRETRO to nine baselines — four template-based, three template-free, and two semi-template-based methods. These include: Template-Based: RETROSIM Coley et al. [2017b] ranks templates for a given target molecule by computing molecular similarities to precedent reactions. NEURALSYM [Segler and Waller, 2017] trains a model to rank templates given a target molecule. GLN [Dai et al., 2019] models the joint distribution of templates and reactants in a hierarchical fashion using logic variables. DUALTB [Sun et al., 2021] uses an energy-based model formulation for retrosynthesis, with additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis prediction). Inference is carried out using reactant candidates obtained by applying an extracted template set to the products. Template-Free: SCROP [Zheng et al., 2019], LV-TRANSFORMER [Chen et al., 2019] and DUALTF [Sun et al., 2021] use the Transformer architecture [Vaswani et al., 2017] to output reactant SMILES given a product SMILES. To improve the validity of their suggestions, SCROP include a second Transformer that functions as a syntax correcter. LV-TRANSFORMER uses a latent variable mixture model to improve diversity of suggestions. DUALTF utilizes additional parameterizations and loss terms to enforce the duality between forward (reaction prediction) and backward (retrosynthesis prediction). Semi-Template-Based: G2GS [Shi et al., 2020] and RETROXPERT [Yan et al., 2020] first identify synthons, and then expand the synthons into reactants by either sequential generation of atoms and bonds (G2Gs), or using the Transformer architecture (RETROXPERT). 
The training dataset for the Transformer in [Yan et al., 2020] is augmented with incorrectly predicted synthons with the goal of learning a correction mechanism. Results for NEURALSYM are taken from [Dai et al., 2019]. The authors in [Yan et al., 2020] report their performance being affected by the dataset leakage2. Thus, we use the most recent results from their website on the canonicalized dataset. For remaining baselines, we directly use the values reported in their paper. For the synthon completion module, we use the ind configuration given its better empirical performance. 2https://github.com/uta-smile/RetroXpert 4.1 Overall Performance Reaction class unknown As shown in Table 1, when the reaction class is unknown, GRAPHRETRO outperforms G2GS by 4.8% and and RETROXPERT by 3.3% in top-1 accuracy. Performance improvements are also seen for larger n, except for n = 5. Barring DUALTB, the top-1 accuracy is also better than other template-free and template-based methods. For larger n, one reason for lower top-n accuracies than most template-based methods is that templates already contain combinations of leaving group patterns. In contrast, our model learns to discover these during training. A second hypothesis to this end is that simply adding log-likelihood scores from edit prediction and synthon completion models may be suboptimal and bias the beam search in the direction of the more dominating term. We leave it to future work to investigate scoring functions that rank the attachment. Reaction class known When the reaction class is known, GRAPHRETRO outperforms G2GS and RETROXPERT by a margin of 3% and 2% respectively in top-1 accuracy. GRAPHRETRO also outperforms all the template-free methods in top-n accuracy. for GRAPHRETRO are also better than most template-based and template-free methods. When the reaction class is known, RETROSIM and GLN restrict template sets corresponding to the reaction class, thus improving performance. The increased edit prediction performance (Section 4.2) for GRAPHRETRO helps outweigh this factor, achieving comparable or better performance till n = 5. 4.2 Individual Module Performance To gain more insight into the working of GRAPHRETRO, we evaluate the top-n accuracy (n = 1, 2, 3, 5) of edit prediction and synthon completion modules, along with corresponding ablation studies, with results shown in Table 2. Edit Prediction For the edit prediction module, we compare the true edit(s) to top-n edits predicted by the model. We also consider two ablation studies, one where we directly use the initial edit scores without updating them, and the other where we predict edits using atom-pairs instead of existing bonds and atoms. Both design choices lead to improvements in performance, as shown in Table 2. We hypothesize that the larger improvement compared to edit prediction using atom-pairs is due to the easier optimization procedure, with lesser imbalance between labels 1 and 0. Synthon Completion For evaluating the synthon completion module, we first apply the true edits to obtain synthons, and compare the true leaving groups to top-n leaving groups predicted by the model. We test the performance of both the ind and shared configurations. Both configurations perform similarly, and are able to identify ~ 97% (close to its upper bound of 99.7%) of the true leaving groups in its top-5 choices, when the reaction class is known. 
4.3 Example Predictions

In Figure 2, we visualize the model predictions and the ground truth for three cases. Figure 2a shows an example where the model identifies both the edits and leaving groups correctly. In Figure 2b, the correct edit is identified but the predicted leaving groups are incorrect. We hypothesize this is due to the fact that in the training set, leaving groups attaching to the carbonyl carbon (C=O) are small (e.g. -OH, -NH2, halides). The true leaving group in this example, however, is large. The model is unable to reason about this and predicts the small leaving group -I. In Figure 2c, the model identifies the edit, and consequently the leaving group, incorrectly. This highlights a limitation of our model: if the edit is predicted incorrectly, the model cannot suggest the true precursors.

4.4 Limitations

The simplified and interpretable construction of GRAPHRETRO comes with certain limitations. First, the overall performance of the model is limited by the performance of the edit prediction step. If the predicted edit is incorrect, the true reactants cannot be salvaged. This limitation is partly remedied by our model design, which allows for user intervention to correct the edit. Second, our method relies on atom-mapping for extracting edits and leaving groups. Extracting edits directly based on substructure matching currently suffers from false positives, and heuristics to correct for these yield correct edits in only ~90% of the cases. Third, our formulation assumes that we have as many synthons as reactants, which is violated in some reactions. We leave it to future work to extend the model to realize a single reactant from multiple synthons, and to introduce more chemically meaningful edit correction mechanisms.

5 Conclusion

Previous methods for single-step retrosynthesis either restrict prediction to a template set, are insensitive to molecular graph structure, or generate molecules from scratch. We address these shortcomings by introducing a graph-based, semi-template-based model inspired by a chemist's workflow, enhancing the interpretability of retrosynthesis models. Given a target molecule, we first identify synthetic building blocks (synthons) which are then realized into valid reactants, thus avoiding molecule generation from scratch. Our model outperforms previous semi-template-based methods by significant margins on the benchmark dataset. Future work aims to extend the model to realize a single reactant from multiple synthons, and to introduce more chemically meaningful components to improve the synergy between such tools for retrosynthesis prediction and a practitioner's expertise.

Acknowledgements

This research was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis Consortium at MIT. V.R.S. was also supported by the Zeno Karl Schindler Foundation. C.B. was supported by the Swiss National Science Foundation under the National Center of Competence in Research (NCCR) Catalysis under grant agreement 51NF40 180544. We thank the Leonhard scientific computing cluster at ETH Zürich for providing computational resources.
1. What is the focus of the paper on retrosynthesis?
2. What are the strengths of the proposed approach, particularly in its advancement from previous state-of-the-art models?
3. What are the weaknesses of the paper regarding its comparisons with other works and limitations in its contributions?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions or concerns regarding the model's predictions and error sources?
Summary Of The Paper Review
Summary Of The Paper

This paper presents GraphRetro, a single-step retrosynthesis model that represents an advance over last year's NeurIPS state-of-the-art model on semi-template-based modeling of the USPTO-50k dataset. GraphRetro uses a graph neural network that predicts edits to transform a target into synthons and then expands the synthons into full molecules by attaching leaving groups (with the synthons and leaving groups learned from the training set). The authors compare the predictions on the USPTO-50k to existing baselines, and show examples of predictions, including wrong ones. Because of the particular construction of the model, the authors can decompose the sources of errors into wrong edit predictions or wrong synthon completions.

Review

The paper represents an improvement on the semi-template-based methods for retrosynthesis. The presentation is clear and the model presents significant additions compared to RetroXpert. Although I like the organizational power of classifying retrosynthesis models as template-based, template-free, and semi-template-based, I think that as long as no data leakage happens and the templates are only generated using the training sets of a particular benchmark, then all these methods could be compared against each other on a particular benchmark (and yes, I know that we sorely need additional public benchmarks in this area). In that light, I think that this paper shows that the energy-based views of retrosynthesis outperform GraphRetro by a respectable margin in most categories, both in the template-based, but importantly, also in most of the template-free methods. I understand the fundamental hesitation with including template-based methods in the comparison because they might not generalize to new reactions, but wouldn't a template-free method outperforming a semi-template-based model suggest that the former is much better suited for generalizing to unseen domains? Despite this important limitation of the work, I think that the contribution is still notable, and I hope that the authors will manage to amend this paper and publish it soon.
NIPS
Title Self-Supervised Learning Disentangled Group Representation as Feature

Abstract A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics). In this paper, we formulate the notion of "good" representation from a group-theoretic view using Higgins' definition of disentangled representation [40], and show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization, and is thus unable to modularize the remaining semantics. To break this limitation, we propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM), which successfully grounds the abstract semantics and the group acting on them into concrete contrastive learning. At each iteration, IP-IRM first partitions the training samples into two subsets that correspond to an entangled group element. Then, it minimizes a subset-invariant contrastive loss, where the invariance guarantees the disentanglement of the group element. We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks. Codes are available at https://github.com/Wangt-CN/IP-IRM.

1 Introduction

Deep learning is all about learning feature representations [5]. Compared to conventional end-to-end supervised learning, Self-Supervised Learning (SSL) first learns a generic feature representation (e.g., a network backbone) by training with unsupervised pretext tasks such as the prevailing contrastive objective [36, 16], and then the above stage-1 feature is expected to serve various stage-2 applications with proper fine-tuning. SSL for visual representation is so fascinating that it is the first time that we can obtain "good" visual features for free, just like the trending pre-training in the NLP community [26, 8]. However, most SSL works only care about how much stage-2 performance an SSL feature can improve, but overlook what feature SSL is learning, why it can be learned, what cannot be learned, what the gap between SSL and Supervised Learning (SL) is, and when SSL can surpass SL.

The crux of answering those questions is to formally understand what a feature representation is and what a good one is. We postulate the classic world model of visual generation and feature representation [1, 69] as in Figure 1. Let U be a set of (unseen) semantics, e.g., attributes such as "digit" and "color". There is a set of independent and causal mechanisms [66] ϕ : U → I, generating images from semantics, e.g., writing a digit "0" when thinking of "0" [74]. A visual representation is the inference process φ : I → X that maps image pixels to vector space features, e.g., a neural network. We define semantic representation as the functional composition f : U → I → X. In this paper, we are only interested in the parameterization of the inference process for feature extraction, but not the generation process, i.e., we assume ∀I ∈ I, ∃u ∈ U such that I = ϕ(u) is fixed as the observation of each image sample. Therefore, we consider semantic and visual representations the same as feature representation, or simply representation, and we slightly abuse φ(I) := f(ϕ⁻¹(I)), i.e., φ and f share the same trainable parameters. We call the vector x = φ(I) a feature, where x ∈ X.

Figure 2: (a) The heat map visualizes feature dimensions related to augmentations (aug. related) and unrelated to augmentations (aug. unrelated), whose respective classification accuracy is shown in the bar chart below. The dashed bar denotes the accuracy using full feature dimensions. The experiment was performed on STL10 [22] with representations learnt with SimCLR [16] and our IP-IRM. (b) Visualization of CNN activations [77] of 4 filters on layers 29 and 18 of VGG [75] trained on ImageNet100 [81]. The filters were chosen by first clustering the aug. unrelated filters with k-means (k = 4) and then selecting the filters corresponding to the cluster centers.

We propose to use Higgins' definition of disentangled representation [40] to define what is "good".

Definition 1. (Disentangled Representation) Let G be the group acting on U, i.e., g · u transforms u ∈ U, e.g., a "turn green" group element changing the semantic from "red" to "green". Suppose there is a direct product decomposition G = g1 × · · · × gm and U = U1 × · · · × Um, where gi acts on Ui respectively. (Note that gi can also denote a cyclic subgroup Gi such as rotation [0° : 1° : 360°], or a countable one treated as cyclic, such as translation [(0, 0) : (1, 1) : (width, height)] and color [0 : 1 : 255].) A feature representation is disentangled if there exists a group G acting on X such that:

1. Equivariant: ∀g ∈ G, ∀u ∈ U, f(g · u) = g · f(u), e.g., the feature of the changed semantic, "red" to "green" in U, is equivalent to directly changing the color vector in X from "red" to "green".

2. Decomposable: there is a decomposition X = X1 × · · · × Xm, such that each Xi is fixed by the action of all gj, j ≠ i, and affected only by gi, e.g., changing the "color" semantic in U does not affect the "digit" vector in X. A toy numerical illustration of these two properties is sketched below.
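As a concrete, hypothetical illustration of Definition 1, the following toy sketch builds a trivially disentangled representation for semantics u = (digit, color) and checks both properties numerically; the one-hot encoding and all names are our own illustration, not part of the paper.

```python
import numpy as np

N_DIGITS, N_COLORS = 10, 2  # color 0 = red, 1 = green

def f(u):
    """Disentangled representation: a digit block concatenated with a color block."""
    digit, color = u
    x = np.zeros(N_DIGITS + N_COLORS)
    x[digit] = 1.0                 # X_digit subspace
    x[N_DIGITS + color] = 1.0      # X_color subspace
    return x

def g_color_on_U(u):
    digit, color = u
    return (digit, (color + 1) % N_COLORS)    # "turn green" acting on semantics

def g_color_on_X(x):
    y = x.copy()
    y[N_DIGITS:] = np.roll(x[N_DIGITS:], 1)   # the same element acting only on X_color
    return y

u = (3, 0)  # a red "3"
# Equivariance: f(g . u) == g . f(u)
assert np.allclose(f(g_color_on_U(u)), g_color_on_X(f(u)))
# Decomposability: the digit subspace is fixed by the color action
assert np.allclose(g_color_on_X(f(u))[:N_DIGITS], f(u)[:N_DIGITS])
```

Real images, of course, do not come with such a factored encoder; the point of IP-IRM below is to learn one.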
Compared to the previous definition of feature representation, which is a static mapping, the disentangled representation in Definition 1 is dynamic, as it explicitly incorporates group representation [35], which is a homomorphism from a group to its group actions on a space, e.g., G → X × X; it is common to use the feature space X as a shorthand, which is where our title stands.

Definition 1 defines "good" features in the common views: 1) Robustness: a good feature should be invariant to the change of environmental semantics, such as external interventions [45, 87] or domain shifts [32]. By the above definition, a change is always retained in a subspace Xi, while the others are not affected. Hence, the subsequent classifier will focus on the invariant features and ignore the ever-changing Xi. 2) Zero-shot Generalization: even if a new combination of semantics is unseen in training, each semantic has been learned as a feature. So, the metrics of each Xi trained by seen samples remain valid for unseen samples [95].

Are the existing SSL methods learning disentangled representations? No. We show in Section 4 that they can only disentangle representations according to the hand-crafted augmentations, e.g., color jitter and rotation. For example, in Figure 2 (a), even if we only use the augmentation-related feature,
the classification accuracy of a standard SSL method (SimCLR [16]) does not drop much compared to using the full feature. Figure 2 (b) visualizes that the CNN features in each layer are indeed entangled (e.g., tyre, motor, and background in the motorcycle image). In contrast, our approach IP-IRM, to be introduced below, disentangles more useful features beyond augmentations.

In this paper, we propose Iterative Partition-based Invariant Risk Minimization (IP-IRM, pronounced [aɪˈpɜːm]) that guarantees to learn disentangled representations in an SSL fashion. We present the algorithm in Section 3, followed by the theoretical justifications in Section 4. In a nutshell, at each iteration, IP-IRM first partitions the training data into two disjoint subsets, each of which is an orbit of the already disentangled group, and the cross-orbit group corresponds to an entangled group element gi. Then, we adopt Invariant Risk Minimization (IRM) [2] to implement a partition-based SSL, which disentangles the representation Xi w.r.t. gi. Iterating the above two steps eventually converges to a fully disentangled representation w.r.t. g1 × · · · × gm. In Section 5, we show promising experimental results on various feature disentanglement and SSL benchmarks.

2 Related Work

Self-Supervised Learning. SSL aims to learn representations from unlabeled data with hand-crafted pretext tasks [28, 63, 33]. Recently, contrastive learning [65, 61, 38, 80, 16] prevails in most state-of-the-art methods. The key is to map positive samples closer, while pushing apart negative ones in the feature space. Specifically, the positive samples are the augmented views [82, 3, 94, 42] of each instance and the negative ones are other instances. Along this direction, follow-up methods are mainly four-fold: 1) Memory bank [90, 61, 36, 18]: storing the prototypes of all the instances computed previously in a memory bank to benefit from a large number of negative samples. 2) Using a siamese network [7] to avoid representation collapse [34, 19, 83]. 3) Assigning clusters to samples to integrate inter-instance similarity into contrastive learning [11, 12, 13, 88, 56]. 4) Seeking hard negative samples with adversarial training or better sampling strategies [73, 20, 44, 48]. In contrast, our proposed IP-IRM breaks out of this framework and introduces disentangled representation into SSL with group theory, showing the limitations of existing SSL and how to break through them.

Disentangled Representation. This notion dates back to [4], and has since become a high-level goal of separating the factors of variation in the data [84, 79, 86, 58]. Several works aim to provide a more precise description [27, 29, 72] by adopting an information-theoretic view [17, 27] and measuring the properties of a disentangled representation explicitly [29, 72]. We adopt the recent group-theoretic definition from Higgins et al. [40], which not only unifies the existing definitions, but also resolves previous controversial points [78, 59]. Although supervised learning of disentangled representations is a well-studied field [100, 43, 10, 70, 49], unsupervised disentanglement based on GANs [17, 64, 57, 71] or VAEs [39, 15, 99, 50] is still believed to be theoretically challenging [59]. Thanks to Higgins' definition, we prove that the proposed IP-IRM converges with full-semantic disentanglement using group representation theory.
Notably, IP-IRM learns a disentangled representation with an inference process, without using generative models as in all the existing unsupervised methods, making IP-IRM applicable even on large-scale datasets.

Group Representation Learning. A group representation has two elements [47, 35]: 1) a homomorphism (e.g., a mapping function) from the group to its group action acting on a vector space, and 2) the vector space. Usually, when there is no ambiguity, we can use either element as the definition. Most existing works focus on learning the first element. They first define the group of interest, such as spherical rotations [24] or image scaling [89, 76], and then learn the parameters of the group actions [23, 46, 68]. In contrast, we focus on the second element; more specifically, we are interested in learning a map between two vector spaces: the image pixel space and the feature vector space. Our representation learning is flexible because it delays the group action learning to downstream tasks on demand. For example, in a classification task, a classifier can be seen as a group action that is invariant to class-agnostic groups but equivariant to class-specific groups (see Section 4).

3 IP-IRM Algorithm

Notations. Our goal is to learn the feature extractor φ in a self-supervised fashion. We define a partition matrix P ∈ {0, 1}^{N×2} that partitions N training images into 2 disjoint subsets: Pi,k = 1 if the i-th image belongs to the k-th subset and 0 otherwise. Suppose we have a pretext task loss function L(φ, θ = 1, k, P) defined on the samples in the k-th subset, where θ = 1 is a "dummy" parameter used to evaluate the invariance of the SSL loss across the subsets (discussed in Step 1 below). For example, L can be defined as:

L(φ, θ = 1, k, P) = ∑_{x ∈ X_k} −log [ exp(xᵀx* · θ) / ∑_{x′ ∈ X_k ∪ X* \ x} exp(xᵀx′ · θ) ],   (1)

where X_k = φ({Ii | Pi,k = 1}), and x* ∈ X* is the augmented-view feature of x ∈ X_k.

Input. N training images. Randomly initialized φ. A partition matrix P initialized such that the first column of P is 1, i.e., all samples belong to the first subset. Set P = {P}.

Output. Disentangled feature extractor φ.

Step 1 [Update φ]. We update φ by:

min_φ ∑_{P ∈ P} ∑_{k=1}^{2} [ L(φ, θ = 1, k, P) + λ1 ‖∇_{θ=1} L(φ, θ = 1, k, P)‖² ],   (2)

where λ1 is a hyper-parameter. The second term measures how far the contrast in one subset is from the constant baseline θ = 1. Minimizing both terms encourages φ in different subsets to stay close to the same baseline, i.e., to be invariant across the subsets. See IRM [2] for more details. In particular, the first iteration corresponds to standard SSL, with X_1 in Eq. (1) containing all training images.

Step 2 [Update P]. We fix φ and find a new partition P* by

P* = argmax_P ∑_{k=1}^{2} [ L(φ, θ = 1, k, P) + λ2 ‖∇_{θ=1} L(φ, θ = 1, k, P)‖² ],   (3)

where λ2 is a hyper-parameter. In practice, we use a continuous partition matrix in R^{N×2} during optimization and then threshold it to {0, 1}^{N×2}. We update P ← P ∪ {P*} and iterate the above two steps until convergence.
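To make Steps 1 and 2 concrete, here is a hedged PyTorch sketch of the subset-wise contrastive loss of Eq. (1) and the IRM-penalized objective of Eq. (2); tensor shapes, the masking scheme, and all names are our own illustration rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def subset_contrastive_loss(x, x_aug, theta):
    """Eq. (1) for one subset: x and x_aug are (n, d) L2-normalized features
    of the subset's samples and of their augmented views."""
    n = x.size(0)
    logits = (x @ torch.cat([x, x_aug]).t()) * theta   # (n, 2n) similarities
    logits = logits - 1e9 * torch.eye(n, 2 * n)        # mask each sample's self-similarity
    targets = torch.arange(n) + n                      # positive = own augmented view
    return F.cross_entropy(logits, targets)

def ip_irm_objective(x, x_aug, partitions, lam=0.5):
    """Eq. (2): the subset losses plus the squared gradient of each loss
    w.r.t. the dummy scale theta = 1, summed over all retained partitions."""
    total = x.new_zeros(())
    for masks in partitions:                 # each partition: two boolean masks
        for mask in masks:
            theta = torch.ones(1, requires_grad=True)
            loss = subset_contrastive_loss(x[mask], x_aug[mask], theta)
            (grad,) = torch.autograd.grad(loss, theta, create_graph=True)
            total = total + loss + lam * grad.pow(2).sum()
    return total
```

Step 2 maximizes the same quantity over a relaxed, continuous partition (e.g., a learnable (N, 2) soft-assignment matrix optimized by gradient ascent with φ frozen) before thresholding it back to a hard partition.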
4 Justification

Recall that IP-IRM uses training sample partitions to learn representations disentangled w.r.t. g1 × · · · × gm. As we have a G-equivariant feature map between the sample space I and the feature space X (the equivariance is later guaranteed by Lemma 1), we slightly abuse the notation by using X to denote both spaces. Also, we assume that X is a homogeneous space of G, i.e., any sample x′ ∈ X can be reached from another sample x by a group action g · x. Intuitively, G is all you need to describe the diversity of the training set. It is worth noting that g is any group element in G while gi is a Cartesian "building block" of G, e.g., g can be decomposed as (g1, g2, ..., gm).

We show that partition and group are tightly connected by the concept of orbit. Given a sample x ∈ X, its group orbit w.r.t. G is the sample set G(x) = {g · x | g ∈ G}. As shown in Figure 3 (a), if G is a set of attributes shared by classes, e.g., "color" and "pose", the orbit is the sample set of the class of x; in Figure 3 (b), if G denotes augmentations, the orbit is the set of augmented images. In particular, we can see that the disjoint orbits in Figure 3 naturally form a partition. Formally, we have the following definition:

Definition 2. (Orbit & Partition [47]) Given a subgroup D ⊂ G, it partitions X into the disjoint subsets {D(c1 · x), ..., D(ck · x)}, where k is the number of cosets {c1D, ..., ckD}, and the cosets form a factor group G/D = {ci}_{i=1}^{k}. In particular, ci · x can be considered as a sample of the i-th class, transited from any sample x ∈ X. (Footnote: given G = D × K with K = c1 × · · · × ck, D̄ = {(d, e) | d ∈ D} is a normal subgroup of G, and G/D̄ is isomorphic to K [47]; we write G/D = {ci}_{i=1}^{k} with a slight abuse of notation.) A small numerical illustration of orbits forming a partition follows below.

Interestingly, the partition offers a new perspective on the training data format in Supervised Learning (SL) and Self-Supervised Learning (SSL). In SL, as shown in Figure 3 (a), the data is labeled with k classes, each of which is an orbit with D(ci · x) training samples, whose variations are depicted by the class-sharing attribute group D. The cross-orbit group action, e.g., cdog · x, can be read as "turn x into a dog", and such a "turn" is always valid due to the assumption that X is a homogeneous space of G. In SSL, as shown in Figure 3 (b), each training sample x is augmented by the group D. So, D(ci · x) consists of all the augmentations of the i-th sample, where the cross-orbit group action ci · x can be read as "turn x into the i-th sample".
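A minimal, hypothetical sketch of Definition 2 in code: we take X to be the integers mod 12 and let the cyclic subgroup D be generated by "add 4"; its orbits are disjoint and cover X, i.e., they form a partition (the setup is our own illustration, not from the paper).

```python
X = set(range(12))
act = lambda x: (x + 4) % 12        # generator of the cyclic subgroup D

def orbit(x):
    """D(x) = {d . x | d in D}, computed by iterating the generator."""
    out, cur = set(), x
    while cur not in out:
        out.add(cur)
        cur = act(cur)
    return frozenset(out)

orbits = {orbit(x) for x in X}       # four disjoint orbits of size three
assert set().union(*orbits) == X
assert sum(len(o) for o in orbits) == len(X)   # disjointness: a partition of X
print([sorted(o) for o in orbits])   # e.g. [0, 4, 8], [1, 5, 9], ...
```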
Thanks to the orbit-and-partition view of the training data, we are ready to revisit model generalization from a group-theoretic view using invariance and equivariance, two sides of the same coin whose name is disentanglement. For SL, we expect that a good feature is disentangled into a class-agnostic part and a class-specific part: the former (latter) is invariant (equivariant) to G/D, the cross-orbit traverse, but equivariant (invariant) to D, the in-orbit traverse. By using such a feature, a model can generalize to diverse testing samples (limited to |D| variations) by only keeping the class-specific feature. Formally, we prove that we can achieve such disentanglement by contrastive learning:

Lemma 1. (Disentanglement by Contrastive Learning) The training loss −log [ exp(xᵢᵀxⱼ) / ∑_{x ∈ X} exp(xⱼᵀx) ] disentangles X w.r.t. (G/D) × D, where xi and xj are from the same orbit.

We can draw the following interesting corollaries from Lemma 1 (details in the Appendix):

1. If we use all the samples in the denominator of the loss, we can approximate G-equivariant features given limited training samples. This is because the loss minimization guarantees ∀(xi, xj) ∈ X × X, i ≠ j → xi ≠ xj, i.e., any pair corresponds to a group action.

2. The conventional cross-entropy loss in SL is a special case, if we define x ∈ X = {x1, ..., xk} as the k classifier weights. So, SL does not guarantee the disentanglement of G/D, which causes generalization error if the class domain of the downstream task is different from that of SL pre-training, e.g., a subset of G/D.

3. In contrastive learning based SSL, D = "augmentations" (recall Figure 2), and the number of augmentations |Daug| is generally much smaller than the class-wise sample diversity |DSL| in SL. This enables the SL model to generalize to more diverse testing samples (|DSL|) by filtering out the class-agnostic features (e.g., background) and focusing on the class-specific ones (e.g., foreground), which explains why SSL is worse than SL in downstream classification.

4. In SL, if the number of training samples per orbit is not enough, i.e., smaller than |D(ci · x)|, the disentanglement between D and G/D cannot be guaranteed, as in the challenges of few-shot learning [96]. Fortunately, in SSL, the number is sufficient as we always include all the augmented samples in training. Moreover, we conjecture that Daug only contains simple cyclic group elements such as rotation and colorization, which are easier for representation learning.

Lemma 1 does not guarantee the decomposability of each d ∈ D. Nonetheless, the downstream model can still generalize by keeping the class-specific features affected by G/D. Therefore, the key to filling the gap, or even letting SSL surpass SL, is to achieve the full disentanglement of G/Daug.

Theorem 1. The representation is fully disentangled w.r.t. G/Daug if and only if, ∀ci ∈ G/Daug, the contrastive loss in Eq. (1) is invariant to the 2 orbits of the partition {G′(ci · x), G′(ci⁻¹ · x)}, where G′ = G/ci = Daug × c1 × · · · × ci−1 × ci+1 × · · · × ck.

The maximization in Step 2 is based on the contrapositive of the sufficient condition of Theorem 1. Denote the currently disentangled group as D (initially Daug). If we can find a partition P* that maximizes the loss in Eq. (3), i.e., the SSL loss is variant across the orbits, then ∃h ∈ G/D such that the representation of h is entangled, i.e., P* = {D(h · x), D(h⁻¹ · x)}. Figure 3 (c) illustrates a discovered partition about color. The minimization in Step 1 is based on the necessary condition of Theorem 1. Based on the discovered P*, if we minimize Eq. (2), we can further disentangle h and update D ← D × h. Overall, IP-IRM converges as G/Daug is finite. Note that an improved contrastive objective [92] can further disentangle each d ∈ Daug and achieve full disentanglement w.r.t. G.

5 Experiments

5.1 Unsupervised Disentanglement

Datasets. We used two datasets. CMNIST [2] has 60,000 digit images with semantic labels of digits (0-9) and colors (red and green). These images differ in other semantics (e.g., slant and font) that are not labeled. Moreover, there is a strong correlation between digits and colors (most 0-4 in red and 5-9 in green), increasing the difficulty of disentangling them. Shapes3D [50] contains 480,000 images with 6 labelled semantics, i.e., size, type, azimuth, as well as floor, wall and object color. Note that we only considered the first three semantics for evaluation, as the standard augmentations in SSL will contaminate any color-related semantics.

Settings. We adopted 6 representative disentanglement metrics: the Disentangle Metric for Informativeness (DCI) [29], the Interventional Robustness Score (IRS) [79], the Explicitness Score (EXP) [72], the Modularity Score (MOD) [72], and the accuracy of predicting the ground-truth semantic labels by two classification models, logistic regression (LR) and gradient boosted trees (GBT) [59]. Specifically, DCI and EXP measure the explicitness, i.e., whether the values of semantics can be decoded from the feature using a linear transformation.
MOD and IRS measure the modularity, i.e., whether each feature dimension is equivariant to the shift of a single semantic. See the Appendix for the detailed formulas of the metrics. For evaluation, we trained CNN-based feature extractor backbones with a comparable number of parameters for all the baselines and our IP-IRM. The full implementation details are in the Appendix.

Results. In Table 1, we compare the proposed IP-IRM to the standard SSL method SimCLR [16] as well as several generative disentanglement methods [51, 41, 9, 15, 50]. On both the CMNIST and Shapes3D datasets, IP-IRM outperforms SimCLR on all metrics except IRS, with the largest relative gain being 8.8% for MOD. For MOD, we notice that VAE performs better than our IP-IRM by 6 points, i.e., 0.82 vs. 0.76 on Shapes3D. This is because VAE explicitly pursues a high modularity score by regularizing dimension-wise independence in the feature space. However, this regularization is adversarial to discriminative objectives [14, 95]. Indeed, we can observe from the LR column (i.e., the performance of downstream linear classification) that VAE methods perform clearly worse, especially on the more challenging Shapes3D dataset. We can draw the same conclusion from the results of GBT. Different from VAE methods, our IP-IRM is optimized towards disentanglement without such regularization, and is thus able to outperform the others in downstream tasks while obtaining a competitive modularity value.

What do IP-IRM features look like? Figure 4 visualizes the features learned by SimCLR and our IP-IRM on two datasets: CMNIST in Figure 4 (a) and STL10 in Figure 4 (b). In the following, we use Figure 4 (a) as the example; similar conclusions can easily be drawn from Figure 4 (b). On the left-hand side of Figure 4 (a), it is obvious that there is no clear boundary distinguishing the color semantic in the SimCLR feature space. Besides, the features of the same digit semantic are scattered in two regions. On the right-hand side of (a), we have 3 observations for IP-IRM. 1) The features are well clustered and each cluster corresponds to a specific semantic of either digit or color. This validates the equivariance property of the IP-IRM representation: it responds to any change of the existing semantics, e.g., digit and color on this dataset. 2) The feature space has a symmetrical structure for each individual semantic, validating the decomposability property of the IP-IRM representation. More specifically, i) mirroring a feature (w.r.t. "*" in the figure center) indicates a change in only the color semantic, regardless of the other semantic (digit); and ii) a counterclockwise rotation (denoted by black arrows from same-colored 1 to 7) indicates a change in only the digit semantic. 3) IP-IRM reveals the true distribution (similarity) of different classes. For example, digits 3, 5, 8, which share sub-parts (curved bottoms and turnings), have closer feature points in the IP-IRM feature space.

How does IP-IRM disentangle features? 1) Discovered P*: To visualize the partitions P* discovered at each maximization step, we performed an experiment on a binary CMNIST (digits 0 and 1, in red and green), and show the results in Figure 5 (a). Please kindly refer to the Appendix for the full results on CMNIST. First, each partition separates a specific semantic into two subsets, e.g., in Partition #1, red and green digits are separated.
Second, besides the obvious semantics of digit and color (labelled on the dataset), we can discover new semantics, e.g., the digit slant shown in Partition #3. 2) Disentangled Representation: In Figure 5 (b), we visualize how equivariant each feature dimension is to the change of each semantic, i.e., a darker color shows that a dimension is more equivariant w.r.t. the semantic indicated on the left. We can see that SimCLR fails to learn a decomposable representation, e.g., the 8th dimension captures azimuth, type and size in Shapes3D. In contrast, our IP-IRM achieves disentanglement by representing the semantics in interpretable dimensions, e.g., the 6th and 7th dimensions capture size, the 4th type, and the 2nd and 9th azimuth on Shapes3D. Overall, the results support the justification in Section 4, i.e., we discover a new semantic (affected by h) through the partition P* at each iteration and IP-IRM eventually converges to a disentangled representation.

5.2 Self-Supervised Learning

Datasets and Settings. We conducted the SSL evaluations on 2 standard benchmarks following [88, 20, 48]. Cifar100 [54] contains 60,000 images in 100 classes and STL10 [22] has 113,000 images in 10 classes. We used SimCLR [16], DCL [20] and HCL [48] as baselines, and learned the representations for 400 and 1,000 epochs. We evaluated both linear and k-NN (k = 200) accuracies for the downstream classification task; a sketch of this evaluation protocol is given below. Implementation details are in the Appendix.

Method                  | STL10 k-NN | STL10 Linear | Cifar100 k-NN | Cifar100 Linear
400 epoch training
SimCLR [16]             | 73.60      | 78.89        | 54.94         | 66.63
DCL [20]                | 78.82      | 82.56        | 57.29         | 68.59
HCL [48]                | 80.06      | 87.60        | 59.61         | 69.22
SimCLR+IP-IRM           | 79.66      | 84.44        | 59.10         | 69.55
DCL+IP-IRM              | 81.51      | 85.36        | 58.37         | 68.76
HCL+IP-IRM              | 84.29      | 87.81        | 60.05         | 69.95
1,000 epoch training
SimCLR [16]             | 78.60      | 84.24        | 59.45         | 68.73
SimCLR† [55]            | 79.80      | 85.56        | 63.67         | 72.18
SimCLR†+IP-IRM          | 85.08      | 89.91        | 65.82         | 73.99
Supervised∗             | -          | -            | -             | 73.72
Supervised∗+MixUp [97]  | -          | -            | -             | 74.19

Notably, our SimCLR+IP-IRM surpasses vanilla supervised learning on Cifar100 under the same evaluation setting. Still, the quality of disentanglement cannot be fully evaluated when the training and test samples are identically distributed: while the improved accuracy demonstrates that the IP-IRM representation is more equivariant to class semantics, it does not reveal whether the representation is decomposable. Hence we present an out-of-distribution (OOD) setting in Section 5.3 to further show this property.

Is IP-IRM sensitive to the values of its hyper-parameters? 1) λ1 and λ2 in Eq. (2) and Eq. (3): In Figure 6 (a), we observe that the best performance is achieved with λ1 and λ2 taking values from 0.2 to 0.5 on both datasets. All accuracies drop sharply when using λ1 = 1.0. The reason is that a higher λ1 forces the model to push the φ-induced similarity to the fixed baseline θ = 1, rather than decrease the loss L on the pretext task, leading to poor convergence. 2) The number of epochs: In Figure 6 (b), we plot the Top-1 accuracies of k-NN classifiers along the 700-epoch training of two kinds of SSL representations, SimCLR and IP-IRM. It is obvious that IP-IRM converges faster and achieves a higher accuracy than SimCLR. It is worth highlighting that on STL10, the accuracy of SimCLR starts to oscillate and grow slowly after the 150th epoch, while ours keeps improving. This is empirical evidence that IP-IRM keeps disentangling more and more semantics in the feature space, and has the potential to improve further through longer training.
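As referenced above, a minimal sketch of the linear and k-NN evaluation protocol on frozen SSL features follows; it assumes scikit-learn, and the paper's exact k-NN weighting and classifier training details may differ.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import normalize

def evaluate(train_feats, train_labels, test_feats, test_labels, k=200):
    """Linear-probe and k-NN accuracy on features from a frozen SSL backbone."""
    # L2-normalize so that the k-NN operates on cosine similarity.
    tr, te = normalize(train_feats), normalize(test_feats)
    knn = KNeighborsClassifier(n_neighbors=k, metric="cosine").fit(tr, train_labels)
    linear = LogisticRegression(max_iter=1000).fit(tr, train_labels)
    return {
        "knn_acc": knn.score(te, test_labels),
        "linear_acc": linear.score(te, test_labels),
    }
```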
5.3 Potential on Large-Scale Data

Datasets. We evaluated on the standard supervised learning benchmark ImageNet ILSVRC2012 [25], which has in total 1,331,167 images in 1,000 classes. To further reveal whether a representation is decomposable, we used NICO [37], a real-world image dataset designed for OOD evaluations. It contains 25,000 images in 19 classes, with a strong correlation between foreground and background in the train split (e.g., most dogs on grass). We also studied the transferability of the learned representation following [30, 52]: FGVC Aircraft (Aircraft) [60], Caltech-101 (Caltech) [31], Stanford Cars (Cars) [93], Cifar10 [53], Cifar100 [53], DTD [21], Oxford 102 Flowers (Flowers) [62], Food-101 (Food) [6], Oxford-IIIT Pets (Pets) [67] and SUN397 (SUN) [91]. These datasets include coarse- to fine-grained classification tasks, and vary in the amount of training data (2,000-75,000 images) and number of classes (10-397), representing a wide range of transfer learning settings.

Settings. For ImageNet, all the representations were trained for 200 epochs due to limited computing resources. We followed the common setting [80, 36], using a linear classifier, and report Top-1 classification accuracies. For NICO, we fixed the ImageNet pre-trained ResNet-50 backbone and fine-tuned the classifier. See the Appendix for more training details. For transfer learning, we followed [30, 52] to report the classification accuracies on Cars, Cifar-10, Cifar-100, DTD, Food and SUN, and the average per-class accuracies on Aircraft, Caltech, Flowers and Pets; we call them uniformly Accuracy. We used the few-shot n-way-k-shot setting for model evaluation. Specifically, we randomly sampled 2,000 episodes from the test splits of the above datasets. An episode contains n classes, each with k training samples and 15 testing samples; we fine-tuned the linear classifier (backbone weights frozen) for 100 epochs on the training samples, and evaluated the classifier on the testing samples. We evaluated with n = k = 5 (results for n = 5, k = 20 are in the Appendix). A sketch of the episode construction follows at the end of this subsection.

ImageNet and NICO. In Table 3 (ImageNet accuracy), our IP-IRM achieves the best performance among all baseline models. Yet we believe that this does not show the full potential of IP-IRM, because ImageNet is a larger-scale dataset with many semantics, and it is hard to achieve a full disentanglement of all semantics within the limited 200 epochs. To evaluate the feature decomposability of IP-IRM, we compared the performance on NICO with various SSL baselines in Table 3, where our approach significantly outperforms the baselines by 1.5-4.2%. This validates that the IP-IRM feature is more decomposable: if each semantic feature (e.g., background) is decomposed into some fixed dimensions and some classes vary with that semantic, then the classifier will recognize it as a non-discriminative, variant feature and hence focus on other, more discriminative features (i.e., the foreground). In this way, even though some classes are confounded by those non-discriminative features (e.g., most of the "dog" images have a "grass" background), the fixed dimensions still help the classifier neglect those non-discriminative ones. We further visualize the CAM [98] on NICO in Figure 7, which indeed shows that IP-IRM helps the classifier focus on the foreground regions.

Few-Shot Tasks. As shown in Table 4, our IP-IRM significantly improves the performance in the 5-way-5-shot setting, e.g., we outperform the baseline MoCo-v2 by 2.2%. This is because IP-IRM can further disentangle G/Daug beyond standard SSL, which is essential for representations to generalize to different downstream class domains (recall Corollary 2 of Lemma 1). This is also in line with recent works [86] showing that a disentangled representation is especially beneficial in low-shot scenarios, and further demonstrates the importance of disentanglement in downstream tasks.
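For reference, a minimal sketch of the n-way-k-shot episode construction described under Settings; the code is our own illustration, and the paper's sampling details may differ.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=5, n_query=15, rng=random):
    """Return (support_idx, query_idx) index lists for one episode.
    Assumes every class has at least k_shot + n_query examples."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), n_way)   # pick the n episode classes
    support, query = [], []
    for c in classes:
        chosen = rng.sample(by_class[c], k_shot + n_query)
        support += chosen[:k_shot]                  # k training samples per class
        query += chosen[k_shot:]                    # 15 testing samples per class
    return support, query
```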
6 Conclusion

We presented an unsupervised disentangled representation learning method called Iterative Partition-based Invariant Risk Minimization (IP-IRM), based on Self-Supervised Learning (SSL). IP-IRM iteratively partitions the dataset into semantic-related subsets, and learns a representation invariant across the subsets using SSL with an IRM loss. We show, with a theoretical guarantee, that IP-IRM converges to a disentangled representation under the group-theoretic view, which fundamentally surpasses the capabilities of existing SSL and fully supervised learning. Our proposed theory is backed by strong empirical results on disentanglement metrics, SSL classification accuracy and transfer performance. IP-IRM achieves disentanglement without using generative models, making it widely applicable to large-scale visual tasks. As future directions, we will continue to explore the application of group theory in representation learning and seek additional forms of inductive bias for faster convergence.

Acknowledgments and Disclosure of Funding

The authors would like to thank all reviewers for their constructive suggestions. This research is partly supported by the Alibaba-NTU Joint Research Institute, the A*STAR under its AME YIRG Grant (Project No. A20E6c0101), and the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant.
1. What is the main contribution of the paper regarding self-supervised learning?
2. How effective is the proposed technique compared to other disentanglement methods?
3. Is there any supervision used in the technique, despite being marketed as fully unsupervised? If so, what kind and how much?
4. Can you provide an intuitive explanation of the regularization term from Invariant Risk Minimization and its purpose?
5. Could you elaborate on the claim that the technique can disentangle the full semantic space without hand-crafted augmentations?
6. How does the technique compare to Locatello et al.'s result regarding the impossibility of fully unsupervised disentangled representation learning?
7. Can you explain why finding the partition that maximizes loss is similar to finding hard negatives?
8. How did you combine IP-IRM with SimCLR in Table 2?
9. Are there any limitations discussed in Section 6 that are not mentioned in the checklist?
10. Would it be possible to add standard deviations to the results?
11. How does this paper's method compare to the method in "What Should Not Be Contrastive in Contrastive Learning"?
Summary Of The Paper Review
Summary Of The Paper

This paper presents a new self-supervised learning technique that learns to map image inputs to "disentangled" vectors iteratively by alternating two steps. First, all of the data is assumed to be in the same subset and they learn a representation by minimizing a SimCLR-like contrastive loss (here the positive pairs are augmented versions of the same image, while negative samples are other images in the batch). In the second step, they find the partition of the dataset into 2 subsets that maximizes the contrastive loss. This new partition is added to the set of all partitions considered, and we repeat the two steps (again learning the representation that minimizes the contrastive loss over all partitions and then finding the next partition, and so on). They use a regularized form of the contrastive loss, which adds a term taken from earlier work ("Invariant Risk Minimization"). This term measures the magnitude of the gradient of the contrastive loss with respect to the temperature parameter (at temperature = 1). They evaluate their technique against alternatives on a wide range of disentanglement metrics and downstream classification accuracy, and show their technique performs well.

Review

Overall, I think the paper is quite interesting and should be of interest to the community. I especially like the idea of finding the worst partition in terms of the representation learned so far and re-learning the representation based on this partition. I also found the evaluations in the paper quite comprehensive. I think the main issue with the paper is that it makes rather strong claims but fails to support some of these. For example, the authors argue that their technique can disentangle the full semantic space (i.e., learn a fully disentangled representation) in a fully unsupervised manner (in contrast to other SSL techniques, which can only learn to disentangle the semantic information w.r.t. hand-crafted augmentations). However, to me it is not clear how this squares with the Locatello et al. result that shows that fully unsupervised learning of disentangled representations is not possible. They mention in passing that Higgins et al.'s group-theoretic definition of disentanglement solves the problem with Locatello et al.'s result, but this is not clear to me. Locatello et al. in essence show that there are multiple equivalent representations and it is impossible to pick one over another as more disentangled without some supervision. And Higgins et al. also accept this fact, as they point out in their paper that their disentanglement definition assumes a "natural" decomposition is already given (i.e., what it means to be disentangled). I should say that I found it difficult to fully grasp what the authors' claim is, so please let me know if I misunderstood anything.

This brings me to my other point. I think the paper right now is unfortunately hard to read. The paper gets rather technical at times, and it is hard to follow the argument (for example, the discussion after Lemma 1 in Section 4 is hard to follow). While I appreciate the authors presenting theoretical results, I think the paper would benefit greatly if the technique, and similarly the technical arguments, could be better motivated and presented on an intuitive level. Please also see my further comments below:

Figure 2 is hard to understand. What is being classified on the left panel? Also, it is not clear what I should see in the heatmaps on the right.
Lines 105-106: the authors say "Higgins et al., ..., resolves the previous controversial points" w.r.t. Locatello et al., for example. As I mentioned above, this is not clear to me. It'd be worth expanding this a little bit further.

What is the effect of the regularizer term from "Invariant Risk Minimization"? The authors say it "regularizes phi to be invariant across subset in a partition". What does this mean exactly? Can you give an intuitive example of what this term does? What would happen if you didn't have this term?

Line 155: the authors say "f is a collapsed representation that maps all inputs to a fixed vector, which is impossible in today's deep model training". What does this mean? Is it impossible because of the regularization, or tricks like EMA that people use? I don't see how the referenced work [88] is relevant to this point either.

To me it seemed like finding the partition that maximizes the loss is akin to finding hard negatives. Is this intuition correct? If so, would it be useful to mention something along these lines in the paper?

For the results in Table 2, how do you combine IP-IRM with SimCLR etc.?

In the checklist, the authors mention limitations are discussed in Section 6, but I couldn't see this.

I understand that the authors may not have had enough time to get standard deviations on all of the results, but it'd be nice to add these. (I have seen the results in the appendix.)

While reading this paper, I was reminded of the paper "What Should Not Be Contrastive in Contrastive Learning" (https://arxiv.org/abs/2008.05659). This paper learns essentially a separate subspace for each augmentation (which encourages the representations not to throw away information like SimCLR etc. might do). It'd be interesting to see how well the proposed method compares to this. (To be clear, this is totally optional. I understand the authors might not have time to work on this.)

Typos:
- line 74, "attributed first part" -> "attributed to first part"
- line 218, "regrading"
- line 228, "What does IP-IRM feature look like?" -> "What do IP-IRM features look like?"
- line 239, "regardless the other semantic" -> "regardless of the other semantic"
NIPS
Title Self-Supervised Learning Disentangled Group Representation as Feature Abstract A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics). In this paper, we formulate the notion of “good” representation from a group-theoretic view using Higgins’ definition of disentangled representation [40], and show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization, thus unable to modularize the remaining semantics. To break the limitation, we propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM), which successfully grounds the abstract semantics and the group acting on them into concrete contrastive learning. At each iteration, IP-IRM first partitions the training samples into two subsets that correspond to an entangled group element. Then, it minimizes a subset-invariant contrastive loss, where the invariance guarantees to disentangle the group element. We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks. Codes are available at https://github.com/Wangt-CN/IP-IRM. N/A 1 Introduction Deep learning is all about learning feature representations [5]. Compared to the conventional end-to-end supervised learning, Self-Supervised Learning (SSL) first learns a generic feature representation (e.g., a network backbone) by training with unsupervised pretext tasks such as the prevailing contrastive objective [36, 16], and then the above stage-1 feature is expected to serve various stage-2 applications with proper fine-tuning. SSL for visual representation is so fascinating that it is the first time that we can obtain “good” visual features for free, just like the trending pre-training in NLP community [26, 8]. However, most SSL works only care how much stage-2 performance an SSL feature can improve, but overlook what feature SSL is learning, why it can be learned, what cannot be learned, what the gap between SSL and Supervised Learning (SL) is, and when SSL can surpass SL? The crux of answering those questions is to formally understand what a feature representation is and what a good one is. We postulate the classic world model of visual generation and feature representation [1, 69] as in Figure 1. Let U be a set of (unseen) semantics, e.g., attributes such as “digit” and “color”. There 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Entangle Semantics 1344D 1212D SimCLR SimCLR + PIRM Aug. Related L1 Norm = 0.06 L1 Norm = 0.06 (a) (c) Aug. Unrelated 20 40 60 80 SimCLR IP-IRM SimCLR IP-IRM Ac cu ra cy (b) IP-IRMSimCLR Layer 29 IP-IRMSimCLR Layer 18 IP-IRMSimCLR Layer 29 IP-IRMSimCLR Layer 18 IP-IRMSimCLR Layer 29 IP-IRMSimCLR Layer 18 0 1 2 3 Figure 2: (a) The heat map visualizes feature dimensions related to augmentations (aug. related) and unrelated to augmentations (aug. unrelated), whose respective classification accuracy is shown in the bar chart below. Dashed bar denotes the accuracy using full feature dimensions. Experiment was performed on STL10 [22] with representation learnt with SimCLR [16] and our IP-IRM. (b) Visualization of CNN activations [77] of 4 filters on layer 29 and 18 of VGG [75] trained on ImageNet100 [81]. The filters were chosen by first clustering the aug. 
unrelated filters with k-means (k = 4) and then selecting the filters corresponding to the cluster centers. is a set of independent and causal mechanisms [66] ϕ : U → I, generating images from semantics, e.g., writing a digit “0” when thinking of “0” [74]. A visual representation is the inference process φ : I → X that maps image pixels to vector space features, e.g., a neural network. We define semantic representation as the functional composition f : U → I → X . In this paper, we are only interested in the parameterization of the inference process for feature extraction, but not the generation process, i.e., we assume ∀I ∈ I , ∃u ∈ U , such that I = ϕ(u) is fixed as the observation of each image sample. Therefore, we consider semantic and visual representations the same as feature representation, or simply representation, and we slightly abuse φ(I) := f ( ϕ−1(I) ) , i.e., φ and f share the same trainable parameters. We call the vector x = φ(I) as feature, where x ∈ X . We propose to use Higgins’ definition of disentangled representation [40] to define what is “good”. Definition 1. (Disentangled Representation) Let G be the group acting on U , i.e., g · u ∈ U × U transforms u ∈ U , e.g., a “turn green” group element changing the semantic from “red” to “green”. Suppose there is a direct product decomposition1 G = g1× . . .× gm and U = U1× . . .×Um, where gi acts on Ui respectively. A feature representation is disentangled if there exists a group G acting on X such that: 1. Equivariant: ∀g ∈ G,∀u ∈ U , f(g ·u) = g ·f(u), e.g., the feature of the changed semantic: “red” to “green” in U , is equivalent to directly change the color vector in X from “red” to “green”. 2. Decomposable: there is a decomposition X = X1 × . . .×Xm, such that each Xi is fixed by the action of all gj , j 6= i and affected only by gi, e.g., changing the “color” semantic in U does not affect the “digit” vector in X . Compared to the previous definition of feature representation which is a static mapping, the disentangled representation in Definition 1 is dynamic as it explicitly incorporate group representation [35], which is a homomorphism from group to group actions on a space, e.g., G → X × X , and it is common to use the feature space X as a shorthand—this is where our title stands. Definition 1 defines “good” features in the common views: 1) Robustness: a good feature should be invariant to the change of environmental semantics, such as external interventions [45, 87] or domain shifts [32]. By the above definition, a change is always retained in a subspace Xi, while others are not affected. Hence, the subsequent classifier will focus on the invariant features and ignore the ever-changing Xi. 2) Zero-shot Generalization: even if a new combination of semantics is unseen in training, each semantic has been learned as features. So, the metrics of each Xi trained by seen samples remain valid for unseen samples [95]. Are the existing SSL methods learning disentangled representations? No. We show in Section 4 that they can only disentangle representations according to the hand-crafted augmentations, e.g., color jitter and rotation. For example, in Figure 2 (a), even if we only use the augmentation-related feature, 1Note that gi can also denote a cyclic subgroup Gi such as rotation [0◦ : 1◦ : 360◦], or a countable one but treated as cyclic such as translation [(0, 0) : (1, 1) : (width, height)] and color [0 : 1 : 255]. 
the classification accuracy of a standard SSL (SimCLR [16]) does not lose much as compared to the full feature use. Figure 2 (b) visualizes that the CNN features in each layer are indeed entangled (e.g., tyre, motor, and background in the motorcycle image). In contrast, our approach IP-IRM, to be introduced below, disentangles more useful features beyond augmentations. In this paper, we propose Iterative Partition-based Invariant Risk Minimization (IP-IRM [ai"p@:m]) that guarantees to learn disentangled representations in an SSL fashion. We present the algorithm in Section 3, followed by the theoretical justifications in Section 4. In a nutshell, at each iteration, IP-IRM first partitions the training data into two disjoint subsets, each of which is an orbit of the already disentangled group, and the cross-orbit group corresponds to an entangled group element gi. Then, we adopt the Invariant Risk Minimization (IRM) [2] to implement a partition-based SSL, which disentangles the representation Xi w.r.t. gi. Iterating the above two steps eventually converges to a fully disentangled representation w.r.t. ∏m i=1 gi. In Section 5, we show promising experimental results on various feature disentanglement and SSL benchmarks. 2 Related Work Self-Supervised Learning. SSL aims to learn representations from unlabeled data with hand-crafted pretext tasks [28, 63, 33]. Recently, Contrastive learning [65, 61, 38, 80, 16] prevails in most state-ofthe-art methods. The key is to map positive samples closer, while pushing apart negative ones in the feature space. Specifically, the positive samples are from the augmented views [82, 3, 94, 42] of each instance and the negative ones are other instances. Along this direction, follow-up methods are mainly four-fold: 1) Memory-bank [90, 61, 36, 18]: storing the prototypes of all the instances computed previously into a memory bank to benefit from a large number of negative samples. 2) Using siamese network [7] to avoid representation collapse [34, 19, 83]. 3) Assigning clusters to samples to integrate inter-instance similarity into contrastive learning [11, 12, 13, 88, 56]. 4) Seeking hard negative samples with adversarial training or better sampling strategies [73, 20, 44, 48]. In contrast, our proposed IP-IRM jumps out of the above frame and introduces the disentangled representation into SSL with group theory to show the limitations of existing SSL and how to break through them. Disentangled Representation. This notion dates back to [4], and henceforward becomes a highlevel goal of separating the factors of variations in the data [84, 79, 86, 58]. Several works aim to provide a more precise description [27, 29, 72] by adopting an information-theoretic view [17, 27] and measuring the properties of a disentangled representation explicitly [29, 72]. We adopt the recent group-theoretic definition from Higgins et al. [40], which not only unifies the existing, but also resolves the previous controversial points [78, 59]. Although supervised learning of disentangled representation is a well-studied field [100, 43, 10, 70, 49], unsupervised disentanglement based on GAN [17, 64, 57, 71] or VAE [39, 15, 99, 50] is still believed to be theoretically challenging [59]. Thanks to the Higgins’ definition, we prove that the proposed IP-IRM converges with full-semantic disentanglement using group representation theory. 
Notably, IP-IRM learns a disentangled representation with an inference process, without using generative models as in all the existing unsupervised methods, making IP-IRM applicable even on large-scale datasets. Group Representation Learning. A group representation has two elements [47, 35]: 1) a homomorphism (e.g., a mapping function) from the group to its group action acting on a vector space, and 2) the vector space. Usually, when there is no ambiguity, we can use either element as the definition. Most existing works focus on learning the first element. They first define the group of interest, such as spherical rotations [24] or image scaling [89, 76], and then learn the parameters of the group actions [23, 46, 68]. In contrast, we focus on the second element; more specifically, we are interested in learning a map between two vector spaces: image pixel space and feature vector space. Our representation learning is flexible because it delays the group action learning to downstream tasks on demand. For example, in a classification task, a classifier can be seen as a group action that is invariant to class-agnostic groups but equivariant to class-specific groups (see Section 4). 3 IP-IRM Algorithm Notations. Our goal is to learn the feature extractor φ in a self-supervised fashion. We define a partition matrix P ∈ {0, 1}N×2 that partitions N training images into 2 disjoint subsets. Pi,k = 1 if the i-th image belongs to the k-th subset and 0 otherwise. Suppose we have a pretext task loss function L(φ, θ = 1, k,P) defined on the samples in the k-th subset, where θ = 1 is a “dummy” parameter used to evaluate the invariance of the SSL loss across the subsets (later discussed in Step 1). For example, L can be defined as: L(φ, θ = 1, k,P) = ∑ x∈Xk −log exp ( xTx∗ · θ )∑ x′∈Xk∪X∗\x exp (x Tx′ · θ) , (1) where Xk = φ({Ii|Pi,k = 1}), and x∗ ∈ X ∗ is the augmented view feature of x ∈ Xk. Input. N training images. Randomly initialized φ. A partition matrix P initialized such that the first column of P is 1, i.e., all samples belong to the first subset. Set P = {P}. Output. Disentangled feature extractor φ. Step 1 [Update φ]. We update φ by: min φ ∑ P∈P 2∑ k=1 [ L(φ, θ = 1, k,P) + λ1 ‖∇θ=1L(φ, θ = 1, k,P)‖2 ] , (2) where λ1 is a hyper-parameter. The second term delineates how far the contrast in one subset is from a constant baseline θ = 1. The minimization of both of them encourages φ in different subsets close to the same baseline, i.e., invariance across the subsets. See IRM [2] for more details. In particular, the first iteration corresponds to the standard SSL with X1 in Eq. (1) containing all training images. Step 2 [Update P]. We fix φ and find a new partition P∗ by P∗ = argmax P 2∑ k=1 [ L(φ, θ = 1, k,P) + λ2 ‖∇θ=1L(φ, θ = 1, k,P)‖2 ] , (3) where λ2 is a hyper-parameter. In practice, we use a continuous partition matrix in RN×2 during optimization and then threshold it to {0, 1}N×2. We update P ← P ∪P∗ and iterate the above two steps until convergence. 4 Justification Recall that IP-IRM uses training sample partitions to learn the disentangled representations w.r.t.∏m i=1 gi. As we have a G-equivariant feature map between the sample space I and feature space X (the equivariance is later guaranteed by Lemma 1), we slightly abuse the notation by using X to denote both spaces. Also, we assume that X is a homogeneous space of G, i.e., any sample x′ ∈ X can be transited from another sample x by a group action g · x. Intuitively, G is all you need to describe the diversity of the training set. 
4 Justification
Recall that IP-IRM uses training sample partitions to learn the disentangled representations w.r.t. $\prod_{i=1}^{m} g_i$. As we have a G-equivariant feature map between the sample space I and the feature space X (the equivariance is later guaranteed by Lemma 1), we slightly abuse the notation by using X to denote both spaces. Also, we assume that X is a homogeneous space of G, i.e., any sample x′ ∈ X can be reached from another sample x by a group action g · x. Intuitively, G is all you need to describe the diversity of the training set. It is worth noting that g is any group element in G, while $g_i$ is a Cartesian “building block” of G, e.g., g can be decomposed as $(g_1, g_2, \dots, g_m)$.
We show that partition and group are tightly connected by the concept of orbit. Given a sample x ∈ X, its group orbit w.r.t. G is the sample set G(x) = {g · x | g ∈ G}. As shown in Figure 3 (a), if G is a set of attributes shared by classes, e.g., “color” and “pose”, the orbit is the sample set of the class of x; in Figure 3 (b), if G denotes augmentations, the orbit is the set of augmented images. In particular, we can see that the disjoint orbits in Figure 3 naturally form a partition. Formally, we have the following definition:
Definition 2. (Orbit & Partition [47]) Given a subgroup D ⊂ G, it partitions X into the disjoint subsets $\{D(c_1 \cdot x), \dots, D(c_k \cdot x)\}$, where k is the number of cosets $\{c_1 D, \dots, c_k D\}$, and the cosets form a factor group $G/D = \{c_i\}_{i=1}^{k}$. In particular, $c_i \cdot x$ can be considered as a sample of the i-th class, transited from any sample x ∈ X. (Footnote: given $G = D \times K$ with $K = c_1 \times \dots \times c_k$, $\bar{D} = \{(d, e) \,|\, d \in D\}$ is a normal subgroup of G, and $G/\bar{D}$ is isomorphic to K [47]; we write $G/D = \{c_i\}_{i=1}^{k}$ with a slight abuse of notation.)
Interestingly, the partition offers a new perspective on the training data format in Supervised Learning (SL) and Self-Supervised Learning (SSL). In SL, as shown in Figure 3 (a), the data is labeled with k classes, each of which is an orbit with $D(c_i \cdot x)$ training samples, whose variations are depicted by the class-sharing attribute group D. The cross-orbit group action, e.g., $c_{dog} \cdot x$, can be read as “turn x into a dog”, and such a “turn” is always valid due to the assumption that X is a homogeneous space of G. In SSL, as shown in Figure 3 (b), each training sample x is augmented by the group D. So, $D(c_i \cdot x)$ consists of all the augmentations of the i-th sample, where the cross-orbit group action $c_i \cdot x$ can be read as “turn x into the i-th sample”.
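As a toy numeric illustration of Definition 2 (ours, not from the paper), consider the cyclic rotation subgroup D acting on states labelled by (angle, colour): the D-orbits are disjoint and partition the state space, with one orbit per coset (here, per colour).

```python
from itertools import product

# The homogeneous space X: states labelled by (angle, colour).
angles, colours = [0, 90, 180, 270], ['red', 'green']
states = set(product(angles, colours))

def act(d, s):
    # Action of a rotation d in D = {0, 90, 180, 270} on a state s.
    return ((s[0] + d) % 360, s[1])

def orbit(s):
    # D(s) = {d . s | d in D}
    return frozenset(act(d, s) for d in angles)

partition = {orbit(s) for s in states}
print(len(partition))  # 2 disjoint orbits, one per coset c_i: "red" class vs. "green" class
```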
Thanks to the orbit and partition view of training data, we are ready to revisit model generalization from a group-theoretic view using invariance and equivariance—the two sides of the coin whose name is disentanglement. For SL, we expect that a good feature is disentangled into a class-agnostic part and a class-specific part: the former (latter) is invariant (equivariant) to G/D—the cross-orbit traverse—but equivariant (invariant) to D—the in-orbit traverse. By using such a feature, a model can generalize to diverse testing samples (limited to |D| variations) by keeping only the class-specific feature. Formally, we prove that we can achieve such disentanglement by contrastive learning:
Lemma 1. (Disentanglement by Contrastive Learning) The training loss $-\log \frac{\exp(x_i^{\top} x_j)}{\sum_{x \in X} \exp(x_j^{\top} x)}$ disentangles X w.r.t. $(G/D) \times D$, where $x_i$ and $x_j$ are from the same orbit.
We can draw the following interesting corollaries from Lemma 1 (details in Appendix):
1. If we use all the samples in the denominator of the loss, we can approximate G-equivariant features given limited training samples. This is because the loss minimization guarantees $\forall (x_i, x_j) \in X \times X$, $i \neq j \Rightarrow x_i \neq x_j$, i.e., any pair corresponds to a group action.
2. The conventional cross-entropy loss in SL is a special case, if we define $x \in X = \{x_1, \dots, x_k\}$ as the k classifier weights. So, SL does not guarantee the disentanglement of G/D, which causes generalization error if the class domain of the downstream task is different from that of SL pre-training, e.g., a subset of G/D.
3. In contrastive-learning-based SSL, D = “augmentations” (recall Figure 2), and the number of augmentations $|D_{aug}|$ is generally much smaller than the class-wise sample diversity $|D_{SL}|$ in SL. This enables the SL model to generalize to more diverse testing samples ($|D_{SL}|$) by filtering out the class-agnostic features (e.g., background) and focusing on the class-specific ones (e.g., foreground), which explains why SSL is worse than SL in downstream classification.
4. In SL, if the number of training samples per orbit is not enough, i.e., smaller than $|D(c_i \cdot x)|$, the disentanglement between D and G/D cannot be guaranteed, which underlies the challenges in few-shot learning [96]. Fortunately, in SSL, the number is enough, as we always include all the augmented samples in training. Moreover, we conjecture that $D_{aug}$ only contains simple cyclic group elements such as rotation and colorization, which are easier for representation learning.
Lemma 1 does not guarantee the decomposability of each d ∈ D. Nonetheless, the downstream model can still generalize by keeping the class-specific features affected by G/D. Therefore, the key to filling the gap, or even letting SSL surpass SL, is to achieve the full disentanglement of $G/D_{aug}$.
Theorem 1. The representation is fully disentangled w.r.t. $G/D_{aug}$ if and only if $\forall c_i \in G/D_{aug}$, the contrastive loss in Eq. (1) is invariant to the 2 orbits of the partition $\{G'(c_i \cdot x), G'(c_i^{-1} \cdot x)\}$, where $G' = G/c_i = D_{aug} \times c_1 \times \dots \times c_{i-1} \times c_{i+1} \times \dots \times c_k$.
The maximization in Step 2 is based on the contraposition of the sufficient condition of Theorem 1. Denote the currently disentangled group as D (initially $D_{aug}$). If we can find a partition $\mathbf{P}^{*}$ that maximizes the loss in Eq. (3), i.e., the SSL loss varies across the orbits, then $\exists h \in G/D$ such that the representation of h is entangled, i.e., $\mathbf{P}^{*} = \{D(h \cdot x), D(h^{-1} \cdot x)\}$. Figure 3 (c) illustrates a discovered partition about color. The minimization in Step 1 is based on the necessary condition of Theorem 1. Based on the discovered $\mathbf{P}^{*}$, if we minimize Eq. (2), we can further disentangle h and update $D \leftarrow D \times h$. Overall, IP-IRM converges as $G/D_{aug}$ is finite. Note that an improved contrastive objective [92] can further disentangle each $d \in D_{aug}$ and achieve full disentanglement w.r.t. G.
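The partition search of Step 2 can be realized with the continuous relaxation mentioned in Section 3: optimize a soft assignment by gradient ascent on Eq. (3) over precomputed, frozen-φ features, then threshold. The sketch below is our own simplified reading—per-sample losses are weighted by soft membership and all samples serve as negatives, whereas Eq. (1) restricts negatives to the subset—so treat it as illustrative only.

```python
import torch

def per_sample_nce(x, x_aug, theta):
    # Per-sample contrastive losses from Eq. (1), simplified: all samples act as negatives.
    logits = (x @ torch.cat([x, x_aug]).t()) * theta          # (N, 2N)
    mask = torch.eye(x.size(0), 2 * x.size(0), dtype=torch.bool, device=x.device)
    logits = logits.masked_fill(mask, float('-inf'))          # drop x itself
    pos = (x * x_aug).sum(dim=1) * theta
    return torch.logsumexp(logits, dim=1) - pos               # (N,)

def partition_objective(x, x_aug, w, lam2):
    # Eq. (3) with soft memberships: subset 1 weights w, subset 2 weights 1 - w.
    total = 0.0
    for m in (w, 1.0 - w):
        theta = torch.ones(1, device=x.device, requires_grad=True)
        loss = (m * per_sample_nce(x, x_aug, theta)).sum() / m.sum().clamp(min=1e-8)
        grad = torch.autograd.grad(loss, [theta], create_graph=True)[0]
        total = total + loss + lam2 * grad.pow(2).sum()
    return total

def find_partition(x, x_aug, lam2, steps=200):
    # x, x_aug: frozen, L2-normalised features of all N images and their augmented views.
    scores = torch.zeros(x.size(0), requires_grad=True)       # soft assignment logits
    opt = torch.optim.Adam([scores], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        (-partition_objective(x, x_aug, torch.sigmoid(scores), lam2)).backward()  # ascent
        opt.step()
    return (torch.sigmoid(scores) > 0.5).long()               # threshold to a hard partition
```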
5 Experiments
5.1 Unsupervised Disentanglement
Datasets. We used two datasets. CMNIST [2] has 60,000 digit images with semantic labels of digits (0-9) and colors (red and green). These images differ in other semantics (e.g., slant and font) that are not labeled. Moreover, there is a strong correlation between digits and colors (most 0-4 in red and 5-9 in green), increasing the difficulty of disentangling them. Shapes3D [50] contains 480,000 images with 6 labelled semantics, i.e., size, type, azimuth, as well as floor, wall and object color. Note that we only considered the first three semantics for evaluation, as the standard augmentations in SSL will contaminate any color-related semantics.
Settings. We adopted 6 representative disentanglement metrics: Disentangle Metric for Informativeness (DCI) [29], Interventional Robustness Score (IRS) [79], Explicitness Score (EXP) [72], Modularity Score (MOD) [72], and the accuracy of predicting the ground-truth semantic labels by two classification models, logistic regression (LR) and gradient boosted trees (GBT) [59]. Specifically, DCI and EXP measure the explicitness, i.e., whether the values of semantics can be decoded from the feature using a linear transformation. MOD and IRS measure the modularity, i.e., whether each feature dimension is equivariant to the shift of a single semantic. See the Appendix for the detailed formulas of the metrics. In evaluation, we trained CNN-based feature extractor backbones with comparable numbers of parameters for all the baselines and our IP-IRM. The full implementation details are in the Appendix.
Results. In Table 1, we compared the proposed IP-IRM to the standard SSL method SimCLR [16] as well as several generative disentanglement methods [51, 41, 9, 15, 50]. On both the CMNIST and Shapes3D datasets, IP-IRM outperforms SimCLR on all metrics except IRS, with the largest relative gain of 8.8% on MOD. On MOD, we notice that VAE performs better than our IP-IRM by 6 points, i.e., 0.82 vs. 0.76 on Shapes3D. This is because VAE explicitly pursues a high modularity score by regularizing the dimension-wise independence in the feature space. However, this regularization is adversarial to discriminative objectives [14, 95]. Indeed, we can observe from the LR column (i.e., the performance of downstream linear classification) that the VAE methods perform clearly poorly, especially on the more challenging Shapes3D. We can draw the same conclusion from the results of GBT. Different from the VAE methods, our IP-IRM is optimized towards disentanglement without such regularization, and is thus able to outperform the others in downstream tasks while obtaining a competitive value of modularity.
What do IP-IRM features look like? Figure 4 visualizes the features learned by SimCLR and our IP-IRM on two datasets: CMNIST in Figure 4 (a) and STL10 in Figure 4 (b). In the following, we use Figure 4 (a) as the example; similar conclusions can easily be drawn from Figure 4 (b). On the left-hand side of Figure 4 (a), it is obvious that there is no clear boundary distinguishing the color semantic in the SimCLR feature space. Besides, the features of the same digit semantic are scattered across two regions. On the right-hand side of (a), we make 3 observations for IP-IRM. 1) The features are well clustered and each cluster corresponds to a specific semantic of either digit or color. This validates the equivariance property of the IP-IRM representation: it responds to any change of the existing semantics, e.g., digit and color on this dataset. 2) The feature space has a symmetrical structure for each individual semantic, validating the decomposability property of the IP-IRM representation. More specifically, i) mirroring a feature (w.r.t. “*” in the figure center) indicates a change of the color semantic only, regardless of the other semantic (digit); and ii) a counterclockwise rotation (denoted by black arrows from same-colored 1 to 7) indicates a change of the digit semantic only. 3) IP-IRM reveals the true distribution (similarity) of different classes. For example, digits 3, 5, 8, which share sub-parts (curved bottoms and turnings), have closer feature points in the IP-IRM feature space.
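Both the quantitative probes (the LR/GBT columns in Table 1) and qualitative plots like Figure 4 operate on frozen features. A minimal sketch of the two protocols follows; the exact probe hyper-parameters and the 2-D projection method are not stated in this section, so t-SNE and the sklearn defaults below are our assumptions.

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE

def lr_probe(train_x, train_y, test_x, test_y):
    # LR explicitness probe: linearly decode one semantic (e.g., digit) from frozen features.
    clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
    return clf.score(test_x, test_y)

def plot_features(x, labels):
    # Figure-4-style qualitative plot of frozen features (projection method assumed: t-SNE).
    xy = TSNE(n_components=2).fit_transform(x)
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=4, cmap='tab10')
    plt.show()
```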
How does IP-IRM disentangle features? 1) Discovered $\mathbf{P}^{*}$: To visualize the partitions $\mathbf{P}^{*}$ discovered at each maximization step, we performed an experiment on a binary CMNIST (digits 0 and 1 in colors red and green), and show the results in Figure 5 (a). Please refer to the Appendix for the full results on CMNIST. First, each partition tells apart a specific semantic into two subsets, e.g., in Partition #1, red and green digits are separated. Second, besides the obvious semantics—digit and color (labelled on the dataset)—we can discover new semantics, e.g., the digit slant shown in Partition #3. 2) Disentangled Representation: In Figure 5 (b), we visualize how equivariant each feature dimension is to the change of each semantic, i.e., a darker color shows that a dimension is more equivariant w.r.t. the semantic indicated on the left. We can see that SimCLR fails to learn a decomposable representation, e.g., the 8-th dimension captures azimuth, type and size on Shapes3D. In contrast, our IP-IRM achieves disentanglement by representing the semantics in interpretable dimensions, e.g., on Shapes3D the 6-th and 7-th dimensions capture size, the 4-th captures type, and the 2-nd and 9-th capture azimuth. Overall, the results support the justification in Section 4, i.e., we discover a new semantic (affected by h) through the partition $\mathbf{P}^{*}$ at each iteration, and IP-IRM eventually converges to a disentangled representation.
5.2 Self-Supervised Learning
Datasets and Settings. We conducted the SSL evaluations on 2 standard benchmarks following [88, 20, 48]. Cifar100 [54] contains 60,000 images in 100 classes and STL10 [22] has 113,000 images in 10 classes. We used SimCLR [16], DCL [20] and HCL [48] as baselines, and learned the representations for 400 and 1,000 epochs. We evaluated both linear and k-NN (k = 200) accuracies for the downstream classification task. Implementation details are in the Appendix.

Method                    STL10 k-NN   STL10 Linear   Cifar100 k-NN   Cifar100 Linear
400-epoch training
SimCLR [16]               73.60        78.89          54.94           66.63
DCL [20]                  78.82        82.56          57.29           68.59
HCL [48]                  80.06        87.60          59.61           69.22
SimCLR+IP-IRM             79.66        84.44          59.10           69.55
DCL+IP-IRM                81.51        85.36          58.37           68.76
HCL+IP-IRM                84.29        87.81          60.05           69.95
1,000-epoch training
SimCLR [16]               78.60        84.24          59.45           68.73
SimCLR† [55]              79.80        85.56          63.67           72.18
SimCLR†+IP-IRM            85.08        89.91          65.82           73.99
Supervised∗               -            -              -               73.72
Supervised∗+MixUp [97]    -            -              -               74.19

Notably, our SimCLR+IP-IRM surpasses vanilla supervised learning on Cifar100 under the same evaluation setting. Still, the quality of disentanglement cannot be fully evaluated when the training and test samples are identically distributed—while the improved accuracy demonstrates that the IP-IRM representation is more equivariant to class semantics, it does not reveal whether the representation is decomposable. Hence we present an out-of-distribution (OOD) setting in Section 5.3 to further show this property.
Is IP-IRM sensitive to the values of hyper-parameters? 1) $\lambda_1$ and $\lambda_2$ in Eq. (2) and Eq. (3). In Figure 6 (a), we observe that the best performance is achieved with $\lambda_1$ and $\lambda_2$ taking values from 0.2 to 0.5 on both datasets. All accuracies drop sharply with $\lambda_1 = 1.0$. The reason is that a higher $\lambda_1$ forces the model to push the φ-induced similarity to the fixed baseline θ = 1, rather than decrease the loss L on the pretext task, leading to poor convergence. 2) The number of epochs. In Figure 6 (b), we plot the Top-1 accuracies of k-NN classifiers over 700 epochs of training for two kinds of SSL representations—SimCLR and IP-IRM. IP-IRM clearly converges faster and achieves a higher accuracy than SimCLR. It is worth highlighting that on STL10, the accuracy of SimCLR starts to oscillate and grow slowly after the 150-th epoch, while ours keeps improving. This is empirical evidence that IP-IRM keeps disentangling more and more semantics in the feature space, and has the potential to improve further through long-term training.
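For reference, the k-NN evaluation above is typically implemented as a similarity-weighted vote over the 200 nearest training features. The weighting scheme and temperature below follow common SSL practice and are assumptions, not specifics reported here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_accuracy(train_x, train_y, test_x, test_y, k=200, temperature=0.5):
    # Cosine-similarity k-NN classifier (k = 200) on frozen features.
    train_x, test_x = F.normalize(train_x, dim=1), F.normalize(test_x, dim=1)
    sim = test_x @ train_x.t()                          # (n_test, n_train)
    top_sim, top_idx = sim.topk(k, dim=1)
    weights = (top_sim / temperature).exp()             # similarity-weighted voting
    num_classes = int(train_y.max()) + 1
    votes = torch.zeros(test_x.size(0), num_classes, device=test_x.device)
    votes.scatter_add_(1, train_y[top_idx], weights)    # accumulate votes per class
    return (votes.argmax(dim=1) == test_y).float().mean().item()
```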
5.3 Potential on Large-Scale Data
Datasets. We evaluated on ImageNet ILSVRC2012 [25], the standard benchmark of supervised learning, which has 1,331,167 images in 1,000 classes in total. To further reveal whether a representation is decomposable, we used NICO [37], a real-world image dataset designed for OOD evaluations. It contains 25,000 images in 19 classes, with a strong correlation between the foreground and the background in the train split (e.g., most dogs on grass). We also studied the transferability of the learned representation following [30, 52]: FGVC Aircraft (Aircraft) [60], Caltech-101 (Caltech) [31], Stanford Cars (Cars) [93], Cifar10 [53], Cifar100 [53], DTD [21], Oxford 102 Flowers (Flowers) [62], Food-101 (Food) [6], Oxford-IIIT Pets (Pets) [67] and SUN397 (SUN) [91]. These datasets include coarse- to fine-grained classification tasks, and vary in the amount of training data (2,000-75,000 images) and classes (10-397 classes), representing a wide range of transfer learning settings.
Settings. For ImageNet, all the representations were trained for 200 epochs due to limited computing resources. We followed the common setting [80, 36], using a linear classifier, and report Top-1 classification accuracies. For NICO, we fixed the ImageNet pre-trained ResNet-50 backbone and fine-tuned the classifier. See the Appendix for more training details. For transfer learning, we followed [30, 52] to report the classification accuracies on Cars, Cifar-10, Cifar-100, DTD, Food, SUN and the average per-class accuracies on Aircraft, Caltech, Flowers, Pets. We refer to them uniformly as Accuracy. We used the few-shot n-way-k-shot setting for model evaluation. Specifically, we randomly sampled 2,000 episodes from the test splits of the above datasets. An episode contains n classes, each with k training samples and 15 testing samples; we fine-tuned the linear classifier (backbone weights frozen) for 100 epochs on the training samples and evaluated it on the testing samples. We evaluated with n = k = 5 (results for n = 5, k = 20 are in the Appendix).
ImageNet and NICO. On ImageNet accuracy in Table 3, our IP-IRM achieves the best performance among all baseline models. Yet we believe this does not show the full potential of IP-IRM, because ImageNet is a larger-scale dataset with many semantics, and it is hard to achieve a full disentanglement of all semantics within the limited 200 epochs. To evaluate the feature decomposability of IP-IRM, we compared the performance on NICO with various SSL baselines in Table 3, where our approach significantly outperforms the baselines by 1.5-4.2%. This validates that the IP-IRM feature is more decomposable—if each semantic feature (e.g., background) is decomposed into some fixed dimensions and some classes vary with that semantic, then the classifier will recognize it as a non-discriminative, variant feature and hence focus on other, more discriminative features (i.e., foreground). In this way, even though some classes are confounded by those non-discriminative features (e.g., most of the “dog” images have a “grass” background), the fixed dimensions still help the classifier neglect them. We further visualized the CAM [98] on NICO in Figure 7, which indeed shows that IP-IRM helps the classifier focus on the foreground regions.
Few-Shot Tasks. As shown in Table 4, our IP-IRM significantly improves the performance in the 5-way-5-shot setting, e.g., we outperform the baseline MoCo-v2 by 2.2%. This is because IP-IRM can further disentangle $G/D_{aug}$ beyond standard SSL, which is essential for representations to generalize to different downstream class domains (recall Corollary 2 of Lemma 1). This is also in line with recent works [86] showing that a disentangled representation is especially beneficial in low-shot scenarios, and further demonstrates the importance of disentanglement in downstream tasks.
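The episode protocol above is straightforward to reproduce; a sketch follows, with the optimizer and learning rate as our assumptions (the paper defers such details to its appendix).

```python
import random
import torch
import torch.nn.functional as F

def sample_episode(feats_by_class, n=5, k=5, n_query=15):
    # One n-way-k-shot episode over frozen features: k support + 15 query images per class.
    sup_x, sup_y, qry_x, qry_y = [], [], [], []
    for label, cls in enumerate(random.sample(list(feats_by_class), n)):
        feats = feats_by_class[cls]
        idx = torch.randperm(feats.size(0))[: k + n_query]
        sup_x.append(feats[idx[:k]]);  sup_y += [label] * k
        qry_x.append(feats[idx[k:]]); qry_y += [label] * n_query
    return (torch.cat(sup_x), torch.tensor(sup_y),
            torch.cat(qry_x), torch.tensor(qry_y))

def episode_accuracy(sup_x, sup_y, qry_x, qry_y, epochs=100, lr=0.1):
    # Fine-tune only a linear classifier on the support set (backbone frozen), test on query.
    clf = torch.nn.Linear(sup_x.size(1), int(sup_y.max()) + 1)
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clf(sup_x), sup_y).backward()
        opt.step()
    with torch.no_grad():
        return (clf(qry_x).argmax(dim=1) == qry_y).float().mean().item()
```

Averaging `episode_accuracy` over 2,000 sampled episodes reproduces the reported few-shot Accuracy.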
6 Conclusion
We presented an unsupervised disentangled representation learning method called Iterative Partition-based Invariant Risk Minimization (IP-IRM), based on Self-Supervised Learning (SSL). IP-IRM iteratively partitions the dataset into semantically related subsets, and learns a representation invariant across the subsets using SSL with an IRM loss. We show that, with a theoretical guarantee, IP-IRM converges to a disentangled representation under the group-theoretic view, which fundamentally surpasses the capabilities of existing SSL and fully-supervised learning. Our proposed theory is backed by strong empirical results on disentanglement metrics, SSL classification accuracy and transfer performance. IP-IRM achieves disentanglement without using generative models, making it widely applicable to large-scale visual tasks. As future directions, we will continue to explore the application of group theory in representation learning and seek additional forms of inductive bias for faster convergence.
Acknowledgments and Disclosure of Funding
The authors would like to thank all reviewers for their constructive suggestions. This research is partly supported by the Alibaba-NTU Joint Research Institute, the A*STAR under its AME YIRG Grant (Project No. A20E6c0101), and the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant.
1. What is the focus and contribution of the paper on disentangled representation learning?
2. What are the strengths of the proposed approach, particularly in terms of scalability and principled grounding?
3. What are the weaknesses of the paper, especially regarding the choice of hyperparameters and potential interference between augmentations and disentangling?
4. Do you have any concerns or suggestions for improving the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a scalable approach to disentangled representation learning using the self-supervised learning framework, rather than the typically used generative model framework. Their approach is principled, being grounded in the Higgins et al. definition of disentangling from the symmetry perspective. It also appears to be scalable, which is a very exciting step in the subfield of unsupervised disentangled representation learning.
Review
The paper is very well written and makes a very exciting step in unsupervised disentangled representation learning. I am happy to argue for its acceptance. That said, I do have a few concerns.
Is K a hyperparameter? How should one choose it? What were the choices for CMNIST and 3D Shapes? Why does increasing it result in reduced performance? How do the authors propose to scale their algorithm to disentangling many different subspaces without the ability to increase K?
What are the augmentations used, and how are the negative examples chosen? If the authors just augment the existing SSL algorithms, with their native augmentations/negative examples, with the proposed disentangling framework, then this should be stated in the text, as otherwise the reader is left wondering.
Is there a way to choose augmentations that would not interfere with disentangling? The authors suggested, e.g., that colour information was not disentangled because it was subsumed by the augmentations.
Minor: Line 147 "In particular, SSL and fully-supervised learning are two special cases of Gi-disentanglement" - I think it would be better to move this out of the lemma, as it is not really a part of it, and instead add it to the following more in-depth discussion of this point later in the section.
NIPS
1. What is the focus of the paper regarding unsupervised representation learning?
2. What are the strengths of the proposed approach, particularly in its mathematical and conceptual motivation?
3. What are the weaknesses of the paper, especially in terms of experimental performance compared to prior works?
4. How does the reviewer assess the significance and novelty of the paper's contributions?
5. Are there any questions or concerns regarding the paper's content that the reviewer did not explicitly mention?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors focus on the problem of learning unsupervised representations that are also disentangled. To this end, they build on top of the idea of invariant risk minimization and propose Iterative Partition-based Invariant Risk Minimization (IP-IRM), which iteratively assigns examples to partitions and constrains invariance in the loss among these partitions. They justify their approach mathematically according to a formal definition of disentanglement. They then perform multiple experiments, qualitatively and quantitatively demonstrating that the learned representations are disentangled. Furthermore, they show promising results on out-of-distribution classification tasks.
Review
Strengths:
- The paper is well written, flows well, and is easy to understand.
- The problem is very relevant. There has been a great deal of work on learning self-supervised representations in computer vision. However, much work has focused on improving image classification numbers and less on how well the representation is disentangled. Disentanglement is an essential foundation for abstract reasoning.
- The paper is very well motivated mathematically and conceptually. They show, using a formal definition of disentanglement, why IP-IRM is a sensible approach.
- Quantitative and qualitative experimentation is very thorough and convincing.
Weaknesses:
- For the OOD experiments on ImageNet, NICO, and related datasets, the improvements in performance over the state of the art are smaller.
Overall: The paper addresses an important but underexplored topic in self-supervised learning: disentangled representations. The authors firmly justify IP-IRM mathematically with Higgins's definition of disentanglement. Evaluation of the approach is thorough, and the quantitative and qualitative results convincing. I believe this would be a good contribution to the conference.
NIPS
Title Self-Supervised Learning Disentangled Group Representation as Feature

Abstract A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics). In this paper, we formulate the notion of a “good” representation from a group-theoretic view using Higgins’ definition of disentangled representation [40], and show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization, and is thus unable to modularize the remaining semantics. To break this limitation, we propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM), which successfully grounds the abstract semantics and the group acting on them into concrete contrastive learning. At each iteration, IP-IRM first partitions the training samples into two subsets that correspond to an entangled group element. Then, it minimizes a subset-invariant contrastive loss, where the invariance guarantees to disentangle the group element. We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks. Code is available at https://github.com/Wangt-CN/IP-IRM.

1 Introduction

Deep learning is all about learning feature representations [5]. Compared to the conventional end-to-end supervised learning, Self-Supervised Learning (SSL) first learns a generic feature representation (e.g., a network backbone) by training with unsupervised pretext tasks such as the prevailing contrastive objective [36, 16], and then the above stage-1 feature is expected to serve various stage-2 applications with proper fine-tuning. SSL for visual representation is so fascinating that it is the first time that we can obtain “good” visual features for free, just like the trending pre-training in the NLP community [26, 8]. However, most SSL works only care how much stage-2 performance an SSL feature can improve, but overlook what features SSL is learning, why they can be learned, what cannot be learned, what the gap between SSL and Supervised Learning (SL) is, and when SSL can surpass SL.

The crux of answering those questions is to formally understand what a feature representation is and what a good one is. We postulate the classic world model of visual generation and feature representation [1, 69] as in Figure 1.

Figure 2: (a) The heat map visualizes feature dimensions related to augmentations (aug. related) and unrelated to augmentations (aug. unrelated), whose respective classification accuracy is shown in the bar chart below; the dashed bar denotes the accuracy using full feature dimensions. The experiment was performed on STL10 [22] with representations learnt with SimCLR [16] and our IP-IRM. (b) Visualization of CNN activations [77] of 4 filters on layers 29 and 18 of VGG [75] trained on ImageNet100 [81]. The filters were chosen by first clustering the aug.
unrelated filters with k-means (k = 4) and then selecting the filters corresponding to the cluster centers.

Let U be a set of (unseen) semantics, e.g., attributes such as “digit” and “color”. There is a set of independent and causal mechanisms [66] ϕ : U → I, generating images from semantics, e.g., writing a digit “0” when thinking of “0” [74]. A visual representation is the inference process φ : I → X that maps image pixels to vector-space features, e.g., a neural network. We define semantic representation as the functional composition f : U → I → X. In this paper, we are only interested in the parameterization of the inference process for feature extraction, but not the generation process, i.e., we assume ∀I ∈ I, ∃u ∈ U, such that I = ϕ(u) is fixed as the observation of each image sample. Therefore, we consider semantic and visual representations the same as feature representation, or simply representation, and we slightly abuse φ(I) := f(ϕ⁻¹(I)), i.e., φ and f share the same trainable parameters. We call the vector x = φ(I) the feature, where x ∈ X.

We propose to use Higgins’ definition of disentangled representation [40] to define what is “good”.

Definition 1. (Disentangled Representation) Let G be the group acting on U, i.e., the action g · u transforms u ∈ U, e.g., a “turn green” group element changes the semantic from “red” to “green”. Suppose there is a direct product decomposition¹ G = g1 × … × gm and U = U1 × … × Um, where gi acts on Ui respectively. A feature representation is disentangled if there exists a group G acting on X such that:

1. Equivariant: ∀g ∈ G, ∀u ∈ U, f(g · u) = g · f(u), e.g., the feature of the changed semantic, “red” to “green” in U, is equivalent to directly changing the color vector in X from “red” to “green”.

2. Decomposable: there is a decomposition X = X1 × … × Xm, such that each Xi is fixed by the actions of all gj, j ≠ i, and affected only by gi, e.g., changing the “color” semantic in U does not affect the “digit” vector in X.

¹Note that gi can also denote a cyclic subgroup Gi such as rotation [0° : 1° : 360°], or a countable one treated as cyclic, such as translation [(0, 0) : (1, 1) : (width, height)] and color [0 : 1 : 255].

Compared to the previous definition of feature representation, which is a static mapping, the disentangled representation in Definition 1 is dynamic, as it explicitly incorporates group representation [35], which is a homomorphism from a group to group actions on a space, e.g., G → X × X, and it is common to use the feature space X as a shorthand—this is where our title stands.

Definition 1 defines “good” features in the common views: 1) Robustness: a good feature should be invariant to the change of environmental semantics, such as external interventions [45, 87] or domain shifts [32]. By the above definition, a change is always retained in a subspace Xi, while the others are not affected. Hence, the subsequent classifier will focus on the invariant features and ignore the ever-changing Xi. 2) Zero-shot Generalization: even if a new combination of semantics is unseen in training, each semantic has been learned as a feature. So, the metrics of each Xi trained by seen samples remain valid for unseen samples [95].

Are the existing SSL methods learning disentangled representations? No. We show in Section 4 that they can only disentangle representations according to the hand-crafted augmentations, e.g., color jitter and rotation. For example, in Figure 2 (a), even if we only use the augmentation-related feature,
the classification accuracy of a standard SSL method (SimCLR [16]) drops little compared to using the full feature. Figure 2 (b) visualizes that the CNN features in each layer are indeed entangled (e.g., tyre, motor, and background in the motorcycle image). In contrast, our approach IP-IRM, to be introduced below, disentangles more useful features beyond augmentations.

In this paper, we propose Iterative Partition-based Invariant Risk Minimization (IP-IRM, pronounced “i-perm”), which guarantees to learn disentangled representations in an SSL fashion. We present the algorithm in Section 3, followed by the theoretical justifications in Section 4. In a nutshell, at each iteration, IP-IRM first partitions the training data into two disjoint subsets, each of which is an orbit of the already disentangled group, and the cross-orbit group corresponds to an entangled group element gi. Then, we adopt Invariant Risk Minimization (IRM) [2] to implement a partition-based SSL, which disentangles the representation Xi w.r.t. gi. Iterating the above two steps eventually converges to a fully disentangled representation w.r.t. ∏_{i=1}^{m} gi. In Section 5, we show promising experimental results on various feature disentanglement and SSL benchmarks.

2 Related Work

Self-Supervised Learning. SSL aims to learn representations from unlabeled data with hand-crafted pretext tasks [28, 63, 33]. Recently, contrastive learning [65, 61, 38, 80, 16] prevails in most state-of-the-art methods. The key is to map positive samples closer, while pushing apart negative ones in the feature space. Specifically, the positive samples are the augmented views [82, 3, 94, 42] of each instance, and the negative ones are other instances. Along this direction, follow-up methods are mainly four-fold: 1) memory bank [90, 61, 36, 18]: storing the prototypes of all the instances computed previously into a memory bank to benefit from a large number of negative samples; 2) using a siamese network [7] to avoid representation collapse [34, 19, 83]; 3) assigning clusters to samples to integrate inter-instance similarity into contrastive learning [11, 12, 13, 88, 56]; and 4) seeking hard negative samples with adversarial training or better sampling strategies [73, 20, 44, 48]. In contrast, our proposed IP-IRM jumps out of the above frame and introduces disentangled representation into SSL with group theory, to show the limitations of existing SSL and how to break through them.

Disentangled Representation. This notion dates back to [4], and has henceforward become a high-level goal of separating the factors of variation in the data [84, 79, 86, 58]. Several works aim to provide a more precise description [27, 29, 72] by adopting an information-theoretic view [17, 27] and measuring the properties of a disentangled representation explicitly [29, 72]. We adopt the recent group-theoretic definition from Higgins et al. [40], which not only unifies the existing ones, but also resolves previously controversial points [78, 59]. Although supervised learning of disentangled representations is a well-studied field [100, 43, 10, 70, 49], unsupervised disentanglement based on GANs [17, 64, 57, 71] or VAEs [39, 15, 99, 50] is still believed to be theoretically challenging [59]. Thanks to Higgins' definition, we prove that the proposed IP-IRM converges with full-semantic disentanglement using group representation theory. Notably, IP-IRM learns a disentangled representation with an inference process, without using generative models as in all the existing unsupervised methods, making IP-IRM applicable even on large-scale datasets.
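To make the two requirements of Definition 1 concrete, the following is a minimal NumPy sketch of ours (a toy assumption, not part of the paper): semantics form a hypothetical (digit, color) pair, and a single “shift color” group element acts on U by incrementing the color and on X by cyclically shifting only the color block of a one-hot feature.

```python
# Toy check of Definition 1: f maps semantics to concatenated one-hot features;
# "shift color" acts on U by incrementing the color, and on X by shifting the color block.
import numpy as np

def f(digit, color):
    x = np.zeros(13)                       # X = X_digit (10 dims) x X_color (3 dims)
    x[digit] = 1.0
    x[10 + color] = 1.0
    return x

def g_on_U(digit, color):                  # the group element acting on semantics
    return digit, (color + 1) % 3

def g_on_X(x):                             # the same element acting on the feature space
    y = x.copy()
    y[10:] = np.roll(x[10:], 1)            # only the color subspace is affected
    return y

u = (7, 2)
assert np.allclose(f(*g_on_U(*u)), g_on_X(f(*u)))      # equivariance: f(g.u) = g.f(u)
assert np.allclose(g_on_X(f(*u))[:10], f(*u)[:10])     # decomposability: digit block fixed
print("Definition 1 holds for this toy representation.")
```

Here the digit subspace is fixed by the color action, which is exactly the decomposability requirement.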
Group Representation Learning. A group representation has two elements [47, 35]: 1) a homomorphism (e.g., a mapping function) from the group to its group action acting on a vector space, and 2) the vector space. Usually, when there is no ambiguity, we can use either element as the definition. Most existing works focus on learning the first element. They first define the group of interest, such as spherical rotations [24] or image scaling [89, 76], and then learn the parameters of the group actions [23, 46, 68]. In contrast, we focus on the second element; more specifically, we are interested in learning a map between two vector spaces: the image pixel space and the feature vector space. Our representation learning is flexible because it delays the group action learning to downstream tasks on demand. For example, in a classification task, a classifier can be seen as a group action that is invariant to class-agnostic groups but equivariant to class-specific groups (see Section 4).

3 IP-IRM Algorithm

Notations. Our goal is to learn the feature extractor φ in a self-supervised fashion. We define a partition matrix P ∈ {0, 1}^{N×2} that partitions N training images into 2 disjoint subsets: P_{i,k} = 1 if the i-th image belongs to the k-th subset and 0 otherwise. Suppose we have a pretext task loss function L(φ, θ = 1, k, P) defined on the samples in the k-th subset, where θ = 1 is a “dummy” parameter used to evaluate the invariance of the SSL loss across the subsets (discussed later in Step 1). For example, L can be defined as:

L(φ, θ = 1, k, P) = ∑_{x ∈ X_k} −log [ exp(xᵀx* · θ) / ∑_{x′ ∈ X_k ∪ X* \ x} exp(xᵀx′ · θ) ],    (1)

where X_k = φ({I_i | P_{i,k} = 1}), and x* ∈ X* is the augmented-view feature of x ∈ X_k.

Input: N training images; randomly initialized φ; a partition matrix P initialized such that the first column of P is all 1, i.e., all samples belong to the first subset. Set P = {P}.

Output: Disentangled feature extractor φ.

Step 1 [Update φ]. We update φ by:

min_φ ∑_{P ∈ P} ∑_{k=1}^{2} [ L(φ, θ = 1, k, P) + λ1 ‖∇_{θ=1} L(φ, θ = 1, k, P)‖² ],    (2)

where λ1 is a hyper-parameter. The second term delineates how far the contrast in one subset is from the constant baseline θ = 1. Minimizing both terms encourages φ to stay close to the same baseline in different subsets, i.e., to be invariant across the subsets. See IRM [2] for more details. In particular, the first iteration corresponds to standard SSL, with X_1 in Eq. (1) containing all training images.

Step 2 [Update P]. We fix φ and find a new partition P* by:

P* = argmax_P ∑_{k=1}^{2} [ L(φ, θ = 1, k, P) + λ2 ‖∇_{θ=1} L(φ, θ = 1, k, P)‖² ],    (3)

where λ2 is a hyper-parameter. In practice, we use a continuous partition matrix in R^{N×2} during optimization and then threshold it to {0, 1}^{N×2}. We update P ← P ∪ {P*} and iterate the above two steps until convergence.

4 Justification

Recall that IP-IRM uses training-sample partitions to learn the disentangled representations w.r.t. ∏_{i=1}^{m} gi. As we have a G-equivariant feature map between the sample space I and the feature space X (the equivariance is later guaranteed by Lemma 1), we slightly abuse the notation by using X to denote both spaces. Also, we assume that X is a homogeneous space of G, i.e., any sample x′ ∈ X can be reached from another sample x by a group action g · x. Intuitively, G is all you need to describe the diversity of the training set.
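To make Steps 1 and 2 above concrete, here is a minimal PyTorch sketch of Eqs. (1)-(3). The function names, the batch-level treatment and the hyper-parameter handling are our illustrative assumptions rather than the authors' released code; features are assumed to be L2-normalised.

```python
import torch

def subset_loss(z, z_aug, subset, theta):
    """Eq. (1) on one subset: z, z_aug are (N, d) features of samples and their views."""
    zk, zk_aug = z[subset], z_aug[subset]             # X_k and its augmented views X*
    cand = torch.cat([zk, zk_aug], dim=0)             # denominator pool X_k ∪ X*
    logits = zk @ cand.t() * theta                    # similarities scaled by the dummy θ
    n = zk.size(0)
    logits[torch.arange(n), torch.arange(n)] = -1e9   # exclude x itself (x* stays in the pool)
    pos = (zk * zk_aug).sum(dim=1) * theta            # numerator term x^T x* · θ
    return (torch.logsumexp(logits, dim=1) - pos).sum()

def irm_objective(z, z_aug, partition, lam):
    """Sum over both subsets of L + lam · ||grad_θ L||², evaluated at the dummy θ = 1."""
    total = z.new_zeros(())
    for k in (0, 1):
        theta = torch.ones((), requires_grad=True)
        loss = subset_loss(z, z_aug, partition[:, k].bool(), theta)
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        total = total + loss + lam * grad.pow(2)
    return total

# Step 1 (Eq. 2): minimise sum(irm_objective(z, z_aug, P, lam1) for P in partitions) over φ.
# Step 2 (Eq. 3): with φ fixed, maximise irm_objective over a continuous partition matrix,
#                 threshold it to {0, 1}^{N×2}, and append it to the partition set.
```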
It is worth noting that g is any group element in G while gi is a Cartesian “building block” of G, e.g., g can be decomposed as (g1, g2, ..., gm). We show that partition and group are tightly connected through the concept of an orbit. Given a sample x ∈ X, its group orbit w.r.t. G is the sample set G(x) = {g · x | g ∈ G}. As shown in Figure 3 (a), if G is a set of attributes shared by classes, e.g., “color” and “pose”, the orbit is the sample set of the class of x; in Figure 3 (b), if G denotes augmentations, the orbit is the set of augmented images. In particular, we can see that the disjoint orbits in Figure 3 naturally form a partition. Formally, we have the following definition:

Definition 2. (Orbit & Partition [47]) Given a subgroup D ⊂ G, it partitions X into the disjoint subsets {D(c1 · x), ..., D(ck · x)}, where k is the number of cosets {c1D, ..., ckD}, and the cosets form a factor group¹ G/D = {ci}_{i=1}^{k}. In particular, ci · x can be considered a sample of the i-th class, reached from any sample x ∈ X.

Interestingly, the partition offers a new perspective on the training data format in Supervised Learning (SL) and Self-Supervised Learning (SSL). In SL, as shown in Figure 3 (a), the data is labeled with k classes, each of which is an orbit with |D(ci · x)| training samples, whose variations are depicted by the class-sharing attribute group D. The cross-orbit group action, e.g., c_dog · x, can be read as “turn x into a dog”, and such a “turn” is always valid due to the assumption that X is a homogeneous space of G. In SSL, as shown in Figure 3 (b), each training sample x is augmented by the group D. So, D(ci · x) consists of all the augmentations of the i-th sample, where the cross-orbit group action ci · x can be read as “turn x into the i-th sample”.

¹Given G = D × K with K = c1 × … × ck, then D̄ = {(d, e) | d ∈ D} is a normal subgroup of G, and G/D̄ is isomorphic to K [47]. We write G/D = {ci}_{i=1}^{k} with slight abuse of notation.

Thanks to the orbit and partition view of training data, we are ready to revisit model generalization in a group-theoretic view using invariance and equivariance—the two sides of the coin whose name is disentanglement. For SL, we expect that a good feature is disentangled into a class-agnostic part and a class-specific part: the former (latter) is invariant (equivariant) to G/D—cross-orbit traverse, but equivariant (invariant) to D—in-orbit traverse. Using such a feature, a model can generalize to diverse testing samples (limited to |D| variations) by keeping only the class-specific feature. Formally, we prove that we can achieve such disentanglement by contrastive learning:

Lemma 1. (Disentanglement by Contrastive Learning) The training loss −log [ exp(xᵢᵀxⱼ) / ∑_{x∈X} exp(xⱼᵀx) ] disentangles X w.r.t. (G/D) × D, where xᵢ and xⱼ are from the same orbit.

We can draw the following interesting corollaries from Lemma 1 (details in Appendix):

1. If we use all the samples in the denominator of the loss, we can approximate G-equivariant features given limited training samples. This is because the loss minimization guarantees ∀(xᵢ, xⱼ) ∈ X × X, i ≠ j ⇒ xᵢ ≠ xⱼ, i.e., any pair corresponds to a group action.

2. The conventional cross-entropy loss in SL is a special case, if we define x ∈ X = {x1, ..., xk} as the k classifier weights. So, SL does not guarantee the disentanglement of G/D, which causes generalization error if the class domain of the downstream task is different from SL pre-training, e.g., a subset of G/D.

3.
In contrastive-learning-based SSL, D = “augmentations” (recall Figure 2), and the number of augmentations |Daug| is generally much smaller than the class-wise sample diversity |DSL| in SL. This enables the SL model to generalize to more diverse testing samples (|DSL|) by filtering out the class-agnostic features (e.g., background) and focusing on the class-specific ones (e.g., foreground), which explains why SSL is worse than SL in downstream classification.

4. In SL, if the number of training samples per orbit is not enough, i.e., smaller than |D(ci · x)|, the disentanglement between D and G/D cannot be guaranteed, as in the challenges of few-shot learning [96]. Fortunately, in SSL the number is enough, as we always include all the augmented samples in training. Moreover, we conjecture that Daug only contains simple cyclic group elements such as rotation and colorization, which are easier for representation learning.

Lemma 1 does not guarantee the decomposability of each d ∈ D. Nonetheless, the downstream model can still generalize by keeping the class-specific features affected by G/D. Therefore, the key to fill the gap, or even let SSL surpass SL, is to achieve the full disentanglement of G/Daug.

Theorem 1. The representation is fully disentangled w.r.t. G/Daug if and only if ∀ci ∈ G/Daug, the contrastive loss in Eq. (1) is invariant to the 2 orbits of the partition {G′(ci · x), G′(ci⁻¹ · x)}, where G′ = G/ci = Daug × c1 × … × c_{i−1} × c_{i+1} × … × ck.

The maximization in Step 2 is based on the contraposition of the sufficient condition of Theorem 1. Denote the currently disentangled group as D (initially Daug). If we can find a partition P* that maximizes the loss in Eq. (3), i.e., the SSL loss is variant across the orbits, then ∃h ∈ G/D such that the representation of h is entangled, i.e., P* = {D(h · x), D(h⁻¹ · x)}. Figure 3 (c) illustrates a discovered partition about color. The minimization in Step 1 is based on the necessary condition of Theorem 1. Based on the discovered P*, if we minimize Eq. (2), we can further disentangle h and update D ← D × h. Overall, IP-IRM converges as G/Daug is finite. Note that an improved contrastive objective [92] can further disentangle each d ∈ Daug and achieve full disentanglement w.r.t. G.

5 Experiments

5.1 Unsupervised Disentanglement

Datasets. We used two datasets. CMNIST [2] has 60,000 digit images with semantic labels of digits (0-9) and colors (red and green). These images differ in other semantics (e.g., slant and font) that are not labeled. Moreover, there is a strong correlation between digits and colors (most 0-4 in red and 5-9 in green), increasing the difficulty of disentangling them. Shapes3D [50] contains 480,000 images with 6 labelled semantics, i.e., size, type, azimuth, as well as floor, wall and object color. Note that we only considered the first three semantics for evaluation, as the standard augmentations in SSL will contaminate any color-related semantics.

Settings. We adopted 6 representative disentanglement metrics: Disentangle Metric for Informativeness (DCI) [29], Interventional Robustness Score (IRS) [79], Explicitness Score (EXP) [72], Modularity Score (MOD) [72], and the accuracy of predicting the ground-truth semantic labels by two classification models, logistic regression (LR) and gradient boosted trees (GBT) [59]. Specifically, DCI and EXP measure explicitness, i.e., whether the values of semantics can be decoded from the feature using a linear transformation. MOD and IRS measure modularity, i.e., whether each feature dimension is equivariant to the shift of a single semantic. See the Appendix for detailed formulas of the metrics.
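As a rough illustration of the explicitness idea (a simplified stand-in of ours, not the exact EXP/DCI formulas used in the benchmark), one can fit a linear probe from features to each ground-truth semantic and report its accuracy:

```python
# Simplified explicitness-style probe: linear decodability of each semantic factor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def explicitness(features, factor_labels):
    """features: (N, d) array; factor_labels: dict name -> (N,) integer semantic labels."""
    scores = {}
    for name, y in factor_labels.items():
        tr_x, te_x, tr_y, te_y = train_test_split(features, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(tr_x, tr_y)
        scores[name] = clf.score(te_x, te_y)   # accuracy of linearly decoding the factor
    return scores

# e.g. explicitness(z, {"size": size_lbl, "type": type_lbl, "azimuth": azi_lbl})
```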
In evaluation, we trained CNN-based feature extractor backbones with comparable numbers of parameters for all the baselines and our IP-IRM. The full implementation details are in the Appendix.

Results. In Table 1, we compared the proposed IP-IRM to the standard SSL method SimCLR [16] as well as several generative disentanglement methods [51, 41, 9, 15, 50]. On both the CMNIST and Shapes3D datasets, IP-IRM outperforms SimCLR on all metrics except IRS, where the largest relative gain is 8.8% for MOD. For MOD, we notice that VAE performs better than our IP-IRM by 6 points, i.e., 0.82 vs. 0.76 on Shapes3D. This is because VAE explicitly pursues a high modularity score by regularizing dimension-wise independence in the feature space. However, this regularization is adversarial to discriminative objectives [14, 95]. Indeed, we can observe from the LR column (i.e., the performance of downstream linear classification) that VAE methods perform clearly worse, especially on the more challenging Shapes3D dataset. We can draw the same conclusion from the GBT results. Different from VAE methods, our IP-IRM is optimized towards disentanglement without such regularization, and is thus able to outperform the others on downstream tasks while obtaining a competitive modularity value.

What do IP-IRM features look like? Figure 4 visualizes the features learned by SimCLR and our IP-IRM on two datasets: CMNIST in Figure 4 (a) and STL10 in Figure 4 (b). In the following, we use Figure 4 (a) as the example; similar conclusions can easily be drawn from Figure 4 (b). On the left-hand side of Figure 4 (a), it is obvious that there is no clear boundary distinguishing the color semantic in the SimCLR feature space. Besides, the features of the same digit semantic are scattered across two regions. On the right-hand side of (a), we make 3 observations for IP-IRM. 1) The features are well clustered and each cluster corresponds to a specific semantic of either digit or color. This validates the equivariance property of the IP-IRM representation: it responds to any change of the existing semantics, e.g., digit and color on this dataset. 2) The feature space has a symmetrical structure for each individual semantic, validating the decomposability of the IP-IRM representation. More specifically, i) mirroring a feature (w.r.t. “*” in the figure center) indicates a change of only the color semantic, regardless of the other semantic (digit); and ii) a counterclockwise rotation (denoted by black arrows from same-colored 1 to 7) indicates a change of only the digit semantic. 3) IP-IRM reveals the true distribution (similarity) of different classes. For example, digits 3, 5, 8, which share sub-parts (curved bottoms and turnings), have closer feature points in the IP-IRM feature space.

How does IP-IRM disentangle features? 1) Discovered P*: To visualize the discovered partitions P* at each maximization step, we performed an experiment on a binary CMNIST (digits 0 and 1 in colors red and green), and show the results in Figure 5 (a). Please refer to the Appendix for the full results on CMNIST. First, each partition tells apart a specific semantic into two subsets, e.g., in Partition #1, red and green digits are separated.
Second, besides the obvious semantics—digit and color (labelled on the dataset)—we can discover new semantics, e.g., the digit slant shown in Partition #3. 2) Disentangled Representation: In Figure 5 (b), we visualize how equivariant each feature dimension is to the change of each semantic, i.e., a darker color shows that a dimension is more equivariant w.r.t. the semantic indicated on the left. We can see that SimCLR fails to learn a decomposable representation, e.g., the 8-th dimension captures azimuth, type and size in Shapes3D. In contrast, our IP-IRM achieves disentanglement by representing the semantics in interpretable dimensions, e.g., the 6-th and 7-th dimensions capture the size, the 4-th the type, and the 2-nd and 9-th the azimuth on Shapes3D. Overall, the results support the justification in Section 4, i.e., we discover a new semantic (affected by h) through the partition P* at each iteration, and IP-IRM eventually converges to a disentangled representation.

5.2 Self-Supervised Learning

Datasets and Settings. We conducted the SSL evaluations on 2 standard benchmarks following [88, 20, 48]. Cifar100 [54] contains 60,000 images in 100 classes and STL10 [22] has 113,000 images in 10 classes. We used SimCLR [16], DCL [20] and HCL [48] as baselines, and learned the representations for 400 and 1000 epochs. We evaluated both linear and k-NN (k = 200) accuracies for the downstream classification task. Implementation details are in the Appendix.

Method                    STL10 k-NN   STL10 Linear   Cifar100 k-NN   Cifar100 Linear
400-epoch training
SimCLR [16]               73.60        78.89          54.94           66.63
DCL [20]                  78.82        82.56          57.29           68.59
HCL [48]                  80.06        87.60          59.61           69.22
SimCLR+IP-IRM             79.66        84.44          59.10           69.55
DCL+IP-IRM                81.51        85.36          58.37           68.76
HCL+IP-IRM                84.29        87.81          60.05           69.95
1,000-epoch training
SimCLR [16]               78.60        84.24          59.45           68.73
SimCLR† [55]              79.80        85.56          63.67           72.18
SimCLR†+IP-IRM            85.08        89.91          65.82           73.99
Supervised∗               -            -              -               73.72
Supervised∗+MixUp [97]    -            -              -               74.19

Notably, our SimCLR+IP-IRM surpasses vanilla supervised learning on Cifar100 under the same evaluation setting. Still, the quality of disentanglement cannot be fully evaluated when the training and test samples are identically distributed—while the improved accuracy demonstrates that the IP-IRM representation is more equivariant to class semantics, it does not reveal whether the representation is decomposable. Hence we present an out-of-distribution (OOD) setting in Section 5.3 to further show this property.

Is IP-IRM sensitive to the values of hyper-parameters? 1) λ1 and λ2 in Eq. (2) and Eq. (3). In Figure 6 (a), we observe that the best performance is achieved with λ1 and λ2 taking values from 0.2 to 0.5 on both datasets. All accuracies drop sharply when using λ1 = 1.0. The reason is that a higher λ1 forces the model to push the φ-induced similarity to the fixed baseline θ = 1, rather than decrease the loss L on the pretext task, leading to poor convergence. 2) The number of epochs. In Figure 6 (b), we plot the Top-1 accuracies of k-NN classifiers along the 700-epoch training of two kinds of SSL representations—SimCLR and IP-IRM. It is obvious that IP-IRM converges faster and achieves a higher accuracy than SimCLR. It is worth highlighting that on STL10, the accuracy of SimCLR starts to oscillate and grow slowly after the 150-th epoch, while ours keeps improving. This is empirical evidence that IP-IRM keeps disentangling more and more semantics in the feature space, and has the potential to improve through long-term training.
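For reference, a minimal sketch of the k-NN evaluation protocol (k = 200) commonly used in SSL benchmarks; we assume L2-normalised features and plain majority voting, whereas the exact weighting in the evaluated codebases may differ:

```python
# Minimal k-NN (k = 200) feature evaluation, as commonly used in SSL benchmarks.
import numpy as np

def knn_accuracy(train_z, train_y, test_z, test_y, k=200):
    sims = test_z @ train_z.T                     # cosine similarity (features normalised)
    topk = np.argsort(-sims, axis=1)[:, :k]       # indices of the k nearest neighbours
    votes = train_y[topk]                         # neighbour labels, shape (N_test, k)
    preds = np.array([np.bincount(v).argmax() for v in votes])
    return (preds == test_y).mean()
```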
5.3 Potential on Large-Scale Data

Datasets. We evaluated on the standard benchmark of supervised learning, ImageNet ILSVRC2012 [25], which has in total 1,331,167 images in 1,000 classes. To further reveal whether a representation is decomposable, we used NICO [37], a real-world image dataset designed for OOD evaluations. It contains 25,000 images in 19 classes, with a strong correlation between the foreground and background in the train split (e.g., most dogs on grass). We also studied the transferability of the learned representation following [30, 52]: FGVC Aircraft (Aircraft) [60], Caltech-101 (Caltech) [31], Stanford Cars (Cars) [93], Cifar10 [53], Cifar100 [53], DTD [21], Oxford 102 Flowers (Flowers) [62], Food-101 (Food) [6], Oxford-IIIT Pets (Pets) [67] and SUN397 (SUN) [91]. These datasets include coarse- to fine-grained classification tasks, and vary in the amount of training data (2,000-75,000 images) and classes (10-397 classes), representing a wide range of transfer learning settings.

Settings. For ImageNet, all the representations were trained for 200 epochs due to limited computing resources. We followed the common setting [80, 36], using a linear classifier, and report Top-1 classification accuracies. For NICO, we fixed the ImageNet pre-trained ResNet-50 backbone and fine-tuned the classifier. See the Appendix for more training details. For transfer learning, we followed [30, 52] to report the classification accuracies on Cars, Cifar-10, Cifar-100, DTD, Food, SUN and the average per-class accuracies on Aircraft, Caltech, Flowers, Pets; we refer to both uniformly as Accuracy. We used the few-shot n-way-k-shot setting for model evaluation. Specifically, we randomly sampled 2,000 episodes from the test splits of the above datasets. An episode contains n classes, each with k training samples and 15 testing samples; we fine-tuned the linear classifier (backbone weights frozen) for 100 epochs on the training samples, and evaluated the classifier on the testing samples. We evaluated with n = k = 5 (results of n = 5, k = 20 in the Appendix).

ImageNet and NICO. In Table 3, on ImageNet accuracy, our IP-IRM achieves the best performance among all baseline models. Yet we believe that this does not show the full potential of IP-IRM, because ImageNet is a larger-scale dataset with many semantics, and it is hard to achieve a full disentanglement of all semantics within the limited 200 epochs. To evaluate the feature decomposability of IP-IRM, we compared the performance on NICO with various SSL baselines in Table 3, where our approach significantly outperforms the baselines by 1.5-4.2%. This validates that the IP-IRM feature is more decomposable—if each semantic feature (e.g., background) is decomposed into some fixed dimensions and some classes vary with that semantic, then the classifier will recognize it as a non-discriminative variant feature and hence focus on other, more discriminative features (i.e., foreground). In this way, even though some classes are confounded by those non-discriminative features (e.g., most of the “dog” images have a “grass” background), the fixed dimensions still help classifiers neglect the non-discriminative ones. We further visualized the CAM [98] on NICO in Figure 7, which indeed shows that IP-IRM helps the classifier focus on the foreground regions.

Few-Shot Tasks. As shown in Table 4, our IP-IRM significantly improves the performance in the 5-way-5-shot setting, e.g., we outperform the baseline MoCo-v2 by 2.2%.
This is because IP-IRM can further disentangle G/Daug beyond SSL, which is essential for representations to generalize to different downstream class domains (recall Corollary 2 of Lemma 1). This is also in line with recent works [86] showing that a disentangled representation is especially beneficial in low-shot scenarios, and it further demonstrates the importance of disentanglement in downstream tasks.

6 Conclusion

We presented an unsupervised disentangled representation learning method called Iterative Partition-based Invariant Risk Minimization (IP-IRM), based on Self-Supervised Learning (SSL). IP-IRM iteratively partitions the dataset into semantic-related subsets, and learns a representation invariant across the subsets using SSL with an IRM loss. We show, with theoretical guarantees, that IP-IRM converges to a disentangled representation under the group-theoretic view, which fundamentally surpasses the capabilities of existing SSL and fully-supervised learning. Our proposed theory is backed by strong empirical results in disentanglement metrics, SSL classification accuracy and transfer performance. IP-IRM achieves disentanglement without using generative models, making it widely applicable to large-scale visual tasks. As future directions, we will continue to explore the application of group theory in representation learning and seek additional forms of inductive bias for faster convergence.

Acknowledgments and Disclosure of Funding

The authors would like to thank all reviewers for their constructive suggestions. This research is partly supported by the Alibaba-NTU Joint Research Institute, the A*STAR under its AME YIRG Grant (Project No. A20E6c0101), and the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant.
1. What is the focus of the paper regarding self-supervised learning and disentangled representation?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. Do you have any questions or concerns regarding the technical aspects, such as the iterative partition mechanism, the combination of IRM, SSL, and SB-disentanglement, and the proofs in the paper?
4. How does the reviewer assess the quality, originality, clarity, and significance of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper proposes a self-supervised learning model which is a variant of Invariant Risk Minimization with an iterative partition mechanism. The method can encourage the self-supervised learning of a disentangled representation. Empirical results validate the effectiveness of the proposed model on self-supervised learning and disentangled representation learning.

Review

Originality: The proposed method depends on the Invariant Risk Minimization model. The disentanglement validation depends on the symmetry-based (SB) disentangled representation definition. The overall learning setting is the same as general self-supervised learning. The core technical originality, I personally believe, is the iterative partition mechanism, but I would also consider the combination of IRM, SSL, and SB-disentanglement novel and original, as they are not directly related.

Quality: The experimental results clearly validate the disentanglement and self-supervised learning quality, as they improve the existing SSL baselines by large margins. However, my main concern is about the justification part (Section 4) of the paper, where I am not fully convinced by the proofs of Lemma 1 (and therefore Theorem 1). In Line 155, the paper states there are two possible cases if Lemma 1 doesn't hold. But why are there only these two cases? It looks like lines 155-156 and appendix lines 61-62 try to describe the same thing: (1) main paper lines 155-156: 'f is a collapsed representation that maps all inputs to a fixed vector, which is impossible in today's deep model training [88]'; (2) appendix lines 61-62: 'The representation is not equivariant w.r.t. the action of \Pi G_i. As discussed in Lemma 1, this is impossible in today's deep model training [26].' However, nothing in the Lemma 1 discussion is related to the appendix statement about the equivariance with the product of subgroups, and neither is the cited paper ([26] of the appendix). This makes me very confused about the correctness of proof B.1 in the appendix. In main paper lines 156-159, the paper cites [35] to say there is always a linear projection. But why is it related to the group G_{t+1}? And why is it related to the proof of Lemma 1? If the SB-disentanglement is actually realized as claimed in Section 4, can the authors show the specific learned groups (or subgroups) empirically? As far as I can see in the disentanglement experiments, what the paper tries to emphasize is the separation of latent subspaces, which corresponds to traditional disentangled representation learning without considering the groups that can actually act on them. The unsupervised disentanglement experiment only reports the means of the metrics and no standard deviations. Typically, models should be run with multiple random seeds to get a reliable evaluation of disentanglement performance.

Clarity: There could be some improvements. There should be some preliminaries about IRM: at least a general introduction of what problem IRM is trying to solve, why it works, and why the SSL task falls into its setting. The justification part (Section 4) should be improved regarding its proofs of Lemma 1 and Theorem 1 (see my concerns in the Quality part).

Significance: The direction of improving the disentanglement property of self-supervised learning representations is promising and important. The results are significant and I believe it will benefit the representation learning community.
NIPS
Title SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning

Abstract Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in global reward games; however, its underlying mechanism is not yet fully understood. This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory. We generalise the Shapley value to the Markov convex game, called the Markov Shapley value (MSV), and apply it as a value factorisation method in the global reward game, which is obtained by the equivalence between the two games. Based on the properties of the MSV, we derive the Shapley-Bellman optimality equation (SBOE) to evaluate the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose the Shapley-Bellman operator (SBO), which is proved to solve the SBOE. With a stochastic approximation and some transformations, a new MARL algorithm called Shapley Q-learning (SHAQ) is established, the implementation of which is guided by the theoretical results of the SBO and the MSV. We also discuss the relationship between SHAQ and relevant value factorisation methods. In the experiments, SHAQ exhibits not only superior performance on all tasks but also the interpretability that agrees with the theoretical analysis. The implementation of this paper is available at https://github.com/hsvgbkhgbv/shapley-q-learning.

1 Introduction

Cooperative games are a critical research area in multi-agent reinforcement learning (MARL). Many real-life tasks can be modeled as cooperative games, e.g. the coordination of autonomous vehicles [1], autonomous distributed logistics [2] and distributed voltage control in power networks [3]. In this paper, we consider the global reward game (a.k.a. team reward game), an important subclass of cooperative games, wherein agents aim to jointly maximize cumulative global rewards over time. There are two categories of methods to solve this problem: (i) each agent identically maximizes cumulative global rewards, i.e. learning with a shared value function [4-6]; and (ii) each agent individually maximizes distributed values, i.e. learning with (implicit) credit assignments (e.g. marginal contribution and value factorisation) [7-11].

From the view of non-cooperative game theory, the global reward game is equivalent to a Markov game [12] with a global reward (a.k.a. team reward). Its aim is to learn a stationary joint policy to reach a Markov equilibrium, so that no agent tends to unilaterally change its policy to maximize cumulative global rewards. Standing by this view, learning with value factorisation cannot be directly explained [13]. In this paper, to clearly interpret value factorisation, we take the perspective of cooperative game theory [14], wherein agents are partitioned into coalitions and a payoff distribution scheme is found to distribute optimal values to coalitions. The corresponding solution is called the Markov core, whereby no agent has an incentive to deviate. When all agents are partitioned into one coalition (called the grand coalition), the payoff distribution scheme naturally plays the role of value factorisation. Wang et al. [13] extended the convex game (i.e. a game model in cooperative game theory) [14] to dynamic scenarios, which we name the Markov convex game in this paper.

*Correspondence to Yunjie Gu, who is also an honorary lecturer at Imperial College London.
We construct the analytic form of the Shapley value for the Markov convex game, and prove that it reaches the Markov core under the grand coalition; we name it the Markov Shapley value. The optimal Markov Shapley value implies not only the optimal global value but also that no agent has an incentive to deviate from the grand coalition. Additionally, the Markov Shapley value enjoys the following properties: (i) identifiability of dummy agents; (ii) efficiency; (iii) reflecting the contribution; and (iv) symmetry. These properties aid the interpretation and validity of value factorisation in the global reward game, and such transparency and reliability are critical to industrial applications [3].

Based on the efficiency property, we derive the Shapley-Bellman optimality equation, which is an extension of the Bellman optimality equation [15, 16]. Moreover, we propose the Shapley-Bellman operator and prove its convergence to the Shapley-Bellman optimality equation and its optimal joint deterministic policy. With a stochastic approximation of the Shapley-Bellman operator and some transformations, we derive an algorithm called Shapley Q-learning (SHAQ). SHAQ learns to approximate the optimal Markov Shapley Q-value (an equivalent form of the optimal Markov Shapley value). Moreover, we make SHAQ decentralised in order to fit the decentralised execution framework, and this decentralisation still preserves the convergence condition of the Shapley-Bellman operator. The proposed method, SHAQ, is evaluated on two global reward games: Predator-Prey [17] and multi-agent StarCraft benchmark tasks [18]. In the experiments, SHAQ shows not only generally good performance on all tasks but also the interpretability that is deficient in the state-of-the-art baselines.

2 Markov Convex Game

We now formally define the Markov convex game (MCG), which can be described as a tuple ⟨N, S, A, T, Λ, π, R_t, γ⟩. N is the set of all agents. S is the set of states, and A = ×_{i∈N} A_i is the joint action set of all agents, wherein A_i is each agent's action set. T(s, a, s′) = Pr(s′ | s, a) is the transition probability between successive states. CS = {C_1, ..., C_n} is a coalition structure, where a coalition C_i ⊆ N is a subset of all agents. Λ is a collection of coalition structures. ∅ and N are two special cases of coalitions, i.e. the empty coalition and the grand coalition respectively. Conventionally, it is assumed that C_m ∩ C_k = ∅, ∀C_m, C_k ⊆ N. π = ×_{i∈N} π_i is the joint policy of all agents. Any coalition C is equipped with a coalition policy π_C(a_C | s) = ×_{i∈C} π_i(a_i | s) defined over the coalition action set A_C = ×_{i∈C} A_i; therefore, π can be seen as the grand coalition policy. R_t : S × A_C → [0, ∞) (i.e., a characteristic function) is the coalition reward at time step t. Accordingly, R_t(s, a) is the grand coalition reward (i.e., equivalent to the global reward) at time step t, written as R(s, a) or R for conciseness in the rest of the paper. γ ∈ (0, 1) is the discount factor. The infinite-horizon discounted cumulative coalition reward is defined as V^{π_C}(s) = E_{π_C}[ ∑_{t=1}^{∞} γ^{t−1} R_t(s, a_C) | S_1 = s ] ∈ [0, ∞), called a coalition value. Moreover, the empty coalition value is V^{π_∅}(s) = 0, and V^{π}(s) denotes the grand coalition value (also called the global value, by the equivalence proof from [13]). The solution of the MCG is a tuple ⟨CS, (max_{π_i} x_i(s))_{i∈N}⟩, where (max_{π_i} x_i(s))_{i∈N} indicates the payoff distributions (i.e. credit assignments) under the optimal joint policy given a coalition structure.
Under the assumption C_m ∩ C_k = ∅, ∀C_m, C_k ⊆ N, the condition for the MCG is as follows:

max_{π_{C∪}} V^{π_{C∪}}(s) ≥ max_{π_{C_m}} V^{π_{C_m}}(s) + max_{π_{C_k}} V^{π_{C_k}}(s),  ∀C_m, C_k ⊆ N, C∪ = C_m ∪ C_k.    (1)

In an MCG with the grand coalition, i.e. CS = {N}, the Markov core, a solution concept describing stability, is defined as the set of payoff distribution schemes under which no agent has an incentive to deviate from the grand coalition to gain more profit. Mathematically, the Markov core can be expressed as:

MarkovCore = { (max_{π_i} x_i(s))_{i∈N} | max_{π_C} x(s | C) ≥ max_{π_C} V^{π_C}(s), ∀C ⊆ N, s ∈ S },    (2)

where max_{π_C} x(s | C) = ∑_{i∈C} max_{π_i} x_i(s). The aim is to find a payoff distribution scheme (x_i(s))_{i∈N} that finally converges to the Markov core under the optimal joint policy.

To assist the application to Q-learning, we similarly define the coalition Q-value Q^{π_C}(s, a_C) ∈ [0, +∞) for all coalitions C ⊂ N. Following the above convention, the grand coalition Q-value (or global Q-value) can be written as Q^{π}(s, a). Moreover, the optimal coalition Q-value of C w.r.t. the optimal joint policy of D ⊆ C (i.e., π*_D) and the suboptimal joint policy of C\D (i.e., π_{C\D}) is defined as Q^{π*_D}(s, a_C). Accordingly, the optimal coalition Q-value of C w.r.t. the optimal joint policy of C is defined as Q^{π*_C}(s, a_C), and the optimal global Q-value w.r.t. the optimal joint policy of the grand coalition is denoted as Q^{π*}(s, a).

3 Markov Shapley Value

From the view of cooperative game theory, the grand coalition is progressively formed by a permutation of agents. Accordingly, the marginal contribution is an implementation of the credit reflecting an agent's contribution. The formal definition is given in Definition 1.

Definition 1. In a Markov convex game, with a permutation of agents ⟨j_1, j_2, ..., j_{|N|}⟩, ∀j_n ∈ N, forming the grand coalition N, where n ∈ {1, ..., |N|} and j_a ≠ j_b if a ≠ b, the marginal contribution of an agent i is defined as

Φ_i(s | C_i) = max_{π_{C_i ∪ {i}}} V^{π_{C_i ∪ {i}}}(s) − max_{π_{C_i}} V^{π_{C_i}}(s),    (3)

where C_i = {j_1, ..., j_{n−1}} for j_n = i is an arbitrary intermediate coalition that agent i joins during the process of grand coalition formation.

Proposition 1. Agent i's action marginal contribution can be derived as follows:

Φ_i(s, a_i | C_i) = max_{a_{C_i}} Q^{π*_{C_i}}(s, a_{C_i ∪ {i}}) − max_{a_{C_i}} Q^{π*_{C_i}}(s, a_{C_i}).    (4)

As Proposition 1 shows, an agent's action marginal contribution (analogous to a Q-value) can be derived according to Eq. 4, which is usually more useful for solving MARL problems. It is apparent that a marginal contribution only considers one permutation to form the grand coalition. From the viewpoint of Shapley [19], fairness is achieved by considering how much agent i increases the optimal values (i.e. marginal contributions) of the coalitions in all possible permutations when it joins them, i.e., max_{π_{C_i ∪ {i}}} V^{π_{C_i ∪ {i}}}(s) − max_{π_{C_i}} V^{π_{C_i}}(s), ∀C_i ⊆ N\{i}. Therefore, we construct the Shapley value under Markov dynamics based on the marginal contributions, as shown in Definition 2, and name it the Markov Shapley value (MSV).

Definition 2. The Markov Shapley value is represented as

V_i^Φ(s) = ∑_{C_i ⊆ N\{i}} [ |C_i|! (|N| − |C_i| − 1)! / |N|! ] · Φ_i(s | C_i).    (5)

With a deterministic policy, the Markov Shapley value can be equivalently represented as

Q_i^Φ(s, a_i) = ∑_{C_i ⊆ N\{i}} [ |C_i|! (|N| − |C_i| − 1)! / |N|! ] · Φ_i(s, a_i | C_i),    (6)

where Φ_i(s | C_i) is defined in Eq. 3 and Φ_i(s, a_i | C_i) is defined in Eq. 4. For convenience, we name Eq. 6 the Markov Shapley Q-value (MSQ). Briefly, the MSV calculates the weighted average of marginal contributions. Since a coalition may repeatedly appear among all permutations (i.e. |N|!
permutations), the ratio between the occurrence frequency |C_i|! (|N| − |C_i| − 1)! and the total frequency |N|! is used as a weight describing the importance of the corresponding marginal contribution. Besides, the sum of all weights equals 1, so the weights can be interpreted as a probability distribution. Consequently, the MSV can be seen as an expectation of marginal contributions, denoted as E_{C_i ∼ Pr(C_i | N\{i})}[Φ_i(s | C_i)]. Note that Pr(C_i | N\{i}) is a bell-shaped probability distribution. From the above relationship, Remark 1 is directly obtained.

Remark 1. Uniformly sampling different permutations is equivalent to directly sampling from Pr(C_i | N\{i}), since the coalition generation comes from the permutations forming the grand coalition.

Proposition 2. The Markov Shapley value possesses the following properties: (i) identifiability of dummy agents: V_i^Φ(s) = 0; (ii) efficiency: max_π V^π(s) = ∑_{i∈N} max_{π_i} V_i^Φ(s); (iii) reflecting the contribution; and (iv) symmetry.

Proposition 2 shows four properties of the MSV. The most important is Property (ii), which aids the formulation of the Shapley-Bellman optimality equation. Property (iii) shows that the MSV is a fundamental index to quantitatively describe each agent's contribution. Properties (i) and (iii) play important roles in the interpretation of value factorisation (or credit assignment). Property (iv) indicates that if two agents are symmetric, then their optimal MSVs are equal, but the reverse does not necessarily hold. All these properties defining fairness are inherited from the original Shapley value [19].

4 Shapley Q-Learning

4.1 Definition and Formulation

Shapley-Bellman Optimality Equation. Based on the Bellman optimality equation [15] and the following conditions (whose interpretability is left to Section 4.2):

C.1. Efficiency of the MSV (i.e. the result from Proposition 2);

C.2. Q_i^{Φ*}(s, a_i) = w_i(s, a_i) Q^{π*}(s, a) − b_i(s), where w_i(s, a_i) > 0 and b_i(s) ≥ 0 are bounded and ∑_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0,

we derive the Shapley-Bellman optimality equation (SBOE) for evaluating the optimal MSQ (an equivalent form of the optimal MSV):

Q^{Φ*}(s, a) = w(s, a) [ ∑_{s′∈S} Pr(s′ | s, a) ( R + γ ∑_{i∈N} max_{a_i} Q_i^{Φ*}(s′, a_i) ) ] − b(s),    (7)

where w(s, a) = [w_i(s, a_i)] ∈ R^{|N|}_{>0}, b(s) = [b_i(s)] ∈ R^{|N|}_{≥0}, Q^{Φ*}(s, a) = [Q_i^{Φ*}(s, a_i)] ∈ R^{|N|}_{≥0}, and Q_i^{Φ*}(s, a_i) denotes the optimal MSQ. If Eq. 7 holds, the optimal MSQ is achieved. Moreover, it reveals the implication that for any s ∈ S and a*_i = argmax_{a_i} Q_i^{Φ*}(s, a_i), we have the solution w_i(s, a*_i) = 1/|N| (see Appendix E.4.1). Literally, the assigned credits would be equal, and each agent would receive Q^{π*}(s, a)/|N| when performing the optimal actions. It is apparent that efficiency still holds under this situation, which can be interpreted as an extremely fair credit assignment: the credit to each agent should not be discriminated if all of them perform optimally, regardless of their roles. The equal credit assignment was also revealed by Wang et al. [20] recently from another perspective of analysis. Nevertheless, w_i(s, a_i) for a_i ≠ argmax_{a_i} Q_i^{Φ*}(s, a_i) needs to be learned.

Shapley-Bellman Operator. To find an optimal solution described by Eq. 7, we now propose an operator called the Shapley-Bellman operator (SBO), Υ : ×_{i∈N} Q_i^Φ(s, a_i) → ×_{i∈N} Q_i^Φ(s, a_i), which is defined as follows:

Υ( ×_{i∈N} Q_i^Φ(s, a_i) ) = w(s, a) [ ∑_{s′∈S} Pr(s′ | s, a) ( R + γ ∑_{i∈N} max_{a_i} Q_i^Φ(s′, a_i) ) ] − b(s),    (8)

where w_i(s, a_i) = 1/|N| when a_i = argmax_{a_i} Q_i^Φ(s, a_i). We prove that the optimal joint deterministic policy can be achieved by recursively running the SBO, as stated in Theorem 1 below.
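Before the convergence result, a toy sketch may help build intuition for the weighting in Eq. 5 and the efficiency property in Proposition 2. The static characteristic function v below is a hypothetical stand-in for the optimal coalition values of the MCG, not an example from the paper:

```python
# Exact Shapley value on a toy 3-agent characteristic function (assumed values).
from itertools import combinations
from math import factorial

agents = (0, 1, 2)
v = {(): 0, (0,): 1, (1,): 1, (2,): 2,
     (0, 1): 4, (0, 2): 5, (1, 2): 5, (0, 1, 2): 9}

def shapley(i):
    n, total = len(agents), 0.0
    others = [a for a in agents if a != i]
    for size in range(n):
        for C in combinations(others, size):
            w = factorial(len(C)) * factorial(n - len(C) - 1) / factorial(n)  # Eq. (5) weight
            total += w * (v[tuple(sorted(C + (i,)))] - v[C])                  # marginal contribution
    return total

phi = [shapley(i) for i in agents]
print(phi)                                  # each agent's (static) Shapley value
assert abs(sum(phi) - v[agents]) < 1e-9     # efficiency: shares sum to the grand-coalition value
```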
Theorem 1. The Shapley-Bellman operator converges to the optimal Markov Shapley Q-value and the corresponding optimal joint deterministic policy when max_s ∑_{i∈N} max_{a_i} w_i(s, a_i) < 1/γ.

Shapley Q-Learning. For easy implementation, we transform the stochastic approximation of the SBO and derive Shapley Q-learning (SHAQ), whose TD error is as follows:

Δ(s, a, s′) = R + γ ∑_{i∈N} max_{a_i} Q_i^Φ(s′, a_i) − ∑_{i∈N} δ_i(s, a_i) Q_i^Φ(s, a_i),    (9)

where

δ_i(s, a_i) = { 1,  a_i = argmax_{a_i} Q_i^Φ(s, a_i);  α_i(s, a_i),  a_i ≠ argmax_{a_i} Q_i^Φ(s, a_i) }.    (10)

Actually, the closed-form expression of δ_i(s, a_i) is |N|^{−1} w_i(s, a_i)^{−1}. Inserting the condition that w_i(s, a_i) = 1/|N| when a_i = argmax_{a_i} Q_i^Φ(s, a_i), and defining δ_i(s, a_i) as α_i(s, a_i) when a_i ≠ argmax_{a_i} Q_i^Φ(s, a_i), yields Eq. 10. The term b(s) is cancelled in Eq. 9 thanks to the condition ∑_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0. Note that the condition on w_i(s, a_i) in Theorem 1 should hold for the convergence of SHAQ in implementation (see Appendix E.4.4).

4.2 Validity and Interpretability

In this section, we show the validity of the SBOE and the interpretability of SHAQ, i.e., we provide the reasons why the SBOE is valid to formulate and why SHAQ is an interpretable value factorisation method for the global reward game.

Theorem 2. The optimal Markov Shapley value is a solution in the Markov core under the Markov convex game with the grand coalition.

Remark 2. For an arbitrary state s ∈ S, by C.2 it is not difficult to check that even if an arbitrary agent i is dummy (i.e., Q_i^{Φ*}(s, a_i) = 0 for some i ∈ N), Q^{π*}(s, a) and Q_j^{Φ*}(s, a_j), ∀j ≠ i, would not be zero if b_i(s) ≠ 0. In the extreme case where for an arbitrary state s ∈ S all agents are dummies, since ∑_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0, we are allowed to set b_i(s) = 0, ∀i ∈ N, so that Q^{π*}(s, a) = 0 and efficiency, i.e. max_a Q^{π*}(s, a) = ∑_{i∈N} max_{a_i} Q_i^{Φ*}(s, a_i), remains valid.

First, we prove that the optimal MSV is a solution in the Markov core under the grand coalition, as Theorem 2 shows. Since a solution in the Markov core implies the optimal global value (see Remark 5 in Appendix D.2.2), we can conclude that the optimal MSV leads to the optimal global value (a.k.a. social welfare), which links Condition C.1 to the Markov core. As a result, solving the SBOE is equivalent to solving the Markov core under the grand coalition, and SHAQ is actually a learning algorithm that reliably converges to the Markov core. As per the definition in Section 2, we can say that SHAQ leads to the result that no agent has an incentive to deviate from the grand coalition, which provides an interpretation of value factorisation for the global reward game. Condition C.2 maintains the validity of the relationship between the optimal MSQ and the optimal global Q-value even if there exist dummy agents (see Remark 2), so that the definition of the SBOE is valid for the MCG and the MSQ in almost every case, which preserves the completeness of the theory.

4.3 Implementations

We now describe a practical implementation of SHAQ for Dec-POMDPs [21] (i.e. the global reward game but with partial observations). First, the global state is replaced by the history of each agent to guarantee the optimal deterministic joint policy [21]. Accordingly, the Markov Shapley Q-value is denoted as Q_i^Φ(τ_i, a_i), wherein τ_i is a history of partial observations of agent i. Since the paradigm of centralised training decentralised execution (CTDE) [22] is applied, the global state s for α̂_i(s, a_i) can be obtained during training.

Proposition 3.
Suppose any action marginal contribution can be factorised into the form Φ_i(s, a_i | C_i) = β(s, a_{C_i ∪ {i}}) Q̂_i(s, a_i). With the condition

E_{C_i ∼ Pr(C_i | N\{i})}[β(s, a_{C_i ∪ {i}})] = { 1,  a_i = argmax_{a_i} Q_i^Φ(s, a_i);  K ∈ (0, 1),  a_i ≠ argmax_{a_i} Q_i^Φ(s, a_i) },

we have

Q_i^Φ(s, a_i) = Q̂_i(s, a_i),  a_i = argmax_{a_i} Q̂_i(s, a_i);
α_i(s, a_i) Q_i^Φ(s, a_i) = α̂_i(s, a_i) Q̂_i(s, a_i),  a_i ≠ argmax_{a_i} Q̂_i(s, a_i),    (11)

where α̂_i(s, a_i) = E_{C_i ∼ Pr(C_i | N\{i})}[β̂_i(s, a_i; a_{C_i})] and β̂_i(s, a_i; a_{C_i}) := α_i(s, a_i) β(s, a_{C_i ∪ {i}}).

Compatible with decentralised execution, we use only one parametric function Q̂_i(τ_i, a_i) to directly approximate Q_i^Φ(τ_i, a_i). By inserting Eq. 11 into Eq. 9, δ_i(s, a_i) is transformed into the form

δ̂_i(s, a_i) = { 1,  a_i = argmax_{a_i} Q̂_i(s, a_i);  α̂_i(s, a_i),  a_i ≠ argmax_{a_i} Q̂_i(s, a_i) },    (12)

where α̂_i(s, a_i) = E_{C_i ∼ Pr(C_i | N\{i})}[β̂_i(s, a_i; a_{C_i})]. To handle partial observability, Q̂_i(τ_i, a_i) is empirically represented as a recurrent neural network (RNN) with GRUs [23]. β̂_i(s, a_i; a_{C_i}) is directly approximated by a parametric function F_s plus 1, and thus α̂_i(s, a_i) can be expressed as

α̂_i(s, a_i) = (1/M) ∑_{k=1}^{M} F_s( Q̂_{C_i^k}(τ_{C_i^k}, a_{C_i^k}), Q̂_i(τ_i, a_i) ) + 1,    (13)

where Q̂_{C_i^k}(τ_{C_i^k}, a_{C_i^k}) = (1/|C_i^k|) ∑_{j∈C_i^k} Q̂_j(τ_j, a_j), and C_i^k is sampled M times from Pr(C_i | N\{i}) (i.e., implemented as Remark 1 suggests) to approximate E_{C_i ∼ Pr(C_i | N\{i})}[β̂_i(s, a_i; a_{C_i})] by Monte Carlo approximation; F_s is a monotonic function, followed by an absolute activation function, whose weights are generated by hyper-networks w.r.t. the global state. We show that Eq. 13 satisfies the condition on w_i(s, a_i) in Theorem 1 (see Appendix E.6.1), so it is a reliable implementation.

By using the framework of fitted Q-learning [24] to handle a large (usually infinite) number of states and plugging in the above designed modules, the practical least-square-error loss function derived from Eq. 9 is:

min_{θ, λ} E_{s, τ, a, R, τ′} [ ( R + γ ∑_{i∈N} max_{a_i} Q̂_i(τ′_i, a_i; θ⁻) − ∑_{i∈N} δ̂_i(s, a_i; λ) Q̂_i(τ_i, a_i; θ) )² ],    (14)

where all agents share the parameters of Q̂_i(τ_i, a_i; θ) and α̂_i(s, a_i; λ) respectively, and Q̂_i(τ′_i, a_i; θ⁻) works as the target, with θ⁻ periodically updated. The general training procedure follows the paradigm of DQN [25], with a replay buffer storing the online collection of agents' episodes. For an overview of the algorithm, pseudo code is provided in Appendix A.
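A minimal PyTorch sketch of the resulting objective (Eqs. 9, 12-14) may help; the tensor shapes, the mask-based coalition sampling and the name f_s are our illustrative assumptions rather than the released implementation:

```python
import torch

def shaq_loss(q_taken, is_greedy, alpha_hat, reward, q_next_max, gamma=0.99):
    """Eqs. (9), (12), (14). q_taken: (B, n) per-agent Q of executed actions;
    q_next_max: (B, n) max-action Q from the target network; is_greedy: (B, n) bool,
    whether the taken action is the greedy one; alpha_hat: (B, n) positive weights."""
    delta = torch.where(is_greedy, torch.ones_like(alpha_hat), alpha_hat)   # Eq. (12)
    target = reward + gamma * q_next_max.sum(dim=1)                         # optimal-MSQ target
    td = target.detach() - (delta * q_taken).sum(dim=1)                     # Eq. (9)
    return td.pow(2).mean()                                                 # Eq. (14)

def alpha_hat_mc(q_all, q_i, f_s, coalition_masks):
    """Eq. (13): Monte Carlo average of F_s over M sampled coalitions, plus 1.
    q_all: (B, n); q_i: (B,); coalition_masks: (M, B, n) 0/1 masks excluding agent i;
    f_s: a monotonic network (hyper-network-conditioned on the state in the paper)."""
    q_coal = (coalition_masks * q_all).sum(-1) / coalition_masks.sum(-1).clamp(min=1)
    return f_s(q_coal, q_i.expand_as(q_coal)).abs().mean(dim=0) + 1.0       # (B,)
```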
5 Related Work

Value Factorisation in MARL. To deal with the instability of training with independent learners [26] in the global reward game, centralised training with decentralised execution (CTDE) [22] was proposed and became a general paradigm for MARL. Based on CTDE, MADDPG [27] learns a global Q-value that can be regarded as assigning the same credit to all agents during training [13], which may cause unfair credit assignment [28]. To avoid this problem, VDN [8] was proposed to learn a factorised Q-value, assuming that the global Q-value equals the sum of the decentralised Q-values. Nevertheless, this factorisation may limit the representation of the global Q-value. To mitigate this issue, QMIX [9] and QTRAN [10] were proposed to represent the global Q-value with a richer class of functions w.r.t. the decentralised Q-values, based on the assumption (called Individual-Global-Max) of convergence to the optimal joint deterministic policy. The Markov Shapley value proposed in this paper belongs to the family of value factorisation, based on the game-theoretical framework called MCG, which enjoys interpretability.

From conventional cooperative games (e.g., the network flow game [29], the induced subgraph game [30] that can be used for modelling social networks, and the facility location game [31]), it is insightful that the coalition introduced in this paper exists. In many scenarios, however, the information about coalitions might be unknown. Therefore, a latent coalition is assumed, and we only need to concentrate on the observable information, e.g., the global reward.

Relationship to VDN. By setting $\delta_i(s, a_i) = 1$ for all state-action pairs, SHAQ degrades to VDN [8]. Although VDN tried to tackle the problem of dummy agents, Sunehag et al. [8] did not give a theoretical guarantee on identifying them. The Markov Shapley value theory proposed in this paper addresses this issue from both theoretical and empirical aspects, which show that VDN is a subclass of SHAQ. The theoretical framework proposed in this paper also explains why VDN works well in most scenarios but performs poorly in some others: $\delta_i(s, a_i) = 1$ in Eq.9 is incorrectly defined over the suboptimal actions.

Relationship to COMA. Compared with COMA [7], each agent $i$'s credit assignment $\bar{Q}_i(s, a_i)$ is mathematically expressed as

$$\bar{Q}_i(s, a_i) = \bar{Q}^{\pi}(s, \mathbf{a}) - \bar{Q}^{\pi}_{-i}(s, \mathbf{a}_{-i}), \qquad \bar{Q}^{\pi}_{-i}(s, \mathbf{a}_{-i}) = \sum_{a_i} \pi_i(a_i \mid s)\, \bar{Q}^{\pi}\big(s, (\mathbf{a}_{-i}, a_i)\big),$$

where the subscript $-i$ indicates the agents excluding $i$. $\bar{Q}_i(s, a_i)$ can be seen as the action marginal contribution between the grand coalition Q-value and the coalition Q-value excluding agent $i$, under the particular permutation forming the grand coalition in which agent $i$ is located at the last position. Efficiency is obviously violated: the sum of the optimal action marginal contributions defined here is unlikely to equal the optimal grand coalition Q-value. In contrast to COMA, SHAQ considers all permutations to form the grand coalition and thereby preserves efficiency.

Relationship to Independent Learning. Independent learning (e.g., IQL [26]) can also be seen as a special credit assignment; however, the credit assigned to each agent still has no intuitive interpretation. Mathematically, supposing that $\bar{Q}_i(s, a_i)$ is the independent Q-value of agent $i$, we can rewrite it in a form consisting of action marginal contributions such that $\bar{Q}_i(s, a_i) = \mathbb{E}_{\mathcal{C}_i \sim \Pr(\mathcal{C}_i \mid \mathcal{N} \setminus \{i\})}\big[\bar{\Phi}_i(s, a_i \mid \mathcal{C}_i)\big]$. It is intuitive to see that the independent Q-value is a direct approximation of the MSQ that ignores coalition formation, while SHAQ takes coalition formation into account in its approximation. This explains why independent learning works well in some cooperative tasks [32]. Nevertheless, it encounters the same issue as COMA: the loss of the properties provided by coalition formation.

Relationship to SQDDPG. We now discuss the relationship between SQDDPG [13] and SHAQ. In terms of algorithms, SQDDPG belongs to policy gradient methods (i.e., an approximation of policy iteration) while SHAQ belongs to value-based methods (i.e., an approximation of value iteration). Since policy iteration (with one-step policy evaluation) is equivalent to value iteration [33] (at least under a finite state space and a finite action space), the theory behind SHAQ directly fills the gap in SQDDPG on theoretical guarantees of convergence to the optimal joint policy. Specifically, the learning procedure of SQDDPG iteratively performs the following two stages:

$$\text{Stage 1:} \quad \min_{\theta}\ \mathbb{E}_{s, \mathbf{a}, R, s'}\Big[\Big(R + \gamma \sum_{i \in \mathcal{N}} \hat{Q}_i^{\phi}(s', a'_i; \theta^-) - \sum_{i \in \mathcal{N}} \hat{Q}_i^{\phi}(s, a_i; \theta)\Big)^2\Big].$$

$$\text{Stage 2:} \quad \pi_i(s) \in \arg\max_{a_i} \hat{Q}_i^{\phi}(s, a_i; \theta).$$
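As a rough sketch of this alternating procedure (simplified to discrete actions, with illustrative names only, not the released SQDDPG code):

```python
import torch

def sqddpg_stage1_loss(R, q_next, q_curr, gamma=0.99):
    """Stage 1 (sketch): regress the sum of Shapley Q-values onto the one-step target."""
    target = (R + gamma * q_next.sum(dim=1)).detach()  # q_next comes from theta^-
    return ((target - q_curr.sum(dim=1)) ** 2).mean()

def sqddpg_stage2_policy(q_values):
    """Stage 2 (sketch): greedy improvement, pi_i(s) in argmax_{a_i} Q_hat_i^phi."""
    return q_values.argmax(dim=-1)  # (B, N, A) -> (B, N) greedy actions per agent
```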
It can be observed that both SQDDPG and SHAQ ideally converge to the same optimal MSQs w.r.t. the optimal actions, such that

$$\mathbb{E}_{s, s'}\Big[\Big(\max_{\mathbf{a}} R(s, \mathbf{a}) + \gamma \sum_{i \in \mathcal{N}} \max_{a'_i} \hat{Q}_i^{\phi^*}(s', a'_i) - \sum_{i \in \mathcal{N}} \max_{a_i} \hat{Q}_i^{\phi^*}(s, a_i)\Big)^2\Big] = 0.$$

However, for suboptimal actions SQDDPG does not provide any theoretical guarantee, whereas SHAQ does, with the specific implementation shown in Eq.13 matching the theoretical results of this paper. Note that this is critical for reliable interpretations of the optimal MSQ w.r.t. suboptimal actions (e.g., for detecting adversarial attacks on controllers deployed in industry [34]).

6 Experiments

In this section, we show the experimental results of SHAQ on Predator-Prey [17] and various tasks in the StarCraft Multi-Agent Challenge (SMAC)². The baselines that we select for comparison are COMA [7], VDN [8], QMIX [9], MASAC [36], QTRAN [10], QPLEX [37] and W-QMIX (including CW-QMIX and OW-QMIX) [35]. The implementation details of our algorithm are shown in Appendix B.1, whereas the implementations of the baselines are from [35]³. We also compare SHAQ with SQDDPG [13]⁴; the comparison is shown in Appendix C.3.

For all experiments, we use the ε-greedy exploration strategy, where ε is annealed from 1 to 0.05. The annealing time steps vary among experiments. For Predator-Prey, we apply 1 million time steps for annealing, following the setup from [37]. For the easy and hard maps in SMAC, we apply 50k time steps for annealing, the same as in [18]; for the super-hard maps in SMAC, we apply 1 million time steps for annealing to obtain more exploration so that more state-action pairs can be visited. We set the replay buffer size to 5000 for all algorithms, the same as in [35]. To fairly evaluate all algorithms, we run each experiment with 5 random seeds. All graphs of experimental results are plotted with the median and the 25%-75% quartile shading. Regarding interpretability, we evaluate the algorithms with both an ε-greedy policy (i.e., ε = 0.8), to obtain mixed optimal and suboptimal actions, and a greedy policy, to obtain purely optimal actions. The ablation study of SHAQ is shown in Appendix C.4.

²The version that we use in this paper is SC2.4.6.2.69232 rather than the newer SC2.4.10. As reported in [35], performance is not comparable across versions.
³The source code of the baseline implementations is from https://github.com/oxwhirl/wqmix.
⁴The code of SQDDPG is implemented based on https://github.com/hsvgbkhgbv/SQDDPG.

6.1 Predator-Prey

We first run experiments on a partially observable task called Predator-Prey [17], wherein 8 controllable predators aim to capture 8 preys with random policies in a 10x10 grid world. Each agent's observation is a 5x5 sub-grid centred around it. If a prey is captured through the coordination of 2 agents, the predators are rewarded with 10. On the other hand, each unsuccessful attempt by only 1 agent is punished with a negative reward p. In this experiment, we study the behaviour of each algorithm under different values of p (which describes the required level of coordination). As reported in [35], only QTRAN and W-QMIX can solve this task, while [37] found that the failure was primarily due to a lack of exploration. As a result, we apply the identical epsilon annealing schedule (i.e., 1 million time steps) adopted in [37].
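For concreteness, the capture rule just described can be sketched as a per-prey reward function; the function name and signature are our own simplification of the environment.

```python
def predator_prey_reward(n_predators_at_prey: int, p: float) -> float:
    """Illustrative reward rule for the Predator-Prey task described above."""
    if n_predators_at_prey >= 2:
        return 10.0   # successful coordinated capture by at least 2 predators
    if n_predators_at_prey == 1:
        return p      # p < 0: punishment for an unsuccessful solo attempt
    return 0.0
```

A larger |p| punishes uncoordinated behaviour more heavily, which is exactly the coordination knob varied across the settings in Figure 1.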
Performance Analysis. As Figure 1 shows, SHAQ can always solve the tasks with different values of p. With the epsilon annealing strategy from [37], W-QMIX does not perform as well as reported in [35]. The reason could be its poor robustness to the increased exploration [35] in this environment (see the evidential experimental results in Appendix C.6). The good performance of VDN validates our analysis in Section 5, whereas the performance of QTRAN is surprisingly almost invariant to the value of p. The performances of QPLEX and QMIX become obviously worse when p = -2. The failure of MASAC and COMA could be because relative overgeneralisation⁵ prevents policy gradient methods from achieving better coordination [39].

⁵Relative overgeneralisation is a common game-theoretic pathology whereby suboptimal actions are preferred when matched with arbitrary actions from the collaborating agents [38].

Interpretability of SHAQ. To verify that SHAQ possesses interpretability, we show its credit assignment on Predator-Prey. As we see from Figure 2b, the agents are around a prey and capture it, so both of them perform the optimal actions and deserve almost equal optimal credits of 4.2927 and 4.0644, which verifies our theoretical claim. From Figure 2a, it can be seen that two agents are far away from the preys, so they receive low credits of 2.4709 and 2.8435. On the other hand, the other two agents are around a prey but do not perform the optimal action "capture", so they receive less credit than the two agents in Figure 2b. Nevertheless, being around a prey, they perform better than the agents that are far away from the preys and receive comparatively greater credits of 3.2933 and 3.1159. The coherent credit assignments in both Figure 2a and 2b imply that the assigned credits reflect the agents' contributions (verifying (iii) in Proposition 2), i.e., each agent receives a credit that is consistent with its decision.

6.2 StarCraft Multi-Agent Challenge

We next evaluate SHAQ on the more challenging SMAC tasks, whose environmental settings are the same as in [18]. To broadly compare the performance of SHAQ with the baselines, we select 4 easy maps: 8m, 3s5z, 1c3s5z and 10m_vs_11m; 3 hard maps: 5m_vs_6m, 3s_vs_5z and 2c_vs_64zg; and 4 super-hard maps: 3s5z_vs_3s6z, Corridor, MMM2 and 6h_vs_8z. All training is conducted through online data collection. Due to limited space, we only show partial results in the main part of the paper and leave the rest to Appendix C.1.

Performance Analysis. Figure 3 shows that SHAQ outperforms all baselines on all maps except 6h_vs_8z, on which SHAQ beats all baselines except CW-QMIX. VDN performs well on 4 maps but badly on the other 2, which again verifies our analysis in Section 5. QMIX and QPLEX perform well on most maps, except 3s_vs_5z, 2c_vs_64zg and 6h_vs_8z. As for COMA, MADDPG and MASAC, their poor performances could be due to weak adaptability to challenging tasks. Although QTRAN can theoretically represent the complete class of global Q-values [10], its complicated learning paradigm could impede convergence of the value function on challenging tasks and therefore result in poor performance. Although W-QMIX performs well on some maps, the lack of a rule for hyperparameter tuning [35] makes it difficult to adapt to all scenarios (see Appendix C.2).

Interpretability of SHAQ. To further show the interpretability of SHAQ, we also conduct a test on 3m (i.e., a simple task in SMAC). As seen from Figure 4a, Agent 3 faces the direction opposite to the enemies; meanwhile, the enemies are out of its attacking range.
It can be understood that Agent 3 does not contribute to the team and is thus almost a dummy agent. Its MSQ is 0.84 (around 0), which correctly captures the behaviour of a dummy agent (verifying (i) in Proposition 2). In contrast, Agent 1 and Agent 2 are attacking enemies, while Agent 1 suffers from more attacks (with lower health) than Agent 2. As a result, Agent 1 contributes more than Agent 2 and therefore its MSQ is greater, which implies that the credits reflect the agents' contributions (verifying (iii) in Proposition 2). On the other hand, we can see from Figure 4e that with the optimal policies all agents receive almost identical MSQs (verifying the theoretical results in Section 4.1). The above results verify the theoretical analysis delivered earlier.

To justify that the MSQs learned by SHAQ are non-trivial, we also show the results of VDN, QMIX and QPLEX. It is surprising that the Q-values of these baselines are also almost identical among agents for the optimal actions (however, this property disappears in more complicated scenarios, as shown in Appendix C.5, while the property of SHAQ remains valid). Since VDN is a subclass of SHAQ and possesses the same form of loss function for optimal actions, it is reasonable that it obtains results similar to SHAQ. As for the suboptimal actions, VDN does not possess an explicit interpretation as SHAQ does, due to the incorrect definition of $\delta_i(s, a_i) = 1$ over suboptimal actions (verifying the statement in Section 5). The values of QMIX and QPLEX are difficult to explain.

7 Conclusion

Summary. This paper generalises the Shapley value to the Markov convex game, called the Markov Shapley value. The Markov Shapley value inherits a number of properties: (i) identifiability of dummy agents; (ii) efficiency; (iii) reflecting the contribution; and (iv) symmetry. Based on Property (ii), we derive the Shapley-Bellman optimality equation, the Shapley-Bellman operator and SHAQ. We prove that solving the Shapley-Bellman optimality equation is equivalent to solving the Markov core (i.e., no agent has an incentive to deviate from the grand coalition). The Markov convex game with the grand coalition is equivalent to the global reward game [13], wherein the Markov Shapley value plays the role of value factorisation. Since SHAQ is a stochastic approximation of the Shapley-Bellman operator, which is proved to solve the Shapley-Bellman optimality equation, the global reward game with value factorisation becomes valid within the cooperative game theoretical framework (i.e., solving the Markov core). Properties (i) and (iii) in Proposition 2 are demonstrated in the experiments showing the interpretability of SHAQ.

Limitation and Future Work. The value of the Markov convex game is not limited to problems with the grand coalition, though in this paper we design SHAQ to focus only on the scenario with the grand coalition. By removing the condition of supermodularity (see Eq.1), this framework can be used to study more general coalition games where different coalitions of agents, as units, may compete or cooperate with each other. Since the grand coalition with the Markov Shapley value is then no longer guaranteed to be a solution in the Markov core, the learning process becomes more complicated to converge to the Markov core. A possible future research direction is to investigate dynamically forming the coalition structure while conducting credit assignment simultaneously.

Acknowledgements This work is sponsored by the Engineering and Physical Sciences Research Council of UK (EPSRC) under award EP/S000909/1.
Tae-Kyun Kim is partly sponsored by KAIA grant (22CTAP-C16379302, MOLIT), NST grant (CRC 21011, MSIT), KOCCA grant (R2022020028, MCST) and the Samsung Display corporation. Yuan Zhang is sponsored by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 953348 (ELO-X).
1. What is the focus and contribution of the paper regarding Shapley value in Markov Games?
2. What are the strengths of the proposed Shapley Q-Learning algorithm compared to other existing algorithms?
3. What are the weaknesses of the paper, particularly in terms of experimentation and interpretation?
4. Do you have any concerns or suggestions regarding the limitations of the work?
Summary Of The Paper
This paper presents a theoretical framework that studies the Shapley value in the context of Markov games as a useful technique for value factorisation and credit assignment in agent coalitions. Leveraging this framework, the authors propose Shapley Q-Learning (SHAQ), derived from a novel definition of a Shapley-Bellman operator. The proposed algorithm is compared with a suite of existing algorithms (COMA, VDN, QMIX) on Predator-Prey and the StarCraft Multi-Agent Challenge, showing competitive results together with interesting interpretability properties.

Strengths And Weaknesses
Strengths: The paper is well-written and properly motivated. The work is well-placed among the existing and vast literature in multi-agent reinforcement learning (MARL). The combination of Shapley's theory with Q-learning seems a novel contribution in the interesting and always challenging setting of MARL.

Weaknesses: The experimental section would benefit from a discussion of the interpretability of SHAQ in the Predator-Prey setting, which seems to be missing in the current manuscript.

Questions
Would the authors be able to reproduce the interpretability results offered for SMAC in the simple Predator-Prey setting?

Limitations
The authors addressed the limitations of the work, including the assumptions and restrictions imposed in the scenarios considered.
NIPS
1. What is the focus and contribution of the paper on global reward games?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical framework and algorithm?
3. What are the weaknesses of the paper, especially regarding its experimental comparisons with other works?
4. Do you have any concerns about the assumptions made in the paper, such as the condition in Equation 1 (line 78)?
5. What are the limitations of the proposed method, and can it be extended to more general cooperative games?
Summary Of The Paper
The paper presents a new framework and corresponding algorithm to solve value factorisation in global reward games. Specifically, it derives the Shapley-Bellman optimality equation for evaluating the optimal Markov Shapley value and proposes the Shapley-Bellman operator to solve it, which is also proved in the paper. Furthermore, Shapley Q-learning is presented to implement the theoretical framework on the Predator-Prey and SMAC environments.

Contributions: The paper proposes a new theoretical cooperative game framework and the Shapley Q-learning algorithm for solving global reward games. Moreover, the authors give proofs for the theoretical framework and evaluate SHAQ on Predator-Prey and StarCraft tasks, showing good performance and interpretability.

Strengths And Weaknesses
Strengths:
1. well written, easy to follow
2. novel cooperative game framework for the global reward game, justified both theoretically and empirically
3. thorough literature review of relevant fields
4. proof details and code provided

Weaknesses:
1. Figures 1, 2 and 3 are too small to read easily
2. Improvements seem not significant compared to SOTAs

Questions
The authors assume the games (Predator-Prey and StarCraft) satisfy the MCG conditions in Eq. 1 (line 78); do these conditions always hold?

Limitations
The assumption in line 78 for Markov convex games looks too strong; is it possible to extend the same results to general cooperative games?
NIPS
Title SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning Abstract Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in global reward game, however, its underlying mechanism is not yet fully understood. This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory. We generalise Shapley value to Markov convex game called Markov Shapley value (MSV) and apply it as a value factorisation method in global reward game, which is obtained by the equivalence between the two games. Based on the properties of MSV, we derive Shapley-Bellman optimality equation (SBOE) to evaluate the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose Shapley-Bellman operator (SBO) that is proved to solve SBOE. With a stochastic approximation and some transformations, a new MARL algorithm called Shapley Q-learning (SHAQ) is established, the implementation of which is guided by the theoretical results of SBO and MSV. We also discuss the relationship between SHAQ and relevant value factorisation methods. In the experiments, SHAQ exhibits not only superior performances on all tasks but also the interpretability that agrees with the theoretical analysis. The implementation of this paper is placed on https://github.com/hsvgbkhgbv/shapley-q-learning. 1 Introduction Cooperative games are a critical research area in multi-agent reinforcement learning (MARL). Many real-life tasks can be modeled as cooperative games, e.g. the coordination of autonomous vehicles [1], autonomous distributed logistics [2] and distributed voltage control in power networks [3]. In this paper, we consider global reward game (a.k.a. team reward game), an important subclass of cooperative games, wherein agents aim to jointly maximize cumulative global rewards over time. There are two categories of methods to solve this problem: (i) each agent identically maximizes cumulative global rewards, i.e. learning with a shared value function [4–6]; and (ii) each agent individually maximizes distributed values, i.e. learning with (implicit) credit assignments (e.g. marginal contribution and value factorisation) [7–11]. By the view of non-cooperative game theory, global reward game are equivalent to Markov game [12] with global reward (a.k.a. team reward). Its aim is to learn a stationary joint policy to reach a Markov equilibrium so that no agent tends to unilaterally change its policy to maximize cumulative global rewards. Standing by this view, learning with value factorisation cannot be directly explained [13]. In ∗Correspondence to Yunjie Gu who is also an honorary lecturer at Imperial College London. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). this paper, to clearly interpret the value factorisation, we take the perspective of cooperative game theory [14], wherein agents are partitioned into coalitions and a payoff distribution scheme is found to distribute optimal values to coalitions. The corresponding solution is called Markov core, whereby no agent has an incentive to deviate. When all agents are partitioned into one coalition (called grand coalition), the payoff distribution scheme naturally plays the role of value factorisation. Wang et al. [13] extended convex game (i.e. a game model in cooperative game theory) [14] to dynamic scenarios, which we name as Markov convex game in this paper. 
We construct the analytic form of the Shapley value for the Markov convex game, named the Markov Shapley value, and prove that it reaches the Markov core under the grand coalition. The optimal Markov Shapley value implies not only the optimal global value but also that no agent has incentives to deviate from the grand coalition. Additionally, the Markov Shapley value enjoys the following properties: (i) identifiability of dummy agents; (ii) efficiency; (iii) reflecting the contribution; and (iv) symmetry. These properties aid the interpretation and validity of value factorisation in the global reward game, and such transparency and reliability are critical to industrial applications [3]. Based on the efficiency property, we derive the Shapley-Bellman optimality equation, an extension of the Bellman optimality equation [15, 16]. Moreover, we propose the Shapley-Bellman operator and prove its convergence to the Shapley-Bellman optimality equation and its optimal joint deterministic policy. With a stochastic approximation of the Shapley-Bellman operator and some transformations, we derive an algorithm called Shapley Q-learning (SHAQ). SHAQ learns to approximate the optimal Markov Shapley Q-value (an equivalent form of the optimal Markov Shapley value). Moreover, we make SHAQ decentralised in order to fit the decentralised execution framework, and this decentralisation still preserves the convergence condition of the Shapley-Bellman operator. The proposed method, SHAQ, is evaluated on two global reward games: Predator-Prey [17] and multi-agent StarCraft benchmark tasks [18]. In the experiments, SHAQ shows not only generally good performance on all tasks but also the interpretability that is deficient in the state-of-the-art baselines. 2 Markov Convex Game We now formally define the Markov convex game (MCG), which can be described as a tuple ⟨N, S, A, T, Λ, π, R_t, γ⟩. N is the set of all agents. S is the set of states and A = ×_{i∈N} A_i is the joint action set of all agents, wherein A_i is each agent's action set. T(s, a, s′) = Pr(s′ | s, a) is defined as the transition probability between successive states. CS = {C_1, ..., C_n} is a coalition structure, where C_i ⊆ N, called a coalition, is a subset of all agents. Λ is a collection of coalition structures. ∅ and N are two special cases of coalitions, i.e. the empty coalition and the grand coalition respectively. Conventionally, it is assumed that C_m ∩ C_k = ∅, ∀C_m, C_k ⊆ N. π = ×_{i∈N} π_i is the joint policy of all agents. Any coalition C is equipped with a coalition policy π_C(a_C | s) = ×_{i∈C} π_i(a_i | s) defined over the coalition action set A_C = ×_{i∈C} A_i. Therefore, π can be seen as the grand coalition policy. R_t : S × A_C → [0, ∞) (i.e., a characteristic function) is the coalition reward at time step t. Accordingly, R_t(s, a) is the grand coalition reward (i.e., equivalent to the global reward) at time step t, written as R(s, a) or R for conciseness in the rest of the paper. γ ∈ (0, 1) is the discount factor. The infinite-horizon discounted cumulative coalition reward is defined as V^{π_C}(s) = E_{π_C}[ Σ_{t=1}^{∞} γ^{t−1} R_t(s, a_C) | S_t = s ] ∈ [0, ∞), called a coalition value. Moreover, the empty coalition value is V^{π_∅}(s) = 0, and V^{π}(s) denotes the grand coalition value (also called the global value, by the equivalence proof from [13]). The solution of an MCG is to find a tuple ⟨CS, (max_{π_i} x_i(s))_{i∈N}⟩, where (max_{π_i} x_i(s))_{i∈N} indicates the payoff distributions (i.e. credit assignments) under the optimal joint policy given a coalition structure. 
Under the assumption C_m ∩ C_k = ∅, ∀C_m, C_k ⊆ N, the condition for an MCG is as follows:

max_{π_{C∪}} V^{π_{C∪}}(s) ≥ max_{π_{C_m}} V^{π_{C_m}}(s) + max_{π_{C_k}} V^{π_{C_k}}(s), ∀C_m, C_k ⊆ N, C∪ = C_m ∪ C_k. (1)

In an MCG with the grand coalition, i.e. CS = {N}, the Markov core, a solution concept describing stability, is defined as a set of payoff distribution schemes by which no agent has incentives to deviate from the grand coalition to gain more profits. Mathematically, the Markov core can be expressed as:

MarkovCore = { (max_{π_i} x_i(s))_{i∈N} | max_{π_C} x(s|C) ≥ max_{π_C} V^{π_C}(s), ∀C ⊆ N, s ∈ S }, (2)

where max_{π_C} x(s|C) = Σ_{i∈C} max_{π_i} x_i(s). The aim is to find a payoff distribution scheme (x_i(s))_{i∈N} that finally converges to the Markov core under the optimal joint policy. To assist the application to Q-learning, we similarly define the coalition Q-value as Q^{π_C}(s, a_C) ∈ [0, +∞) for all coalitions C ⊂ N. Following the above convention, the grand coalition Q-value (or the global Q-value) can be written as Q^{π}(s, a). Moreover, the optimal coalition Q-value of C w.r.t. the optimal joint policy of D ⊆ C (i.e., π*_D) and the suboptimal joint policy of C\D (i.e., π_{C\D}) is defined as Q^{π*_D}(s, a_C). Therefore, the optimal coalition Q-value of C w.r.t. the optimal joint policy of C is defined as Q^{π*_C}(s, a_C). Accordingly, the optimal global Q-value w.r.t. the optimal joint policy of the grand coalition is denoted as Q^{π*}(s, a). 3 Markov Shapley Value From the view of cooperative game theory, the grand coalition is progressively formed by a permutation of agents. Accordingly, the marginal contribution is an implementation of the credit reflecting an agent's contribution. The formal definition is shown in Definition 1. Definition 1. In a Markov convex game, with a permutation of agents ⟨j_1, j_2, ..., j_{|N|}⟩, ∀j_n ∈ N, forming the grand coalition N, where n ∈ {1, ..., |N|} and j_a ≠ j_b if a ≠ b, the marginal contribution of an agent i is defined as

Φ_i(s | C_i) = max_{π_{C_i}} V^{π_{C_i ∪ {i}}}(s) − max_{π_{C_i}} V^{π_{C_i}}(s), (3)

where C_i = {j_1, ..., j_{n−1}} for j_n = i is an arbitrary intermediate coalition that agent i joins during the process of grand coalition formation. Proposition 1. Agent i's action marginal contribution can be derived as follows:

Φ_i(s, a_i | C_i) = max_{a_{C_i}} Q^{π*_{C_i}}(s, a_{C_i ∪ {i}}) − max_{a_{C_i}} Q^{π*_{C_i}}(s, a_{C_i}). (4)

As Proposition 1 shows, an agent's action marginal contribution (analogous to a Q-value) can be derived according to Eq. 4. It is usually more useful for solving MARL problems. Apparently, a marginal contribution only considers one permutation to form the grand coalition. From the viewpoint of Shapley [19], fairness is achieved by considering how much agent i increases the optimal values (i.e. marginal contributions) of the coalitions in all possible permutations when it joins in, i.e., max_{π_{C_i}} V^{π_{C_i ∪ {i}}}(s) − max_{π_{C_i}} V^{π_{C_i}}(s), ∀C_i ⊆ N\{i}. Therefore, we construct the Shapley value under Markov dynamics based on the marginal contributions, as shown in Definition 2, named Markov Shapley value (MSV). Definition 2. The Markov Shapley value is represented as

V^{Φ}_i(s) = Σ_{C_i ⊆ N\{i}} [ |C_i|! (|N| − |C_i| − 1)! / |N|! ] · Φ_i(s | C_i). (5)

With a deterministic policy, the Markov Shapley value can be equivalently represented as

Q^{Φ}_i(s, a_i) = Σ_{C_i ⊆ N\{i}} [ |C_i|! (|N| − |C_i| − 1)! / |N|! ] · Φ_i(s, a_i | C_i), (6)

where Φ_i(s | C_i) is defined in Eq. 3 and Φ_i(s, a_i | C_i) is defined in Eq. 4. For convenience, we name Eq. 6 the Markov Shapley Q-value (MSQ). Briefly, MSV calculates the weighted average of marginal contributions. Since a coalition may repeatedly appear among all |N|! permutations, the ratio between the occurrence frequency |C_i|! (|N| − |C_i| − 1)! and the total frequency |N|! is used as a weight to describe the importance of the corresponding marginal contribution.
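To make Definitions 1 and 2 concrete, here is a minimal Python sketch that computes the Shapley value of Eq. 5 for a single fixed state of a toy game, where the optimal coalition values are given as a lookup table. This is an illustration under strong simplifications (a static characteristic function in place of learned coalition values), not the paper's implementation; all names are ours.

```python
import itertools
import math
import random

def shapley_values(agents, v):
    """Exact Shapley value of Definition 2 for one fixed state, enumerating
    coalitions with the weight |C|!(n-|C|-1)!/n!; v maps a frozenset coalition
    to its (optimal) coalition value."""
    n = len(agents)
    phi = {}
    for i in agents:
        others = [a for a in agents if a != i]
        total = 0.0
        for r in range(n):
            for C in itertools.combinations(others, r):
                C = frozenset(C)
                w = math.factorial(len(C)) * math.factorial(n - len(C) - 1) / math.factorial(n)
                total += w * (v[C | {i}] - v[C])  # marginal contribution of i (Eq. 3)
        phi[i] = total
    return phi

def sampled_shapley(agents, v, i, num_samples=1000, rng=random):
    """Monte Carlo estimate via uniformly sampled permutations (cf. Remark 1)."""
    est = 0.0
    for _ in range(num_samples):
        perm = list(agents)
        rng.shuffle(perm)
        C = frozenset(perm[:perm.index(i)])  # the coalition that i joins
        est += v[C | {i}] - v[C]
    return est / num_samples

# Toy 2-agent convex game: v(emptyset)=0, v({1})=1, v({2})=2, v({1,2})=4.
v = {frozenset(): 0.0, frozenset({1}): 1.0, frozenset({2}): 2.0, frozenset({1, 2}): 4.0}
print(shapley_values([1, 2], v))  # {1: 1.5, 2: 2.5}; efficiency: 1.5 + 2.5 = v({1,2})
```

The printed values illustrate the efficiency property used below: the per-agent credits sum exactly to the grand coalition value.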
Besides, the sum of all weights is equal to 1, so the weights can be interpreted as a probability distribution. Consequently, MSV can be seen as the expectation of marginal contributions, denoted as E_{C_i ∼ Pr(C_i | N\{i})}[ Φ_i(s | C_i) ]. Note that Pr(C_i | N\{i}) is a bell-shaped probability distribution. From the above relationship, Remark 1 is directly obtained. Remark 1. Uniformly sampling different permutations is equivalent to directly sampling from Pr(C_i | N\{i}), since the coalition generation comes from the permutation forming the grand coalition. Proposition 2. The Markov Shapley value possesses the following properties: (i) identifiability of dummy agents: V^{Φ}_i(s) = 0; (ii) efficiency: max_π V^{π}(s) = Σ_{i∈N} max_{π_i} V^{Φ}_i(s); (iii) reflecting the contribution; and (iv) symmetry. Proposition 2 shows four properties of MSV. The most important is Property (ii), which aids the formulation of the Shapley-Bellman optimality equation. Property (iii) shows that MSV is a fundamental index to quantitatively describe each agent's contribution. Properties (i) and (iii) play important roles in the interpretation of value factorisation (or credit assignment). Property (iv) indicates that if two agents are symmetric, then their optimal MSVs should be equal, but the reverse does not necessarily hold. All these properties that define fairness are inherited from the original Shapley value [19]. 4 Shapley Q-Learning 4.1 Definition and Formulation Shapley-Bellman Optimality Equation. Based on the Bellman optimality equation [15] and the following conditions (whose interpretability is left to Section 4.2): C.1. efficiency of MSV (i.e. the result from Proposition 2); C.2. Q^{Φ*}_i(s, a_i) = w_i(s, a_i) Q^{π*}(s, a) − b_i(s), where w_i(s, a_i) > 0 and b_i(s) ≥ 0 are bounded and Σ_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0, we derive the Shapley-Bellman optimality equation (SBOE) for evaluating the optimal MSQ (an equivalent form of the optimal MSV):

Q^{Φ*}(s, a) = w(s, a) [ Σ_{s′∈S} Pr(s′ | s, a) ( R + γ Σ_{i∈N} max_{a_i} Q^{Φ*}_i(s′, a_i) ) ] − b(s), (7)

where w(s, a) = [w_i(s, a_i)] ∈ R^{|N|}_{>0}; b(s) = [b_i(s)] ∈ R^{|N|}_{≥0}; Q^{Φ*}(s, a) = [Q^{Φ*}_i(s, a_i)] ∈ R^{|N|}_{≥0}; and Q^{Φ*}_i(s, a_i) denotes the optimal MSQ. If Eq. 7 holds, the optimal MSQ is achieved. Moreover, it reveals the implication that for any s ∈ S and a*_i = argmax_{a_i} Q^{Φ*}_i(s, a_i), we have the solution w_i(s, a*_i) = 1/|N| (see Appendix E.4.1). Literally, the assigned credits would be equal and each agent would receive Q^{π*}(s, a)/|N| if performing the optimal actions. It is apparent that efficiency still holds in this situation, which can be interpreted as an extremely fair credit assignment: the credit to each agent should not be discriminated if all of them perform optimally, regardless of their roles. The equal credit assignment was also revealed by Wang et al. [20] recently from another perspective of analysis. Nevertheless, w_i(s, a_i) for a_i ≠ argmax_{a_i} Q^{Φ*}_i(s, a_i) needs to be learned. Shapley-Bellman Operator. To find an optimal solution described by Eq. 7, we now propose an operator called the Shapley-Bellman operator (SBO), i.e., Υ : ×_{i∈N} Q^{Φ}_i(s, a_i) ↦ ×_{i∈N} Q^{Φ}_i(s, a_i), which is defined as follows:

Υ( ×_{i∈N} Q^{Φ}_i(s, a_i) ) = w(s, a) [ Σ_{s′∈S} Pr(s′ | s, a) ( R + γ Σ_{i∈N} max_{a_i} Q^{Φ}_i(s′, a_i) ) ] − b(s), (8)

where w_i(s, a_i) = 1/|N| when a_i = argmax_{a_i} Q^{Φ}_i(s, a_i). We prove in Theorem 1, stated next, that the optimal joint deterministic policy can be achieved by recursively running SBO.
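Before the formal convergence statement, a tiny numeric sketch of the fixed point that SBO targets. It uses strong simplifications that are our assumptions, not part of the proof: a single state, b(s) = 0, and the update evaluated only at the optimal joint action, where w_i = 1/|N| as implied above.

```python
import numpy as np

# One-state global reward game with two agents; R[a1, a2] is the joint reward.
R = np.array([[0.0, 1.0],
              [1.0, 4.0]])
gamma, n = 0.9, 2
q_star_global = R.max() / (1.0 - gamma)  # optimal global Q-value of this toy game

# Repeatedly apply the SBO update at the optimal joint action:
# Q_i <- (1/n) * (R* + gamma * sum_j max_a Q_j), with b = 0.
x = np.zeros(n)  # per-agent optimal MSQ estimates
for _ in range(500):
    x = np.full(n, (R.max() + gamma * x.sum()) / n)

print(x, q_star_global / n)  # both ~20.0: each agent converges to an equal share Q*/n
```

The iteration converges to Q^{π*}/|N| per agent, matching the equal-credit implication of SBOE discussed above.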
Theorem 1. The Shapley-Bellman operator is able to converge to the optimal Markov Shapley Q-value and the corresponding optimal joint deterministic policy when max_s Σ_{i∈N} max_{a_i} w_i(s, a_i) < 1/γ. Shapley Q-Learning. For easy implementation, we apply a transformation to the stochastic approximation of SBO and derive Shapley Q-learning (SHAQ), whose TD error is as follows:

δ(s, a, s′) = R + γ Σ_{i∈N} max_{a_i} Q^{Φ}_i(s′, a_i) − Σ_{i∈N} δ_i(s, a_i) Q^{Φ}_i(s, a_i), (9)

where

δ_i(s, a_i) = { 1, if a_i = argmax_{a_i} Q^{Φ}_i(s, a_i); α_i(s, a_i), if a_i ≠ argmax_{a_i} Q^{Φ}_i(s, a_i). (10)

Actually, the closed-form expression of δ_i(s, a_i) is |N|^{−1} w_i(s, a_i)^{−1}. Inserting the condition that w_i(s, a_i) = 1/|N| when a_i = argmax_{a_i} Q^{Φ}_i(s, a_i), and defining δ_i(s, a_i) as α_i(s, a_i) when a_i ≠ argmax_{a_i} Q^{Φ}_i(s, a_i), yields Eq. 10. The term b(s) is cancelled in Eq. 9 thanks to the condition Σ_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0. Note that the condition on w_i(s, a_i) in Theorem 1 should hold for the convergence of SHAQ in implementation (see Appendix E.4.4). 4.2 Validity and Interpretability In this section, we show the validity of SBOE and the interpretability of SHAQ, i.e., we provide the reasons why SBOE is valid to formulate and why SHAQ is an interpretable value factorisation method for the global reward game. Theorem 2. The optimal Markov Shapley value is a solution in the Markov core under the Markov convex game with the grand coalition. Remark 2. For an arbitrary state s ∈ S, by C.2 it is not difficult to check that even if an arbitrary agent i is dummy (i.e., Q^{Φ*}_i(s, a_i) = 0 for some i ∈ N), Q^{π*}(s, a) and Q^{Φ*}_j(s, a_j), ∀j ≠ i, would not be zero if b_i(s) ≠ 0. In the extreme case where for an arbitrary state s ∈ S all agents are dummies, since Σ_{i∈N} w_i(s, a_i)^{−1} b_i(s) = 0 we are allowed to set b_i(s) = 0, ∀i ∈ N, so that Q^{π*}(s, a) = 0 and efficiency, i.e. max_a Q^{π*}(s, a) = Σ_{i∈N} max_{a_i} Q^{Φ*}_i(s, a_i), is still valid. First, we give a proof showing that the optimal MSV is a solution in the Markov core under the grand coalition, as Theorem 2 states. Since a solution in the Markov core implies the optimal global value (see Remark 5 in Appendix D.2.2), we can conclude that the optimal MSV leads to the optimal global value (a.k.a. social welfare), which links Condition C.1 to the Markov core. As a result, solving SBOE is equivalent to solving the Markov core under the grand coalition, and SHAQ is actually a learning algorithm that reliably converges to the Markov core. As per the definition in Section 2, we can say that SHAQ leads to the result that no agents have incentives to deviate from the grand coalition, which provides an interpretation of value factorisation for the global reward game. Condition C.2 maintains the validity of the relationship between the optimal MSQ and the optimal global Q-value even if there exist dummy agents (see Remark 2), so that the definition of SBOE is valid for MCG and MSQ in almost every case, which preserves the completeness of the theory. 4.3 Implementations We now describe a practical implementation of SHAQ for Dec-POMDP [21] (i.e. the global reward game but with partial observations). First, the global state is replaced by the history of each agent to guarantee the optimal deterministic joint policy [21]. Accordingly, the Markov Shapley Q-value is denoted as Q^{Φ}_i(τ_i, a_i), wherein τ_i is a history of partial observations of agent i. Since the paradigm of centralised training with decentralised execution (CTDE) [22] is applied, the global state (i.e. s) for α̂_i(s, a_i) can be obtained during training.
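Before the formal factorisation in Proposition 3 below, here is a minimal PyTorch sketch of the TD error in Eqs. 9-10, assuming the per-agent MSQ estimates and the α weights are produced by networks defined elsewhere. Tensor names and shapes are illustrative, not the authors' code.

```python
import torch

def shaq_td_error(reward, gamma, q_next, q_curr, actions, alpha):
    """One-step SHAQ TD error (Eqs. 9-10) for a batch.
    q_next, q_curr: [batch, n_agents, n_actions]; actions: [batch, n_agents] (long);
    alpha: [batch, n_agents], the learned weights used for suboptimal actions."""
    # Target: R + gamma * sum_i max_{a_i} Q_i(s', a_i)
    target = reward + gamma * q_next.max(dim=-1).values.sum(dim=-1)
    # delta_i = 1 where the taken action is greedy, else alpha_i (Eq. 10)
    q_taken = q_curr.gather(-1, actions.unsqueeze(-1)).squeeze(-1)  # [batch, n_agents]
    greedy = actions.eq(q_curr.argmax(dim=-1))
    delta = torch.where(greedy, torch.ones_like(alpha), alpha)
    weighted = (delta * q_taken).sum(dim=-1)
    return target.detach() - weighted

# usage sketch: loss = (shaq_td_error(r, 0.99, q_tp1, q_t, a_t, alpha_hat) ** 2).mean()
```

In the full method the α term is itself estimated from sampled coalitions, as Eq. 13 below describes.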
Proposition 3. Suppose any action marginal contribution can be factorised into the form Φ_i(s, a_i | C_i) = β(s, a_{C_i ∪ {i}}) Q̂_i(s, a_i). With the condition

E_{C_i ∼ Pr(C_i | N\{i})}[ β(s, a_{C_i ∪ {i}}) ] = { 1, if a_i = argmax_{a_i} Q^{Φ}_i(s, a_i); K ∈ (0, 1), if a_i ≠ argmax_{a_i} Q^{Φ}_i(s, a_i),

we have

Q^{Φ}_i(s, a_i) = Q̂_i(s, a_i), if a_i = argmax_{a_i} Q̂_i(s, a_i); α_i(s, a_i) Q^{Φ}_i(s, a_i) = α̂_i(s, a_i) Q̂_i(s, a_i), if a_i ≠ argmax_{a_i} Q̂_i(s, a_i), (11)

where α̂_i(s, a_i) = E_{C_i ∼ Pr(C_i | N\{i})}[ δ̂_i(s, a_i; a_{C_i}) ] and δ̂_i(s, a_i; a_{C_i}) := α_i(s, a_i) β(s, a_{C_i ∪ {i}}). Compatible with decentralised execution, we use only one parametric function Q̂_i(τ_i, a_i) to directly approximate Q^{Φ}_i(τ_i, a_i). By inserting Eq. 11 into Eq. 9, δ_i(s, a_i) is transformed into the form

δ̂_i(s, a_i) = { 1, if a_i = argmax_{a_i} Q̂_i(s, a_i); α̂_i(s, a_i), if a_i ≠ argmax_{a_i} Q̂_i(s, a_i), (12)

where α̂_i(s, a_i) = E_{C_i ∼ Pr(C_i | N\{i})}[ δ̂_i(s, a_i; a_{C_i}) ]. To handle partial observability, Q̂_i(τ_i, a_i) is empirically represented as a recurrent neural network (RNN) with GRUs [23]. δ̂_i(s, a_i; a_{C_i}) is directly approximated by a parametric function F_s + 1, and thus α̂_i(s, a_i) can be expressed as

α̂_i(s, a_i) = (1/M) Σ_{k=1}^{M} F_s( Q̂_{C_i^k}(τ_{C_i^k}, a_{C_i^k}), Q̂_i(τ_i, a_i) ) + 1, (13)

where Q̂_{C_i^k}(τ_{C_i^k}, a_{C_i^k}) = (1/|C_i^k|) Σ_{j∈C_i^k} Q̂_j(τ_j, a_j), and C_i^k is sampled M times from Pr(C_i | N\{i}) (i.e., implemented as Remark 1 suggests) to approximate E_{C_i ∼ Pr(C_i | N\{i})}[ δ̂_i(s, a_i; a_{C_i}) ] by Monte Carlo approximation; F_s is a monotonic function, followed by an absolute activation function, whose weights are generated from hyper-networks w.r.t. the global state. We show that Eq. 13 satisfies the condition on w_i(s, a_i) in Theorem 1 (see Appendix E.6.1), so it is a reliable implementation. Using the framework of fitted Q-learning [24] to handle a large (usually infinite) number of states and plugging in the above designed modules, the practical least-square-error loss function derived from Eq. 9 is stated as follows:

min_{θ, φ} E_{s, τ, a, R, τ′}[ ( R + γ Σ_{i∈N} max_{a_i} Q̂_i(τ′_i, a_i; θ⁻) − Σ_{i∈N} δ̂_i(s, a_i; φ) Q̂_i(τ_i, a_i; θ) )² ], (14)

where all agents share the parameters of Q̂_i(τ_i, a_i; θ) and α̂_i(s, a_i; φ) respectively, and Q̂_i(τ′_i, a_i; θ⁻) works as the target, with θ⁻ periodically updated. The general training procedure follows the paradigm of DQN [25], with a replay buffer to store the online collection of agents' episodes. To depict an overview of the algorithm, the pseudo code is shown in Appendix A. 5 Related Work Value Factorisation in MARL. To deal with the instability during training in the global reward game with independent learners [26], centralised training with decentralised execution (CTDE) [22] was proposed and became a general paradigm for MARL. Based on CTDE, MADDPG [27] learns a global Q-value that can be regarded as assigning the same credit to all agents during training [13], which may cause unfair credit assignment [28]. To avoid this problem, VDN [8] was proposed to learn a factorised Q-value, assuming that any global Q-value equals the sum of decentralised Q-values. Nevertheless, this factorisation may limit the representation of the global Q-value. To mitigate this issue, QMIX [9] and QTRAN [10] were proposed to represent the global Q-value with a richer class of functions w.r.t. decentralised Q-values, based on the assumption (called Individual-Global-Max) of convergence to the optimal joint deterministic policy. The Markov Shapley value proposed in this paper belongs to the family of value factorisation, based on the game-theoretical framework called MCG that enjoys interpretability. 
From conventional cooperative games (e.g., the network flow game [29], the induced subgraph game [30] that can be used for modelling social networks, and the facility location game [31]), it is insightful that the coalition introduced in this paper exists. In many scenarios, however, the information of the coalition might be unknown. Therefore, a latent coalition is assumed, and we only need to concentrate on the observable information, e.g., the global reward. Relationship to VDN. By setting δ_i(s, a_i) = 1 for all state-action pairs, SHAQ degrades to VDN [8]. Although VDN tried to tackle the problem of dummy agents, Sunehag et al. [8] did not give a theoretical guarantee on identifying them. The Markov Shapley value theory proposed in this paper addresses this issue from both theoretical and empirical aspects. These aspects show that VDN is a subclass of SHAQ. The theoretical framework proposed in this paper answers why VDN works well in most scenarios but performs poorly in some others (i.e., δ_i(s, a_i) = 1 in Eq. 9 is incorrectly defined over the suboptimal actions). Relationship to COMA. Compared with COMA [7], each agent i's credit assignment Q̄_i(s, a_i) is mathematically expressed as follows:

Q̄_i(s, a_i) = Q̄^{π}(s, a) − Q̄^{π}_{−i}(s, a_{−i}), where Q̄^{π}_{−i}(s, a_{−i}) = Σ_{a_i} π_i(a_i | s) Q̄^{π}(s, (a_{−i}, a_i)),

and the subscript −i indicates the agents excluding i. Q̄_i(s, a_i) can be seen as the action marginal contribution between the grand coalition Q-value and the coalition Q-value excluding agent i, under some permutation to form the grand coalition wherein agent i is located at the last position. Efficiency is obviously violated (i.e., the sum of the optimal action marginal contributions defined here is unlikely to be equal to the optimal grand coalition Q-value). In contrast to COMA, SHAQ considers all permutations to form the grand coalition and thereby preserves efficiency. Relationship to Independent Learning. Independent learning (e.g. IQL [26]) can also be seen as a special credit assignment; however, the credit assigned to each agent still has no intuitive interpretation. Mathematically, suppose that Q̄_i(s, a_i) is the independent Q-value of agent i; we can rewrite it in a form consisting of action marginal contributions such that Q̄_i(s, a_i) = E_{C_i ∼ Pr(C_i | N\{i})}[ Φ̄_i(s, a_i | C_i) ]. It is intuitive to see that the independent Q-value is a direct approximation of the MSQ that ignores coalition formation, while SHAQ considers coalition formation in the approximation. This explains why independent learning works well in some cooperative tasks [32]. Nevertheless, it encounters the same issue as COMA: the loss of the properties that stem from coalition formation. Relationship to SQDDPG. We now discuss the relationship between SQDDPG [13] and SHAQ. In terms of algorithms, SQDDPG belongs to policy gradient methods (i.e. an approximation of policy iteration) while SHAQ belongs to value-based methods (i.e. an approximation of value iteration). Since policy iteration (with one-step policy evaluation) is equivalent to value iteration [33] (at least under a finite state space and a finite action space), the theory behind SHAQ directly fills the gap in SQDDPG on theoretical guarantees of convergence to the optimal joint policy. Specifically, the learning procedure of SQDDPG iteratively performs the following two stages:

Stage 1: min_θ E_{s, a, R, s′}[ ( R + γ Σ_{i∈N} Q̂^{Φ}_i(s′, a′_i; θ⁻) − Σ_{i∈N} Q̂^{Φ}_i(s, a_i; θ) )² ].
Stage 2: π_i(s) ∈ argmax_{a_i} Q̂^{Φ}_i(s, a_i; θ).

It can be observed that both SQDDPG and SHAQ ideally converge to the same optimal MSQs w.r.t. 
the optimal actions, such that

E_{s, s′}[ ( max_a R(s, a) + γ Σ_{i∈N} max_{a′_i} Q̂^{Φ*}_i(s′, a′_i) − Σ_{i∈N} max_{a_i} Q̂^{Φ*}_i(s, a_i) )² ] = 0.

However, for suboptimal actions, SQDDPG does not provide any theoretical guarantee, whereas SHAQ does, with the specific implementation shown in Eq. 13 matching the theoretical results of this paper. Note that this is critical to reliable interpretations of the optimal MSQ w.r.t. suboptimal actions (e.g., for detecting adversarial attacks on controllers if deployed in industry [34]). 6 Experiments In this section, we show the experimental results of SHAQ on Predator-Prey [17] and various tasks in the StarCraft Multi-Agent Challenge (SMAC). The baselines that we select for comparison are COMA [7], VDN [8], QMIX [9], MASAC [36], QTRAN [10], QPLEX [37] and W-QMIX (including CW-QMIX and OW-QMIX) [35]. The implementation details of our algorithm are shown in Appendix B.1, whereas the implementations of the baselines are from [35]. We also compare SHAQ with SQDDPG [13], as shown in Appendix C.3. For all experiments, we use the ε-greedy exploration strategy, where ε is annealed from 1 to 0.05. The annealing time steps vary among the experiments. For Predator-Prey, we apply 1 million time steps for annealing, following the setup from [37]. For the easy and hard maps in SMAC, we apply 50k time steps for annealing, the same as in [18]; for the super-hard maps in SMAC, we apply 1 million time steps for annealing to obtain more exploration so that more state-action pairs can be visited. We set the replay buffer size to 5000 for all algorithms, the same as in [35]. To fairly evaluate all algorithms, we run each experiment with 5 random seeds. All graphs showing experimental results are plotted with the median and the 25%-75% quartile shading. (The SMAC version that we use in this paper is SC2.4.6.2.69232 rather than the newer SC2.4.10; as reported in [35], performance is not comparable across versions. The source code of the baseline implementations is from https://github.com/oxwhirl/wqmix, and the code of SQDDPG is implemented based on https://github.com/hsvgbkhgbv/SQDDPG.) Regarding the interpretability of the algorithms, we evaluate them with both an ε-greedy policy (i.e., ε = 0.8), to obtain a mixture of optimal and suboptimal actions, and a greedy policy, to obtain purely optimal actions. The ablation study of SHAQ is shown in Appendix C.4. 6.1 Predator-Prey We first run experiments on a partially observable task called Predator-Prey [17], wherein 8 controllable predators aim to capture 8 preys with random policies in a 10x10 grid world. Each agent's observation is a 5x5 sub-grid centering around it. If a prey is captured through the coordination of 2 agents, the predators are rewarded by 10. On the other hand, each unsuccessful attempt by only 1 agent is punished by a negative reward p. In this experiment, we study the behaviours of each algorithm under different values of p (which describe different required levels of coordination). As [35] reported, only QTRAN and W-QMIX can solve this task, while [37] found that the failure was primarily due to the lack of exploration. As a result, we apply the identical epsilon annealing schedule (i.e. 1 million time steps) adopted in [37].
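For concreteness, a small sketch of the annealing setup described above. The linear interpolation is our assumption, since the text only specifies the endpoints (1 to 0.05) and the annealing lengths; names are illustrative.

```python
import random

def epsilon(step, start=1.0, end=0.05, anneal_steps=1_000_000):
    """Exploration rate annealed from `start` to `end` over `anneal_steps`,
    assuming a linear schedule, then held constant."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)

def epsilon_greedy(q_values, step, rng=random):
    """Pick a random action with probability epsilon(step), else the greedy one."""
    if rng.random() < epsilon(step):
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```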
Performance Analysis. As Figure 1 shows, SHAQ can always solve the tasks with different values of p. With the epsilon annealing strategy from [37], W-QMIX does not perform as well as reported in [35]. The reason could be its poor robustness to increased exploration [35] in this environment (see the evidential experimental results in Appendix C.6). The good performance of VDN validates our analysis in Section 5, whereas the performance of QTRAN is surprisingly almost invariant to the value of p. The performances of QPLEX and QMIX become obviously worse when p=-2. The failure of MASAC and COMA could be due to relative overgeneralisation, which prevents policy gradient methods from better coordination [39]. (Relative overgeneralisation is a common game-theoretic pathology in which suboptimal actions are preferred when matched with arbitrary actions from the collaborating agents [38].) Interpretability of SHAQ. To verify that SHAQ possesses interpretability, we show its credit assignment on Predator-Prey. As we see from Figure 2b, two agents are around and capture a prey, so both of them perform the optimal actions and deserve almost equal optimal credit assignments of 4.2927 and 4.0644, which verifies our theoretical claim. From Figure 2a, it can be seen that two agents are far away from the preys, so they receive low credits of 2.4709 and 2.8435. On the other hand, the other two agents are around a prey but do not perform the optimal action "capture", so they receive less credit than the two agents in Figure 2b. Nevertheless, since they are around a prey, they perform better than the agents that are far away from preys and receive comparatively greater credits of 3.2933 and 3.1159. The coherent credit assignments in both Figure 2a and 2b imply that the assigned credits reflect agents' contributions (verifying (iii) in Proposition 2), i.e., each agent receives a credit that is consistent with its decision. 6.2 StarCraft Multi-Agent Challenge We next evaluate SHAQ on the more challenging SMAC tasks, with the same environmental settings as in [18]. To broadly compare the performance of SHAQ with the baselines, we select 4 easy maps: 8m, 3s5z, 1c3s5z and 10m_vs_11m; 3 hard maps: 5m_vs_6m, 3s_vs_5z and 2c_vs_64zg; and 4 super-hard maps: 3s5z_vs_3s6z, Corridor, MMM2 and 6h_vs_8z. All training is through online data collection. Due to limited space, we only show partial results in the main part of the paper and leave the rest to Appendix C.1. Performance Analysis. Figure 3 shows that SHAQ outperforms all baselines on all maps, except for 6h_vs_8z. On 6h_vs_8z, SHAQ can beat all baselines except for CW-QMIX. VDN performs well on 4 maps but badly on the other 2, which again verifies our analysis in Section 5. QMIX and QPLEX perform well on most maps, except for 3s_vs_5z, 2c_vs_64zg and 6h_vs_8z. As for COMA, MADDPG and MASAC, their poor performances could be due to weak adaptability to challenging tasks. Although QTRAN can theoretically represent the complete class of global Q-values [10], its complicated learning paradigm could impede convergence to the value function for challenging tasks and therefore result in poor performance. Although W-QMIX performs well on some maps, owing to the lack of a rule for hyperparameter tuning [35] it is difficult to adapt to all scenarios (see Appendix C.2). Interpretability of SHAQ. To further show the interpretability of SHAQ, we also conduct a test on 3m (i.e. a simple task in SMAC). As seen from Figure 4a, Agent 3 faces the direction opposite to the enemies and, meanwhile, the enemies are out of its attacking range. 
This can be understood as Agent 3 not contributing to the team, so it is almost a dummy agent. Its MSQ is 0.84 (around 0), which correctly captures the behaviour of a dummy agent (verifying (i) in Proposition 2). In contrast, Agent 1 and Agent 2 are attacking enemies, while Agent 1 suffers from more attacks (with lower health) than Agent 2. As a result, Agent 1 contributes more than Agent 2 and therefore its MSQ is greater, which implies that the credits reflect agents' contributions (verifying (iii) in Proposition 2). On the other hand, we can see from Figure 4e that with the optimal policies all agents receive almost identical MSQs (verifying the theoretical results in Section 4.1). The above results verify the theoretical analysis delivered earlier. To show that the MSQs learned by SHAQ are non-trivial, we also show the results of VDN, QMIX and QPLEX. It is surprising that the Q-values of these baselines are also almost identical among agents for the optimal actions (however, this property disappears in more complicated scenarios, as shown in Appendix C.5, while the property of SHAQ remains valid). Since VDN is a subclass of SHAQ and possesses the same form of loss function for optimal actions, it is reasonable that it obtains results similar to SHAQ's. As for suboptimal actions, VDN does not possess an explicit interpretation like SHAQ, due to the incorrect definition of δ_i(s, a_i) = 1 over suboptimal actions (verifying the statement in Section 5). The values of QMIX and QPLEX are difficult to explain. 7 Conclusion Summary. This paper generalises the Shapley value to the Markov convex game, called the Markov Shapley value. The Markov Shapley value inherits a number of properties: (i) identifiability of dummy agents; (ii) efficiency; (iii) reflecting the contribution; and (iv) symmetry. Based on Property (ii), we derive the Shapley-Bellman optimality equation, the Shapley-Bellman operator and SHAQ. We prove that solving the Shapley-Bellman optimality equation is equivalent to solving the Markov core (i.e., no agent has incentives to deviate from the grand coalition). The Markov convex game with the grand coalition is equivalent to the global reward game [13], wherein the Markov Shapley value plays the role of value factorisation. Since SHAQ is a stochastic approximation of the Shapley-Bellman operator, which is proved to solve the Shapley-Bellman optimality equation, the global reward game with value factorisation becomes valid from the standpoint of the cooperative game theoretical framework (i.e. solving the Markov core). Properties (i) and (iii) in Proposition 2 are demonstrated in the experiments, showing the interpretability of SHAQ. Limitation and Future Work. The value of the Markov convex game is not limited to solving problems with the grand coalition, though in this paper we design SHAQ to focus only on the scenario with the grand coalition. By removing the condition of supermodularity (see Eq. 1), this framework can be used to study more general coalition games where different coalitions of agents, as units, may compete or cooperate with each other. Since the grand coalition and the Markov Shapley value are then no longer guaranteed to form a solution in the Markov core, the learning process becomes more complicated to converge to the Markov core. A possible research direction in the future is to investigate dynamically forming the coalition structure and conducting credit assignments simultaneously. Acknowledgements This work is sponsored by the Engineering and Physical Sciences Research Council of UK (EPSRC) under award EP/S000909/1. 
Tae-Kyun Kim is partly sponsored by KAIA grant (22CTAP-C16379302, MOLIT), NST grant (CRC 21011, MSIT), KOCCA grant (R2022020028, MCST) and the Samsung Display corporation. Yuan Zhang is sponsored by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 953348 (ELO-X).
1. What is the main contribution of the paper regarding multiagent reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its application of cooperative game theory tools? 3. What are some questions that the reviewer has regarding the paper, such as its limitations and potential compatibility with other approaches? 4. How does the reviewer assess the clarity and impact of the paper's writing and presentation?
Summary Of The Paper The paper considers multiagent reinforcement learning in a global (cooperative) reward game. It contrasts its results with value factorization frameworks, and proposes an alternative via the Shapley value from cooperative game theory. Basically, the authors consider a form of game with coalition structures, apply the Shapley value to decompose the reward, and derive a Shapley-Bellman optimality equation (SBOE) corresponding to the optimal joint deterministic policy. They propose a Shapley-Bellman operator (SBO) that solves the SBOE. These finally give rise to a new multiagent reinforcement learning algorithm, called Shapley Q-learning, SHAQ for short, somewhat akin to existing value factorization methods. Empirically, on a few settings (predator-prey and StarCraft) SHAQ exhibits better performance than existing approaches, and also provides some interpretability foundation. Strengths And Weaknesses The key strength of the paper is in applying cooperative game theory tools to multiagent Q-learning. Recently the Shapley value has become a very popular tool in machine learning due to its ability to decompose the performance of a model into the relative influence of specific features. This has proven a very strong tool for analyzing supervised learning models. The authors now propose to use this theoretical foundation for multiagent reinforcement learning. The key weakness in my opinion is not having a clear, crisp takeaway from this work. If the main claim is superior performance on multiagent reinforcement learning, then the empirical analysis seems somewhat lacking, as it covers relatively few domains (there are now enough multiagent gyms that allow a wider variety of tasks). If the main claim is a theoretical foundation, then one might expect a tighter analysis and bounds compared to existing approaches. Either way, I think the writing is very formal and could be improved. What is the main driving intuition here? MARL is typically considered through a non-cooperative game theory prism (Markov game). Here you are trying to use cooperative game theory, which means you consider subsets of agents, and have some function mapping each such subteam to its success in the task. Then, one might view the Shapley value as a decomposition allocating each single agent its individual reward / impact in the team's success. But why use the Shapley value rather than other solution concepts, such as the core (which you mention), the least-core, the nucleolus, the kernel, or other similar power indices such as the Banzhaf index? Are you using some of the axiomatic foundations of the Shapley value? If so, then where? All in all, I really love the topic of the paper, but the execution could be improved (more domains for empirical evaluation, tighter theoretical bounds versus baselines). And the writing should focus on the intuitions before jumping to the technical definitions. Questions What happens in domains which are not completely cooperative (team reward), such as social dilemmas or mixed-motive games? Does the algorithm still run? Does it fail? What happens when agents are trained using a mixture of algorithms (a SHAQ agent alongside other approaches); are they compatible? Can you replace the Shapley value with other cooperative solution concepts (the Banzhaf index seems to be the closest, basically your equation just with different weights for the subteams), or does the whole method fail? 
Limitations As I wrote, the empirical analysis is somewhat limited (but certainly a decent foundation). Also, the writing could be improved: at the very least I'd give the formal definitions of a transferable-utility cooperative game, coalition structures, and the core (as applied to a general coalition structure or characteristic function game). The paper does a better job on the RL side (where things are fully defined). Also, you should include a discussion of what happens in non-team-reward (not fully cooperative) settings. All in all, a very interesting paper, if only for the nice connection between RL and cooperative game theory.
NIPS
Title NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction Abstract We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state-of-the-art methods in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion. 1 Introduction Reconstructing surfaces from multi-view images is a fundamental problem in computer vision and computer graphics. 3D reconstruction with neural implicit representations has recently become a highly promising alternative to classical reconstruction approaches [35, 8, 2] due to its high reconstruction quality and its potential to reconstruct complex objects that are difficult for classical approaches, such as non-Lambertian surfaces and thin structures. Recent works represent surfaces as signed distance functions (SDFs) [46, 49, 17, 22] or occupancy [29, 30]. To train their neural models, these methods use a differentiable surface rendering method to render a 3D object into images and compare them against the input images for supervision. For example, IDR [46] produces impressive reconstruction results, but it fails to reconstruct objects with complex structures that cause abrupt depth changes. The cause of this limitation is that the surface rendering method used in IDR only considers a single surface intersection point for each ray. Consequently, the gradient only exists at this single point, which is too local for effective back-propagation and would get the optimization stuck in a poor local minimum when there are abrupt changes of depth in the images. Furthermore, object masks are needed as supervision for converging to a valid surface. As illustrated in Fig. 1 (a) top, with the radical depth change caused by the hole, the neural network would incorrectly predict the points near the front surface to be blue, failing to find the far-back blue surface. The actual test example in Fig. 1 (b) shows that IDR fails to correctly reconstruct the surfaces near the edges with abrupt depth changes. 
Recently, NeRF [28] and its variants have explored using a volume rendering method to learn a volumetric radiance field for novel view synthesis. This volume rendering approach samples multiple points along each ray and performs α-composition of the colors of the sampled points to produce the output pixel colors for training purposes. The advantage of the volume rendering approach is that it can handle abrupt depth changes, because it considers multiple points along the ray, so all the sample points, either near the surface or on the far surface, produce gradient signals for back-propagation. For example, referring to Fig. 1 (a) bottom, when the near surface (yellow) is found to have inconsistent colors with the input image, the volume rendering approach is capable of training the network to find the far-back surface to produce the correct scene representation. However, since it is intended for novel view synthesis rather than surface reconstruction, NeRF only learns a volume density field, from which it is difficult to extract a high-quality surface. Fig. 1 (b) shows a surface extracted as a level-set surface of the density field learned by NeRF. Although the surface correctly accounts for abrupt depth changes, it contains conspicuous noise in some planar regions. In this work, we present a new neural rendering scheme, called NeuS, for multi-view surface reconstruction. NeuS uses the signed distance function (SDF) for surface representation and uses a novel volume rendering scheme to learn a neural SDF representation. Specifically, by introducing a density distribution induced by the SDF, we make it possible to apply the volume rendering approach to learning an implicit SDF representation and thus have the best of both worlds, i.e. an accurate surface representation using a neural SDF model and robust network training in the presence of abrupt depth changes, as enabled by volume rendering. Note that simply applying a standard volume rendering method to the density associated with the SDF would lead to discernible bias (i.e. inherent geometric errors) in the reconstructed surfaces. This is a new and important observation that we will elaborate on later. Therefore we propose a novel volume rendering scheme to ensure unbiased surface reconstruction in the first-order approximation of SDF. Experiments on both the DTU dataset and the BlendedMVS dataset demonstrate that NeuS is capable of reconstructing complex 3D objects and scenes with severe occlusions and delicate structures, even without foreground masks as supervision. It outperforms the state-of-the-art neural scene representation methods, namely IDR [46] and NeRF [28], in terms of reconstruction quality. 2 Related Works Classical Multi-view Surface and Volumetric Reconstruction. Traditional multi-view 3D reconstruction methods can be roughly classified into two categories: point- and surface-based reconstruction [2, 8, 9, 35] and volumetric reconstruction [6, 3, 36]. Point- and surface-based reconstruction methods estimate the depth map of each pixel by exploiting inter-image photometric consistency [8] and then fuse the depth maps into a global dense point cloud [25, 48]. The surface reconstruction is usually done as a post-processing step with methods like screened Poisson surface reconstruction [16]. The reconstruction quality heavily relies on the quality of correspondence matching, and the difficulty of matching correspondences for objects without rich textures often leads to severe artifacts and missing parts in the reconstruction results. 
Alternatively, volumetric reconstruction methods circumvent the difficulty of explicit correspondence matching by estimating occupancy and color in a voxel grid from multi-view images and evaluating the color consistency of each voxel. Due to the limited achievable voxel resolution, these methods cannot achieve high accuracy. Neural Implicit Representation. Some methods enforce 3D understanding in a deep learning framework by introducing inductive biases. These inductive biases can be explicit representations, such as voxel grids [13, 5, 44], point clouds [7, 24, 18], meshes [41, 43, 14], or implicit representations. Implicit representations encoded by a neural network have gained a lot of attention recently, since they are continuous and can achieve high spatial resolution. This representation has been applied successfully to shape representation [26, 27, 31, 4, 1, 10, 47, 32], novel view synthesis [38, 23, 15, 28, 21, 33, 34, 40, 37] and multi-view 3D reconstruction [46, 29, 17, 12, 22]. Our work mainly focuses on learning an implicit neural representation encoding both geometry and appearance in 3D space from 2D images via classical rendering techniques. Within this scope, the related works can be roughly categorized based on the rendering techniques used, i.e. surface rendering based methods and volume rendering based methods. Surface rendering based methods [29, 17, 46, 22] assume that the color of a ray relies only on the color of the intersection of the ray with the scene geometry, which means the gradient is only backpropagated to a local region near the intersection. Therefore, such methods struggle with reconstructing complex objects with severe self-occlusions and sudden depth changes. Furthermore, they usually require object masks as supervision. On the contrary, our method performs well for such challenging cases without the need for masks. Volume rendering based methods, such as NeRF [28], render an image by α-compositing the colors of the sampled points along each ray. As explained in the introduction, this can handle sudden depth changes and synthesize high-quality images. However, extracting high-fidelity surfaces from the learned implicit field is difficult because the density-based scene representation lacks sufficient constraints on its level sets. In contrast, our method combines the advantages of surface rendering based and volume rendering based methods by representing the scene as a signed distance function while applying volume rendering to train this representation with robustness. UNISURF [30], a concurrent work, also learns an implicit surface via volume rendering. It improves the reconstruction quality by shrinking the sample region of volume rendering during the optimization. Our method differs from UNISURF in that UNISURF represents the surface by occupancy values, while our method represents the scene by an SDF and thus can naturally extract the surface as its zero-level set, yielding better reconstruction accuracy than UNISURF, as will be seen later in the experiment section. 3 Method Given a set of posed images {Ik} of a 3D object, our goal is to reconstruct its surface S. The surface is represented by the zero-level set of a neural implicit SDF. In order to learn the weights of the neural network, we develop a novel volume rendering method to render images from the implicit SDF and minimize the difference between the rendered images and the input images. 
This volume rendering approach ensures robust optimization in NeuS for reconstructing objects of complex structure. 3.1 Rendering Procedure Scene representation. With NeuS, the scene of an object to be reconstructed is represented by two functions: f : R³ → R, which maps a spatial position x ∈ R³ to its signed distance to the object, and c : R³ × S² → R³, which encodes the color associated with a point x ∈ R³ and a viewing direction v ∈ S². Both functions are encoded by multi-layer perceptrons (MLPs). The surface S of the object is represented by the zero-level set of its SDF, that is,

S = { x ∈ R³ | f(x) = 0 }. (1)

In order to apply a volume rendering method to training the SDF network, we first introduce a probability density function φs(f(x)), called S-density, where f(x), x ∈ R³, is the signed distance function and φs(x) = s e^{−sx} / (1 + e^{−sx})², commonly known as the logistic density distribution, is the derivative of the sigmoid function Φs(x) = (1 + e^{−sx})^{−1}, i.e., φs(x) = Φ′s(x). In principle φs(x) can be any unimodal (i.e. bell-shaped) density distribution centered at 0; here we choose the logistic density distribution for its computational convenience. Note that the standard deviation of φs(x) is given by 1/s, which is also a trainable parameter; 1/s approaches zero as the network training converges. Intuitively, the main idea of NeuS is that, with the aid of the S-density field φs(f(x)), volume rendering is used to train the SDF network with only 2D input images as supervision. Upon successful minimization of a loss function based on this supervision, the zero-level set of the network-encoded SDF is expected to represent an accurately reconstructed surface S, with its induced S-density φs(f(x)) assuming prominently high values near the surface. Rendering. To learn the parameters of the neural SDF and color field, we devise a volume rendering scheme to render images from the proposed SDF representation. Given a pixel, we denote the ray emitted from this pixel as {p(t) = o + tv | t ≥ 0}, where o is the center of the camera and v is the unit direction vector of the ray. We accumulate the colors along the ray by

C(o, v) = ∫₀^{+∞} w(t) c(p(t), v) dt, (2)

where C(o, v) is the output color for this pixel, w(t) a weight for the point p(t), and c(p(t), v) the color at the point p(t) along the viewing direction v. Requirements on the weight function. The key to learning an accurate SDF representation from 2D images is to build an appropriate connection between output colors and the SDF, i.e., to derive an appropriate weight function w(t) on the ray based on the SDF f of the scene. In the following, we list the requirements on the weight function w(t). 1. Unbiased. Given a camera ray p(t), w(t) attains a locally maximal value at a surface intersection point p(t*), i.e. with f(p(t*)) = 0; that is, the point p(t*) is on the zero-level set of the SDF f(x). 2. Occlusion-aware. Given any two depth values t0 and t1 satisfying f(p(t0)) = f(p(t1)), w(t0) > 0, w(t1) > 0, and t0 < t1, there is w(t0) > w(t1). That is, when two points have the same SDF value (thus the same SDF-induced S-density value), the point nearer to the viewpoint should have a larger contribution to the final output color than the other point. An unbiased weight function w(t) guarantees that the intersection of the camera ray with the zero-level set of the SDF contributes most to the pixel color. The occlusion-aware property ensures that when a ray sequentially passes multiple surfaces, the rendering procedure will correctly use the color of the surface nearest to the camera to compute the output color.
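As a quick illustration of the logistic S-density defined above, here is a minimal NumPy sketch (function names are ours, for illustration only):

```python
import numpy as np

def sigmoid(x, s):
    """Phi_s(x) = (1 + e^{-s x})^{-1}."""
    return 1.0 / (1.0 + np.exp(-s * x))

def s_density(x, s):
    """phi_s(x) = s e^{-s x} / (1 + e^{-s x})^2, the derivative of Phi_s.
    Its standard deviation is 1/s, so it sharpens around the surface as s grows."""
    e = np.exp(-s * x)
    return s * e / (1.0 + e) ** 2

sdf = np.linspace(-0.5, 0.5, 5)   # signed distances sampled along a ray
print(s_density(sdf, s=10.0))     # peaks at sdf = 0, i.e. on the surface
```

Evaluating the density for increasing s shows the bell shape concentrating on the zero-level set, which is exactly the behavior the training objective encourages.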
Next, we will first introduce a naive way of defining the weight function w(t) that directly uses the standard pipeline of volume rendering, and explain why it is not appropriate for reconstruction, before introducing our novel construction of w(t). Naive solution. To make the weight function occlusion-aware, a natural solution is based on the standard volume rendering formulation [28], which defines the weight function by

w(t) = T(t) σ(t), (3)

where σ(t) is the so-called volume density in classical volume rendering and T(t) = exp(−∫₀^t σ(u) du) here denotes the accumulated transmittance along the ray. To adopt the standard volume density formulation [28], here σ(t) is set to be equal to the S-density value, i.e. σ(t) = φs(f(p(t))), and the weight function w(t) is computed by Eqn. 3. Although the resulting weight function is occlusion-aware, it is biased, as it introduces inherent errors in the reconstructed surfaces. As illustrated in Fig. 2 (a), the weight function w(t) attains a local maximum at a point before the ray reaches the surface point p(t*) satisfying f(p(t*)) = 0. This fact will be proved in the supplementary material. Our solution. To introduce our solution, we first present a straightforward way to construct an unbiased weight function, which directly uses the normalized S-density as weights:

w(t) = φs(f(p(t))) / ∫₀^{+∞} φs(f(p(u))) du. (4)

This construction of the weight function is unbiased, but not occlusion-aware. For example, if the ray penetrates two surfaces, the SDF f will have two zero points on the ray, which leads to two peaks in the weight function w(t), and the resulting weight function will equally blend the colors of the two surfaces without considering occlusions. We shall now design a weight function w(t) that is both occlusion-aware and unbiased in the first-order approximation of SDF, based on the aforementioned straightforward construction. To ensure the occlusion-aware property of the weight function w(t), we will still follow the basic framework of volume rendering as in Eqn. 3. However, different from the conventional treatment in the naive solution above, we define our function w(t) from the S-density in a new manner. We first define an opaque density function ρ(t), which is the counterpart of the volume density σ in standard volume rendering. Then we compute the new weight function w(t) by

w(t) = T(t) ρ(t), where T(t) = exp( −∫₀^t ρ(u) du ). (5)

How we derive the opaque density ρ. We will first consider a simple case where there is only one surface intersection, and the surface is simply a plane. Since Eqn. 4 indeed satisfies the above requirements under this assumption, we derive the underlying opaque density ρ corresponding to the weight definition of Eqn. 4 using the framework of volume rendering. Then we will generalize this opaque density to the general case of multiple surface intersections. Specifically, in the simple case of a single plane intersection, it is easy to see that the signed distance function f(p(t)) is −|cos(θ)| · (t − t*), where f(p(t*)) = 0 and θ is the angle between the view direction v and the outward surface normal vector n. Because the surface is assumed to be locally planar, |cos(θ)| is a constant.
It follows from Eqn. 4 that

w(t) = φs(f(p(t))) / ∫_{−∞}^{+∞} φs(f(p(u))) du = φs(f(p(t))) / ∫_{−∞}^{+∞} φs(−|cos(θ)| · (u − t*)) du = φs(f(p(t))) / ( |cos(θ)|^{−1} · ∫_{−∞}^{+∞} φs(u − t*) du ) = |cos(θ)| φs(f(p(t))). (6)

Recall that the weight function within the framework of volume rendering is given by w(t) = T(t) ρ(t), where T(t) = exp(−∫₀^t ρ(u) du) denotes the accumulated transmittance. Therefore, to derive ρ(t), we have

T(t) ρ(t) = |cos(θ)| φs(f(p(t))). (7)

Since T(t) = exp(−∫₀^t ρ(u) du), it is easy to verify that T(t) ρ(t) = −(dT/dt)(t). Further, note that |cos(θ)| φs(f(p(t))) = −(dΦs/dt)(f(p(t))). It follows that (dT/dt)(t) = (dΦs/dt)(f(p(t))). Integrating both sides of this equation yields

T(t) = Φs(f(p(t))). (8)

Taking the logarithm and then differentiating both sides, we have

∫₀^t ρ(u) du = −ln(Φs(f(p(t)))) ⇒ ρ(t) = −(dΦs/dt)(f(p(t))) / Φs(f(p(t))). (9)

This is the formula of the opaque density ρ(t) in the case of a single plane intersection. The weight function w(t) induced by ρ(t) is shown in Figure 2 (b). Now we generalize the opaque density to the general case where there are multiple surface intersections along the ray p(t). In this case, −(dΦs/dt)(f(p(t))) becomes negative on the segments of the ray with increasing SDF values. Thus we clip it against zero to ensure that the value of ρ is always non-negative. This gives the following opaque density function ρ(t) for general cases:

ρ(t) = max( −(dΦs/dt)(f(p(t))) / Φs(f(p(t))), 0 ). (10)

Based on this equation, the weight function w(t) can be computed with standard volume rendering as in Eqn. 5. An illustration for the case of multiple surface intersections is shown in Figure 3. The following theorem states that in general cases (i.e., including both single and multiple surface intersections) the weight function defined by Eqn. 10 and Eqn. 5 is unbiased in the first-order approximation of SDF. The proof is given in the supplementary material. Theorem 1 Suppose that a smooth surface S is defined by the zero-level set of the signed distance function f(x) = 0, and a ray p(t) = o + tv enters the surface S from outside to inside, with the intersection point at p(t*), that is, f(p(t*)) = 0, and there exists an interval [tl, tr] such that t* ∈ [tl, tr] and f(p(t)) is monotonically decreasing in [tl, tr]. Suppose that in this local interval [tl, tr] the surface can be tangentially approximated by a sufficiently small planar patch, i.e., ∇f is regarded as fixed. Then, the weight function w(t) computed by Eqn. 10 and Eqn. 5 in [tl, tr] attains its maximum at t*. Discretization. To obtain discrete counterparts of the opacity and weight function, we adopt the same approximation scheme as used in NeRF [28]. This scheme samples n points {pi = o + ti v | i = 1, ..., n, ti < ti+1} along the ray to compute the approximate pixel color of the ray as

Ĉ = Σ_{i=1}^{n} Ti αi ci, (11)

where Ti is the discrete accumulated transmittance defined by Ti = Π_{j=1}^{i−1} (1 − αj), and αi is the discrete opacity value defined by

αi = 1 − exp( −∫_{ti}^{ti+1} ρ(t) dt ), (12)

which can further be shown to be

αi = max( ( Φs(f(p(ti))) − Φs(f(p(ti+1))) ) / Φs(f(p(ti))), 0 ). (13)

The detailed derivation of this formula for αi is given in the supplementary material.
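Here is a minimal NumPy sketch of Eqns. 11-13 applied to SDF samples along one ray. The small epsilon is our numerical-stability assumption and the names are illustrative; this is a sketch of the discretization, not the released implementation.

```python
import numpy as np

def sigmoid(x, s):
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_weights(sdf, s, eps=1e-7):
    """Discrete weights w_i = T_i * alpha_i from SDF values at sorted samples.
    alpha follows Eq. 13; T_i is the cumulative product of (1 - alpha_j)."""
    phi = sigmoid(sdf, s)
    alpha = np.maximum((phi[:-1] - phi[1:]) / (phi[:-1] + eps), 0.0)  # Eq. 13
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]     # T_i
    return trans * alpha

# A ray crossing two surfaces: the SDF changes sign twice along the samples.
sdf = np.array([0.6, 0.2, -0.2, -0.5, 0.1, -0.3])
print(neus_weights(sdf, s=20.0))  # the first zero-crossing dominates; the second
                                  # is attenuated by the transmittance (occlusion-aware)
```

Running this shows the two properties discussed above: the weight peaks at the first zero-crossing (unbiased) and the second surface receives a much smaller weight (occlusion-aware).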
3.2 Training

To train NeuS, we minimize the difference between the rendered colors and the ground-truth colors, without any 3D supervision. Besides colors, we can also utilize masks for supervision if they are provided. Specifically, in every iteration we optimize our neural networks and the inverse standard deviation s by randomly sampling from an image a batch of pixels and their corresponding rays in world space, P = {C_k, M_k, o_k, v_k}, where C_k is the pixel color and M_k ∈ {0, 1} is the optional mask value. Let n be the point sampling size and m the batch size. The loss function is defined as

L = L_color + λ L_reg + β L_mask.   (14)

The color loss L_color is defined as

L_color = \frac{1}{m} \sum_k R(\hat{C}_k, C_k).   (15)

Following IDR [46], we empirically choose R to be the L1 loss, which in our observation is robust to outliers and stable in training. We add an Eikonal term [10] on the sampled points to regularize the SDF of f_θ:

L_reg = \frac{1}{nm} \sum_{k,i} (‖∇f(\hat{p}_{k,i})‖_2 − 1)^2.   (16)

The optional mask loss L_mask is defined as

L_mask = BCE(M_k, \hat{O}_k),   (17)

where \hat{O}_k = \sum_{i=1}^{n} T_{k,i} α_{k,i} is the sum of weights along the camera ray and BCE is the binary cross-entropy loss.

Hierarchical sampling. We follow a hierarchical sampling strategy similar to that of NeRF [28]: we first sample points uniformly along the ray and then iteratively conduct importance sampling on top of the coarse probability estimate. The difference is that, unlike NeRF, which simultaneously optimizes a coarse network and a fine network, we maintain only one network; the probability in coarse sampling is computed from the S-density φ_s(f(x)) with fixed standard deviations, while the probability in fine sampling is computed from φ_s(f(x)) with the learned s. Details of the hierarchical sampling strategy are provided in the supplementary material.
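A minimal sketch of assembling this objective (our paraphrase in PyTorch style; the tensor shapes, the name neus_loss, and the weights lam and beta are placeholder assumptions, not values from the paper):

import torch
import torch.nn.functional as F

def neus_loss(pred_rgb, gt_rgb, sdf_grads, pred_mask, gt_mask=None,
              lam=0.1, beta=0.1):
    """Eqns. 14-17: L1 color loss + Eikonal regularizer (+ optional mask BCE).

    pred_rgb, gt_rgb: (m, 3) rendered and ground-truth pixel colors
    sdf_grads:        (m, n, 3) gradients of f at the sampled points p_hat_{k,i}
    pred_mask:        (m,) sum of weights along each ray (O_hat_k)
    gt_mask:          (m,) optional binary mask values M_k
    """
    loss_color = F.l1_loss(pred_rgb, gt_rgb)                     # Eqn. 15
    loss_reg = ((sdf_grads.norm(dim=-1) - 1.0) ** 2).mean()      # Eqn. 16
    loss = loss_color + lam * loss_reg                           # Eqn. 14
    if gt_mask is not None:                                      # Eqn. 17
        loss = loss + beta * F.binary_cross_entropy(
            pred_mask.clamp(1e-5, 1.0 - 1e-5), gt_mask)
    return loss

m, n = 512, 64   # batch of rays, samples per ray (illustrative sizes)
loss = neus_loss(torch.rand(m, 3), torch.rand(m, 3),
                 torch.randn(m, n, 3), torch.rand(m),
                 gt_mask=torch.randint(0, 2, (m,)).float())

In practice the SDF gradients would come from automatic differentiation of the SDF network rather than the random tensors used here.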
4 Experiments

4.1 Experimental settings

Datasets. To evaluate our approach and the baseline methods, we use 15 scenes from the DTU dataset [11], the same scenes as used in IDR [46], covering a wide variety of materials, appearance and geometry, including cases that are challenging for reconstruction algorithms, such as non-Lambertian surfaces and thin structures. Each scene contains 49 or 64 images at a resolution of 1600×1200, and is tested both with and without the foreground masks provided by IDR [46]. We further test on 7 challenging scenes from the low-res set of the BlendedMVS dataset [45] (CC-4 License); each scene has 31-143 images at 768×576 pixels, with masks provided by the BlendedMVS dataset. We additionally captured two thin objects with 32 input images each to test our approach on thin-structure reconstruction.

Baselines. (1) The state-of-the-art surface rendering approach, IDR [46]: IDR reconstructs surfaces with high quality but requires foreground masks as supervision. Since IDR has demonstrated superior quality compared to another surface rendering based method, DVR [29], we did not compare against DVR. (2) The state-of-the-art volume rendering approach, NeRF [28]: we use a threshold of 25 to extract a mesh from the learned density field, and validate this choice in the supplementary material. (3) A widely used classical MVS method, COLMAP [35]: we reconstruct a mesh from the output point cloud of COLMAP with screened Poisson surface reconstruction [16]. (4) The concurrent work UNISURF [30], which unifies surface rendering and volume rendering with an occupancy field as the scene representation. More details of the baseline methods are included in the supplementary material.

4.2 Comparisons

We conducted the comparisons in two settings: with mask supervision (w/ mask) and without mask supervision (w/o mask). We measure reconstruction quality with the Chamfer distance, computed in the same way as in UNISURF [30] and IDR [46], and report the scores in Table 1. The results show that our approach outperforms the baseline methods on the DTU dataset in both settings in terms of the Chamfer distance. Note that the scores of IDR in the w/ mask setting, and of NeRF and UNISURF in the w/o mask setting, are taken from IDR [46] and UNISURF [30], respectively. We present qualitative comparisons on the DTU and BlendedMVS datasets in both settings in Figure 4 and Figure 5, respectively. As shown in Figure 4 for the w/ mask setting, IDR shows limited performance for reconstructing the thin metal parts in Scan 37 (DTU) and fails to handle sudden depth changes in Stone (BlendedMVS), due to the local optimization process of surface rendering. The meshes extracted from NeRF are noisy, since the volume density field does not sufficiently constrain the 3D geometry. For the w/o mask setting, we visually compare our method with NeRF and COLMAP in Figure 5, which shows that our reconstructed surfaces have higher fidelity than those of the baselines. We further compare with UNISURF [30] on two examples in the w/o mask setting, using the qualitative results reported in their paper; our method works better for objects with abrupt depth changes. More qualitative results are included in the supplementary material.

4.3 Analysis

Ablation study. To evaluate the effect of the weight construction, we test the three kinds of weight constructions described in Sec. 3.1: (a) the naive solution, (b) the straightforward construction of Eqn. 4, and (e) the full model. As shown in Figure 6, the naive solution yields a worse Chamfer distance than our weight choice (e) because it introduces a bias into the surface reconstruction, while the straightforward construction produces severe artifacts. We also study the effect of the Eikonal regularization [10] and the geometric initialization [1]. Without Eikonal regularization or geometric initialization, the Chamfer distance is on par with that of the full model; however, neither variant correctly outputs a signed distance function. This is indicated by the MAE (mean absolute error) between the SDF predictions and the corresponding ground-truth SDF, shown in the bottom row of Figure 6 and computed on uniformly sampled points in the object's bounding sphere. Qualitative results of the SDF predictions are provided in the supplementary material.

Thin structures. We additionally show results on two challenging thin objects captured with 32 input images; the richly textured plane under each object is used for camera calibration. As shown in Fig. 8, our method accurately reconstructs these thin structures, especially at edges with abrupt depth changes. Furthermore, unlike the methods [39, 19, 42, 20] that specifically target high-quality thin-structure reconstruction, our method can handle scenes containing a mixture of thin structures and general objects.

5 Conclusion

We have proposed NeuS, a new approach to multi-view surface reconstruction that represents 3D surfaces as a neural SDF and develops a new volume rendering method for training the implicit SDF representation. NeuS produces high-quality reconstructions and successfully handles objects with severe occlusions and complex structures.
It outperforms the state of the art both qualitatively and quantitatively. One limitation of our method is that, although it does not heavily rely on correspondence matching of texture features, its performance still degrades for textureless objects (failure cases are shown in the supplementary material). Moreover, NeuS uses only a single scale parameter s to model the standard deviation of the probability distribution at all spatial locations. An interesting future research topic is therefore to model the probability with different variances at different spatial locations, depending on the local geometric characteristics, jointly with the optimization of the scene representation.

Negative societal impact: like many other learning-based works, our method requires a large amount of computational resources for network training, which can be a concern for global climate change.

Acknowledgements We thank Michael Oechsle for providing the results of UNISURF. Christian Theobalt was supported by ERC Consolidator Grant 770784. Lingjie Liu was supported by a Lise Meitner Postdoctoral Fellowship. Computational resources are mainly provided by the HKU GPU Farm.
1. What is the main contribution of the paper on multi-view surface reconstruction?
2. What are the strengths and weaknesses of the proposed method compared to existing state-of-the-art methods like NeRF and IDR?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the missing experiments or ablations that the reviewer suggests to improve the paper's validity?
5. Are there any design choices or formulations that the reviewer finds obscure or unclear? If so, which ones and why?
Summary Of The Paper Review
Summary Of The Paper

This paper proposes a method that combines volume rendering with surface rendering techniques for multi-view reconstruction. The method represents the geometry of objects using a neural implicit representation that outputs the SDF at query locations. In order to enable volume rendering, the output SDF is mapped to a density function centered around the surface. Furthermore, they also encode the appearance of the object into the neural implicit representation that is needed for multi-view reconstruction. The main contribution is a novel weight function that is centered (unbiased) around the surface while being occlusion-aware. Therefore, the paper resolves major limitations of volume rendering (too few constraints on the surface) and surface rendering (very sparse gradients). The paper extensively discusses different formulations of the weight function (properties, weaknesses, strengths). The method is evaluated on the DTU as well as the BlendedMVS dataset and compared to existing state-of-the-art methods like NeRF (volume rendering) and IDR (surface rendering). Moreover, they evaluate some aspects of the method in an ablation study.

Review

Here, I order the strengths and weaknesses according to their importance and contribution to the final score, as well as additional comments that should be resolved in the final version of the paper.

#Strengths:

The paper proposes a simple yet elegant formulation for multi-view surface reconstruction using neural implicit representations. The results are impressive and show that it is very beneficial to combine volume rendering and surface rendering to solve the well-known problems of noisy surfaces (NeRF) and requiring a mask for supervision (IDR). This is particularly shown in Figure 6.

The paper extensively discusses the strengths and limitations of existing weight functions used in volumetric rendering. Especially, the illustrations in Figure 2 and Figure 3 are very useful to understand the problem and the proposed solution.

The introduction as well as the related work very well motivate the paper with respect to existing surface and volume rendering methods. This helps the reader to understand the importance of the problem and the proposed solution. By providing an intuition behind different weight functions, the different solutions and their strengths/weaknesses are very well motivated.

#Weaknesses:

Clarity: The paragraph where the solution is explained (L185 - L193) is very unclear. It is very hard to understand how the solution is derived from Equation 5 and Equation 6. This should definitely be made clearer for the final version of the paper, as it is crucial for the reader to understand the derivation of the main contribution. I would like to see how you derive and combine equation 7 from equations 5 and 6. This could then also be added to the supplementary material for the final paper, with an intuition in the main paper.

Results/Ablations: Firstly, as you also encode the appearance, and especially since NeRF is primarily designed for novel view synthesis, it would be interesting to see how this method performs on novel view synthesis. For doing this, I suggest visualizing novel views from the scenes and also running a quantitative evaluation using PSNR and SSIM. Secondly, as you already compare to UNISURF, it would also be interesting to see the experiments on the Indoor Scene Dataset as shown in UNISURF (Figure 8, p. 8). Thirdly, one missing experiment is the ablation of all loss terms.
I would be particularly interested in an ablation of the Eikonal term, to see whether this is also the reason for the better performance compared to UNISURF.

Related Work: There are some relevant works missing on multi-view reconstruction, particularly the two works below, which are among the first works using machine learning for multi-view reconstruction.

Kar et al., Learning a Multi-view Stereo Machine, NeurIPS 2017
Choy et al., 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction, ECCV 2016

In the section about Neural Implicit Representations, the following work is missing:

Sitzmann et al., DeepVoxels: Learning Persistent 3D Feature Embeddings, CVPR 2019

Moreover, I only partially agree with the discussion of the differences between this work and the concurrent work UNISURF. Although it is correct that UNISURF represents the geometry as occupancy values, you can also naturally extract the surface at the decision boundary (0.5), which is just a different level set. I doubt that this difference is the reason for the performance improvement over UNISURF. A more in-depth discussion would be helpful for the community to understand what improves performance and what does not.

Lacking justification of certain design choices: To me, the choice of the logistic density distribution is not motivated enough. I assume the main benefit is the easy integration to the sigmoid function that is used in the final solution (Equation 7). I would like to see a brief comment on that.

Clarity/Flow of Reading: The clarity of the writing and flow of reading is limited by very long sentences (e.g. Abstract L11 - L14), obscure formulations (e.g. L87 'which makes the gradient only be backpropagated to a local region near the intersection', L118 'neural networks of Multi-layer Perceptron (MLP)'), and wrong/inconsistent prepositions (e.g. requirements of weight function (L147-L148) vs. requirements on weight function (L149)). This significantly breaks the flow while reading the paper and makes it harder to understand everything.

#Additional Comments:

The writing of the paper should be reviewed and revised in general (uncountable vs. countable nouns, third person s, prepositions, missing or too many a/the (e.g. L56, L86), etc.). This would make the reading much easier and redirect the mental capacity of the reader to the content. A few formulations could be improved for better clarity:

L31 - L32: would get optimization stuck --> other formulation
L100: sample region --> sampling region
L108: zero-set --> zero level set
L111: train the network of SDF --> other formulation
L120: method to training the SDF network --> method to train the SDF network
L134: To learn the parameters of MLPs of the SDF and the color field --> To learn the parameters of the SDF and color networks (or something similar)
L157 - 158: near the view point --> closer to the view point
Theorem 1: depth --> depth values
Theorem 1: going from the outside of the surface --> going from the outside of the object (there is no inside/outside of a surface)

It would be very good to visualize the meshes (especially in Figure 7) with some sort of error color coding (similar to Tatarchenko et al. 2019, Figure 12). This would guide the reader to better see where the different configurations fail/work.

References 5 and 6 are the same; they should be merged.

The failure case in the appendix is not really informative (Fig. 6).
Some more failure cases caused by the lack of texture should be shown together with the input views (to see the lack of texture). Guiding the reader more (directly in the figure) would help as well.

#Questions

What is the reason for using different sampling points for the color and alpha values? (Appendix L57-59)
Why do the results for some scenes get worse when using the mask supervision? What is the intuition behind that? (Table 1)
NIPS
1. What is the main contribution of the paper regarding multi-view reconstruction?
2. What are the strengths of the proposed approach, particularly in its ability to combine volumetric rendering and SDF representation?
3. What are some potential downstream applications that could benefit from the reconstructed surfaces produced by this method?
4. Are there any limitations or trade-offs associated with the approach, such as image synthesis quality or applications that require more local gradients?
5. How might the model handle situations with fewer views or noisy input data?
Summary Of The Paper Review
Summary Of The Paper

This paper presents a novel multi-view reconstruction model that couples an implicit SDF representation (as in IDR [31]) with an unbiased volumetric rendering function (as in NeRF [20]). The volumetric rendering function is deliberately "de-biased" by redefining the opacity values α_i such that the weight is maximized exactly on the surface defined by the zero level set of the SDF. The resulting model enjoys the benefits of both a surface-based representation for high-fidelity surface reconstruction and dense volumetric rendering, which provides effective gradients that are not restricted locally to the surface and facilitate surface details. The reconstructed surfaces are much better compared to IDR and NeRF. This result can potentially also be very useful for further downstream applications that require smooth, effective gradient signals.

Review

Strengths

S1 - method: The idea of combining volumetric rendering with SDF is actually very neat, which is better than the analytical gradient used in DVR and IDR that is only defined on the surface and hence very local. This is very well implemented by carefully designing the alpha values to observe the surface properties of the SDF, driven by mathematical proofs. The resulting model is clean and easily interpretable.

S2 - results: The reconstructed surfaces are clearly better than those of existing methods, e.g. IDR, NeRF, UNISURF, with smooth surfaces (unlike the noisy shapes extracted from NeRF) and fine details, handling thin structures well (much better than IDR). The model can also perform reconstruction without object masks without much loss in quality, whereas DVR and IDR require mask supervision. Thorough ablation and analysis results, including visualizations of the training progression, are presented and provide a lot of insights into the model.

S3 - writing: The paper is very well written and highly polished. I really enjoyed reading it. The motivations are clearly explained and illustrated with examples and great figures. Technical descriptions are followed by motivation/intuition and careful proofs, and are very well structured such that the readers can easily follow through.

Weaknesses

W1 - image synthesis: There are no comparisons on image synthesis quality. It seems the rendered images appear less visually appealing compared to NeRF, which however is a trade-off for obtaining an explicit surface.

W2 - (minor comment) more applications: Although the paper is already very complete in itself, it mostly follows the experimental setup of previous methods (which is good). There are also many other interesting applications now that the model can obtain smooth, non-local gradients with volumetric rendering while maintaining a surface representation. For example, does that help with the speed of convergence? Have the authors tried optimizing camera poses from coarse estimates as IDR does, and would it tolerate more noise? Does it help with the cases with fewer views?

Typos

Line 176 & 183: ϕ̄ should be ϕ
Line 152: remove "{"
Line 162: "a unbiased" -> "an unbiased"

Post-Rebuttal

I appreciate the authors' efforts in the rebuttal. This is a solid submission with a novel method and great results. I will keep my original rating.
NIPS
Title NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction Abstract We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground mask as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation with robustness of optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state-of-the-arts in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion. 1 Introduction Reconstructing surfaces from multi-view images is a fundamental problem in computer vision and computer graphics. 3D reconstruction with neural implicit representations has recently become a highly promising alternative to classical reconstruction approaches [35, 8, 2] due to its high reconstruction quality and its potential to reconstruct complex objects that are difficult for classical approaches, such as non-Lambertian surfaces and thin structures. Recent works represent surfaces as signed distance functions (SDF) [46, 49, 17, 22] or occupancy [29, 30]. To train their neural models, these methods use a differentiable surface rendering method to render a 3D object into images and compare them against input images for supervision. For example, IDR [46] produces impressive reconstruction results, but it fails to reconstruct objects with complex structures that causes abrupt depth changes. The cause of this limitation is that the surface rendering method used in IDR only considers a single surface intersection point for each ray. Consequently, the gradient only exists at this single point, which is too local for effective back propagation and would get optimization stuck in a poor local minimum when there are abrupt changes of depth on images. Furthermore, object ∗Corresponding authors. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). masks are needed as supervision for converging to a valid surface. As illustrated in Fig. 1 (a) top, with the radical depth change caused by the hole, the neural network would incorrectly predict the points near the front surface to be blue, failing to find the far-back blue surface. The actual test example in Fig. 1 (b) shows that IDR fails to correctly reconstruct the surfaces near the edges with abrupt depth changes. 
Recently, NeRF [28] and its variants have explored to use a volume rendering method to learn a volumetric radiance field for novel view synthesis. This volume rendering approach samples multiple points along each ray and perform α-composition of the colors of the sampled points to produce the output pixel colors for training purposes. The advantage of the volume rendering approach is that it can handle abrupt depth changes, because it considers multiple points along the ray and so all the sample points, either near the surface or on the far surface, produce gradient signals for back propagation. For example, referring Fig. 1 (a) bottom, when the near surface (yellow) is found to have inconsistent colors with the input image, the volume rendering approach is capable of training the network to find the far-back surface to produce the correct scene representation. However, since it is intended for novel view synthesis rather than surface reconstruction, NeRF only learns a volume density field, from which it is difficult to extract a high-quality surface. Fig. 1 (b) shows a surface extracted as a level-set surface of the density field learned by NeRF. Although the surface correctly accounts for abrupt depth changes, it contains conspicuous noise in some planar regions. In this work, we present a new neural rendering scheme, called NeuS, for multi-view surface reconstruction. NeuS uses the signed distance function (SDF) for surface representation and uses a novel volume rendering scheme to learn a neural SDF representation. Specifically, by introducing a density distribution induced by SDF, we make it possible to apply the volume rendering approach to learning an implicit SDF representation and thus have the best of both worlds, i.e. an accurate surface representation using a neural SDF model and robust network training in the presence of abrupt depth changes as enabled by volume rendering. Note that simply applying a standard volume rendering method to the density associated with SDF would lead to discernible bias (i.e. inherent geometric errors) in the reconstructed surfaces. This is a new and important observation that we will elaborate later. Therefore we propose a novel volume rendering scheme to ensure unbiased surface reconstruction in the first-order approximation of SDF. Experiments on both DTU dataset and BlendedMVS dataset demonstrated that NeuS is capable of reconstructing complex 3D objects and scenes with severe occlusions and delicate structures, even without foreground masks as supervision. It outperforms the state-of-the-art neural scene representation methods, namely IDR [46] and NeRF [28], in terms of reconstruction quality. 2 Related Works Classical Multi-view Surface and Volumetric Reconstruction. Traditional multi-view 3D reconstruction methods can be roughly classified into two categories: point- and surface-based reconstruction [2, 8, 9, 35] and volumetric reconstruction [6, 3, 36]. Point- and surface-based reconstruction methods estimate the depth map of each pixel by exploiting inter-image photometric consistency [8] and then fuse the depth maps into a global dense point cloud [25, 48]. The surface reconstruction is usually done as a post processing with methods like screened Poisson surface reconstruction [16]. The reconstruction quality heavily relies on the quality of correspondence matching, and the difficulties in matching correspondence for objects without rich textures often lead to severe artifacts and missing parts in the reconstruction results. 
Alternatively, volumetric reconstruction methods circumvent the difficulty of explicit correspondence matching by estimating occupancy and color in a voxel grid from multi-view images and evaluating the color consistency of each voxel. Due to limited achievable voxel resolution, these methods cannot achieve high accuracy. Neural Implicit Representation. Some methods enforce 3D understanding in a deep learning framework by introducing inductive biases. These inductive biases can be explicit representations, such as voxel grids [13, 5, 44], point cloud [7, 24, 18], meshes [41, 43, 14], and implicit representations. The implicit representations encoded by a neural network has gained a lot of attention recently, since it is continuous and can achieve high spatial resolution. This representation has been applied successfully to shape representation [26, 27, 31, 4, 1, 10, 47, 32], novel view synthesis [38, 23, 15, 28, 21, 33, 34, 40, 37] and multi-view 3D reconstruction [46, 29, 17, 12, 22]. Our work mainly focuses on learning implicit neural representation encoding both geometry and appearance in 3D space from 2D images via classical rendering techniques. Limited in this scope, the related works can be roughly categorized based on the rendering techniques used, i.e. surface rendering based methods and volume rendering based methods. Surface rendering based methods [29, 17, 46, 22] assume that the color of ray only relies on the color of an intersection of the ray with the scene geometry, which makes the gradient only backpropagated to a local region near the intersection. Therefore, such methods struggle with reconstructing complex objects with severe self-occlusions and sudden depth changes. Furthermore, they usually require object masks as supervision. On the contrary, our method performs well for such challenging cases without the need of masks. Volume rendering based methods, such as NeRF[28], render an image by α-compositing colors of the sampled points along each ray. As explained in the introduction, it can handle sudden depth changes and synthesize high-quality images. However, extracting high-fidelity surface from the learned implicit field is difficult because the density-based scene representation lacks sufficient constraints on its level sets. In contrast, our method combines the advantages of surface rendering based and volume rendering based methods by constraining the scene space as a signed distance function but applying volume rendering to train this representation with robustness. UNISURF [30], a concurrent work, also learns an implicit surface via volume rendering. It improves the reconstruction quality by shrinking the sample region of volume rendering during the optimization. Our method differs from UNISURF in that UNISURF represents the surface by occupancy values, while our method represents the scene by an SDF and thus can naturally extract the surface as the zero-level set of it, yielding better reconstruction accuracy than UNISURF, as will be seen later in the experiment section. 3 Method Given a set of posed images {Ik} of a 3D object, our goal is to reconstruct the surface S of it. The surface is represented by the zero-level set of a neural implicit SDF. In order to learn the weights of the neural network, we developed a novel volume rendering method to render images from the implicit SDF and minimize the difference between the rendered images and the input images. 
This volume rendering approach ensures robust optimization in NeuS for reconstructing objects of complex structures. 3.1 Rendering Procedure Scene representation. With NeuS, the scene of an object to be reconstructed is represented by two functions: $f : \mathbb{R}^3 \to \mathbb{R}$, which maps a spatial position $\mathbf{x} \in \mathbb{R}^3$ to its signed distance to the object, and $c : \mathbb{R}^3 \times \mathbb{S}^2 \to \mathbb{R}^3$, which encodes the color associated with a point $\mathbf{x} \in \mathbb{R}^3$ and a viewing direction $\mathbf{v} \in \mathbb{S}^2$. Both functions are encoded by multi-layer perceptrons (MLP). The surface $\mathcal{S}$ of the object is represented by the zero-level set of its SDF, that is, $\mathcal{S} = \{\mathbf{x} \in \mathbb{R}^3 \mid f(\mathbf{x}) = 0\}$. (1) In order to apply a volume rendering method to training the SDF network, we first introduce a probability density function $\phi_s(f(\mathbf{x}))$, called S-density, where $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^3$, is the signed distance function and $\phi_s(x) = s e^{-sx}/(1 + e^{-sx})^2$, commonly known as the logistic density distribution, is the derivative of the sigmoid function $\Phi_s(x) = (1 + e^{-sx})^{-1}$, i.e., $\phi_s(x) = \Phi_s'(x)$. In principle $\phi_s(x)$ can be any unimodal (i.e. bell-shaped) density distribution centered at 0; here we choose the logistic density distribution for its computational convenience. Note that the standard deviation of $\phi_s(x)$ is given by $1/s$, where $s$ is a trainable parameter; $1/s$ approaches zero as the network training converges. Intuitively, the main idea of NeuS is that, with the aid of the S-density field $\phi_s(f(\mathbf{x}))$, volume rendering is used to train the SDF network with only 2D input images as supervision. Upon successful minimization of a loss function based on this supervision, the zero-level set of the network-encoded SDF is expected to represent an accurately reconstructed surface $\mathcal{S}$, with its induced S-density $\phi_s(f(\mathbf{x}))$ assuming prominently high values near the surface. Rendering. To learn the parameters of the neural SDF and color field, we devise a volume rendering scheme to render images from the proposed SDF representation. Given a pixel, we denote the ray emitted from this pixel as $\{\mathbf{p}(t) = \mathbf{o} + t\mathbf{v} \mid t \geq 0\}$, where $\mathbf{o}$ is the center of the camera and $\mathbf{v}$ is the unit direction vector of the ray. We accumulate the colors along the ray by $$C(\mathbf{o}, \mathbf{v}) = \int_0^{+\infty} w(t)\, c(\mathbf{p}(t), \mathbf{v})\, dt, \quad (2)$$ where $C(\mathbf{o}, \mathbf{v})$ is the output color for this pixel, $w(t)$ a weight for the point $\mathbf{p}(t)$, and $c(\mathbf{p}(t), \mathbf{v})$ the color at the point $\mathbf{p}(t)$ along the viewing direction $\mathbf{v}$.
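As a concrete reference for the S-density just defined, the following is a minimal NumPy sketch (the sample SDF values and the choices of s are arbitrary illustration values, not the paper's settings):

```python
import numpy as np

def sigmoid(x, s):
    """Phi_s(x) = (1 + e^{-sx})^{-1}, the logistic sigmoid with sharpness s."""
    return 1.0 / (1.0 + np.exp(-s * x))

def s_density(x, s):
    """phi_s(x) = s e^{-sx} / (1 + e^{-sx})^2, the derivative of Phi_s.
    Its standard deviation is 1/s, so it concentrates around x = 0
    (the surface) as s grows during training."""
    e = np.exp(-s * x)
    return s * e / (1.0 + e) ** 2

# The S-density of a point is phi_s evaluated at its SDF value:
sdf_values = np.linspace(-1.0, 1.0, 5)      # signed distances along a ray
for s in (1.0, 10.0):
    print(s, s_density(sdf_values, s))      # peak at sdf = 0 sharpens with s
```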
Requirements on the weight function. The key to learning an accurate SDF representation from 2D images is to build an appropriate connection between output colors and the SDF, i.e., to derive an appropriate weight function $w(t)$ on the ray based on the SDF $f$ of the scene. In the following, we list the requirements on the weight function $w(t)$. 1. Unbiased. Given a camera ray $\mathbf{p}(t)$, $w(t)$ attains a locally maximal value at a surface intersection point $\mathbf{p}(t^*)$, i.e. with $f(\mathbf{p}(t^*)) = 0$, that is, the point $\mathbf{p}(t^*)$ is on the zero-level set of the SDF $f(\mathbf{x})$. 2. Occlusion-aware. Given any two depth values $t_0$ and $t_1$ satisfying $f(\mathbf{p}(t_0)) = f(\mathbf{p}(t_1))$, $w(t_0) > 0$, $w(t_1) > 0$, and $t_0 < t_1$, there is $w(t_0) > w(t_1)$. That is, when two points have the same SDF value (thus the same SDF-induced S-density value), the point nearer to the view point should have a larger contribution to the final output color than the other point. An unbiased weight function $w(t)$ guarantees that the intersection of the camera ray with the zero-level set of the SDF contributes most to the pixel color. The occlusion-aware property ensures that when a ray sequentially passes through multiple surfaces, the rendering procedure will correctly use the color of the surface nearest to the camera to compute the output color. Next, we first introduce a naive way of defining the weight function $w(t)$ that directly uses the standard pipeline of volume rendering, and explain why it is not appropriate for reconstruction before introducing our novel construction of $w(t)$. Naive solution. To make the weight function occlusion-aware, a natural solution is based on the standard volume rendering formulation [28], which defines the weight function by $w(t) = T(t)\sigma(t)$, (3) where $\sigma(t)$ is the so-called volume density in classical volume rendering and $T(t) = \exp(-\int_0^t \sigma(u)\, du)$ here denotes the accumulated transmittance along the ray. To adopt the standard volume density formulation [28], here $\sigma(t)$ is set equal to the S-density value, i.e. $\sigma(t) = \phi_s(f(\mathbf{p}(t)))$, and the weight function $w(t)$ is computed by Eqn. 3. Although the resulting weight function is occlusion-aware, it is biased, as it introduces inherent errors in the reconstructed surfaces. As illustrated in Fig. 2 (a), the weight function $w(t)$ attains a local maximum at a point before the ray reaches the surface point $\mathbf{p}(t^*)$ satisfying $f(\mathbf{p}(t^*)) = 0$. This fact will be proved in the supplementary material. Our solution. To introduce our solution, we first present a straightforward way to construct an unbiased weight function, which directly uses the normalized S-density as weights: $$w(t) = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_0^{+\infty} \phi_s(f(\mathbf{p}(u)))\, du}. \quad (4)$$ This construction of the weight function is unbiased, but not occlusion-aware. For example, if the ray penetrates two surfaces, the SDF $f$ will have two zero points on the ray, which leads to two peaks in the weight function $w(t)$, and the resulting weight function will equally blend the colors of the two surfaces without considering occlusions.
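To make the bias of the naive solution tangible, here is a small numerical sketch of the single-plane case (the sharpness s, surface depth t*, and fronto-parallel viewing angle are arbitrary toy choices): the naive weight of Eqn. 3 peaks before t*, while the normalized construction of Eqn. 4 peaks at t*.

```python
import numpy as np

s, t_star, cos_theta = 8.0, 0.5, 1.0        # assumed toy values
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
f = -cos_theta * (t - t_star)               # SDF of a single fronto-parallel plane

def phi(x):                                 # logistic S-density
    e = np.exp(-s * x)
    return s * e / (1.0 + e) ** 2

# Naive solution: sigma(t) = phi_s(f(p(t))), w = T * sigma  (Eqn. 3)
sigma = phi(f)
T = np.exp(-np.cumsum(sigma) * dt)
w_naive = T * sigma

# Straightforward construction: normalized S-density  (Eqn. 4)
w_norm = phi(f) / (phi(f).sum() * dt)

print("true surface at t* =", t_star)
print("naive peak at   t =", t[np.argmax(w_naive)])   # lands before t* (biased)
print("normalized peak t =", t[np.argmax(w_norm)])    # lands at t* (unbiased)
```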
We now design a weight function $w(t)$ that is both occlusion-aware and unbiased in the first-order approximation of SDF, based on the aforementioned straightforward construction. To ensure the occlusion-aware property of the weight function $w(t)$, we still follow the basic framework of volume rendering as in Eqn. 3. However, different from the conventional treatment in the naive solution above, we define our function $w(t)$ from the S-density in a new manner. We first define an opaque density function $\rho(t)$, which is the counterpart of the volume density $\sigma$ in standard volume rendering. Then we compute the new weight function $w(t)$ by $$w(t) = T(t)\rho(t), \quad \text{where } T(t) = \exp\left(-\int_0^t \rho(u)\, du\right). \quad (5)$$ How we derive the opaque density $\rho$. We first consider a simple case where there is only one surface intersection, and the surface is simply a plane. Since Eqn. 4 indeed satisfies the above requirements under this assumption, we derive the underlying opaque density $\rho$ corresponding to the weight definition of Eqn. 4 using the framework of volume rendering. Then we generalize this opaque density to the general case of multiple surface intersections. Specifically, in the simple case of a single plane intersection, it is easy to see that the signed distance function is $f(\mathbf{p}(t)) = -|\cos\theta| \cdot (t - t^*)$, where $f(\mathbf{p}(t^*)) = 0$ and $\theta$ is the angle between the view direction $\mathbf{v}$ and the outward surface normal vector $\mathbf{n}$. Because the surface is locally approximated by a plane, $|\cos\theta|$ is a constant. It follows from Eqn. 4 that $$w(t) = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_{-\infty}^{+\infty} \phi_s(f(\mathbf{p}(u)))\, du} = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_{-\infty}^{+\infty} \phi_s(-|\cos\theta| \cdot (u - t^*))\, du} = \frac{\phi_s(f(\mathbf{p}(t)))}{|\cos\theta|^{-1} \cdot \int_{-\infty}^{+\infty} \phi_s(u - t^*)\, du} = |\cos\theta|\, \phi_s(f(\mathbf{p}(t))). \quad (6)$$ Recall that the weight function within the framework of volume rendering is given by $w(t) = T(t)\rho(t)$, where $T(t) = \exp(-\int_0^t \rho(u)\, du)$ denotes the accumulated transmittance. Therefore, to derive $\rho(t)$, we have $$T(t)\rho(t) = |\cos\theta|\, \phi_s(f(\mathbf{p}(t))). \quad (7)$$ Since $T(t) = \exp(-\int_0^t \rho(u)\, du)$, it is easy to verify that $T(t)\rho(t) = -\frac{dT}{dt}(t)$. Further, note that $|\cos\theta|\, \phi_s(f(\mathbf{p}(t))) = -\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$. It follows that $\frac{dT}{dt}(t) = \frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$. Integrating both sides of this equation yields $$T(t) = \Phi_s(f(\mathbf{p}(t))). \quad (8)$$ Taking the logarithm and then differentiating both sides, we have $$\int_{-\infty}^{t} \rho(u)\, du = -\ln(\Phi_s(f(\mathbf{p}(t)))) \;\Rightarrow\; \rho(t) = \frac{-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))}{\Phi_s(f(\mathbf{p}(t)))}. \quad (9)$$ This is the formula of the opaque density $\rho(t)$ in the case of a single plane intersection. The weight function $w(t)$ induced by $\rho(t)$ is shown in Figure 2 (b). Now we generalize the opaque density to the general case where there are multiple surface intersections along the ray $\mathbf{p}(t)$. In this case, $-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$ becomes negative on the segments of the ray with increasing SDF values. Thus we clip it against zero to ensure that the value of $\rho$ is always non-negative. This gives the following opaque density function $\rho(t)$ in general cases: $$\rho(t) = \max\left(\frac{-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))}{\Phi_s(f(\mathbf{p}(t)))},\, 0\right). \quad (10)$$ Based on this equation, the weight function $w(t)$ can be computed with standard volume rendering as in Eqn. 5. The illustration in the case of multiple surface intersections is shown in Figure 3. The following theorem states that in general cases (i.e., including both single and multiple surface intersections) the weight function defined by Eqn. 10 and Eqn. 5 is unbiased in the first-order approximation of SDF. The proof is given in the supplementary material. Theorem 1 Suppose that a smooth surface $\mathcal{S}$ is defined by the zero-level set of the signed distance function $f(\mathbf{x})$, and a ray $\mathbf{p}(t) = \mathbf{o} + t\mathbf{v}$ enters the surface $\mathcal{S}$ from outside to inside, with the intersection point at $\mathbf{p}(t^*)$, that is, $f(\mathbf{p}(t^*)) = 0$, and there exists an interval $[t_l, t_r]$ such that $t^* \in [t_l, t_r]$ and $f(\mathbf{p}(t))$ is monotonically decreasing in $[t_l, t_r]$. Suppose that in this local interval $[t_l, t_r]$ the surface can be tangentially approximated by a sufficiently small planar patch, i.e., $\nabla f$ is regarded as fixed. Then, the weight function $w(t)$ computed by Eqn. 10 and Eqn. 5 attains its maximum in $[t_l, t_r]$ at $t^*$. Discretization. To obtain discrete counterparts of the opacity and weight function, we adopt the same approximation scheme as used in NeRF [28]. This scheme samples $n$ points $\{\mathbf{p}_i = \mathbf{o} + t_i\mathbf{v} \mid i = 1, \ldots, n,\, t_i < t_{i+1}\}$ along the ray to compute the approximate pixel color of the ray as $$\hat{C} = \sum_{i=1}^{n} T_i \alpha_i c_i, \quad (11)$$ where $T_i$ is the discrete accumulated transmittance defined by $T_i = \prod_{j=1}^{i-1}(1 - \alpha_j)$, and $\alpha_i$ are discrete opacity values defined by $$\alpha_i = 1 - \exp\left(-\int_{t_i}^{t_{i+1}} \rho(t)\, dt\right), \quad (12)$$ which can further be shown to be $$\alpha_i = \max\left(\frac{\Phi_s(f(\mathbf{p}(t_i))) - \Phi_s(f(\mathbf{p}(t_{i+1})))}{\Phi_s(f(\mathbf{p}(t_i)))},\, 0\right). \quad (13)$$ The detailed derivation of this formula for $\alpha_i$ is given in the supplementary material.
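The discretization above maps directly onto code. Here is a minimal sketch of the per-ray compositing of Eqns. 11 and 13 (the toy ray, color field, and s value are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def neus_render_ray(sdf_vals, colors, s):
    """Discrete NeuS compositing (Eqns. 11 and 13) for one ray.
    sdf_vals: (n+1,) SDF at sorted sample points; colors: (n, 3)."""
    Phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))                  # sigmoid of SDF
    alpha = np.maximum((Phi[:-1] - Phi[1:]) / Phi[:-1], 0.0)   # Eqn. 13
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # T_i = prod_{j<i}(1 - alpha_j)
    weights = T * alpha
    return (weights[:, None] * colors).sum(axis=0)             # Eqn. 11

# Toy ray crossing a surface at t = 0.5 (SDF decreasing through zero):
t = np.linspace(0.0, 1.0, 65)
sdf = 0.5 - t                                         # zero-level set at t = 0.5
colors = np.tile(np.array([0.2, 0.6, 0.9]), (64, 1))  # constant color field
print(neus_render_ray(sdf, colors, s=50.0))           # ~ the surface color
```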
3.2 Training To train NeuS, we minimize the difference between the rendered colors and the ground-truth colors, without any 3D supervision. Besides colors, we can also utilize masks for supervision if provided. Specifically, we optimize our neural networks and the inverse standard deviation $s$ by randomly sampling, from an image in every iteration, a batch of pixels and their corresponding rays in world space $P = \{C_k, M_k, \mathbf{o}_k, \mathbf{v}_k\}$, where $C_k$ is the pixel color and $M_k \in \{0, 1\}$ is its optional mask value. We assume the point sampling size is $n$ and the batch size is $m$. The loss function is defined as $$\mathcal{L} = \mathcal{L}_{color} + \lambda \mathcal{L}_{reg} + \beta \mathcal{L}_{mask}. \quad (14)$$ The color loss $\mathcal{L}_{color}$ is defined as $$\mathcal{L}_{color} = \frac{1}{m} \sum_k \mathcal{R}(\hat{C}_k, C_k). \quad (15)$$ Same as IDR [46], we empirically choose $\mathcal{R}$ as the L1 loss, which in our observation is robust to outliers and stable in training. We add an Eikonal term [10] on the sampled points to regularize the SDF of $f_\theta$ by $$\mathcal{L}_{reg} = \frac{1}{nm} \sum_{k,i} \left(\|\nabla f(\hat{\mathbf{p}}_{k,i})\|_2 - 1\right)^2. \quad (16)$$ The optional mask loss $\mathcal{L}_{mask}$ is defined as $$\mathcal{L}_{mask} = \mathrm{BCE}(M_k, \hat{O}_k), \quad (17)$$ where $\hat{O}_k = \sum_{i=1}^{n} T_{k,i} \alpha_{k,i}$ is the sum of weights along the camera ray, and BCE is the binary cross-entropy loss. Hierarchical sampling. In this work, we follow a hierarchical sampling strategy similar to that of NeRF [28]. We first uniformly sample points on the ray and then iteratively conduct importance sampling on top of the coarse probability estimation. The difference is that, unlike NeRF, which simultaneously optimizes a coarse network and a fine network, we only maintain one network, where the probability in coarse sampling is computed based on the S-density $\phi_s(f(\mathbf{x}))$ with fixed standard deviations, while the probability in fine sampling is computed based on $\phi_s(f(\mathbf{x}))$ with the learned $s$. Details of the hierarchical sampling strategy are provided in the supplementary material.
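A compact sketch of the total loss of Eqn. 14, assuming the per-batch quantities have already been computed by the renderer; the weights lam and beta here are placeholder values, not necessarily the paper's settings:

```python
import numpy as np

def neus_loss(C_pred, C_gt, grads, O_pred=None, M=None, lam=0.1, beta=0.1):
    """Total NeuS training loss (Eqn. 14) from precomputed quantities:
    C_pred/C_gt: (m, 3) rendered and ground-truth pixel colors,
    grads: (m, n, 3) SDF gradients at the sampled points,
    O_pred: (m,) summed ray weights, M: (m,) optional 0/1 mask values."""
    l_color = np.abs(C_pred - C_gt).mean()                        # L1 color loss (Eqn. 15)
    l_reg = ((np.linalg.norm(grads, axis=-1) - 1.0) ** 2).mean()  # Eikonal term (Eqn. 16)
    l_mask = 0.0
    if M is not None:                                             # optional BCE mask loss (Eqn. 17)
        O = np.clip(O_pred, 1e-6, 1.0 - 1e-6)
        l_mask = -(M * np.log(O) + (1 - M) * np.log(1 - O)).mean()
    return l_color + lam * l_reg + beta * l_mask
```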
4 Experiments 4.1 Experimental settings Datasets. To evaluate our approach and the baseline methods, we use 15 scenes from the DTU dataset [11], the same as those used in IDR [46], with a wide variety of materials, appearance and geometry, including cases that are challenging for reconstruction algorithms, such as non-Lambertian surfaces and thin structures. Each scene contains 49 or 64 images with an image resolution of 1600 × 1200. Each scene was tested with and without the foreground masks provided by IDR [46]. We further tested on 7 challenging scenes from the low-res set of the BlendedMVS dataset [45] (CC-4 License). Each scene has 31–143 images at 768 × 576 pixels, and masks are provided by the BlendedMVS dataset. We additionally captured two thin objects with 32 input images to test our approach on thin structure reconstruction. Baselines. (1) The state-of-the-art surface rendering approach – IDR [46]: IDR can reconstruct surfaces with high quality but requires foreground masks as supervision. Since IDR has demonstrated superior quality compared to another surface rendering based method – DVR [29], we did not conduct a comparison with DVR. (2) The state-of-the-art volume rendering approach – NeRF [28]: We use a threshold of 25 to extract a mesh from the learned density field. We validate this choice in the supplementary material. (3) A widely-used classical MVS method – COLMAP [35]: We reconstruct a mesh from the output point cloud of COLMAP with Screened Poisson Surface Reconstruction [16]. (4) The concurrent work which unifies surface rendering and volume rendering with an occupancy field as scene representation – UNISURF [30]. More details of the baseline methods are included in the supplementary material. 4.2 Comparisons We conducted the comparisons in two settings, with mask supervision (w/ mask) and without mask supervision (w/o mask). We measure the reconstruction quality with the Chamfer distance in the same way as UNISURF [30] and IDR [46] and report the scores in Table 1. The results show that our approach outperforms the baseline methods on the DTU dataset in both settings – w/ and w/o mask – in terms of the Chamfer distance. Note that the reported scores of IDR in the w/ mask setting, and of NeRF and UNISURF in the w/o mask setting, are taken from IDR [46] and UNISURF [30]. We conduct qualitative comparisons on the DTU dataset and the BlendedMVS dataset in both settings, w/ mask and w/o mask, in Figure 4 and Figure 5, respectively. As shown in Figure 4 for the w/ mask setting, IDR shows limited performance in reconstructing the thin metal parts in Scan 37 (DTU), and fails to handle the sudden depth changes in Stone (BlendedMVS) due to the local optimization process in surface rendering. The extracted meshes of NeRF are noisy since the volume density field does not sufficiently constrain the 3D geometry. Regarding the w/o mask setting, we visually compare our method with NeRF and COLMAP in Figure 5, which shows that our reconstructed surfaces have higher fidelity than those of the baselines. We further show a comparison with UNISURF [30] on two examples in the w/o mask setting. Note that we use the qualitative results of UNISURF reported in their paper for comparison. Our method works better for objects with abrupt depth changes. More qualitative images are included in the supplementary material. 4.3 Analysis Ablation study. To evaluate the effect of the weight calculation, we test three different weight constructions described in Sec. 3.1: (a) the naive solution, (b) the straightforward construction as shown in Eqn. 4, and (e) the full model. As shown in Figure 6, the quantitative result of the naive solution is worse than that of our weight choice (e) in terms of the Chamfer distance, because the naive solution introduces a bias into the surface reconstruction. If the straightforward construction is used, there are severe artifacts. We also studied the effect of the Eikonal regularization [10] and the geometric initialization [1]. Without Eikonal regularization or geometric initialization, the result in Chamfer distance is on par with that of the full model. However, neither variant can correctly output a signed distance function. This is indicated by the MAE (mean absolute error) between the SDF predictions and the corresponding ground-truth SDF, as shown in the bottom row of Figure 6. The MAE is computed on uniformly-sampled points in the object's bounding sphere. Qualitative results of the SDF predictions are provided in the supplementary material. Thin structures. We additionally show results on two challenging thin objects with 32 input images. The richly textured plane under the object is used for camera calibration. As shown in Fig. 8, our method is able to accurately reconstruct these thin structures, especially on the edges with abrupt depth changes. Furthermore, different from the methods [39, 19, 42, 20] which target only high-quality thin-structure reconstruction, our method can handle scenes that contain a mixture of thin structures and general objects.
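The quantitative comparisons above are reported in terms of the Chamfer distance. As a reference, here is a brute-force sketch of this metric between two point sets; the actual DTU evaluation scripts wrap more elaborate filtering around this core idea, and the sample data below are synthetic:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (n, 3) and Q (m, 3):
    mean nearest-neighbour distance taken in both directions."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m) pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Example: points sampled from a reconstructed mesh vs. the ground-truth scan.
P = np.random.rand(500, 3)
Q = P + 0.01 * np.random.randn(500, 3)       # a slightly perturbed copy
print(chamfer_distance(P, Q))                # small value for close surfaces
```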
5 Conclusion We have proposed NeuS, a new approach to multi-view surface reconstruction that represents 3D surfaces as a neural SDF, and developed a new volume rendering method for training the implicit SDF representation. NeuS produces high-quality reconstructions and successfully reconstructs objects with severe occlusions and complex structures. It outperforms the state of the art both qualitatively and quantitatively. One limitation of our method is that, although it does not heavily rely on correspondence matching of texture features, its performance still degrades for textureless objects (we show failure cases in the supplementary material). Moreover, NeuS has only a single scale parameter $s$ that is used to model the standard deviation of the probability distribution for all spatial locations. Hence, an interesting future research topic is to model the probability with different variances at different spatial locations, optimized jointly with the scene representation according to local geometric characteristics. Negative societal impact: like many other learning-based works, our method requires a large amount of computational resources for network training, which can be a concern for global climate change. Acknowledgements We thank Michael Oechsle for providing the results of UNISURF. Christian Theobalt was supported by ERC Consolidator Grant 770784. Lingjie Liu was supported by a Lise Meitner Postdoctoral Fellowship. Computational resources are mainly provided by HKU GPU Farm.
1. What are the strengths and weaknesses of the proposed method for implicit 3D surface reconstruction from posed 2D images? 2. How does the method compare to state-of-the-art baseline methods in terms of quality and efficiency? 3. What are some potential applications of the recovered 3D representation in SDF format? 4. Are there any limitations or challenges associated with the use of neural volume rendering methods for surface reconstruction? 5. How might the proposed method be improved or extended in future work?
Summary Of The Paper Review
Summary Of The Paper This paper presents a method for implicit 3D surface reconstruction from posed 2D images, where the 3D surface is represented with an SDF. The authors advocate a new way of using neural volume rendering methods for surface reconstruction, where the density field is induced by the optimized SDF instead of being a direct MLP output (as in e.g. NeRF). In addition, the authors describe two key properties that should be satisfied for volume rendering with SDF representations and propose a novel solution. Experimental results on the DTU multi-view dataset show its advantage over state-of-the-art baseline methods. Review Strengths: The paper is very well-written, with a clear problem statement and a thorough overview of the related literature. To take advantage of the nice optimization properties in neural volume rendering methods, e.g. NeRF, the authors point out two important properties which should be satisfied for extracting surfaces with an SDF as the representation: unbiasedness and occlusion-awareness. The authors suggest two simple solutions, each satisfying only one of the properties, and then propose a nice solution to blend the two together. This makes a complicated concept easy to understand. The authors also provide a nice theoretical proof of how the proposed solution satisfies both properties. The proposed method NeuS is compared with extensive recent baseline methods and significantly outperforms them both qualitatively and quantitatively. Ablation studies and failure cases are also provided, which helps understand the contribution of each component. Weaknesses: It was mentioned that a proof of how Eq. 6 is biased would be provided, but only an empirical example was given in the supplementary materials. It is still unclear to me how the construction of Eq. 6 is biased. Although the results are superior, it would be helpful to provide a comparison of the training/inference speed. Since NeuS is still a volume rendering method with hierarchical sampling as adopted in NeRF, I would expect it to be almost as slow as NeRF to render. How slow would NeuS be compared to IDR? It would be good to also evaluate how well the recovered 3D representation actually satisfies an SDF, as SDF recovery is also one main goal of the paper. This would be important since efficient rendering (e.g. with sphere tracing) and relighting could be applied to true SDF shapes. The current evaluation seems to focus on 3D shape reconstruction in terms of surface accuracy (measured by Chamfer distances). In Fig. 4 of the supplementary material, the exterior region does not seem to correctly satisfy an SDF. I think it would also be interesting to see whether NeuS would be able to reconstruct surfaces on forward-facing scenes, such as in the LLFF dataset originally considered in NeRF. Does the reparametrization of coordinates using NDC affect the solution? Other minor problems: It is unclear what s is in L123 -- is it a hyperparameter that should be tuned? δ_i is undefined in Eq. 6. (I'm guessing it corresponds to t_{i+1} − t_i?) L183: the equation seems off, as ϕ̄ = ϕ_s ∘ f (Eq. 2). L192: the differential is missing.
NIPS
Title NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction Abstract We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) in surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion. 1 Introduction Reconstructing surfaces from multi-view images is a fundamental problem in computer vision and computer graphics. 3D reconstruction with neural implicit representations has recently become a highly promising alternative to classical reconstruction approaches [35, 8, 2] due to its high reconstruction quality and its potential to reconstruct complex objects that are difficult for classical approaches, such as non-Lambertian surfaces and thin structures. Recent works represent surfaces as signed distance functions (SDF) [46, 49, 17, 22] or occupancy [29, 30]. To train their neural models, these methods use a differentiable surface rendering method to render a 3D object into images and compare them against the input images for supervision. For example, IDR [46] produces impressive reconstruction results, but it fails to reconstruct objects with complex structures that cause abrupt depth changes. The cause of this limitation is that the surface rendering method used in IDR only considers a single surface intersection point for each ray. Consequently, the gradient only exists at this single point, which is too local for effective back propagation and would get the optimization stuck in a poor local minimum when there are abrupt changes of depth on images. Furthermore, object masks are needed as supervision for converging to a valid surface. As illustrated in Fig. 1 (a), top, with the radical depth change caused by the hole, the neural network would incorrectly predict the points near the front surface to be blue, failing to find the far-back blue surface. The actual test example in Fig. 1 (b) shows that IDR fails to correctly reconstruct the surfaces near the edges with abrupt depth changes.
Recently, NeRF [28] and its variants have explored using a volume rendering method to learn a volumetric radiance field for novel view synthesis. This volume rendering approach samples multiple points along each ray and performs α-composition of the colors of the sampled points to produce the output pixel colors for training purposes. The advantage of the volume rendering approach is that it can handle abrupt depth changes, because it considers multiple points along the ray, so all the sample points, either near the surface or on the far surface, produce gradient signals for back propagation. For example, referring to Fig. 1 (a), bottom, when the near surface (yellow) is found to have inconsistent colors with the input image, the volume rendering approach is capable of training the network to find the far-back surface to produce the correct scene representation. However, since it is intended for novel view synthesis rather than surface reconstruction, NeRF only learns a volume density field, from which it is difficult to extract a high-quality surface. Fig. 1 (b) shows a surface extracted as a level-set surface of the density field learned by NeRF. Although the surface correctly accounts for abrupt depth changes, it contains conspicuous noise in some planar regions. In this work, we present a new neural rendering scheme, called NeuS, for multi-view surface reconstruction. NeuS uses the signed distance function (SDF) for surface representation and uses a novel volume rendering scheme to learn a neural SDF representation. Specifically, by introducing a density distribution induced by the SDF, we make it possible to apply the volume rendering approach to learning an implicit SDF representation and thus have the best of both worlds, i.e. an accurate surface representation using a neural SDF model and robust network training in the presence of abrupt depth changes as enabled by volume rendering. Note that simply applying a standard volume rendering method to the density associated with the SDF would lead to discernible bias (i.e. inherent geometric errors) in the reconstructed surfaces. This is a new and important observation that we will elaborate on later. Therefore we propose a novel volume rendering scheme to ensure unbiased surface reconstruction in the first-order approximation of SDF. Experiments on both the DTU dataset and the BlendedMVS dataset demonstrate that NeuS is capable of reconstructing complex 3D objects and scenes with severe occlusions and delicate structures, even without foreground masks as supervision. It outperforms the state-of-the-art neural scene representation methods, namely IDR [46] and NeRF [28], in terms of reconstruction quality. 2 Related Works Classical Multi-view Surface and Volumetric Reconstruction. Traditional multi-view 3D reconstruction methods can be roughly classified into two categories: point- and surface-based reconstruction [2, 8, 9, 35] and volumetric reconstruction [6, 3, 36]. Point- and surface-based reconstruction methods estimate the depth map of each pixel by exploiting inter-image photometric consistency [8] and then fuse the depth maps into a global dense point cloud [25, 48]. The surface reconstruction is usually done as a post-processing step with methods like screened Poisson surface reconstruction [16]. The reconstruction quality heavily relies on the quality of correspondence matching, and the difficulties in matching correspondences for objects without rich textures often lead to severe artifacts and missing parts in the reconstruction results.
Alternatively, volumetric reconstruction methods circumvent the difficulty of explicit correspondence matching by estimating occupancy and color in a voxel grid from multi-view images and evaluating the color consistency of each voxel. Due to the limited achievable voxel resolution, these methods cannot achieve high accuracy. Neural Implicit Representation. Some methods enforce 3D understanding in a deep learning framework by introducing inductive biases. These inductive biases can be explicit representations, such as voxel grids [13, 5, 44], point clouds [7, 24, 18] and meshes [41, 43, 14], or implicit representations. Implicit representations encoded by a neural network have gained a lot of attention recently, since they are continuous and can achieve high spatial resolution. This representation has been applied successfully to shape representation [26, 27, 31, 4, 1, 10, 47, 32], novel view synthesis [38, 23, 15, 28, 21, 33, 34, 40, 37] and multi-view 3D reconstruction [46, 29, 17, 12, 22]. Our work mainly focuses on learning an implicit neural representation encoding both geometry and appearance in 3D space from 2D images via classical rendering techniques. Within this scope, the related works can be roughly categorized based on the rendering techniques used, i.e. surface rendering based methods and volume rendering based methods. Surface rendering based methods [29, 17, 46, 22] assume that the color of a ray only relies on the color of the intersection of the ray with the scene geometry, which means the gradient is only backpropagated to a local region near the intersection. Therefore, such methods struggle with reconstructing complex objects with severe self-occlusions and sudden depth changes. Furthermore, they usually require object masks as supervision. On the contrary, our method performs well in such challenging cases without the need for masks. Volume rendering based methods, such as NeRF [28], render an image by α-compositing the colors of the sampled points along each ray. As explained in the introduction, this can handle sudden depth changes and synthesize high-quality images. However, extracting a high-fidelity surface from the learned implicit field is difficult because the density-based scene representation lacks sufficient constraints on its level sets. In contrast, our method combines the advantages of surface rendering based and volume rendering based methods by constraining the scene space with a signed distance function while applying volume rendering to train this representation robustly. UNISURF [30], a concurrent work, also learns an implicit surface via volume rendering. It improves the reconstruction quality by shrinking the sample region of volume rendering during the optimization. Our method differs from UNISURF in that UNISURF represents the surface by occupancy values, while our method represents the scene by an SDF and thus can naturally extract the surface as its zero-level set, yielding better reconstruction accuracy than UNISURF, as will be seen later in the experiment section. 3 Method Given a set of posed images {Ik} of a 3D object, our goal is to reconstruct the surface S of it. The surface is represented by the zero-level set of a neural implicit SDF. In order to learn the weights of the neural network, we develop a novel volume rendering method to render images from the implicit SDF and minimize the difference between the rendered images and the input images.
This volume rendering approach ensures robust optimization in NeuS for reconstructing objects of complex structures. 3.1 Rendering Procedure Scene representation. With NeuS, the scene of an object to be reconstructed is represented by two functions: $f : \mathbb{R}^3 \to \mathbb{R}$, which maps a spatial position $\mathbf{x} \in \mathbb{R}^3$ to its signed distance to the object, and $c : \mathbb{R}^3 \times \mathbb{S}^2 \to \mathbb{R}^3$, which encodes the color associated with a point $\mathbf{x} \in \mathbb{R}^3$ and a viewing direction $\mathbf{v} \in \mathbb{S}^2$. Both functions are encoded by multi-layer perceptrons (MLP). The surface $\mathcal{S}$ of the object is represented by the zero-level set of its SDF, that is, $\mathcal{S} = \{\mathbf{x} \in \mathbb{R}^3 \mid f(\mathbf{x}) = 0\}$. (1) In order to apply a volume rendering method to training the SDF network, we first introduce a probability density function $\phi_s(f(\mathbf{x}))$, called S-density, where $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^3$, is the signed distance function and $\phi_s(x) = s e^{-sx}/(1 + e^{-sx})^2$, commonly known as the logistic density distribution, is the derivative of the sigmoid function $\Phi_s(x) = (1 + e^{-sx})^{-1}$, i.e., $\phi_s(x) = \Phi_s'(x)$. In principle $\phi_s(x)$ can be any unimodal (i.e. bell-shaped) density distribution centered at 0; here we choose the logistic density distribution for its computational convenience. Note that the standard deviation of $\phi_s(x)$ is given by $1/s$, where $s$ is a trainable parameter; $1/s$ approaches zero as the network training converges. Intuitively, the main idea of NeuS is that, with the aid of the S-density field $\phi_s(f(\mathbf{x}))$, volume rendering is used to train the SDF network with only 2D input images as supervision. Upon successful minimization of a loss function based on this supervision, the zero-level set of the network-encoded SDF is expected to represent an accurately reconstructed surface $\mathcal{S}$, with its induced S-density $\phi_s(f(\mathbf{x}))$ assuming prominently high values near the surface. Rendering. To learn the parameters of the neural SDF and color field, we devise a volume rendering scheme to render images from the proposed SDF representation. Given a pixel, we denote the ray emitted from this pixel as $\{\mathbf{p}(t) = \mathbf{o} + t\mathbf{v} \mid t \geq 0\}$, where $\mathbf{o}$ is the center of the camera and $\mathbf{v}$ is the unit direction vector of the ray. We accumulate the colors along the ray by $$C(\mathbf{o}, \mathbf{v}) = \int_0^{+\infty} w(t)\, c(\mathbf{p}(t), \mathbf{v})\, dt, \quad (2)$$ where $C(\mathbf{o}, \mathbf{v})$ is the output color for this pixel, $w(t)$ a weight for the point $\mathbf{p}(t)$, and $c(\mathbf{p}(t), \mathbf{v})$ the color at the point $\mathbf{p}(t)$ along the viewing direction $\mathbf{v}$.
Requirements on the weight function. The key to learning an accurate SDF representation from 2D images is to build an appropriate connection between output colors and the SDF, i.e., to derive an appropriate weight function $w(t)$ on the ray based on the SDF $f$ of the scene. In the following, we list the requirements on the weight function $w(t)$. 1. Unbiased. Given a camera ray $\mathbf{p}(t)$, $w(t)$ attains a locally maximal value at a surface intersection point $\mathbf{p}(t^*)$, i.e. with $f(\mathbf{p}(t^*)) = 0$, that is, the point $\mathbf{p}(t^*)$ is on the zero-level set of the SDF $f(\mathbf{x})$. 2. Occlusion-aware. Given any two depth values $t_0$ and $t_1$ satisfying $f(\mathbf{p}(t_0)) = f(\mathbf{p}(t_1))$, $w(t_0) > 0$, $w(t_1) > 0$, and $t_0 < t_1$, there is $w(t_0) > w(t_1)$. That is, when two points have the same SDF value (thus the same SDF-induced S-density value), the point nearer to the view point should have a larger contribution to the final output color than the other point. An unbiased weight function $w(t)$ guarantees that the intersection of the camera ray with the zero-level set of the SDF contributes most to the pixel color. The occlusion-aware property ensures that when a ray sequentially passes through multiple surfaces, the rendering procedure will correctly use the color of the surface nearest to the camera to compute the output color. Next, we first introduce a naive way of defining the weight function $w(t)$ that directly uses the standard pipeline of volume rendering, and explain why it is not appropriate for reconstruction before introducing our novel construction of $w(t)$. Naive solution. To make the weight function occlusion-aware, a natural solution is based on the standard volume rendering formulation [28], which defines the weight function by $w(t) = T(t)\sigma(t)$, (3) where $\sigma(t)$ is the so-called volume density in classical volume rendering and $T(t) = \exp(-\int_0^t \sigma(u)\, du)$ here denotes the accumulated transmittance along the ray. To adopt the standard volume density formulation [28], here $\sigma(t)$ is set equal to the S-density value, i.e. $\sigma(t) = \phi_s(f(\mathbf{p}(t)))$, and the weight function $w(t)$ is computed by Eqn. 3. Although the resulting weight function is occlusion-aware, it is biased, as it introduces inherent errors in the reconstructed surfaces. As illustrated in Fig. 2 (a), the weight function $w(t)$ attains a local maximum at a point before the ray reaches the surface point $\mathbf{p}(t^*)$ satisfying $f(\mathbf{p}(t^*)) = 0$. This fact will be proved in the supplementary material. Our solution. To introduce our solution, we first present a straightforward way to construct an unbiased weight function, which directly uses the normalized S-density as weights: $$w(t) = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_0^{+\infty} \phi_s(f(\mathbf{p}(u)))\, du}. \quad (4)$$ This construction of the weight function is unbiased, but not occlusion-aware. For example, if the ray penetrates two surfaces, the SDF $f$ will have two zero points on the ray, which leads to two peaks in the weight function $w(t)$, and the resulting weight function will equally blend the colors of the two surfaces without considering occlusions.
We now design a weight function $w(t)$ that is both occlusion-aware and unbiased in the first-order approximation of SDF, based on the aforementioned straightforward construction. To ensure the occlusion-aware property of the weight function $w(t)$, we still follow the basic framework of volume rendering as in Eqn. 3. However, different from the conventional treatment in the naive solution above, we define our function $w(t)$ from the S-density in a new manner. We first define an opaque density function $\rho(t)$, which is the counterpart of the volume density $\sigma$ in standard volume rendering. Then we compute the new weight function $w(t)$ by $$w(t) = T(t)\rho(t), \quad \text{where } T(t) = \exp\left(-\int_0^t \rho(u)\, du\right). \quad (5)$$ How we derive the opaque density $\rho$. We first consider a simple case where there is only one surface intersection, and the surface is simply a plane. Since Eqn. 4 indeed satisfies the above requirements under this assumption, we derive the underlying opaque density $\rho$ corresponding to the weight definition of Eqn. 4 using the framework of volume rendering. Then we generalize this opaque density to the general case of multiple surface intersections. Specifically, in the simple case of a single plane intersection, it is easy to see that the signed distance function is $f(\mathbf{p}(t)) = -|\cos\theta| \cdot (t - t^*)$, where $f(\mathbf{p}(t^*)) = 0$ and $\theta$ is the angle between the view direction $\mathbf{v}$ and the outward surface normal vector $\mathbf{n}$. Because the surface is locally approximated by a plane, $|\cos\theta|$ is a constant. It follows from Eqn. 4 that $$w(t) = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_{-\infty}^{+\infty} \phi_s(f(\mathbf{p}(u)))\, du} = \frac{\phi_s(f(\mathbf{p}(t)))}{\int_{-\infty}^{+\infty} \phi_s(-|\cos\theta| \cdot (u - t^*))\, du} = \frac{\phi_s(f(\mathbf{p}(t)))}{|\cos\theta|^{-1} \cdot \int_{-\infty}^{+\infty} \phi_s(u - t^*)\, du} = |\cos\theta|\, \phi_s(f(\mathbf{p}(t))). \quad (6)$$ Recall that the weight function within the framework of volume rendering is given by $w(t) = T(t)\rho(t)$, where $T(t) = \exp(-\int_0^t \rho(u)\, du)$ denotes the accumulated transmittance. Therefore, to derive $\rho(t)$, we have $$T(t)\rho(t) = |\cos\theta|\, \phi_s(f(\mathbf{p}(t))). \quad (7)$$ Since $T(t) = \exp(-\int_0^t \rho(u)\, du)$, it is easy to verify that $T(t)\rho(t) = -\frac{dT}{dt}(t)$. Further, note that $|\cos\theta|\, \phi_s(f(\mathbf{p}(t))) = -\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$. It follows that $\frac{dT}{dt}(t) = \frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$. Integrating both sides of this equation yields $$T(t) = \Phi_s(f(\mathbf{p}(t))). \quad (8)$$ Taking the logarithm and then differentiating both sides, we have $$\int_{-\infty}^{t} \rho(u)\, du = -\ln(\Phi_s(f(\mathbf{p}(t)))) \;\Rightarrow\; \rho(t) = \frac{-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))}{\Phi_s(f(\mathbf{p}(t)))}. \quad (9)$$ This is the formula of the opaque density $\rho(t)$ in the case of a single plane intersection. The weight function $w(t)$ induced by $\rho(t)$ is shown in Figure 2 (b). Now we generalize the opaque density to the general case where there are multiple surface intersections along the ray $\mathbf{p}(t)$. In this case, $-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))$ becomes negative on the segments of the ray with increasing SDF values. Thus we clip it against zero to ensure that the value of $\rho$ is always non-negative. This gives the following opaque density function $\rho(t)$ in general cases: $$\rho(t) = \max\left(\frac{-\frac{d\Phi_s}{dt}(f(\mathbf{p}(t)))}{\Phi_s(f(\mathbf{p}(t)))},\, 0\right). \quad (10)$$ Based on this equation, the weight function $w(t)$ can be computed with standard volume rendering as in Eqn. 5. The illustration in the case of multiple surface intersections is shown in Figure 3. The following theorem states that in general cases (i.e., including both single and multiple surface intersections) the weight function defined by Eqn. 10 and Eqn. 5 is unbiased in the first-order approximation of SDF. The proof is given in the supplementary material. Theorem 1 Suppose that a smooth surface $\mathcal{S}$ is defined by the zero-level set of the signed distance function $f(\mathbf{x})$, and a ray $\mathbf{p}(t) = \mathbf{o} + t\mathbf{v}$ enters the surface $\mathcal{S}$ from outside to inside, with the intersection point at $\mathbf{p}(t^*)$, that is, $f(\mathbf{p}(t^*)) = 0$, and there exists an interval $[t_l, t_r]$ such that $t^* \in [t_l, t_r]$ and $f(\mathbf{p}(t))$ is monotonically decreasing in $[t_l, t_r]$. Suppose that in this local interval $[t_l, t_r]$ the surface can be tangentially approximated by a sufficiently small planar patch, i.e., $\nabla f$ is regarded as fixed. Then, the weight function $w(t)$ computed by Eqn. 10 and Eqn. 5 attains its maximum in $[t_l, t_r]$ at $t^*$. Discretization. To obtain discrete counterparts of the opacity and weight function, we adopt the same approximation scheme as used in NeRF [28]. This scheme samples $n$ points $\{\mathbf{p}_i = \mathbf{o} + t_i\mathbf{v} \mid i = 1, \ldots, n,\, t_i < t_{i+1}\}$ along the ray to compute the approximate pixel color of the ray as $$\hat{C} = \sum_{i=1}^{n} T_i \alpha_i c_i, \quad (11)$$ where $T_i$ is the discrete accumulated transmittance defined by $T_i = \prod_{j=1}^{i-1}(1 - \alpha_j)$, and $\alpha_i$ are discrete opacity values defined by $$\alpha_i = 1 - \exp\left(-\int_{t_i}^{t_{i+1}} \rho(t)\, dt\right), \quad (12)$$ which can further be shown to be $$\alpha_i = \max\left(\frac{\Phi_s(f(\mathbf{p}(t_i))) - \Phi_s(f(\mathbf{p}(t_{i+1})))}{\Phi_s(f(\mathbf{p}(t_i)))},\, 0\right). \quad (13)$$ The detailed derivation of this formula for $\alpha_i$ is given in the supplementary material.
3.2 Training To train NeuS, we minimize the difference between the rendered colors and the ground-truth colors, without any 3D supervision. Besides colors, we can also utilize masks for supervision if provided. Specifically, we optimize our neural networks and the inverse standard deviation $s$ by randomly sampling, from an image in every iteration, a batch of pixels and their corresponding rays in world space $P = \{C_k, M_k, \mathbf{o}_k, \mathbf{v}_k\}$, where $C_k$ is the pixel color and $M_k \in \{0, 1\}$ is its optional mask value. We assume the point sampling size is $n$ and the batch size is $m$. The loss function is defined as $$\mathcal{L} = \mathcal{L}_{color} + \lambda \mathcal{L}_{reg} + \beta \mathcal{L}_{mask}. \quad (14)$$ The color loss $\mathcal{L}_{color}$ is defined as $$\mathcal{L}_{color} = \frac{1}{m} \sum_k \mathcal{R}(\hat{C}_k, C_k). \quad (15)$$ Same as IDR [46], we empirically choose $\mathcal{R}$ as the L1 loss, which in our observation is robust to outliers and stable in training. We add an Eikonal term [10] on the sampled points to regularize the SDF of $f_\theta$ by $$\mathcal{L}_{reg} = \frac{1}{nm} \sum_{k,i} \left(\|\nabla f(\hat{\mathbf{p}}_{k,i})\|_2 - 1\right)^2. \quad (16)$$ The optional mask loss $\mathcal{L}_{mask}$ is defined as $$\mathcal{L}_{mask} = \mathrm{BCE}(M_k, \hat{O}_k), \quad (17)$$ where $\hat{O}_k = \sum_{i=1}^{n} T_{k,i} \alpha_{k,i}$ is the sum of weights along the camera ray, and BCE is the binary cross-entropy loss. Hierarchical sampling. In this work, we follow a hierarchical sampling strategy similar to that of NeRF [28]. We first uniformly sample points on the ray and then iteratively conduct importance sampling on top of the coarse probability estimation. The difference is that, unlike NeRF, which simultaneously optimizes a coarse network and a fine network, we only maintain one network, where the probability in coarse sampling is computed based on the S-density $\phi_s(f(\mathbf{x}))$ with fixed standard deviations, while the probability in fine sampling is computed based on $\phi_s(f(\mathbf{x}))$ with the learned $s$. Details of the hierarchical sampling strategy are provided in the supplementary material.
4 Experiments 4.1 Experimental settings Datasets. To evaluate our approach and the baseline methods, we use 15 scenes from the DTU dataset [11], the same as those used in IDR [46], with a wide variety of materials, appearance and geometry, including cases that are challenging for reconstruction algorithms, such as non-Lambertian surfaces and thin structures. Each scene contains 49 or 64 images with an image resolution of 1600 × 1200. Each scene was tested with and without the foreground masks provided by IDR [46]. We further tested on 7 challenging scenes from the low-res set of the BlendedMVS dataset [45] (CC-4 License). Each scene has 31–143 images at 768 × 576 pixels, and masks are provided by the BlendedMVS dataset. We additionally captured two thin objects with 32 input images to test our approach on thin structure reconstruction. Baselines. (1) The state-of-the-art surface rendering approach – IDR [46]: IDR can reconstruct surfaces with high quality but requires foreground masks as supervision. Since IDR has demonstrated superior quality compared to another surface rendering based method – DVR [29], we did not conduct a comparison with DVR. (2) The state-of-the-art volume rendering approach – NeRF [28]: We use a threshold of 25 to extract a mesh from the learned density field. We validate this choice in the supplementary material. (3) A widely-used classical MVS method – COLMAP [35]: We reconstruct a mesh from the output point cloud of COLMAP with Screened Poisson Surface Reconstruction [16]. (4) The concurrent work which unifies surface rendering and volume rendering with an occupancy field as scene representation – UNISURF [30]. More details of the baseline methods are included in the supplementary material. 4.2 Comparisons We conducted the comparisons in two settings, with mask supervision (w/ mask) and without mask supervision (w/o mask). We measure the reconstruction quality with the Chamfer distance in the same way as UNISURF [30] and IDR [46] and report the scores in Table 1. The results show that our approach outperforms the baseline methods on the DTU dataset in both settings – w/ and w/o mask – in terms of the Chamfer distance. Note that the reported scores of IDR in the w/ mask setting, and of NeRF and UNISURF in the w/o mask setting, are taken from IDR [46] and UNISURF [30]. We conduct qualitative comparisons on the DTU dataset and the BlendedMVS dataset in both settings, w/ mask and w/o mask, in Figure 4 and Figure 5, respectively. As shown in Figure 4 for the w/ mask setting, IDR shows limited performance in reconstructing the thin metal parts in Scan 37 (DTU), and fails to handle the sudden depth changes in Stone (BlendedMVS) due to the local optimization process in surface rendering. The extracted meshes of NeRF are noisy since the volume density field does not sufficiently constrain the 3D geometry. Regarding the w/o mask setting, we visually compare our method with NeRF and COLMAP in Figure 5, which shows that our reconstructed surfaces have higher fidelity than those of the baselines. We further show a comparison with UNISURF [30] on two examples in the w/o mask setting. Note that we use the qualitative results of UNISURF reported in their paper for comparison. Our method works better for objects with abrupt depth changes. More qualitative images are included in the supplementary material. 4.3 Analysis Ablation study. To evaluate the effect of the weight calculation, we test three different weight constructions described in Sec. 3.1: (a) the naive solution, (b) the straightforward construction as shown in Eqn. 4, and (e) the full model. As shown in Figure 6, the quantitative result of the naive solution is worse than that of our weight choice (e) in terms of the Chamfer distance, because the naive solution introduces a bias into the surface reconstruction. If the straightforward construction is used, there are severe artifacts. We also studied the effect of the Eikonal regularization [10] and the geometric initialization [1]. Without Eikonal regularization or geometric initialization, the result in Chamfer distance is on par with that of the full model. However, neither variant can correctly output a signed distance function. This is indicated by the MAE (mean absolute error) between the SDF predictions and the corresponding ground-truth SDF, as shown in the bottom row of Figure 6. The MAE is computed on uniformly-sampled points in the object's bounding sphere. Qualitative results of the SDF predictions are provided in the supplementary material. Thin structures. We additionally show results on two challenging thin objects with 32 input images. The richly textured plane under the object is used for camera calibration. As shown in Fig. 8, our method is able to accurately reconstruct these thin structures, especially on the edges with abrupt depth changes. Furthermore, different from the methods [39, 19, 42, 20] which target only high-quality thin-structure reconstruction, our method can handle scenes that contain a mixture of thin structures and general objects.
5 Conclusion We have proposed NeuS, a new approach to multi-view surface reconstruction that represents 3D surfaces as a neural SDF, and developed a new volume rendering method for training the implicit SDF representation. NeuS produces high-quality reconstructions and successfully reconstructs objects with severe occlusions and complex structures. It outperforms the state of the art both qualitatively and quantitatively. One limitation of our method is that, although it does not heavily rely on correspondence matching of texture features, its performance still degrades for textureless objects (we show failure cases in the supplementary material). Moreover, NeuS has only a single scale parameter $s$ that is used to model the standard deviation of the probability distribution for all spatial locations. Hence, an interesting future research topic is to model the probability with different variances at different spatial locations, optimized jointly with the scene representation according to local geometric characteristics. Negative societal impact: like many other learning-based works, our method requires a large amount of computational resources for network training, which can be a concern for global climate change. Acknowledgements We thank Michael Oechsle for providing the results of UNISURF. Christian Theobalt was supported by ERC Consolidator Grant 770784. Lingjie Liu was supported by a Lise Meitner Postdoctoral Fellowship. Computational resources are mainly provided by HKU GPU Farm.
1. What is the focus and contribution of the paper on neural surface reconstruction? 2. What are the strengths of the proposed approach, particularly in representing geometry with signed distance functions? 3. What are the weaknesses of the paper, especially regarding comparisons with other works? 4. Do you have any concerns about the representation of geometry using SDF? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a neural surface reconstruction method which they call NeuS. The novelty of this method derives from their use of signed distance functions to represent geometry, and a proposed volume rendering method for accumulating information along rays. With these changes they are better able to estimate depth in more complicated surface geometry, demonstrate clearly superior shape predictions, and achieve state-of-the-art quantitative results on the DTU dataset. Review This paper is quite strong. It identifies an issue with the NeRF-style volumetric rendering method and solves it by representing geometry with an SDF. The proposed volume rendering method which enables this change is well motivated and explained. Both the qualitative and quantitative results are convincing and compelling. One point of concern is the comparison to other methods. I am not an expert in this field but it seems there has been quite an explosion of NeRF-like papers since its release, and from a quick literature review of them it seems like NeRF would no longer be considered state of the art in novel view synthesis, as referenced in the baselines section. NeRF was released only last year, so there is leeway here for concurrent, or close to concurrent, research, but I would perhaps revise this claim. If you wanted to raise my score higher it might be nice to compare directly to papers like "Neural Sparse Voxel Fields", which I see you reference, or other more recent papers which seem to tackle the same shortcomings in the original paper. Other comments: lines 67 to 70 are true of some methods but not all. This overly general statement is misleading. There needs to be some editing for grammar, especially in the intro section.
NIPS
Title Is the Bellman residual a bad proxy? Abstract This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies. A theoretical analysis shows how good a proxy this is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth considering. 1 Introduction Reinforcement Learning (RL) aims at estimating a policy $\pi$ close to the optimal one, in the sense that its value, $v_\pi$ (the expected discounted return), is close to maximal, i.e. $\|v_* - v_\pi\|$ is small ($v_*$ being the optimal value), for some norm. Controlling the residual $\|T_* v_\theta - v_\theta\|$ (where $T_*$ is the optimal Bellman operator and $v_\theta$ a value function parameterized by $\theta$) over a class of parameterized value functions is a classical approach in value-based RL, and especially in Approximate Dynamic Programming (ADP). Indeed, controlling this residual allows controlling the distance to the optimal value function: generally speaking, we have that $$\|v_* - v_{\pi_{v_\theta}}\| \leq \frac{C}{1 - \gamma}\, \|T_* v_\theta - v_\theta\|, \quad (1)$$ with the policy $\pi_{v_\theta}$ being greedy with respect to $v_\theta$ [17, 19]. Some classical ADP approaches actually minimize a projected Bellman residual, $\|\Pi(T_* v_\theta - v_\theta)\|$, where $\Pi$ is the operator projecting onto the hypothesis space to which $v_\theta$ belongs: Approximate Value Iteration (AVI) [11, 9] tries to minimize this using a fixed-point approach, $v_{\theta_{k+1}} = \Pi T_* v_{\theta_k}$, and it has been shown recently [18] that Least-Squares Policy Iteration (LSPI) [13] tries to minimize it using a Newton approach¹. Notice that in this case (projected residual), there is no general performance bound² for controlling $\|v_* - v_{\pi_{v_\theta}}\|$. ¹ (Exact) policy iteration actually minimizes $\|T_* v - v\|$ using a Newton descent [10]. ² With a single action, this approach reduces to LSTD (Least-Squares Temporal Differences) [5], which can be arbitrarily bad in an off-policy setting [20]. Despite the fact that (unprojected) residual approaches come easily with performance guarantees, they are not extensively studied in the (value-based) literature (one can mention [3], which considers a subgradient descent, or [19], which frames the norm of the residual as a delta-convex function). A reason for this is that they lead to biased estimates when the Markovian transition kernel is stochastic and unknown [1], which is a rather standard case. Projected Bellman residual approaches are more common, even if not introduced as such originally (notable exceptions are [16, 18]).
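As a concrete illustration of a bound of the form (1), the sketch below checks its classical ∞-norm instantiation, where the constant is $2\gamma/(1-\gamma)$, on a small random MDP; the MDP sizes, seed, and perturbation scale are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9                       # a small random MDP (toy sizes)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over s'
R = rng.uniform(size=(nS, nA))

def T_star(v):
    """Optimal Bellman operator: (T*v)(s) = max_a R(s,a) + gamma * E[v(S')]."""
    return (R + gamma * P @ v).max(axis=1)

v = np.zeros(nS)
for _ in range(2000):                           # value iteration -> v*
    v = T_star(v)

v_approx = v + 0.1 * rng.standard_normal(nS)    # a perturbed value function
pi = (R + gamma * P @ v_approx).argmax(axis=1)  # greedy policy w.r.t. v_approx
P_pi, R_pi = P[np.arange(nS), pi], R[np.arange(nS), pi]
v_pi = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)

lhs = np.max(np.abs(v - v_pi))                  # ||v* - v_pi||_inf
rhs = 2 * gamma / (1 - gamma) * np.max(np.abs(T_star(v_approx) - v_approx))
print(lhs <= rhs, lhs, rhs)                     # the bound holds
```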
An alternative approach consists in directly maximizing the mean value $\mathbb{E}_\nu[v_\pi(S)]$ for a user-defined state distribution $\nu$, this being equivalent to directly minimizing $\|v_* - v_\pi\|_{1,\nu}$, see Sec. 2. This suggests defining a class of parameterized policies and optimizing over them, which is the predominant approach in policy search³ [7]. This paper aims at theoretically and experimentally studying these two approaches: maximizing the mean value (related algorithms operate on policies) and minimizing the residual (related algorithms operate on value functions). For that purpose, we place ourselves in the context of policy search algorithms. We adopt this position because we could derive a method that minimizes the residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies and compare it to other methods that usually maximize the mean value. On the other hand, adapting ADP methods so that they maximize the mean value is much harder⁴. This new approach is presented in Sec. 3, and we show theoretically how good this proxy is. In Sec. 4, we conduct experiments on randomly generated generic Markov decision processes to compare both approaches empirically. The experiments are specifically designed to study the influence of the involved concentrability coefficient. Despite the good theoretical properties of the Bellman residual approach, it turns out that it only works well if there is a good match between the sampling distribution and the discounted state occupancy distribution induced by the optimal policy, which is a very limiting requirement. In comparison, maximizing the mean value is rather insensitive to this issue and works well whatever the sampling distribution is, contrary to what the sole related theoretical bound suggests. This study thus suggests that maximizing the mean value, although it doesn't provide easy theoretical analysis, is a better approach to build efficient and robust RL algorithms. 2 Background 2.1 Notations Let $\Delta_X$ be the set of probability distributions over a finite set $X$ and $Y^X$ the set of functions from $X$ to the set $Y$. By convention, all vectors are column vectors, except distributions (for left multiplication). A Markov Decision Process (MDP) is a tuple $\{S, A, P, R, \gamma\}$, where $S$ is the finite state space⁵, $A$ is the finite action space, $P \in (\Delta_S)^{S \times A}$ is the Markovian transition kernel ($P(s'|s,a)$ denotes the probability of transiting to $s'$ when action $a$ is applied in state $s$), $R \in \mathbb{R}^{S \times A}$ is the bounded reward function ($R(s,a)$ represents the local benefit of doing action $a$ in state $s$) and $\gamma \in (0,1)$ is the discount factor. For $v \in \mathbb{R}^S$, we write $\|v\|_{1,\nu} = \sum_{s \in S} \nu(s)|v(s)|$ for the $\nu$-weighted $\ell_1$-norm of $v$. Notice that when the function $v \in \mathbb{R}^S$ is componentwise positive, that is $v \geq 0$, the $\nu$-weighted $\ell_1$-norm of $v$ is actually its expectation with respect to $\nu$: if $v \geq 0$, then $\|v\|_{1,\nu} = \mathbb{E}_\nu[v(S)] = \nu v$. We will make intensive use of this basic property in the following. A stochastic policy $\pi \in (\Delta_A)^S$ associates a distribution over actions to each state. The policy-induced reward and transition kernels, $R_\pi \in \mathbb{R}^S$ and $P_\pi \in (\Delta_S)^S$, are defined as $R_\pi(s) = \mathbb{E}_{\pi(\cdot|s)}[R(s,A)]$ and $P_\pi(s'|s) = \mathbb{E}_{\pi(\cdot|s)}[P(s'|s,A)]$. The quality of a policy is quantified by the associated value function $v_\pi \in \mathbb{R}^S$: $$v_\pi(s) = \mathbb{E}\Big[\sum_{t \geq 0} \gamma^t R_\pi(S_t) \,\Big|\, S_0 = s,\, S_{t+1} \sim P_\pi(\cdot|S_t)\Big].$$ ³ A remarkable aspect of policy search is that it does not necessarily rely on the Markovian assumption, but this is out of the scope of this paper (residual approaches rely on it, through the Bellman equation).
⁴ Approximate linear programming could be considered as such but is often computationally intractable [8, 6].
⁵ This choice is made for ease and clarity of exposition; the following results could be extended to continuous state and action spaces.

The value vπ is the unique fixed point of the Bellman operator Tπ, defined as Tπv = Rπ + γPπv for any v ∈ R^S. Let us define the second Bellman operator T∗ as, for any v ∈ R^S, T∗v = max_{π∈(∆A)^S} Tπv. A policy π is greedy with respect to v ∈ R^S, denoted π ∈ G(v), if Tπv = T∗v. There exists an optimal policy π∗ that satisfies componentwise vπ∗ ≥ vπ, for all π ∈ (∆A)^S. Moreover, we have that π∗ ∈ G(v∗), with v∗ being the unique fixed point of T∗. Finally, for any distribution µ ∈ ∆S, the γ-weighted occupancy measure induced by the policy π when the initial state is sampled from µ is defined as

dµ,π = (1 − γ) µ ∑t≥0 γ^t Pπ^t = (1 − γ) µ (I − γPπ)^{-1} ∈ ∆S.

For two distributions µ and ν, we write ‖µ/ν‖∞ for the smallest constant C satisfying, for all s ∈ S, µ(s) ≤ Cν(s). This quantity measures the mismatch between the two distributions.

2.2 Maximizing the mean value

Let P be a space of parameterized stochastic policies and let µ be a distribution of interest. The optimal policy has a higher value than any other policy, for any state. If the MDP is too large, satisfying this condition is not reasonable. Therefore, a natural idea consists in searching for a policy such that the associated value function is as close as possible to the optimal one, in expectation, according to a distribution of interest µ. More formally, this means minimizing ‖v∗ − vπ‖1,µ = Eµ[v∗(S) − vπ(S)] ≥ 0. The optimal value function being unknown, one cannot address this problem directly, but it is equivalent to maximizing Eµ[vπ(S)]. This is the basic principle of many policy search approaches:

max_{π∈P} Jν(π) with Jν(π) = Eν[vπ(S)] = νvπ.

Notice that we used a sampling distribution ν here, possibly different from the distribution of interest µ. Related algorithms differ notably by the considered criterion (e.g., it can be the mean reward rather than the γ-discounted cumulative reward considered here) and by how the corresponding optimization problem is solved. We refer to [7] for a survey on that topic. Contrary to ADP, the theoretical efficiency of this family of approaches has not been studied a lot. Indeed, as far as we know, there is a sole performance bound for maximizing the mean value.

Theorem 1 (Scherrer and Geist [22]). Assume that the policy space P is stable by stochastic mixture, that is ∀π, π′ ∈ P, ∀α ∈ (0, 1), (1−α)π + απ′ ∈ P. Define the ν-greedy-complexity of the policy space P as

Eν(P) = max_{π∈P} min_{π′∈P} dν,π(T∗vπ − Tπ′vπ).

Then, any policy π that is an ε-local optimum of Jν, in the sense that

∀π′ ∈ P, lim_{α→0} (ν v_{(1−α)π+απ′} − ν vπ) / α ≤ ε,

enjoys the following global performance guarantee:

µ(v∗ − vπ) ≤ (1 / (1 − γ)^2) ‖dµ,π∗/ν‖∞ (Eν(P) + ε).

This bound (as all bounds of this kind) has three terms: a horizon term, a concentrability term and an error term. The term 1/(1−γ) is the average optimization horizon. The concentrability coefficient (‖dµ,π∗/ν‖∞) measures the mismatch between the used distribution ν and the γ-weighted occupancy measure induced by the optimal policy π∗ when the initial state is sampled from the distribution of interest µ.
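The occupancy measure and the mismatch coefficient above are directly computable in the tabular setting; the sketch below (ours, reusing the shapes of the previous snippets) shows one way to do it.

```python
import numpy as np

def occupancy(P, gamma, pi, mu):
    """gamma-weighted occupancy d_{mu,pi} = (1 - gamma) mu (I - gamma P_pi)^{-1}."""
    P_pi = np.einsum("sa,sat->st", pi, P)
    return (1 - gamma) * mu @ np.linalg.inv(np.eye(P.shape[0]) - gamma * P_pi)

def concentrability(d, nu):
    """||d / nu||_inf: the smallest C such that d(s) <= C nu(s) for all s."""
    c = 0.0
    for ds, ns in zip(d, nu):
        if ds > 0:
            c = np.inf if ns == 0 else max(c, ds / ns)
    return c
```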
This tells us that if µ is the distribution of interest, one should optimize J_{dµ,π∗}, which is not feasible, π∗ being unknown (in this case, the coefficient is equal to 1, its lower bound). This coefficient can be arbitrarily large: consider the case where µ concentrates on a single starting state (that is, µ(s0) = 1 for a given state s0) and such that the optimal policy leads to other states (that is, dµ,π∗(s0) < 1); the coefficient is then infinite. However, it is also the best concentrability coefficient according to [21], which provides a theoretical and empirical comparison of Approximate Policy Iteration (API) schemes. The error term is Eν(P) + ε, where Eν(P) measures the capacity of the policy space to represent the policies being greedy with respect to the value of any policy in P, and ε tells how close the computed policy π is to a local optimum of Jν. There exist other policy search approaches, based on ADP rather than on maximizing the mean value, such as Conservative Policy Iteration (CPI) [12] or Direct Policy Iteration (DPI) [14]. The bound of Thm. 1 matches the bounds of DPI or CPI. Actually, CPI can be shown to be a boosting approach maximizing the mean value. See the discussion in [22] for more details. However, this bound is also based on a very strong assumption (stability by stochastic mixture of the policy space) which is not satisfied by all commonly used policy parameterizations.

3 Minimizing the Bellman residual

Direct maximization of the mean value operates on policies, while residual approaches operate on value functions. To study these two optimization criteria together, we introduce a policy search method that minimizes a residual. As noted before, we do so because it is much simpler than introducing a value-based approach that maximizes the mean value. We also show how good this proxy is to policy optimization. Although this algorithm is new, it is not claimed to be a core contribution of the paper. Yet it is clearly a mandatory step to support the comparison between optimization criteria.

3.1 Optimization problem

We propose to search for a policy in P that minimizes the following Bellman residual:

min_{π∈P} Jν(π) with Jν(π) = ‖T∗vπ − vπ‖1,ν.

Notice that, as for the maximization of the mean value, we used a sampling distribution ν, possibly different from the distribution of interest µ. From the basic properties of the Bellman operator, for any policy π we have that T∗vπ ≥ vπ. Consequently, the ν-weighted ℓ1-norm of the residual is indeed the expected Bellman residual:

Jν(π) = Eν[[T∗vπ](S) − vπ(S)] = ν(T∗vπ − vπ).

Therefore, there is naturally no bias problem for minimizing a residual here, contrary to other residual approaches [1]. This is an interesting result on its own, as removing the bias in value-based residual approaches is far from being straightforward. It results from the optimization being done over policies and not over values, and thus from vπ being an actual value (the one of the current policy) obeying the Bellman equation⁶. Any optimization method can be envisioned to minimize Jν. Here, we simply propose to apply a subgradient descent (despite the lack of convexity).
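Before the subgradient itself, here is a direct sketch of this objective (ours, reusing policy_value from an earlier snippet); nonnegativity of the summand follows from T∗vπ ≥ vπ.

```python
import numpy as np

def residual_objective(P, R, gamma, pi, nu):
    """J_nu(pi) = nu . (T* v_pi - v_pi), the expected Bellman residual."""
    v = policy_value(P, R, gamma, pi)             # exact value of the current policy
    q = R + gamma * np.einsum("sat,t->sa", P, v)  # q_pi(s, a)
    return nu @ (q.max(axis=1) - v)               # componentwise >= 0
```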
Theorem 2 (Subgradient of Jν). Recall that, given the considered notations, the distribution νP_{G(vπ)} is the state distribution obtained by sampling the initial state according to ν, applying the action being greedy with respect to vπ, and following the dynamics to the next state. This being said, the subgradient of Jν is given by

−∇Jν(π) = (1 / (1 − γ)) ∑s,a (dν,π(s) − γ d_{νP_{G(vπ)},π}(s)) π(a|s) ∇ln π(a|s) qπ(s, a),

with qπ(s, a) = R(s, a) + γ ∑s′∈S P(s′|s, a) vπ(s′) the state-action value function.

Proof. The proof relies on basic (sub)gradient calculus; it is given in the appendix.

There are two terms in the negative subgradient −∇Jν: the first one corresponds to the gradient of Jν, the second one (up to the multiplication by −γ) is the gradient of J_{νP_{G(vπ)}} and acts as a kind of correction. This subgradient can be estimated using Monte Carlo rollouts, but doing so is harder than for classic policy search (as it additionally requires sampling from νP_{G(vπ)}, which requires estimating the state-action value function). Also, this gradient involves computing the maximum over actions (as it requires sampling from νP_{G(vπ)}, which comes from explicitly considering the Bellman optimality operator), which prevents extending this approach easily to continuous actions, contrary to classic policy search. Thus, from an algorithmic point of view, this approach has drawbacks. Yet, we do not discuss further how to efficiently estimate this subgradient, since we introduced this approach for the sake of comparison to standard policy search methods only. For this reason, we will consider an ideal algorithm in the experimental section where an analytical computation of the subgradient is possible, see Sec. 4. This will place us in an unrealistically good setting, which will help focusing on the main conclusions. Before this, we study how good this proxy is to policy optimization.

⁶ The property T∗v ≥ v does not hold if v is not the value function of a given policy, as in value-based approaches.

3.2 Analysis

Theorem 3 (Proxy bound for residual policy search). We have that

‖v∗ − vπ‖1,µ ≤ (1 / (1 − γ)) ‖dµ,π∗/ν‖∞ Jν(π) = (1 / (1 − γ)) ‖dµ,π∗/ν‖∞ ‖T∗vπ − vπ‖1,ν.

Proof. The proof can be easily derived from the analyses of [12], [17] or [22]. We detail it for completeness in the appendix.

This bound shows how controlling the residual helps in controlling the error. It has a linear dependency on the horizon, and the concentrability coefficient is the best one can expect (according to [21]). It has the same form as the bounds for value-based residual minimization [17, 19] (see also Eq. (1)). It is even better due to the involved concentrability coefficient (the ones for value-based bounds are worse, see [21] for a comparison). Unfortunately, this bound is hardly comparable to the one of Thm. 1, due to the error terms. In Thm. 3, the error term (the residual) is a global error (how good the residual is as a proxy), whereas in Thm. 1 the error term is mainly a local error (how small the gradient is after maximizing the mean value). Notice also that Thm. 3 is roughly an intermediate step for proving Thm. 1, and that it applies to any policy (suggesting that searching for a policy that minimizes the residual makes sense). One could argue that a similar bound for mean value maximization would be something like: if Jµ(π) ≥ α, then ‖v∗ − vπ‖1,µ ≤ µv∗ − α. However, this is an oracle bound, as it depends on the unknown solution v∗. It is thus hardly exploitable. The aim of this paper is to compare these two optimization approaches to RL. At first sight, maximizing directly the mean value should be better (as a more direct approach). While the bounds of Thms. 1 and 3 are hardly comparable, we can still discuss the involved terms. The horizon term is better (linear instead of quadratic) for the residual approach. Yet, a horizon term can possibly be hidden in the residual itself. Both bounds involve the same concentrability coefficient, the best one can expect. This is a very important term in RL bounds, often underestimated: as these coefficients can easily explode, minimizing an error makes sense only if it is not multiplied by infinity. This coefficient suggests that one should use dµ,π∗ as the sampling distribution. This is rarely reasonable, while using instead directly the distribution of interest is more natural. Therefore, the experiments we propose in the next section focus on the influence of this concentrability coefficient.
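Since the experiments below rely on analytic gradients, here is a sketch (ours) of the Thm. 2 subgradient for a softmax (Gibbs) policy of the kind used in Sec. 4; greedy ties are broken by argmax, and the helper names reuse the earlier snippets.

```python
import numpy as np

def gibbs_policy(w, phi):
    """pi_w(a|s) proportional to exp(w^T phi(s, a)), with phi of shape (S, A, d)."""
    z = phi @ w
    z = z - z.max(axis=1, keepdims=True)                  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def residual_subgradient(P, R, gamma, w, phi, nu):
    """Analytic negative subgradient -grad J_nu(pi_w) of Thm. 2 (tabular)."""
    pi = gibbs_policy(w, phi)
    v = policy_value(P, R, gamma, pi)
    q = R + gamma * np.einsum("sat,t->sa", P, v)          # q_pi(s, a)
    greedy = np.eye(R.shape[1])[q.argmax(axis=1)]         # a deterministic G(v_pi)
    nu_PG = np.einsum("s,sa,sat->t", nu, greedy, P)       # nu P_{G(v_pi)}
    weights = occupancy(P, gamma, pi, nu) - gamma * occupancy(P, gamma, pi, nu_PG)
    grad_log = phi - np.einsum("sb,sbd->sd", pi, phi)[:, None, :]  # grad_w ln pi(a|s)
    return np.einsum("s,sa,sad,sa->d", weights, pi, grad_log, q) / (1 - gamma)
```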
4 Experiments

We consider Garnet problems [2, 4]. They are a class of randomly built MDPs meant to be totally abstract while remaining representative of the problems that might be encountered in practice. Here, a Garnet G(|S|, |A|, b) is specified by the number of states, the number of actions and the branching factor. For each (s, a) couple, b different next states are chosen randomly and the associated probabilities are set by randomly partitioning the unit interval. The reward is null, except for 10% of the states, where it is set to a random value, uniform in (1, 2). We set γ = 0.99. For the policy space, we consider a Gibbs parameterization:

P = {πw : πw(a|s) ∝ e^{w⊤φ(s,a)}}.

The features are also randomly generated, F(d, l). First, we generate binary state features ϕ(s) of dimension d, such that l components are set to 1 (the others are thus 0). The positions of the 1's are selected randomly, such that no two states have the same feature. Then, the state-action features, of dimension d|A|, are classically defined as φ(s, a) = (0 … 0 ϕ(s) 0 … 0)⊤, the position of the zeros depending on the action. Notice that in general this policy space is not stable by stochastic mixture, so the bound for policy search does not formally apply. We compare classic policy search (denoted PS(ν)), which maximizes the mean value, and residual policy search (denoted RPS(ν)), which minimizes the mean residual. We optimize the respective objective functions with a normalized gradient ascent (resp. normalized subgradient descent) with a constant learning rate α = 0.1. The gradients are computed analytically (as we have access to the model), so the following results represent an ideal case, where one can do an infinite number of rollouts. Unless said otherwise, the distribution of interest µ ∈ ∆S is the uniform distribution.
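For concreteness, here is a possible generator for these Garnets and features (ours; a sketch under the stated conventions, which for simplicity does not enforce that all state features are distinct). Calling, e.g., garnet(30, 4, 2, np.random.default_rng(0)) and features(30, 4, 8, 3, np.random.default_rng(1)) reproduces the G(30, 4, 2) and F(8, 3) sizes used below.

```python
import numpy as np

def garnet(S, A, b, rng):
    """Random Garnet G(S, A, b): b successors per (s, a), sparse rewards in (1, 2)."""
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            succ = rng.choice(S, size=b, replace=False)
            cuts = np.sort(rng.uniform(size=b - 1))           # random partition of [0, 1]
            P[s, a, succ] = np.diff(np.concatenate(([0.0], cuts, [1.0])))
    R = np.zeros((S, A))
    rewarded = rng.choice(S, size=max(1, S // 10), replace=False)
    R[rewarded] = rng.uniform(1.0, 2.0, size=(len(rewarded), 1))  # one value per state
    return P, R

def features(S, A, d, l, rng):
    """F(d, l): binary state features with l ones, block-replicated per action."""
    states = np.zeros((S, d))
    for s in range(S):
        states[s, rng.choice(d, size=l, replace=False)] = 1.0
    phi = np.zeros((S, A, d * A))
    for a in range(A):
        phi[:, a, a * d:(a + 1) * d] = states
    return phi
```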
4.1 Using the distribution of interest

First, we consider ν = µ. We randomly generate 100 Garnets G(30, 4, 2) and 100 feature sets F(8, 3). For each Garnet-feature couple, we run both algorithms for T = 1000 iterations. For each algorithm, we measure two quantities: the (normalized) error ‖v∗ − vπ‖1,µ / ‖v∗‖1,µ (notice that as rewards are positive, we have ‖v∗‖1,µ = µv∗) and the Bellman residual ‖T∗vπ − vπ‖1,µ, where π depends on the algorithm and on the iteration. We show the results (mean ± standard deviation) in Fig. 1. Fig. 1.a shows that PS(µ) succeeds in decreasing the error. This was to be expected, as it is the criterion it optimizes. Fig. 1.c shows how the residual of the policies computed by PS(µ) evolves. By comparing this to Fig. 1.a, it can be observed that the residual and the error are not necessarily correlated: the error can decrease while the residual increases, and a low error does not necessarily involve a low residual. Fig. 1.d shows that RPS(µ) succeeds in decreasing the residual. Again, this is not surprising, as it is the optimized criterion. Fig. 1.b shows how the error of the policies computed by RPS(µ) evolves. Comparing this to Fig. 1.d, it can be observed that decreasing the residual lowers the error: this is consistent with the bound of Thm. 3. Comparing Figs. 1.a and 1.b, it appears clearly that RPS(µ) is less efficient than PS(µ) at decreasing the error. This might seem obvious, as PS(µ) directly optimizes the criterion of interest. However, when comparing the errors and the residuals for each method, it can be observed that they are not necessarily correlated. Decreasing the residual lowers the error, but one can have a low error with a high residual and vice versa. As explained in Sec. 1, (projected) residual-based methods are prevalent in many reinforcement learning approaches. We consider a policy-based residual rather than a value-based one to ease the comparison, but it is worth studying the reason for such a different behavior.

4.2 Using the ideal distribution

The lower the concentrability coefficient ‖dµ,π∗/ν‖∞ is, the better the bounds in Thms. 1 and 3 are. This coefficient is minimized for ν = dµ,π∗. This is an unrealistic case (π∗ is unknown), but since we work with known MDPs, we can compute this quantity (the model being known), for the sake of a complete empirical analysis. Therefore, PS(dµ,π∗) and RPS(dµ,π∗) are compared in Fig. 2. We highlight the fact that the errors and the residuals shown in this figure are measured with respect to the distribution of interest µ, and not the distribution dµ,π∗ used for the optimization. Fig. 2.a shows that PS(dµ,π∗) succeeds in decreasing the error ‖v∗ − vπ‖1,µ. However, comparing Fig. 2.a to Fig. 1.a, there is no significant gain in using ν = dµ,π∗ instead of ν = µ. This suggests that the dependency of the bound in Thm. 1 on the concentrability coefficient is not tight. Fig. 2.c shows how the corresponding residual evolves. Again, there is no strong correlation between the residual and the error. Fig. 2.d shows how the residual ‖T∗vπ − vπ‖1,µ evolves for RPS(dµ,π∗). It is not decreasing, but it is not what is optimized (the residual ‖T∗vπ − vπ‖1,dµ,π∗, not shown, does decrease, in a similar fashion to Fig. 1.d). Fig. 2.b shows how the related error evolves. Compared to Fig. 2.a, there is no significant difference. The behavior of the residual is similar for both methods (Figs. 2.c and 2.d). Overall, this suggests that controlling the residual (RPS) allows controlling the error, but that this requires a wise choice of the distribution ν. On the other hand, controlling the error directly (PS) is much less sensitive to this. In other words, this suggests a stronger dependency of the residual approach on the mismatch between the sampling distribution and the discounted state occupancy measure induced by the optimal policy.

4.3 Varying the sampling distribution

This experiment is designed to study the effect of the mismatch between the distributions. We sample 100 Garnets G(30, 4, 2), as well as associated feature sets F(8, 3). The distribution of interest is no longer the uniform distribution, but a measure that concentrates on a single starting state of interest s0: µ(s0) = 1. This is an adversarial case, as it implies that ‖dµ,π∗/µ‖∞ = ∞: the branching factor being equal to 2, the optimal policy π∗ cannot concentrate on s0. The sampling distribution is defined as a mixture between the distribution of interest and the ideal distribution. For α ∈ [0, 1], να is defined as

να = (1 − α)µ + α dµ,π∗.
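As a quick numerical sanity check of the coefficient derived next, the following snippet (ours, reusing the earlier sketches; the value-iteration proxy for π∗ is an assumption of this sketch, the paper computes dµ,π∗ from the known model) evaluates ‖dµ,π∗/να‖∞ for a few values of α.

```python
import numpy as np

rng = np.random.default_rng(0)
P, R = garnet(30, 4, 2, rng)
gamma = 0.99
mu = np.zeros(30); mu[0] = 1.0                     # interest concentrated on s0 = 0

v = np.zeros(30)                                   # crude pi* via value iteration
for _ in range(5000):
    v = (R + gamma * np.einsum("sat,t->sa", P, v)).max(axis=1)
pi_star = np.eye(4)[(R + gamma * np.einsum("sat,t->sa", P, v)).argmax(axis=1)]

d_star = occupancy(P, gamma, pi_star, mu)
for alpha in (1.0, 0.5, 0.1):
    nu_alpha = (1 - alpha) * mu + alpha * d_star
    print(alpha, concentrability(d_star, nu_alpha))  # should print 1/alpha
```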
It is straightforward to show that, in this case, the concentrability coefficient is indeed 1/α (with the convention that 1/0 = ∞):

‖dµ,π∗/να‖∞ = max( dµ,π∗(s0) / ((1 − α) + α dµ,π∗(s0)) ; 1/α ) = 1/α.

For each MDP, the learning (for PS(να) and RPS(να)) is repeated, from the same initial policy, setting α = 1/k for k ∈ [1; 25]. Let πt,x be the policy learnt by algorithm x (PS or RPS) at iteration t; the integrated error (resp. integrated residual) is defined as

(1/T) ∑_{t=1}^{T} ‖v∗ − vπt,x‖1,µ / ‖v∗‖1,µ (resp. (1/T) ∑_{t=1}^{T} ‖T∗vπt,x − vπt,x‖1,µ).

Notice that, here again, the integrated error and residual are defined with respect to µ, the distribution of interest, and not να, the sampling distribution used for optimization. We get an integrated error (resp. residual) for each value of α = 1/k, and represent it as a function of k = ‖dµ,π∗/να‖∞, the concentrability coefficient. Results are presented in Fig. 3, which shows these functions averaged across the 100 randomly generated MDPs (mean ± standard deviation as before; minimum and maximum values are shown in dashed lines). Fig. 3.a shows the integrated error for PS(να). It can be observed that the mismatch between measures has no influence on the efficiency of the algorithm. Fig. 3.b shows the same thing for RPS(να). The integrated error increases greatly as the mismatch between the sampling measure and the ideal one increases (the value at which the error saturates corresponds to no improvement over the initial policy). Comparing both figures, it can be observed that RPS performs as well as PS only when the ideal distribution is used (this corresponds to a concentrability coefficient of 1). Figs. 3.c and 3.d show the integrated residual for each algorithm. It can be observed that RPS consistently achieves a lower residual than PS. Overall, this suggests that using the Bellman residual as a proxy is efficient only if the sampling distribution is close to the ideal one, which is difficult to achieve in general (the ideal distribution dµ,π∗ being unknown). On the other hand, the more direct approach consisting in maximizing the mean value is much more robust to this issue (and can, as a consequence, be used directly with the distribution of interest). One could argue that the way we optimize the considered objective functions is rather naive (for example, considering a constant learning rate). But this does not change the conclusions of this experimental study, which deals with how the error and the Bellman residual are related and with how the concentrability influences each optimization approach. This point is developed in the appendix.
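The whole comparison can be reproduced in miniature from the sketches above; the loop below (ours; the classic policy gradient used for PS is the standard analytic one, not taken from the paper) runs PS or RPS with normalized steps and returns the integrated error. The optimal value v_star can be obtained, e.g., from the value-iteration loop of the previous snippet.

```python
import numpy as np

def mean_value_gradient(P, R, gamma, w, phi, nu):
    """Analytic gradient of J_nu(pi_w) = nu . v_pi (classic policy gradient)."""
    pi = gibbs_policy(w, phi)
    v = policy_value(P, R, gamma, pi)
    q = R + gamma * np.einsum("sat,t->sa", P, v)
    d = occupancy(P, gamma, pi, nu)
    grad_log = phi - np.einsum("sb,sbd->sd", pi, phi)[:, None, :]
    return np.einsum("s,sa,sad,sa->d", d, pi, grad_log, q) / (1 - gamma)

def run(P, R, gamma, phi, nu, mu, v_star, T=1000, lr=0.1, residual=False):
    """PS (ascent on nu.v_pi) or RPS (descent on the residual); integrated error."""
    w, errs = np.zeros(phi.shape[-1]), []
    for _ in range(T):
        # residual_subgradient already returns -grad J_nu, so both updates below
        # step in the improving direction with a normalized (sub)gradient.
        g = (residual_subgradient if residual else mean_value_gradient)(P, R, gamma, w, phi, nu)
        w += lr * g / max(np.linalg.norm(g), 1e-12)
        v = policy_value(P, R, gamma, gibbs_policy(w, phi))
        errs.append(mu @ (v_star - v) / (mu @ v_star))   # normalized error
    return float(np.mean(errs))
```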
5 Conclusion

The aim of this article was to compare two optimization approaches to reinforcement learning: minimizing a Bellman residual and maximizing the mean value. As said in Sec. 1, Bellman residuals are prevalent in ADP. Notably, value iteration minimizes such a residual using a fixed-point approach, and policy iteration minimizes it with a Newton descent. On the other hand, maximizing the mean value (Sec. 2) is prevalent in policy search approaches. As Bellman residual minimization methods are naturally value-based and mean value maximization approaches are naturally policy-based, we introduced a policy-based residual minimization algorithm in order to study both optimization problems together. For the introduced residual method, we proved a proxy bound better than that of value-based residual minimization. The different nature of the bounds of Thms. 1 and 3 made the comparison difficult, but both involve the same concentrability coefficient, a term often underestimated in RL bounds. Therefore, we compared both approaches empirically on a set of randomly generated Garnets, the study being designed to quantify the influence of this concentrability coefficient. From these experiments, it appears that the Bellman residual is a good proxy for the error (the distance to the optimal value function) only if, luckily, the concentrability coefficient is small for the considered MDP and the distribution of interest, or if one can afford a change of measure for the optimization problem, such that the sampling distribution is close to the ideal one. Regarding this second point, one can change to a measure different from the ideal one, dµ,π∗ (for example, using for ν a uniform distribution when the distribution of interest concentrates on a single state would help), but this is difficult in general (one should know roughly where the optimal policy will lead). Conversely, maximizing the mean value appears to be insensitive to this problem. This suggests that the Bellman residual is generally a bad proxy to policy optimization, and that maximizing the mean value is more likely to result in efficient and robust reinforcement learning algorithms, despite the current lack of deep theoretical analysis. This conclusion might seem obvious, as maximizing the mean value is a more direct approach, but this discussion has never been addressed in the literature, as far as we know, and we think it is important, given the prevalence of (projected) residual minimization in value-based RL.
1. What is the main contribution of the paper in the field of reinforcement learning?
2. What are the strengths of the paper, particularly in terms of its theoretical and empirical analyses?
3. What is the limitation of the paper regarding its scope and applicability to more complex settings?
Review
Review The paper investigates a fundamental question in RL, namely whether we should directly maximize the mean value, which is the primary objective, or if it is suitable to minimize the Bellman residual, as is done by several algorithms. The paper provides new theoretical analysis, in the form of a bound on residual policy search. This is complemented by a series of empirical results with random (Garnet-type) MDPs. The paper has a simple take-home message, which is that minimizing the Bellman residual is more susceptible to mismatch between the sample distribution and the optimal distribution, compared to policy search methods that directly optimize the value. The paper is well written; it outlines a specific question, and provides both theoretical and empirical support for answering this question. The new theoretical analysis extends previously known results. It is a modest extension, but potentially useful, and easy to grasp. The empirical results are informative, and constructed to shed light on the core question. I would be interested in seeing analogous results for standard RL benchmarks, to verify that the results carry over to "real" MDPs, not just random ones, but that is a minor point. One weakness of the work might be its scope: it is focused on a simple setting of policy search, without tackling the implications of the work for more complex, more commonly used methods such as TRPO or DDPG.
NIPS
1. What is the main contribution of the paper regarding policy search?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. Are there any concerns or questions regarding the assumptions and limitations of the proposed method?
Review
Review The paper discusses a very interesting topic: which measure (BR minimization or mean value maximization) is better for policy search?

1. The reviewer is interested in one possible reason besides the discussion in the paper: the Markov assumption. As we know, Bellman residual minimization relies more heavily on the Markov assumption of the task, whereas the mean value does not. In real tasks for policy search, many tasks may break the Markov assumption. Does this also account for why the mean value is a better metric in policy search?

2. There is a missing reference for (projected) Bellman residual minimization: Toward Off-Policy Learning Control with Function Approximation, by H. Maei et al. (using projected TD for off-policy control instead of just policy evaluation). There is also one question: H. Maei's method suffers from the latent learning problem: the optimal policy, though learned, is not manifest in behavior. Until the learning process finishes, the learned policy is not allowed to be expressed and used. The reviewer wonders if such a problem (latent learning) exists in the policy-based BR minimization approach. Please explain why if it does not.
Title Is the Bellman residual a bad proxy? Abstract This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, that are usually designed to maximize the mean value, and derive a method that minimizes the residual ‖T∗vπ − vπ‖1,ν over policies. A theoretical analysis shows how good this proxy is to policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy to policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth to be considered. 1 Introduction Reinforcement Learning (RL) aims at estimating a policy π close to the optimal one, in the sense that its value, vπ (the expected discounted return), is close to maximal, i.e ‖v∗ − vπ‖ is small (v∗ being the optimal value), for some norm. Controlling the residual ‖T∗vθ − vθ‖ (where T∗ is the optimal Bellman operator and vθ a value function parameterized by θ) over a class of parameterized value functions is a classical approach in value-based RL, and especially in Approximate Dynamic Programming (ADP). Indeed, controlling this residual allows controlling the distance to the optimal value function: generally speaking, we have that ‖v∗ − vπvθ ‖ ≤ C 1− γ ‖T∗vθ − vθ‖, (1) with the policy πvθ being greedy with respect to vθ [17, 19]. Some classical ADP approaches actually minimize a projected Bellman residual, ‖Π(T∗vθ − vθ)‖, where Π is the operator projecting onto the hypothesis space to which vθ belongs: Approximate Value Iteration (AVI) [11, 9] tries to minimize this using a fixed-point approach, vθk+1 = ΠT∗vθk , and it has been shown recently [18] that Least-Squares Policy Iteration (LSPI) [13] tries to minimize it using a Newton approach1. Notice that in this case (projected residual), there is no general performance bound2 for controlling ‖v∗ − vπvθ ‖. 1(Exact) policy iteration actually minimizes ‖T∗v − v‖ using a Newton descent [10]. 2With a single action, this approach reduces to LSTD (Least-Squares Temporal Differences) [5], that can be arbitrarily bad in an off-policy setting [20]. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Despite the fact that (unprojected) residual approaches come easily with performance guarantees, they are not extensively studied in the (value-based) literature (one can mention [3] that considers a subgradient descent or [19] that frames the norm of the residual as a delta-convex function). A reason for this is that they lead to biased estimates when the Markovian transition kernel is stochastic and unknown [1], which is a rather standard case. Projected Bellman residual approaches are more common, even if not introduced as such originally (notable exceptions are [16, 18]). 
An alternative approach consists in maximizing directly the mean value Eν[vπ(S)] for a user-defined state distribution ν, this being equivalent to directly minimizing ‖v∗ − vπ‖1,ν, see Sec. 2. This suggests defining a class of parameterized policies and optimizing over them, which is the predominant approach in policy search (Footnote 3) [7]. This paper aims at theoretically and experimentally studying these two approaches: maximizing the mean value (related algorithms operate on policies) and minimizing the residual (related algorithms operate on value functions). For that purpose, we place ourselves in the context of policy search algorithms. We adopt this position because we could derive a method that minimizes the residual ‖T∗vπ − vπ‖1,ν over policies and compare it to other methods that usually maximize the mean value. On the other hand, adapting ADP methods so that they maximize the mean value is much harder (Footnote 4). This new approach is presented in Sec. 3, and we show theoretically how good this proxy is. In Sec. 4, we conduct experiments on randomly generated generic Markov decision processes to compare both approaches empirically. The experiments are specifically designed to study the influence of the involved concentrability coefficient. Despite the good theoretical properties of the Bellman residual approach, it turns out that it only works well if there is a good match between the sampling distribution and the discounted state occupancy distribution induced by the optimal policy, which is a very limiting requirement. In comparison, maximizing the mean value is rather insensitive to this issue and works well whatever the sampling distribution is, contrary to what the sole related theoretical bound suggests. This study thus suggests that maximizing the mean value, although it doesn't provide easy theoretical analysis, is a better approach to build efficient and robust RL algorithms. 2 Background 2.1 Notations Let ∆X be the set of probability distributions over a finite set X and Y^X the set of applications from X to the set Y. By convention, all vectors are column vectors, except distributions (for left multiplication). A Markov Decision Process (MDP) is a tuple {S, A, P, R, γ}, where S is the finite state space (Footnote 5), A is the finite action space, P ∈ (∆S)^{S×A} is the Markovian transition kernel (P(s′|s, a) denotes the probability of transiting to s′ when action a is applied in state s), R ∈ R^{S×A} is the bounded reward function (R(s, a) represents the local benefit of doing action a in state s) and γ ∈ (0, 1) is the discount factor. For v ∈ R^S, we write $\|v\|_{1,\nu} = \sum_{s\in S}\nu(s)|v(s)|$ for the ν-weighted ℓ1-norm of v. Notice that when the function v ∈ R^S is componentwise positive, that is v ≥ 0, the ν-weighted ℓ1-norm of v is actually its expectation with respect to ν: if v ≥ 0, then ‖v‖1,ν = Eν[v(S)] = νv. We will make intensive use of this basic property in the following. A stochastic policy π ∈ (∆A)^S associates a distribution over actions to each state. The policy-induced reward and transition kernels, Rπ ∈ R^S and Pπ ∈ (∆S)^S, are defined as Rπ(s) = Eπ(·|s)[R(s, A)] and Pπ(s′|s) = Eπ(·|s)[P(s′|s, A)]. The quality of a policy is quantified by the associated value function vπ ∈ R^S: $v_\pi(s) = \mathbb{E}\big[\textstyle\sum_{t\ge 0}\gamma^t R_\pi(S_t)\,\big|\,S_0 = s,\ S_{t+1}\sim P_\pi(\cdot|S_t)\big]$. Footnote 3: A remarkable aspect of policy search is that it does not necessarily rely on the Markovian assumption, but this is out of the scope of this paper (residual approaches rely on it, through the Bellman equation).
Some recent and effective approaches build on policy search, such as deep deterministic policy gradient [15] or trust region policy optimization [23]. Here, we focus on the canonical mean value maximization approach. Footnote 4: Approximate linear programming could be considered as such but is often computationally intractable [8, 6]. Footnote 5: This choice is done for ease and clarity of exposition; the following results could be extended to continuous state and action spaces. The value vπ is the unique fixed point of the Bellman operator Tπ, defined as Tπv = Rπ + γPπv for any v ∈ R^S. Let us define the second Bellman operator T∗ as, for any v ∈ R^S, $T_*v = \max_{\pi\in(\Delta_A)^S} T_\pi v$. A policy π is greedy with respect to v ∈ R^S, denoted π ∈ G(v), if Tπv = T∗v. There exists an optimal policy π∗ that satisfies componentwise vπ∗ ≥ vπ, for all π ∈ (∆A)^S. Moreover, we have that π∗ ∈ G(v∗), with v∗ being the unique fixed point of T∗. Finally, for any distribution µ ∈ ∆S, the γ-weighted occupancy measure induced by the policy π when the initial state is sampled from µ is defined as $d_{\mu,\pi} = (1-\gamma)\,\mu\sum_{t\ge 0}\gamma^t P_\pi^t = (1-\gamma)\,\mu(I-\gamma P_\pi)^{-1} \in \Delta_S$. For two distributions µ and ν, we write $\left\|\frac{\mu}{\nu}\right\|_\infty$ for the smallest constant C satisfying, for all s ∈ S, µ(s) ≤ Cν(s). This quantity measures the mismatch between the two distributions. 2.2 Maximizing the mean value Let P be a space of parameterized stochastic policies and let µ be a distribution of interest. The optimal policy has a higher value than any other policy, for any state. If the MDP is too large, satisfying this condition is not reasonable. Therefore, a natural idea consists in searching for a policy such that the associated value function is as close as possible to the optimal one, in expectation, according to a distribution of interest µ. More formally, this means minimizing ‖v∗ − vπ‖1,µ = Eµ[v∗(S) − vπ(S)] ≥ 0. The optimal value function being unknown, one cannot address this problem directly, but it is equivalent to maximizing Eµ[vπ(S)]. This is the basic principle of many policy search approaches: $\max_{\pi\in\mathcal{P}} J_\nu(\pi)$ with $J_\nu(\pi) = \mathbb{E}_\nu[v_\pi(S)] = \nu v_\pi$. Notice that we used a sampling distribution ν here, possibly different from the distribution of interest µ. Related algorithms differ notably by the considered criterion (e.g., it can be the mean reward rather than the γ-discounted cumulative reward considered here) and by how the corresponding optimization problem is solved. We refer to [7] for a survey on that topic. Contrary to ADP, the theoretical efficiency of this family of approaches has not been studied a lot. Indeed, as far as we know, there is a sole performance bound for maximizing the mean value. Theorem 1 (Scherrer and Geist [22]). Assume that the policy space P is stable by stochastic mixture, that is, ∀π, π′ ∈ P, ∀α ∈ (0, 1), (1 − α)π + απ′ ∈ P. Define the ν-greedy-complexity of the policy space P as $\mathcal{E}_\nu(\mathcal{P}) = \max_{\pi\in\mathcal{P}}\ \min_{\pi'\in\mathcal{P}}\ d_{\nu,\pi}\big(T_* v_\pi - T_{\pi'} v_\pi\big)$. Then, any policy π that is an ε-local optimum of Jν, in the sense that $\forall \pi'\in\mathcal{P},\ \lim_{\alpha\to 0}\frac{\nu v_{(1-\alpha)\pi+\alpha\pi'} - \nu v_\pi}{\alpha} \le \epsilon$, enjoys the following global performance guarantee: $\mu(v_* - v_\pi) \le \frac{1}{(1-\gamma)^2}\left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty \big(\mathcal{E}_\nu(\mathcal{P}) + \epsilon\big)$. This bound (as all bounds of this kind) has three terms: a horizon term, a concentrability term and an error term. The term $\frac{1}{1-\gamma}$ is the average optimization horizon. The concentrability coefficient ($\|d_{\mu,\pi_*}/\nu\|_\infty$) measures the mismatch between the used distribution ν and the γ-weighted occupancy measure induced by the optimal policy π∗ when the initial state is sampled from the distribution of interest µ.
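The quantities appearing in Theorem 1 are all computable in the tabular, known-model setting. As a sketch (under the paper's finite-MDP assumptions; the function names are ours), the value function, the γ-weighted occupancy measure, and the concentrability coefficient can be obtained by direct linear algebra:

```python
import numpy as np

def value_and_occupancy(pi, P, R, gamma, mu):
    """v_pi = (I - gamma P_pi)^{-1} R_pi and d_{mu,pi} = (1 - gamma) mu (I - gamma P_pi)^{-1}.

    pi has shape (S, A); P has shape (S, A, S); R has shape (S, A); mu has shape (S,).
    """
    R_pi = (pi * R).sum(axis=1)             # policy-induced reward R_pi
    P_pi = np.einsum('sa,sat->st', pi, P)   # policy-induced kernel P_pi
    resolvent = np.linalg.inv(np.eye(len(mu)) - gamma * P_pi)
    v_pi = resolvent @ R_pi
    d_mu_pi = (1.0 - gamma) * (mu @ resolvent)
    return v_pi, d_mu_pi

def concentrability(d_mu_pistar, nu):
    """||d_{mu,pi*} / nu||_inf; a state with nu(s) = 0 but positive occupancy gives inf."""
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(d_mu_pistar > 0, d_mu_pistar / nu, 0.0)
    return float(ratio.max())
```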
This coefficient tells that if µ is the distribution of interest, one should optimize $J_{d_{\mu,\pi_*}}$, which is not feasible, π∗ being unknown (in this case, the coefficient is equal to 1, its lower bound). This coefficient can be arbitrarily large: consider the case where µ concentrates on a single starting state (that is, µ(s0) = 1 for a given state s0) and such that the optimal policy leads to other states (that is, dµ,π∗(s0) < 1); the coefficient is then infinite. However, it is also the best concentrability coefficient according to [21], which provides a theoretical and empirical comparison of Approximate Policy Iteration (API) schemes. The error term is Eν(P) + ε, where Eν(P) measures the capacity of the policy space to represent the policies being greedy with respect to the value of any policy in P, and ε tells how close the computed policy π is to a local optimum of Jν. There exist other policy search approaches, based on ADP rather than on maximizing the mean value, such as Conservative Policy Iteration (CPI) [12] or Direct Policy Iteration (DPI) [14]. The bound of Thm. 1 matches the bounds of DPI or CPI. Actually, CPI can be shown to be a boosting approach maximizing the mean value. See the discussion in [22] for more details. However, this bound is also based on a very strong assumption (stability by stochastic mixture of the policy space) which is not satisfied by all commonly used policy parameterizations. 3 Minimizing the Bellman residual Direct maximization of the mean value operates on policies, while residual approaches operate on value functions. To study these two optimization criteria together, we introduce a policy search method that minimizes a residual. As noted before, we do so because it is much simpler than introducing a value-based approach that maximizes the mean value. We also show how good this proxy is to policy optimization. Although this algorithm is new, it is not claimed to be a core contribution of the paper. Yet it is clearly a mandatory step to support the comparison between optimization criteria. 3.1 Optimization problem We propose to search for a policy in P that minimizes the following Bellman residual: $\min_{\pi\in\mathcal{P}} J_\nu(\pi)$ with $J_\nu(\pi) = \|T_* v_\pi - v_\pi\|_{1,\nu}$. Notice that, as for the maximization of the mean value, we used a sampling distribution ν, possibly different from the distribution of interest µ. From the basic properties of the Bellman operator, for any policy π we have that T∗vπ ≥ vπ. Consequently, the ν-weighted ℓ1-norm of the residual is indeed the expected Bellman residual: $J_\nu(\pi) = \mathbb{E}_\nu\big[[T_* v_\pi](S) - v_\pi(S)\big] = \nu(T_* v_\pi - v_\pi)$. Therefore, there is naturally no bias problem for minimizing a residual here, contrary to other residual approaches [1]. This is an interesting result on its own, as removing the bias in value-based residual approaches is far from being straightforward. This results from the optimization being done over policies and not over values, and thus from vπ being an actual value (the one of the current policy) obeying the Bellman equation (Footnote 6). Any optimization method can be envisioned to minimize Jν. Here, we simply propose to apply a subgradient descent (despite the lack of convexity). Theorem 2 (Subgradient of Jν). Recall that, given the considered notations, the distribution $\nu P_{\mathcal{G}(v_\pi)}$ is the state distribution obtained by sampling the initial state according to ν, applying the action being greedy with respect to vπ, and following the dynamics to the next state.
This being said, the subgradient of Jν is given by $-\nabla J_\nu(\pi) = \frac{1}{1-\gamma}\sum_{s,a}\Big(d_{\nu,\pi}(s) - \gamma\, d_{\nu P_{\mathcal{G}(v_\pi)},\pi}(s)\Big)\,\pi(a|s)\,\nabla\ln\pi(a|s)\,q_\pi(s,a)$, with $q_\pi(s,a) = R(s,a) + \gamma\sum_{s'\in S}P(s'|s,a)\,v_\pi(s')$ the state-action value function. Proof. The proof relies on basic (sub)gradient calculus; it is given in the appendix. There are two terms in the negative subgradient −∇Jν: the first one corresponds to the gradient of Jν, the second one (up to the multiplication by −γ) is the gradient of $J_{\nu P_{\mathcal{G}(v_\pi)}}$ and acts as a kind of correction. This subgradient can be estimated using Monte Carlo rollouts, but doing so is harder than for classic policy search (as it additionally requires sampling from $\nu P_{\mathcal{G}(v_\pi)}$, which requires estimating the state-action value function). Footnote 6: The property T∗v ≥ v does not hold if v is not the value function of a given policy, as in value-based approaches. Also, this gradient involves computing the maximum over actions (as it requires sampling from $\nu P_{\mathcal{G}(v_\pi)}$, which comes from explicitly considering the Bellman optimality operator), which prevents easily extending this approach to continuous actions, contrary to classic policy search. Thus, from an algorithmic point of view, this approach has drawbacks. Yet, we do not discuss further how to efficiently estimate this subgradient, since we introduced this approach for the sake of comparison to standard policy search methods only. For this reason, we will consider an ideal algorithm in the experimental section where an analytical computation of the subgradient is possible; see Sec. 4. This will place us in an unrealistically good setting, which will help focusing on the main conclusions. Before this, we study how good this proxy is to policy optimization. 3.2 Analysis Theorem 3 (Proxy bound for residual policy search). We have that $\|v_* - v_\pi\|_{1,\mu} \le \frac{1}{1-\gamma}\left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty J_\nu(\pi) = \frac{1}{1-\gamma}\left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty \|T_* v_\pi - v_\pi\|_{1,\nu}$. Proof. The proof can be easily derived from the analyses of [12], [17] or [22]. We detail it for completeness in the appendix. This bound shows how controlling the residual helps in controlling the error. It has a linear dependency on the horizon, and the concentrability coefficient is the best one can expect (according to [21]). It has the same form as the bounds for value-based residual minimization [17, 19] (see also Eq. (1)). It is even better, due to the involved concentrability coefficient (the ones for value-based bounds are worse; see [21] for a comparison). Unfortunately, this bound is hardly comparable to the one of Th. 1, due to the error terms. In Th. 3, the error term (the residual) is a global error (how good the residual is as a proxy), whereas in Th. 1 the error term is mainly a local error (how small the gradient is after maximizing the mean value). Notice also that Th. 3 is roughly an intermediate step for proving Th. 1, and that it applies to any policy (suggesting that searching for a policy that minimizes the residual makes sense). One could argue that a similar bound for mean value maximization would be something like: if Jµ(π) ≥ α, then ‖v∗ − vπ‖1,µ ≤ µv∗ − α. However, this is an oracle bound, as it depends on the unknown solution v∗. It is thus hardly exploitable. The aim of this paper is to compare these two optimization approaches to RL. At first sight, maximizing directly the mean value should be better (as a more direct approach). If the bounds of Th. 1 and 3 are hardly comparable, we can still discuss the involved terms. The horizon term is better (linear instead of quadratic) for the residual approach.
Yet, a horizon term can possibly be hidden in the residual itself. Both bounds involve the same concentrability coefficient, the best one can expect. This is a very important term in RL bounds, often underestimated: as these coefficients can easily explode, minimizing an error makes sense only if it is not multiplied by infinity. This coefficient suggests that one should use dµ,π∗ as the sampling distribution. This is rarely reasonable, while using instead directly the distribution of interest is more natural. Therefore, the experiments we propose in the next section focus on the influence of this concentrability coefficient. 4 Experiments We consider Garnet problems [2, 4]. They are a class of randomly built MDPs meant to be totally abstract while remaining representative of the problems that might be encountered in practice. Here, a Garnet G(|S|, |A|, b) is specified by the number of states, the number of actions and the branching factor. For each (s, a) couple, b different next states are chosen randomly and the associated probabilities are set by randomly partitioning the unit interval. The reward is null, except for 10% of states where it is set to a random value, uniform in (1, 2). We set γ = 0.99. For the policy space, we consider a Gibbs parameterization: $\mathcal{P} = \{\pi_w : \pi_w(a|s) \propto e^{w^\top\phi(s,a)}\}$. The features are also randomly generated, F(d, l). First, we generate binary state-features ϕ(s) of dimension d, such that l components are set to 1 (the others are thus 0). The positions of the 1's are selected randomly such that no two states have the same feature. Then, the state-action features, of dimension d|A|, are classically defined as $\phi(s,a) = (0\ \dots\ 0\ \ \varphi(s)^\top\ \ 0\ \dots\ 0)^\top$, the position of the zeros depending on the action. Notice that in general this policy space is not stable by stochastic mixture, so the bound for policy search does not formally apply. We compare classic policy search (denoted as PS(ν)), which maximizes the mean value, and residual policy search (denoted as RPS(ν)), which minimizes the mean residual. We optimize the respective objective functions with a normalized gradient ascent (resp. normalized subgradient descent) with a constant learning rate α = 0.1. The gradients are computed analytically (as we have access to the model), so the following results represent an ideal case, when one can do an infinite number of rollouts. Unless said otherwise, the distribution µ ∈ ∆S of interest is the uniform distribution. 4.1 Using the distribution of interest First, we consider ν = µ. We generate randomly 100 Garnets G(30, 4, 2) and 100 features F(8, 3). For each Garnet-feature couple, we run both algorithms for T = 1000 iterations. For each algorithm, we measure two quantities: the (normalized) error $\frac{\|v_* - v_\pi\|_{1,\mu}}{\|v_*\|_{1,\mu}}$ (notice that, as rewards are positive, we have ‖v∗‖1,µ = µv∗) and the Bellman residual ‖T∗vπ − vπ‖1,µ, where π depends on the algorithm and on the iteration. We show the results (mean ± standard deviation) in Fig. 1. Fig. 1.a shows that PS(µ) succeeds in decreasing the error. This was to be expected, as it is the criterion it optimizes. Fig. 1.c shows how the residual of the policies computed by PS(µ) evolves. By comparing this to Fig. 1.a, it can be observed that the residual and the error are not necessarily correlated: the error can decrease while the residual increases, and a low error does not necessarily involve a low residual. Fig. 1.d shows that RPS(µ) succeeds in decreasing the residual. Again, this is not surprising, as it is the optimized criterion.
Fig. 1.b shows how the error of the policies computed by RPS(µ) evolves. Comparing this to Fig. 1.d, it can be observed that decreasing the residual lowers the error: this is consistent with the bound of Thm. 3. Comparing Figs. 1.a and 1.b, it appears clearly that RPS(µ) is less efficient than PS(µ) at decreasing the error. This might seem obvious, as PS(µ) directly optimizes the criterion of interest. However, when comparing the errors and the residuals for each method, it can be observed that they are not necessarily correlated. Decreasing the residual lowers the error, but one can have a low error with a high residual and vice versa. As explained in Sec. 1, (projected) residual-based methods are prevalent in many reinforcement learning approaches. We consider a policy-based residual rather than a value-based one to ease the comparison, but it is worth studying the reason for such a different behavior. 4.2 Using the ideal distribution The lower the concentrability coefficient $\|\frac{d_{\mu,\pi_*}}{\nu}\|_\infty$ is, the better the bounds in Thm. 1 and 3 are. This coefficient is minimized for ν = dµ,π∗. This is an unrealistic case (π∗ is unknown), but since we work with known MDPs we can compute this quantity (the model being known), for the sake of a complete empirical analysis. Therefore, PS(dµ,π∗) and RPS(dµ,π∗) are compared in Fig. 2. We highlight the fact that the errors and the residuals shown in this figure are measured with respect to the distribution of interest µ, and not the distribution dµ,π∗ used for the optimization. Fig. 2.a shows that PS(dµ,π∗) succeeds in decreasing the error ‖v∗ − vπ‖1,µ. However, comparing Fig. 2.a to Fig. 1.a, there is no significant gain in using ν = dµ,π∗ instead of ν = µ. This suggests that the dependency of the bound in Thm. 1 on the concentrability coefficient is not tight. Fig. 2.c shows how the corresponding residual evolves. Again, there is no strong correlation between the residual and the error. Fig. 2.d shows how the residual ‖T∗vπ − vπ‖1,µ evolves for RPS(dµ,π∗). It is not decreasing, but it is not what is optimized (the residual ‖T∗vπ − vπ‖1,dµ,π∗, not shown, indeed decreases, in a similar fashion to Fig. 1.d). Fig. 2.b shows how the related error evolves. Compared to Fig. 2.a, there is no significant difference. The behavior of the residual is similar for both methods (Figs. 2.c and 2.d). Overall, this suggests that controlling the residual (RPS) allows controlling the error, but that this requires a wise choice for the distribution ν. On the other hand, controlling directly the error (PS) is much less sensitive to this. In other words, this suggests a stronger dependency of the residual approach on the mismatch between the sampling distribution and the discounted state occupancy measure induced by the optimal policy. 4.3 Varying the sampling distribution This experiment is designed to study the effect of the mismatch between the distributions. We sample 100 Garnets G(30, 4, 2), as well as associated feature sets F(8, 3). The distribution of interest is no longer the uniform distribution, but a measure that concentrates on a single starting state of interest s0: µ(s0) = 1. This is an adversarial case, as it implies that $\|\frac{d_{\mu,\pi_*}}{\mu}\|_\infty = \infty$: the branching factor being equal to 2, the occupancy measure dµ,π∗ cannot concentrate on s0. The sampling distribution is defined as a mixture between the distribution of interest and the ideal distribution: for α ∈ [0, 1], να is defined as $\nu_\alpha = (1-\alpha)\mu + \alpha\, d_{\mu,\pi_*}$, as illustrated numerically in the snippet below.
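A quick numerical illustration of the mixture (the occupancy vector below is made up purely for illustration; with α = 1, 0.5, 0.1 the printed maxima are 1, 2 and 10, matching the 1/α value derived next):

```python
import numpy as np

mu = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # interest concentrated on s0
d_star = np.array([0.3, 0.2, 0.2, 0.2, 0.1])    # stand-in for d_{mu,pi*}, with d(s0) < 1
for alpha in (1.0, 0.5, 0.1):
    nu_alpha = (1.0 - alpha) * mu + alpha * d_star
    print(alpha, (d_star / nu_alpha).max())      # evaluates to 1 / alpha
```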
It is straightforward to show that in this case the concentrability coefficient is indeed 1/α (with the convention that 1/0 = ∞): $\left\|\frac{d_{\mu,\pi_*}}{\nu_\alpha}\right\|_\infty = \max\left(\frac{d_{\mu,\pi_*}(s_0)}{(1-\alpha) + \alpha\, d_{\mu,\pi_*}(s_0)}\,;\ \frac{1}{\alpha}\right) = \frac{1}{\alpha}$. For each MDP, the learning (for PS(να) and RPS(να)) is repeated, from the same initial policy, by setting α = 1/k, for k ∈ [1; 25]. Let πt,x be the policy learnt by algorithm x (PS or RPS) at iteration t; the integrated error (resp. integrated residual) is defined as $\frac{1}{T}\sum_{t=1}^{T}\frac{\|v_* - v_{\pi_{t,x}}\|_{1,\mu}}{\|v_*\|_{1,\mu}}$ (resp. $\frac{1}{T}\sum_{t=1}^{T}\|T_* v_{\pi_{t,x}} - v_{\pi_{t,x}}\|_{1,\mu}$). Notice that here again, the integrated error and residual are defined with respect to µ, the distribution of interest, and not να, the sampling distribution used for optimization. We get an integrated error (resp. residual) for each value of α = 1/k, and represent it as a function of $k = \|\frac{d_{\mu,\pi_*}}{\nu_\alpha}\|_\infty$, the concentrability coefficient. Results are presented in Fig. 3, which shows these functions averaged across the 100 randomly generated MDPs (mean ± standard deviation as before; minimum and maximum values are shown in dashed lines). Fig. 3.a shows the integrated error for PS(να). It can be observed that the mismatch between measures has no influence on the efficiency of the algorithm. Fig. 3.b shows the same thing for RPS(να). The integrated error increases greatly as the mismatch between the sampling measure and the ideal one increases (the value to which the error saturates corresponds to no improvement over the initial policy). Comparing both figures, it can be observed that RPS performs as well as PS only when the ideal distribution is used (this corresponds to a concentrability coefficient of 1). Figs. 3.c and 3.d show the integrated residual for each algorithm. It can be observed that RPS consistently achieves a lower residual than PS. Overall, this suggests that using the Bellman residual as a proxy is efficient only if the sampling distribution is close to the ideal one, which is difficult to achieve in general (the ideal distribution dµ,π∗ being unknown). On the other hand, the more direct approach consisting in maximizing the mean value is much more robust to this issue (and can, as a consequence, be considered directly with the distribution of interest). One could argue that the way we optimize the considered objective functions is rather naive (for example, considering a constant learning rate). But this does not change the conclusions of this experimental study, which deals with how the error and the Bellman residual are related and with how the concentrability influences each optimization approach. This point is developed in the appendix. 5 Conclusion The aim of this article was to compare two optimization approaches to reinforcement learning: minimizing a Bellman residual and maximizing the mean value. As said in Sec. 1, Bellman residuals are prevalent in ADP. Notably, value iteration minimizes such a residual using a fixed-point approach and policy iteration minimizes it with a Newton descent. On the other hand, maximizing the mean value (Sec. 2) is prevalent in policy search approaches. As Bellman residual minimization methods are naturally value-based and mean value maximization approaches policy-based, we introduced a policy-based residual minimization algorithm in order to study both optimization problems together. For the introduced residual method, we proved a proxy bound, better than value-based residual minimization.
The different nature of the bounds of Th. 1 and 3 made the comparison difficult, but both involve the same concentrability coefficient, a term often underestimated in RL bounds. Therefore, we compared both approaches empirically on a set of randomly generated Garnets, the study being designed to quantify the influence of this concentrability coefficient. From these experiments, it appears that the Bellman residual is a good proxy for the error (the distance to the optimal value function) only if, luckily, the concentrability coefficient is small for the considered MDP and the distribution of interest, or if one can afford a change of measure for the optimization problem, such that the sampling distribution is close to the ideal one. Regarding this second point, one can change to a measure different from the ideal one, dµ,π∗ (for example, using for ν a uniform distribution when the distribution of interest concentrates on a single state would help), but this is difficult in general (one should know roughly where the optimal policy will lead). Conversely, maximizing the mean value appears to be insensitive to this problem. This suggests that the Bellman residual is generally a bad proxy to policy optimization, and that maximizing the mean value is more likely to result in efficient and robust reinforcement learning algorithms, despite the current lack of deep theoretical analysis. This conclusion might seem obvious, as maximizing the mean value is a more direct approach, but this discussion has never been addressed in the literature, as far as we know, and we think it is important, given the prevalence of (projected) residual minimization in value-based RL.
1. What is the main contribution of the paper in the field of reinforcement learning? 2. What are the strengths of the paper, particularly in its theoretical and empirical analysis? 3. What are the weaknesses of the paper, especially regarding the excitement level of the results and the lack of discussion on performance bounds? 4. Do you have any concerns or suggestions regarding the presentation of the plot graphics?
Review
Review The paper sheds light on the question whether policy search by directly maximizing the mean value or, as a proxy, minimizing the Bellman residual is more effective. Leaving aside implementation and estimation issues, the effectiveness of both objectives is compared theoretically and empirically. The empirical study suggests that maximizing the mean value is superior, as its performance does not deteriorate when the concentrability coefficient becomes high. Overall, the study presented in this paper is well executed and the paper is well written. While the results are certainly not exciting, the authors raise the valid point that both objectives are prevalent in RL and therefore such a study is of interest. Given that the study reveals a mismatch between the theoretical and empirical performance of mean value maximization, I would have liked to see a discussion of whether performance bounds without a dependency on the concentrability coefficient are possible, and what the challenges in proving such a bound are. Are there lower bounds available that have a dependency on related quantities? The font size in the plots is too small. Axis labels and ticks are not readable at all.
NIPS
Title The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM Abstract We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence, among synchronous or asynchronous methods, on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. 1 Introduction Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is heavy tailed. Hence as the number of processors increases, most processors waste most of their time waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor to update the global state of the algorithm and continue, ignoring read and write conflicts whenever they occur. Occasionally one processor will erase the work of another; the hope is that the gain from allowing processors to work at their own paces offsets the loss from a sloppy division of labor. These asynchronous parallel optimization methods can work quite well in practice, but it is difficult to tune their parameters: lock-free code is notoriously hard to debug. For these problems, there is nothing as practical as a good theory, which might explain how to set these parameters so as to guarantee convergence. In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous algorithms for problems of the form $\min_{(x_1,\dots,x_m)\in\mathcal{H}_1\times\dots\times\mathcal{H}_m}\ f(x_1,\dots,x_m) + \sum_{j=1}^m r_j(x_j)$, (1) where f is a continuously differentiable (C1) function with an L-Lipschitz gradient, each rj is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets Hj are Euclidean spaces (i.e., Hj = R^{nj} for some nj ∈ N). This problem class includes many (convex and nonconvex) signal recovery problems, matrix factorization problems, and, more generally, any generalized low rank model [20]. Following terminology from these domains, we view f as a loss function and each rj as a regularizer. For example, f might encode the misfit between the observations and the model, while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity. Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximal-gradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3]. These asynchronous variants depend on the same parameters as the synchronous methods, such as a step size parameter, but also new ones, such as the maximum allowable delay.
Our contribution here is to provide a convergence theory to guide the choice of those parameters within our control (such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous Proximal Alternating Linearized Minimization method, or SAPALM for short. Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in practice has been matched by progress in theory. Guaranteed convergence for these algorithms has been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1]. Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular, for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10]; block coordinate methods for smooth problems with separable, convex constraints [18]; block coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still missing from the theory. We aim to fill this gap. Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a block coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best known rates of convergence, due to [8], in the case where each rj is convex and m = 1: that is, asynchrony carries no theoretical penalty for convergence speed. We test SAPALM on a few example problems and compare to a synchronous implementation, showing a linear speedup. Notation. Let m ∈ N denote the number of coordinate blocks. We let H = H1 × . . . × Hm. For every x ∈ H, each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj → Hj is Lj-Lipschitz continuous; we let $\underline{L} = \min_j\{L_j\} \le \max_j\{L_j\} = \overline{L}$. The number τ ∈ N is the maximum allowable delay. Define the aggregate regularizer r : H → (−∞,∞] as $r(x) = \sum_{j=1}^m r_j(x_j)$. For each j ∈ {1, . . . , m}, y ∈ Hj, and γ > 0, define the proximal operator $\mathrm{prox}_{\gamma r_j}(y) := \operatorname{argmin}_{x_j\in\mathcal{H}_j}\big\{ r_j(x_j) + \tfrac{1}{2\gamma}\|x_j - y\|^2 \big\}$. For convex rj, proxγrj(y) is uniquely defined, but for nonconvex problems it is, in general, a set. We make the mild assumption that for all y ∈ Hj, we have proxγrj(y) ≠ ∅. A slight technicality arises from our ability to choose among multiple elements of proxγrj(y), especially in light of the stochastic nature of SAPALM. Thus, for all y, j and γ > 0, we fix an element $\zeta_j(y,\gamma) \in \mathrm{prox}_{\gamma r_j}(y)$. (2) By [17, Exercise 14.38], we can assume that ζj is measurable, which enables us to reason with expectations wherever they involve ζj. As shorthand, we use proxγrj(y) to denote the (unique) choice ζj(y, γ). For any random variable or vector X, we let $\mathbb{E}_k[X] = \mathbb{E}[X \mid x^k,\dots,x^0,\nu^k,\dots,\nu^0]$ denote the conditional expectation of X with respect to the sigma algebra generated by the history of SAPALM. 2 Algorithm Description Algorithm 1 displays the SAPALM method. We highlight a few features of the algorithm which we discuss in more detail below.
Algorithm 1 SAPALM [Local view]. Input: x ∈ H.
1: All processors in parallel do
2: loop
3: Randomly select a coordinate block j ∈ {1, . . . , m}
4: Read x from shared memory
5: Compute g = ∇jf(x) + νj
6: Choose stepsize γj ∈ R++ (according to Assumption 3)
7: xj ← proxγjrj(xj − γjg) (according to (2))
• Inconsistent iterates. Other processors may write updates to x in the time required to read x from memory. • Coordinate blocks. When the coordinate blocks xj are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update. • Noise. The noise ν ∈ H is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points. Algorithm 1 has an equivalent (mathematical) description, which we present in Algorithm 2, using an iteration counter k which is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates. In Algorithm 1, a processor might not have access to the shared-memory's global state, xk, at iteration k. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate $x^{k-d_k} = (x_1^{k-d_{k,1}},\dots,x_m^{k-d_{k,m}})$, where the delays dk are integers less than τ, and x^l = x^0 when l < 0.
Algorithm 2 SAPALM [Global view]. Input: x0 ∈ H.
1: for k ∈ N do
2: Randomly select a coordinate block jk ∈ {1, . . . , m}
3: Read $x^{k-d_k} = (x_1^{k-d_{k,1}},\dots,x_m^{k-d_{k,m}})$ from shared memory
4: Compute $g^k = \nabla_{j_k} f(x^{k-d_k}) + \nu^k_{j_k}$
5: Choose stepsize $\gamma^k_{j_k} \in \mathbb{R}_{++}$ (according to Assumption 3)
6: for j = 1, . . . , m do
7: if j = jk then
8: $x^{k+1}_{j_k} \leftarrow \mathrm{prox}_{\gamma^k_{j_k} r_{j_k}}(x^k_{j_k} - \gamma^k_{j_k} g^k)$ (according to (2))
9: else
10: $x^{k+1}_j \leftarrow x^k_j$
2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes Assumption 1 (Bounded Delay). There exists some τ ∈ N such that, for all k ∈ N, the sequence of coordinate delays lies within dk ∈ {0, . . . , τ}^m. Assumption 2 (Independence). The indices {jk}k∈N are uniformly distributed and collectively IID. They are independent from the history of the algorithm xk, . . . , x0, νk, . . . , ν0 for all k ∈ N. We employ two possible restrictions on the noise sequence νk and the sequence of allowable stepsizes γkj, all of which lead to different convergence rates: Assumption 3 (Noise Regimes and Stepsizes). Let $\sigma_k^2 := \mathbb{E}_k[\|\nu^k\|^2]$ denote the expected squared norm of the noise, and let a ∈ (1,∞). Assume that $\mathbb{E}_k[\nu^k] = 0$ and that there is a sequence of weights {ck}k∈N ⊆ [1,∞) such that $(\forall k\in\mathbb{N}),\ (\forall j\in\{1,\dots,m\})\quad \gamma_j^k := \frac{1}{a\,c_k\,(L_j + 2\overline{L}\tau m^{-1/2})}$, which we choose using the following two rules, both of which depend on the growth of σk: Summable: $\sum_{k=0}^\infty \sigma_k^2 < \infty \implies c_k \equiv 1$; α-Diminishing (α ∈ (0, 1)): $\sigma_k^2 = O((k+1)^{-\alpha}) \implies c_k = \Theta((k+1)^{1-\alpha})$. More noise, measured by σk, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between Θ(1) and Θ(k^{-(1-α)}) for any α ∈ (0, 1). Larger stepsizes lead to convergence rates of order O(k^{-1}), while smaller ones lead to order O(k^{-α}).
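To make the access pattern of Algorithms 1 and 2 concrete, here is a toy Python sketch of the worker loop (all names are ours; it is a schematic of the lock-free pattern, not a performance-faithful implementation, since CPython's global interpreter lock serializes much of the work that true shared-memory workers would overlap):

```python
import threading
import numpy as np

def sapalm_worker(x, grad_block, prox, stepsize, noise_std, num_iters, rng):
    """One SAPALM worker (cf. Algorithm 1): lock-free block prox-gradient steps.

    x          : list of numpy arrays, one per coordinate block, shared by all workers
    grad_block : grad_block(snapshot, j) -> partial gradient of f for block j
    prox       : prox(j, y, gamma) -> one element of prox_{gamma r_j}(y)
    stepsize   : stepsize(j) -> gamma_j, chosen as in Assumption 3
    """
    m = len(x)
    for _ in range(num_iters):
        j = int(rng.integers(m))                    # random coordinate block
        snapshot = [xi.copy() for xi in x]          # read: possibly inconsistent/delayed
        g = grad_block(snapshot, j)
        if noise_std > 0:                           # optional injected noise nu_j
            g = g + noise_std * rng.standard_normal(g.shape)
        gamma = stepsize(j)
        x[j][:] = prox(j, x[j] - gamma * g, gamma)  # write without locks

def run_sapalm(x, grad_block, prox, stepsize, noise_std=0.0,
               num_workers=4, num_iters=1000):
    """Launch num_workers asynchronous workers sharing the block list x."""
    threads = [threading.Thread(
                   target=sapalm_worker,
                   args=(x, grad_block, prox, stepsize, noise_std,
                         num_iters, np.random.default_rng(seed)))
               for seed in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```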
2.2 Algorithm Features Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors: 1. Read. While other processors are writing to shared memory, read the possibly out-of-sync, delayed coordinates $x_1^{k-d_{k,1}},\dots,x_m^{k-d_{k,m}}$. 2. Compute. Locally, compute the partial gradient $\nabla_{j_k} f(x_1^{k-d_{k,1}},\dots,x_m^{k-d_{k,m}})$. 3. Write. After computing the gradient, replace the jk-th coordinate with $x^{k+1}_{j_k} \in \operatorname{argmin}_y\ r_{j_k}(y) + \langle \nabla_{j_k} f(x^{k-d_k}) + \nu^k_{j_k},\, y - x^k_{j_k}\rangle + \frac{1}{2\gamma^k_{j_k}}\|y - x^k_{j_k}\|^2$. Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states $x_1^{k-d_{k,1}},\dots,x_m^{k-d_{k,m}}$ need never have simultaneously existed in memory. Although we write the method with a global counter k, the asynchronous processors need not be aware of it; and the requirement that the delays dk remain bounded by τ does not demand coordination, but rather serves only to define τ. What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, νk, captures three phenomena: 1. Computational Error. Noise due to random computational error. 2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7]. 3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients. Of course, the noise model also captures any combination of the above phenomena. The last one is, perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic gradients can be computed faster than their batch counterparts, allowing more frequent updates. 2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method In Algorithm 1, any stochastic estimator $\nabla f(x^{k-d_k};\xi)$ of the gradient may be used, as long as $\mathbb{E}_k[\nabla f(x^{k-d_k};\xi)] = \nabla f(x^{k-d_k})$ and $\mathbb{E}_k[\|\nabla f(x^{k-d_k};\xi) - \nabla f(x^{k-d_k})\|^2] \le \sigma^2$. In particular, if Problem (1) takes the form $\min_{x\in\mathcal{H}}\ \mathbb{E}_\xi[f(x_1,\dots,x_m;\xi)] + \frac{1}{m}\sum_{j=1}^m r_j(x_j)$, then, in Algorithm 2, the stochastic mini-batch estimator $g^k = m_k^{-1}\sum_{i=1}^{m_k}\nabla f(x^{k-d_k};\xi_i)$, where the ξi are IID, may be used in place of $\nabla f(x^{k-d_k}) + \nu^k$. A quick calculation shows that $\mathbb{E}_k[\|g^k - \nabla f(x^{k-d_k})\|^2] = O(m_k^{-1})$. Thus, any increasing batch size $m_k = \Omega((k+1)^{\alpha})$, with α ∈ (0, 1), conforms to Assumption 3. When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for analogous, synchronous algorithms. 3 Convergence Theorem Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity $S_k := \mathbb{E}\left[\sum_{j=1}^m \left\|\frac{1}{\gamma_j^k}(w_j^k - x_j^k) + \nu_j^k\right\|^2\right]$, where $(\forall j\in\{1,\dots,m\})\ \ w_j^k = \mathrm{prox}_{\gamma_j^k r_j}\big(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\big)$. (3) A reduction to the case r ≡ 0 and dk = 0 reveals that $w_j^k - x_j^k + \gamma_j^k\nu_j^k = -\gamma_j^k\nabla_j f(x^k)$ and, hence, $S_k = \mathbb{E}[\|\nabla f(x^k)\|^2]$. More generally, $w_j^k - x_j^k + \gamma_j^k\nu_j^k \in -\gamma_j^k\big(\partial_L r_j(w_j^k) + \nabla_j f(x^{k-d_k})\big)$, where ∂Lrj is the limiting subdifferential of rj [17] which, if rj is convex, reduces to the standard convex subdifferential familiar from [14].
A messy but straightforward calculation shows that our convergence rates for Sk can be converted to convergence rates for elements of $\partial_L r(w^k) + \nabla f(w^k)$. We present our main convergence theorem now and defer the proof to Section 4. Theorem 1 (SAPALM Convergence Rates). Let {xk}k∈N ⊆ H be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3, the following convergence rates hold: for all T ∈ N, if {νk}k∈N is 1. Summable, then $\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2})}{T+1}\right)$; 2. α-Diminishing, then $\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\Big(\big(m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)\big)(T+1)^{-\alpha}\Big)$; where, for all T ∈ N, PT is the distribution on {0, . . . , T} such that $P_T(X = k) \propto c_k^{-1}$. Effects of Delay and Linear Speedups. The $m^{-1/2}$ term in the convergence rates presented in Theorem 1 prevents the delay τ from dominating our rates of convergence. In particular, as long as $\tau = O(\sqrt{m})$, the convergence rates in the synchronous (τ = 0) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with p processors will converge nearly p times faster than PALM, since the iteration counter will be updated p times as often. As a rule of thumb, τ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as $O(\sqrt{m})$ processors. 3.1 The Asynchronous Stochastic Block Gradient Method If the regularizer r is identically zero, then the noise νk need not vanish in the limit. The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof. Theorem 2 (SAPALM Convergence Rates (r ≡ 0)). Let {xk}k∈N ⊆ H be the SAPALM sequence created by Algorithm 2 in the case that r ≡ 0. If, for all k ∈ N, $\{\mathbb{E}_k[\|\nu^k\|^2]\}_{k\in\mathbb{N}}$ is bounded (not necessarily diminishing) and $(\exists a\in(1,\infty)),\ (\forall k\in\mathbb{N}),\ (\forall j\in\{1,\dots,m\})\quad \gamma_j^k := \frac{1}{a\sqrt{k}\,(L_j + 2M\tau m^{-1/2})}$, then for all T ∈ N we have $\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)}{\sqrt{T+1}}\right)$, where PT is the distribution on {0, . . . , T} such that $P_T(X = k) \propto k^{-1/2}$. 4 Convergence Analysis 4.1 The Asynchronous Lyapunov Function Key to the convergence of SAPALM is the following Lyapunov function, defined on $\mathcal{H}^{1+\tau}$, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps: $(\forall x(0), x(1),\dots,x(\tau)\in\mathcal{H})\quad \Phi(x(0),x(1),\dots,x(\tau)) = f(x(0)) + r(x(0)) + \frac{\overline{L}}{2\sqrt{m}}\sum_{h=1}^{\tau}(\tau - h + 1)\|x(h) - x(h-1)\|^2$. This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material. Lemma 1 (Lyapunov Function Supermartingale Inequality). For all k ∈ N, let $z^k = (x^k,\dots,x^{k-\tau})\in\mathcal{H}^{1+\tau}$. Then for all ε > 0, we have $\mathbb{E}_k[\Phi(z^{k+1})] \le \Phi(z^k) - \frac{1}{2m}\sum_{j=1}^m\left(\frac{1}{\gamma_j^k} - (1+\epsilon)\Big(L_j + \frac{2\overline{L}\tau}{m^{1/2}}\Big)\right)\mathbb{E}_k\big[\|w_j^k - x_j^k + \gamma_j^k\nu_j^k\|^2\big] + \sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}_k[\|\nu_j^k\|^2]}{2m}$, where for all j ∈ {1, . . . , m} we have $w_j^k = \mathrm{prox}_{\gamma_j^k r_j}\big(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\big)$. In particular, for σk = 0, we can take ε = 0 and assume the last line is zero. Notice that if σk = ε = 0 and γkj is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing.
If σk is nonzero, the factor ε will be used in concert with the stepsize γkj to ensure that noise does not cause the algorithm to diverge. 4.2 Proof of Theorem 1 For either noise regime, we define, for all k ∈ N and j ∈ {1, . . . , m}, the factor $\epsilon := 2^{-1}(a-1)$. With the assumed choice of γkj and ε, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with $A_j^k := w_j^k - x_j^k + \gamma_j^k\nu_j^k$, we have $\mathbb{E}[\Phi(z^{k+1})] \le \mathbb{E}[\Phi(z^k)] - \mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m \frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{a c_k}\right)\|A_j^k\|^2\right] + \sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}\big[\mathbb{E}_k[\|\nu_j^k\|^2]\big]}{2m}$. (4) Two upper bounds follow from the definition of γkj, the lower bound ck ≥ 1, and the straightforward inequalities $(ac_k)^{-1}(\underline{L} + 2\overline{L}\tau m^{-1/2})^{-1} \ge \gamma_j^k \ge (ac_k)^{-1}(\overline{L} + 2\overline{L}\tau m^{-1/2})^{-1}$: $\frac{1}{c_k}S_k \le \frac{2ma(\overline{L} + 2\overline{L}\tau m^{-1/2})}{1-(1+\epsilon)a^{-1}}\ \mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m \frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{ac_k}\right)\|A_j^k\|^2\right]$ and $\sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}_k[\|\nu_j^k\|^2]}{2m} \le \frac{\big(1 + (ac_k)^{-1}(1+\epsilon^{-1})\big)(\sigma_k^2/c_k)}{2a(\overline{L} + 2\overline{L}\tau m^{-1/2})}$. Now rearrange (4), use $\mathbb{E}[\Phi(z^{k+1})] \ge \inf_{x\in\mathcal{H}}\{f(x) + r(x)\}$ and $\mathbb{E}[\Phi(z^0)] = f(x^0) + r(x^0)$, and sum (4) over k to get $\frac{1}{\sum_{k=0}^T c_k^{-1}}\sum_{k=0}^T \frac{1}{c_k}S_k \le \frac{f(x^0) + r(x^0) - \inf_{x\in\mathcal{H}}\{f(x) + r(x)\} + \sum_{k=0}^T \frac{(1+(ac_k)^{-1}(1+\epsilon^{-1}))(\sigma_k^2/c_k)}{2a(\overline{L}+2\overline{L}\tau m^{-1/2})}}{\frac{1-(1+\epsilon)a^{-1}}{2ma(\overline{L}+2\overline{L}\tau m^{-1/2})}\sum_{k=0}^T c_k^{-1}}$. The left-hand side of this inequality is bounded from below by $\min_{k=0,\dots,T} S_k$ and is precisely the term $\mathbb{E}_{k\sim P_T}[S_k]$. What remains to be shown is an upper bound on the right-hand side, which we will now call RT. If the noise is summable, then ck ≡ 1, so $\sum_{k=0}^T c_k^{-1} = T+1$ and $\sum_{k=0}^T \sigma_k^2/c_k < \infty$, which implies that $R_T = O\big(m(\overline{L} + 2\overline{L}\tau m^{-1/2})(T+1)^{-1}\big)$. If the noise is α-diminishing, then $c_k = \Theta(k^{1-\alpha})$, so $\sum_{k=0}^T c_k^{-1} = \Theta((T+1)^{\alpha})$ and, because $\sigma_k^2/c_k = O(k^{-1})$, there exists a B > 0 such that $\sum_{k=0}^T \sigma_k^2/c_k \le \sum_{k=0}^T B k^{-1} = O(\log(T+1))$, which implies that $R_T = O\big((m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1))(T+1)^{-\alpha}\big)$. 5 Numerical Experiments In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. We confirm two properties: 1) SAPALM converges to values nearly as low as PALM given the same number of iterations; 2) SAPALM exhibits a near-linear speedup as the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10 cores per socket. We use two different nonconvex matrix factorization problems to exhibit these properties, to which we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise. For each of our examples, we generate a matrix A ∈ R^{n×n} with iid standard normal entries, where n = 2000. Although SAPALM is intended for use on much larger problems, using a small problem size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony affects convergence. 1. Sparse PCA with Asynchronous Block Coordinate Updates. We solve $\min_{X,Y}\ \frac{1}{2}\|A - X^\top Y\|_F^2 + \lambda\|X\|_1 + \lambda\|Y\|_1$, (5) where X ∈ R^{d×n} and Y ∈ R^{d×n} for some d ∈ N. We solve this problem using SAPALM with no noise, νk = 0. 2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We solve $\min_{X,Y}\ \frac{1}{2}\|A - X^\top Y\|_F^2 + \lambda(\|X\|_{\mathrm{Firm}} + \|Y\|_{\mathrm{Firm}}) + \frac{\mu}{2}(\|X\|_F^2 + \|Y\|_F^2)$, (6) where X ∈ R^{d×n}, Y ∈ R^{d×n}, and ‖·‖Firm is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3. In both experiments, X and Y are treated as coordinate blocks.
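For problem (5), the per-block step is explicit: the prox of the ℓ1 regularizer is elementwise soft-thresholding, and the partial gradients follow from f(X, Y) = ½‖A − XᵀY‖²F. A minimal sketch of one such step (the single-step packaging and names are ours):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_pca_block_step(A, X, Y, block, gamma, lam):
    """One prox-gradient step on problem (5) for block 'X' or 'Y'.

    With E = X^T Y - A:  grad_X f = Y E^T  and  grad_Y f = X E.
    """
    E = X.T @ Y - A
    if block == 'X':
        return soft_threshold(X - gamma * (Y @ E.T), gamma * lam)
    else:
        return soft_threshold(Y - gamma * (X @ E), gamma * lam)
```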
Notice that for this problem, the SAPALM update decouples over the entries of each coordinate block. Each worker updates its coordinate block (say, X) by cycling through the coordinates of X and updating each in turn, restarting at a random coordinate after each cycle. In Figures (1a) and (1c), we see objective function values plotted by iteration. By this metric, SAPALM performs as well as PALM, its single-threaded variant; for the second problem, the curves for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But SAPALM can add additional workers to increment the iteration counter more quickly, as seen in Figure 1b, allowing SAPALM to outperform its single-threaded variant. We measure the speedup Sk(p) of SAPALM by the relative time for one worker, versus p workers, to produce k iterates: $S_k(p) = \frac{T_k(1)}{T_k(p)}$, (7) where Tk(p) is the time to produce k iterates using p workers. Table 2 shows that SAPALM achieves near-linear speedup for a range of variable sizes d. (Dashes denote experiments not run.)
Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.
threads | d=10   | d=20    | d=100
1       | 1      | 1       | 1
2       | 1.9722 | 1.9812  | –
4       | 3.7623 | 3.7635  | –
8       | 7.1444 | 7.3315  | 7.3719
16      | 13.376 | 14.5322 | 14.743
Deviations from linearity can be attributed to a breakdown in the abstraction of a "shared memory" computer: as each worker modifies the "shared" variables X and Y, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration. For small d, write conflicts are more likely; for large d, communication to maintain cache coherency dominates. 6 Discussion A few straightforward generalizations of our work are possible; we omit them to simplify notation. Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a maximum number of iterations for which we plan to run SAPALM and adjusting the ck factors accordingly, as in [14, Equation (3.2.10)]. Cluster points of {xk}k∈N. Using the strategy employed in [5], it is possible to show that all cluster points of {xk}k∈N are (almost surely) stationary points of f + r. Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow Lj to vary: we can assume Lj(x1, . . . , xj−1, ·, xj+1, . . . , xm)-Lipschitz continuity of each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj → Hj, for every x ∈ H. 7 Conclusion This paper presented SAPALM, the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for SAPALM, and show that with the parameters suggested by this theory, SAPALM achieves a near-linear speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These results give specific guidance to ensure fast convergence of practical asynchronous methods on a large class of important, nonconvex optimization problems, and pave the way towards a deeper understanding of the stability of these methods in the presence of noise.
1. How does the reviewer assess the novelty and significance of the proposed block coordinate optimization algorithm? 2. What are the strengths and weaknesses of the paper regarding its theoretical analysis and empirical results? 3. Do you have any questions or concerns about the convergence guarantee of the algorithm? 4. How does the reviewer evaluate the applicability and effectiveness of the algorithm in handling nonconvex regularization and non-unique proximal operators? 5. Are there any suggestions for improving the clarity and readability of the paper's content?
Review
Review The paper presents a block coordinate optimization algorithm using asynchronous parallel processing. Specifically, the proposed method is applicable to nonconvex nonsmooth problems with cost functions obeying a specific structure (Eq 1). The algorithm is easy to implement and is accompanied by a convergence analysis. The proposed algorithm achieves near-linear speedup over similar algorithms working serially. The claims are supported by empirical results on two matrix factorization problems. The presented algorithm and its accompanying convergence guarantee are indeed very interesting. Nevertheless, there are a few places in the paper that are not quite clear to me: 1. Is the algorithm guaranteed to converge to a "local minimum"? The convergence analysis in Section 3 relies on the definition in Eq (3). The latter merely guarantees convergence to a "stationary point". However, throughout the paper, the authors mention that the addition of the noise v_k helps the algorithm escape from saddle points (Line 116, Item 2). It seems there is an argument by the authors about reaching a local minimum, which the analysis (as currently presented) does not provide. Please clarify. 2. Nonconvex Regularization: The authors state that the proposed algorithm/analysis applies to general nonconvex regularizers. The latter may give rise to non-unique proximal operators (Line 63). Given that handling general nonconvexity is stressed as a key property of the algorithm, it would be nice if handling such non-unique scenarios were explained in greater detail. Unfortunately, it seems the nonconvex experiment (Firm Norm) is also limited to the special case where the proximal operator can be defined uniquely (despite nonconvexity). In general, however, how should the algorithm choose among non-unique answers? Does a random choice uniformly across the set of proximal operators work? Or could it cause a problem if the randomly chosen elements between two successive iterates happen to be far from each other? 3. It is not stated what the different colors in Figure 1 represent (neither in the caption of Figure 1 nor in the main text). I suppose these are the number of threads, but this should be mentioned explicitly.
NIPS
Title The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM Abstract We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among synchronous or asynchronous methods — on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. 1 Introduction Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is heavy tailed. Hence as the number of processors increases, most processors waste most of their time waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor to update the global state of the algorithm and continue, ignoring read and write conflicts whenever they occur. Occasionally one processor will erase the work of another; the hope is that the gain from allowing processors to work at their own paces offsets the loss from a sloppy division of labor. These asynchronous parallel optimization methods can work quite well in practice, but it is difficult to tune their parameters: lock-free code is notoriously hard to debug. For these problems, there is nothing as practical as a good theory, which might explain how to set these parameters so as to guarantee convergence. In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous algorithms for problems of the form minimize (x1,...,xm)∈H1×...×Hm f(x1, . . . , xm) + m∑ j=1 rj(xj), (1) where f is a continuously differentiable (C1) function with an L-Lipschitz gradient, each rj is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets Hj are Euclidean spaces (i.e.,Hj = Rnj for some nj ∈ N). This problem class includes many (convex and nonconvex) signal recovery problems, matrix factorization problems, and, more generally, any generalized low rank model [20]. Following terminology from these domains, we view f as a loss function and each rj as a regularizer. For example, f might encode the misfit between the observations and the model, while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximalgradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3]. These asynchronous variants depend on the same parameters as the synchronous methods, such as a step size parameter, but also new ones, such as the maximum allowable delay. 
Our contribution here is to provide a convergence theory to guide the choice of those parameters within our control (such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous Proximal Alternating Linearized Minimization method, or SAPALM for short.

Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in practice has been matched by progress in theory. Guaranteed convergence for these algorithms has been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1]. Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular, for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10]; block coordinate methods for smooth problems with separable, convex constraints [18]; block coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still missing from the theory. We aim to fill this gap.

Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a block coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best known rates of convergence, due to [8] in the case where each $r_j$ is convex and $m = 1$: that is, asynchrony carries no theoretical penalty for convergence speed. We test SAPALM on a few example problems and compare to a synchronous implementation, showing a linear speedup.

Notation. Let $m \in \mathbb{N}$ denote the number of coordinate blocks. We let $\mathcal{H} = \mathcal{H}_1 \times \cdots \times \mathcal{H}_m$. For every $x \in \mathcal{H}$, each partial gradient $\nabla_j f(x_1,\ldots,x_{j-1},\cdot,x_{j+1},\ldots,x_m) : \mathcal{H}_j \to \mathcal{H}_j$ is $L_j$-Lipschitz continuous; we let $\underline{L} = \min_j\{L_j\} \le \max_j\{L_j\} = \overline{L}$. The number $\tau \in \mathbb{N}$ is the maximum allowable delay. Define the aggregate regularizer $r : \mathcal{H} \to (-\infty,\infty]$ as $r(x) = \sum_{j=1}^m r_j(x_j)$. For each $j \in \{1,\ldots,m\}$, $y \in \mathcal{H}_j$, and $\gamma > 0$, define the proximal operator

$$\operatorname{prox}_{\gamma r_j}(y) := \underset{x_j \in \mathcal{H}_j}{\operatorname{argmin}} \left\{ r_j(x_j) + \frac{1}{2\gamma}\|x_j - y\|^2 \right\}.$$

For convex $r_j$, $\operatorname{prox}_{\gamma r_j}(y)$ is uniquely defined, but for nonconvex problems it is, in general, a set. We make the mild assumption that for all $y \in \mathcal{H}_j$, we have $\operatorname{prox}_{\gamma r_j}(y) \neq \emptyset$. A slight technicality arises from our ability to choose among multiple elements of $\operatorname{prox}_{\gamma r_j}(y)$, especially in light of the stochastic nature of SAPALM. Thus, for all $y$, $j$, and $\gamma > 0$, we fix an element

$$\zeta_j(y,\gamma) \in \operatorname{prox}_{\gamma r_j}(y). \qquad (2)$$

By [17, Exercise 14.38], we can assume that $\zeta_j$ is measurable, which enables us to reason with expectations wherever they involve $\zeta_j$. As shorthand, we use $\operatorname{prox}_{\gamma r_j}(y)$ to denote the (unique) choice $\zeta_j(y,\gamma)$. For any random variable or vector $X$, we let $\mathbb{E}_k[X] = \mathbb{E}[X \mid x^k,\ldots,x^0,\nu^k,\ldots,\nu^0]$ denote the conditional expectation of $X$ with respect to the sigma algebra generated by the history of SAPALM.
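As a concrete illustration of (2), here is a hedged Python sketch: soft thresholding realizes the unique prox of the convex l1 penalty, while for the nonconvex l0 penalty (our example, not the paper's) the prox is set-valued and the code fixes one measurable selection:

```python
import numpy as np

def prox_l1(y, gamma, lam):
    # prox_{gamma * lam * ||.||_1}(y): soft thresholding; unique since l1 is convex.
    return np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

def prox_l0_selection(y, gamma, lam):
    # The l0 penalty lam * ||.||_0 is nonconvex, so its prox is set-valued:
    # entries with |y_i| = sqrt(2 * gamma * lam) admit two minimizers (0 and y_i).
    # This function is one fixed selection zeta(y, gamma): ties are sent to 0.
    out = y.copy()
    out[np.abs(y) <= np.sqrt(2.0 * gamma * lam)] = 0.0
    return out
```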
2 Algorithm Description
Algorithm 1 displays the SAPALM method. We highlight a few features of the algorithm which we discuss in more detail below.

Algorithm 1 SAPALM [Local view]
Input: $x \in \mathcal{H}$
1: All processors, in parallel, do
2: loop
3:   Randomly select a coordinate block $j \in \{1,\ldots,m\}$
4:   Read $x$ from shared memory
5:   Compute $g = \nabla_j f(x) + \nu_j$
6:   Choose stepsize $\gamma_j \in \mathbb{R}_{++}$   (according to Assumption 3)
7:   $x_j \leftarrow \operatorname{prox}_{\gamma_j r_j}(x_j - \gamma_j g)$   (according to (2))

• Inconsistent iterates. Other processors may write updates to $x$ in the time required to read $x$ from memory.
• Coordinate blocks. When the coordinate blocks $x_j$ are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update.
• Noise. The noise $\nu \in \mathcal{H}$ is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points.

Algorithm 1 has an equivalent (mathematical) description, which we present in Algorithm 2, using an iteration counter $k$ that is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates. In Algorithm 1, a processor might not have access to the shared-memory's global state, $x^k$, at iteration $k$. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate $x^{k-d_k} = (x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$, where the delays $d_k$ are integers less than $\tau$, and $x^l = x^0$ when $l < 0$.

Algorithm 2 SAPALM [Global view]
Input: $x^0 \in \mathcal{H}$
1: for $k \in \mathbb{N}$ do
2:   Randomly select a coordinate block $j_k \in \{1,\ldots,m\}$
3:   Read $x^{k-d_k} = (x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$ from shared memory
4:   Compute $g^k = \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k$
5:   Choose stepsize $\gamma_{j_k}^k \in \mathbb{R}_{++}$   (according to Assumption 3)
6:   for $j = 1,\ldots,m$ do
7:     if $j = j_k$ then
8:       $x_{j_k}^{k+1} \leftarrow \operatorname{prox}_{\gamma_{j_k}^k r_{j_k}}(x_{j_k}^k - \gamma_{j_k}^k g^k)$   (according to (2))
9:     else
10:      $x_j^{k+1} \leftarrow x_j^k$

2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes
Assumption 1 (Bounded Delay). There exists some $\tau \in \mathbb{N}$ such that, for all $k \in \mathbb{N}$, the sequence of coordinate delays lies within $d_k \in \{0,\ldots,\tau\}^m$.
Assumption 2 (Independence). The indices $\{j_k\}_{k\in\mathbb{N}}$ are uniformly distributed and collectively IID. They are independent from the history of the algorithm $x^k,\ldots,x^0,\nu^k,\ldots,\nu^0$ for all $k \in \mathbb{N}$.
We employ two possible restrictions on the noise sequence $\nu^k$ and the sequence of allowable stepsizes $\gamma_j^k$, both of which lead to different convergence rates:
Assumption 3 (Noise Regimes and Stepsizes). Let $\sigma_k^2 := \mathbb{E}_k[\|\nu^k\|^2]$ denote the expected squared norm of the noise, and let $a \in (1,\infty)$. Assume that $\mathbb{E}_k[\nu^k] = 0$ and that there is a sequence of weights $\{c_k\}_{k\in\mathbb{N}} \subseteq [1,\infty)$ such that

$$(\forall k \in \mathbb{N}),\ (\forall j \in \{1,\ldots,m\}) \qquad \gamma_j^k := \frac{1}{a c_k (L_j + 2L\tau m^{-1/2})},$$

which we choose using the following two rules, both of which depend on the growth of $\sigma_k$:
Summable. $\sum_{k=0}^\infty \sigma_k^2 < \infty \implies c_k \equiv 1$;
$\alpha$-Diminishing. ($\alpha \in (0,1)$) $\sigma_k^2 = O((k+1)^{-\alpha}) \implies c_k = \Theta((k+1)^{1-\alpha})$.
More noise, measured by $\sigma_k$, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between $\Theta(1)$ and $\Theta(k^{1-\alpha})$ for any $\alpha \in (0,1)$. Larger stepsizes lead to convergence rates of order $O(k^{-1})$, while smaller ones lead to order $O(k^{-\alpha})$.
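A minimal lock-free Python sketch of Algorithm 1 follows, assuming the sparse-PCA loss of Section 5, zero injected noise, $c_k \equiv 1$, and the block constant $L_j$ as a crude stand-in for the global Lipschitz constant; it is illustrative only (CPython's GIL serializes the numpy writes, so it demonstrates the access pattern, not true parallel speedup):

```python
import threading
import numpy as np

n, d, m, tau, lam, a = 200, 10, 2, 2, 0.1, 2.0
A = np.random.default_rng(0).standard_normal((n, n))
blocks = [np.random.default_rng(1).standard_normal((d, n)),
          np.random.default_rng(2).standard_normal((d, n))]  # X, Y

def soft(V, t):
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def worker(num_updates):
    rng = np.random.default_rng()            # per-thread RNG
    for _ in range(num_updates):
        j = int(rng.integers(m))             # step 3: random coordinate block
        X, Y = blocks                        # step 4: lock-free (possibly stale) read
        R = X.T @ Y - A
        g = Y @ R.T if j == 0 else X @ R     # step 5: partial gradient, nu = 0
        L_j = np.linalg.norm(blocks[1 - j], 2) ** 2        # spectral-norm estimate
        gamma = 1.0 / (a * (L_j + 2.0 * L_j * tau / np.sqrt(m)))  # Assumption 3, c_k = 1
        blocks[j] = soft(blocks[j] - gamma * g, gamma * lam)      # step 7: prox write

threads = [threading.Thread(target=worker, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```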
2.2 Algorithm Features
Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors:
1. Read. While other processors are writing to shared memory, read the possibly out-of-sync, delayed coordinates $x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}}$.
2. Compute. Locally, compute the partial gradient $\nabla_{j_k} f(x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$.
3. Write. After computing the gradient, replace the $j_k$th coordinate with

$$x_{j_k}^{k+1} \in \underset{y}{\operatorname{argmin}}\; r_{j_k}(y) + \langle \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k,\ y - x_{j_k}^k \rangle + \frac{1}{2\gamma_{j_k}^k}\|y - x_{j_k}^k\|^2.$$

Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states $x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}}$ need never have simultaneously existed in memory. Although we write the method with a global counter $k$, the asynchronous processors need not be aware of it; and the requirement that the delays $d_k$ remain bounded by $\tau$ does not demand coordination, but rather serves only to define $\tau$.

What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, $\nu^k$, captures three phenomena:
1. Computational Error. Noise due to random computational error.
2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7].
3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients.
Of course, the noise model also captures any combination of the above phenomena. The last one is, perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic gradients can be computed faster than their batch counterparts, allowing more frequent updates.

2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method
In Algorithm 1, any stochastic estimator $\nabla f(x^{k-d_k};\xi)$ of the gradient may be used, as long as $\mathbb{E}_k[\nabla f(x^{k-d_k};\xi)] = \nabla f(x^{k-d_k})$ and $\mathbb{E}_k[\|\nabla f(x^{k-d_k};\xi) - \nabla f(x^{k-d_k})\|^2] \le \sigma^2$. In particular, if Problem (1) takes the form

$$\underset{x\in\mathcal{H}}{\text{minimize}}\quad \mathbb{E}_\xi[f(x_1,\ldots,x_m;\xi)] + \frac{1}{m}\sum_{j=1}^m r_j(x_j),$$

then, in Algorithm 2, the stochastic mini-batch estimator $g^k = m_k^{-1}\sum_{i=1}^{m_k} \nabla f(x^{k-d_k};\xi_i)$, where the $\xi_i$ are IID, may be used in place of $\nabla f(x^{k-d_k}) + \nu^k$. A quick calculation shows that $\mathbb{E}_k[\|g^k - \nabla f(x^{k-d_k})\|^2] = O(m_k^{-1})$. Thus, any increasing batch size $m_k = \Omega((k+1)^{\alpha})$, with $\alpha \in (0,1)$, conforms to Assumption 3. When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for analogous, synchronous algorithms.

3 Convergence Theorem
Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity

$$S_k := \mathbb{E}\left[\sum_{j=1}^m \left\|\frac{1}{\gamma_j^k}(w_j^k - x_j^k) + \nu^k\right\|^2\right], \quad \text{where } (\forall j \in \{1,\ldots,m\})\ \ w_j^k = \operatorname{prox}_{\gamma_j^k r_j}\!\big(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\big). \qquad (3)$$

A reduction to the case $r \equiv 0$ and $d_k = 0$ reveals that $w_j^k - x_j^k + \gamma_j^k \nu_j^k = -\gamma_j^k \nabla_j f(x^k)$ and, hence, $S_k = \mathbb{E}[\|\nabla f(x^k)\|^2]$. More generally, $w_j^k - x_j^k + \gamma_j^k \nu_j^k \in -\gamma_j^k(\partial_L r_j(w_j^k) + \nabla_j f(x^{k-d_k}))$, where $\partial_L r_j$ is the limiting subdifferential of $r_j$ [17] which, if $r_j$ is convex, reduces to the standard convex subdifferential familiar from [14].
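Before moving to the analysis, a short Python sketch of the Section 2.3 mini-batch estimator and the Assumption 3 stepsize schedule; `grad_sample`, the constants, and $\alpha = 0.5$ are illustrative assumptions:

```python
import numpy as np

def minibatch_gradient(grad_sample, k, alpha=0.5):
    # Batch size m_k = ceil((k+1)^alpha) grows with k, so the estimator's
    # variance decays like O((k+1)^{-alpha}): the alpha-diminishing regime.
    m_k = int(np.ceil((k + 1) ** alpha))
    return sum(grad_sample() for _ in range(m_k)) / m_k

def stepsize(k, L_j, L, tau, m, a=2.0, alpha=0.5):
    # Assumption 3 schedule with alpha-diminishing weights c_k = (k+1)^{1-alpha}.
    c_k = (k + 1) ** (1.0 - alpha)
    return 1.0 / (a * c_k * (L_j + 2.0 * L * tau / np.sqrt(m)))
```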
A messy but straightforward calculation shows that our convergence rates for $S_k$ can be converted to convergence rates for elements of $\partial_L r(w^k) + \nabla f(w^k)$. We present our main convergence theorem now and defer the proof to Section 4.

Theorem 1 (SAPALM Convergence Rates). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq \mathcal{H}$ be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3, the following convergence rates hold: for all $T \in \mathbb{N}$, if $\{\nu^k\}_{k\in\mathbb{N}}$ is
1. Summable, then
$$\min_{k=0,\ldots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\left(\frac{m(L + 2L\tau m^{-1/2})}{T+1}\right);$$
2. $\alpha$-Diminishing, then
$$\min_{k=0,\ldots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\left(\big(m(L + 2L\tau m^{-1/2}) + m\log(T+1)\big)(T+1)^{-\alpha}\right);$$
where, for all $T \in \mathbb{N}$, $P_T$ is the distribution on $\{0,\ldots,T\}$ such that $P_T(X = k) \propto c_k^{-1}$.

Effects of Delay and Linear Speedups. The $m^{-1/2}$ term in the convergence rates presented in Theorem 1 prevents the delay $\tau$ from dominating our rates of convergence. In particular, as long as $\tau = O(\sqrt{m})$, the convergence rates in the synchronous ($\tau = 0$) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with $p$ processors will converge nearly $p$ times faster than PALM, since the iteration counter will be updated $p$ times as often. As a rule of thumb, $\tau$ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as $O(\sqrt{m})$ processors.

3.1 The Asynchronous Stochastic Block Gradient Method
If the regularizer $r$ is identically zero, then the noise $\nu^k$ need not vanish in the limit. The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof.

Theorem 2 (SAPALM Convergence Rates ($r \equiv 0$)). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq \mathcal{H}$ be the SAPALM sequence created by Algorithm 2 in the case that $r \equiv 0$. If, for all $k \in \mathbb{N}$, $\{\mathbb{E}_k[\|\nu^k\|^2]\}_{k\in\mathbb{N}}$ is bounded (not necessarily diminishing) and
$$(\exists a \in (1,\infty)),\ (\forall k \in \mathbb{N}),\ (\forall j \in \{1,\ldots,m\}) \qquad \gamma_j^k := \frac{1}{a\sqrt{k}\,(L_j + 2M\tau m^{-1/2})},$$
then for all $T \in \mathbb{N}$ we have
$$\min_{k=0,\ldots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\left(\frac{m(L + 2L\tau m^{-1/2}) + m\log(T+1)}{\sqrt{T+1}}\right),$$
where $P_T$ is the distribution on $\{0,\ldots,T\}$ such that $P_T(X = k) \propto k^{-1/2}$.

4 Convergence Analysis
4.1 The Asynchronous Lyapunov Function
Key to the convergence of SAPALM is the following Lyapunov function, defined on $\mathcal{H}^{1+\tau}$, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps: for all $x(0), x(1), \ldots, x(\tau) \in \mathcal{H}$,

$$\Phi(x(0), x(1), \ldots, x(\tau)) = f(x(0)) + r(x(0)) + \frac{L}{2\sqrt{m}}\sum_{h=1}^{\tau}(\tau - h + 1)\,\|x(h) - x(h-1)\|^2.$$

This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material.

Lemma 1 (Lyapunov Function Supermartingale Inequality). For all $k \in \mathbb{N}$, let $z^k = (x^k, \ldots, x^{k-\tau}) \in \mathcal{H}^{1+\tau}$. Then for all $\epsilon > 0$, we have
$$\mathbb{E}_k[\Phi(z^{k+1})] \le \Phi(z^k) - \frac{1}{2m}\sum_{j=1}^m\left(\frac{1}{\gamma_j^k} - (1+\epsilon)\left(L_j + \frac{2L\tau}{m^{1/2}}\right)\right)\mathbb{E}_k\big[\|w_j^k - x_j^k + \gamma_j^k\nu_j^k\|^2\big] + \sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2L\tau m^{-1/2})\big)\,\mathbb{E}_k[\|\nu_j^k\|^2]}{2m},$$
where for all $j \in \{1,\ldots,m\}$ we have $w_j^k = \operatorname{prox}_{\gamma_j^k r_j}(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k))$. In particular, for $\sigma_k = 0$, we can take $\epsilon = 0$ and assume the last line is zero.

Notice that if $\sigma_k = \epsilon = 0$ and $\gamma_j^k$ is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing. If $\sigma_k$ is nonzero, the factor $\epsilon$ will be used in concert with the stepsize $\gamma_j^k$ to ensure that noise does not cause the algorithm to diverge.
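To make the rates concrete, the following LaTeX fragment works out the immediate specialization of Theorem 1 (our own worked instance, not a statement from the paper) to a single block with no delay and summable noise:

```latex
% One block (m = 1), no delay (tau = 0), summable noise (c_k = 1):
% Theorem 1, case 1, reduces to
\min_{k=0,\dots,T} S_k \;\le\; \mathbb{E}_{k \sim P_T}[S_k]
  \;=\; O\!\left(\frac{L_1}{T+1}\right),
% the familiar O(1/T) stationarity rate of synchronous proximal gradient
% descent; asynchrony therefore costs only a constant factor whenever
% tau = O(sqrt(m)).
```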
4.2 Proof of Theorem 1
For either noise regime, we define, for all $k \in \mathbb{N}$ and $j \in \{1,\ldots,m\}$, the factor $\epsilon := 2^{-1}(a-1)$. With the assumed choice of $\gamma_j^k$ and $\epsilon$, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with $A_j^k := w_j^k - x_j^k + \gamma_j^k\nu_j^k$, we have

$$\mathbb{E}[\Phi(z^{k+1})] \le \mathbb{E}[\Phi(z^k)] - \mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m \frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{ac_k}\right)\|A_j^k\|^2\right] + \sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2L\tau m^{-1/2})\big)\,\mathbb{E}\big[\mathbb{E}_k[\|\nu_j^k\|^2]\big]}{2m}. \qquad (4)$$

Two upper bounds follow from the definition of $\gamma_j^k$, the lower bound $c_k \ge 1$, and the straightforward inequalities $(ac_k)^{-1}(\underline{L} + 2M\tau m^{-1/2})^{-1} \ge \gamma_j^k \ge (ac_k)^{-1}(\overline{L} + 2M\tau m^{-1/2})^{-1}$:

$$\frac{1}{c_k}S_k \le \left(\frac{1-(1+\epsilon)a^{-1}}{2ma(L + 2L\tau m^{-1/2})}\right)^{-1}\mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m \frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{ac_k}\right)\|A_j^k\|^2\right]$$

and

$$\sum_{j=1}^m \frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2L\tau m^{-1/2})\big)\,\mathbb{E}_k[\|\nu_j^k\|^2]}{2m} \le \frac{\big(1 + (ac_k)^{-1}(1+\epsilon^{-1})\big)(\sigma_k^2/c_k)}{2a(L + 2L\tau m^{-1/2})}.$$

Now rearrange (4), use $\mathbb{E}[\Phi(z^{k+1})] \ge \inf_{x\in\mathcal{H}}\{f(x)+r(x)\}$ and $\mathbb{E}[\Phi(z^0)] = f(x^0) + r(x^0)$, and sum (4) over $k$ to get

$$\frac{1}{\sum_{k=0}^T c_k^{-1}}\sum_{k=0}^T \frac{1}{c_k}S_k \le \frac{f(x^0) + r(x^0) - \inf_{x\in\mathcal{H}}\{f(x)+r(x)\} + \sum_{k=0}^T \frac{(1+(ac_k)^{-1}(1+\epsilon^{-1}))(\sigma_k^2/c_k)}{2a(L+2L\tau m^{-1/2})}}{\frac{(1-(1+\epsilon)a^{-1})}{2ma(L+2L\tau m^{-1/2})}\sum_{k=0}^T c_k^{-1}}.$$

The left hand side of this inequality is bounded from below by $\min_{k=0,\ldots,T} S_k$ and is precisely the term $\mathbb{E}_{k\sim P_T}[S_k]$. What remains to be shown is an upper bound on the right hand side, which we will now call $R_T$. If the noise is summable, then $c_k \equiv 1$, so $\sum_{k=0}^T c_k^{-1} = T+1$ and $\sum_{k=0}^T \sigma_k^2/c_k < \infty$, which implies that $R_T = O(m(L + 2L\tau m^{-1/2})(T+1)^{-1})$. If the noise is $\alpha$-diminishing, then $c_k = \Theta(k^{1-\alpha})$, so $\sum_{k=0}^T c_k^{-1} = \Theta((T+1)^{\alpha})$ and, because $\sigma_k^2/c_k = O(k^{-1})$, there exists a $B > 0$ such that $\sum_{k=0}^T \sigma_k^2/c_k \le \sum_{k=0}^T Bk^{-1} = O(\log(T+1))$, which implies that $R_T = O((m(L + 2L\tau m^{-1/2}) + m\log(T+1))(T+1)^{-\alpha})$.

5 Numerical Experiments
In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. We confirm two properties: (1) SAPALM converges to values nearly as low as PALM given the same number of iterations; (2) SAPALM exhibits a near-linear speedup as the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10 cores per socket. We use two different nonconvex matrix factorization problems to exhibit these properties, to which we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise. For each of our examples, we generate a matrix $A \in \mathbb{R}^{n\times n}$ with iid standard normal entries, where $n = 2000$. Although SAPALM is intended for use on much larger problems, using a small problem size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony affects convergence.

1. Sparse PCA with Asynchronous Block Coordinate Updates. We minimize
$$\underset{X,Y}{\operatorname{argmin}}\ \frac{1}{2}\|A - X^TY\|_F^2 + \lambda\|X\|_1 + \lambda\|Y\|_1, \qquad (5)$$
where $X \in \mathbb{R}^{d\times n}$ and $Y \in \mathbb{R}^{d\times n}$ for some $d \in \mathbb{N}$. We solve this problem using SAPALM with no noise, $\nu^k = 0$.

2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We minimize
$$\underset{X,Y}{\operatorname{argmin}}\ \frac{1}{2}\|A - X^TY\|_F^2 + \lambda(\|X\|_{\mathrm{Firm}} + \|Y\|_{\mathrm{Firm}}) + \frac{\mu}{2}(\|X\|_F^2 + \|Y\|_F^2), \qquad (6)$$
where $X \in \mathbb{R}^{d\times n}$, $Y \in \mathbb{R}^{d\times n}$, and $\|\cdot\|_{\mathrm{Firm}}$ is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3. In both experiments, $X$ and $Y$ are treated as coordinate blocks.
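For concreteness, here is a hedged Python sketch of problem (5) solved with the synchronous PALM baseline (the $\tau = 0$, $\nu = 0$ special case of SAPALM); $n$, $d$, $\lambda$, and the iteration count are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 500, 10, 0.1          # illustrative sizes (the paper uses n = 2000)
A = rng.standard_normal((n, n))
X = rng.standard_normal((d, n))
Y = rng.standard_normal((d, n))

def soft(V, t):
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

for k in range(50):
    R = X.T @ Y - A                               # residual of 0.5||A - X^T Y||_F^2
    gamma = 1.0 / np.linalg.norm(Y @ Y.T, 2)      # 1 / L_X for the X block
    X = soft(X - gamma * (Y @ R.T), gamma * lam)  # gradient step + l1 prox on X
    R = X.T @ Y - A                               # refresh after updating X
    gamma = 1.0 / np.linalg.norm(X @ X.T, 2)      # 1 / L_Y for the Y block
    Y = soft(Y - gamma * (X @ R), gamma * lam)    # gradient step + l1 prox on Y

obj = 0.5 * np.linalg.norm(A - X.T @ Y, "fro") ** 2 \
      + lam * (np.abs(X).sum() + np.abs(Y).sum())
print(obj)
```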
Notice that for this problem, the SAPALM update decouples over the entries of each coordinate block. Each worker updates its coordinate block (say, $X$) by cycling through the coordinates of $X$ and updating each in turn, restarting at a random coordinate after each cycle.

In Figures 1a and 1c, we see objective function values plotted by iteration. By this metric, SAPALM performs as well as PALM, its single-threaded variant; for the second problem, the curves for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But SAPALM can add additional workers to increment the iteration counter more quickly, as seen in Figure 1b, allowing SAPALM to outperform its single-threaded variant. We measure the speedup $S_k(p)$ of SAPALM by the relative time for $p$ workers to produce $k$ iterates:

$$S_k(p) = \frac{T_k(1)}{T_k(p)}, \qquad (7)$$

where $T_k(p)$ is the time to produce $k$ iterates using $p$ workers. Table 2 shows that SAPALM achieves near-linear speedup for a range of variable sizes $d$. Dashes (–) denote experiments not run.

Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.

threads   d=10      d=20      d=100
1         1         1         1
2         1.9722    1.9812    –
4         3.7623    3.7635    –
8         7.1444    7.3315    7.3719
16        13.376    14.5322   14.743

Deviations from linearity can be attributed to a breakdown in the abstraction of a "shared memory" computer: as each worker modifies the "shared" variables $X$ and $Y$, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration. For small $d$, write conflicts are more likely; for large $d$, communication to maintain cache coherency dominates.

6 Discussion
A few straightforward generalizations of our work are possible; we omit them to simplify notation.
Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a maximum number of iterations for which we plan to run SAPALM and adjusting the $c_k$ factors accordingly, as in [14, Equation (3.2.10)].
Cluster points of $\{x^k\}_{k\in\mathbb{N}}$. Using the strategy employed in [5], it is possible to show that all cluster points of $\{x^k\}_{k\in\mathbb{N}}$ are (almost surely) stationary points of $f + r$.
Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow $L_j$ to vary: for every $x \in \mathcal{H}$, we can assume $L_j(x_1,\ldots,x_{j-1},\cdot,x_{j+1},\ldots,x_m)$-Lipschitz continuity of each partial gradient $\nabla_j f(x_1,\ldots,x_{j-1},\cdot,x_{j+1},\ldots,x_m) : \mathcal{H}_j \to \mathcal{H}_j$.

7 Conclusion
This paper presented SAPALM, the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for SAPALM, and show that with the parameters suggested by this theory, SAPALM achieves a near-linear speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These results give specific guidance to ensure fast convergence of practical asynchronous methods on a large class of important, nonconvex optimization problems, and pave the way towards a deeper understanding of the stability of these methods in the presence of noise.
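As a closing illustration of the experiments' speedup metric (7), a tiny Python sketch; the wall-clock times below are hypothetical placeholders, not the paper's measurements:

```python
# Speedup metric (7): S_k(p) = T_k(1) / T_k(p) for a fixed iterate count k.
times = {1: 16.0, 2: 8.1, 4: 4.3, 8: 2.2, 16: 1.2}  # hypothetical T_k(p), seconds

for p in sorted(times):
    print(f"{p:>2} threads: speedup {times[1] / times[p]:.2f}")
```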
1. What is the main contribution of the paper in terms of optimization problems?
2. What is the significance of the proposed algorithm, and how does it differ from other methods?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. What are the strengths of the paper regarding its theoretical analysis and experimental results?
5. Does the reviewer have any concerns or suggestions for improving the paper?
Review
Review
This paper introduces a noisy asynchronous block coordinate descent method for solving nonconvex, nonsmooth optimization problems. The main contribution is its proof of convergence for a large class of optimization problems. The algorithm proposed in this paper for implementing noisy asynchronous block coordinate descent is something that comes to mind immediately if someone is to implement such an algorithm. The main contribution of this paper is providing proofs that such a simple algorithm actually works. It proves that the algorithm matches the best known rates of convergence on this problem class. The paper is well written, and it is very easy to follow the arguments. The authors start with a short introduction of known results and subsequently explain their algorithm and their main convergence theorem. Through experiments on two different nonconvex matrix factorization problems, they show that the algorithm attains a linear speedup in the number of CPU threads. Overall, the paper is a valuable piece of work, and its theoretical results can benefit practitioners of large machine learning systems.
NIPS
1. What is the main contribution of the paper regarding asynchronous proximal gradient descent?
2. How does the proposed approach relate to prior works such as stochastic gradient descent based sampling?
3. Do you have any questions or concerns about the proof of guaranteed convergence to a local optimal solution?
4. How does the distributed computation in the proposed algorithm interpret unchecked asynchronous updates of variables as noise when computing gradients?
5. Can you explain how recent developments in stochastic gradient descent-based sampling compare with the authors' approach?
6. Are there any specific areas where the paper could be improved, such as providing more details or comparisons with other methods?
Review
Review
The paper proposes an asynchronous proximal gradient descent algorithm and provides a proof of its guaranteed convergence to a locally optimal solution. The main contribution is the finding that the prox function for nonconvex problems defines a set, which in turn induces a measurable space, and the interpretation of the unchecked asynchronous updates of variables as a source of noise when computing the gradient. In doing so, the distributed computation can be seen as an instance of a stochastically altered gradient descent algorithm. The authors report convergence behavior under two different noise regimes, resulting in constant and decreasing step sizes, respectively.

I find the approach rather interesting; especially the broad and general definition of the problem makes the approach applicable to a wide range of problems. However, I was surprised by the absence of any reference to the seminal Robbins/Monro paper and also to the recent developments in stochastic gradient descent based sampling (see below). The authors do local gradient descent updates of coordinate blocks by computing partial gradients and adding noise in each asynchronous step. I was wondering how this relates to the "usual" stochastic gradient descent update: given that the locally computed partial gradient will be based on delayed (noisy) variable states, a sequence of these noisy partial gradients would converge to the true partial gradient as well. Further, recent SGD-based sampling has shown that adding noise to the variable states obtained by noisy gradient updates (as the authors do as well) provides good samples of the distribution underlying the optimal variable setting, also in a nonconvex setting. That being said, the work handed in remains valid, but it would have been interesting to compare the proposed approach to well-established stochastic gradient methods. The overall procedure is laid out well and is comprehensible. The chosen examples in the experiments section are well suited to demonstrate the scalability benefits of the algorithm.

Beyond that, I have a few minor remarks on style and the overall rationale:
- Line 60: "each" is unnecessary here when m = 1.
- Line 69: k is not yet defined, as are g and \nu.
- Line 106: the notation using y is probably wrong; shouldn't it read argmin_{x_{j_k}^k} r_{j_k}(x_{j_k}^k) + ... ?
- Algorithms 1 and 2 lack a break condition and output.
- Table 1: I assume the timing is in seconds?

Literature:
Robbins, H., & Monro, S. (1951). A stochastic approximation method. The Annals of Mathematical Statistics, 400–407.
Welling, M., & Teh, Y.-W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
NIPS
Title The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM Abstract We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among synchronous or asynchronous methods — on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. 1 Introduction Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is heavy tailed. Hence as the number of processors increases, most processors waste most of their time waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor to update the global state of the algorithm and continue, ignoring read and write conflicts whenever they occur. Occasionally one processor will erase the work of another; the hope is that the gain from allowing processors to work at their own paces offsets the loss from a sloppy division of labor. These asynchronous parallel optimization methods can work quite well in practice, but it is difficult to tune their parameters: lock-free code is notoriously hard to debug. For these problems, there is nothing as practical as a good theory, which might explain how to set these parameters so as to guarantee convergence. In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous algorithms for problems of the form minimize (x1,...,xm)∈H1×...×Hm f(x1, . . . , xm) + m∑ j=1 rj(xj), (1) where f is a continuously differentiable (C1) function with an L-Lipschitz gradient, each rj is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets Hj are Euclidean spaces (i.e.,Hj = Rnj for some nj ∈ N). This problem class includes many (convex and nonconvex) signal recovery problems, matrix factorization problems, and, more generally, any generalized low rank model [20]. Following terminology from these domains, we view f as a loss function and each rj as a regularizer. For example, f might encode the misfit between the observations and the model, while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximalgradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3]. These asynchronous variants depend on the same parameters as the synchronous methods, such as a step size parameter, but also new ones, such as the maximum allowable delay. 
Our contribution here is to provide a convergence theory to guide the choice of those parameters within our control (such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous Proximal Alternating Linearized Minimization method, or SAPALM for short. Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in practice has been matched by progress in theory. Guaranteed convergence for these algorithms has been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1]. Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular, for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10]; block coordinate methods for smooth problems with separable, convex constraints [18]; block coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still missing from the theory. We aim to fill this gap. Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a a block coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best, known rates of convergence, due to [8] in the case where each rj is convex and m = 1: that is, asynchrony carries no theoretical penalty for convergence speed. We test SAPALM on a few example problems and compare to a synchronous implementation, showing a linear speedup. Notation. Let m ∈ N denote the number of coordinate blocks. We letH = H1 × . . .×Hm. For every x ∈ H, each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj → Hj is Lj-Lipschitz continuous; we let L = minj{Lj} ≤ maxj{Lj} = L. The number τ ∈ N is the maximum allowable delay. Define the aggregate regularizer r : H → (−∞,∞] as r(x) = ∑m j=1 rj(xj). For each j ∈ {1, . . . ,m}, y ∈ Hj , and γ > 0, define the proximal operator proxγrj (y) := argmin xj∈Hj { rj(xj) + 1 2γ ‖xj − y‖2 } For convex rj , proxγrj (y) is uniquely defined, but for nonconvex problems, it is, in general, a set. We make the mild assumption that for all y ∈ Hj , we have proxγrj (y) 6= ∅. A slight technicality arises from our ability to choose among multiple elements of proxγrj (y), especially in light of the stochastic nature of SAPALM. Thus, for all y, j and γ > 0, we fix an element ζj(y, γ) ∈ proxγrj (y). (2) By [17, Exercise 14.38], we can assume that ζj is measurable, which enables us to reason with expectations wherever they involve ζj . As shorthand, we use proxγrj (y) to denote the (unique) choice ζj(y, γ). For any random variable or vector X , we let Ek [X] = E [ X | xk, . . . , x0, νk, . . . 
2 Algorithm Description
Algorithm 1 displays the SAPALM method. We highlight a few features of the algorithm, which we discuss in more detail below.

Algorithm 1 SAPALM [Local view]
Input: $x \in \mathcal{H}$
1: all processors, in parallel, do
2: loop
3:   randomly select a coordinate block $j \in \{1,\ldots,m\}$
4:   read $x$ from shared memory
5:   compute $g = \nabla_j f(x) + \nu_j$
6:   choose a stepsize $\gamma_j \in \mathbb{R}_{++}$   ▷ according to Assumption 3
7:   $x_j \leftarrow \operatorname{prox}_{\gamma_j r_j}(x_j - \gamma_j g)$   ▷ according to (2)

• Inconsistent iterates. Other processors may write updates to $x$ in the time required to read $x$ from memory.
• Coordinate blocks. When the coordinate blocks $x_j$ are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update.
• Noise. The noise $\nu \in \mathcal{H}$ is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points.
Algorithm 1 has an equivalent (mathematical) description, which we present in Algorithm 2, using an iteration counter $k$ that is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates. In Algorithm 1, a processor might not have access to the shared memory's global state, $x^k$, at iteration $k$. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate $x^{k-d_k} = (x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$, where the delays $d_k$ are integers less than $\tau$, and $x^l = x^0$ when $l < 0$.

Algorithm 2 SAPALM [Global view]
Input: $x^0 \in \mathcal{H}$
1: for $k \in \mathbb{N}$ do
2:   randomly select a coordinate block $j_k \in \{1,\ldots,m\}$
3:   read $x^{k-d_k} = (x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$ from shared memory
4:   compute $g^k = \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k$
5:   choose a stepsize $\gamma_{j_k}^k \in \mathbb{R}_{++}$   ▷ according to Assumption 3
6:   for $j = 1,\ldots,m$ do
7:     if $j = j_k$ then
8:       $x_{j_k}^{k+1} \leftarrow \operatorname{prox}_{\gamma_{j_k}^k r_{j_k}}(x_{j_k}^k - \gamma_{j_k}^k g^k)$   ▷ according to (2)
9:     else
10:      $x_j^{k+1} \leftarrow x_j^k$

2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes
Assumption 1 (Bounded Delay). There exists some $\tau \in \mathbb{N}$ such that, for all $k \in \mathbb{N}$, the coordinate delays satisfy $d_k \in \{0,\ldots,\tau\}^m$.
Assumption 2 (Independence). The indices $\{j_k\}_{k\in\mathbb{N}}$ are uniformly distributed and collectively IID. They are independent from the history of the algorithm $x^k,\ldots,x^0,\nu^k,\ldots,\nu^0$ for all $k \in \mathbb{N}$.
We employ two possible restrictions on the noise sequence $\nu^k$ and the sequence of allowable stepsizes $\gamma_j^k$, which lead to different convergence rates:
Assumption 3 (Noise Regimes and Stepsizes). Let $\sigma_k^2 := \mathbb{E}_k[\|\nu^k\|^2]$ denote the expected squared norm of the noise, and let $a \in (1,\infty)$. Assume that $\mathbb{E}_k[\nu^k] = 0$ and that there is a sequence of weights $\{c_k\}_{k\in\mathbb{N}} \subseteq [1,\infty)$ such that
$$(\forall k \in \mathbb{N})\,(\forall j \in \{1,\ldots,m\})\qquad \gamma_j^k := \frac{1}{a\,c_k\,(L_j + 2\overline{L}\tau m^{-1/2})},$$
where the weights are chosen using the following two rules, both of which depend on the growth of $\sigma_k$:
Summable. $\sum_{k=0}^{\infty}\sigma_k^2 < \infty \implies c_k \equiv 1$;
$\alpha$-Diminishing. ($\alpha \in (0,1)$) $\sigma_k^2 = O((k+1)^{-\alpha}) \implies c_k = \Theta((k+1)^{1-\alpha})$.
More noise, measured by $\sigma_k$, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between $\Theta(1)$ and $\Theta(k^{1-\alpha})$ for any $\alpha \in (0,1)$. Larger stepsizes lead to convergence rates of order $O(k^{-1})$, while smaller ones lead to order $O(k^{-\alpha})$.
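As a rough illustration of the local view, the following sketch runs Algorithm 1 on an $\ell_1$-regularized quadratic with several lock-free threads. It is an assumption-laden toy: the problem, the constants, and the zero noise $\nu = 0$ (so $c_k \equiv 1$, the summable regime) are ours, and Python's GIL serializes bytecode, so the sketch demonstrates the access pattern of lines 3–7 rather than real speedup.

```python
import threading
import numpy as np

# Toy shared-memory SAPALM (Algorithm 1, local view) with nu = 0 and c_k = 1.
# The quadratic-plus-l1 problem and all constants are illustrative assumptions.
m, dim, lam, a, tau = 4, 50, 0.1, 2.0, 4
rng = np.random.default_rng(0)
B = rng.standard_normal((m * dim, m * dim))
Q = B.T @ B / (m * dim)                        # f(x) = 0.5 * x^T Q x (smooth)
L_j = np.full(m, np.linalg.eigvalsh(Q).max())  # crude per-block Lipschitz bounds
L_bar = L_j.max()
x = np.zeros((m, dim))                         # shared iterate, one row per block

def prox_l1(y, gamma):
    """Soft thresholding: prox of gamma * lam * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

def worker(steps, seed):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        j = int(rng.integers(m))               # line 3: sample a block
        x_read = x.copy()                      # line 4: read (possibly stale) state
        g = (Q @ x_read.ravel()).reshape(m, dim)[j]          # line 5: partial grad
        gamma = 1.0 / (a * (L_j[j] + 2 * L_bar * tau / np.sqrt(m)))  # line 6
        x[j] = prox_l1(x_read[j] - gamma * g, gamma)         # line 7: lock-free write

threads = [threading.Thread(target=worker, args=(500, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("objective:", 0.5 * x.ravel() @ Q @ x.ravel() + lam * np.abs(x).sum())
```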
2.2 Algorithm Features
Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors:
1. Read. While other processors are writing to shared memory, read the possibly out-of-sync, delayed coordinates $x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}}$.
2. Compute. Locally, compute the partial gradient $\nabla_{j_k} f(x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}})$.
3. Write. After computing the gradient, replace the $j_k$th coordinate with
$$x_{j_k}^{k+1} \in \operatorname*{argmin}_{y}\; r_{j_k}(y) + \big\langle \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k,\; y - x_{j_k}^k \big\rangle + \frac{1}{2\gamma_{j_k}^k}\,\|y - x_{j_k}^k\|^2.$$
Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states $x_1^{k-d_{k,1}},\ldots,x_m^{k-d_{k,m}}$ need never have simultaneously existed in memory. Although we write the method with a global counter $k$, the asynchronous processors need not be aware of it; and the requirement that the delays $d_k$ remain bounded by $\tau$ does not demand coordination, but rather serves only to define $\tau$.
What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, $\nu^k$, captures three phenomena:
1. Computational Error. Noise due to random computational error.
2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7].
3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients.
Of course, the noise model also captures any combination of the above phenomena. The last one is, perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic gradients can be computed faster than their batch counterparts, allowing more frequent updates.
2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method
In Algorithm 1, any stochastic estimator $\nabla f(x^{k-d_k};\xi)$ of the gradient may be used, as long as
$$\mathbb{E}_k\big[\nabla f(x^{k-d_k};\xi)\big] = \nabla f(x^{k-d_k}) \quad\text{and}\quad \mathbb{E}_k\big[\|\nabla f(x^{k-d_k};\xi) - \nabla f(x^{k-d_k})\|^2\big] \le \sigma^2.$$
In particular, if problem (1) takes the form
$$\operatorname*{minimize}_{x\in\mathcal{H}}\; \mathbb{E}_\xi\big[f(x_1,\ldots,x_m;\xi)\big] + \frac{1}{m}\sum_{j=1}^{m} r_j(x_j),$$
then, in Algorithm 2, the stochastic mini-batch estimator $g^k = m_k^{-1}\sum_{i=1}^{m_k}\nabla f(x^{k-d_k};\xi_i)$, where the $\xi_i$ are IID, may be used in place of $\nabla f(x^{k-d_k}) + \nu^k$. A quick calculation shows that $\mathbb{E}_k\big[\|g^k - \nabla f(x^{k-d_k})\|^2\big] = O(m_k^{-1})$; a numerical sanity check of this calculation appears at the end of this section. Thus, any increasing batch size $m_k = \Omega((k+1)^{\alpha})$, with $\alpha \in (0,1)$, conforms to Assumption 3. When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for analogous, synchronous algorithms.
3 Convergence Theorem
Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity
$$S_k := \mathbb{E}\left[\sum_{j=1}^{m}\left\| \frac{1}{\gamma_j^k}\big(w_j^k - x_j^k\big) + \nu_j^k \right\|^2\right], \quad\text{where } (\forall j \in \{1,\ldots,m\})\;\; w_j^k = \operatorname{prox}_{\gamma_j^k r_j}\!\big(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\big). \qquad (3)$$
A reduction to the case $r \equiv 0$ and $d_k = 0$ reveals that $w_j^k - x_j^k + \gamma_j^k\nu_j^k = -\gamma_j^k\nabla_j f(x^k)$ and, hence, $S_k = \mathbb{E}\big[\|\nabla f(x^k)\|^2\big]$. More generally, $w_j^k - x_j^k + \gamma_j^k\nu_j^k \in -\gamma_j^k\big(\partial_L r_j(w_j^k) + \nabla_j f(x^{k-d_k})\big)$, where $\partial_L r_j$ is the limiting subdifferential of $r_j$ [17] which, if $r_j$ is convex, reduces to the standard convex subdifferential familiar from [14].
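Returning to the mini-batch estimator of Section 2.3, the variance calculation can be checked numerically. The sketch below is our own toy, assuming $f(x;\xi) = \tfrac{1}{2}\|x-\xi\|^2$ with $\mathbb{E}[\xi] = 0$, so that $\nabla f(x) = x$: with $m_k = \lceil (k+1)^{\alpha}\rceil$ samples, the empirical $\sigma_k^2$ of the estimator decays like $(k+1)^{-\alpha}$, as Assumption 3 requires.

```python
import numpy as np

# Toy check of the O(1/m_k) variance of the mini-batch estimator (Section 2.3).
# We assume f(x; xi) = 0.5 * ||x - xi||^2 with E[xi] = 0, so grad f(x) = x.
rng = np.random.default_rng(0)
alpha, dim = 0.5, 20
x = rng.standard_normal(dim)

def minibatch_grad(x, k):
    m_k = int(np.ceil((k + 1) ** alpha))        # increasing batch size
    xi = rng.standard_normal((m_k, x.size))     # i.i.d. samples
    return np.mean(x - xi, axis=0)              # g^k, unbiased for grad f(x) = x

for k in [0, 10, 100, 1000]:
    g = np.stack([minibatch_grad(x, k) for _ in range(2000)])
    # empirical sigma_k^2 = E||g^k - grad f(x)||^2, roughly dim / m_k
    print(k, np.mean(np.sum((g - x) ** 2, axis=1)))
```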
A messy but straightforward calculation shows that our convergence rates for $S_k$ can be converted to convergence rates for elements of $\partial_L r(w^k) + \nabla f(w^k)$. We present our main convergence theorem now and defer the proof to Section 4.
Theorem 1 (SAPALM Convergence Rates). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq \mathcal{H}$ be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3, the following convergence rates hold: for all $T \in \mathbb{N}$, if $\{\nu^k\}_{k\in\mathbb{N}}$ is
1. Summable, then
$$\min_{k=0,\ldots,T} S_k \;\le\; \mathbb{E}_{k\sim P_T}[S_k] \;=\; O\!\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2})}{T+1}\right);$$
2. $\alpha$-Diminishing, then
$$\min_{k=0,\ldots,T} S_k \;\le\; \mathbb{E}_{k\sim P_T}[S_k] \;=\; O\!\left(\big(m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)\big)\,(T+1)^{-\alpha}\right);$$
where, for all $T \in \mathbb{N}$, $P_T$ is the distribution on $\{0,\ldots,T\}$ such that $P_T(X = k) \propto c_k^{-1}$.
Effects of Delay and Linear Speedups. The $m^{-1/2}$ term in the convergence rates presented in Theorem 1 prevents the delay $\tau$ from dominating our rates of convergence. In particular, as long as $\tau = O(\sqrt{m})$, the convergence rates in the synchronous ($\tau = 0$) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with $p$ processors will converge nearly $p$ times faster than PALM, since the iteration counter will be updated $p$ times as often. As a rule of thumb, $\tau$ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as $O(\sqrt{m})$ processors.
3.1 The Asynchronous Stochastic Block Gradient Method
If the regularizer $r$ is identically zero, then the noise $\nu^k$ need not vanish in the limit. The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof.
Theorem 2 (SAPALM Convergence Rates ($r \equiv 0$)). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq \mathcal{H}$ be the SAPALM sequence created by Algorithm 2 in the case that $r \equiv 0$. If the sequence $\{\mathbb{E}_k[\|\nu^k\|^2]\}_{k\in\mathbb{N}}$ is bounded (not necessarily diminishing) and
$$(\exists a \in (1,\infty))\,(\forall k \in \mathbb{N})\,(\forall j \in \{1,\ldots,m\})\qquad \gamma_j^k := \frac{1}{a\sqrt{k}\,(L_j + 2\overline{L}\tau m^{-1/2})},$$
then for all $T \in \mathbb{N}$ we have
$$\min_{k=0,\ldots,T} S_k \;\le\; \mathbb{E}_{k\sim P_T}[S_k] \;=\; O\!\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)}{\sqrt{T+1}}\right),$$
where $P_T$ is the distribution on $\{0,\ldots,T\}$ such that $P_T(X = k) \propto k^{-1/2}$.
4 Convergence Analysis
4.1 The Asynchronous Lyapunov Function
Key to the convergence of SAPALM is the following Lyapunov function, defined on $\mathcal{H}^{1+\tau}$, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps: for all $x(0), x(1), \ldots, x(\tau) \in \mathcal{H}$,
$$\Phi(x(0), x(1), \ldots, x(\tau)) = f(x(0)) + r(x(0)) + \frac{\overline{L}}{2\sqrt{m}}\sum_{h=1}^{\tau}(\tau - h + 1)\,\|x(h) - x(h-1)\|^2.$$
This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material.
Lemma 1 (Lyapunov Function Supermartingale Inequality). For all $k \in \mathbb{N}$, let $z^k = (x^k,\ldots,x^{k-\tau}) \in \mathcal{H}^{1+\tau}$. Then for all $\epsilon > 0$, we have
$$\mathbb{E}_k\big[\Phi(z^{k+1})\big] \le \Phi(z^k) - \frac{1}{2m}\sum_{j=1}^{m}\left(\frac{1}{\gamma_j^k} - (1+\epsilon)\left(L_j + \frac{2\overline{L}\tau}{\sqrt{m}}\right)\right)\mathbb{E}_k\big[\|w_j^k - x_j^k + \gamma_j^k\nu_j^k\|^2\big] + \sum_{j=1}^{m}\frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}_k\big[\|\nu_j^k\|^2\big]}{2m},$$
where for all $j \in \{1,\ldots,m\}$ we have $w_j^k = \operatorname{prox}_{\gamma_j^k r_j}\big(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\big)$. In particular, for $\sigma_k = 0$, we can take $\epsilon = 0$, and the last line vanishes.
Notice that if $\sigma_k = \epsilon = 0$ and $\gamma_j^k$ is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing.
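To make the bookkeeping behind $\Phi$ concrete, the sketch below evaluates the asynchronous Lyapunov function for an $\ell_1$-regularized quadratic toy; the problem, the history buffer, and all constants are our own assumptions, not quantities fixed by the paper.

```python
import numpy as np

# Sketch of the asynchronous Lyapunov function Phi from Section 4.1 for an
# l1-regularized quadratic toy; f, r, and all constants are assumptions.
rng = np.random.default_rng(0)
m, dim, lam, tau = 4, 10, 0.1, 3
B = rng.standard_normal((m * dim, m * dim))
Q = B.T @ B / (m * dim)                 # f(x) = 0.5 * x^T Q x
L_bar = np.linalg.eigvalsh(Q).max()

def phi(history):
    """Phi(x(0), ..., x(tau)); history[h] = x(h), the iterate h steps back."""
    x0 = history[0]
    val = 0.5 * x0 @ Q @ x0 + lam * np.abs(x0).sum()   # f(x(0)) + r(x(0))
    for h in range(1, len(history)):                   # weighted delayed steps:
        val += (L_bar / (2 * np.sqrt(m))) * (len(history) - h) \
               * np.sum((history[h] - history[h - 1]) ** 2)
    return val

hist = [rng.standard_normal(m * dim) for _ in range(tau + 1)]  # newest first
print(phi(hist))
```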
If $\sigma_k$ is nonzero, the factor $\epsilon$ will be used in concert with the stepsize $\gamma_j^k$ to ensure that noise does not cause the algorithm to diverge.
4.2 Proof of Theorem 1
For either noise regime, we define the factor $\epsilon := 2^{-1}(a-1)$. With the assumed choice of $\gamma_j^k$ and $\epsilon$, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with $A_j^k := w_j^k - x_j^k + \gamma_j^k\nu_j^k$, we have
$$\mathbb{E}\big[\Phi(z^{k+1})\big] \le \mathbb{E}\big[\Phi(z^k)\big] - \mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^{m}\frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{a c_k}\right)\|A_j^k\|^2\right] + \sum_{j=1}^{m}\frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}\big[\mathbb{E}_k[\|\nu_j^k\|^2]\big]}{2m}. \qquad (4)$$
Two upper bounds follow from the definition of $\gamma_j^k$, the lower bound $c_k \ge 1$, and the straightforward inequalities $\big(ac_k(\underline{L} + 2\overline{L}\tau m^{-1/2})\big)^{-1} \ge \gamma_j^k \ge \big(ac_k(\overline{L} + 2\overline{L}\tau m^{-1/2})\big)^{-1}$:
$$\frac{1}{c_k}S_k \;\le\; \left(\frac{1 - (1+\epsilon)a^{-1}}{2ma(\overline{L} + 2\overline{L}\tau m^{-1/2})}\right)^{-1}\mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^{m}\frac{1}{\gamma_j^k}\left(1 - \frac{1+\epsilon}{a c_k}\right)\|A_j^k\|^2\right]$$
and
$$\sum_{j=1}^{m}\frac{\gamma_j^k\big(1 + \gamma_j^k(1+\epsilon^{-1})(L_j + 2\overline{L}\tau m^{-1/2})\big)\,\mathbb{E}_k\big[\|\nu_j^k\|^2\big]}{2m} \;\le\; \frac{\big(1 + (ac_k)^{-1}(1+\epsilon^{-1})\big)(\sigma_k^2/c_k)}{2a(\underline{L} + 2\overline{L}\tau m^{-1/2})}.$$
Now rearrange (4), use $\mathbb{E}\big[\Phi(z^{k+1})\big] \ge \inf_{x\in\mathcal{H}}\{f(x) + r(x)\}$ and $\mathbb{E}\big[\Phi(z^0)\big] = f(x^0) + r(x^0)$, and sum (4) over $k$ to get
$$\frac{1}{\sum_{k=0}^{T} c_k^{-1}}\sum_{k=0}^{T}\frac{1}{c_k}S_k \;\le\; \frac{f(x^0) + r(x^0) - \inf_{x\in\mathcal{H}}\{f(x) + r(x)\} + \sum_{k=0}^{T}\frac{(1 + (ac_k)^{-1}(1+\epsilon^{-1}))(\sigma_k^2/c_k)}{2a(\underline{L} + 2\overline{L}\tau m^{-1/2})}}{\frac{1 - (1+\epsilon)a^{-1}}{2ma(\overline{L} + 2\overline{L}\tau m^{-1/2})}\sum_{k=0}^{T} c_k^{-1}}.$$
The left-hand side of this inequality is bounded from below by $\min_{k=0,\ldots,T} S_k$ and is precisely the term $\mathbb{E}_{k\sim P_T}[S_k]$. What remains to be shown is an upper bound on the right-hand side, which we will now call $R_T$. If the noise is summable, then $c_k \equiv 1$, so $\sum_{k=0}^{T} c_k^{-1} = T+1$ and $\sum_{k=0}^{T}\sigma_k^2/c_k < \infty$, which implies that $R_T = O\big(m(\overline{L} + 2\overline{L}\tau m^{-1/2})(T+1)^{-1}\big)$. If the noise is $\alpha$-diminishing, then $c_k = \Theta\big((k+1)^{1-\alpha}\big)$, so $\sum_{k=0}^{T} c_k^{-1} = \Theta((T+1)^{\alpha})$ and, because $\sigma_k^2/c_k = O((k+1)^{-1})$, there exists a $B > 0$ such that $\sum_{k=0}^{T}\sigma_k^2/c_k \le \sum_{k=0}^{T} B(k+1)^{-1} = O(\log(T+1))$, which implies that $R_T = O\big((m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1))(T+1)^{-\alpha}\big)$.
5 Numerical Experiments
In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. We confirm two properties: 1) SAPALM converges to values nearly as low as PALM given the same number of iterations, and 2) SAPALM exhibits a near-linear speedup as the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10 cores per socket. We use two different nonconvex matrix factorization problems to exhibit these properties, to which we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise. For each of our examples, we generate a matrix $A \in \mathbb{R}^{n\times n}$ with iid standard normal entries, where $n = 2000$. Although SAPALM is intended for use on much larger problems, using a small problem size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony affects convergence.
1. Sparse PCA with Asynchronous Block Coordinate Updates. We minimize
$$\min_{X,Y}\; \frac{1}{2}\|A - X^T Y\|_F^2 + \lambda\|X\|_1 + \lambda\|Y\|_1, \qquad (5)$$
where $X \in \mathbb{R}^{d\times n}$ and $Y \in \mathbb{R}^{d\times n}$ for some $d \in \mathbb{N}$. We solve this problem using SAPALM with no noise ($\nu^k = 0$).
2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We minimize
$$\min_{X,Y}\; \frac{1}{2}\|A - X^T Y\|_F^2 + \lambda\big(\|X\|_{\mathrm{Firm}} + \|Y\|_{\mathrm{Firm}}\big) + \frac{\mu}{2}\big(\|X\|_F^2 + \|Y\|_F^2\big), \qquad (6)$$
where $X \in \mathbb{R}^{d\times n}$, $Y \in \mathbb{R}^{d\times n}$, and $\|\cdot\|_{\mathrm{Firm}}$ is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3. In both experiments, $X$ and $Y$ are treated as coordinate blocks.
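A hedged sketch of the firm-thresholding operator used as the prox in problem (6) follows. The $(\lambda, \mu)$ parameterization is the classical firm shrinkage rule and is our assumption, since the paper defers the exact definition to [21]; the qualitative behavior matches the description above: small entries are set to zero, mid-range entries are shrunk, and large entries pass through unchanged.

```python
import numpy as np

# Sketch of firm thresholding, the prox of the penalty in problem (6).
# The (lam, mu) parameterization follows classical firm shrinkage and is our
# assumption; the paper defers the precise definition to [21].

def firm_threshold(y, lam, mu):
    """Elementwise firm shrinkage with thresholds 0 < lam < mu."""
    out = np.where(np.abs(y) <= lam, 0.0, y)             # kill small entries
    mid = (np.abs(y) > lam) & (np.abs(y) <= mu)          # shrink mid-range
    out = np.where(mid, np.sign(y) * mu * (np.abs(y) - lam) / (mu - lam), out)
    return out                                           # large entries pass

y = np.linspace(-3, 3, 7)
print(firm_threshold(y, lam=1.0, mu=2.0))  # [-3. -2.  0.  0.  0.  2.  3.]
```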
Notice that for this problem, the SAPALM update decouples over the entries of each coordinate block. Each worker updates its coordinate block (say, $X$) by cycling through the coordinates of $X$ and updating each in turn, restarting at a random coordinate after each cycle. In Figures 1a and 1c, we see objective function values plotted by iteration. By this metric, SAPALM performs as well as PALM, its single-threaded variant; for the second problem, the curves for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But SAPALM can add additional workers to increment the iteration counter more quickly, as seen in Figure 1b, allowing SAPALM to outperform its single-threaded variant. We measure the speedup $S_k(p)$ of SAPALM as the ratio of the time required by a single worker to the time required by $p$ workers to produce $k$ iterates,
$$S_k(p) = \frac{T_k(1)}{T_k(p)}, \qquad (7)$$
where $T_k(p)$ is the time to produce $k$ iterates using $p$ workers. Table 2 shows that SAPALM achieves near-linear speedup for a range of variable sizes $d$. (Dashes denote experiments not run.)

threads   d=10     d=20      d=100
1         1        1         1
2         1.9722   1.9812    –
4         3.7623   3.7635    –
8         7.1444   7.3315    7.3719
16        13.376   14.5322   14.743

Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.

Deviations from linearity can be attributed to a breakdown in the abstraction of a "shared memory" computer: as each worker modifies the "shared" variables $X$ and $Y$, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration. For small $d$, write conflicts are more likely; for large $d$, communication to maintain cache coherency dominates.
6 Discussion
A few straightforward generalizations of our work are possible; we omit them to simplify notation.
Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a maximum number of iterations for which we plan to run SAPALM and adjusting the $c_k$ factors accordingly, as in [14, Equation (3.2.10)].
Cluster points of $\{x^k\}_{k\in\mathbb{N}}$. Using the strategy employed in [5], it is possible to show that all cluster points of $\{x^k\}_{k\in\mathbb{N}}$ are (almost surely) stationary points of $f + r$.
Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow the $L_j$ to vary: for every $x \in \mathcal{H}$, we can assume that each partial gradient $\nabla_j f(x_1,\ldots,x_{j-1},\cdot,x_{j+1},\ldots,x_m) : \mathcal{H}_j \to \mathcal{H}_j$ is $L_j(x_1,\ldots,x_{j-1},x_{j+1},\ldots,x_m)$-Lipschitz continuous.
7 Conclusion
This paper presented SAPALM, the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for SAPALM, and show that with the parameters suggested by this theory, SAPALM achieves a near-linear speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These results give specific guidance to ensure fast convergence of practical asynchronous methods on a large class of important, nonconvex optimization problems, and pave the way towards a deeper understanding of the stability of these methods in the presence of noise.
1. What is the focus of the paper, and what are the claimed contributions?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its combination of optimization techniques?
3. Are there any concerns or limitations regarding the method's convergence rates or applicability to certain types of optimization problems?
4. How does the reviewer assess the novelty and significance of the paper's content?
5. Are there any questions or areas for improvement in the paper's presentation or analysis?
Review
Review
In this paper, the authors propose SAPALM for solving nonconvex, nonsmooth optimization problems, and they claim that SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. Moreover, the authors prove iteration complexity results for SAPALM and demonstrate state-of-the-art performance on several matrix factorization problems.
(1) This paper combines the two optimization techniques called "asynchronous parallelism" and PALM (or BGCD) for nonconvex, nonsmooth optimization problems. In fact, the main proof techniques are standard. Hence, I do not find the results very exciting.
(2) The authors claim that their method SAPALM matches the best known convergence rates. In fact, this optimal rate only holds for the summable error case. The authors should make this clear.
(3) This paper also covers asynchronous stochastic block gradient descent. However, the convergence analysis holds only for nonconvex, smooth optimization.
NIPS
1. What is the focus of the paper regarding asynchronous coordinate descent?
2. What are the strengths of the paper, particularly in terms of its contributions and generalizations?
3. Do you have any concerns or questions regarding the paper's content, such as the effectiveness of updates or the measurement of stationarity violation?
4. Can the techniques used in the paper be applied to other scenarios, such as deriving the convergence rate of constant batch size SGD?
Review
Review
The authors consider asynchronous coordinate descent with noise and a possibly nonsmooth regularizer for nonconvex optimization problems, and provide a proof of the convergence rate. The paper proves the convergence rate of an asynchronous coordinate descent algorithm with a nonsmooth regularizer and noise on nonconvex problems. The main contribution is a generalization of the bounds of Ji Liu and Stephen J. Wright's 2014 paper on asynchronous coordinate descent (this paper cited their 2013 work, but to me the 2014 work is more relevant) to nonconvex optimization problems with some noise on the gradients. This topic is very interesting, since there are many applications for stochastic coordinate descent (SCD) in the nonconvex setting. I have a few questions about this paper:
1. In line 77, the paper says another update may overwrite previous updates, but from Algorithm 2 it seems that all updates are effective.
2. In line 135, the paper says the expected violation of stationarity is the standard measure. Is there any reference for that?
3. From Theorem 2 it seems that the theorem only addresses the convergence of SGD with increasing batch sizes. Do you think it is easy to derive the convergence rate of constant batch size SGD, as in reference [8], using the same techniques as in this paper?
NIPS
Title The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM Abstract We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among synchronous or asynchronous methods — on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. 1 Introduction Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is heavy tailed. Hence as the number of processors increases, most processors waste most of their time waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor to update the global state of the algorithm and continue, ignoring read and write conflicts whenever they occur. Occasionally one processor will erase the work of another; the hope is that the gain from allowing processors to work at their own paces offsets the loss from a sloppy division of labor. These asynchronous parallel optimization methods can work quite well in practice, but it is difficult to tune their parameters: lock-free code is notoriously hard to debug. For these problems, there is nothing as practical as a good theory, which might explain how to set these parameters so as to guarantee convergence. In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous algorithms for problems of the form minimize (x1,...,xm)∈H1×...×Hm f(x1, . . . , xm) + m∑ j=1 rj(xj), (1) where f is a continuously differentiable (C1) function with an L-Lipschitz gradient, each rj is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets Hj are Euclidean spaces (i.e.,Hj = Rnj for some nj ∈ N). This problem class includes many (convex and nonconvex) signal recovery problems, matrix factorization problems, and, more generally, any generalized low rank model [20]. Following terminology from these domains, we view f as a loss function and each rj as a regularizer. For example, f might encode the misfit between the observations and the model, while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximalgradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3]. These asynchronous variants depend on the same parameters as the synchronous methods, such as a step size parameter, but also new ones, such as the maximum allowable delay. 
Our contribution here is to provide a convergence theory to guide the choice of those parameters within our control (such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous Proximal Alternating Linearized Minimization method, or SAPALM for short. Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in practice has been matched by progress in theory. Guaranteed convergence for these algorithms has been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1]. Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular, for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10]; block coordinate methods for smooth problems with separable, convex constraints [18]; block coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still missing from the theory. We aim to fill this gap. Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a a block coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best, known rates of convergence, due to [8] in the case where each rj is convex and m = 1: that is, asynchrony carries no theoretical penalty for convergence speed. We test SAPALM on a few example problems and compare to a synchronous implementation, showing a linear speedup. Notation. Let m ∈ N denote the number of coordinate blocks. We letH = H1 × . . .×Hm. For every x ∈ H, each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj → Hj is Lj-Lipschitz continuous; we let L = minj{Lj} ≤ maxj{Lj} = L. The number τ ∈ N is the maximum allowable delay. Define the aggregate regularizer r : H → (−∞,∞] as r(x) = ∑m j=1 rj(xj). For each j ∈ {1, . . . ,m}, y ∈ Hj , and γ > 0, define the proximal operator proxγrj (y) := argmin xj∈Hj { rj(xj) + 1 2γ ‖xj − y‖2 } For convex rj , proxγrj (y) is uniquely defined, but for nonconvex problems, it is, in general, a set. We make the mild assumption that for all y ∈ Hj , we have proxγrj (y) 6= ∅. A slight technicality arises from our ability to choose among multiple elements of proxγrj (y), especially in light of the stochastic nature of SAPALM. Thus, for all y, j and γ > 0, we fix an element ζj(y, γ) ∈ proxγrj (y). (2) By [17, Exercise 14.38], we can assume that ζj is measurable, which enables us to reason with expectations wherever they involve ζj . As shorthand, we use proxγrj (y) to denote the (unique) choice ζj(y, γ). For any random variable or vector X , we let Ek [X] = E [ X | xk, . . . , x0, νk, . . . 
, ν0 ] denote the conditional expectation of X with respect to the sigma algebra generated by the history of SAPALM. 2 Algorithm Description Algorithm 1 displays the SAPALM method. We highlight a few features of the algorithm which we discuss in more detail below. Algorithm 1 SAPALM [Local view] Input: x ∈ H 1: All processors in parallel do 2: loop 3: Randomly select a coordinate block j ∈ {1, . . . ,m} 4: Read x from shared memory 5: Compute g = ∇jf(x) + νj 6: Choose stepsize γj ∈ R++ . According to Assumption 3 7: xj ← proxγjrj (xj − γjg) . According to (2) • Inconsistent iterates. Other processors may write updates to x in the time required to read x from memory. • Coordinate blocks. When the coordinate blocks xj are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update. • Noise. The noise ν ∈ H is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points. Algorithm 1 has an equivalent (mathematical) description which we present in Algorithm 2, using an iteration counter k which is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates. In Algorithm 1, a processor might not have access to the shared-memory’s global state, xk, at iteration k. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate xk−dk = (x k−dk,1 1 , . . . , x k−dk,m m ), where the delays dk are integers less than τ , and xl = x0 when l < 0. Algorithm 2 SAPALM [Global view] Input: x0 ∈ H 1: for k ∈ N do 2: Randomly select a coordinate block jk ∈ {1, . . . ,m} 3: Read xk−dk = (xk−dk,11 , . . . , x k−dk,m m ) from shared memory 4: Compute gk = ∇jkf(xk−dk) + νkjk 5: Choose stepsize γkjk ∈ R++ . According to Assumption 3 6: for j = 1, . . . ,m do 7: if j = jk then 8: xk+1jk ← proxγkjkrjk (x k jk − γkjkg k) . According to (2) 9: else 10: xk+1j ← xkj 2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes Assumption 1 (Bounded Delay). There exists some τ ∈ N such that, for all k ∈ N, the sequence of coordinate delays lie within dk ∈ {0, . . . , τ}m. Assumption 2 (Independence). The indices {jk}k∈N are uniformly distributed and collectively IID. They are independent from the history of the algorithm xk, . . . , x0, νk, . . . , ν0 for all k ∈ N. We employ two possible restrictions on the noise sequence νk and the sequence of allowable stepsizes γkj , all of which lead to different convergence rates: Assumption 3 (Noise Regimes and Stepsizes). Let σ2k := Ek [ ‖νk‖2 ] denote the expected squared norm of the noise, and let a ∈ (1,∞). Assume that Ek [ νk ] = 0 and that there is a sequence of weights {ck}k∈N ⊆ [1,∞) such that (∀k ∈ N) , (∀j ∈ {1, . . . ,m}) γkj := 1 ack(Lj + 2Lτm−1/2) . which we choose using the following two rules, both of which depend on the growth of σk: Summable. ∑∞ k=0 σ 2 k <∞ =⇒ ck ≡ 1; α-Diminishing. (α ∈ (0, 1)) σ2k = O((k + 1)−α) =⇒ ck = Θ((k + 1)(1−α)). More noise, measured by σk, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between Θ(1) and Θ(k1−α) for any α ∈ (0, 1). Larger stepsizes lead to convergence rates of order O(k−1), while smaller ones lead to order O(k−α). 
2.2 Algorithm Features Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors: 1. Read. While other processors are writing to shared-memory, read the possibly out-of-sync, delayed coordinates xk−dk,11 , . . . , x k−dk,m m . 2. Compute. Locally, compute the partial gradient∇jkf(x k−dk,1 1 , . . . , x k−dk,m m ). 3. Write. After computing the gradient, replace the jkth coordinate with xk+1jk ∈ argmin y rjk(y) + 〈∇jkf(xk−dk) + νkjk , y − x k jk 〉+ 1 2γkjk ‖y − xkjk‖ 2. Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states xk−dk,11 , . . . , x k−dk,m m need never have simultaneously existed in memory. Although we write the method with a global counter k, the asynchronous processors need not be aware of it; and the requirement that the delays dk remain bounded by τ does not demand coordination, but rather serves only to define τ . What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, νk, captures three phenomena: 1. Computational Error. Noise due to random computational error. 2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7]. 3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients. Of course, the noise model also captures any combination of the above phenomena. The last one is, perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic gradients can be computed faster than their batch counterparts, allowing more frequent updates. 2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method In Algorithm 1, any stochastic estimator ∇f(xk−dk ; ξ) of the gradient may be used, as long as Ek [ ∇f(xk−dk ; ξ) ] = ∇f(xk−dk), and Ek [ ‖∇f(xk−dk ; ξ)−∇f(xk−dk)‖2 ] ≤ σ2. In particular, if Problem 1 takes the form minimize x∈H Eξ [f(x1, . . . , xm; ξ)] + 1 m m∑ j=1 rj(xj), then, in Algorithm 2, the stochastic mini-batch estimator gk = m−1k ∑mk i=1∇f(xk−dk ; ξi), where ξi are IID, may be used in place of ∇f(xk−dk) + νk. A quick calculation shows that Ek [ ‖gk −∇f(xk−dk)‖2 ] = O(m−1k ). Thus, any increasing batch size mk = Ω((k + 1) −α), with α ∈ (0, 1), conforms to Assumption 3. When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for analogous, synchronous algorithms. 3 Convergence Theorem Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity: Sk := E m∑ j=1 ∥∥∥∥∥ 1γkj (wkj − xkj ) + νk ∥∥∥∥∥ 2 ; where (∀j ∈ {1, . . . ,m}) wkj = proxγkj rj (x k j − γkj (∇jf(xk−dk) + νkj )). (3) A reduction to the case r ≡ 0 and dk = 0 reveals that wkj − xkj + γkj νkj = −γkj∇jf(xk) and, hence, Sk = E [ ‖∇f(xk)‖2 ] . More generally, wkj − rkj + γkj νkj ∈ −γkj (∂Lrj(wkj ) +∇jf(xk−dk)) where ∂Lrj is the limiting subdifferential of rj [17] which, if rj is convex, reduces to the standard convex subdifferential familiar from [14]. 
3 Convergence Theorem

Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity

S_k := E[ Σ_{j=1}^m ‖ (1/γ_j^k)(w_j^k − x_j^k) + ν_j^k ‖² ],  where (∀j ∈ {1, …, m}) w_j^k = prox_{γ_j^k r_j}( x_j^k − γ_j^k(∇_j f(x^{k−d_k}) + ν_j^k) ). (3)

A reduction to the case r ≡ 0 and d_k = 0 reveals that w_j^k − x_j^k + γ_j^k ν_j^k = −γ_j^k ∇_j f(x^k) and, hence, S_k = E[‖∇f(x^k)‖²]. More generally, w_j^k − x_j^k + γ_j^k ν_j^k ∈ −γ_j^k(∂_L r_j(w_j^k) + ∇_j f(x^{k−d_k})), where ∂_L r_j is the limiting subdifferential of r_j [17] which, if r_j is convex, reduces to the standard convex subdifferential familiar from [14]. A messy but straightforward calculation shows that our convergence rates for S_k can be converted to convergence rates for elements of ∂_L r(w^k) + ∇f(w^k). We present our main convergence theorem now and defer the proof to Section 4.

Theorem 1 (SAPALM Convergence Rates). Let {x^k}_{k∈N} ⊆ H be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3, the following convergence rates hold: for all T ∈ N, if {ν^k}_{k∈N} is

1. Summable, then  min_{k=0,…,T} S_k ≤ E_{k∼P_T}[S_k] = O( m(L + 2Lτm^{−1/2}) / (T + 1) );

2. α-Diminishing, then  min_{k=0,…,T} S_k ≤ E_{k∼P_T}[S_k] = O( (m(L + 2Lτm^{−1/2}) + m log(T + 1)) · (T + 1)^{−α} );

where, for all T ∈ N, P_T is the distribution on {0, …, T} such that P_T(X = k) ∝ c_k^{−1}.

Effects of Delay and Linear Speedups. The m^{−1/2} term in the convergence rates presented in Theorem 1 prevents the delay τ from dominating our rates of convergence. In particular, as long as τ = O(√m), the convergence rates in the synchronous (τ = 0) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with p processors will converge nearly p times faster than PALM, since the iteration counter will be updated p times as often. As a rule of thumb, τ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as O(√m) processors.

3.1 The Asynchronous Stochastic Block Gradient Method

If the regularizer r is identically zero, then the noise ν^k need not vanish in the limit. The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof.

Theorem 2 (SAPALM Convergence Rates (r ≡ 0)). Let {x^k}_{k∈N} ⊆ H be the SAPALM sequence created by Algorithm 2 in the case that r ≡ 0. If {E_k[‖ν^k‖²]}_{k∈N} is bounded (not necessarily diminishing) and

(∃a ∈ (1, ∞)) (∀k ∈ N) (∀j ∈ {1, …, m})  γ_j^k := 1 / ( a√k (L_j + 2Mτm^{−1/2}) ),

then for all T ∈ N, we have

min_{k=0,…,T} S_k ≤ E_{k∼P_T}[S_k] = O( (m(L + 2Lτm^{−1/2}) + m log(T + 1)) / √(T + 1) ),

where P_T is the distribution on {0, …, T} such that P_T(X = k) ∝ k^{−1/2}.

4 Convergence Analysis

4.1 The Asynchronous Lyapunov Function

Key to the convergence of SAPALM is the following Lyapunov function, defined on H^{1+τ}, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps:

(∀x(0), x(1), …, x(τ) ∈ H)  Φ(x(0), x(1), …, x(τ)) = f(x(0)) + r(x(0)) + (L/(2√m)) Σ_{h=1}^{τ} (τ − h + 1) ‖x(h) − x(h−1)‖².

This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material.

Lemma 1 (Lyapunov Function Supermartingale Inequality). For all k ∈ N, let z^k = (x^k, …, x^{k−τ}) ∈ H^{1+τ}. Then for all ε > 0, we have

E_k[Φ(z^{k+1})] ≤ Φ(z^k) − (1/(2m)) Σ_{j=1}^m ( 1/γ_j^k − (1 + ε)(L_j + 2Lτ/m^{1/2}) ) E_k[‖w_j^k − x_j^k + γ_j^k ν_j^k‖²]
 + Σ_{j=1}^m γ_j^k ( 1 + γ_j^k (1 + ε^{−1})(L_j + 2Lτm^{−1/2}) ) E_k[‖ν_j^k‖²] / (2m),

where for all j ∈ {1, …, m}, we have w_j^k = prox_{γ_j^k r_j}( x_j^k − γ_j^k(∇_j f(x^{k−d_k}) + ν_j^k) ). In particular, for σ_k = 0, we can take ε = 0 and the last line vanishes. Notice that if σ_k = ε = 0 and γ_j^k is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing. If σ_k is nonzero, the factor ε will be used in concert with the stepsize γ_j^k to ensure that noise does not cause the algorithm to diverge.
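As an illustration of how the delayed history enters the analysis, the following sketch (ours) evaluates Φ on the last τ + 1 iterates; `f`, `r`, `L`, and `m` are placeholders for the problem data.

```python
import numpy as np

def lyapunov(history, f, r, L, m):
    """Evaluate the asynchronous Lyapunov function Phi (sketch).

    `history` = [x(0), x(1), ..., x(tau)]: the current iterate first,
    followed by the tau delayed iterates; f and r return scalars.
    """
    tau = len(history) - 1
    drift = sum((tau - h + 1) * np.linalg.norm(history[h] - history[h - 1]) ** 2
                for h in range(1, tau + 1))
    return f(history[0]) + r(history[0]) + L / (2 * np.sqrt(m)) * drift
```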
4.2 Proof of Theorem 1

For either noise regime, we define, for all k ∈ N and j ∈ {1, …, m}, the factor ε := (a − 1)/2. With the assumed choice of γ_j^k and ε, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with A_j^k := w_j^k − x_j^k + γ_j^k ν_j^k, we have

E[Φ(z^{k+1})] ≤ E[Φ(z^k)] − E[ (1/(2m)) Σ_{j=1}^m (1/γ_j^k)(1 − (1 + ε)/(a c_k)) ‖A_j^k‖² ] + Σ_{j=1}^m γ_j^k (1 + γ_j^k (1 + ε^{−1})(L_j + 2Lτm^{−1/2})) E[E_k[‖ν_j^k‖²]] / (2m). (4)

Two upper bounds follow from the definition of γ_j^k, the lower bound c_k ≥ 1, and the straightforward inequalities (a c_k)^{−1}(L_j + 2Mτm^{−1/2})^{−1} ≥ γ_j^k ≥ (a c_k)^{−1}(L + 2Mτm^{−1/2})^{−1}:

(1/c_k) S_k ≤ ( 2ma(L + 2Lτm^{−1/2}) / (1 − (1 + ε)a^{−1}) ) · E[ (1/(2m)) Σ_{j=1}^m (1/γ_j^k)(1 − (1 + ε)/(a c_k)) ‖A_j^k‖² ]

and

Σ_{j=1}^m γ_j^k (1 + γ_j^k (1 + ε^{−1})(L_j + 2Lτm^{−1/2})) E_k[‖ν_j^k‖²] / (2m) ≤ (1 + (a c_k)^{−1}(1 + ε^{−1})) (σ_k²/c_k) / (2a(L + 2Lτm^{−1/2})).

Now rearrange (4), use E[Φ(z^{k+1})] ≥ inf_{x∈H}{f(x) + r(x)} and E[Φ(z^0)] = f(x^0) + r(x^0), and sum (4) over k to get

(1 / Σ_{k=0}^T c_k^{−1}) Σ_{k=0}^T (1/c_k) S_k ≤ [ f(x^0) + r(x^0) − inf_{x∈H}{f(x) + r(x)} + Σ_{k=0}^T (1 + (a c_k)^{−1}(1 + ε^{−1}))(σ_k²/c_k) / (2a(L + 2Lτm^{−1/2})) ] / [ ((1 − (1 + ε)a^{−1}) / (2ma(L + 2Lτm^{−1/2}))) Σ_{k=0}^T c_k^{−1} ].

The left-hand side of this inequality is bounded from below by min_{k=0,…,T} S_k and is precisely the term E_{k∼P_T}[S_k]. What remains to be shown is an upper bound on the right-hand side, which we will now call R_T. If the noise is summable, then c_k ≡ 1, so Σ_{k=0}^T c_k^{−1} = T + 1 and Σ_{k=0}^T σ_k²/c_k < ∞, which implies that R_T = O(m(L + 2Lτm^{−1/2})(T + 1)^{−1}). If the noise is α-diminishing, then c_k = Θ(k^{1−α}), so Σ_{k=0}^T c_k^{−1} = Θ((T + 1)^α) and, because σ_k²/c_k = O(k^{−1}), there exists a B > 0 such that Σ_{k=0}^T σ_k²/c_k ≤ Σ_{k=0}^T B k^{−1} = O(log(T + 1)), which implies that R_T = O((m(L + 2Lτm^{−1/2}) + m log(T + 1))(T + 1)^{−α}).

5 Numerical Experiments

In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. We confirm two properties: (1) SAPALM converges to values nearly as low as PALM given the same number of iterations, and (2) SAPALM exhibits a near-linear speedup as the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10 cores per socket. We use two different nonconvex matrix factorization problems to exhibit these properties, to which we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise. For each of our examples, we generate a matrix A ∈ R^{n×n} with iid standard normal entries, where n = 2000. Although SAPALM is intended for use on much larger problems, using a small problem size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony affects convergence.

1. Sparse PCA with Asynchronous Block Coordinate Updates. We minimize

argmin_{X,Y} (1/2)‖A − XᵀY‖_F² + λ‖X‖₁ + λ‖Y‖₁, (5)

where X ∈ R^{d×n} and Y ∈ R^{d×n} for some d ∈ N. We solve this problem using SAPALM with no noise, ν^k = 0.

2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We minimize

argmin_{X,Y} (1/2)‖A − XᵀY‖_F² + λ(‖X‖_Firm + ‖Y‖_Firm) + (µ/2)(‖X‖_F² + ‖Y‖_F²), (6)

where X ∈ R^{d×n}, Y ∈ R^{d×n}, and ‖·‖_Firm is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3. In both experiments, X and Y are treated as coordinate blocks. Notice that for problem (5), the SAPALM update decouples over the entries of each coordinate block. Each worker updates its coordinate block (say, X) by cycling through the coordinates of X and updating each in turn, restarting at a random coordinate after each cycle; a single block step is sketched below.
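The following sketch (ours) shows what one prox-gradient step on the X block looks like for problem (5); the step size rule 1/‖Y‖₂² is our own simplification, and in SAPALM proper the reads of X and Y would be possibly stale shared-memory snapshots.

```python
import numpy as np

def soft_threshold(Z, t):
    """Prox of t*||.||_1 (entrywise soft-thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def update_X_block(A, X, Y, lam):
    """One prox-gradient step on the X block of problem (5) (sketch).

    f(X) = 0.5*||A - X^T Y||_F^2 has block gradient -Y(A - X^T Y)^T,
    which is Lipschitz in X with constant ||Y||_2^2.
    """
    R = A - X.T @ Y                                       # residual, n x n
    grad_X = -Y @ R.T                                     # gradient in X
    gamma = 1.0 / max(np.linalg.norm(Y, 2) ** 2, 1e-12)   # ~1/L_X (placeholder)
    return soft_threshold(X - gamma * grad_X, gamma * lam)
```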
In Figures 1a and 1c, we see objective function values plotted by iteration. By this metric, SAPALM performs as well as PALM, its single-threaded variant; for the second problem, the curves for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But SAPALM can add additional workers to increment the iteration counter more quickly, as seen in Figure 1b, allowing SAPALM to outperform its single-threaded variant. We measure the speedup S_k(p) of SAPALM by the relative time for p workers to produce k iterates,

S_k(p) = T_k(1) / T_k(p), (7)

where T_k(p) is the time to produce k iterates using p workers. Table 2 shows that SAPALM achieves a near-linear speedup for a range of variable sizes d. (Dashes denote experiments not run.)

threads | d = 10  | d = 20   | d = 100
1       | 1       | 1        | 1
2       | 1.9722  | 1.9812   | –
4       | 3.7623  | 3.7635   | –
8       | 7.1444  | 7.3315   | 7.3719
16      | 13.376  | 14.5322  | 14.743

Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.

Deviations from linearity can be attributed to a breakdown in the abstraction of a “shared memory” computer: as each worker modifies the “shared” variables X and Y, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share the L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration. For small d, write conflicts are more likely; for large d, the communication needed to maintain cache coherency dominates.

6 Discussion

A few straightforward generalizations of our work are possible; we omit them to simplify notation.

Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a maximum number of iterations for which we plan to run SAPALM and adjusting the c_k factors accordingly, as in [14, Equation (3.2.10)].

Cluster points of {x^k}_{k∈N}. Using the strategy employed in [5], it is possible to show that all cluster points of {x^k}_{k∈N} are (almost surely) stationary points of f + r.

Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow L_j to vary: we can assume L_j(x_1, …, x_{j−1}, ·, x_{j+1}, …, x_m)-Lipschitz continuity of each partial gradient ∇_j f(x_1, …, x_{j−1}, ·, x_{j+1}, …, x_m) : H_j → H_j, for every x ∈ H.

7 Conclusion

This paper presented SAPALM, the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for SAPALM and show that, with the parameters suggested by this theory, SAPALM achieves a near-linear speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These results give specific guidance to ensure fast convergence of practical asynchronous methods on a large class of important nonconvex optimization problems, and pave the way towards a deeper understanding of the stability of these methods in the presence of noise.
1. What is the main contribution of the paper in nonconvex nonsmooth optimization? 2. What is the difference between the proposed approach and previous works, particularly [5]? 3. How does the reviewer assess the convergence analysis and experimental results in the paper? 4. What are the strengths and weaknesses of the proposed noisy asynchronous PALM algorithm?
Review
Review This paper proposes a noisy asynchronous PALM algorithm to solve general nonconvex nonsmooth optimization problems. The algorithm is in effect a block-coordinate stochastic proximal gradient method. The paper gives a detailed convergence analysis and achieves near-linear speedup in experiments. This paper is well written and easy to read. The main contribution of this paper is adding noise to the stochastic coordinate gradient in the asynchronous PALM framework, which is also the main difference compared with the previous work [5]. But I think [5] gives more rigorous theoretical analysis and insights. The authors obtain different convergence rates under different noise regimes, which is a good point.
NIPS
Title The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

Abstract We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on L_p perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the L_∞ perturbations case is provably computationally harder than the case 2 ≤ p < ∞.

1 Introduction

In recent years, the design of reliable machine learning systems for security-critical applications, including in computer vision and natural language processing, has been a major goal in the field. One of the main concrete goals in this context has been to develop classifiers that are robust to adversarial examples, i.e., small imperceptible perturbations to the input that can result in erroneous misclassification [BCM+13, SZS+14, GSS15]. This has led to an explosion of research on designing defenses against adversarial examples and attacks on these defenses. See, e.g., [KM18] for a recent tutorial on the topic. Despite significant empirical progress over the past few years, the broad question of designing computationally efficient classifiers that are provably robust to adversarial perturbations remains an outstanding theoretical challenge.

In this paper, we focus on understanding the computational complexity of adversarially robust classification in the (distribution-independent) agnostic PAC model [Hau92, KSS94]. Specifically, we study the learnability of halfspaces (or linear threshold functions) in this model with respect to L_p perturbations. A halfspace is any function h_w : R^d → {±1} of the form¹ h_w(x) = sgn(⟨w, x⟩), where w ∈ R^d is the associated weight vector. The problem of learning an unknown halfspace has been studied for decades — starting with the Perceptron algorithm [Ros58] — and has arguably been one of the most influential problems in the development of machine learning [Vap98, FS97].

Before we proceed, we introduce the relevant terminology. Let C be a concept class of Boolean-valued functions on an instance space X ⊆ R^d and H be a hypothesis class on X. The set of allowable perturbations is defined by a function U : X → 2^X. The robust risk of a hypothesis h ∈ H with respect to a distribution D on X × {±1} is defined as R_U(h, D) = Pr_{(x,y)∼D}[∃z ∈ U(x), h(z) ≠ y]. The (adversarially robust) agnostic PAC learning problem for C is the following: given i.i.d. samples from an arbitrary distribution D on X × {±1}, the goal of the learner is to output a hypothesis h ∈ H such that with high probability it holds that R_U(h, D) ≤ OPT_D + ε, where OPT_D = inf_{f∈C} R_U(f, D) is the robust risk of the best-fitting function in C. Unfortunately, it follows from known hardness results that this formulation is computationally intractable for the class of halfspaces C = {sgn(⟨w, x⟩) : w ∈ R^d} under L_p perturbations, i.e., for U_{p,γ}(x) = {z ∈ X : ‖z − x‖_p ≤ γ}, for some p ≥ 2. (The reader is referred to the supplementary material for a more detailed explanation.)

¹The function sgn : R → {±1} is defined as sgn(u) = 1 if u ≥ 0 and sgn(u) = −1 otherwise.
To be able to obtain computationally efficient algorithms, we relax the above definition in two ways: (1) we allow the hypothesis to be robust within a slightly smaller perturbation region, and (2) we introduce a small constant factor approximation in the error guarantee. In more detail, for some constants 0 < ν < 1 and α > 1, our goal is to efficiently compute a hypothesis h such that with high probability

R_{U_{p,(1−ν)γ}}(h, D) ≤ α · OPT^D_{p,γ} + ε, (1)

where OPT^D_{p,γ} = inf_{f∈C} R_{U_{p,γ}}(f, D). (Note that for ν = 0 and α = 1, we obtain the original definition.) An interesting setting is when ν is a small constant close to 0, say ν = 0.1, and α = 1 + δ, where 0 < δ < 1. In this paper, we characterize the computational complexity of this problem with respect to proper learning algorithms, i.e., algorithms that output a halfspace hypothesis.

Throughout this paper, we will assume that the domain of our functions is bounded in the d-dimensional L_p unit ball B^d_p. All our results immediately extend to general domains with a (necessary) dependence on the diameter of the feasible set. A simple but crucial observation leveraged in our work is the following: the adversarially robust learning problem of halfspaces under L_p perturbations (defined above) is essentially equivalent to the classical problem of agnostic proper PAC learning of halfspaces with an L_p margin.

Let p ≥ 2 and let q be the dual exponent of p, i.e., 1/p + 1/q = 1. The problem of agnostic proper PAC learning of halfspaces with an L_p margin is the following: the learner is given i.i.d. samples from a distribution D over B^d_p × {±1}. For w ∈ B^d_q, its γ-margin error is defined as err^D_γ(w) := Pr_{(x,y)∼D}[sgn(⟨w, x⟩ − y·γ) ≠ y]. We also define OPT^D_γ := min_{w∈B^d_q} err^D_γ(w). An algorithm is a proper ν-robust α-agnostic learner for L_p-γ-margin halfspace if, with probability at least 1 − τ, it outputs a halfspace w ∈ B^d_q with

err^D_{(1−ν)γ}(w) ≤ α · OPT^D_γ + ε. (2)

(When unspecified, the failure probability τ is assumed to be 1/3. It is well known and easy to see that we can always achieve an arbitrarily small value of τ at the cost of an O(log(1/τ)) multiplicative factor in the running time and sample complexity.) We have the following basic observation, which implies that the learning objectives (1) and (2) are equivalent. Throughout this paper, we will state our contributions using the margin formulation (2).

Fact 1. For any non-zero w ∈ R^d, γ ≥ 0 and D over R^d × {±1}, R_{U_{p,γ}}(h_w, D) = err^D_γ(w/‖w‖_q).

1.1 Our Contributions

Our main positive result is a robust and agnostic proper learning algorithm for L_p-γ-margin halfspace with near-optimal running time:

Theorem 2 (Robust Learning Algorithm). Fix 2 ≤ p < ∞ and 0 < γ < 1. For any 0 < ν, δ < 1, there is a proper ν-robust (1 + δ)-agnostic learner for L_p-γ-margin halfspace that draws O(p/(ε²ν²γ²)) samples and runs in time (1/δ)^{O(p/(ν²γ²))} · poly(d/ε). Furthermore, for p = ∞, there is a proper ν-robust (1 + δ)-agnostic learner for L_∞-γ-margin halfspace that draws O(log d/(ε²ν²γ²)) samples and runs in time d^{O(log(1/δ)/(ν²γ²))} · poly(1/ε).

To interpret the running time of our algorithm, we consider the setting δ = ν = 0.1. We note two different regimes. If p ≥ 2 is a fixed constant, then our algorithm runs in time 2^{O(1/γ²)} poly(d/ε). On the other hand, for p = ∞, we obtain a runtime of d^{O(1/γ²)} poly(1/ε). That is, the L_∞ margin case (which corresponds to adversarial learning with L_∞ perturbations) appears to be computationally the hardest. As we show in Theorem 3, this fact is inherent for proper learners.
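As a quick numeric sanity check of Fact 1 (our own illustration, with arbitrary Gaussian data and p = q = 2): the robust error of h_w under worst-case L_2 perturbations of radius γ equals the γ-margin error of w/‖w‖_2, since the worst perturbation of a point (x, y) is z = x − γ·y·w/‖w‖_2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 5, 10000, 0.3
w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = rng.choice([-1, 1], size=n)
sgn = lambda u: np.where(u >= 0, 1, -1)          # footnote-1 convention

# Left side of Fact 1: error of h_w on the worst-case L_2 perturbations.
Z = X - gamma * y[:, None] * (w / np.linalg.norm(w, 2))[None, :]
robust_err = np.mean(sgn(Z @ w) != y)

# Right side: gamma-margin error of the q-normalized weight vector.
wn = w / np.linalg.norm(w, 2)
margin_err = np.mean(sgn(X @ wn - y * gamma) != y)
print(robust_err, margin_err)                    # identical values
```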
Our algorithm establishing Theorem 2 follows via a simple and unified approach, employing a reduction from online (mistake bound) learning [Lit87]. Specifically, we show that any computationally efficient L_p online learner for halfspaces with margin guarantees and mistake bound M can be used in a black-box manner to obtain an algorithm for our problem with runtime roughly poly(d/ε)·(1/δ)^M. Theorem 2 then follows by applying known results from the online learning literature [Gen01a]. For the special case of p = 2 (and ν = 0.1), recent work [DKM19] gave a sophisticated algorithm for our problem with running time poly(d/ε)·2^{Õ(1/(γ²δ))}. We note that our algorithm has significantly better dependence on the parameter δ (quantifying the approximation ratio) and better dependence on 1/γ. Importantly, our algorithm is much simpler and immediately generalizes to all L_p norms.

Perhaps surprisingly, the running time of our algorithm is nearly the best possible for proper learning. For constant p ≥ 2, this follows from the hardness result of [DKM19]. (See the supplementary material for more details.) Furthermore, we prove a tight running time lower bound for robust L_∞-γ-margin proper learning of halfspaces. Roughly speaking, we show that for some sufficiently small constant ν > 0, one cannot hope to significantly speed up our algorithm for ν-robust L_∞-γ-margin learning of halfspaces. Our computational hardness result is formally stated below.

Theorem 3 (Tight Running Time Lower Bound). There exists a constant ν > 0 such that, assuming the (randomized) Gap Exponential Time Hypothesis, there is no proper ν-robust 1.5-agnostic learner for L_∞-γ-margin halfspace that runs in time f(1/γ) · d^{o(1/γ²)} poly(1/ε) for any function f.

As indicated above, our running time lower bound is based on the so-called Gap Exponential Time Hypothesis (Gap-ETH), which roughly states that no subexponential-time algorithm can approximate 3SAT to within a (1 − ε) factor, for some constant ε > 0. Since we will not be dealing with Gap-ETH directly here, we defer the formal treatment of the hypothesis and discussions of its application to the supplementary material. We remark that the constant 1.5 in our theorem is insignificant. We can increase this “gap” to any constant less than 2. We use the value 1.5 to avoid introducing an additional variable. Another remark is that Theorem 3 only applies for a small constant ν > 0. This leaves the possibility of achieving, e.g., a faster 0.9-robust L_∞-γ-margin learner for halfspaces, as an interesting open problem.

1.2 Related Work

A sequence of recent works [CBM18, SST+18, BLPR19, MHS19] has studied the sample complexity of adversarially robust PAC learning for general concept classes of bounded VC dimension and for halfspaces in particular. [MHS19] established an upper bound on the sample complexity of PAC learning any concept class with finite VC dimension. A common implication of the aforementioned works is that, for some concept classes, the sample complexity of adversarially robust PAC learning is higher than the sample complexity of (standard) PAC learning. For the class of halfspaces, which is the focus of the current paper, the sample complexity of adversarially robust agnostic PAC learning was shown to be essentially the same as that of (standard) agnostic PAC learning [CBM18, MHS19].
Turning to computational aspects, [BLPR19, DNV19] showed that there exist classification tasks that are efficiently learnable in the standard PAC model, but are computationally hard in the adversarially robust setting (under cryptographic assumptions). Notably, the classification problems shown hard are artificial, in the sense that they do not correspond to natural concept classes. [ADV19] shows that adversarially robust proper learning of degree-2 polynomial threshold functions is computationally hard, even in the realizable setting. On the positive side, [ADV19] gives a polynomial-time algorithm for adversarially robust learning of halfspaces under L_∞ perturbations, again in the realizable setting. More recently, [MGDS20] generalized this upper bound to a broad class of perturbations, including L_p perturbations. Moreover, [MGDS20] gave an efficient algorithm for learning halfspaces with random classification noise [AL88]. We note that all these algorithms are proper.

The problem of agnostically learning halfspaces with a margin has been studied extensively. A number of prior works [BS00, SSS09, SSS10, LS11, BS12, DKM19] studied the case of L_2 margin and gave a range of time-accuracy tradeoffs for the problem. The most closely related prior work is the recent work [DKM19], which gave a proper ν-robust α-agnostic learner for L_2-γ-margin halfspace with near-optimal running time when α, ν are universal constants, and a nearly matching computational hardness result. The algorithm of the current paper broadly generalizes, simplifies, and improves the algorithm of [DKM19].

2 Upper Bound: From Online to Adversarially Robust Agnostic Learning

In this section, we provide a generic method that turns an online (mistake bound) learning algorithm for halfspaces into an adversarially robust agnostic algorithm, which is then used to prove Theorem 2. Recall that online learning [Lit87] proceeds in a sequence of rounds. In each round, the algorithm is given an example point, produces a binary prediction on this point, and receives feedback on its prediction (after which it is allowed to update its hypothesis). The mistake bound of an online learner is the maximum number of mistakes (i.e., incorrect predictions) it can make over all possible sequences of examples.

We start by defining the notion of online learning with a margin gap in the context of halfspaces:

Definition 4. An online learner A for the class of halfspaces is called an L_p online learner with mistake bound M and (γ, γ') margin gap if it satisfies the following: in each round, A returns a vector w ∈ B^d_q. Moreover, for any sequence of labeled examples (x_i, y_i) such that there exists w* ∈ B^d_q with sgn(⟨w*, x_i⟩ − y_i γ) = y_i for all i, there are at most M values of t such that sgn(⟨w_t, x_t⟩ − y_t γ') ≠ y_t, where w_t = A((x_1, y_1), …, (x_{t−1}, y_{t−1})).

The L_p online learning problem for halfspaces has been studied extensively in the literature; see, e.g., [Lit87, GLS01, Gen01b, Gen03, BB14]. We will use a result of [Gen01a], which gives a polynomial-time L_p online learner with margin gap (γ, (1−ν)γ) and mistake bound O((p−1)/(ν²γ²)).

We are now ready to state our generic proposition that translates an online algorithm with a given mistake bound into an agnostic learning algorithm. We will use the following notation: for S ⊆ B^d_p × {±1}, we will write S instead of D to denote the empirical error on the uniform distribution over S. In particular, we denote err^S_γ(w) := (1/|S|) · |{(x, y) ∈ S | sgn(⟨w, x⟩ − yγ) ≠ y}|.
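In code, the empirical margin error is the following small helper (a hypothetical illustration of ours; the sign convention follows footnote 1):

```python
import numpy as np

def sgn(u):
    """sgn(u) = 1 if u >= 0, and -1 otherwise (footnote 1)."""
    return np.where(u >= 0, 1, -1)

def emp_margin_err(S_x, S_y, w, gamma):
    """err_gamma^S(w): fraction of (x, y) in S with sgn(<w, x> - y*gamma) != y."""
    return np.mean(sgn(S_x @ w - S_y * gamma) != S_y)
```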
The main result of this section is the following proposition. While we state our proposition for the empirical error, it is simple to convert it into a generalization bound, as we will show later in the proof of Theorem 2.

Proposition 5. Assume that there is a polynomial-time L_p online learner A for halfspaces with a (γ, γ') margin gap and mistake bound of M. Then there exists an algorithm that, given a multiset of labeled examples S ⊆ B^d_p × {±1} and δ ∈ (0, 1), runs in poly(|S|d) · 2^{O(M log(1/δ))} time and with probability 9/10 returns w ∈ B^d_q such that err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ.

Notice that our algorithm runs in time poly(|S|d) · 2^{O(M log(1/δ))} and has success probability 9/10. It is more convenient to describe a version of our algorithm that runs in poly(|S|d) time, but has a small success probability of 2^{−O(M log(1/δ))}, as encapsulated by the following lemma.

Lemma 6. Assume that there is a polynomial-time L_p online learner A for halfspaces with a (γ, γ') margin gap and mistake bound of M. Then there exists an algorithm that, given a multiset of labeled examples S ⊆ B^d_p × {±1} and δ ∈ (0, 1), runs in poly(|S|dM) time and with probability 2^{−O(M log(1/δ))} returns w ∈ B^d_q such that err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ.

Before proving Lemma 6, notice that Proposition 5 now follows by running the algorithm from Lemma 6 independently 2^{O(M log(1/δ))} times and returning the w with minimum err^S_{γ'}(w). Since each iteration has a 2^{−O(M log(1/δ))} probability of returning a w with err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ, with 90% probability at least one of our runs finds a w that satisfies this.

Proof of Lemma 6. Let w* ∈ B^d_q denote an “optimal” halfspace with err^S_γ(w*) = OPT^S_γ. The basic idea of the algorithm is to repeatedly run A on larger and larger subsets of samples, each time adding one additional sample in S that the current hypothesis gets wrong. The one worry here is that some of the points in S might be errors, inconsistent with the true classifier w*, and feeding them to our online learner will lead it astray. However, at any point in time, either we misclassify (w.r.t. margin γ') only a (1 + δ) · OPT^S_γ fraction of points (in which case we can abort early and use this hypothesis), or guessing a random misclassified point will have at least an Ω(δ) probability of giving us a non-error. Since our online learner has a mistake bound of M, we will never need to make more than this many correct guesses. Specifically, the algorithm is as follows (a schematic implementation is sketched after the list):

• Let Samples = ∅
• For i = 0 to M:
 – Let w = A(Samples)
 – Let T be the set of (x, y) ∈ S so that sgn(⟨w, x⟩ − yγ') ≠ y
 – If T = ∅, and otherwise with 50% probability, return w
 – Draw (x_i, y_i) uniformly at random from T, and add it to Samples
• Return w
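A schematic implementation of one run, with the black-box online learner A passed in as a callable (our sketch, not the paper's code):

```python
import numpy as np

def lemma6_single_run(S_x, S_y, A, M, gamma_p, rng):
    """One low-success-probability run of the Lemma 6 procedure (sketch).

    A maps a list of labeled examples to a weight vector in B_q^d;
    gamma_p is the smaller margin gamma' of the margin gap.
    """
    samples, w = [], None
    for _ in range(M + 1):
        w = A(samples)
        pred = np.where(S_x @ w - S_y * gamma_p >= 0, 1, -1)
        T = np.flatnonzero(pred != S_y)       # gamma'-margin mistakes
        if T.size == 0 or rng.random() < 0.5:
            return w                          # abort with the current w
        i = rng.choice(T)                     # guess a misclassified point...
        samples.append((S_x[i], S_y[i]))      # ...and feed it to A
    return w

def proposition5(S_x, S_y, A, M, gamma_p, runs, rng):
    """Repeat ~2^{O(M log(1/delta))} independent runs; keep the best w."""
    cands = [lemma6_single_run(S_x, S_y, A, M, gamma_p, rng) for _ in range(runs)]
    errs = [np.mean(np.where(S_x @ w - S_y * gamma_p >= 0, 1, -1) != S_y)
            for w in cands]
    return cands[int(np.argmin(errs))]
```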
To analyze this algorithm, let S_bad be the set of (x, y) ∈ S with sgn(⟨w*, x⟩ − yγ) ≠ y. Recall that by assumption |S_bad| ≤ OPT^S_γ · |S|. We claim that with probability at least 2^{−O(M log(1/δ))} our algorithm never adds an element of S_bad to Samples and never returns a w in the for loop for which err^S_{γ'}(w) > (1 + δ) · OPT^S_γ. This is because during each iteration of the algorithm either:

1. err^S_{γ'}(w) > (1 + δ) · OPT^S_γ. In this case, there is a 50% probability that we do not return w. If we do not return, then |T| ≥ (1 + δ) · |S_bad|, so there is at least a δ/(1+δ) ≥ δ/2 probability that the new element added to Samples is not in S_bad.
2. Or err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ. In this case, there is a 50% probability of returning w.

Hence, there is a (δ/4)^{M+1} ≥ 2^{−O(M log(1/δ))} probability of never adding an element of S_bad to Samples or returning a w in our for-loop with err^S_{γ'}(w) > (1 + δ) · OPT^S_γ. When this occurs, we claim that we output w such that err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ. This is because, if this were not the case, we must have reached the final statement, at which point we have Samples = ((x_0, y_0), …, (x_M, y_M)), where each (x_i, y_i) satisfies sgn(⟨w*, x_i⟩ − y_i γ) = y_i and sgn(⟨w_i, x_i⟩ − y_i γ') ≠ y_i with w_i = A((x_0, y_0), …, (x_{i−1}, y_{i−1})). But this violates the mistake bound of M. Thus, we output w such that err^S_{γ'}(w) ≤ (1 + δ) · OPT^S_γ with probability at least 2^{−O(M log(1/δ))}.

We will now show how Proposition 5 can be used to derive Theorem 2. As stated earlier, we will require the following mistake bound for online learning with a margin gap from [Gen01a].

Theorem 7 ([Gen01a]). For any 2 ≤ p < ∞, there exists a polynomial-time L_p online learner with margin gap (γ, (1−ν)γ) and mistake bound O((p−1)/(ν²γ²)). Furthermore, there is a polynomial-time L_∞ online learner with margin gap (γ, (1−ν)γ) and mistake bound O(log d/(ν²γ²)).

Proof of Theorem 2. Our ν-robust (1 + δ)-agnostic learner for L_p-γ-margin halfspace works as follows. First, it draws the appropriate number of samples m (as stated in Theorem 2) from D. Then, it runs the algorithm from Proposition 5 on these samples for margin gap (γ, (1−ν/2)γ). Let M_p denote the mistake bound for L_p online learning with margin gap (γ, (1−ν/2)γ) given by Theorem 7. Our entire algorithm runs in time poly(m) · 2^{O(M_p · log(1/δ))}. It is simple to check that this results in the claimed running time. As for the error guarantee, let w ∈ B^d_q be the output halfspace. With probability 0.8, we have

err^D_{(1−ν)γ}(w) ≤ err^S_{(1−ν/2)γ}(w) + ε/2 ≤ (1 + δ) · OPT^S_{(1−ν/2)γ} + ε/2 ≤ (1 + δ) · OPT^D_γ + ε,

where the first and last inequalities follow from standard margin generalization bounds [BM02, KP02, KST08] and the second inequality follows from the guarantee of Proposition 5.

3 Tight Running Time Lower Bound: Proof Overview

We will now give a high-level overview of our running time lower bound (Theorem 3). Due to space constraints, we will sometimes be informal; everything will be formalized in the supplementary material. The main component of our hardness result is a reduction from the Label Cover problem², a classical problem in the hardness of approximation literature that is widely used as a starting point for proving strong NP-hardness of approximation results (see, e.g., [ABSS97, Hås96, Hås01, Fei98]).

Definition 8 (Label Cover). A Label Cover instance L = (U, V, E, Σ_U, Σ_V, {π_e}_{e∈E}) consists of

• a bi-regular bipartite graph (U, V, E), referred to as the constraint graph,
• label sets Σ_U and Σ_V,
• for every edge e ∈ E, a constraint (aka projection) π_e : Σ_U → Σ_V.

A labeling of L is a function φ : U → Σ_U. We say that φ covers v ∈ V if there exists σ_v ∈ Σ_V such that³ π_{(u,v)}(φ(u)) = σ_v for all⁴ u ∈ N(v). The value of φ, denoted val_L(φ), is defined as the fraction of v ∈ V covered by φ. The value of L, denoted val(L), is defined as max_{φ:U→Σ_U} val_L(φ). Moreover, we say that φ weakly covers v ∈ V if there exist distinct neighbors u_1, u_2 of v such that π_{(u_1,v)}(φ(u_1)) = π_{(u_2,v)}(φ(u_2)). The weak value of φ, denoted wval_L(φ), is the fraction of v ∈ V weakly covered by φ. The weak value of L, denoted wval(L), is defined as max_{φ:U→Σ_U} wval_L(φ). For a Label Cover instance L, we use k to denote |U| and n to denote |U|·|Σ_U| + |V|·|Σ_V|.
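For concreteness, the quantities val_L(φ) and wval_L(φ) of Definition 8 can be evaluated as follows (an illustrative sketch of ours; `edges`, `pi`, and `phi` are placeholder encodings of the instance and labeling):

```python
from collections import defaultdict

def val_and_wval(edges, pi, phi, V):
    """val_L(phi) and wval_L(phi) for a labeling phi (sketch).

    edges: list of (u, v) pairs; pi[(u, v)]: dict from Sigma_U to Sigma_V;
    phi: dict from U to Sigma_U.
    """
    proj = defaultdict(list)                 # v -> projected neighbor labels
    for (u, v) in edges:
        proj[v].append(pi[(u, v)][phi[u]])
    covered = sum(len(set(ls)) == 1 for ls in proj.values())       # all agree
    weakly = sum(len(ls) != len(set(ls)) for ls in proj.values())  # some pair agrees
    return covered / len(V), weakly / len(V)
```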
The goal of Label Cover is to find an assignment with maximum value. Several strong inapproximability results for Label Cover are known [Raz98, MR10, DS14]. To prove a tight running time lower bound, we require an inapproximability result for Label Cover with a tight running time lower bound as well. Observe that we can solve Label Cover in time n^{O(k)} by enumerating through all possible assignments and computing their values. The following result shows that, even if we aim for a constant approximation ratio, no algorithm can be significantly faster than this “brute-force” algorithm.

Theorem 9 ([Man20]). Assuming Gap-ETH, for any function f and any constant µ ∈ (0, 1), no f(k) · n^{o(k)}-time algorithm can, given a Label Cover instance L, distinguish between the following two cases: (Completeness) val(L) = 1, and (Soundness) wval(L) < µ.

Given a Label Cover instance L, our reduction produces an oracle O that can sample (in polynomial time) from a distribution D over B^d_∞ × {±1} (for some d ≤ n) such that:

• (Completeness) If val(L) = 1, then OPT^D_{γ*} ≤ ε*.
• (Soundness) If wval(L) < µ, then OPT^D_{(1−ν)γ*} > 1.6ε*.
• (Margin and Error Bounds) γ* = Ω(1/√k) and ε* = 1/n^{o(k)}.

Here ν > 0 is some constant. Once we have such a reduction, Theorem 3 follows quite easily. The reason is that, if we assume (by contrapositive) that there exists a ν-robust 1.5-agnostic learner A for L_∞-γ-margin halfspaces that runs in time f(1/γ) · d^{o(1/γ²)} poly(1/ε), then we can turn A into an algorithm for Label Cover by first using the reduction above to give us an oracle O and then running A on O. With appropriate parameters, A can distinguish between the two cases in Theorem 9 in time f(1/γ*) · d^{o(1/(γ*)²)} poly(1/ε*) = f(O(√k)) · n^{o(k)}, which by Theorem 9 violates the randomized Gap-ETH. Therefore, we will henceforth focus on the reduction and its proof of correctness.

Previous Results. To explain the key new ideas behind our reduction, it is important to understand the high-level approaches taken in previous works and why they fail to yield running time lower bounds as in our Theorem 3. Most of the known hardness results for agnostic learning of halfspaces employ reductions from Label Cover [ABSS97, FGKP06, GR09, FGRW12, DKM19]⁵. These reductions use gadgets which are “local” in nature. As we will explain next, such “local” reductions cannot work for our purpose.

²Label Cover is sometimes referred to as Projection Game or Two-Prover One-Round Game.
³This is equivalent to π_{(u_1,v)}(φ(u_1)) = π_{(u_2,v)}(φ(u_2)) for all neighbors u_1, u_2 of v.
⁴For every a ∈ U ∪ V, we use N(a) to denote the set of neighbors of a (with respect to the graph (U, V, E)).
⁵Some of these reductions are stated in terms of reductions from Set Cover or from constraint satisfaction problems (CSP). However, it is well known that these can be formulated as Label Cover.

To describe the reductions, it is convenient to think of each sample (x, y) as a linear constraint ⟨w, x⟩ ≥ 0 when y = +1 and ⟨w, x⟩ < 0 when y = −1, where the variables are the coordinates w_1, …, w_d of w. When we also consider a margin parameter γ* > 0, the constraints become ⟨w, x⟩ ≥ γ* and ⟨w, x⟩ < −γ*, respectively. Notice here that, for our purpose, we want (i) our halfspace w to be in B^d_1, i.e., |w_1| + … + |w_d| ≤ 1, and (ii) each of our samples x to lie in B^d_∞, i.e., |x_1|, …, |x_d| ≤ 1. Although the reductions in previous works vary in certain steps, they do share an overall common framework. With some simplification, they typically let, e.g.,
d = |U|·|Σ_U|, where each coordinate is associated with an element of U × Σ_U. In the completeness case, i.e., when some labeling φ^c covers all vertices in V, the intended solution w^c is defined by w^c_{(u,σ_u)} = 1[σ_u = φ^c(u)]/k for all u ∈ U, σ_u ∈ Σ_U. To ensure that this is essentially the best choice of halfspace, these reductions often appeal to several types of linear constraints. For concreteness, we state a simplified version of those from [ABSS97] below.

• For every (u, σ_u) ∈ U × Σ_U, create the constraint w_{(u,σ_u)} ≤ 0. (This corresponds to the labeled sample (−e_{(u,σ_u)}, +1).)
• For each u ∈ U, create the constraint Σ_{σ∈Σ_U} w_{(u,σ)} ≥ 1/k.
• For every v ∈ V, σ_v ∈ Σ_V and u_1, u_2 ∈ N(v), add Σ_{σ_{u_1}∈π^{−1}_{(u_1,v)}(σ_v)} w_{(u_1,σ_{u_1})} = Σ_{σ_{u_2}∈π^{−1}_{(u_2,v)}(σ_v)} w_{(u_2,σ_{u_2})}. This equality “checks” the Label Cover constraints π_{(u_1,v)} and π_{(u_2,v)}.

Clearly, in the completeness case w^c satisfies all constraints except the non-positivity constraints for the k non-zero coordinates. (It was argued in [ABSS97] that any halfspace must violate many more constraints in the soundness case.) Observe that this reduction does not yield any margin: w^c does not classify any sample with a positive margin. Nonetheless, [DKM19] adapts this reduction to work with a small margin γ* > 0 by adding/subtracting appropriate “slack” from each constraint. For example, the first type of constraint is changed to w_{(u,σ_u)} ≤ γ*. This gives the desired margin γ* in the completeness case. However, for the soundness analysis to work, it is crucial that γ* ≤ O(1/k), as otherwise the constraints can be trivially satisfied⁶ by w = 0. As such, the above reduction does not work for us, since we would like a margin γ* = Ω(1/√k). In fact, this also holds for all known reductions, which are “local” in nature and possess similar characteristics. Roughly speaking, each linear constraint of these reductions involves only a constant number of terms that are intended to be set to O(1/k), which means that we cannot hope to get a margin of more than O(1/k).

Our Approach: Beyond Local Reductions. With the preceding discussion in mind, our reduction has to be “non-local”. To describe our main idea, we need an additional notion of “decomposability” of a Label Cover instance. Roughly speaking, an instance is decomposable if we can partition V into different parts such that each u ∈ U has exactly one induced edge to the vertices in each part.

Definition 10. A Label Cover instance L = (U, V, E, Σ_U, Σ_V, {π_e}_{e∈E}) is said to be decomposable if there exists a partition of V into V_1 ∪ … ∪ V_t such that, for every u ∈ U and j ∈ [t], |N(u) ∩ V_j| = 1. We use the notation v_j(u) to denote the unique element of N(u) ∩ V_j.

As explained above, “local” reductions use each labeled sample to check only a constant number of Label Cover constraints. In contrast, our reduction will check many constraints in each sample. Specifically, for each subset V_j, we will check all the Label Cover constraints involving v ∈ V_j at once. To formalize this goal, we will require the following definition.

Definition 11. Let L = (U, V = V_1 ∪ … ∪ V_t, E, Σ_U, Σ_V, {π_e}_{e∈E}) be a decomposable Label Cover instance. For any j ∈ [t], let Π^j ∈ R^{(V×Σ_V)×(U×Σ_U)} be defined as

Π^j_{(v,σ_v),(u,σ_u)} = 1 if v = v_j(u) and π_{(u,v)}(σ_u) = σ_v, and 0 otherwise.
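A direct, dense construction of Π^j from Definition 11 (our sketch; real instance sizes would call for a sparse representation, and `v_j` and `pi` are placeholder encodings):

```python
import numpy as np

def build_Pi_j(U, V, Sigma_U, Sigma_V, pi, v_j):
    """Dense 0/1 matrix Pi^j from Definition 11 (illustrative sketch).

    v_j[u] is the unique neighbor of u inside the part V_j, and pi[(u, v)]
    maps a label in Sigma_U to a label in Sigma_V.
    """
    row = {(v, sv): i for i, (v, sv) in
           enumerate([(v, sv) for v in V for sv in Sigma_V])}
    col = {(u, su): i for i, (u, su) in
           enumerate([(u, su) for u in U for su in Sigma_U])}
    P = np.zeros((len(row), len(col)))
    for u in U:
        v = v_j[u]                  # the unique neighbor of u in V_j
        for su in Sigma_U:
            # entry is 1 iff v = v_j(u) and pi_(u,v)(sigma_u) = sigma_v
            P[row[(v, pi[(u, v)][su])], col[(u, su)]] = 1.0
    return P
```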
We set d = |U|·|Σ_U|, and our intended solution w^c in the completeness case is the same as described in the previous reduction. For simplicity, suppose that, in the soundness case, we pick φ^s that does not weakly cover any v ∈ V, and set w^s_{(u,σ_u)} = 1[σ_u = φ^s(u)]/k. Our simplified task then becomes: design D such that err^D_γ(w^c) ≪ err^D_{(1−ν)γ}(w^s), where γ = Ω(1/√k) and ν > 0 is a constant.

⁶Note that w = 0 satisfies the constraints with margin γ* − 1/k, which is (1 − o(1))γ* if γ* = ω(1/k).

Our choice of D is based on two observations. The first is a structural difference between w^c(Π^j)ᵀ and w^s(Π^j)ᵀ. Suppose that the constraint graph has right degree Δ. Since φ^c covers all v ∈ V, Π^j “projects” the non-zero coordinates w^c_{(u,φ^c(u))} for all u ∈ N(v) to the same coordinate (v, σ_v), for some σ_v ∈ Σ_V, resulting in a value of Δ/k in this coordinate. On the other hand, since φ^s does not even weakly cover any right vertex, all the non-zero coordinates get mapped by Π^j to different coordinates, resulting in the vector w^s(Π^j)ᵀ having k non-zero coordinates, each of value 1/k. To summarize: w^c(Π^j)ᵀ has k/Δ non-zero coordinates, each of value Δ/k. On the other hand, w^s(Π^j)ᵀ has k non-zero coordinates, each of value 1/k.

Our second observation is the following: suppose that u is a vector with T non-zero coordinates, each of value 1/T. If we take a random ±1 vector s, then ⟨u, s⟩ is simply 1/T times a sum of T i.i.d. Rademacher random variables. Recall a well-known version of the central limit theorem (e.g., [Ber41, Ess42]): as T → ∞, 1/√T times a sum of T i.i.d. Rademacher r.v.s converges in distribution to the standard normal distribution. This implies that lim_{T→∞} Pr[⟨u, s⟩ ≥ 1/√T] = Ω(1). For simplicity, let us ignore the limit for the moment and assume that Pr[⟨u, s⟩ ≥ 1/√T] = Ω(1).

We can now specify the desired distribution D: pick s uniformly at random from {±1}^{V×Σ_V} and then let the sample be sΠ^j with label +1. By the above two observations, w^c will be correctly classified with margin γ* = √(Δ/k) = Ω(1/√k) with constant probability. Furthermore, in the soundness case, w^s can only achieve the same error with margin (roughly) √(1/k) = γ*/√Δ. Intuitively, for Δ > 1, this means that we get a gap of Ω(1/√k) in the margins between the two cases, as desired. This concludes our informal proof overview.

Further Details and The Full Reduction. Having stated the rough main ideas above, we next state the full reduction. To facilitate this, we define the following additional notation:

Definition 12. Let L = (U, V = V_1 ∪ … ∪ V_t, E, Σ_U, Σ_V, {π_e}_{e∈E}) be a decomposable Label Cover instance. For any j ∈ [t], let Π̂^j ∈ R^{(U×Σ_V)×(U×Σ_U)} be such that

Π̂^j_{(u',σ_v),(u,σ_u)} = 1 if u' = u and π_{(u,v_j(u))}(σ_u) = σ_v, and 0 otherwise.

Moreover, let Π̃^j ∈ R^{(V×Σ_V)×(U×Σ_V)} be such that

Π̃^j_{(v,σ'_v),(u,σ_v)} = 1 if v = v_j(u) and σ'_v = σ_v, and 0 otherwise.

Observe that Π^j = Π̃^j · Π̂^j (where Π^j is as in Definition 11). Our full reduction is presented in Figure 1 below. The exact choice of parameters is deferred to the supplementary material. We note that the distribution described in the previous section corresponds to Step 4c in the reduction. The other steps of the reduction are included to handle certain technical details we had glossed over previously. In particular, the following are the two main additional technical issues we have to deal with here.

• (Non-Uniformity of Weights) In the intuitive argument above, we assumed that, in the soundness case, we only consider w^s such that Σ_{σ_u∈Σ_U} w^s_{(u,σ_u)} = 1/k. However, this need not be true in general, and we have to create new samples to (approximately) enforce such a condition.
Specifically, for every subset T ⊆ U, we add a constraint that Σ_{u∈T} Σ_{σ_u∈Σ_U} w_{(u,σ_u)} ≤ |T|/k + γ*. This corresponds to Step 3 in Figure 1. Note that the term γ* on the right-hand side above is necessary to ensure that, in the completeness case, we still have a margin of γ*. Unfortunately, this also leaves the possibility that, e.g., some vertex u ∈ U has as much as γ* of extra “mass”. For technical reasons, it turns out that we have to make sure that these extra “masses” do not contribute too much to ‖w(Π^j)ᵀ‖₂². To do so, we add additional constraints on w(Π̂^j)ᵀ to bound its norm. Such a constraint is of the form: if we pick a subset S of at most ℓ coordinates, then their sum must be at most |S|/k + γ* (and at least −γ*). These correspond to Steps 4a and 4b in Figure 1.

• (Constant Coordinate) Finally, similar to previous works, we cannot have “constants” in our linear constraints. Rather, we need to add a coordinate ⊥ with the intention that w_⊥ = 1/2, and replace the constants in the previous step by w_⊥. Note here that we need two additional constraints (Steps 1 and 2 in Figure 1) to ensure that w_⊥ is roughly 1/2.

4 Conclusions and Open Problems

In this work, we studied the computational complexity of adversarially robust learning of halfspaces in the distribution-independent agnostic PAC model. We provided a simple proper learning algorithm for this problem and a nearly matching computational lower bound. While proper learners are typically preferable due to their interpretability, the obvious open question is whether significantly faster non-proper learners are possible. We leave this as an interesting open problem. Another direction for future work is to understand the effect of distributional assumptions on the complexity of the problem and to explore the learnability of simple neural networks in this context.

Broader Impact

Our work aims to advance the algorithmic foundations of adversarially robust machine learning. This subfield focuses on protecting machine learning models (especially their predictions) against small perturbations of the input data. This broad goal is a pressing challenge in many real-world scenarios, where successful adversarial example attacks can have far-reaching implications given the adoption of machine learning in a wide variety of applications, from self-driving cars to banking. Since the primary focus of our work is theoretical and addresses a simple concept class, we do not expect our results to have immediate societal impact. Nonetheless, we believe that our findings provide interesting insights on the algorithmic possibilities and fundamental computational limitations of adversarially robust learning. We hope that, in the future, these insights could be useful in the design of practically relevant adversarially robust classifiers in the presence of noisy data.

Acknowledgments and Disclosure of Funding

Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. Daniel M. Kane is supported by NSF Award CCF-1553288 (CAREER) and a Sloan Research Fellowship.
1. What is the main contribution of the paper regarding agnostic learning? 2. What are the strengths of the proposed algorithm, particularly in its efficiency and margin guarantee? 3. What are the weaknesses of the paper, especially in terms of its applicability to more complex hypothesis classes? 4. How does the reviewer assess the novelty and significance of the paper's content? 5. Are there any questions or concerns regarding the paper's assumptions, techniques, or results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper studies the problem of agnostically learning a halfspace that is adversarially robust to L_p perturbations (in the agnostic PAC model). For halfspaces, this is equivalent to agnostic proper learning of a halfspace that minimizes the \gamma-margin error, where the margin is measured in some L_p norm. The main results are: 1. An algorithm that runs in time exp(p / \gamma^2)*poly(d) and incurs a small (1+\eps) approximation in both the error and the margin. This also gives good guarantees for the L_\infty norm (as usual, you can think of p ~ log d here for L_\infty). 2. They show that this dependence of the form d^{O(1/\gamma^2)} for the L_\infty norm is unavoidable assuming the Gap-ETH assumption. The algorithm uses a very elegant reduction from online mistake-bounded learning. The algorithmic result was shown in the special case of L_2 [DKM19] using other techniques. Strengths The topic of the paper is timely and well-motivated; we do not understand basic questions involving (computationally) efficiently learning adversarially robust classifiers. The results in this paper represent a solid contribution. Perhaps more interesting are the techniques used in obtaining these results. The reduction from online mistake-bounded learning is clever and very elegant. It immediately implies the correct bounds for different norms, while bounds were only known for L_2 using more complicated arguments. Moreover, the hardness result assuming ETH involves fairly sophisticated PCP machinery that might be useful for other problems (even in fine-grained complexity). Overall, I enjoyed reading the paper and learning from it. Weaknesses The techniques in this paper are somewhat specific to halfspaces, which are very simple hypotheses (adversarial robustness does not correspond to a standard L_p margin for other concept classes). But that being said, it's important to understand these basic hypothesis classes first.
Hence, there is a ( /4)M+1 2 O(M log(1/ )) probability of never adding an element of Sbad to Samples or returning a w in our for-loop with errS 0(w) > (1+ )·OPTS . When this occurs, we claim that we output w such that errS 0(w) (1+ ) ·OPTS . This is because, if this were not the case, we must have reached the final statement at which point we have Samples = ((x 0 , y 0 ), . . . , (xM , yM )), where each (xi, yi) satisfies sgn(hw⇤,xii yi ) = yi and sgn(hwi,xii yi 0) 6= yi with wi = A((x 0 , y 0 ), . . . , (xi 1, yi 1)). But this violates the mistake bound of M . Thus, we output w such that errS 0(w) (1+ )·OPTS with probability at least 2 O(M log(1/ )). We will now show how Proposition 5 can be used to derive Theorem 2. As stated earlier, we will require the following mistake bound for online learning with a margin gap from [Gen01a]. Theorem 7 ([Gen01a]). For any 2 p < 1, there exists a polynomial time Lp online learner with margin gap ( , (1 ⌫) ) and mistake bound O ⇣ (p 1) ⌫2 2 ⌘ . Furthermore, there is a polynomial time L1 online learner with margin gap ( , (1 ⌫) ) and mistake bound O ⇣ log d ⌫2 2 ⌘ . Proof of Theorem 2. Our ⌫-robust (1 + )-agnostic learner for Lp- -margin halfspace works as follows. First, it draws the appropriate number of samples m (as stated in Theorem 2) from D. Then, it runs the algorithm from Proposition 5 on these samples for margin gap ( , (1 ⌫/2) ). Let Mp denote the error bound for Lp online learning with margin gap ( , (1 ⌫/2) ) given by Theorem 7. Our entire algorithm runs in time poly(m) · 2O(Mp·log(1/ )). It is simple to check that this results in the claimed running time. As for the error guarantee, let w 2 Bdq be the output halfspace. With probability 0.8, we have errD (1 ⌫) (w) errS(1 ⌫/2) (w) + ✏/2 (1 + ) ·OPTS(1 ⌫/2) +✏/2 (1 + ) ·OPTD +✏, where the first and last inequalities follow from standard margin generalization bounds [BM02, KP02, KST08] and the second inequality follows from the guarantee of Proposition 5. 3 Tight Running Time Lower Bound: Proof Overview We will now give a high-level overview of our running time lower bound (Theorem 3). Due to space constraint, we will sometimes be informal; everything will be formalized in the supplementary material. The main component of our hardness result will be a reduction from the Label Cover problem2, which is a classical problem in hardness of approximation literature that is widely used as a starting point for proving strong NP-hardness of approximation results (see, e.g., [ABSS97, Hås96, Hås01, Fei98]). Definition 8 (Label Cover). A Label Cover instance L = (U, V,E,⌃U ,⌃V , {⇡e}e2⌃) consists of • a bi-regular bipartite graph (U, V,E), referred to as the constraint graph, • label sets ⌃U and ⌃V , • for every edge e 2 E, a constraint (aka projection) ⇡e : ⌃U ! ⌃V . A labeling of L is a function : U ! ⌃U . We say that covers v 2 V if there exists v 2 ⌃V such that3 ⇡ (u,v)( (u)) = v for all4 u 2 N(v). The value , denoted by valL( ), is defined as the fraction of v 2 V covered by . The value of L, denoted by val(L), is defined as max :U!⌃ U val( ). Moreover, we say that weakly covers v 2 V if there exist distinct neighbors u 1 , u 2 of v such that ⇡ (u 1 ,v)( (u1)) = ⇡(u 2 ,v)( (u2)). The weak value of , denoted by wval( ), is the fraction of v 2 V weakly covered by . The weak value of L, denoted by wval(L), is defined as max :U!⌃ U wval( ). For a Label Cover instance L, we use k to denote |U | and n to denote |U | · |⌃U |+ |V | · |⌃V |. 
The goal of Label Cover is to find an assignment with maximium value. Several strong inapproximability results for Label Cover are known [Raz98, MR10, DS14]. To prove a tight running time lower bound, we require an inapproximability result for Label Cover with a tight running lower bound as well. Observe that we can solve Label Cover in time nO(k) by enumerating through all possible assignments and compute their values. The following result shows that, even if we aim for a constant approximation ratio, no algorithm that can be significantly faster than this “brute-force” algorithm. Theorem 9 ([Man20]). Assuming Gap-ETH, for any function f and any constant µ 2 (0, 1), no f(k) · no(k)-time algorithm can, given a Label Cover instance L, distinguish between the following two cases: (Completeness) val(L) = 1, and, (Soundness) wval(L) < µ. Given a Label Cover instance L, our reduction produces an oracle O that can sample (in polynomial time) from a distribution D over Bd1 ⇥ {±1} (for some d n) such that: • (Completeness) If val(L) = 1, then OPTD ⇤ ✏⇤. • (Soundness) If wval(L) < µ, then OPTD (1 ⌫) ⇤ > 1.6✏ ⇤. • (Margin and Error Bounds) ⇤ = ⌦(1/ p k) and ✏⇤ = 1/no(k). Here ⌫ > 0 is some constant. Once we have such a reduction, Theorem 3 follows quite easily. The reason is that, if we assume (by contrapositive) that there exists a ⌫-robust 1.5-agnostic learner A for L1- -margin halfspaces that runs in time f(1/ ) · do(1/ 2 ) poly(1/✏), then we can turn A to an algorithm for Label Cover by first using the reduction above to give us an oracle O and then running A on O. With appropriate parameters, A can distinguish between the two cases in Theorem 9 in time f(1/ ⇤) · do(1/( ⇤)2) poly(1/✏⇤) = f(O( p k)) · no(k), which by Theorem 9 violates the randomized Gap-ETH. Therefore, we will henceforth focus on the reduction and its proof of correctness. Previous Results. To explain the key new ideas behind our reduction, it is important to understand high-level approaches taken in previous works and why they fail to yield running time lower bounds as in our Theorem 3. Most of the known hardness results for agnostic learning of halfspaces employ reductions from Label Cover [ABSS97, FGKP06, GR09, FGRW12, DKM19]5. These reductions use gadgets which are “local” in nature. As we will explain next, such “local” reductions cannot work for our purpose. 2Label Cover is sometimes referred to as Projection Game or Two-Prover One-Round Game. 3This is equivalent to ⇡(u 1 ,v)( (u1)) = ⇡(u 2 ,v)( (u2)) for all neighbors u1, u2 of v. 4For every a 2 U [V , we use N(a) to denote the set of neighbors of a (with respect to the graph (U, V,E)). 5Some of these reductions are stated in terms of reductions from Set Cover or from constraint satisfaction problems (CSP). However, it is well-known that these can be formulated as Label Cover. To describe the reductions, it is convenient to think of each sample (x, y) as a linear constraint hw,xi 0 when y = +1 and hw,xi < 0 when y = 1, where the variables are the coordinates w 1 , . . . , wd of w. When we also consider a margin parameter ⇤ > 0, then the constraints become hw,xi ⇤ and hw,xi < ⇤, respectively. Notice here that, for our purpose, we want (i) our halfspace w to be in Bd 1 , i.e., |w 1 |+ · · ·+ |wd| 1, and (ii) each of our samples x to lie in Bd1, i.e., |x 1 |, . . . , |xd| 1. Although the reductions in previous works vary in certain steps, they do share an overall common framework. With some simplification, they typically let e.g. 
d = |U | · |⌃U |, where each coordinate is associated with U ⇥ ⌃U . In the completeness case, i.e., when some labeling c covers all vertices in V , the intended solution wc is defined by wc (u, u ) = 1[ u = (u)]/k for all u 2 U, u 2 ⌃U . To ensure that this is essentially the best choice of halfspace, these reductions often appeal to several types of linear constraints. For concreteness, we state a simplified version of those from [ABSS97] below. • For every (u, U ) 2 U ⇥ ⌃U , create the constraint w (u, u ) 0. (This corresponds to the labeled sample ( e (a, ),+1).) • For each u 2 U , create the constraint P 2⌃ U w (u, ) 1/k. • For every v 2 V , v 2 ⌃V and u1, u2 2 N(v), add P u 1 2⇡ 1 (u 1 ,v) ( v ) w (u 1 , u 1 ) = P u 2 2⇡ 1 (u 2 ,v) ( v ) w (u 2 , u 2 ) . This equality “checks” the Label Cover constraints ⇡ (u 1 ,v) and ⇡ (u 2 ,v). Clearly, in the completeness case wc satisfies all constraints except the non-positivity constraints for the k non-zero coordinates. (It was argued in [ABSS97] that any halfspace must violate many more constraints in the soundness case.) Observe that this reduction does not yield any margin: wc does not classify any sample with a positive margin. Nonetheless, [DKM19] adapts this reduction to work with a small margin ⇤ > 0 by adding/subtracting appropriate “slack” from each constraint. For example, the first type of constraint is changed to w (u, u ) ⇤. This gives the desired margin ⇤ in the completeness case. However, for the soundness analysis to work, it is crucial that ⇤ O(1/k), as otherwise the constraints can be trivially satisfied6 by w = 0. As such, the above reduction does not work for us, since we would like a margin ⇤ = ⌦(1/ p k). In fact, this also holds for all known reductions, which are “local” in nature and possess similar characteristics. Roughly speaking, each linear constraint of these reductions involves only a constant number of terms that are intended to be set to O(1/k), which means that we cannot hope to get a margin more than O(1/k). Our Approach: Beyond Local Reductions. With the preceding discussion in mind, our reduction has to be “non-local”. To describe our main idea, we need an additional notion of “decomposability” of a Label Cover instance. Roughly speaking, an instance is decomposable if we can partition V into different parts such that each u 2 U has exactly one induced edge to the vertices in each part. Definition 10. A Label Cover instance L = (U, V,E,⌃U ,⌃V , {⇡e}e2E) is said to be decomposable if there exists a partition of V into V 1 [· · ·[Vt such that, for every u 2 U and j 2 [t], |N(u)\Vj | = 1. We use the notation vj(u) to the denote the unique element in N(u) \ Vj . As explained above, “local” reductions use each labeled sample to only check a constant number of Label Cover constraints. In contrast, our reduction will check many constraints in each sample. Specifically, for each subset V j , we will check all the Label Cover constraints involving v 2 V j at once. To formalize this goal, we will require the following definition. Definition 11. Let L = (U, V = V 1 [ · · · [ Vt, E,⌃U ,⌃V , {⇡e}e2E) be a decomposable Label Cover instance. For any j 2 [t], let ⇧j 2 R(V⇥⌃V )⇥(U⇥⌃U ) be defined as ⇧j (v, v ),(u, u ) = ⇢ 1 if v = vj(u) and ⇡ (u,v)( u) = v, 0 otherwise. We set d = |U | · |⌃U | and our intended solution wc in the completeness case is the same as described in the previous reduction. 
For simplicity, suppose that, in the soundness case, we pick s that does 6Note that w = 0 satisfies the constraints with margin ⇤ 1/k, which is (1 o(1)) ⇤ if ⇤ = !(1/k). not weakly cover any v 2 V and set ws (u, u ) = 1[ u = s(u)]/k. Our simplified task then becomes: Design D such that errD (wc) ⌧ errD (1 ⌫) (w s), where = ⌦(1/ p k), ⌫ > 0 is a constant. Our choice of D is based on two observations. The first is a structural difference between wc(⇧j)T and ws(⇧j)T . Suppose that the constraint graph has right degree . Since c covers all v 2 V , ⇧j “projects” the non-zeros coordinates wc (u, c(u)) for all u 2 N(v) to the same coordinate (v, v), for some v 2 ⌃V , resulting in the value of /k in this coordinate. On the other hand, since s does not even weakly cover any right vertex, all the non-zero coordinates get maps by ⇧j to different coordinates, resulting in the vector ws(⇧j)T having k non-zero coordinates, each having value 1/k. To summarize, we have: wc(⇧j)T has k/ non-zero coordinates, each of value /k. On the other hand, ws(⇧j)T has k non-zero coordinates, each of value 1/k. Our second observation is the following: suppose that u is a vector with T non-zero coordinates, each of value 1/T . If we take a random ±1 vector s, then hu, si is simply 1/T times a sum of T i.i.d. Rademacher random variables. Recall a well-known version of the central limit theorem (e.g., [Ber41, Ess42]): as T ! 1, 1/ p T times a sum of T i.i.d. Rademacher r.v.s converges in distribution to the normal distribution. This implies that limT!1 Pr[hu, si 1/ p T ] = (1). For simplicity, let us ignore the limit for the moment and assume that Pr[hu, si 1/ p T ] = (1). We can now specify the desired distribution D: Pick s uniformly at random from {±1}V⇥⌃V and then let the sample be s⇧j with label +1. By the above two observations, wc will be correctly classified with margin ⇤ = p /k = ⌦(1/ p k) with probability (1). Furthermore, in the soundness case, w s can only get the same error with margin (roughly) p 1/k = ⇤/ p . Intuitively, for > 1, this means that we get a gap of ⌦(1/ p k) in the margins between the two cases, as desired. This concludes our informal proof overview. Further Details and The Full Reduction. Having stated the rough main ideas above, we next state the full reduction. To facilitate this, we define the following additional notations: Definition 12. Let L = (U, V = V 1 [ · · · [ Vt, E,⌃U ,⌃V , {⇡e}e2E) be a decomposable Label Cover instance. For any j 2 [t], let ⇧̂j 2 R(U⇥⌃V )⇥(U⇥⌃U ) be such that ⇧̂j (u0, v ),(u, u ) = ⇢ 1 if u0 = u and ⇡ (u,vj(u))( u) = v, 0 otherwise. Moreover, let ⇧̃j 2 R(V⇥⌃V )⇥(U⇥⌃V ) be such that ⇧̃j (v, 0 v ),(u, v ) = ⇢ 1 if v = vj(u) and 0v = v 0 otherwise. Observe that ⇧j = ⇧̃j · ⇧̂j (where ⇧j is as in Definition 11). Our full reduction is present in Figure 1 below. The exact choice of parameters are deferred to the supplementary material. We note that the distribution described in the previous section corresponds to Step 4c in the reduction. The other steps of the reductions are included to handle certain technical details we had glossed over previously. In particular, the following are the two main additional technical issues we have to deal with here. • (Non-Uniformity of Weights) In the intuitive argument above, we assume that, in the soundness case, we only consider ws such that P u 2⌃ U ws (u, u ) = 1/k. However, this needs not be true in general, and we have to create new samples to (approximately) enforce such a condition. 
Specifically, for every subset T ✓ U , we add a constraint thatP u2T P u 2⌃ U w (u, u ) |T |/k ⇤. This corresponds to Step 3 in Figure 1. Note that the term ⇤ on the right hand side above is necessary to ensure that, in the completeness case, we still have a margin of ⇤. Unfortunately, this also leaves the possibility of, e.g., some vertex u 2 U has as much as ⇤ extra “mass”. For technical reasons, it turns out that we have to make sure that these extra “masses” do not contribute to too much of kw(⇧j)T k2 2 . To do so, we add additional constraints on w(⇧̂j)T to bound its norm. Such a constraint is of the form: If we pick a subset S of at most ` coordinates, then their sum must be at most |S|/k + ⇤ (and at least ⇤). These corresponds to Steps 4a and 4b in Figure 1. • (Constant Coordinate) Finally, similar to previous works, we cannot have “constants” in our linear constraints. Rather, we need to add a coordinate ? with the intention that w? = 1/2, and replace the constants in the previous step by w?. Note here that we need two additional constraints (Steps 1 and 2 in Figure 1) to ensure that w? has to be roughly 1/2. 4 Conclusions and Open Problems In this work, we studied the computational complexity of adversarially robust learning of halfspaces in the distribution-independent agnostic PAC model. We provided a simple proper learning algorithm for this problem and a nearly matching computational lower bound. While proper learners are typically preferable due to their interpretability, the obvious open question is whether significantly faster non-proper learners are possible. We leave this as an interesting open problem. Another direction for future work is to understand the effect of distributional assumptions on the complexity of the problem and to explore the learnability of simple neural networks in this context. Broader Impact Our work aims to advance the algorithmic foundations of adversarially robust machine learning. This subfield focuses on protecting machine learning models (especially their predictions) against small perturbations of the input data. This broad goal is a pressing challenge in many real-world scenarios, where successful adversarial example attacks can have far-reaching implications given the adoption of machine learning in a wide variety of applications, from self-driving cars to banking. Since the primary focus of our work is theoretical and addresses a simple concept class, we do not expect our results to have immediate societal impact. Nonetheless, we believe that our findings provide interesting insights on the algorithmic possibilities and fundamental computational limitations of adversarially robust learning. We hope that, in the future, these insights could be useful in the design of practically relevant adversarially robust classifiers in the presence of noisy data. Acknowledgments and Disclosure of Funding Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. Daniel M. Kane is supported by NSF Award CCF-1553288 (CAREER) and a Sloan Research Fellowship.
1. What is the focus of the paper regarding halfspace learning? 2. What are the contributions of the paper, particularly in terms of computational efficiency and hardness results? 3. Are there any limitations or concerns regarding the hardness result, specifically with regards to the level of tightness in the dependence on d and γ? 4. How do the introduced techniques in the paper have potential applications beyond the current context?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper studies the problem of adversarially robust (proper) learning of halfspaces in the agnostic case, with respect to L_p perturbations. This paper furthers our understanding of what we can / cannot do in adversarially robust learning, for the simple class of halfspaces. The contributions of the paper are as follows: 1. A computationally "efficient" learning algorithm that handles the case of L_p perturbations for all p >= 2, including p = infinity. 2. A hardness result showing that in p=infinity case, the running time of the algorithm is nearly optimal (at least in some parameters) assuming the "Gap Exponential Time Hypothesis". This shows the limits of the best results we could hope for without making any distributional assumptions. RESPONSE TO AUTHORS: Thank you for the response. My rating for the paper is unchanged, and I continue to feel the paper should be accepted. Strengths The paper introduces new techniques to establish their results, which might be useful beyond the context of this paper: - To give the learning algorithm, the paper introduces a simple reduction from online learning to agnostic PAC learning. I think this reduction could be useful beyond the setting studied in this paper. - The hardness result proved here reduces from the standard problem of 'Label Cover'. However, there are many subtle details to make the proof go through. In particular, the reduction constructed here has to be "non-local", which goes beyond prior works which primarily used "local" reductions. Weaknesses I find the hardness result a bit limited in that it only show that the dependence in terms of d and \gamma is tight in the case of L_{\infty} perturbations. The result is stated for a "small constant" \nu > 0. Perhaps it might help to say how small \nu needs to be for the result to hold. For example, would \nu=0.1 work?
NIPS
Title The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise Abstract We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on Lp perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the L1 perturbations case is provably computationally harder than the case 2  p < 1. 1 Introduction In recent years, the design of reliable machine learning systems for secure-critical applications, including in computer vision and natural language processing, has been a major goal in the field. One of the main concrete goals in this context has been to develop classifiers that are robust to adversarial examples, i.e., small imperceptible perturbations to the input that can result in erroneous misclassification [BCM+13, SZS+14, GSS15]. This has led to an explosion of research on designing defenses against adversarial examples and attacks on these defenses. See, e.g., [KM18] for a recent tutorial on the topic. Despite significant empirical progress over the past few years, the broad question of designing computationally efficient classifiers that are provably robust to adversarial perturbations remains an outstanding theoretical challenge. In this paper, we focus on understanding the computational complexity of adversarially robust classification in the (distribution-independent) agnostic PAC model [Hau92, KSS94]. Specifically, we study the learnability of halfspaces (or linear threshold functions) in this model with respect to Lp perturbations. A halfspace is any function h w : Rd ! {±1} of the form1 h w (x) = sgn (hw,xi), where w 2 Rd is the associated weight vector. The problem of learning an unknown halfspace has been studied for decades — starting with the Perceptron algorithm [Ros58] — and has arguably been one of the most influential problems in the development of machine learning [Vap98, FS97]. Before we proceed, we introduce the relevant terminology. Let C be a concept class of Boolean-valued functions on an instance space X ✓ Rd and H be a hypothesis class on X . The set of allowable perturbations is defined by a function U : X ! 2X . The robust risk of a hypothesis h 2 H with respect to a distribution D on X⇥{±1} is defined as RU (h,D) = Pr (x,y)⇠D[9z 2 U(x), h(z) 6= y]. The (adversarially robust) agnostic PAC learning problem for C is the following: Given i.i.d. samples from an arbitrary distribution D on X ⇥ {±1}, the goal of the learner is to output a hypothesis h 2 H 1The function sgn : R ! {±1} is defined as sgn(u) = 1 if u 0 and sgn(u) = 1 otherwise. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. such that with high probability it holds RU (h,D) OPTD +✏, where OPTD = inff2C RU (f,D) is the robust risk of the best-fitting function in C. Unfortunately, it follows from known hardness results that this formulation is computationally intractable for the class of halfspaces C = {sgn(hw,xi),w 2 Rd} under Lp perturbations, i.e, for Up, (x) = {z 2 X : kz xkp }, for some p 2. (The reader is referred to the supplementary material for a more detailed explanation.) 
To be able to obtain computationally efficient algorithms, we relax the above definition in two ways: (1) We allow the hypothesis to be robust within a slightly smaller perturbation region, and (2) We introduce a small constant factor approximation in the error guarantee. In more detail, for some constants 0 < ⌫ < 1 and ↵ > 1, our goal is to efficiently compute a hypothesis h such that with high probability RU p,(1 ⌫) (h,D) ↵ ·OPTDp, +✏ , (1) where OPTDp, = inff2C RUp, (f,D). (Note that for ⌫ = 0 and ↵ = 1, we obtain the original definition.) An interesting setting is when ⌫ is a small constant close to 0, say ⌫ = 0.1, and ↵ = 1+ , where 0 < < 1. In this paper, we characterize the computational complexity of this problem with respect to proper learning algorithms, i.e., algorithms that output a halfspace hypothesis. Throughout this paper, we will assume that the domain of our functions is bounded in the ddimensional Lp unit ball Bdp. All our results immediately extend to general domains with a (necessary) dependence on the diameter of the feasible set. A simple but crucial observation leveraged in our work is the following: The adversarially robust learning problem of halfspaces under Lp perturbations (defined above) is essentially equivalent to the classical problem of agnostic proper PAC learning of halfspaces with an Lp margin. Let p 2, q be the dual exponent of p, i.e., 1/p + 1/q = 1. The problem of agnostic proper PAC learning of halfspaces with an Lp margin is the following: The learner is given i.i.d. samples from a distribution D over Bdp ⇥ {±1}. For w 2 Bdq , its -margin error is defined as errD (w) := Pr (x,y)⇠D[sgn(hw,xi y · ) 6= y]. We also define OPTD := minw2Bd q errD (w). An algorithm is a proper ⌫-robust ↵-agnostic learner for Lp- -margin halfspace if, with probability at least 1 ⌧ , it outputs a halfspace w 2 Bdq with errD (1 ⌫) (w) ↵ ·OPTD +✏ . (2) (When unspecified, the failure probability ⌧ is assumed to be 1/3. It is well-known and easy to see that we can always achieve arbitrarily small value of ⌧ at the cost of O(log(1/⌧)) multiplicative factor in the running time and sample complexity.) We have the following basic observation, which implies that the learning objectives (1) and (2) are equivalent. Throughout this paper, we will state our contributions using the margin formulation (2). Fact 1. For any non-zero w 2 Rd, 0 and D over Rd ⇥ {±1}, RU p, (h w ,D) = errD ( wkwk q ). 1.1 Our Contributions Our main positive result is a robust and agnostic proper learning algorithm for Lp- -margin halfspace with near-optimal running time: Theorem 2 (Robust Learning Algorithm). Fix 2 p < 1 and 0 < < 1. For any 0 < ⌫, < 1, there is a proper ⌫-robust (1 + )-agnostic learner for Lp- -margin halfspace that draws O( p✏2⌫2 2 ) samples and runs in time (1/ )O ⇣ p ⌫ 2 2 ⌘ · poly(d/✏). Furthermore, for p = 1, there is a proper ⌫-robust (1 + )-agnostic learner for L1- -margin halfspace that draws O( log d✏2⌫2 2 ) samples and runs in time d O ⇣ log(1/ ) ⌫ 2 2 ⌘ · poly(1/✏). To interpret the running time of our algorithm, we consider the setting = ⌫ = 0.1. We note two different regimes. If p 2 is a fixed constant, then our algorithm runs in time 2O(1/ 2) poly(d/✏). On the other hand, for p = 1, we obtain a runtime of dO(1/ 2) poly(1/✏). That is, the L1 margin case (which corresponds to adversarial learning with L1 perturbations) appears to be computationally the hardest. As we show in Theorem 3, this fact is inherent for proper learners. 
Our algorithm establishing Theorem 2 follows via a simple and unified approach, employing a reduction from online (mistake bound) learning [Lit87]. Specifically, we show that any computationally efficient Lp online learner for halfspaces with margin guarantees and mistake bound M can be used in a black-box manner to obtain an algorithm for our problem with runtime roughly poly(d/✏)(1/ )M . Theorem 2 then follows by applying known results from the online learning literature [Gen01a]. For the special case of p = 2 (and ⌫ = 0.1), recent work [DKM19] gave a sophisticated algorithm for our problem with running time poly(d/✏)2 ˜O(1/( 2 )). We note that our algorithm has significantly better dependence on the parameter (quantifying the approximation ratio), and better dependence on 1/ . Importantly, our algorithm is much simpler and immediately generalizes to all Lp norms. Perhaps surprisingly, the running time of our algorithm is nearly the best possible for proper learning. For constant p 2, this follows from the hardness result of [DKM19]. (See the supplementary material for more details.) Furthermore, we prove a tight running time lower bound for robust L1- - margin proper learning of halfspaces. Roughly speaking, we show that for some sufficiently small constant ⌫ > 0, one cannot hope to significantly speed-up our algorithm for ⌫-robust L1- -margin learning of halfspaces. Our computational hardness result is formally stated below. Theorem 3 (Tight Running Time Lower Bound). There exists a constant ⌫ > 0 such that, assuming the (randomized) Gap Exponential Time Hypothesis, there is no proper ⌫-robust 1.5-agnostic learner for L1- -margin halfspace that runs in time f(1/ ) · do(1/ 2 ) poly(1/✏) for any function f . As indicated above, our running time lower bound is based on the so-called Gap Exponential Time Hypothesis (Gap-ETH), which roughly states that no subexponential time algorithm can approximate 3SAT to within (1 ✏) factor, for some constant ✏ > 0. Since we will not be dealing with Gap-ETH directly here, we defer the formal treatment of the hypothesis and discussions on its application to the supplementary material. We remark that the constant 1.5 in our theorem is insignificant. We can increase this “gap” to any constant less than 2. We use the value 1.5 to avoid introducing an additional variable. Another remark is that Theorem 3 only applies for a small constant ⌫ > 0. This leaves the possibility of achieving, e.g., a faster 0.9-robust L1- -margin learner for halfspaces, as an interesting open problem. 1.2 Related Work A sequence of recent works [CBM18, SST+18, BLPR19, MHS19] has studied the sample complexity of adversarially robust PAC learning for general concept classes of bounded VC dimension and for halfspaces in particular. [MHS19] established an upper bound on the sample complexity of PAC learning any concept class with finite VC dimension. A common implication of the aforementioned works is that, for some concept classes, the sample complexity of adversarially robust PAC learning is higher than the sample complexity of (standard) PAC learning. For the class of halfspaces, which is the focus of the current paper, the sample complexity of adversarially robust agnostic PAC learning was shown to be essentially the same as that of (standard) agnostic PAC learning [CBM18, MHS19]. 
Turning to computational aspects, [BLPR19, DNV19] showed that there exist classification tasks that are efficiently learnable in the standard PAC model, but are computationally hard in the adversarially robust setting (under cryptographic assumptions). Notably, the classification problems shown hard are artificial, in the sense that they do not correspond to natural concept classes. [ADV19] shows that adversarially robust proper learning of degree-2 polynomial threshold functions is computationally hard, even in the realizable setting. On the positive side, [ADV19] gives a polynomial-time algorithm for adversarially robust learning of halfspaces under L1 perturbations, again in the realizable setting. More recently, [MGDS20] generalized this upper bound to a broad class of perturbations, including Lp perturbations. Moreover, [MGDS20] gave an efficient algorithm for learning halfspaces with random classification noise [AL88]. We note that all these algorithms are proper. The problem of agnostically learning halfspaces with a margin has been studied extensively. A number of prior works [BS00, SSS09, SSS10, LS11, BS12, DKM19] studied the case of L 2 margin and gave a range of time-accuracy tradeoffs for the problem. The most closely related prior work is the recent work [DKM19], which gave a proper ⌫-robust ↵-agnostic learning for L 2 - -margin halfspace with near-optimal running time when ↵, ⌫ are universal constants, and a nearly matching computational hardness result. The algorithm of the current paper broadly generalizes, simplifies, and improves the algorithm of [DKM19]. 2 Upper Bound: From Online to Adversarially Robust Agnostic Learning In this section, we provide a generic method that turns an online (mistake bound) learning algorithm for halfspaces into an adversarially robust agnostic algorithm, which is then used to prove Theorem 2. Recall that online learning [Lit87] proceeds in a sequence of rounds. In each round, the algorithm is given an example point, produces a binary prediction on this point, and receives feedback on its prediction (after which it is allowed to update its hypothesis). The mistake bound of an online learner is the maximum number of mistakes (i.e., incorrect predictions) it can make over all possible sequences of examples. We start by defining the notion of online learning with a margin gap in the context of halfspaces: Definition 4. An online learner A for the class of halfspaces is called an Lp online learner with mistake bound M and ( , 0) margin gap if it satisfies the following: In each round, A returns a vector w 2 Bdq . Moreover, for any sequence of labeled examples (xi, yi) such that there exists w ⇤ 2 Bdq with sgn(hw⇤,xii yi ) = yi for all i, there are at most M values of t such that sgn(hwt,xti yt 0) 6= yt, where wt = A((x1, y1), . . . , (xt 1, yt 1)). The Lp online learning problem of halfspaces has been studied extensively in the literature, see, e.g., [Lit87, GLS01, Gen01b, Gen03, BB14]. We will use a result of [Gen01a], which gives a polynomial time Lp online learner with margin gap ( , (1 ⌫) ) and mistake bound O((p 1)/⌫2 2). We are now ready to state our generic proposition that translates an online algorithm with a given mistake bound into an agnostic learning algorithm. We will use the following notation: For S ✓ Bdp ⇥ {±1}, we will use S instead of D to denote the empirical error on the uniform distribution over S. In particular, we denote errS (w) := 1 |S| · |{(x, y) 2 S | sgn(hw,xi y ) 6= y}|. 
The main result of this section is the following proposition. While we state our proposition for the empirical error, it is simple to convert it into a generalization bound as we will show later in the proof of Theorem 2. Proposition 5. Assume that there is a polynomial time Lp online learner A for halfspaces with a ( , 0) margin gap and mistake bound of M . Then there exists an algorithm that given a multiset of labeled examples S ✓ Bdp ⇥ {±1} and 2 (0, 1), runs in poly(|S|d) · 2O(M log(1/ )) time and with probability 9/10 returns w 2 Bdq such that errS 0(w) (1 + ) ·OPTS . Notice that our algorithm runs in time poly(|S|d) · 2O(M log(1/ )) and has success probability 9/10. It is more convenient to describe a version of our algorithm that runs in poly(|S|d) time, but has small success probability of 2 O(M log(1/ )), as encapsulated by the following lemma. Lemma 6. Assume that there is a polynomial time Lp online learner A for halfspaces with a ( , 0) margin gap and mistake bound of M . Then there exists an algorithm that given a multiset of labeled examples S ✓ Bdp ⇥ {±1} and 2 (0, 1), runs in poly(|S|dM) time and with probability 2 O(M log(1/ )) returns w 2 Bdq such that errS 0(w) (1 + ) ·OPTS . Before proving Lemma 6, notice that Proposition 5 now follows by running the algorithm from Lemma 6 independently 2O(M log(1/ )) times and returning the w with minimum errS 0(w). Since each iteration has a 2 O(M log(1/ )) probability of returning a w with errS 0(w) (1 + ) ·OPTS , with 90% probability at least one of our runs finds a w that satisfies this. Proof of Lemma 6. Let w⇤ 2 Bdq denote an “optimal” halfspace with errS (w⇤) = OPTS . The basic idea of the algorithm is to repeatedly run A on larger and larger subsets of samples each time adding one additional sample in S that the current hypothesis gets wrong. The one worry here is that some of the points in S might be errors, inconsistent with the true classifier w⇤, and feeding them to our online learner will lead it astray. However, at any point in time, either we misclassify (w.r.t. margin 0) only a (1 + ) · OPTS fraction of points (in which case we can abort early and use this hypothesis) or guessing a random misclassified point will have at least an ⌦( ) probability of giving us a non-error. Since our online learner has a mistake bound of M , we will never need to make more than this many correct guesses. Specifically, the algorithm is as follows: • Let Samples = ; • For i = 0 to M – Let w = A(Samples) – Let T be the set of (x, y) 2 S so that sgn(hw,xi y 0) 6= y – If T = ;, and otherwise with 50% probability, return w – Draw (xi, yi) uniformly at random from T , and add it to Samples • Return w To analyze this algorithm, let Sbad be the set of (x, y) 2 S with sgn(hw⇤,xi y ) 6= y. Recall that by assumption |Sbad| OPTS ·|S|. We claim that with probability at least 2 O(M log(1/ )) our algorithm never adds an element of Sbad to Samples and never returns a w in the for loop for which errS 0(w) > (1 + ) ·OPTS . This is because during each iteration of the algorithm either: 1. errS 0(w) > (1+ ) ·OPTS . In this case, there is a 50% probability that we do not return w. If we do not return, then |T | (1 + ) · |Sbad| so there is at least a 1+ /2 probability that the new element added to Samples is not in Sbad. 2. Or errS 0(w) (1 + ) ·OPTS . In this case, there is a 50% probability of returning w. 
Hence, there is a ( /4)M+1 2 O(M log(1/ )) probability of never adding an element of Sbad to Samples or returning a w in our for-loop with errS 0(w) > (1+ )·OPTS . When this occurs, we claim that we output w such that errS 0(w) (1+ ) ·OPTS . This is because, if this were not the case, we must have reached the final statement at which point we have Samples = ((x 0 , y 0 ), . . . , (xM , yM )), where each (xi, yi) satisfies sgn(hw⇤,xii yi ) = yi and sgn(hwi,xii yi 0) 6= yi with wi = A((x 0 , y 0 ), . . . , (xi 1, yi 1)). But this violates the mistake bound of M . Thus, we output w such that errS 0(w) (1+ )·OPTS with probability at least 2 O(M log(1/ )). We will now show how Proposition 5 can be used to derive Theorem 2. As stated earlier, we will require the following mistake bound for online learning with a margin gap from [Gen01a]. Theorem 7 ([Gen01a]). For any 2 p < 1, there exists a polynomial time Lp online learner with margin gap ( , (1 ⌫) ) and mistake bound O ⇣ (p 1) ⌫2 2 ⌘ . Furthermore, there is a polynomial time L1 online learner with margin gap ( , (1 ⌫) ) and mistake bound O ⇣ log d ⌫2 2 ⌘ . Proof of Theorem 2. Our ⌫-robust (1 + )-agnostic learner for Lp- -margin halfspace works as follows. First, it draws the appropriate number of samples m (as stated in Theorem 2) from D. Then, it runs the algorithm from Proposition 5 on these samples for margin gap ( , (1 ⌫/2) ). Let Mp denote the error bound for Lp online learning with margin gap ( , (1 ⌫/2) ) given by Theorem 7. Our entire algorithm runs in time poly(m) · 2O(Mp·log(1/ )). It is simple to check that this results in the claimed running time. As for the error guarantee, let w 2 Bdq be the output halfspace. With probability 0.8, we have errD (1 ⌫) (w) errS(1 ⌫/2) (w) + ✏/2 (1 + ) ·OPTS(1 ⌫/2) +✏/2 (1 + ) ·OPTD +✏, where the first and last inequalities follow from standard margin generalization bounds [BM02, KP02, KST08] and the second inequality follows from the guarantee of Proposition 5. 3 Tight Running Time Lower Bound: Proof Overview We will now give a high-level overview of our running time lower bound (Theorem 3). Due to space constraint, we will sometimes be informal; everything will be formalized in the supplementary material. The main component of our hardness result will be a reduction from the Label Cover problem2, which is a classical problem in hardness of approximation literature that is widely used as a starting point for proving strong NP-hardness of approximation results (see, e.g., [ABSS97, Hås96, Hås01, Fei98]). Definition 8 (Label Cover). A Label Cover instance L = (U, V,E,⌃U ,⌃V , {⇡e}e2⌃) consists of • a bi-regular bipartite graph (U, V,E), referred to as the constraint graph, • label sets ⌃U and ⌃V , • for every edge e 2 E, a constraint (aka projection) ⇡e : ⌃U ! ⌃V . A labeling of L is a function : U ! ⌃U . We say that covers v 2 V if there exists v 2 ⌃V such that3 ⇡ (u,v)( (u)) = v for all4 u 2 N(v). The value , denoted by valL( ), is defined as the fraction of v 2 V covered by . The value of L, denoted by val(L), is defined as max :U!⌃ U val( ). Moreover, we say that weakly covers v 2 V if there exist distinct neighbors u 1 , u 2 of v such that ⇡ (u 1 ,v)( (u1)) = ⇡(u 2 ,v)( (u2)). The weak value of , denoted by wval( ), is the fraction of v 2 V weakly covered by . The weak value of L, denoted by wval(L), is defined as max :U!⌃ U wval( ). For a Label Cover instance L, we use k to denote |U | and n to denote |U | · |⌃U |+ |V | · |⌃V |. 
The goal of Label Cover is to find an assignment with maximium value. Several strong inapproximability results for Label Cover are known [Raz98, MR10, DS14]. To prove a tight running time lower bound, we require an inapproximability result for Label Cover with a tight running lower bound as well. Observe that we can solve Label Cover in time nO(k) by enumerating through all possible assignments and compute their values. The following result shows that, even if we aim for a constant approximation ratio, no algorithm that can be significantly faster than this “brute-force” algorithm. Theorem 9 ([Man20]). Assuming Gap-ETH, for any function f and any constant µ 2 (0, 1), no f(k) · no(k)-time algorithm can, given a Label Cover instance L, distinguish between the following two cases: (Completeness) val(L) = 1, and, (Soundness) wval(L) < µ. Given a Label Cover instance L, our reduction produces an oracle O that can sample (in polynomial time) from a distribution D over Bd1 ⇥ {±1} (for some d n) such that: • (Completeness) If val(L) = 1, then OPTD ⇤ ✏⇤. • (Soundness) If wval(L) < µ, then OPTD (1 ⌫) ⇤ > 1.6✏ ⇤. • (Margin and Error Bounds) ⇤ = ⌦(1/ p k) and ✏⇤ = 1/no(k). Here ⌫ > 0 is some constant. Once we have such a reduction, Theorem 3 follows quite easily. The reason is that, if we assume (by contrapositive) that there exists a ⌫-robust 1.5-agnostic learner A for L1- -margin halfspaces that runs in time f(1/ ) · do(1/ 2 ) poly(1/✏), then we can turn A to an algorithm for Label Cover by first using the reduction above to give us an oracle O and then running A on O. With appropriate parameters, A can distinguish between the two cases in Theorem 9 in time f(1/ ⇤) · do(1/( ⇤)2) poly(1/✏⇤) = f(O( p k)) · no(k), which by Theorem 9 violates the randomized Gap-ETH. Therefore, we will henceforth focus on the reduction and its proof of correctness. Previous Results. To explain the key new ideas behind our reduction, it is important to understand high-level approaches taken in previous works and why they fail to yield running time lower bounds as in our Theorem 3. Most of the known hardness results for agnostic learning of halfspaces employ reductions from Label Cover [ABSS97, FGKP06, GR09, FGRW12, DKM19]5. These reductions use gadgets which are “local” in nature. As we will explain next, such “local” reductions cannot work for our purpose. 2Label Cover is sometimes referred to as Projection Game or Two-Prover One-Round Game. 3This is equivalent to ⇡(u 1 ,v)( (u1)) = ⇡(u 2 ,v)( (u2)) for all neighbors u1, u2 of v. 4For every a 2 U [V , we use N(a) to denote the set of neighbors of a (with respect to the graph (U, V,E)). 5Some of these reductions are stated in terms of reductions from Set Cover or from constraint satisfaction problems (CSP). However, it is well-known that these can be formulated as Label Cover. To describe the reductions, it is convenient to think of each sample (x, y) as a linear constraint hw,xi 0 when y = +1 and hw,xi < 0 when y = 1, where the variables are the coordinates w 1 , . . . , wd of w. When we also consider a margin parameter ⇤ > 0, then the constraints become hw,xi ⇤ and hw,xi < ⇤, respectively. Notice here that, for our purpose, we want (i) our halfspace w to be in Bd 1 , i.e., |w 1 |+ · · ·+ |wd| 1, and (ii) each of our samples x to lie in Bd1, i.e., |x 1 |, . . . , |xd| 1. Although the reductions in previous works vary in certain steps, they do share an overall common framework. With some simplification, they typically let e.g. 
d = |U | · |⌃U |, where each coordinate is associated with U ⇥ ⌃U . In the completeness case, i.e., when some labeling c covers all vertices in V , the intended solution wc is defined by wc (u, u ) = 1[ u = (u)]/k for all u 2 U, u 2 ⌃U . To ensure that this is essentially the best choice of halfspace, these reductions often appeal to several types of linear constraints. For concreteness, we state a simplified version of those from [ABSS97] below. • For every (u, U ) 2 U ⇥ ⌃U , create the constraint w (u, u ) 0. (This corresponds to the labeled sample ( e (a, ),+1).) • For each u 2 U , create the constraint P 2⌃ U w (u, ) 1/k. • For every v 2 V , v 2 ⌃V and u1, u2 2 N(v), add P u 1 2⇡ 1 (u 1 ,v) ( v ) w (u 1 , u 1 ) = P u 2 2⇡ 1 (u 2 ,v) ( v ) w (u 2 , u 2 ) . This equality “checks” the Label Cover constraints ⇡ (u 1 ,v) and ⇡ (u 2 ,v). Clearly, in the completeness case wc satisfies all constraints except the non-positivity constraints for the k non-zero coordinates. (It was argued in [ABSS97] that any halfspace must violate many more constraints in the soundness case.) Observe that this reduction does not yield any margin: wc does not classify any sample with a positive margin. Nonetheless, [DKM19] adapts this reduction to work with a small margin ⇤ > 0 by adding/subtracting appropriate “slack” from each constraint. For example, the first type of constraint is changed to w (u, u ) ⇤. This gives the desired margin ⇤ in the completeness case. However, for the soundness analysis to work, it is crucial that ⇤ O(1/k), as otherwise the constraints can be trivially satisfied6 by w = 0. As such, the above reduction does not work for us, since we would like a margin ⇤ = ⌦(1/ p k). In fact, this also holds for all known reductions, which are “local” in nature and possess similar characteristics. Roughly speaking, each linear constraint of these reductions involves only a constant number of terms that are intended to be set to O(1/k), which means that we cannot hope to get a margin more than O(1/k). Our Approach: Beyond Local Reductions. With the preceding discussion in mind, our reduction has to be “non-local”. To describe our main idea, we need an additional notion of “decomposability” of a Label Cover instance. Roughly speaking, an instance is decomposable if we can partition V into different parts such that each u 2 U has exactly one induced edge to the vertices in each part. Definition 10. A Label Cover instance L = (U, V,E,⌃U ,⌃V , {⇡e}e2E) is said to be decomposable if there exists a partition of V into V 1 [· · ·[Vt such that, for every u 2 U and j 2 [t], |N(u)\Vj | = 1. We use the notation vj(u) to the denote the unique element in N(u) \ Vj . As explained above, “local” reductions use each labeled sample to only check a constant number of Label Cover constraints. In contrast, our reduction will check many constraints in each sample. Specifically, for each subset V j , we will check all the Label Cover constraints involving v 2 V j at once. To formalize this goal, we will require the following definition. Definition 11. Let L = (U, V = V 1 [ · · · [ Vt, E,⌃U ,⌃V , {⇡e}e2E) be a decomposable Label Cover instance. For any j 2 [t], let ⇧j 2 R(V⇥⌃V )⇥(U⇥⌃U ) be defined as ⇧j (v, v ),(u, u ) = ⇢ 1 if v = vj(u) and ⇡ (u,v)( u) = v, 0 otherwise. We set d = |U | · |⌃U | and our intended solution wc in the completeness case is the same as described in the previous reduction. 
For simplicity, suppose that, in the soundness case, we pick s that does 6Note that w = 0 satisfies the constraints with margin ⇤ 1/k, which is (1 o(1)) ⇤ if ⇤ = !(1/k). not weakly cover any v 2 V and set ws (u, u ) = 1[ u = s(u)]/k. Our simplified task then becomes: Design D such that errD (wc) ⌧ errD (1 ⌫) (w s), where = ⌦(1/ p k), ⌫ > 0 is a constant. Our choice of D is based on two observations. The first is a structural difference between wc(⇧j)T and ws(⇧j)T . Suppose that the constraint graph has right degree . Since c covers all v 2 V , ⇧j “projects” the non-zeros coordinates wc (u, c(u)) for all u 2 N(v) to the same coordinate (v, v), for some v 2 ⌃V , resulting in the value of /k in this coordinate. On the other hand, since s does not even weakly cover any right vertex, all the non-zero coordinates get maps by ⇧j to different coordinates, resulting in the vector ws(⇧j)T having k non-zero coordinates, each having value 1/k. To summarize, we have: wc(⇧j)T has k/ non-zero coordinates, each of value /k. On the other hand, ws(⇧j)T has k non-zero coordinates, each of value 1/k. Our second observation is the following: suppose that u is a vector with T non-zero coordinates, each of value 1/T . If we take a random ±1 vector s, then hu, si is simply 1/T times a sum of T i.i.d. Rademacher random variables. Recall a well-known version of the central limit theorem (e.g., [Ber41, Ess42]): as T ! 1, 1/ p T times a sum of T i.i.d. Rademacher r.v.s converges in distribution to the normal distribution. This implies that limT!1 Pr[hu, si 1/ p T ] = (1). For simplicity, let us ignore the limit for the moment and assume that Pr[hu, si 1/ p T ] = (1). We can now specify the desired distribution D: Pick s uniformly at random from {±1}V⇥⌃V and then let the sample be s⇧j with label +1. By the above two observations, wc will be correctly classified with margin ⇤ = p /k = ⌦(1/ p k) with probability (1). Furthermore, in the soundness case, w s can only get the same error with margin (roughly) p 1/k = ⇤/ p . Intuitively, for > 1, this means that we get a gap of ⌦(1/ p k) in the margins between the two cases, as desired. This concludes our informal proof overview. Further Details and The Full Reduction. Having stated the rough main ideas above, we next state the full reduction. To facilitate this, we define the following additional notations: Definition 12. Let L = (U, V = V 1 [ · · · [ Vt, E,⌃U ,⌃V , {⇡e}e2E) be a decomposable Label Cover instance. For any j 2 [t], let ⇧̂j 2 R(U⇥⌃V )⇥(U⇥⌃U ) be such that ⇧̂j (u0, v ),(u, u ) = ⇢ 1 if u0 = u and ⇡ (u,vj(u))( u) = v, 0 otherwise. Moreover, let ⇧̃j 2 R(V⇥⌃V )⇥(U⇥⌃V ) be such that ⇧̃j (v, 0 v ),(u, v ) = ⇢ 1 if v = vj(u) and 0v = v 0 otherwise. Observe that ⇧j = ⇧̃j · ⇧̂j (where ⇧j is as in Definition 11). Our full reduction is present in Figure 1 below. The exact choice of parameters are deferred to the supplementary material. We note that the distribution described in the previous section corresponds to Step 4c in the reduction. The other steps of the reductions are included to handle certain technical details we had glossed over previously. In particular, the following are the two main additional technical issues we have to deal with here. • (Non-Uniformity of Weights) In the intuitive argument above, we assume that, in the soundness case, we only consider ws such that P u 2⌃ U ws (u, u ) = 1/k. However, this needs not be true in general, and we have to create new samples to (approximately) enforce such a condition. 
Specifically, for every subset T ⊆ U, we add a constraint that Σ_{u∈T} Σ_{σ_u∈Σ_U} w_(u,σ_u) ≥ |T|/k − γ*. This corresponds to Step 3 in Figure 1. Note that the term −γ* on the right-hand side above is necessary to ensure that, in the completeness case, we still have a margin of γ*. Unfortunately, this also leaves the possibility that, e.g., some vertex u ∈ U has as much as γ* extra "mass". For technical reasons, it turns out that we have to make sure that these extra "masses" do not contribute too much to ‖w(Π^j)^T‖₂². To do so, we add additional constraints on w(Π̂^j)^T to bound its norm. Such a constraint is of the form: if we pick a subset S of at most ℓ coordinates, then their sum must be at most |S|/k + γ* (and at least −γ*). These correspond to Steps 4a and 4b in Figure 1.
• (Constant Coordinate) Finally, similar to previous works, we cannot have "constants" in our linear constraints. Rather, we need to add a coordinate ⊥ with the intention that w_⊥ = 1/2, and replace the constants in the previous step by (multiples of) w_⊥. Note here that we need two additional constraints (Steps 1 and 2 in Figure 1) to ensure that w_⊥ has to be roughly 1/2.
4 Conclusions and Open Problems
In this work, we studied the computational complexity of adversarially robust learning of halfspaces in the distribution-independent agnostic PAC model. We provided a simple proper learning algorithm for this problem and a nearly matching computational lower bound. While proper learners are typically preferable due to their interpretability, the obvious open question is whether significantly faster non-proper learners are possible. We leave this as an interesting open problem. Another direction for future work is to understand the effect of distributional assumptions on the complexity of the problem and to explore the learnability of simple neural networks in this context.
Broader Impact
Our work aims to advance the algorithmic foundations of adversarially robust machine learning. This subfield focuses on protecting machine learning models (especially their predictions) against small perturbations of the input data. This broad goal is a pressing challenge in many real-world scenarios, where successful adversarial example attacks can have far-reaching implications given the adoption of machine learning in a wide variety of applications, from self-driving cars to banking. Since the primary focus of our work is theoretical and addresses a simple concept class, we do not expect our results to have immediate societal impact. Nonetheless, we believe that our findings provide interesting insights on the algorithmic possibilities and fundamental computational limitations of adversarially robust learning. We hope that, in the future, these insights could be useful in the design of practically relevant adversarially robust classifiers in the presence of noisy data.
Acknowledgments and Disclosure of Funding
Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. Daniel M. Kane is supported by NSF Award CCF-1553288 (CAREER) and a Sloan Research Fellowship.
1. What is the focus and contribution of the paper on adversarially robust semi-agnostic learning of halfspaces? 2. What are the strengths of the proposed approach, particularly in terms of the lower bound? 3. What are the weaknesses of the paper, especially regarding the algorithmic upper bound and comparison with other works? 4. Do you have any concerns about the limitation of the proposed method in the context of halfspaces? 5. Are there any open questions or future research directions related to this work on adversarially robust learning?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors study adversarially robust semi-agnostic learning of halfspaces. Their work is built on the observation that learning halfspaces in a manner robust to adversarial L_p perturbations is essentially equivalent to learning halfspaces with an L_p margin. Hence, the authors are able to make black-box use of existing algorithms. The main algorithmic difficulty seems to be that since the learning should also be agnostic, we want to avoid selecting a halfspace that has been influenced by labeled examples incompatible with the margin guarantee for the optimal halfspace. To circumvent this difficulty, the authors use an existing *online* algorithm for learning halfspaces with margin, combined with many random attempts to iteratively select a set of "good" examples from the sample set. The authors also give essentially matching lower bounds for the case of L_infty perturbations, under the Gap-ETH hypothesis. This is proved via a reduction from Label Cover instances to the learning problem. Strengths The lower bound seems fairly novel. Compared to previous reductions from Label Cover, the authors are able to get substantially better margin bounds by exploiting global "decomposability" structure in the hard Label Cover instances. This provides a good illustration of the limits for agnostic adversarially robust learning, even in such a simple setting as halfspaces. Weaknesses The algorithmic upper bound seems to be a fairly straightforward application of existing halfspace learning algorithms; the main contribution here is realizing that the existing online learners are enough. Furthermore, there are a number of other papers giving adversarially robust learning guarantees for halfspaces, including (non-agnostic) learning for L_p perturbations with random classification noise, and semi-agnostic learning for L_2 perturbations.
NIPS
Title The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise Abstract We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on L_p perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the L_∞ perturbations case is provably computationally harder than the case 2 ≤ p < ∞.
1 Introduction
In recent years, the design of reliable machine learning systems for security-critical applications, including in computer vision and natural language processing, has been a major goal in the field. One of the main concrete goals in this context has been to develop classifiers that are robust to adversarial examples, i.e., small imperceptible perturbations to the input that can result in erroneous misclassification [BCM+13, SZS+14, GSS15]. This has led to an explosion of research on designing defenses against adversarial examples and attacks on these defenses. See, e.g., [KM18] for a recent tutorial on the topic. Despite significant empirical progress over the past few years, the broad question of designing computationally efficient classifiers that are provably robust to adversarial perturbations remains an outstanding theoretical challenge.
In this paper, we focus on understanding the computational complexity of adversarially robust classification in the (distribution-independent) agnostic PAC model [Hau92, KSS94]. Specifically, we study the learnability of halfspaces (or linear threshold functions) in this model with respect to L_p perturbations. A halfspace is any function h_w : ℝ^d → {±1} of the form¹ h_w(x) = sgn(⟨w, x⟩), where w ∈ ℝ^d is the associated weight vector. The problem of learning an unknown halfspace has been studied for decades — starting with the Perceptron algorithm [Ros58] — and has arguably been one of the most influential problems in the development of machine learning [Vap98, FS97].
Before we proceed, we introduce the relevant terminology. Let C be a concept class of Boolean-valued functions on an instance space X ⊆ ℝ^d and let H be a hypothesis class on X. The set of allowable perturbations is defined by a function U : X → 2^X. The robust risk of a hypothesis h ∈ H with respect to a distribution D on X × {±1} is defined as R_U(h, D) = Pr_{(x,y)∼D}[∃ z ∈ U(x) : h(z) ≠ y]. The (adversarially robust) agnostic PAC learning problem for C is the following: given i.i.d. samples from an arbitrary distribution D on X × {±1}, the goal of the learner is to output a hypothesis h ∈ H such that with high probability it holds that R_U(h, D) ≤ OPT_D + ε, where OPT_D = inf_{f∈C} R_U(f, D) is the robust risk of the best-fitting function in C.
¹The function sgn : ℝ → {±1} is defined as sgn(u) = 1 if u ≥ 0 and sgn(u) = −1 otherwise.
Unfortunately, it follows from known hardness results that this formulation is computationally intractable for the class of halfspaces C = {sgn(⟨w, x⟩) : w ∈ ℝ^d} under L_p perturbations, i.e., for U_{p,γ}(x) = {z ∈ X : ‖z − x‖_p ≤ γ}, for some p ≥ 2. (The reader is referred to the supplementary material for a more detailed explanation.)
To be able to obtain computationally efficient algorithms, we relax the above definition in two ways: (1) we allow the hypothesis to be robust within a slightly smaller perturbation region, and (2) we introduce a small constant factor approximation in the error guarantee. In more detail, for some constants 0 < ν < 1 and α ≥ 1, our goal is to efficiently compute a hypothesis h such that with high probability
R_{U_{p,(1−ν)γ}}(h, D) ≤ α · OPT^{p,γ}_D + ε ,  (1)
where OPT^{p,γ}_D = inf_{f∈C} R_{U_{p,γ}}(f, D). (Note that for ν = 0 and α = 1, we obtain the original definition.) An interesting setting is when ν is a small constant close to 0, say ν = 0.1, and α = 1 + δ, where 0 < δ < 1. In this paper, we characterize the computational complexity of this problem with respect to proper learning algorithms, i.e., algorithms that output a halfspace hypothesis. Throughout this paper, we will assume that the domain of our functions is bounded in the d-dimensional L_p unit ball B^d_p. All our results immediately extend to general domains with a (necessary) dependence on the diameter of the feasible set.
A simple but crucial observation leveraged in our work is the following: the adversarially robust learning problem of halfspaces under L_p perturbations (defined above) is essentially equivalent to the classical problem of agnostic proper PAC learning of halfspaces with an L_p margin. Let p ≥ 2 and let q be the dual exponent of p, i.e., 1/p + 1/q = 1. The problem of agnostic proper PAC learning of halfspaces with an L_p margin is the following: the learner is given i.i.d. samples from a distribution D over B^d_p × {±1}. For w ∈ B^d_q, its γ-margin error is defined as err^γ_D(w) := Pr_{(x,y)∼D}[sgn(⟨w, x⟩ − y·γ) ≠ y]. We also define OPT^γ_D := min_{w∈B^d_q} err^γ_D(w). An algorithm is a proper ν-robust α-agnostic learner for L_p-γ-margin halfspace if, with probability at least 1 − τ, it outputs a halfspace w ∈ B^d_q with
err^{(1−ν)γ}_D(w) ≤ α · OPT^γ_D + ε .  (2)
(When unspecified, the failure probability τ is assumed to be 1/3. It is well known and easy to see that we can always achieve an arbitrarily small value of τ at the cost of an O(log(1/τ)) multiplicative factor in the running time and sample complexity.) We have the following basic observation, which implies that the learning objectives (1) and (2) are equivalent. Throughout this paper, we will state our contributions using the margin formulation (2).
Fact 1. For any non-zero w ∈ ℝ^d, γ ≥ 0 and D over ℝ^d × {±1}, R_{U_{p,γ}}(h_w, D) = err^γ_D(w/‖w‖_q).
1.1 Our Contributions
Our main positive result is a robust and agnostic proper learning algorithm for L_p-γ-margin halfspace with near-optimal running time:
Theorem 2 (Robust Learning Algorithm). Fix 2 ≤ p < ∞ and 0 < γ < 1. For any 0 < ν, δ < 1, there is a proper ν-robust (1 + δ)-agnostic learner for L_p-γ-margin halfspace that draws O(p/(ε²ν²γ²)) samples and runs in time (1/δ)^{O(p/(ν²γ²))} · poly(d/ε). Furthermore, for p = ∞, there is a proper ν-robust (1 + δ)-agnostic learner for L_∞-γ-margin halfspace that draws O(log d/(ε²ν²γ²)) samples and runs in time d^{O(log(1/δ)/(ν²γ²))} · poly(1/ε).
To interpret the running time of our algorithm, we consider the setting δ = ν = 0.1. We note two different regimes. If p ≥ 2 is a fixed constant, then our algorithm runs in time 2^{O(1/γ²)} poly(d/ε). On the other hand, for p = ∞, we obtain a runtime of d^{O(1/γ²)} poly(1/ε). That is, the L_∞ margin case (which corresponds to adversarial learning with L_∞ perturbations) appears to be computationally the hardest. As we show in Theorem 3, this fact is inherent for proper learners.
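As a sanity check on Fact 1, here is a minimal Monte Carlo sketch of our own (the data, dimensions, and parameter values are arbitrary illustrations): for a halfspace, the worst-case ℓ_p perturbation of x shifts ⟨w, x⟩ by exactly γ‖w‖_q (Hölder's inequality, which is tight), so the empirical robust risk coincides with the γ-margin error of the ℓ_q-normalized weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma, p = 5, 10_000, 0.1, 2
q = p / (p - 1)                             # dual exponent, 1/p + 1/q = 1

w = rng.normal(size=d)                      # an arbitrary non-zero halfspace
X = rng.normal(size=(n, d))
y = np.sign(X @ w + rng.normal(size=n))     # noisy labels in {-1, +1}
y[y == 0] = 1

# Robust risk under U_{p,gamma}: x is robustly classified correctly
# iff y * <w, x> >= gamma * ||w||_q.
wq = np.linalg.norm(w, ord=q)
robust_risk = np.mean(y * (X @ w) < gamma * wq)

# gamma-margin error of the normalized vector w / ||w||_q (Fact 1).
margin_err = np.mean(y * (X @ (w / wq)) < gamma)

print(robust_risk, margin_err)              # the two quantities coincide
```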
Our algorithm establishing Theorem 2 follows via a simple and unified approach, employing a reduction from online (mistake bound) learning [Lit87]. Specifically, we show that any computationally efficient L_p online learner for halfspaces with margin guarantees and mistake bound M can be used in a black-box manner to obtain an algorithm for our problem with runtime roughly poly(d/ε)·(1/δ)^M. Theorem 2 then follows by applying known results from the online learning literature [Gen01a]. For the special case of p = 2 (and ν = 0.1), recent work [DKM19] gave a sophisticated algorithm for our problem with running time poly(d/ε)·2^{Õ(1/(γ²δ))}. We note that our algorithm has significantly better dependence on the parameter δ (quantifying the approximation ratio), and better dependence on 1/γ. Importantly, our algorithm is much simpler and immediately generalizes to all L_p norms.
Perhaps surprisingly, the running time of our algorithm is nearly the best possible for proper learning. For constant p ≥ 2, this follows from the hardness result of [DKM19]. (See the supplementary material for more details.) Furthermore, we prove a tight running time lower bound for robust L_∞-γ-margin proper learning of halfspaces. Roughly speaking, we show that for some sufficiently small constant ν > 0, one cannot hope to significantly speed up our algorithm for ν-robust L_∞-γ-margin learning of halfspaces. Our computational hardness result is formally stated below.
Theorem 3 (Tight Running Time Lower Bound). There exists a constant ν > 0 such that, assuming the (randomized) Gap Exponential Time Hypothesis, there is no proper ν-robust 1.5-agnostic learner for L_∞-γ-margin halfspace that runs in time f(1/γ) · d^{o(1/γ²)} poly(1/ε) for any function f.
As indicated above, our running time lower bound is based on the so-called Gap Exponential Time Hypothesis (Gap-ETH), which roughly states that no subexponential time algorithm can approximate 3SAT to within a (1 − ε) factor, for some constant ε > 0. Since we will not be dealing with Gap-ETH directly here, we defer the formal treatment of the hypothesis and discussions on its application to the supplementary material. We remark that the constant 1.5 in our theorem is insignificant. We can increase this "gap" to any constant less than 2. We use the value 1.5 to avoid introducing an additional variable. Another remark is that Theorem 3 only applies for a small constant ν > 0. This leaves the possibility of achieving, e.g., a faster 0.9-robust L_∞-γ-margin learner for halfspaces, as an interesting open problem.
1.2 Related Work
A sequence of recent works [CBM18, SST+18, BLPR19, MHS19] has studied the sample complexity of adversarially robust PAC learning for general concept classes of bounded VC dimension and for halfspaces in particular. [MHS19] established an upper bound on the sample complexity of PAC learning any concept class with finite VC dimension. A common implication of the aforementioned works is that, for some concept classes, the sample complexity of adversarially robust PAC learning is higher than the sample complexity of (standard) PAC learning. For the class of halfspaces, which is the focus of the current paper, the sample complexity of adversarially robust agnostic PAC learning was shown to be essentially the same as that of (standard) agnostic PAC learning [CBM18, MHS19].
Turning to computational aspects, [BLPR19, DNV19] showed that there exist classification tasks that are efficiently learnable in the standard PAC model, but are computationally hard in the adversarially robust setting (under cryptographic assumptions). Notably, the classification problems shown hard are artificial, in the sense that they do not correspond to natural concept classes. [ADV19] shows that adversarially robust proper learning of degree-2 polynomial threshold functions is computationally hard, even in the realizable setting. On the positive side, [ADV19] gives a polynomial-time algorithm for adversarially robust learning of halfspaces under L_∞ perturbations, again in the realizable setting. More recently, [MGDS20] generalized this upper bound to a broad class of perturbations, including L_p perturbations. Moreover, [MGDS20] gave an efficient algorithm for learning halfspaces with random classification noise [AL88]. We note that all these algorithms are proper. The problem of agnostically learning halfspaces with a margin has been studied extensively. A number of prior works [BS00, SSS09, SSS10, LS11, BS12, DKM19] studied the case of L_2 margin and gave a range of time-accuracy tradeoffs for the problem. The most closely related prior work is the recent work [DKM19], which gave a proper ν-robust α-agnostic learner for L_2-γ-margin halfspace with near-optimal running time when α, ν are universal constants, and a nearly matching computational hardness result. The algorithm of the current paper broadly generalizes, simplifies, and improves the algorithm of [DKM19].
2 Upper Bound: From Online to Adversarially Robust Agnostic Learning
In this section, we provide a generic method that turns an online (mistake bound) learning algorithm for halfspaces into an adversarially robust agnostic algorithm, which is then used to prove Theorem 2. Recall that online learning [Lit87] proceeds in a sequence of rounds. In each round, the algorithm is given an example point, produces a binary prediction on this point, and receives feedback on its prediction (after which it is allowed to update its hypothesis). The mistake bound of an online learner is the maximum number of mistakes (i.e., incorrect predictions) it can make over all possible sequences of examples. We start by defining the notion of online learning with a margin gap in the context of halfspaces:
Definition 4. An online learner A for the class of halfspaces is called an L_p online learner with mistake bound M and (γ, γ′) margin gap if it satisfies the following: in each round, A returns a vector w ∈ B^d_q. Moreover, for any sequence of labeled examples (x_i, y_i) such that there exists w* ∈ B^d_q with sgn(⟨w*, x_i⟩ − y_i γ) = y_i for all i, there are at most M values of t such that sgn(⟨w_t, x_t⟩ − y_t γ′) ≠ y_t, where w_t = A((x_1, y_1), ..., (x_{t−1}, y_{t−1})).
The L_p online learning problem for halfspaces has been studied extensively in the literature; see, e.g., [Lit87, GLS01, Gen01b, Gen03, BB14]. We will use a result of [Gen01a], which gives a polynomial time L_p online learner with margin gap (γ, (1 − ν)γ) and mistake bound O((p − 1)/(ν²γ²)). We are now ready to state our generic proposition that translates an online algorithm with a given mistake bound into an agnostic learning algorithm. We will use the following notation: for S ⊆ B^d_p × {±1}, we will use S instead of D to denote the empirical error on the uniform distribution over S. In particular, we denote err^γ_S(w) := (1/|S|) · |{(x, y) ∈ S : sgn(⟨w, x⟩ − yγ) ≠ y}|.
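Before the formal statement, here is a minimal sketch of our own (with hypothetical names) of the two ingredients the reduction manipulates: the empirical γ-margin error from the notation above, and the interface of an online learner with a (γ, γ′) margin gap in the sense of Definition 4.

```python
from typing import List, Tuple
import numpy as np

Sample = Tuple[np.ndarray, int]  # (x, y) with y in {-1, +1}

def margin_err(S: List[Sample], w: np.ndarray, gamma: float) -> float:
    """Empirical gamma-margin error err_S^gamma(w). (Note np.sign(0) = 0,
    which differs from the paper's sgn(0) = 1; immaterial for generic data.)"""
    mistakes = sum(1 for x, y in S if np.sign(w @ x - y * gamma) != y)
    return mistakes / len(S)

class OnlineMarginLearner:
    """Interface of an L_p online learner with (gamma, gamma') margin gap
    (Definition 4): on any sample prefix consistent with some w* at margin
    gamma, it errs at margin gamma' at most mistake_bound times. A concrete
    instantiation, e.g. the learner of [Gen01a], is beyond this sketch."""
    mistake_bound: int

    def predict(self, history: List[Sample]) -> np.ndarray:
        raise NotImplementedError
```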
The main result of this section is the following proposition. While we state our proposition for the empirical error, it is simple to convert it into a generalization bound, as we will show later in the proof of Theorem 2.
Proposition 5. Assume that there is a polynomial time L_p online learner A for halfspaces with a (γ, γ′) margin gap and mistake bound of M. Then there exists an algorithm that, given a multiset of labeled examples S ⊆ B^d_p × {±1} and δ ∈ (0, 1), runs in poly(|S|d) · 2^{O(M log(1/δ))} time and with probability 9/10 returns w ∈ B^d_q such that err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S.
Notice that our algorithm runs in time poly(|S|d) · 2^{O(M log(1/δ))} and has success probability 9/10. It is more convenient to describe a version of our algorithm that runs in poly(|S|d) time, but has a small success probability of 2^{−O(M log(1/δ))}, as encapsulated by the following lemma.
Lemma 6. Assume that there is a polynomial time L_p online learner A for halfspaces with a (γ, γ′) margin gap and mistake bound of M. Then there exists an algorithm that, given a multiset of labeled examples S ⊆ B^d_p × {±1} and δ ∈ (0, 1), runs in poly(|S|dM) time and with probability 2^{−O(M log(1/δ))} returns w ∈ B^d_q such that err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S.
Before proving Lemma 6, notice that Proposition 5 now follows by running the algorithm from Lemma 6 independently 2^{O(M log(1/δ))} times and returning the w with minimum err^{γ′}_S(w). Since each iteration has a 2^{−O(M log(1/δ))} probability of returning a w with err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S, with 90% probability at least one of our runs finds a w that satisfies this.
Proof of Lemma 6. Let w* ∈ B^d_q denote an "optimal" halfspace with err^γ_S(w*) = OPT^γ_S. The basic idea of the algorithm is to repeatedly run A on larger and larger subsets of samples, each time adding one additional sample in S that the current hypothesis gets wrong. The one worry here is that some of the points in S might be errors, inconsistent with the true classifier w*, and feeding them to our online learner will lead it astray. However, at any point in time, either we misclassify (w.r.t. margin γ′) only a (1 + δ) · OPT^γ_S fraction of points (in which case we can abort early and use this hypothesis), or guessing a random misclassified point will have at least an Ω(δ) probability of giving us a non-error. Since our online learner has a mistake bound of M, we will never need to make more than this many correct guesses. Specifically, the algorithm is as follows (a runnable sketch of this procedure is given right below):
• Let Samples = ∅
• For i = 0 to M:
– Let w = A(Samples)
– Let T be the set of (x, y) ∈ S so that sgn(⟨w, x⟩ − yγ′) ≠ y
– If T = ∅, return w; otherwise, with 50% probability, return w
– Draw (x_i, y_i) uniformly at random from T, and add it to Samples
• Return w
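The following is a minimal executable rendering of the loop above (our own sketch: `learner` is any object implementing the hypothetical OnlineMarginLearner interface from the earlier snippet; the paper itself instantiates A with the online learner of [Gen01a]).

```python
import random
import numpy as np

def lemma6_attempt(S, learner, gamma_prime, rng=random):
    """One low-success-probability attempt from Lemma 6. Returns a weight
    vector w; over the coin flips and random guesses, w satisfies
    err_S^{gamma'}(w) <= (1 + delta) * OPT_S^gamma with probability
    2^{-O(M log(1/delta))}."""
    samples = []                         # Samples: the prefix fed to A
    M = learner.mistake_bound            # assumed attribute (hypothetical)
    w = None
    for _ in range(M + 1):               # i = 0, ..., M
        w = learner.predict(samples)
        # T: points of S misclassified at margin gamma'.
        T = [(x, y) for x, y in S if np.sign(w @ x - y * gamma_prime) != y]
        if not T or rng.random() < 0.5:
            return w                     # abort early and output w
        samples.append(rng.choice(T))    # guess a misclassified point
    return w
```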
To analyze this algorithm, let S_bad be the set of (x, y) ∈ S with sgn(⟨w*, x⟩ − yγ) ≠ y. Recall that by assumption |S_bad| ≤ OPT^γ_S · |S|. We claim that with probability at least 2^{−O(M log(1/δ))} our algorithm never adds an element of S_bad to Samples and never returns a w in the for loop for which err^{γ′}_S(w) > (1 + δ) · OPT^γ_S. This is because during each iteration of the algorithm either:
1. err^{γ′}_S(w) > (1 + δ) · OPT^γ_S. In this case, there is a 50% probability that we do not return w. If we do not return, then |T| ≥ (1 + δ) · |S_bad|, so there is at least a δ/(1+δ) ≥ δ/2 probability that the new element added to Samples is not in S_bad.
2. Or err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S. In this case, there is a 50% probability of returning w.
Hence, there is a (δ/4)^{M+1} ≥ 2^{−O(M log(1/δ))} probability of never adding an element of S_bad to Samples or returning a w in our for-loop with err^{γ′}_S(w) > (1 + δ) · OPT^γ_S. When this occurs, we claim that we output w such that err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S. This is because, if this were not the case, we must have reached the final statement, at which point we have Samples = ((x_0, y_0), ..., (x_M, y_M)), where each (x_i, y_i) satisfies sgn(⟨w*, x_i⟩ − y_i γ) = y_i and sgn(⟨w_i, x_i⟩ − y_i γ′) ≠ y_i with w_i = A((x_0, y_0), ..., (x_{i−1}, y_{i−1})). But this violates the mistake bound of M. Thus, we output w such that err^{γ′}_S(w) ≤ (1 + δ) · OPT^γ_S with probability at least 2^{−O(M log(1/δ))}.
We will now show how Proposition 5 can be used to derive Theorem 2. As stated earlier, we will require the following mistake bound for online learning with a margin gap from [Gen01a].
Theorem 7 ([Gen01a]). For any 2 ≤ p < ∞, there exists a polynomial time L_p online learner with margin gap (γ, (1 − ν)γ) and mistake bound O((p − 1)/(ν²γ²)). Furthermore, there is a polynomial time L_∞ online learner with margin gap (γ, (1 − ν)γ) and mistake bound O(log d/(ν²γ²)).
Proof of Theorem 2. Our ν-robust (1 + δ)-agnostic learner for L_p-γ-margin halfspace works as follows. First, it draws the appropriate number of samples m (as stated in Theorem 2) from D. Then, it runs the algorithm from Proposition 5 on these samples with margin gap (γ, (1 − ν/2)γ). Let M_p denote the mistake bound for L_p online learning with margin gap (γ, (1 − ν/2)γ) given by Theorem 7. Our entire algorithm runs in time poly(m) · 2^{O(M_p · log(1/δ))}. It is simple to check that this results in the claimed running time. As for the error guarantee, let w ∈ B^d_q be the output halfspace. With probability 0.8, we have
err^{(1−ν)γ}_D(w) ≤ err^{(1−ν/2)γ}_S(w) + ε/2 ≤ (1 + δ) · OPT^{(1−ν/2)γ}_S + ε/2 ≤ (1 + δ) · OPT^γ_D + ε,
where the first and last inequalities follow from standard margin generalization bounds [BM02, KP02, KST08] and the second inequality follows from the guarantee of Proposition 5.
3 Tight Running Time Lower Bound: Proof Overview
We will now give a high-level overview of our running time lower bound (Theorem 3). Due to space constraints, we will sometimes be informal; everything will be formalized in the supplementary material. The main component of our hardness result will be a reduction from the Label Cover problem², a classical problem in the hardness of approximation literature that is widely used as a starting point for proving strong NP-hardness of approximation results (see, e.g., [ABSS97, Hås96, Hås01, Fei98]).
Definition 8 (Label Cover). A Label Cover instance L = (U, V, E, Σ_U, Σ_V, {π_e}_{e∈E}) consists of
• a bi-regular bipartite graph (U, V, E), referred to as the constraint graph,
• label sets Σ_U and Σ_V,
• for every edge e ∈ E, a constraint (aka projection) π_e : Σ_U → Σ_V.
A labeling of L is a function σ : U → Σ_U. We say that σ covers v ∈ V if there exists σ_v ∈ Σ_V such that³ π_(u,v)(σ(u)) = σ_v for all⁴ u ∈ N(v). The value of σ, denoted val_L(σ), is defined as the fraction of v ∈ V covered by σ. The value of L, denoted val(L), is defined as max_{σ:U→Σ_U} val_L(σ). Moreover, we say that σ weakly covers v ∈ V if there exist distinct neighbors u_1, u_2 of v such that π_(u_1,v)(σ(u_1)) = π_(u_2,v)(σ(u_2)). The weak value of σ, denoted wval_L(σ), is the fraction of v ∈ V weakly covered by σ. The weak value of L, denoted wval(L), is defined as max_{σ:U→Σ_U} wval_L(σ). For a Label Cover instance L, we use k to denote |U| and n to denote |U|·|Σ_U| + |V|·|Σ_V|.
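To ground Definition 8, here is a tiny self-contained sketch (a hypothetical toy instance of our own, not one from the paper) computing val(L) by brute force; with right degree 2, as here, weak covering coincides with covering.

```python
from itertools import product

# Toy Label Cover: U = {0, 1}, V = {0}, both left vertices adjacent to v = 0.
U, V = [0, 1], [0]
Sigma_U, Sigma_V = [0, 1], [0, 1]
neighbors = {0: [0, 1]}                       # N(v) for each v in V
pi = {(0, 0): {0: 0, 1: 1},                   # pi[(u, v)][sigma_u] = sigma_v
      (1, 0): {0: 1, 1: 0}}

def value(labeling):                          # fraction of v covered
    covered = 0
    for v in V:
        images = {pi[(u, v)][labeling[u]] for u in neighbors[v]}
        covered += (len(images) == 1)         # all neighbors project equally
    return covered / len(V)

def weak_value(labeling):                     # fraction weakly covered
    weak = 0
    for v in V:
        imgs = [pi[(u, v)][labeling[u]] for u in neighbors[v]]
        weak += (len(imgs) != len(set(imgs))) # some pair of neighbors agrees
    return weak / len(V)

best = max(product(Sigma_U, repeat=len(U)), key=value)
print(best, value(best))                      # e.g. (0, 1) with val(L) = 1.0
```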
The goal of Label Cover is to find an assignment with maximum value. Several strong inapproximability results for Label Cover are known [Raz98, MR10, DS14]. To prove a tight running time lower bound, we require an inapproximability result for Label Cover with a tight running time lower bound as well. Observe that we can solve Label Cover in time n^{O(k)} by enumerating through all possible assignments and computing their values. The following result shows that, even if we aim for a constant approximation ratio, no algorithm can be significantly faster than this "brute-force" algorithm.
Theorem 9 ([Man20]). Assuming Gap-ETH, for any function f and any constant µ ∈ (0, 1), no f(k) · n^{o(k)}-time algorithm can, given a Label Cover instance L, distinguish between the following two cases: (Completeness) val(L) = 1, and (Soundness) wval(L) < µ.
Given a Label Cover instance L, our reduction produces an oracle O that can sample (in polynomial time) from a distribution D over B^d_∞ × {±1} (for some d ≤ n) such that:
• (Completeness) If val(L) = 1, then OPT^{γ*}_D ≤ ε*.
• (Soundness) If wval(L) < µ, then OPT^{(1−ν)γ*}_D > 1.6 ε*.
• (Margin and Error Bounds) γ* = Ω(1/√k) and ε* = 1/n^{o(k)}.
Here ν > 0 is some constant. Once we have such a reduction, Theorem 3 follows quite easily. The reason is that, if we assume (by contrapositive) that there exists a ν-robust 1.5-agnostic learner A for L_∞-γ-margin halfspaces that runs in time f(1/γ) · d^{o(1/γ²)} poly(1/ε), then we can turn A into an algorithm for Label Cover by first using the reduction above to give us an oracle O and then running A on O. With appropriate parameters, A can distinguish between the two cases in Theorem 9 in time f(1/γ*) · d^{o(1/(γ*)²)} poly(1/ε*) = f(O(√k)) · n^{o(k)}, which by Theorem 9 violates the randomized Gap-ETH. Therefore, we will henceforth focus on the reduction and its proof of correctness.
Previous Results. To explain the key new ideas behind our reduction, it is important to understand the high-level approaches taken in previous works and why they fail to yield running time lower bounds as in our Theorem 3. Most of the known hardness results for agnostic learning of halfspaces employ reductions from Label Cover [ABSS97, FGKP06, GR09, FGRW12, DKM19]⁵. These reductions use gadgets which are "local" in nature. As we will explain next, such "local" reductions cannot work for our purpose.
²Label Cover is sometimes referred to as Projection Game or Two-Prover One-Round Game.
³This is equivalent to π_(u_1,v)(σ(u_1)) = π_(u_2,v)(σ(u_2)) for all neighbors u_1, u_2 of v.
⁴For every a ∈ U ∪ V, we use N(a) to denote the set of neighbors of a (with respect to the graph (U, V, E)).
⁵Some of these reductions are stated in terms of reductions from Set Cover or from constraint satisfaction problems (CSPs). However, it is well known that these can be formulated as Label Cover.
To describe the reductions, it is convenient to think of each sample (x, y) as a linear constraint: ⟨w, x⟩ ≥ 0 when y = +1 and ⟨w, x⟩ < 0 when y = −1, where the variables are the coordinates w_1, ..., w_d of w. When we also consider a margin parameter γ* > 0, the constraints become ⟨w, x⟩ ≥ γ* and ⟨w, x⟩ < −γ*, respectively. Notice here that, for our purpose, we want (i) our halfspace w to be in B^d_1, i.e., |w_1| + ··· + |w_d| ≤ 1, and (ii) each of our samples x to lie in B^d_∞, i.e., |x_1|, ..., |x_d| ≤ 1. Although the reductions in previous works vary in certain steps, they do share an overall common framework. With some simplification, they typically let, e.g.,
d = |U|·|Σ_U|, where each coordinate is associated with an element of U × Σ_U. In the completeness case, i.e., when some labeling σ_c covers all vertices in V, the intended solution w_c is defined by w_c_(u,σ_u) = 1[σ_u = σ_c(u)]/k for all u ∈ U, σ_u ∈ Σ_U. To ensure that this is essentially the best choice of halfspace, these reductions often appeal to several types of linear constraints. For concreteness, we state a simplified version of those from [ABSS97] below.
• For every (u, σ_u) ∈ U × Σ_U, create the constraint w_(u,σ_u) ≤ 0. (This corresponds to the labeled sample (−e_(u,σ_u), +1).)
• For each u ∈ U, create the constraint Σ_{σ∈Σ_U} w_(u,σ) ≥ 1/k.
• For every v ∈ V, σ_v ∈ Σ_V and u_1, u_2 ∈ N(v), add Σ_{σ_{u_1} ∈ π⁻¹_(u_1,v)(σ_v)} w_(u_1,σ_{u_1}) = Σ_{σ_{u_2} ∈ π⁻¹_(u_2,v)(σ_v)} w_(u_2,σ_{u_2}). This equality "checks" the Label Cover constraints π_(u_1,v) and π_(u_2,v).
Clearly, in the completeness case w_c satisfies all constraints except the non-positivity constraints for the k non-zero coordinates. (It was argued in [ABSS97] that any halfspace must violate many more constraints in the soundness case.) Observe that this reduction does not yield any margin: w_c does not classify any sample with a positive margin. Nonetheless, [DKM19] adapts this reduction to work with a small margin γ* > 0 by adding/subtracting appropriate "slack" from each constraint. For example, the first type of constraint is changed to w_(u,σ_u) ≤ −γ*. This gives the desired margin γ* in the completeness case. However, for the soundness analysis to work, it is crucial that γ* ≤ O(1/k), as otherwise the constraints can be trivially satisfied⁶ by w = 0. As such, the above reduction does not work for us, since we would like a margin γ* = Ω(1/√k). In fact, this also holds for all known reductions, which are "local" in nature and possess similar characteristics. Roughly speaking, each linear constraint of these reductions involves only a constant number of terms that are intended to be set to O(1/k), which means that we cannot hope to get a margin of more than O(1/k).
Our Approach: Beyond Local Reductions. With the preceding discussion in mind, our reduction has to be "non-local". To describe our main idea, we need an additional notion of "decomposability" of a Label Cover instance. Roughly speaking, an instance is decomposable if we can partition V into different parts such that each u ∈ U has exactly one induced edge to the vertices in each part.
Definition 10. A Label Cover instance L = (U, V, E, Σ_U, Σ_V, {π_e}_{e∈E}) is said to be decomposable if there exists a partition of V into V_1 ∪ ··· ∪ V_t such that, for every u ∈ U and j ∈ [t], |N(u) ∩ V_j| = 1. We use the notation v_j(u) to denote the unique element of N(u) ∩ V_j.
As explained above, "local" reductions use each labeled sample to check only a constant number of Label Cover constraints. In contrast, our reduction will check many constraints in each sample. Specifically, for each subset V_j, we will check all the Label Cover constraints involving v ∈ V_j at once. To formalize this goal, we will require the following definition.
Definition 11. Let L = (U, V = V_1 ∪ ··· ∪ V_t, E, Σ_U, Σ_V, {π_e}_{e∈E}) be a decomposable Label Cover instance. For any j ∈ [t], let Π^j ∈ ℝ^{(V×Σ_V)×(U×Σ_U)} be defined as
Π^j_{(v,σ_v),(u,σ_u)} = 1 if v = v_j(u) and π_(u,v)(σ_u) = σ_v, and 0 otherwise.
We set d = |U|·|Σ_U| and our intended solution w_c in the completeness case is the same as described in the previous reduction.
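As an illustrative aside (a Monte Carlo sketch of our own, with arbitrary parameter choices rather than the paper's construction), the following numerically previews the margin calculation developed in the passage that follows: the projected completeness vector, whose mass sits on k/Δ coordinates of value Δ/k, clears the margin √(Δ/k) against a random sign vector with constant probability, while the projected soundness vector, spread over k coordinates of value 1/k, clears it far less often.

```python
import numpy as np

rng = np.random.default_rng(1)
k, Delta, trials = 64, 4, 200_000
dim = 256                        # ambient (v, sigma_v) dimension, arbitrary

# Images under Pi^j, as summarized in the next passage:
u_c = np.zeros(dim); u_c[: k // Delta] = Delta / k   # completeness image
u_s = np.zeros(dim); u_s[:k] = 1.0 / k               # soundness image

gamma_star = np.sqrt(Delta / k)  # target margin, = Omega(1/sqrt(k))

s = rng.choice([-1.0, 1.0], size=(trials, dim))      # random sign samples
print("completeness:", np.mean(s @ u_c >= gamma_star))  # about a constant
print("soundness:  ", np.mean(s @ u_s >= gamma_star))   # sqrt(Delta) sds out
```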
For simplicity, suppose that, in the soundness case, we pick σ_s that does not weakly cover any v ∈ V and set w_s_(u,σ_u) = 1[σ_u = σ_s(u)]/k. Our simplified task then becomes: design D such that err^γ_D(w_c) ≪ err^{(1−ν)γ}_D(w_s), where γ = Ω(1/√k) and ν > 0 is a constant.
⁶Note that w = 0 satisfies the constraints with margin γ* − 1/k, which is (1 − o(1))γ* if γ* = ω(1/k).
Our choice of D is based on two observations. The first is a structural difference between w_c(Π^j)^T and w_s(Π^j)^T. Suppose that the constraint graph has right degree Δ. Since σ_c covers all v ∈ V, Π^j "projects" the non-zero coordinates w_c_(u,σ_c(u)) for all u ∈ N(v) to the same coordinate (v, σ_v), for some σ_v ∈ Σ_V, resulting in the value Δ/k in this coordinate. On the other hand, since σ_s does not even weakly cover any right vertex, all the non-zero coordinates get mapped by Π^j to different coordinates, resulting in the vector w_s(Π^j)^T having k non-zero coordinates, each of value 1/k. To summarize, we have: w_c(Π^j)^T has k/Δ non-zero coordinates, each of value Δ/k. On the other hand, w_s(Π^j)^T has k non-zero coordinates, each of value 1/k.
Our second observation is the following: suppose that u is a vector with T non-zero coordinates, each of value 1/T. If we take a random ±1 vector s, then ⟨u, s⟩ is simply 1/T times a sum of T i.i.d. Rademacher random variables. Recall a well-known version of the central limit theorem (e.g., [Ber41, Ess42]): as T → ∞, 1/√T times a sum of T i.i.d. Rademacher r.v.s converges in distribution to the normal distribution. This implies that lim_{T→∞} Pr[⟨u, s⟩ ≥ 1/√T] = Ω(1). For simplicity, let us ignore the limit for the moment and assume that Pr[⟨u, s⟩ ≥ 1/√T] = Ω(1).
We can now specify the desired distribution D: pick s uniformly at random from {±1}^{V×Σ_V} and then let the sample be sΠ^j with label +1. By the above two observations, w_c will be correctly classified with margin γ* = √(Δ/k) = Ω(1/√k) with probability Ω(1). Furthermore, in the soundness case, w_s can only get the same error with margin (roughly) √(1/k) = γ*/√Δ. Intuitively, for Δ > 1, this means that we get a gap of Ω(1/√k) in the margins between the two cases, as desired. This concludes our informal proof overview.
Further Details and The Full Reduction. Having stated the rough main ideas above, we next state the full reduction. To facilitate this, we define the following additional notation:
Definition 12. Let L = (U, V = V_1 ∪ ··· ∪ V_t, E, Σ_U, Σ_V, {π_e}_{e∈E}) be a decomposable Label Cover instance. For any j ∈ [t], let Π̂^j ∈ ℝ^{(U×Σ_V)×(U×Σ_U)} be such that
Π̂^j_{(u′,σ_v),(u,σ_u)} = 1 if u′ = u and π_(u,v_j(u))(σ_u) = σ_v, and 0 otherwise.
Moreover, let Π̃^j ∈ ℝ^{(V×Σ_V)×(U×Σ_V)} be such that
Π̃^j_{(v,σ′_v),(u,σ_v)} = 1 if v = v_j(u) and σ′_v = σ_v, and 0 otherwise.
Observe that Π^j = Π̃^j · Π̂^j (where Π^j is as in Definition 11). Our full reduction is presented in Figure 1 below. The exact choice of parameters is deferred to the supplementary material. We note that the distribution described in the previous section corresponds to Step 4c in the reduction. The other steps of the reduction are included to handle certain technical details we had glossed over previously. In particular, the following are the two main additional technical issues we have to deal with here.
• (Non-Uniformity of Weights) In the intuitive argument above, we assume that, in the soundness case, we only consider w_s such that Σ_{σ_u∈Σ_U} w_s_(u,σ_u) = 1/k. However, this need not be true in general, and we have to create new samples to (approximately) enforce such a condition.
Specifically, for every subset T ⊆ U, we add a constraint that Σ_{u∈T} Σ_{σ_u∈Σ_U} w_(u,σ_u) ≥ |T|/k − γ*. This corresponds to Step 3 in Figure 1. Note that the term −γ* on the right-hand side above is necessary to ensure that, in the completeness case, we still have a margin of γ*. Unfortunately, this also leaves the possibility that, e.g., some vertex u ∈ U has as much as γ* extra "mass". For technical reasons, it turns out that we have to make sure that these extra "masses" do not contribute too much to ‖w(Π^j)^T‖₂². To do so, we add additional constraints on w(Π̂^j)^T to bound its norm. Such a constraint is of the form: if we pick a subset S of at most ℓ coordinates, then their sum must be at most |S|/k + γ* (and at least −γ*). These correspond to Steps 4a and 4b in Figure 1.
• (Constant Coordinate) Finally, similar to previous works, we cannot have "constants" in our linear constraints. Rather, we need to add a coordinate ⊥ with the intention that w_⊥ = 1/2, and replace the constants in the previous step by (multiples of) w_⊥. Note here that we need two additional constraints (Steps 1 and 2 in Figure 1) to ensure that w_⊥ has to be roughly 1/2.
4 Conclusions and Open Problems
In this work, we studied the computational complexity of adversarially robust learning of halfspaces in the distribution-independent agnostic PAC model. We provided a simple proper learning algorithm for this problem and a nearly matching computational lower bound. While proper learners are typically preferable due to their interpretability, the obvious open question is whether significantly faster non-proper learners are possible. We leave this as an interesting open problem. Another direction for future work is to understand the effect of distributional assumptions on the complexity of the problem and to explore the learnability of simple neural networks in this context.
Broader Impact
Our work aims to advance the algorithmic foundations of adversarially robust machine learning. This subfield focuses on protecting machine learning models (especially their predictions) against small perturbations of the input data. This broad goal is a pressing challenge in many real-world scenarios, where successful adversarial example attacks can have far-reaching implications given the adoption of machine learning in a wide variety of applications, from self-driving cars to banking. Since the primary focus of our work is theoretical and addresses a simple concept class, we do not expect our results to have immediate societal impact. Nonetheless, we believe that our findings provide interesting insights on the algorithmic possibilities and fundamental computational limitations of adversarially robust learning. We hope that, in the future, these insights could be useful in the design of practically relevant adversarially robust classifiers in the presence of noisy data.
Acknowledgments and Disclosure of Funding
Ilias Diakonikolas is supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. Daniel M. Kane is supported by NSF Award CCF-1553288 (CAREER) and a Sloan Research Fellowship.
1. What is the focus of the paper regarding robust learning? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle different types of perturbations? 3. Do you have any concerns about the significance or novelty of the work compared to prior research? 4. How does the reviewer assess the clarity, quality, and impact of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper extends and complements recent work by Diakonikolas et al. on essentially the same problem: supposing that there exists a halfspace that only fails to get margin gamma on an OPT fraction of points, find a halfspace that only fails to get margin (1-nu)gamma on a (1+delta)OPT+epsilon fraction. As noted in the introduction, if the examples lie in the l_p unit ball and the margin is calculated over weights in the l_q unit ball for q dual to p, this corresponds to agnostic learning of a classifier that is robust to l_p perturbations. The main (positive) result of Diakonikolas et al. is restricted to p=q=2, and features an exponential dependence on 1/delta; here the result is generalized to all p, and the dependence on delta is improved to exponential in log(1/delta) (i.e., it is moved to the base of the exponent). (Prior recent works had treated the case of proper robust learning.) Both works also prove some negative results. Diakonikolas et al. gave a hardness result for the l_2 norm under ETH, showing that for some constant nu and any constant delta, the exponential dependence on 1/gamma^2 of all of these algorithms is essentially necessary. This work gives a lower bound for the l_infinity norm, showing that the dimension^O(1/gamma^2) term in the running time obtained by their algorithm for the l_infinity norm problem is similarly necessary. Strengths The problem of perturbation robustness is of significant interest at the moment, and l_2 perturbations are not the most natural kinds of perturbations to consider in that context -- indeed, l_infinity is the most commonly considered, and this is the main case treated by this work. Moreover, although the realizable case had been treated by previous work, some kind of noise tolerance is essential in practice, and in some ways agnostic noise tolerance is the ideal guarantee, but also the hardest to achieve. Especially in the context of the complementary running time lower bound, this seems like a good result in this direction. Also, the algorithm here is structured as a generic reduction to a variant of mistake bounded learning with a margin gap, for which an algorithm existed previously in the literature. This is a nice conceptual connection between the models. Finally, the construction of the lower bound has some nice techniques, and the body of the submission does a good job of summarizing the technical innovation here. Weaknesses I don't want to overstate the weaknesses; they are relatively minor. Nevertheless, given the amount of work on closely related problems, this work is in some ways a little incremental. Also, one might argue that a stochastic noise model is often of more relevance in practice than the highly pessimistic agnostic noise model, where much stronger guarantees are generally possible for stochastic noise. (I feel that the agnostic model is still worthy of study and more relevant for some scenarios, though.) Also there was no empirical evaluation, but I feel that the theoretical results are sufficiently interesting to merit publication. Response to authors: Sure, Massart noise is an excellent example of a realistic noise model. (It is not quite as pessimistic as agnostic noise, of course...)
NIPS
Title Learning in Observable POMDPs, without Computationally Intractable Oracles Abstract Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard optimistic planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in "observable" POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, and instead give a novel application of barycentric spanners to constructing policy covers.
1 Introduction
Markov Decision Processes (MDPs) are a ubiquitous model in reinforcement learning that aim to capture sequential decision-making problems in a variety of applications spanning robotics to healthcare. However, modelling a problem with an MDP makes the often-unrealistic assumption that the agent has perfect knowledge about the state of the world. Partially Observable Markov Decision Processes (POMDPs) are a broad generalization of MDPs which capture an agent's inherent uncertainty about the state: while there is still an underlying state that updates according to the agent's actions, the agent never directly observes the state, but instead receives samples from a state-dependent observation distribution. The greater generality afforded by partial observability is crucial to applications in game theory [BS18], healthcare [Hau00, HF00b], market design [WME+22], and robotics [CKK96]. Unfortunately, this greater generality comes with steep statistical and computational costs. There are well-known statistical lower bounds [JKKL20, KAL16], which show that in the worst case, it is statistically intractable to find a near-optimal policy for a POMDP given the ability to play policies on it (the learning problem), even given unlimited computation. Furthermore, there are worst-case computational lower bounds [PT87, Lit94, BDRS96, LGM01, VLB12], which establish that it is computationally intractable to find a near-optimal policy even when given the exact parameters of the model (the substantially simpler planning problem). Nevertheless there is a sizeable literature devoted to overcoming the statistical intractability of the learning problem by restricting to natural subclasses of POMDPs [KAL16, GDB16, ALA16, JKKL20, XCGZ21, KECM21a, KECM21b, LCSJ22]. There are far fewer works attempting to overcome computational intractability, and all make severe restrictions on either the model dynamics [JKKL20, KAL16] or the structure of the uncertainty [BDRS96, KECM21a]. The standard practice is to simply sidestep computational issues by assuming access to strong oracles such as ones that
solve Optimistic Planning (given a constrained, non-convex set of POMDPs, find the maximum value achievable by any policy on any POMDP in the set) [JKKL20] or Optimistic Maximum Likelihood Estimation (given a set of action/observation sample trajectories, find a POMDP which obtains maximum value conditioned on approximately maximizing the likelihood of seeing those trajectories) [LCSJ22]. Unsurprisingly, these oracles are computationally intractable to implement. Is there any hope for giving computationally efficient, oracle-free learning algorithms for POMDPs under reasonable assumptions? The naïve approach would require exponential time, and thus even a quasi-polynomial time algorithm would represent a dramatic improvement. A necessary first step towards solving the learning problem is having a computationally efficient planning algorithm. Few such algorithms have provable guarantees under reasonable model assumptions, but recently it was shown [GMR22] that there is a quasipolynomial-time planning algorithm for POMDPs which satisfy an observability assumption. Let H ∈ ℕ be the horizon length of the POMDP, and for each state s and step h ∈ [H], let O_h(·|s) denote the observation distribution at state s and step h. Then observability is defined as follows:
Assumption 1.1 ([EDKM07, GMR22]). Let γ > 0. For h ∈ [H], let O_h be the matrix with columns O_h(·|s), indexed by states s. We say that the matrix O_h satisfies γ-observability if, for any distributions b, b′ over states, ‖O_h b − O_h b′‖₁ ≥ γ ‖b − b′‖₁. A POMDP satisfies (one-step) γ-observability if all H of its observation matrices do.
Compared to previous assumptions enabling computationally efficient planning, observability is much milder because it makes no assumptions about the dynamics of the POMDP, and it allows for natural observation models such as noisy or lossy sensors [GMR22]. It is known that statistically efficient learning is possible under somewhat weaker assumptions than observability [JKKL20]; however, these works rely on solving a planning problem that is computationally intractable. This raises the question: is observability enough to remedy both the computational and statistical woes of learning POMDPs? In particular, can we get not only efficient planning but efficient learning too?
Overview of results. This work provides an affirmative answer to the questions above: we give an algorithm (BaSeCAMP, Algorithm 3) with quasi-polynomial time (and sample) complexity for learning near-optimal policies in observable POMDPs – see Theorem 3.1. While this falls just short of polynomial time, it turns out to be optimal in the sense that even for observable POMDPs there is a quasi-polynomial time lower bound for the (simpler) planning problem under standard complexity assumptions [GMR22]. A key innovation of our approach is an alternative technique to encourage exploration: whereas essentially all previous approaches for partially observable RL used the principle of optimism under uncertainty to encourage the algorithm to visit states [JKKL20, LCSJ22], we introduce a new framework based on the use of barycentric spanners [AK08] and policy covers [DKJ+19]. While each of these tools has previously been used in the broader RL literature to promote exploration (e.g. [LS17, FKQR21, DKJ+19, AHKS20, DGZ22, JKSY20, MHKL20]), they have not been used specifically in the study of POMDPs with imperfect observations,¹ and indeed our usage of them differs substantially from past instances.
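As a quick illustration of Assumption 1.1 (a sketch of our own; the matrix and the Monte Carlo certificate are illustrative, not part of the paper), one can upper-bound the best observability constant of a given observation matrix by sampling pairs of belief distributions and measuring the ℓ₁ contraction ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
S, O = 4, 6  # numbers of states and observations (arbitrary)

# A column-stochastic observation matrix O_h: column s is O_h(.|s).
Oh = rng.random((O, S))
Oh /= Oh.sum(axis=0, keepdims=True)

def observability_estimate(Oh, samples=20_000):
    """Monte Carlo upper bound on the largest gamma for which Assumption 1.1
    can hold: min over sampled belief pairs of ||Oh b - Oh b'||_1 / ||b - b'||_1.
    (The true constant is an infimum over all pairs; sampling only bounds it.)"""
    best = np.inf
    for _ in range(samples):
        b, bp = rng.dirichlet(np.ones(Oh.shape[1]), size=2)
        denom = np.abs(b - bp).sum()
        if denom > 1e-9:
            best = min(best, np.abs(Oh @ (b - bp)).sum() / denom)
    return best

print(observability_estimate(Oh))  # any valid gamma is at most this value
```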
The starting point for our approach is a result of [GMR22] (restated as Theorem 2.1) which implies that the dynamics of an observable POMDP P may be approximated by those of a Markov decision process M with a quasi-polynomial number of states. If we knew the transitions of M, then we could simply use dynamic programming to find an optimal policy for M, which would be guaranteed to be a near-optimal policy for P. Instead, we must learn the transitions of M, for which it is necessary to explore all (reachable) states of the underlying POMDP P. A naïve approach to encourage exploration is to learn the transitions of P via forwards induction on the layer h, using, at each step h, our knowledge of the learned transitions at steps prior to h to find a policy which visits each reachable state at step h. Such an approach would lead to a policy cover, namely a collection of policies which visits all reachable latent states. A major problem with this approach is that the latent states are not observed: instead, we only see observations. Hence a natural approach might be to choose policies which lead to all possible observations at each step h. This approach is clearly insufficient, since, e.g., a single state could emit a uniform distribution over observations. Thus we instead compute the following stronger concept: for each step h, we consider the set X of all possible distributions over observations at step h, and attempt to find a barycentric spanner of X, namely a small subset X′ ⊂ X so that all other distributions in X can be expressed as a linear combination of elements of X′ with bounded coefficients. By playing a policy which realizes each distribution in such a barycentric spanner X′, we are able to explore all reachable latent states, despite having no knowledge about which states we are exploring.
¹Policy covers have been used in the special case of block MDPs [DKJ+19, MHKL20], namely where different states produce disjoint observations.
This discussion omits a key technical aspect of the proof, which is the fact that we can only compute a barycentric spanner for a set X corresponding to an empirical estimate M̂ of M. A key innovation in our proof is a technique to dynamically use such barycentric spanners, even when M̂ is inaccurate, to improve the quality of the estimate M̂. We remark that a similar dynamic usage of barycentric spanners appeared in [FKQR21]; we discuss in the appendix why the approach of [FKQR21], as well as related approaches involving nonstationary MDPs [WL21, WDZ22, WYDW21], cannot be applied here. Taking a step back, few models in reinforcement learning (beyond tabular or linear MDPs) admit computationally efficient end-to-end learning algorithms – indeed, our main contribution is a way to circumvent the daunting task of implementing any of the various constrained optimistic planning oracles assumed in previous optimism-based approaches. We hope that our techniques may be useful in other contexts for avoiding computational intractability without resorting to oracles.
2 Preliminaries
For sets T, Q, let Q^T denote the set of mappings from T → Q. Accordingly, we will identify ℝ^T with |T|-dimensional Euclidean space, and let ∆(T) ⊂ ℝ^T consist of distributions on T. For d ∈ ℕ and a vector v ∈ ℝ^d, we denote its components by v(1), ..., v(d). For integers a ≤ b, we abbreviate a sequence (x_a, x_{a+1}, ..., x_b) by x_{a:b}. If a > b, then we let x_{a:b} denote the empty sequence.
Sometimes we refer to negative indices of a sequence x_{1:n}: in such cases the elements with negative indices may be taken to be arbitrary, as they will never affect the value of the expression. See Appendix B.1 for clarification. For x ∈ ℝ, we write [x]_+ = max{x, 0} and [x]_− = −min{x, 0}. For sets S, T, the notation S ⊂ T allows for the possibility that S = T.
2.1 Background on POMDPs
In this paper we address the problem of learning finite-horizon partially observable Markov decision processes (POMDPs). Formally, a POMDP P is a tuple P = (H, S, A, O, b_1, R, T, O), where: H ∈ ℕ is a positive integer denoting the horizon length; S is a finite set of states of size S := |S|; A is a finite set of actions of size A := |A|; O is a finite set of observations of size O := |O|; b_1 is the initial distribution over states; and R, T, O are given as follows. First, R = (R_1, ..., R_H) denotes a tuple of reward functions, where, for h ∈ [H], R_h : O → [0, 1] gives the reward received as a function of the observation at step h. (It is customary in the literature [JKKL20, LCSJ22] to define the rewards as being a function of the observations, as opposed to being observed by the algorithm as separate information.) Second, T = (T_1, ..., T_H) is a tuple of transition kernels, where, for h ∈ [H], s, s′ ∈ S, a ∈ A, T_h(s′|s, a) denotes the probability of transitioning from s to s′ at step h when action a is taken. For each a ∈ A, we will write T_h(a) ∈ ℝ^{S×S} to denote the matrix with T_h(a)_{s,s′} = T_h(s|s′, a). Third, O = (O_1, ..., O_H) is a tuple of observation matrices, where for h ∈ [H], s ∈ S, o ∈ O, (O_h)_{o,s}, also written as O_h(o|s), denotes the probability of observing o while in state s at step h. Thus O_h ∈ ℝ^{O×S} for each h. Sometimes, for disambiguation, we will refer to the states S as the latent states of the POMDP P.
The interaction (namely, a single episode) with P proceeds as follows: initially a state s_1 ∼ b_1 is drawn from the initial state distribution. At each step 1 ≤ h < H, an action a_h ∈ A is chosen (as a function of previous observations and actions taken), P transitions to a new state s_{h+1} ∼ T_h(·|s_h, a_h), a new observation is observed, o_{h+1} ∼ O_{h+1}(·|s_{h+1}), and a reward of R_{h+1}(o_{h+1}) is received (and observed). We emphasize that the underlying states s_{1:H} are never observed directly. As a matter of convention, we assume that no observation is observed at step h = 1; thus the first observation is o_2.
2.2 Policies, value functions
A deterministic policy σ is a tuple σ = (σ_1, ..., σ_H), where σ_h : A^{h−1} × O^{h−1} → A is a mapping from histories up to step h, namely tuples (a_{1:h−1}, o_{2:h}), to actions. We will denote the collection of histories up to step h by H_h := A^{h−1} × O^{h−1} and the set of deterministic policies by Π_det, meaning that Π_det = ∏_{h=1}^H A^{H_h}. A general policy π is a distribution over deterministic policies; the set of general policies is denoted by Π_gen := ∆(∏_{h=1}^H A^{H_h}). Given a general policy π ∈ Π_gen, we denote by σ ∼ π the draw of a deterministic policy from the distribution π; to execute a general policy π, a sample σ ∼ π is first drawn and then followed for an episode of the POMDP. For a general policy π and some event E, write P^P_{a_{1:H−1}, o_{2:H}, s_{1:H} ∼ π}(E) to denote the probability of E when s_{1:H}, a_{1:H−1}, o_{2:H} is drawn from a trajectory following policy π on the POMDP P. At times we will compress notation in the subscript, e.g., write P^P_π(E) if the definition of s_{1:H}, a_{1:H−1}, o_{2:H} is evident. In similar spirit, we will write E^P_{a_{1:H−1}, o_{2:H}, s_{1:H} ∼ π}[·] to denote expectations.
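To make the interaction protocol concrete, the following is a minimal episode simulator (a sketch of our own; the array conventions and names are illustrative assumptions): T[h][a] is a row-stochastic S×S matrix, Obs[h] is a column-stochastic O×S matrix as above, and the policy is an arbitrary function of the visible history.

```python
import numpy as np

def run_episode(H, b1, T, Obs, R, policy, rng):
    """Sample one episode. 0-indexed stand-in for steps 1..H: T[h][a][s] is
    the distribution T_h(.|s, a) over next states, Obs[h][:, s] plays the
    role of O_{h+1}(.|s), and R[h][o] that of R_{h+1}(o). Latent states are
    sampled internally but never returned, as in the PAC-RL model."""
    s = rng.choice(len(b1), p=b1)                 # s_1 ~ b_1
    actions, observations, rewards = [], [], []
    for h in range(H - 1):
        a = policy(actions, observations)         # visible history only
        s = rng.choice(len(b1), p=T[h][a][s])     # s_{h+1} ~ T_h(.|s_h, a_h)
        o = rng.choice(Obs[h].shape[0], p=Obs[h][:, s])  # o_{h+1}
        actions.append(a); observations.append(o); rewards.append(R[h][o])
    return actions, observations, rewards
```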
Given a general policy π ∈ Π^gen, define the value function for π at step 1 by V^{π,P}_1(∅) = E^P_{o_{1:H} ∼ π}[∑_{h=2}^H R_h(o_h)], namely as the expected reward received by following π. Our objective is to find a policy π which maximizes V^{π,P}_1(∅), in the PAC-RL model [KS02]: in particular, the algorithm does not have access to the transition kernel, reward function, or observation matrices of P, but can repeatedly choose a general policy π and observe the following data from a single trajectory drawn according to π: (a_1, o_2, R_2(o_2), a_2, . . . , a_{H−1}, o_H, R_H(o_H)). The challenge is to choose such policies π which can sufficiently explore the environment.

Finally, we remark that Markov decision processes (MDPs) are a special case of POMDPs where O = S and O_h(o|s) = 1[o = s] for all h ∈ [H], o, s ∈ S. For the MDPs we will consider, the initial state distribution will be left unspecified (indeed, the optimal policy of an MDP does not depend on the initial state distribution). Thus, we consider MDPs M described by a tuple M = (H, S, A, R, T).

2.3 Belief contraction

A prerequisite for a computationally efficient learning algorithm in observable POMDPs is a computationally efficient planning algorithm, i.e. an algorithm to find an approximately optimal policy when the POMDP is known. Recent work [GMR22] obtains such a planning algorithm taking quasipolynomial time; we now introduce the key tools used in [GMR22], which are used in our algorithm as well. Consider a POMDP P = (H, S, A, O, b_1, R, T, O). Given some h ∈ [H] and a history (a_{1:h−1}, o_{2:h}) ∈ H_h, the belief state b^P_h(a_{1:h−1}, o_{2:h}) ∈ ∆(S) is given by the distribution of the state s_h conditioned on taking actions a_{1:h−1} and observing o_{2:h} in the first h steps. Formally, the belief state is defined inductively as follows: b^P_1(∅) = b_1, and for 2 ≤ h ≤ H and any (a_{1:h−1}, o_{2:h}) ∈ H_h, b^P_h(a_{1:h−1}, o_{2:h}) = U^P_{h−1}(b^P_{h−1}(a_{1:h−2}, o_{2:h−1}); a_{h−1}, o_h), where for b ∈ ∆(S), a ∈ A, o ∈ O, U^P_h(b; a, o) ∈ ∆(S) is the distribution defined by

U^P_h(b; a, o)(s) := (O_{h+1}(o|s) · ∑_{s′∈S} b(s′) · T_h(s|s′, a)) / (∑_{x∈S} O_{h+1}(o|x) · ∑_{s′∈S} b(s′) · T_h(x|s′, a)).

We call U^P_h the belief update operator. The belief state b^P_h(a_{1:h−1}, o_{2:h}) is a sufficient statistic for the sequence of future actions and observations under any deterministic policy. In particular, the optimal policy can be expressed as a function of the belief state, rather than the entire history.
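As a sanity check on the formula for U^P_h, here is a one-step Bayes-filter sketch in the same hypothetical encoding as above (the name belief_update is ours): predict through the transition kernel, reweight by the observation likelihood, normalize.

```python
# A sketch of the belief update operator U_h^P(b; a, o) from Section 2.3, assuming
# T_h_a[s_next, s_prev] = T_h(s_next | s_prev, a) and Obs_next[o, s] = O_{h+1}(o | s).
import numpy as np

def belief_update(b, T_h_a, Obs_next, o):
    predicted = T_h_a @ b                 # sum_{s'} T_h(s | s', a) * b(s'), for each s
    unnorm = Obs_next[o, :] * predicted   # multiply by O_{h+1}(o | s)
    return unnorm / unnorm.sum()          # normalize; the denominator is Pr(o | b, a)
```

Folding belief_update over a full history (a_{1:h−1}, o_{2:h}), starting from b_1, yields the exact belief state b^P_h.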
Thus, a natural approach to plan a near-optimal policy is to find a small set B ⊂ ∆(S) of distributions over states such that each possible belief state b^P_h(a_{1:h−1}, o_{2:h}) is close to some element of B. Unfortunately, this is not possible, even in observable POMDPs [GMR22, Example D.2]. The main result of [GMR22] circumvents this issue by showing that there is a subset B ⊂ ∆(S) of quasipolynomial size (depending on P) so that b^P_h(a_{1:h−1}, o_{2:h}) is close to some element of B in expectation under any given policy. To state the result of [GMR22], we need to introduce approximate belief states:

Definition 2.1 (Approximate belief state). Fix a POMDP P = (H, S, A, O, b_1, R, T, O). For any distribution D ∈ ∆(S), as well as any choices of 1 ≤ h ≤ H and L ≥ 0, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) is defined as follows, via induction on L. In the case that L = 0, we define b^{apx,P}_h(∅; D) := b_1 if h = 1, and b^{apx,P}_h(∅; D) := D if h > 1; and for the case that L > 0, we define, for h > L,

b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) := U^P_{h−1}(b^{apx,P}_{h−1}(a_{h−L:h−2}, o_{h−L+1:h−1}; D); a_{h−1}, o_h).

We extend the above definition to the case that h ≤ L by defining b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) := b^{apx,P}_h(a_{max{1,h−L}:h−1}, o_{max{2,h−L+1}:h}; D).

In words, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) is obtained by applying the belief update operator starting from the distribution D at step h − L, if h − L > 1 (and otherwise, starting from b_1, at step 1). At times, we will drop the superscript P from the above definitions and write b_h, b^apx_h.
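Computationally, Definition 2.1 is the same Bayes filter restricted to a sliding window: initialize at the prior D (the algorithm later takes D = Unif(S)) at step h − L and replay only the last L (action, observation) pairs. A minimal sketch in our hypothetical encoding, assuming h > L for simplicity (for h ≤ L one passes D = b_1 and the full history, per the extension above):

```python
# A sketch of the approximate belief state b^{apx,P}_h(a_{h-L:h-1}, o_{h-L+1:h}; D):
# run the belief update over only the last L steps, starting from the prior D.
import numpy as np

def approx_belief(D, window, T, Obs, h):
    """window = [(a_{h-L}, o_{h-L+1}), ..., (a_{h-1}, o_h)]; T, Obs as in earlier sketches."""
    b = D.copy()
    t = h - len(window)                    # step at which the filter is (re)initialized
    for a, o in window:
        predicted = T[t][a] @ b            # one application of U_t^P(b; a, o), inlined
        unnorm = Obs[t + 1][o, :] * predicted
        b = unnorm / unnorm.sum()
        t += 1
    return b
```

Theorem 2.1 below says that, in expectation over histories drawn from any policy, this windowed filter forgets the wrong prior D at a rate exponential in L.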
The main technical result of [GMR22], stated as Theorem 2.1 below (with slight differences, see Appendix B), proves that if the POMDP P is γ-observable for some γ > 0, then for a wide range of distributions D, for sufficiently large L, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) will be close to (i.e., "contract to") the true belief state b^P_h(a_{1:h−1}, o_{2:h}).

Theorem 2.1 (Belief contraction; Theorems 4.1 and 4.7 of [GMR22]). Consider any γ-observable POMDP P, any ε > 0, and L ∈ N so that

L ≥ C · min{ log(1/(εφ)) · log(log(1/φ)/ε) / γ², log(1/(εφ)) / γ⁴ }.

Fix any π ∈ Π^gen, and suppose that D ∈ ∆(S) satisfies b^P_h(a_{1:h−L−1}, o_{2:h−L})(s) / D(s) ≤ 1/φ for all (a_{1:h−1}, o_{2:h}). Then

E^P_{(a_{1:h−1}, o_{2:h}) ∼ π} ‖b^P_h(a_{1:h−1}, o_{2:h}) − b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D)‖_1 ≤ ε.

2.4 Visitation distributions

For a POMDP P = (H, S, A, O, b_1, R, T, O), policy π ∈ Π^gen, and step h ∈ [H], the (latent) state visitation distribution at step h is d^{P,π}_{S,h} ∈ ∆(S) defined by d^{P,π}_{S,h}(s) := P^P_{s_{1:H} ∼ π}(s_h = s), and the observation visitation distribution at step h is d^{P,π}_{O,h} ∈ ∆(O) defined by d^{P,π}_{O,h} := P^P_{o_{1:H} ∼ π}(o_h = ·) = O_h · d^{P,π}_{S,h}.

As will be discussed in Section 4.1, Theorem 2.1 implies that the transitions of the POMDP P can be approximated by those of an MDP M whose states consist of L-tuples of actions and observations. Thus we will often deal with such MDPs M of the form M = (H, Z, A, R, T) where the set of states has the product structure Z = A^L × O^L. We then define o : Z → O by o(a_{1:L}, o_{1:L}) = o_L. For such MDPs, we define the observation visitation distributions by d^{M,π}_{O,h}(o) := P^M_{z_{1:H} ∼ π}(o(z_h) = o). Finally, for o ∈ O, we let e_o ∈ R^O denote the o-th unit vector; thus, for instance, we have d^{P,π}_{O,h}(o) = ⟨e_o, d^{P,π}_{O,h}⟩.

3 Main result: learning observable POMDPs in quasipolynomial time

Theorem 3.1 below states our main guarantee for BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP; Algorithm 3).

Theorem 3.1. Given any α, β, γ > 0 and γ-observable POMDP P, BaSeCAMP with parameter settings as described in Section C.1 outputs a policy which is α-suboptimal with probability at least 1 − β, using time and sample complexity bounded by (OA)^{CL} · log(1/β), where C > 1 is a constant and

L = min{ log(HSO/(αγ)) / γ⁴, log²(HSO/(αγ)) / γ² }.

It is natural to ask whether the complexity guarantee of Theorem 3.1 can be improved further. [GMR22, Theorem 6.4] shows that under the Exponential Time Hypothesis, there is no algorithm for planning in γ-observable POMDPs which runs in time (SAHO)^{o(log(SAHO/α)/γ)} and produces α-suboptimal policies, even if the POMDP is known. Thus, up to polynomial factors in the exponent, Theorem 3.1 is optimal. It is plausible, however, that there could be an algorithm which runs in quasipolynomial time yet only needs polynomially many samples; we leave this question for future work.

4 Algorithm description

4.1 Approximating P with an MDP

A key consequence of observability is that, by the belief contraction result of Theorem 2.1, the POMDP P is well-approximated by an MDP M of quasi-polynomial size. In more detail, we will apply Theorem 2.1 with φ = 1/S, D = Unif(S), and some L = poly(log(S/ε)/γ) sufficiently large so as to satisfy the requirement of the theorem statement. The MDP M has state space Z := A^L × O^L, horizon H, action space A, and transitions P^M_h(·|z_h, a_h) which are defined via a belief update on the approximate belief state b^{apx,P}_h(z_h; Unif(S)). (For simplicity, descriptions of the reward function of M are omitted; we refer the reader to the appendix for the full details of the proof.) In particular, for a state z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z of M, action a_h, and subsequent observation o_{h+1} ∈ O, define

P^M_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h) := e^⊤_{o_{h+1}} · O^P_{h+1} · T^P_h(a_h) · b^{apx,P}_h(z_h; Unif(S)).   (1)

The above definition should be compared with the probability of observing o_{h+1} given history (a_{1:h}, o_{2:h}) and policy π when interacting with the POMDP P, which is

P^P_{o_{h+1} ∼ π}(o_{h+1} | a_{1:h}, o_{2:h}) = e^⊤_{o_{h+1}} · O^P_{h+1} · T^P_h(a_h) · b^P_h(a_{1:h−1}, o_{2:h}).   (2)

Theorem 2.1 gives that ‖b^P_h(a_{1:h−1}, o_{2:h}) − b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; Unif(S))‖_1 is small in expectation under any general policy π, which, using (1) and (2), gives that, for all π ∈ Π^gen and h ∈ [H],

E_{a_{1:h}, o_{2:h} ∼ π} ∑_{o_{h+1} ∈ O} |P^M_h(o_{h+1} | z_h, a_h) − P^P_h(o_{h+1} | a_{1:h}, o_{2:h})| ≤ ε.   (3)

(Above we have written z_h = (a_{h−L:h−1}, o_{h−L+1:h}) and, via abuse of notation, P^M_h(o_{h+1} | z_h, a_h) in place of P^M_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h).) The inequality (3) establishes that the dynamics of P under any policy may be approximated by those of the MDP M. Crucially, this implies that there exists a deterministic Markov policy for M which is near-optimal among general policies for P; the set of such Markov policies for M is denoted by Π^markov_Z. Because of the Markovian structure of M, such a policy can be found in time polynomial in the size of M (which is quasi-polynomial in the underlying problem parameters), if M is known. Of course, M is not known.

Approximately learning the MDP M. These observations suggest the following model-based approach of trying to learn the transitions of M. Suppose that we know a sequence of general policies π^1, . . . , π^H (abbreviated as π^{1:H}) so that for each h, π^h visits a uniformly random state of P at step h − L (i.e. d^{P,π^h}_{S,h−L} = Unif(S)). Then we can estimate the transitions of M as follows: play π^h for h − L − 1 steps and then play L + 1 random actions, generating a trajectory (a_{1:h}, o_{2:h+1}). Conditioned on z_h = (a_{h−L:h−1}, o_{h−L+1:h}) and final action a_h, the last observation of this sample trajectory, o_{h+1}, would give an unbiased draw from the transition distribution P^M_h(·|z_h, a_h). Repeating this procedure would allow estimation of P^M_h (see Lemma E.1). This idea is formalized in the procedure ConstructMDP (Algorithm 1): given a sequence of general policies π^1, . . . , π^H (abbreviated π^{1:H}), ConstructMDP constructs an MDP, denoted M̂ = M̂(π^{1:H}), which empirically approximates M using the sampling procedure described above. For technical reasons, M̂ actually has state space Z̄ := A^L × Ō^L, where Ō := O ∪ {o_sink} and o_sink is a special "sink observation" so that, after o_sink is observed, all future observations are also o_sink. Furthermore, we remark that d^{P,π^h}_{S,h−L} does not have to be exactly uniform – it suffices if π^h visits all states of P at step h − L with non-negligible probability.
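The counting step behind this estimation can be sketched in a few lines (hypothetical encoding; the name estimate_transitions and the trajectory layout are ours, and we assume h > L so the window is full):

```python
# A sketch of the empirical transition estimate for layer h of M-hat (Algorithm 1):
# each trajectory is (actions, observations) from playing pi^h for h - L - 1 steps
# and then L + 1 uniformly random actions; actions[i] = a_{i+1}, observations[i] = o_{i+2}.
from collections import Counter, defaultdict

def estimate_transitions(trajectories, h, L, N1):
    counts = defaultdict(Counter)                       # (z_h, a_h) -> Counter over o_{h+1}
    for actions, observations in trajectories:
        z_h = (tuple(actions[h - L - 1:h - 1]),         # a_{h-L:h-1}
               tuple(observations[h - L - 1:h - 1]))    # o_{h-L+1:h}
        counts[(z_h, actions[h - 1])][observations[h - 1]] += 1   # record (a_h, o_{h+1})
    p_hat = {}
    for key, ctr in counts.items():
        n = sum(ctr.values())
        if n >= N1:                                     # enough visits: keep the empirical law
            p_hat[key] = {o: c / n for o, c in ctr.items()}
        # else: Algorithm 1 routes (z_h, a_h) to the sink observation o_sink instead
    return p_hat
```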
4.2 Exploration via barycentric spanners

The above procedure for approximating M with M̂ omits a crucial detail: how can we find the "exploratory policies" π^{1:H}? Indeed, a major obstacle to finding such policies is that we never directly observe the states of P. By repeatedly playing a policy π on P, we can estimate the induced observation visitation distribution d^{P,π}_{O,h−L}, which is related to the state visitation distribution via the equality O^†_{h−L} · d^{P,π}_{O,h−L} = d^{P,π}_{S,h−L}. Unfortunately, the matrix O_{h−L} is still unknown, and in general unidentifiable. On the positive side, we can attempt to learn M layer by layer – in particular, when learning the h-th layer, we can assume that we have learned previous layers, i.e., d^{M̂,π}_{O,h−L} approximates d^{M,π}_{O,h−L}, and therefore d^{P,π}_{O,h−L}. Even though M̂ does not have underlying latent states, we can define "formal" latent state distributions on M̂ in analogy with P, i.e. d^{M̂,π}_{S,h−L} := O^†_{h−L} · d^{M̂,π}_{O,h−L}. But this does not seem helpful, again because O_{h−L} is unknown. Our first key insight is that a policy π^h, for which d^{M̂,π^h}_{S,h−L} puts non-negligible mass on all states, can be found (when it exists) via knowledge of M̂ and the technique of barycentric spanners – all without ever explicitly computing d^{M̂,π^h}_{S,h−L}.

Barycentric spanners. Suppose we knew that the transitions of our empirical estimate M̂ approximate those of M up to step h, and we want to find a policy π^h for which the (formal) latent state distribution d^{M̂,π^h}_{S,h−L} is non-negligible on all states. Unfortunately, the set of achievable latent state distributions {O^†_{h−L} · d^{M̂,π}_{O,h−L} : π ∈ Π^gen} ⊂ R^S is defined implicitly, via the unknown observation matrix O_{h−L}. But we do have access to X_{M̂,h−L} := {d^{M̂,π}_{O,h−L} : π ∈ Π^gen} ⊂ R^O, the set of achievable distributions over the observation at step h − L. In particular, for any reward function on observations at step h − L, we can efficiently (by dynamic programming) find a policy π that maximizes reward on M̂ over all policies. In other words, we can solve linear optimization problems over X. By a classic result [AK08], we can thus efficiently find a barycentric spanner for X_{M̂,h−L}:

Definition 4.1 (Barycentric spanner). Consider a subset X ⊂ R^d. For B ≥ 1, a set X′ ⊂ X of size d is a B-approximate barycentric spanner of X if each x ∈ X can be expressed as a linear combination of elements in X′ with coefficients in [−B, B].

Using the guarantee of [AK08] (restated as Lemma D.1) applied to the set X_{M̂,h−L}, we can find, in time polynomial in the size of M̂, a 2-approximate barycentric spanner π̃^1, . . . , π̃^O of X_{M̂,h−L}; this procedure is formalized in BarySpannerPolicy (Algorithm 2), and a sketch of the underlying [AK08] construction appears below. Thus, for any policy π, the observation distribution d^{M̂,π}_{O,h−L} induced by π is a linear combination of the distributions {d^{M̂,π̃^i}_{O,h−L} : i ∈ [O]} with coefficients in [−2, 2]. Since d^{M̂,π}_{S,h−L} = O^†_{h−L} · d^{M̂,π}_{O,h−L} for all π, it follows (by applying O^†_{h−L} to both sides of the linear combination) that the formal latent state distribution d^{M̂,π}_{S,h−L} induced by π is a linear combination of the formal latent state distributions {d^{M̂,π̃^i}_{S,h−L} : i ∈ [O]} with the same coefficients. Now, intuitively, the randomized mixture policy π_mix = (1/O)(π̃^1 + · · · + π̃^O) should explore every reachable state; indeed, we show the following guarantee for BarySpannerPolicy:

Lemma 4.1 (Informal version & special case of Lemma D.2). In the above setting, under some technical conditions, for all s ∈ S and π ∈ Π^gen, it holds that d^{M̂,π_mix}_{S,h−L}(s) ≥ (1/(4O²)) · d^{M̂,π}_{S,h−L}(s).
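To make the [AK08] subroutine concrete, here is a minimal sketch of the spanner construction given only a linear optimization oracle over the implicitly defined set X ⊂ R^d (in Algorithm 2, each oracle call is a dynamic program over M̂ and d = O); the function names, the tolerance, and the assumption that X spans R^d are ours:

```python
# A sketch of the [AK08] barycentric spanner construction. oracle(r) returns some
# argmax_{x in X} <r, x>; since X need not be symmetric, we try both signs of r.
import numpy as np

def linear_max_abs(oracle, r):
    x_plus, x_minus = oracle(r), oracle(-r)
    return x_plus if abs(r @ x_plus) >= abs(r @ x_minus) else x_minus

def cofactor_direction(X, i):
    # r such that <r, v> = det(X with column i replaced by v), via cofactor expansion
    r = np.zeros(X.shape[0])
    for j in range(X.shape[0]):
        minor = np.delete(np.delete(X, j, axis=0), i, axis=1)
        r[j] = (-1) ** (i + j) * np.linalg.det(minor)
    return r

def barycentric_spanner(oracle, d, B=2.0, tol=1e-12):
    X = np.eye(d)                       # placeholder basis, replaced column by column
    for i in range(d):
        X[:, i] = linear_max_abs(oracle, cofactor_direction(X, i))
    improved = True
    while improved:                     # swap while some swap grows |det(X)| by > B
        improved = False
        for i in range(d):
            r = cofactor_direction(X, i)
            x = linear_max_abs(oracle, r)
            if abs(r @ x) > B * abs(r @ X[:, i]) + tol:
                X[:, i] = x
                improved = True
    return X                            # columns form a B-approximate barycentric spanner
```

Each accepted swap multiplies |det(X)| by more than B, which is what bounds the total number of oracle calls, matching the O(O² log O) call count noted in Algorithm 2.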
But recall the original goal: a policy π^h which explores P – not M̂. Unfortunately, those states in P which can only be reached with probability that is small compared to the distance between P and M̂ may be missed by π_mix. When we use π_mix to compute the next-step transitions of M̂, this will lead to additional error when applying belief contraction (Theorem 2.1), and therefore additional error between P and M̂. If not handled carefully, this error will compound exponentially over layers.

4.3 The full algorithm via iterative discovery

The solution to the dilemma discussed above is to not try to construct our estimate M̂ of M layer by layer, hoping that at each layer we can explore all reachable states of P despite making errors in earlier layers of M̂. Instead, we have to be able to go back and "fix" errors in our empirical estimates at earlier layers.

Algorithm 1 ConstructMDP
1: procedure ConstructMDP(L, N_0, N_1, π^1, . . . , π^H)
2:   for 1 ≤ h ≤ H do
3:     Let π̂^h be the policy which follows π^h for the first max{h − L − 1, 0} steps and thereafter chooses uniformly random actions.
4:     Draw N_0 independent trajectories from the policy π̂^h:
5:     Denote the data from the i-th trajectory by a^i_{1:H−1}, o^i_{2:H}, for i ∈ [N_0].
6:     Set z^i_h = (a^i_{h−L:h−1}, o^i_{h−L+1:h}) for all i ∈ [N_0], h ∈ [H].
7:     // Construct the transitions P^M̂_h(z_{h+1} | z_h, a_h) as follows:
8:     for each z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z, a_h ∈ A do
9:       // Define P^M̂_h(· | z_h, a_h) to be the empirical distribution of z^i_{h+1} | z^i_h, a^i_h, as follows:
10:      For o_{h+1} ∈ O, define ϕ(o_{h+1}) := |{i : (a^i_{max{1,h−L}:h}, o^i_{max{2,h−L+1}:h+1}) = (a_{max{1,h−L}:h}, o_{max{2,h−L+1}:h+1})}|.
11:      if Σ_{o_{h+1}} ϕ(o_{h+1}) ≥ N_1 then
12:        Set P^M̂_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h) := ϕ(o_{h+1}) / Σ_{o′_{h+1}} ϕ(o′_{h+1}) for all o_{h+1} ∈ O.
13:        Set R^M̂_h(z_h, a_h) := R^P_h(o^i_h) for some i with o^i_h = o_h.   ▷ R^P_h(o^i_h) is observed.
14:      else
15:        Let P^M̂_h(· | z_h, a_h) put all its mass on (a_{h−L+1:h}, (o_{h−L+2:h}, o_sink)).
16:    for each z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z̄ \ Z and a_h ∈ A do
17:      Let P^M̂_h(· | z_h, a_h) put all its mass on (a_{h−L+1:h}, (o_{h−L+2:h}, o_sink)).
18:  Let M̂ denote the MDP (Z̄, H, A, R^M̂, P^M̂).
19:  return the MDP M̂, which we denote by M̂(π^{1:H}).

Algorithm 2 BarySpannerPolicy
1: procedure BarySpannerPolicy(M̂, h)   ▷ M̂ is an MDP on state space Z̄, horizon H; h ∈ [H]
2:   if h ≤ L then return an arbitrary general policy.
3:   Let O be the linear optimization oracle which, given r ∈ R^O, returns argmax_{π ∈ Π^markov_Z̄} ⟨r, d^{π,M̂}_{O,h−L}⟩ and max_{π ∈ Π^markov_Z̄} ⟨r, d^{π,M̂}_{O,h−L}⟩.   ▷ Note that O can be implemented in time Õ(|Z̄| · HO) using dynamic programming.
4:   Using the algorithm of [AK08] with oracle O, compute policies {π^1, . . . , π^O} so that {d^{π^i,M̂}_{O,h−L} : i ∈ [O]} is a 2-approximate barycentric spanner of {d^{π,M̂}_{O,h−L} : π ∈ Π^markov_Z̄}.   ▷ This algorithm requires only O(O² log O) calls to O.
5:   return the general policy (1/O) · Σ_{i=1}^O π^i.
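The oracle O in step 3 of Algorithm 2 is plain finite-horizon dynamic programming, since the objective ⟨r, d^{π,M̂}_{O,h−L}⟩ is just an expected one-shot reward r(o(z)) collected at step t = h − L. A tabular sketch (the array layout is an assumption of ours):

```python
# A sketch of the linear optimization oracle used by BarySpannerPolicy: maximize
# <r, d^{pi,M}_{O,t}> over Markov policies by backward induction. Assumed encoding:
# trans[u] has shape (Z, A, Z) with trans[u][z, a, z'] = P_u(z' | z, a);
# obs_of_state is an int array with obs_of_state[z] = o(z); b1 is the start distribution.
import numpy as np

def linear_opt_oracle(trans, obs_of_state, b1, r, t):
    V = r[obs_of_state]                 # value at step t: the one-shot reward r(o(z))
    pi = {}                             # greedy Markov policy for steps u = 1, ..., t-1
    for u in range(t - 1, 0, -1):       # backward induction
        Q = trans[u] @ V                # Q[z, a] = sum_z' P_u(z' | z, a) * V[z']
        pi[u] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, float(b1 @ V)            # maximizing policy and the oracle value <r, d_{O,t}>
```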
This task is performed in our main algorithm, BaSeCAMP (Algorithm 3). For some K ∈ N, BaSeCAMP runs for a total of K iterations: at each iteration k ∈ [K], BaSeCAMP defines H general policies, π̄^{k,1}, . . . , π̄^{k,H} ∈ Π^gen (abbreviated π̄^{k,1:H}; step 4). The algorithm's overall goal is that, for some k, for each h ∈ [H], π̄^{k,h} explores all latent states at step h − L that are reachable by any policy. To ensure that this condition holds at some iteration, BaSeCAMP performs the following two main steps for each iteration k: first, it calls Algorithm 1 to construct an MDP, denoted M̂^{(k)}, using the policies π̄^{k,1:H}. Then, for each h ∈ [H], it passes the tuple (M̂^{(k)}, h) to BarySpannerPolicy, which returns as output a general policy, π^{k+1,h,0}. Then, policies π^{k+1,h} are produced (step 7) by averaging π^{k+1,h′,0} for all h′ ≥ h. The policies π^{k+1,h} are then mixed into the policies π̄^{k+1,h} for the next iteration k + 1. It follows from properties of BarySpannerPolicy that, in the event that the policies π̄^{k,h} are not sufficiently exploratory, one of the new policies π^{k+1,h} visits some latent state (s, h′) ∈ S × [H] which was not previously visited by π̄^{k,1:H} with significant probability. Thus, after a total of K = O(SH) iterations k, it follows that we must have visited all (reachable) latent states of the POMDP. At the end of these K iterations, BaSeCAMP computes an optimal policy for each M̂^{(k)} and returns the best of them (as evaluated on fresh trajectories drawn from P; step 12).

Algorithm 3 BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP)
1: procedure BaSeCAMP(L, N_0, N_1, α, β, K)
2:   Initialize π^{1,1}, . . . , π^{1,H} to be arbitrary policies.
3:   for k ∈ [K] do
4:     For each h ∈ [H], set π̄^{k,h} = (1/k) · Σ_{k′=1}^{k} π^{k′,h}.
5:     Run ConstructMDP(L, N_0, N_1, π̄^{k,1:H}) and let its output be M̂^{(k)}.
6:     For each h ∈ [H], let π^{k+1,h,0} be the output of BarySpannerPolicy(M̂^{(k)}, h).
7:     For each h ∈ [H], define π^{k+1,h} := (1/(H − h + 1)) · Σ_{h′=h}^{H} π^{k+1,h′,0}.
8:   // Choose the best optimal policy amongst all M̂^{(k)}
9:   for k ∈ [K] do
10:    Let π^k_⋆ denote an optimal policy of M̂^{(k)}.
11:    Execute π^k_⋆ for 100H² log(K/β)/α² trajectories and let the mean reward across them be r̂^k.
12:  Let k^⋆ = argmax_{k∈[K]} r̂^k.
13:  return π^{k^⋆}_⋆.
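The control flow of BaSeCAMP itself is short once the subroutines are abstracted away; a sketch with stub callables (all names, and the representation of mixture policies as weighted lists, are ours, with bary_spanner_policy assumed to return such a weighted list):

```python
# A sketch of BaSeCAMP's outer loop (Algorithm 3). General policies are represented
# as lists of (weight, policy) pairs; construct_mdp, bary_spanner_policy,
# optimal_policy, and evaluate stand in for the paper's subroutines.
import numpy as np

def basecamp(H, K, construct_mdp, bary_spanner_policy, optimal_policy, evaluate):
    pi = {(1, h): [(1.0, "arbitrary")] for h in range(1, H + 1)}    # step 2
    models = []
    for k in range(1, K + 1):
        # step 4: pi_bar^{k,h} averages pi^{k',h} over k' = 1, ..., k
        pi_bar = {h: [(w / k, p) for kp in range(1, k + 1) for (w, p) in pi[(kp, h)]]
                  for h in range(1, H + 1)}
        M_hat = construct_mdp([pi_bar[h] for h in range(1, H + 1)])          # step 5
        models.append(M_hat)
        pi0 = {h: bary_spanner_policy(M_hat, h) for h in range(1, H + 1)}    # step 6
        for h in range(1, H + 1):                                            # step 7
            pi[(k + 1, h)] = [(w / (H - h + 1), p)
                              for hp in range(h, H + 1) for (w, p) in pi0[hp]]
    # steps 9-12: pick the best of the K learned models by fresh-rollout evaluation
    best = int(np.argmax([evaluate(optimal_policy(M)) for M in models]))
    return optimal_policy(models[best])
```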
5 Proof Outline

We now briefly outline the proof of Theorem 3.1; further details may be found in the appendix. The high-level idea of the proof is to show that the algorithm BaSeCAMP makes a given amount of progress for each iteration k, as specified in the following lemma:

Lemma 5.1 ("Progress lemma": informal version of Lemma I.2). Fix any iteration k in Algorithm 3, step 3. Then, for some parameters δ, φ with α ≫ δ ≫ φ > 0, one of the following statements holds:
1. Any (s, h) with d^{P,π̄^{k,h}}_{S,h−L}(s) < φ satisfies d^{P,π}_{S,h−L}(s) ≤ δ for all general policies π.
2. There is some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ · H²S, yet, for all k′ > k, d^{P,π̄^{k′,h}}_{S,h−L}(s) ≥ φ · H²S.

Given Lemma 5.1, the proof of Theorem 3.1 is fairly straightforward. In particular, each (h, s) ∈ [H] × S can only appear as the specified pair in item 2 of the lemma for a single iteration k. Thus, as long as K > HS, item 1 must hold for some value of k, say k^⋆ ∈ [K]. In turn, it is not difficult to show from this that the MDP M̂^{(k^⋆)} is a good approximation of P, in the sense that for any general policy π, the values of π in M̂^{(k^⋆)} and in P are close (Lemma H.3). Thus, the optimal policy π^{k^⋆}_⋆ of M̂^{(k^⋆)} will be a near-optimal policy of P, and steps 9 through 12 of BaSeCAMP will identify either the policy π^{k^⋆}_⋆ or some other policy π^{k′}_⋆ which has even higher reward on P.

Proof of the progress lemma. The bulk of the proof of Theorem 3.1 consists of the proof of Lemma 5.1, which we proceed to outline. Suppose that item 1 of the lemma does not hold, meaning that there is some π ∈ Π^gen and some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ yet d^{P,π}_{S,h−L}(s) > δ; i.e., π̄^{k,h} does not explore (s, h − L), but the policy π does. Roughly speaking, BaSeCAMP ensures that item 2 holds in this case, using the following two steps:

(A) First, we show that ⟨e_s, O^†_{h−L} · d^{M̂^{(k)},π}_{O,h−L}⟩ ≥ δ′, where δ′ is some parameter satisfying δ ≫ δ′ ≫ φ. In words, the estimate of the underlying state distribution provided by M̂^{(k)} also has the property that some policy π visits (s, h − L) with non-negligible probability (namely, δ′). While this statement would be straightforward if M̂^{(k)} were a close approximation of P, this is not necessarily the case (indeed, if it were the case, then item 1 of Lemma 5.1 would hold). To circumvent this issue, we introduce a family of intermediate POMDPs indexed by H′ ∈ [H] and denoted P_{φ,H′}(π̄^{k,1:H}), which we call truncated POMDPs. Roughly speaking, the truncated POMDP P_{φ,H′}(π̄^{k,1:H}) diverts transitions away from all states at step H′ − L which π̄^{k,H′} does not visit with probability at least φ. This modification is made to allow Theorem 2.1 to be applied to P_{φ,H′}(π̄^{k,1:H}) and any general policy π. By doing so, we may show a one-sided error bound between P_{φ,H′}(π̄^{k,1:H}) and M̂^{(k)} (Lemma G.4) which, importantly, holds even when the policies π̄^{k,1:H} may fail to explore some states. It is this one-sided error bound which implies the lower bound on ⟨e_s, O^†_{h−L} · d^{M̂^{(k)},π}_{O,h−L}⟩.

(B) Second, we show that the policy π^{k+1,h,0} produced by BarySpannerPolicy in step 6 explores (s, h − L) with sufficient probability. To do so, we first use Lemma 4.1 to conclude that ⟨e_s, O^†_{h−L} · d^{M̂^{(k)},π^{k+1,h,0}}_{O,h−L}⟩ ≥ δ′/(4O²). The more challenging step is to use this fact to conclude a lower bound on d^{P,π^{k+1,h,0}}_{S,h−L}(s); unfortunately, the one-sided error bound between P and M̂^{(k)} that we used in the previous paragraph goes in the wrong direction here. The solution is to use Lemma H.3, which has the following consequence: either d^{P,π^{k+1,h,0}}_{S,h−L}(s) is not too small, or else the policy π^{k+1,h,0} visits some state at a step prior to h − L which was not sufficiently explored by any of the policies π̄^{k,1:H} (see Section C for further details). In either case π^{k+1,h,0} visits a state that was not previously explored, and the fact that π^{k+1,h,0} is mixed into π̄^{k′,h′} for k′ > k, h′ ≤ h (steps 4, 7 of BaSeCAMP) allows us to conclude item 2 of Lemma 5.1.

6 Conclusion

In this paper we have demonstrated the first quasipolynomial-time (and quasipolynomial-sample) algorithm for learning observable POMDPs. Several interesting directions for future work remain:
• It is straightforward to show that a γ-observable POMDP is Ω(γ/√S) weakly-revealing in the sense of [LCSJ22, Assumption 1]. Thus, the results of [JKKL20, LCSJ22] imply that γ-observable POMDPs can be learned with polynomially many samples, albeit by computationally inefficient algorithms. Thus, as discussed following Theorem 3.1, it is natural to wonder whether we can achieve the best of both worlds: is there a quasipolynomial-time algorithm that only needs polynomially many samples, or can one show a computational-statistical tradeoff?
• It is also natural to ask whether an analogue of Assumption 1.1 for the ℓ_2 norm, namely that ‖O_h b − O_h b′‖_2 ≥ γ‖b − b′‖_2 for all h ∈ [H], is sufficient for computationally efficient learnability. Even the planning version of this question (where the parameters of the POMDP are known and the problem is to find a near-optimal policy) is open.

Acknowledgments and Disclosure of Funding

N.G. is supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Fellowship. A.M. is supported by a Microsoft Trustworthy AI Grant, NSF Large CCF-1565235, a David and Lucile Packard Fellowship and an ONR Young Investigator Award. D.R. is supported by an Akamai Presidential Fellowship and a U.S. DoD NDSEG Fellowship.
1. What is the focus and contribution of the paper regarding solving unknown POMDPs?
2. What are the strengths of the proposed method, particularly in its theoretical analysis?
3. Do you have any concerns or questions about the method, such as its ability to include the optimal policy for the POMDP?
4. Are there any parts of the paper where additional detail or explanation would be helpful?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The paper introduces a method for approximately solving unknown POMDPs. The method relies on constructing an MDP approximation and a set of policies that incrementally improve coverage of that MDP's state space, thereby increasing the accuracy of the MDP approximation. The approach makes use of a novel exploration objective that uses the concept of barycentric spanners to ensure that every possible observation distribution can be represented as a linear combination of a small "core" set of observation distributions. The resulting algorithm has approximately optimal time complexity and quasi-polynomial sample complexity.

Strengths And Weaknesses

Strengths:
- The paper covers a lot of difficult theoretical ground and communicates the high-level ideas very clearly.
- The theoretical results appear to be correct, and the supplementary materials appear to be detailed enough that one could verify the claims presented in the main text, although I was only able to check them at a high level.
- The section "Overview of results" is extremely helpful for communicating the high-level ideas quickly, and for providing a roadmap of the remainder of the paper.

Weaknesses:
- Lines 135-141: I'm a little worried about the way the policies are defined here. It's fine to say a general policy is a distribution over deterministic policies, but when we execute that policy, we sample a deterministic policy from it, and follow that deterministic policy for the entire episode. My concern is: does this scheme necessarily include the optimal policy for the POMDP? Are all trajectories that can be achieved with a fully stochastic policy still supported under the resulting distribution? Is this construction allowed because the policy is conditioned on the entire history?
- I'm confused about a few things:
  a. Line 46: "A necessary first step towards solving the learning problem is having a computationally efficient planning algorithm." -> Is there a reason why a model-free approach wouldn't work?
  b. Line 120: Why is reward a function of observations O? I'm used to seeing it depend on states S.
  c. Lines 280-282: "the formal latent state distribution induced by π is a linear combination of the formal latent state distributions with the same coefficients" -> Why is this the case?
- In a few of the definitions and assumptions, I would have preferred to have more detail. However, I appreciate that the paper is quite short on space, so I understand that many sections have been significantly compressed already.
  a. Assumption 1.1 defines the distance between two distributions as the sum of the differences in probability for each outcome. Why use this particular definition for γ-observability?
  b. In section 2.2, while I appreciate that the notation is presented up front, it would be helpful to see an example for why we need to define the probability of an event P^P_π(E).
  c. In section 2.4, the presentation of the product space Z is confusing. It took until much later for me to realize that we need this because it's the form that the POMDP -> MDP conversion requires.
  d. In Algorithm 3, lines 3-4, there are multiple definitions of k, which is confusing.
  e. In line 331, I could not find a definition for e_s anywhere. I assumed this was a unit/one-hot vector, so that ⟨e_i, ·⟩ is equivalent to indexing the second term at position i. It would be helpful to have this notation explicitly defined, even if it's just in the appendix.

Review Summary: This paper seems useful for several reasons.
First, it introduces an algorithm for learning in POMDPs that achieves optimal time complexity and quasi-polynomial sample complexity. Second, it presents a novel exploration scheme that relies on the concept of barycentric spanners, rather than the ubiquitous "optimism under uncertainty" principle. While I can only verify the correctness at a high level, the material is presented clearly and the high-level ideas are very approachable. Unfortunately, space limitations preclude a more detailed treatment in the main text, but the supplementary materials appear to more than make up for it. Accept.

Questions

(See numbered items above.)

Limitations

N/A
The starting point for our approach is a result of [GMR22] (restated as Theorem 2.1) which implies that the dynamics of an observable POMDP P may be approximated by those of a Markov decision processM with a quasi-polynomial number of states. If we knew the transitions ofM, then we could simply use dynamic programming to find an optimal policy forM, which would be guaranteed to be a near-optimal policy for P . Instead, we must learn the transitions ofM, for which it is necessary to explore all (reachable) states of the underlying POMDP P . A naïve approach to encourage exploration is to learn the transitions of P via forwards induction on the layer h, using, at each step h, our knowledge of the learned transitions at steps prior to h to find a policy which visits each reachable state at step h. Such an approach would lead to a policy cover, namely a collection of policies which visits all reachable latent states. A major problem with this approach is that the latent states are not observed: instead, we only see observations. Hence a natural approach might be to choose policies which lead to all possible observations at each step h. This approach is clearly insufficient, since, e.g., a single state could emit a uniform distribution over observations. Thus we instead compute the following stronger concept: for each step h, we consider the set X of all possible distributions over observations at 1Policy covers have been used in the special case of block MDPs [DKJ+19, MHKL20], namely where different states produce disjoint observations. step h under any general policy, and attempt to find a barycentric spanner of X , namely a small subset X ′ ⊂ X so that all other distributions in X can be expressed as a linear combination of elements of X ′ with bounded coefficients. By playing a policy which realizes each distribution in such a barycentric spanner X ′, we are able to explore all reachable latent states, despite having no knowledge about which states we are exploring. This discussion omits a key technical aspect of the proof, which is the fact that we can only compute a barycentric spanner for a set X corresponding to an empirical estimate M̂ ofM. A key innovation in our proof is a technique to dynamically use such barycentric spanners, even when M̂ is inaccurate, to improve the quality of the estimate M̂. We remark that a similar dynamic usage of barycentric spanners appeared in [FKQR21]; we discuss in the appendix why the approach of [FKQR21], as well as related approaches involving nonstationary MDPs [WL21, WDZ22, WYDW21], cannot be applied here. Taking a step back, few models in reinforcement learning (beyond tabular or linear MDPs) admit computationally efficient end-to-end learning algorithms – indeed, our main contribution is a way to circumvent the daunting task of implementing any of the various constrained optimistic planning oracles assumed in previous optimism-based approaches. We hope that our techniques may be useful in other contexts for avoiding computational intractability without resorting to oracles. 2 Preliminaries For sets T ,Q, let QT denote the set of mappings from T → Q. Accordingly, we will identify RT with |T |-dimensional Euclidean space, and let ∆(T ) ⊂ RT consist of distributions on T . For d ∈ N and a vector v ∈ Rd, we denote its components by v(1), . . . ,v(d). For integers a ≤ b, we abbreviate a sequence (xa, xa+1, . . . , xb) by xa:b. If a > b, then we let xa:b denote the empty sequence. 
Sometimes we refer to negative indices of a sequence x1:n: in such cases the elements with negative indices may be taken to be aribtrary, as they will never affect the value of the expression. See Appendix B.1 for clarification. For x ∈ R, we write [x]+ = max{x, 0}, and [x]− = −min{x, 0}. For sets S, T , the notation S ⊂ T allows for the possibility that S = T . 2.1 Background on POMDPs In this paper we address the problem of learning finite-horizon partially observable Markov decision processes (POMDPs). Formally, a POMDP P is a tuple P = (H,S,A,O, b1, R,T,O), where: H ∈ N is a positive integer denoting the horizon length; S is a finite set of states of size S := |S|; A is a finite set of actions of size A := |A|; O is a finite set of observations of size O := |O|; b1 is the initial distribution over states; and R,T,O are given as follows. First, R = (R1, . . . , RH) denotes a tuple of reward functions, where, for h ∈ [H], Rh : O → [0, 1] gives the reward received as a function of the observation at step h. (It is customary in the literatue [JKKL20, LCSJ22] to define the rewards as being a function of the observations as opposed to being observed by the algorithm as separate information.) Second, T = (T1, . . . ,TH) is a tuple of transition kernels, where, for h ∈ [H], s, s′ ∈ S, a ∈ A, Th(s′|s, a) denotes the probability of transitioning from s to s′ at step h when action a is taken. For each a ∈ A, we will write Th(a) ∈ RS×S to denote the matrix with Th(a)s,s′ = Th(s|s′, a). Third, O = (O1, . . . ,OH) is a tuple of observation matrices, where for h ∈ [H], s ∈ S, o ∈ O, (Oh)o,s, also written as Oh(o|s), denotes the probability observing o while in state s at step h. Thus Oh ∈ RO×S for each h. Sometimes, for disambiguation, we will refer to the states S as the latent states of the POMDP P . The interaction (namely, a single episode) with P proceeds as follows: initially a state s1 ∼ b1 is drawn from the initial state distribution. At each step 1 ≤ h < H , an action ah ∈ A is chosen (as a function of previous observations and actions taken), P transitions to a new state sh+1 ∼ Th(·|sh, ah), a new observation is observed, oh+1 ∼ Oh+1(·|sh+1), and a reward of Rh+1(oh+1) is received (and observed). We emphasize that the underlying states s1:H are never observed directly. As a matter of convention, we assume that no observation is observed at step h = 1; thus the first observation is o2. 2.2 Policies, value functions A deterministic policy σ is a tuple σ = (σ1, . . . , σH), where σh : Ah−1 ×Oh−1 → A is a mapping from histories up to step h, namely tuples (a1:h−1, o2:h), to actions. We will denote the collection of histories up to step h by Hh := Ah−1 × Oh−1 and the set of deterministic policies by Πdet, meaning that Πdet = ∏H h=1AHh . A general policy π is a distribution over deterministic policies; the set of general policies is denoted by Πgen := ∆( ∏H h=1AHh). Given a general policy π ∈ Πgen, we denote by σ ∼ π the draw of a deterministic policy from the distribution π; to execute a general policy π, a sample σ ∼ π is first drawn and then followed for an episode of the POMDP. For a general policy π and some event E , write PPa1:H−1,o2:H ,s1:H∼π(E) to denote the probability of E when s1:H , a1:H−1, o2:H is drawn from a trajectory following policy π for the POMDP P . At times we will compress notation in the subscript, e.g., write PPπ (E) if the definition of s1:H , a1:H−1, o2:H is evident. In similar spirit, we will write EPa1:H−1,o2:H ,s1:H∼π[·] to denote expectations. 
Given a general policy π ∈ Πgen, define the value function for π at step 1 by V π,P1 (∅) = EPo1:H∼π [∑H h=2Rh(oh) ] , namely as the expected reward received by following π. Our objective is to find a policy π which maximizes V π,P1 (∅), in the PAC-RL model [KS02]: in particular, the algorithm does not have access to the transition kernel, reward function, or observation matrices of P , but can repeatedly choose a general policy π and observe the following data from a single trajectory drawn according to π: (a1, o2, R2(o2), a2, . . . , aH−1, oH , RH(oH)). The challenge is to choose such policies π which can sufficiently explore the environment. Finally, we remark that Markov decision processes (MDPs) are a special case of POMDPs where O = S and Oh(o|s) = 1[o = s] for all h ∈ [H], o, s ∈ S . For the MDPs we will consider, the initial state distribution will be left unspecified (indeed, the optimal policy of an MDP does not depend on the initial state distribution). Thus, we consider MDPsM described by a tupleM = (H,S,A, R,T). 2.3 Belief contraction A prerequisite for a computationally efficient learning algorithm in observable POMDPs is a computationally efficient planning algorithm, i.e. an algorithm to find an approximately optimal policy when the POMDP is known. Recent work [GMR22] obtains such a planning algorithm taking quasipolynomial time; we now introduce the key tools used in [GMR22], which are used in our algorithm as well. Consider a POMDP P = (H,S,A,O, b1, R,T,O). Given some h ∈ [H] and a history (a1:h−1, o2:h) ∈ Hh, the belief state bPh (a1:h−1, o2:h) ∈ ∆(S) is given by the distribution of the state sh conditioned on taking actions a1:h−1 and observing o2:h in the first h steps. Formally, the belief state is defined inductively as follows: bP1 (∅) = b1, and for 2 ≤ h ≤ H and any (a1:h−1, o2:h) ∈ Hh, bPh (a1:h−1, o2:h) = U P h−1(b P h−1(a1:h−2, o2:h−1); ah−1, oh), where for b ∈ ∆(S), a ∈ A, o ∈ O, UPh (b; a, o) ∈ ∆(S) is the distribution defined by UPh (b; a, o)(s) := Oh+1(o|s) · ∑ s′∈S b(s ′) · Th(s|s′, a)∑ x∈S Oh+1(o|x) ∑ s′∈S b(s ′) · Th(x|s′, a) . We call UPh the belief update operator. The belief state b P h (a1:h−1, o2:h) is a sufficient statistic for the sequence of future actions and observations under any deterministic policy. In particular, the optimal policy can be expressed as a function of the belief state, rather than the entire history. Thus, a natural approach to plan a near-optimal policy is to find a small set B ⊂ ∆(S) of distributions over states such that each possible belief state bPh (a1:h−1, o2:h) is close to some element of B. Unfortunately, this is not possible, even in observable POMDPs [GMR22, Example D.2]. The main result of [GMR22] circumvents this issue by showing that there is a subset B ⊂ ∆(S) of quasipolynomial size (depending on P) so that bPh (a1:h−1, o2:h) is close to some element of B in expectation under any given policy. To state the result of [GMR22], we need to introduce approximate belief states: Definition 2.1 (Approximate belief state). Fix a POMDP P = (H,S,A,O, b1, R,T,O). For any distribution D ∈ ∆(S), as well as any choices of 1 ≤ h ≤ H and L ≥ 0, the approximate belief state bapx,Ph (ah−L:h−1, oh−L+1:h; D) is defined as follows, via induction on L: in the case that L = 0, then we define bapx,Ph (∅; D) := { b1 : h = 1 D : h > 1, and for the case that L > 0, define, for h > L, bapx,Ph (ah−L:h−1, oh−L+1:h; D) := U P h−1(b apx,P h−1 (ah−L:h−2, oh−L+1:h−1; D); ah−1, oh). 
We extend the above definition to the case that h ≤ L by defining bapx,Ph (ah−L:h−1, oh−L+1:h; D) := b apx,P h (amax{1,h−L}:h−1, omax{2,h−L+1}:h; D). In words, the approximate belief state bapx,Ph (ah−L:h−1, oh−L+1:h; D) is obtained by applying the belief update operator starting from the distribution D at step h−L, if h−L > 1 (and otherwise, starting from b1, at step 1). At times, we will drop the superscript P from the above definitions and write bh,bapxh . The main technical result of [GMR22], stated as Theorem 2.1 below (with slight differences, see Appendix B), proves that if the POMDP P is γ-observable for some γ > 0, then for a wide range of distributions D , for sufficiently large L, the approximate belief state bapx,Ph (ah−L:h−1, oh−L+1:h; D) will be close to (i.e., “contract to”) the true belief state bPh (a1:h−1, o2:h). Theorem 2.1 (Belief contraction; Theorems 4.1 and 4.7 of [GMR22]). Consider any γ-observable POMDP P , any > 0 and L ∈ N so that L ≥ C ·min { log(1/( φ)) log(log(1/φ)/ ) γ2 , log(1/( φ)) γ4 } . Fix any π ∈ Πgen, and suppose that D ∈ ∆S satisfies b P h (a1:h−L−1,o2:h−L)(s) D(s) ≤ 1 φ for all (ah−1, o2:h). Then EP(a1:h−1,o2:h)∼π ∥∥∥bPh (a1:h−1, o2:h)− bapx,Ph (ah−L:h−1, oh−L+1:h; D)∥∥∥ 1 ≤ . 2.4 Visitation distributions For a POMDP P = (H,S,A,O, b1, R,T,O), policy π ∈ Πgen, and step h ∈ [H], the (latent) state visitation distribution at step h is dP,πS,h ∈ ∆(S) defined by d P,π S,h (s) := P P s1:H∼π(sh = s), and the observation visitation distribution at step h is dP,πO,h ∈ ∆(O) defined by d P,π O,h := P P o1:H∼π(oh = ·) = Oh · dP,πS,h . As will be discussed in Section 4.1, Theorem 2.1 implies that the transitions of the POMDP P can be approximated by those of an MDPM whose states consist of L-tuples of actions and observations. Thus we will often deal with such MDPsM of the formM = (H,Z,A, R,T) where the set of states has the product structure Z = AL × OL. We then define o : Z → O by o(a1:L, o1:L) = oL. For such MDPs, we define the observation visitation distributions by dM,πO,h (o) := P M z1:H∼π(o(zh) = o). Finally, for o ∈ O, we let eo ∈ R O denote the oth unit vector; thus, for instance, we have dP,πO,h (o) = 〈eo, d P,π O,h 〉. 3 Main result: learning observable POMDPs in quasipolynomial time Theorem 3.1 below states our main guarantee for BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP; Algorithm 3). Theorem 3.1. Given any α, β, γ > 0 and γ-observable POMDP P , BaSeCAMP with parameter settings as described in Section C.1 outputs a policy which is α-suboptimal with probability at least 1− β, using time and sample complexity bounded by (OA)CL log(1/β), where C > 1 is a constant and L = min { log(HSO/(αγ)) γ4 , log2(HSO/(αγ)) γ2 } . It is natural to ask whether the complexity guarantee of Theorem 3.1 can be improved further. [GMR22, Theorem 6.4] shows that under the Exponential Time Hypothesis, there is no algorithm for planning in γ-observable POMDPs which runs in time (SAHO)o(log(SAHO/α)/γ) and produces α-suboptimal policies, even if the POMDP is known. Thus, up to polynomial factors in the exponent, Theorem 3.1 is optimal. It is plausible, however, that there could be an algorithm which runs in quasipolynomial time yet only needs polynomially many samples; we leave this question for future work. 4 Algorithm description 4.1 Approximating P with an MDP. A key consequence of observability is that, by the belief contraction result of Theorem 2.1, the POMDP P is well-approximated by an MDPM of quasi-polynomial size. 
In more detail, we will apply Theorem 2.1 with φ = 1/S, D = Unif(S), and some L = poly(log(S/ )/γ) sufficiently large so as to satisfy the requirement of the theorem statement. The MDP M has state space Z := AL × OL, horizon H , action space A, and transitions PMh (·|zh, ah) which are defined via a belief update on the approximate belief state bapx,Ph (zh; Unif(S))2: in particular, for a state zh = (ah−L:h−1, oh−L+1:h) ∈ Z ofM, action ah, and subsequent observation oh+1 ∈ O, define PMh ((ah−L+1:h, oh−L+2:h+1)|zh, ah) := e>oh+1 ·O P h+1 · TPh (ah) · b apx,P h (zh; Unif(S)). (1) The above definition should be compared with the probability of observing oh+1 given history (a1:h, o2:h) and policy π when interacting with the POMDP P , which is PPoh+1∼π(oh+1|a1:h, o2:h) = e > oh+1 ·OPh+1 · TPh (ah) · bPh (a1:h−1, o2:h). (2) Theorem 2.1 gives that ∥∥∥bPh (a1:h−1, o2:h)− bapx,Ph (ah−L:h−1, oh−L+1:h; Unif(S))∥∥∥ 1 is small in expectation under any general policy π, which, using (1) and (2), gives that, for all π ∈ Πgen and h ∈ [H], Ea1:h,o2:h∼π ∑ oh+1∈O ∣∣PMh (oh+1|zh, ah)− PPh (oh+1|a1:h, o2:h)∣∣ ≤ . (3) (Above we have written zh = (ah−L:h−1, oh−L+1:h) and, via abuse of notation, PMh (oh+1|zh, ah) in place of PMh ((ah−L+1:h, oh−L+2:h+1)|zh, ah).) The inequality (3) establishes that the dynamics of P under any policy may be approximated by those of the MDPM. Crucially, this implies that there exists a deterministic Markov policy forM which is near-optimal among general policies for P; the set of such Markov policies forM is denoted by ΠmarkovZ . Because of the Markovian structure of M, such a policy can be found in time polynomial in the size ofM (which is quasi-polynomial in the underlying problem parameters), ifM is known. Of course,M is not known. Approximately learning the MDPM. These observations suggest the following model-based approach of trying to learn the transitions ofM. Suppose that we know a sequence of general policies π1, . . . , πH (abbreviated as π1:H ) so that for each h, πh visits a uniformly random state of P at step h− L (i.e. dP,π h S,h−L = Unif(S)). Then we can estimate the transitions ofM as follows: play π h for h−L−1 steps and then playL+1 random actions, generating a trajectory (a1:h, o2:h+1). Conditioned on zh = (ah−L:h−1, oh−L+1:h) and final action ah, the last observation of this sample trajectory, oh+1, would give an unbiased draw from the transition distribution PMh (·|zh, ah). Repeating this procedure would allow estimation of PMh (see Lemma E.1). This idea is formalized in the procedure ConstructMDP (Algorithm 1): given a sequence of general policies π1, . . . , πH (abbreviated π1:H ), ConstructMDP constructs an MDP, denoted M̂ = M̂(π1:H), which empirically approximates M using the sampling procedure described above. For technical reasons, M̂ actually has state space Z := AL · OL, where O := O ∪ {osink} and osink is a special “sink observation” so that, after osink is observed, all future observations are also osink. Furthermore, we remark that dP,π h S,h−L does not have to be exactly uniform – it suffices if πh visits all states of P at step h− L with non-negligible probability. 4.2 Exploration via barycentric spanners The above procedure for approximating M with M̂ omits a crucial detail: how can we find the “exploratory policies” π1:H? 
Indeed a major obstacle to finding such policies is that we never directly 2For simplicity, descriptions of the reward function of M are omitted; we refer the reader to the appendix for the full details of the proof observe the states of P . By repeatedly playing a policy π on P , we can estimate the induced observation visitation distribution dP,πO,h−L, which is related to the state visitation distribution via the equality O†h−L · d P,π O,h−L = d P,π S,h−L. Unfortunately, the matrix Oh−L is still unknown, and in general unidentifiable. On the positive side, we can attempt to learnM layer by layer – in particular, when learning the hth layer, we can assume that we have learned previous layers, i.e., dM̂,πO,h−L approximates d M,π O,h−L, and therefore dP,πO,h−L. Even though M̂ does not have underlying latent states, we can define “formal” latent state distributions on M̂ in analogy with P , i.e. dM̂,πS,h−L := O † h−L · d M̂,π O,h−L. But this does not seem helpful, again because Oh−L is unknown. Our first key insight is that a policy πh, for which dM̂,π h S,h−L puts non-negligible mass on all states, can be found (when it exists) via knowledge of M̂ and the technique of Barycentric Spanners – all without ever explicitly computing dM̂,π h S,h−L. Barycentric spanners. Suppose we knew that the transitions of our empirical estimate M̂ approximate those ofM up to step h, and we want to find a policy πh for which the (formal) latent state distribution dM̂,π h S,h−L is non-negligible on all states. Unfortunately, the set of achievable latent state distributions {O†h−L · d M̂,π O,h−L : π ∈ Π gen} ⊂ RS is defined implicitly, via the unknown observation matrix Oh−L. But we do have access to XM̂,h−L := {d M̂,π O,h−L : π ∈ Π gen} ⊂ RO, the set of achievable distributions over the observation at step h − L. In particular, for any reward function on observations at step h − L, we can efficiently (by dynamic programming) find a policy π that maximizes reward on M̂ over all policies. In other words, we can solve linear optimization problems over X . By a classic result [AK08], we can thus efficiently find a barycentric spanner for XM̂,h−L: Definition 4.1 (Barycentric spanner). Consider a subset X ⊂ Rd. For B ≥ 1, a set X ′ ⊂ X of size d is a B-approximate barycentric spanner of X if each x ∈ X can be expressed as a linear combination of elements in X ′ with coefficients in [−B,B]. Using the guarantee of [AK08] (restated as Lemma D.1) applied to the set XM̂,h−L, we can find, in time polynomial in the size of M̂, a 2-approximate barycentric spanner π̃1, . . . , π̃O of XM̂,h−L; this procedure is formalized in BarySpannerPolicy (Algorithm 2). Thus, for any policy π, the observation distribution dM̂,πO,h−L induced by π is a linear combination of the distributions {d M̂,π̃i O,h−L : i ∈ [O]} with coefficients in [−2, 2]. Since dM̂,πS,h−L = O † h−L · d M̂,π O,h−L for all π, it follows that the formal latent state distribution dM̂,πS,h−L induced by π is a linear combination of the formal latent state distributions {dM̂,π̃ i S,h−L : i ∈ [O]} with the same coefficients. Now, intuitively, the randomized mixture policy πmix = 1O (π̃ 1 + · · ·+ π̃O) should explore every reachable state; indeed, we show the following guarantee for BarySpannerPolicy: Lemma 4.1 (Informal version & special case of Lemma D.2). In the above setting, under some technical conditions, for all s ∈ S and π ∈ Πgen, it holds that dM̂,πmixS,h−L (s) ≥ 1 4O2 · d M̂,π S,h−L(s). But recall the original goal: a policy πh which explores P – not M̂. 
Unfortunately, those states in P which can only be reached with probability that is small compared to the distance between P and M̂ may be missed by πmix. When we use πmix to compute the next-step transitions of M̂, this will lead to additional error when applying belief contraction (Theorem 2.1), and therefore additional error between P and M̂. If not handled carefully, this error will compound exponentially over layers. 4.3 The full algorithm via iterative discovery The solution to the dilemma discussed above is to not try to construct our estimate M̂ ofM layer by layer, hoping that at each layer we can explore all reachable states of P despite making errors in Algorithm 1 ConstructMDP 1: procedure ConstructMDP(L,N0, N1, π1, . . . , πH ) 2: for 1 ≤ h ≤ H do 3: Let π̂h be the policy which follows πh for the first max{h−L−1, 0} steps and thereafter chooses uniformly random actions. 4: Draw N0 independent trajectories from the policy π̂h: 5: Denote the data from the ith trajectory by ai1:H−1, o i 2:H , for i ∈ [N0]. 6: Set zih = (a i h−L:h−1, o i h−L+1:h) for all i ∈ [N0], h ∈ [H]. 7: // Construct the transitions PM̂h (zh+1|zh, ah) as follows: 8: for each zh = (ah−L:h−1, oh−L+1:h) ∈ Z, ah ∈ A do 9: // Define PM̂h (·|zh, ah) to be the empirical distribution of zih+1|zih, aih, as follows: 10: For oh+1 ∈ O, define ϕ(oh+1) := |{i : (aimax{1,h−L}:h, o i max{2,h−L+1}:h+1) = (amax{1,h−L}:h, omax{2,h−L+1}:h+1)}|. 11: if ∑ oh+1 ϕ(oh+1) ≥ N1 then 12: Set PM̂h ((ah−L+1:h, oh−L+2:h+1)|zh, ah) := ϕ(oh+1)∑ o′ h+1 ϕ(o′h+1) for all oh+1 ∈ O. 13: Set RM̂h (zh, ah) := R P h (o i h) for some i with o i h = oh. . R P h (o i h) is observed. 14: else 15: Let PM̂h (·|zh, ah) put all its mass on (ah−L+1:h, (oh−L+2:h, osink)). 16: for each zh = (ah−L:h−1, oh−L+1:h) ∈ Z\Z and ah ∈ A do 17: Let PM̂h (·|zh, ah) put all its mass on (ah−L+1:h, (oh−L+2:h, osink)). 18: Let M̂ denote the MDP (Z, H,A, RM̂,PM̂). 19: return the MDP M̂, which we denote by M̂(π1:H). Algorithm 2 BarySpannerPolicy 1: procedure BarySpannerPolicy(M̂, h) . M̂ is MDP on state space Z , horizon H; h ∈ [H] 2: if h ≤ L then return an arbitrary general policy. 3: Let O be the linear optimization oracle which given r ∈ RO, returns arg maxπ∈ΠmarkovZ 〈r, d π,M̂ O,h−L〉 and maxπ∈ΠmarkovZ 〈r, d π,M̂ O,h−L〉 . Note that O can be implemented in time Õ(|Z| ·HO) using dynamic programming 4: Using the algorithm of [AK08] with oracle O , compute policies {π1, . . . , πO} so that {dπ i,M̂ O,h−L : i ∈ [O]} is a 2-approximate barycentric spanner of {d π,M̂ O,h−L : π ∈ Π markov Z }. . This algorithm requires only O(O2 logO) calls to O 5: return the general policy 1O · ∑O i=1 π i. earlier layers of M̂. Instead, we have to be able to go back and “fix” errors in our empirical estimates at earlier layers. This task is performed in our main algorithm, BaSeCAMP (Algorithm 3). For some K ∈ N, BaSeCAMP runs for a total of K iterations: at each iteration k ∈ [K], BaSeCAMP defines H general policies, πk,1, . . . , πk,H ∈ Πgen (abbreviated πk,1:H ; step 4). The algorithm’s overall goal is that, for some k, for each h ∈ [H], πk,h explores all latent states at step h− L that are reachable by any policy. To ensure that this condition holds at some iteration, BaSeCAMP performs the following two main steps for each iteration k: first, it calls Algorithm 1 to construct an MDP, denoted M̂(k), using the policies πk,1:H . Then, for each h ∈ [H], it passes the tuple (M̂(k), h) to BarySpannerPolicy, which returns as output a general policy, πk+1,h,0. 
Then, policies π^{k+1,h} are produced (step 7) by averaging π^{k+1,h′,0} over all h′ ≥ h. The policies π^{k+1,h} are then mixed into the policies π̄^{k+1,h} for the next iteration k + 1. It follows from properties of BarySpannerPolicy that, in the event that the policies π̄^{k,h} are not sufficiently exploratory, one of the new policies π^{k+1,h} visits some latent state (s, h′) ∈ S × [H] which was not previously visited by π̄^{k,1:H} with significant probability. Thus, after a total of K = O(SH) iterations k, it follows that we must have visited all (reachable) latent states of the POMDP. At the end of these K iterations, BaSeCAMP computes an optimal policy for each M̂(k) and returns the best of them (as evaluated on fresh trajectories drawn from P; step 12).

Algorithm 3: BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP)
1: procedure BaSeCAMP(L, N0, N1, α, β, K)
2:   Initialize π^{1,1}, . . . , π^{1,H} to be arbitrary policies.
3:   for k ∈ [K] do
4:     For each h ∈ [H], set π̄^{k,h} = (1/k) Σ_{k′=1}^{k} π^{k′,h}.
5:     Run ConstructMDP(L, N0, N1, π̄^{k,1:H}) and let its output be M̂(k).
6:     For each h ∈ [H], let π^{k+1,h,0} be the output of BarySpannerPolicy(M̂(k), h).
7:     For each h ∈ [H], define π^{k+1,h} := (1/(H − h + 1)) Σ_{h′=h}^{H} π^{k+1,h′,0}.
8:   // Choose the best optimal policy amongst all M̂(k)
9:   for k ∈ [K] do
10:    Let π^k_⋆ denote an optimal policy of M̂(k).
11:    Execute π^k_⋆ for 100·H²·log(K/β)/α² trajectories and let the mean reward across them be r̂^k.
12:  Let k^⋆ = argmax_{k ∈ [K]} r̂^k.
13:  return π^{k^⋆}_⋆.

5 Proof Outline
We now briefly outline the proof of Theorem 3.1; further details may be found in the appendix. The high-level idea of the proof is to show that the algorithm BaSeCAMP makes a given amount of progress for each iteration k, as specified in the following lemma:

Lemma 5.1 ("Progress lemma": informal version of Lemma I.2). Fix any iteration k in Algorithm 3, step 3. Then, for some parameters δ, φ with α ≫ δ ≫ φ > 0, one of the following statements holds:
1. Any (s, h) with d^{P,π̄^{k,h}}_{S,h−L}(s) < φ satisfies d^{P,π}_{S,h−L}(s) ≤ δ for all general policies π.
2. There is some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ·H²S, yet, for all k′ > k, d^{P,π̄^{k′,h}}_{S,h−L}(s) ≥ φ·H²S.

Given Lemma 5.1, the proof of Theorem 3.1 is fairly straightforward. In particular, each (h, s) ∈ [H] × S can only appear as the specified pair in item 2 of the lemma for a single iteration k. Thus, as long as K > HS, item 1 must hold for some value of k, say k^⋆ ∈ [K]. In turn, it is not difficult to show from this that the MDP M̂(k^⋆) is a good approximation of P, in the sense that for any general policy π, the values of π in M̂(k^⋆) and in P are close (Lemma H.3). Thus, the optimal policy π^{k^⋆}_⋆ of M̂(k^⋆) will be a near-optimal policy of P, and steps 9 through 12 of BaSeCAMP will identify either the policy π^{k^⋆}_⋆ or some other policy π^{k′}_⋆ which has even higher reward on P.

Proof of the progress lemma. The bulk of the proof of Theorem 3.1 consists of the proof of Lemma 5.1, which we proceed to outline. Suppose that item 1 of the lemma does not hold, meaning that there is some π ∈ Π^gen and some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ yet d^{P,π}_{S,h−L}(s) > δ; i.e., π̄^{k,h} does not explore (s, h − L), but the policy π does. Roughly speaking, BaSeCAMP ensures that item 2 holds in this case, using the following two steps:
(A) First, we show that ⟨e_s, O†_{h−L} · d^{M̂(k),π}_{O,h−L}⟩ ≥ δ′, where δ′ is some parameter satisfying δ ≫ δ′ ≫ φ.
In words, the estimate of the underlying state distribution provided by M̂(k) also has the property that some policy π visits (s, h − L) with non-negligible probability (namely, δ′). While this statement would be straightforward if M̂(k) were a close approximation of P, this is not necessarily the case (indeed, if it were the case, then item 1 of Lemma 5.1 would hold). To circumvent this issue, we introduce a family of intermediate POMDPs indexed by H′ ∈ [H] and denoted P_{φ,H′}(π̄^{k,1:H}), which we call truncated POMDPs. Roughly speaking, the truncated POMDP P_{φ,H′}(π̄^{k,1:H}) diverts transitions away from all states at step H′ − L which π̄^{k,H′} does not visit with probability at least φ. This modification is made to allow Theorem 2.1 to be applied to P_{φ,H′}(π̄^{k,1:H}) and any general policy π. By doing so, we may show a one-sided error bound between P_{φ,H′}(π̄^{k,1:H}) and M̂(k) (Lemma G.4) which, importantly, holds even when the policies π̄^{k,1:H} may fail to explore some states. It is this one-sided error bound which implies the lower bound on ⟨e_s, O†_{h−L} · d^{M̂(k),π}_{O,h−L}⟩.
(B) Second, we show that the policy π^{k+1,h,0} produced by BarySpannerPolicy in step 6 explores (s, h − L) with sufficient probability. To do so, we first use Lemma 4.1 to conclude that ⟨e_s, O†_{h−L} · d^{M̂(k),π^{k+1,h,0}}_{O,h−L}⟩ ≥ δ′/(4O²). The more challenging step is to use this fact to conclude a lower bound on d^{P,π^{k+1,h,0}}_{S,h−L}(s); unfortunately, the one-sided error bound between P and M̂(k) that we used in the previous paragraph goes in the wrong direction here. The solution is to use Lemma H.3, which has the following consequence: either d^{P,π^{k+1,h,0}}_{S,h−L}(s) is not too small, or else the policy π^{k+1,h,0} visits some state at a step prior to h − L which was not sufficiently explored by any of the policies π̄^{k,1:H} (see Section C for further details). In either case π^{k+1,h,0} visits a state that was not previously explored, and the fact that π^{k+1,h,0} is mixed into π̄^{k′,h′} for k′ > k, h′ ≤ h (steps 4, 7 of BaSeCAMP) allows us to conclude item 2 of Lemma 5.1.

6 Conclusion
In this paper we have demonstrated the first quasipolynomial-time (and quasipolynomial-sample) algorithm for learning observable POMDPs. Several interesting directions for future work remain:
• It is straightforward to show that a γ-observable POMDP is Ω(γ/√S) weakly-revealing in the sense of [LCSJ22, Assumption 1]. Thus, the results of [JKKL20, LCSJ22] imply that γ-observable POMDPs can be learned with polynomially many samples, albeit by computationally inefficient algorithms. Thus, as discussed following Theorem 3.1, it is natural to wonder whether we can achieve the best of both worlds: is there a quasipolynomial-time algorithm that only needs polynomially many samples, or can one show a computational-statistical tradeoff?
• It is also natural to ask whether an analogue of Assumption 1.1 for the ℓ2 norm, namely that ‖O_h b − O_h b′‖_2 ≥ γ‖b − b′‖_2 for all h ∈ [H], is sufficient for computationally efficient learnability. Even the planning version of this question (where the parameters of the POMDP are known and the problem is to find a near-optimal policy) is open.

Acknowledgments and Disclosure of Funding
N.G. is supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Fellowship. A.M. is supported by a Microsoft Trustworthy AI Grant, NSF Large CCF-1565235, a David and Lucile Packard Fellowship and an ONR Young Investigator Award. D.R. is supported by an Akamai Presidential Fellowship and a U.S. DoD NDSEG Fellowship.
1. What is the focus and contribution of the paper regarding partially observable Markov decision processes (POMDPs)?
2. What are the strengths of the proposed algorithm, particularly in its novel approach and theoretical perspective?
3. What are the weaknesses of the paper, especially regarding its reliance on certain assumptions and lack of empirical evidence?
4. How do the authors address the issue of checking the underlying assumptions in practical scenarios?
5. To what extent does the result rely on the complexity reduction in planning under the observability assumption, and how much is specific to the learning setting?
6. What are the limitations of the paper, and how might they be addressed in future research?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a quasi-polynomial-time algorithm for learning in a class of partially observable Markov decision processes (POMDPs). It builds on the recent set of results showing that the computational complexity of planning is reduced in POMDPs that comply with an observability assumption.

Strengths And Weaknesses
Strengths:
- The paper tackles a difficult problem and offers a novel perspective. Understanding the implications of imposing practically viable assumptions on POMDPs for planning and learning is an emerging direction, and it makes sense to investigate problems in it.
- It is a relatively dense yet sufficiently clearly written paper.

Weaknesses:
- While I am sure the authors will claim that the paper is a theoretical one, a paper that argues to be not computationally intractable should provide empirical evidence of the implied tractability.
- It relies on assumptions similar to those in the recent literature. Nevertheless, it is necessary to provide additional and convincing evidence that the assumptions can be checked in practice and that they will hold for interesting and large classes of problem instances. The lack of empirical evidence mentioned in the previous bullet point hurts here as well.
- While taking a different perspective, there is additional recent work that aims to cope with the computational impracticality of planning in POMDPs: https://arxiv.org/abs/2009.11459, https://arxiv.org/abs/2204.00755, and https://arxiv.org/abs/1710.10294. A contrast with this work would help and may broaden the interest in the current paper.

Questions
- When is it easy to check the underlying assumptions?
- How much does the result rely on the complexity reduction in planning under the observability assumption, and how much of it is specific to the learning setting? A discussion cannot hurt.

Limitations
There is no explicit discussion of the limitations in the paper, nor an empirical study that would help the reader establish an intuition on the implications and limitations of the results.
NIPS
Title: Learning in Observable POMDPs, without Computationally Intractable Oracles

Abstract
Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard optimistic planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in "observable" POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, and instead give a novel application of barycentric spanners to constructing policy covers.

1 Introduction
Markov Decision Processes (MDPs) are a ubiquitous model in reinforcement learning that aim to capture sequential decision-making problems in a variety of applications spanning robotics to healthcare. However, modelling a problem with an MDP makes the often-unrealistic assumption that the agent has perfect knowledge about the state of the world. Partially Observable Markov Decision Processes (POMDPs) are a broad generalization of MDPs which capture an agent's inherent uncertainty about the state: while there is still an underlying state that updates according to the agent's actions, the agent never directly observes the state, but instead receives samples from a state-dependent observation distribution. The greater generality afforded by partial observability is crucial to applications in game theory [BS18], healthcare [Hau00, HF00b], market design [WME+22], and robotics [CKK96].

Unfortunately, this greater generality comes with steep statistical and computational costs. There are well-known statistical lower bounds [JKKL20, KAL16], which show that in the worst case, it is statistically intractable to find a near-optimal policy for a POMDP given the ability to play policies on it (the learning problem), even given unlimited computation. Furthermore, there are worst-case computational lower bounds [PT87, Lit94, BDRS96, LGM01, VLB12], which establish that it is computationally intractable to find a near-optimal policy even when given the exact parameters of the model (the substantially simpler planning problem). Nevertheless there is a sizeable literature devoted to overcoming the statistical intractability of the learning problem by restricting to natural subclasses of POMDPs [KAL16, GDB16, ALA16, JKKL20, XCGZ21, KECM21a, KECM21b, LCSJ22]. There are far fewer works attempting to overcome computational intractability, and all make severe restrictions on either the model dynamics [JKKL20, KAL16] or the structure of the uncertainty [BDRS96, KECM21a]. The standard practice is to simply sidestep computational issues by assuming access to strong oracles such as ones that
solve Optimistic Planning (given a constrained, non-convex set of POMDPs, find the maximum value achievable by any policy on any POMDP in the set) [JKKL20] or Optimistic Maximum Likelihood Estimation (given a set of action/observation sample trajectories, find a POMDP which obtains maximum value conditioned on approximately maximizing the likelihood of seeing those trajectories) [LCSJ22]. Unsurprisingly, these oracles are computationally intractable to implement. Is there any hope for giving computationally efficient, oracle-free learning algorithms for POMDPs under reasonable assumptions? The naïve approach would require exponential time, and thus even a quasi-polynomial time algorithm would represent a dramatic improvement. A necessary first step towards solving the learning problem is having a computationally efficient planning algorithm. Few such algorithms have provable guarantees under reasonable model assumptions, but recently it was shown [GMR22] that there is a quasipolynomial-time planning algorithm for POMDPs which satisfy an observability assumption. Let H ∈ N be the horizon length of the POMDP, and for each state s and step h ∈ [H], let Oh(·|s) denote the observation distribution at state s and step h. Then observability is defined as follows: Assumption 1.1 ([EDKM07, GMR22]). Let γ > 0. For h ∈ [H], let Oh be the matrix with columns Oh(·|s), indexed by states s. We say that the matrix Oh satisfies γ-observability if for each h, for any distributions b, b′ over states, ‖Ohb−Ohb′‖1 ≥ γ ‖b− b′‖1. A POMDP satisfies (one-step) γ-observability if all H of its observation matrices do. Compared to previous assumptions enabling computationally efficient planning, observability is much milder because it makes no assumptions about the dynamics of the POMDP, and it allows for natural observation models such as noisy or lossy sensors [GMR22]. It is known that statistically efficient learning is possible under somewhat weaker assumptions than observability [JKKL20], however these works rely on solving a planning problem that is computationally intractable. This raises the question: Is observability enough to remedy both the computational and statistical woes of learning POMDPs? In particular, can we get not only efficient planning but efficient learning too? Overview of results. This work provides an affirmative answer to the questions above: we give an algorithm (BaSeCAMP, Algorithm 3) with quasi-polynomial time (and sample) complexity for learning near-optimal policies in observable POMDPs – see Theorem 3.1. While this falls just short of polynomial time, it turns out to be optimal in the sense that even for observable POMDPs there is a quasi-polynomial time lower bound for the (simpler) planning problem under standard complexity assumptions [GMR22]. A key innovation of our approach is an alternative technique to encourage exploration: whereas essentially all previous approaches for partially observable RL used the principle of optimism under uncertainty to encourage the algorithm to visit states [JKKL20, LCSJ22], we introduce a new framework based on the use of barycentric spanners [AK08] and policy covers [DKJ+19]. While each of these tools has previously been used in the broader RL literature to promote exploration (e.g. [LS17, FKQR21, DKJ+19, AHKS20, DGZ22, JKSY20, MHKL20]), they have not been used specifically in the study of POMDPs with imperfect observations,1 and indeed our usage of them differs substantially from past instances. 
The starting point for our approach is a result of [GMR22] (restated as Theorem 2.1) which implies that the dynamics of an observable POMDP P may be approximated by those of a Markov decision process M with a quasi-polynomial number of states. If we knew the transitions of M, then we could simply use dynamic programming to find an optimal policy for M, which would be guaranteed to be a near-optimal policy for P. Instead, we must learn the transitions of M, for which it is necessary to explore all (reachable) states of the underlying POMDP P. A naïve approach to encourage exploration is to learn the transitions of P via forwards induction on the layer h, using, at each step h, our knowledge of the learned transitions at steps prior to h to find a policy which visits each reachable state at step h. Such an approach would lead to a policy cover, namely a collection of policies which visits all reachable latent states. A major problem with this approach is that the latent states are not observed: instead, we only see observations. Hence a natural approach might be to choose policies which lead to all possible observations at each step h. This approach is clearly insufficient, since, e.g., a single state could emit a uniform distribution over observations. Thus we instead compute the following stronger concept: for each step h, we consider the set X of all possible distributions over observations at step h under any general policy, and attempt to find a barycentric spanner of X, namely a small subset X′ ⊂ X so that all other distributions in X can be expressed as a linear combination of elements of X′ with bounded coefficients. (Footnote 1: Policy covers have been used in the special case of block MDPs [DKJ+19, MHKL20], namely where different states produce disjoint observations.) By playing a policy which realizes each distribution in such a barycentric spanner X′, we are able to explore all reachable latent states, despite having no knowledge about which states we are exploring.

This discussion omits a key technical aspect of the proof, which is the fact that we can only compute a barycentric spanner for a set X corresponding to an empirical estimate M̂ of M. A key innovation in our proof is a technique to dynamically use such barycentric spanners, even when M̂ is inaccurate, to improve the quality of the estimate M̂. We remark that a similar dynamic usage of barycentric spanners appeared in [FKQR21]; we discuss in the appendix why the approach of [FKQR21], as well as related approaches involving nonstationary MDPs [WL21, WDZ22, WYDW21], cannot be applied here.

Taking a step back, few models in reinforcement learning (beyond tabular or linear MDPs) admit computationally efficient end-to-end learning algorithms – indeed, our main contribution is a way to circumvent the daunting task of implementing any of the various constrained optimistic planning oracles assumed in previous optimism-based approaches. We hope that our techniques may be useful in other contexts for avoiding computational intractability without resorting to oracles.

2 Preliminaries
For sets T, Q, let Q^T denote the set of mappings from T to Q. Accordingly, we will identify R^T with |T|-dimensional Euclidean space, and let ∆(T) ⊂ R^T consist of distributions on T. For d ∈ N and a vector v ∈ R^d, we denote its components by v(1), . . . , v(d). For integers a ≤ b, we abbreviate a sequence (x_a, x_{a+1}, . . . , x_b) by x_{a:b}. If a > b, then we let x_{a:b} denote the empty sequence.
Sometimes we refer to negative indices of a sequence x_{1:n}: in such cases the elements with negative indices may be taken to be arbitrary, as they will never affect the value of the expression. See Appendix B.1 for clarification. For x ∈ R, we write [x]_+ = max{x, 0}, and [x]_− = −min{x, 0}. For sets S, T, the notation S ⊂ T allows for the possibility that S = T.

2.1 Background on POMDPs
In this paper we address the problem of learning finite-horizon partially observable Markov decision processes (POMDPs). Formally, a POMDP P is a tuple P = (H, S, A, O, b_1, R, T, O), where: H ∈ N is a positive integer denoting the horizon length; S is a finite set of states of size S := |S|; A is a finite set of actions of size A := |A|; O is a finite set of observations of size O := |O|; b_1 is the initial distribution over states; and R, T, O are given as follows. First, R = (R_1, . . . , R_H) denotes a tuple of reward functions, where, for h ∈ [H], R_h : O → [0, 1] gives the reward received as a function of the observation at step h. (It is customary in the literature [JKKL20, LCSJ22] to define the rewards as being a function of the observations, as opposed to being observed by the algorithm as separate information.) Second, T = (T_1, . . . , T_H) is a tuple of transition kernels, where, for h ∈ [H], s, s′ ∈ S, a ∈ A, T_h(s′|s, a) denotes the probability of transitioning from s to s′ at step h when action a is taken. For each a ∈ A, we will write T_h(a) ∈ R^{S×S} to denote the matrix with T_h(a)_{s,s′} = T_h(s|s′, a). Third, O = (O_1, . . . , O_H) is a tuple of observation matrices, where for h ∈ [H], s ∈ S, o ∈ O, (O_h)_{o,s}, also written as O_h(o|s), denotes the probability of observing o while in state s at step h. Thus O_h ∈ R^{O×S} for each h. Sometimes, for disambiguation, we will refer to the states S as the latent states of the POMDP P.

The interaction (namely, a single episode) with P proceeds as follows: initially a state s_1 ∼ b_1 is drawn from the initial state distribution. At each step 1 ≤ h < H, an action a_h ∈ A is chosen (as a function of previous observations and actions taken), P transitions to a new state s_{h+1} ∼ T_h(·|s_h, a_h), a new observation o_{h+1} ∼ O_{h+1}(·|s_{h+1}) is observed, and a reward of R_{h+1}(o_{h+1}) is received (and observed). We emphasize that the underlying states s_{1:H} are never observed directly. As a matter of convention, we assume that no observation is observed at step h = 1; thus the first observation is o_2.

2.2 Policies, value functions
A deterministic policy σ is a tuple σ = (σ_1, . . . , σ_H), where σ_h : A^{h−1} × O^{h−1} → A is a mapping from histories up to step h, namely tuples (a_{1:h−1}, o_{2:h}), to actions. We will denote the collection of histories up to step h by H_h := A^{h−1} × O^{h−1} and the set of deterministic policies by Π^det, meaning that Π^det = ∏_{h=1}^{H} A^{H_h}. A general policy π is a distribution over deterministic policies; the set of general policies is denoted by Π^gen := ∆(∏_{h=1}^{H} A^{H_h}). Given a general policy π ∈ Π^gen, we denote by σ ∼ π the draw of a deterministic policy from the distribution π; to execute a general policy π, a sample σ ∼ π is first drawn and then followed for an episode of the POMDP. For a general policy π and some event E, we write P^P_{a_{1:H−1},o_{2:H},s_{1:H}∼π}(E) to denote the probability of E when s_{1:H}, a_{1:H−1}, o_{2:H} is drawn from a trajectory following policy π for the POMDP P. At times we will compress notation in the subscript, e.g., write P^P_π(E) if the definition of s_{1:H}, a_{1:H−1}, o_{2:H} is evident. In similar spirit, we will write E^P_{a_{1:H−1},o_{2:H},s_{1:H}∼π}[·] to denote expectations.
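For concreteness, here is a minimal sketch of the episode protocol just described; the dictionary-based POMDP encoding and the function names are our own illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(pomdp, policy):
    """Draw one episode from a tabular POMDP. `pomdp` is a dict with keys
    H, b1 (initial state distribution), T[h][s][a] (next-state distribution),
    O[h][s] (observation distribution), and R[h][o] (reward of observation o).
    `policy(history)` maps the history (a_1..a_{h-1}, o_2..o_h) to a_h.
    """
    H = pomdp["H"]
    s = rng.choice(len(pomdp["b1"]), p=pomdp["b1"])  # s_1 ~ b_1
    acts, obs, total = [], [], 0.0
    for h in range(1, H):
        a = policy((tuple(acts), tuple(obs)))        # a_h from the history
        s = rng.choice(len(pomdp["T"][h][s][a]), p=pomdp["T"][h][s][a])
        o = rng.choice(len(pomdp["O"][h + 1][s]), p=pomdp["O"][h + 1][s])
        acts.append(a)
        obs.append(o)
        total += pomdp["R"][h + 1][o]  # reward is a function of the observation
    return acts, obs, total  # the latent states s_1..s_H are never exposed
```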
Given a general policy π ∈ Π^gen, define the value function for π at step 1 by V^{π,P}_1(∅) = E^P_{o_{1:H}∼π}[Σ_{h=2}^{H} R_h(o_h)], namely as the expected reward received by following π. Our objective is to find a policy π which maximizes V^{π,P}_1(∅), in the PAC-RL model [KS02]: in particular, the algorithm does not have access to the transition kernel, reward function, or observation matrices of P, but can repeatedly choose a general policy π and observe the following data from a single trajectory drawn according to π: (a_1, o_2, R_2(o_2), a_2, . . . , a_{H−1}, o_H, R_H(o_H)). The challenge is to choose such policies π which can sufficiently explore the environment. Finally, we remark that Markov decision processes (MDPs) are a special case of POMDPs where O = S and O_h(o|s) = 1[o = s] for all h ∈ [H], o, s ∈ S. For the MDPs we will consider, the initial state distribution will be left unspecified (indeed, the optimal policy of an MDP does not depend on the initial state distribution). Thus, we consider MDPs M described by a tuple M = (H, S, A, R, T).

2.3 Belief contraction
A prerequisite for a computationally efficient learning algorithm in observable POMDPs is a computationally efficient planning algorithm, i.e. an algorithm to find an approximately optimal policy when the POMDP is known. Recent work [GMR22] obtains such a planning algorithm taking quasipolynomial time; we now introduce the key tools used in [GMR22], which are used in our algorithm as well. Consider a POMDP P = (H, S, A, O, b_1, R, T, O). Given some h ∈ [H] and a history (a_{1:h−1}, o_{2:h}) ∈ H_h, the belief state b^P_h(a_{1:h−1}, o_{2:h}) ∈ ∆(S) is given by the distribution of the state s_h conditioned on taking actions a_{1:h−1} and observing o_{2:h} in the first h steps. Formally, the belief state is defined inductively as follows: b^P_1(∅) = b_1, and for 2 ≤ h ≤ H and any (a_{1:h−1}, o_{2:h}) ∈ H_h,
b^P_h(a_{1:h−1}, o_{2:h}) = U^P_{h−1}(b^P_{h−1}(a_{1:h−2}, o_{2:h−1}); a_{h−1}, o_h),
where for b ∈ ∆(S), a ∈ A, o ∈ O, U^P_h(b; a, o) ∈ ∆(S) is the distribution defined by
U^P_h(b; a, o)(s) := O_{h+1}(o|s) · Σ_{s′∈S} b(s′) · T_h(s|s′, a) / ( Σ_{x∈S} O_{h+1}(o|x) · Σ_{s′∈S} b(s′) · T_h(x|s′, a) ).
We call U^P_h the belief update operator. The belief state b^P_h(a_{1:h−1}, o_{2:h}) is a sufficient statistic for the sequence of future actions and observations under any deterministic policy. In particular, the optimal policy can be expressed as a function of the belief state, rather than the entire history. Thus, a natural approach to plan a near-optimal policy is to find a small set B ⊂ ∆(S) of distributions over states such that each possible belief state b^P_h(a_{1:h−1}, o_{2:h}) is close to some element of B. Unfortunately, this is not possible, even in observable POMDPs [GMR22, Example D.2]. The main result of [GMR22] circumvents this issue by showing that there is a subset B ⊂ ∆(S) of quasipolynomial size (depending on P) so that b^P_h(a_{1:h−1}, o_{2:h}) is close to some element of B in expectation under any given policy. To state the result of [GMR22], we need to introduce approximate belief states:

Definition 2.1 (Approximate belief state). Fix a POMDP P = (H, S, A, O, b_1, R, T, O). For any distribution D ∈ ∆(S), as well as any choices of 1 ≤ h ≤ H and L ≥ 0, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) is defined as follows, via induction on L: in the case that L = 0, we define
b^{apx,P}_h(∅; D) := b_1 if h = 1, and D if h > 1;
and for the case that L > 0, we define, for h > L,
b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) := U^P_{h−1}(b^{apx,P}_{h−1}(a_{h−L:h−2}, o_{h−L+1:h−1}; D); a_{h−1}, o_h).
We extend the above definition to the case that h ≤ L by defining b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) := b^{apx,P}_h(a_{max{1,h−L}:h−1}, o_{max{2,h−L+1}:h}; D). In words, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) is obtained by applying the belief update operator starting from the distribution D at step h − L, if h − L > 1 (and otherwise, starting from b_1, at step 1). At times, we will drop the superscript P from the above definitions and write b_h, b^{apx}_h. The main technical result of [GMR22], stated as Theorem 2.1 below (with slight differences, see Appendix B), proves that if the POMDP P is γ-observable for some γ > 0, then for a wide range of distributions D, for sufficiently large L, the approximate belief state b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) will be close to (i.e., "contract to") the true belief state b^P_h(a_{1:h−1}, o_{2:h}).

Theorem 2.1 (Belief contraction; Theorems 4.1 and 4.7 of [GMR22]). Consider any γ-observable POMDP P, any ε > 0 and L ∈ N so that
L ≥ C · min{ log(1/(εφ)) · log(log(1/φ)/ε) / γ², log(1/(εφ)) / γ⁴ }.
Fix any π ∈ Π^gen, and suppose that D ∈ ∆(S) satisfies b^P_h(a_{1:h−L−1}, o_{2:h−L})(s) / D(s) ≤ 1/φ for all (a_{1:h−1}, o_{2:h}). Then
E^P_{(a_{1:h−1},o_{2:h})∼π} ‖ b^P_h(a_{1:h−1}, o_{2:h}) − b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; D) ‖_1 ≤ ε.

2.4 Visitation distributions
For a POMDP P = (H, S, A, O, b_1, R, T, O), policy π ∈ Π^gen, and step h ∈ [H], the (latent) state visitation distribution at step h is d^{P,π}_{S,h} ∈ ∆(S) defined by d^{P,π}_{S,h}(s) := P^P_{s_{1:H}∼π}(s_h = s), and the observation visitation distribution at step h is d^{P,π}_{O,h} ∈ ∆(O) defined by d^{P,π}_{O,h} := P^P_{o_{1:H}∼π}(o_h = ·) = O_h · d^{P,π}_{S,h}. As will be discussed in Section 4.1, Theorem 2.1 implies that the transitions of the POMDP P can be approximated by those of an MDP M whose states consist of L-tuples of actions and observations. Thus we will often deal with such MDPs M of the form M = (H, Z, A, R, T) where the set of states has the product structure Z = A^L × O^L. We then define o : Z → O by o(a_{1:L}, o_{1:L}) = o_L. For such MDPs, we define the observation visitation distributions by d^{M,π}_{O,h}(o) := P^M_{z_{1:H}∼π}(o(z_h) = o). Finally, for o ∈ O, we let e_o ∈ R^O denote the o-th unit vector; thus, for instance, we have d^{P,π}_{O,h}(o) = ⟨e_o, d^{P,π}_{O,h}⟩.

3 Main result: learning observable POMDPs in quasipolynomial time
Theorem 3.1 below states our main guarantee for BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP; Algorithm 3).

Theorem 3.1. Given any α, β, γ > 0 and γ-observable POMDP P, BaSeCAMP with parameter settings as described in Section C.1 outputs a policy which is α-suboptimal with probability at least 1 − β, using time and sample complexity bounded by (OA)^{CL} · log(1/β), where C > 1 is a constant and
L = min{ log(HSO/(αγ)) / γ⁴, log²(HSO/(αγ)) / γ² }.

It is natural to ask whether the complexity guarantee of Theorem 3.1 can be improved further. [GMR22, Theorem 6.4] shows that under the Exponential Time Hypothesis, there is no algorithm for planning in γ-observable POMDPs which runs in time (SAHO)^{o(log(SAHO/α)/γ)} and produces α-suboptimal policies, even if the POMDP is known. Thus, up to polynomial factors in the exponent, Theorem 3.1 is optimal. It is plausible, however, that there could be an algorithm which runs in quasipolynomial time yet only needs polynomially many samples; we leave this question for future work.

4 Algorithm description
4.1 Approximating P with an MDP
A key consequence of observability is that, by the belief contraction result of Theorem 2.1, the POMDP P is well-approximated by an MDP M of quasi-polynomial size.
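Before detailing the construction, here is a minimal sketch of the belief update operator U^P_h and the windowed approximate belief of Definition 2.1. The array layouts (T_h as an S×A×S tensor, O_{h+1} as an O×S matrix) and function names are our own assumptions.

```python
import numpy as np

def belief_update(b, T_h, O_next, a, o):
    """One application of U_h^P: b'(s) is proportional to
    O_{h+1}(o|s) * sum_{s'} b(s') T_h(s|s', a).
    T_h[s', a, s] = T_h(s|s', a); O_next[o, s] = O_{h+1}(o|s)."""
    pred = b @ T_h[:, a, :]             # predictive distribution over s_{h+1}
    post = O_next[o, :] * pred          # reweight by observation likelihood
    z = post.sum()
    return post / z if z > 0 else pred  # guard against a zero-probability o

def approx_belief(T, O, window_acts, window_obs, h, D):
    """b^{apx}_h(a_{h-L:h-1}, o_{h-L+1:h}; D): start from the prior D at
    step h - L and fold in the last L actions and observations. Belief
    contraction (Theorem 2.1) says this is close to the true belief in
    expectation, once L is of order poly(log(S/eps)/gamma)."""
    L = len(window_acts)
    b = np.array(D, dtype=float)
    for j, (a, o) in enumerate(zip(window_acts, window_obs)):
        step = h - L + j                # window_acts[j] = a_{h-L+j}, etc.
        b = belief_update(b, T[step], O[step + 1], a, o)
    return b
```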
In more detail, we will apply Theorem 2.1 with φ = 1/S, D = Unif(S), and some L = poly(log(S/ε)/γ) sufficiently large so as to satisfy the requirement of the theorem statement. The MDP M has state space Z := A^L × O^L, horizon H, action space A, and transitions P^M_h(· | z_h, a_h) which are defined via a belief update on the approximate belief state b^{apx,P}_h(z_h; Unif(S)). (For simplicity, descriptions of the reward function of M are omitted; we refer the reader to the appendix for the full details.) In particular, for a state z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z of M, action a_h, and subsequent observation o_{h+1} ∈ O, define

P^M_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h) := e^⊤_{o_{h+1}} · O^P_{h+1} · T^P_h(a_h) · b^{apx,P}_h(z_h; Unif(S)).   (1)

The above definition should be compared with the probability of observing o_{h+1} given history (a_{1:h}, o_{2:h}) and policy π when interacting with the POMDP P, which is

P^P_{o_{h+1}∼π}(o_{h+1} | a_{1:h}, o_{2:h}) = e^⊤_{o_{h+1}} · O^P_{h+1} · T^P_h(a_h) · b^P_h(a_{1:h−1}, o_{2:h}).   (2)

Theorem 2.1 gives that ‖b^P_h(a_{1:h−1}, o_{2:h}) − b^{apx,P}_h(a_{h−L:h−1}, o_{h−L+1:h}; Unif(S))‖_1 is small in expectation under any general policy π, which, using (1) and (2), gives that, for all π ∈ Π^gen and h ∈ [H],

E_{a_{1:h},o_{2:h}∼π} [ Σ_{o_{h+1}∈O} | P^M_h(o_{h+1} | z_h, a_h) − P^P_h(o_{h+1} | a_{1:h}, o_{2:h}) | ] ≤ ε.   (3)

(Above we have written z_h = (a_{h−L:h−1}, o_{h−L+1:h}) and, via abuse of notation, P^M_h(o_{h+1} | z_h, a_h) in place of P^M_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h).) The inequality (3) establishes that the dynamics of P under any policy may be approximated by those of the MDP M. Crucially, this implies that there exists a deterministic Markov policy for M which is near-optimal among general policies for P; the set of such Markov policies for M is denoted by Π^markov_Z. Because of the Markovian structure of M, such a policy can be found in time polynomial in the size of M (which is quasi-polynomial in the underlying problem parameters), if M is known. Of course, M is not known.

Approximately learning the MDP M. These observations suggest the following model-based approach of trying to learn the transitions of M. Suppose that we know a sequence of general policies π^1, . . . , π^H (abbreviated as π^{1:H}) so that for each h, π^h visits a uniformly random state of P at step h − L (i.e. d^{P,π^h}_{S,h−L} = Unif(S)). Then we can estimate the transitions of M as follows: play π^h for h − L − 1 steps and then play L + 1 random actions, generating a trajectory (a_{1:h}, o_{2:h+1}). Conditioned on z_h = (a_{h−L:h−1}, o_{h−L+1:h}) and final action a_h, the last observation of this sample trajectory, o_{h+1}, would give an unbiased draw from the transition distribution P^M_h(· | z_h, a_h). Repeating this procedure would allow estimation of P^M (see Lemma E.1). This idea is formalized in the procedure ConstructMDP (Algorithm 1): given a sequence of general policies π^1, . . . , π^H (abbreviated π^{1:H}), ConstructMDP constructs an MDP, denoted M̂ = M̂(π^{1:H}), which empirically approximates M using the sampling procedure described above. For technical reasons, M̂ actually has state space Z̄ := A^L × Ō^L, where Ō := O ∪ {o_sink} and o_sink is a special "sink observation" such that, after o_sink is observed, all future observations are also o_sink. Furthermore, we remark that d^{P,π^h}_{S,h−L} does not have to be exactly uniform – it suffices if π^h visits all states of P at step h − L with non-negligible probability.

4.2 Exploration via barycentric spanners
The above procedure for approximating M with M̂ omits a crucial detail: how can we find the "exploratory policies" π^{1:H}?
Indeed, a major obstacle to finding such policies is that we never directly observe the states of P. By repeatedly playing a policy π on P, we can estimate the induced observation visitation distribution d^{P,π}_{O,h−L}, which is related to the state visitation distribution via the equality O†_{h−L} · d^{P,π}_{O,h−L} = d^{P,π}_{S,h−L}. Unfortunately, the matrix O_{h−L} is still unknown, and in general unidentifiable. On the positive side, we can attempt to learn M layer by layer – in particular, when learning the h-th layer, we can assume that we have learned previous layers, i.e., d^{M̂,π}_{O,h−L} approximates d^{M,π}_{O,h−L}, and therefore d^{P,π}_{O,h−L}. Even though M̂ does not have underlying latent states, we can define "formal" latent state distributions on M̂ in analogy with P, i.e., d^{M̂,π}_{S,h−L} := O†_{h−L} · d^{M̂,π}_{O,h−L}. But this does not seem helpful, again because O_{h−L} is unknown. Our first key insight is that a policy π^h for which d^{M̂,π^h}_{S,h−L} puts non-negligible mass on all states can be found (when it exists) via knowledge of M̂ and the technique of barycentric spanners – all without ever explicitly computing d^{M̂,π^h}_{S,h−L}.

Barycentric spanners. Suppose we knew that the transitions of our empirical estimate M̂ approximate those of M up to step h, and we want to find a policy π^h for which the (formal) latent state distribution d^{M̂,π^h}_{S,h−L} is non-negligible on all states. Unfortunately, the set of achievable latent state distributions {O†_{h−L} · d^{M̂,π}_{O,h−L} : π ∈ Π^gen} ⊂ R^S is defined implicitly, via the unknown observation matrix O_{h−L}. But we do have access to X_{M̂,h−L} := {d^{M̂,π}_{O,h−L} : π ∈ Π^gen} ⊂ R^O, the set of achievable distributions over the observation at step h − L. In particular, for any reward function on observations at step h − L, we can efficiently (by dynamic programming) find a policy π that maximizes reward on M̂ over all policies. In other words, we can solve linear optimization problems over X. By a classic result [AK08], we can thus efficiently find a barycentric spanner for X_{M̂,h−L}:

Definition 4.1 (Barycentric spanner). Consider a subset X ⊂ R^d. For B ≥ 1, a set X′ ⊂ X of size d is a B-approximate barycentric spanner of X if each x ∈ X can be expressed as a linear combination of elements in X′ with coefficients in [−B, B].

Using the guarantee of [AK08] (restated as Lemma D.1) applied to the set X_{M̂,h−L}, we can find, in time polynomial in the size of M̂, a 2-approximate barycentric spanner π̃^1, . . . , π̃^O of X_{M̂,h−L}; this procedure is formalized in BarySpannerPolicy (Algorithm 2). Thus, for any policy π, the observation distribution d^{M̂,π}_{O,h−L} induced by π is a linear combination of the distributions {d^{M̂,π̃^i}_{O,h−L} : i ∈ [O]} with coefficients in [−2, 2]. Since d^{M̂,π}_{S,h−L} = O†_{h−L} · d^{M̂,π}_{O,h−L} for all π, it follows that the formal latent state distribution d^{M̂,π}_{S,h−L} induced by π is a linear combination of the formal latent state distributions {d^{M̂,π̃^i}_{S,h−L} : i ∈ [O]} with the same coefficients. Now, intuitively, the randomized mixture policy π_mix = (1/O)(π̃^1 + · · · + π̃^O) should explore every reachable state; indeed, we show the following guarantee for BarySpannerPolicy:

Lemma 4.1 (Informal version & special case of Lemma D.2). In the above setting, under some technical conditions, for all s ∈ S and π ∈ Π^gen, it holds that d^{M̂,π_mix}_{S,h−L}(s) ≥ (1/(4O²)) · d^{M̂,π}_{S,h−L}(s).

But recall the original goal: a policy π^h which explores P – not M̂.
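The linear-algebraic heart of Lemma 4.1 – that coefficients expressing an observation distribution over the spanner transfer unchanged to the formal latent distributions – can be checked numerically on toy data. The sketch below is only an illustration of that argument; all inputs, shapes, and names are our own stand-ins.

```python
import numpy as np

def coefficient_transfer(spanner_dists, target_dist, O_matrix):
    """Toy check of the argument behind Lemma 4.1. `spanner_dists` is a
    (d, d) array whose rows are the observation distributions of spanner
    policies; `target_dist` is the observation distribution of an arbitrary
    policy; `O_matrix` (shape O x S, column-stochastic) is used only to form
    the "formal" latent distributions via its pseudo-inverse."""
    # Coefficients expressing the target in the spanner basis.
    lam = np.linalg.solve(spanner_dists.T, target_dist)
    # For a genuine 2-approximate spanner these coefficients lie in [-2, 2].
    assert np.max(np.abs(lam)) <= 2 + 1e-9, "spanner property violated"
    O_pinv = np.linalg.pinv(O_matrix)
    latent_spanner = spanner_dists @ O_pinv.T   # rows: O-dagger images
    latent_target = O_pinv @ target_dist
    # O-dagger acts linearly, so the same coefficients work in latent space:
    assert np.allclose(latent_spanner.T @ lam, latent_target)
    latent_mix = latent_spanner.mean(axis=0)    # the mixture policy's dist.
    return latent_mix, latent_target
```

Since each coefficient is at most 2 in magnitude, the mixture's mass on any latent state can only be polynomially (in O) smaller than that of any policy, which is the content of the 1/(4O²) bound.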
Unfortunately, those states in P which can only be reached with probability that is small compared to the distance between P and M̂ may be missed by π_mix. When we use π_mix to compute the next-step transitions of M̂, this will lead to additional error when applying belief contraction (Theorem 2.1), and therefore additional error between P and M̂. If not handled carefully, this error will compound exponentially over layers.

4.3 The full algorithm via iterative discovery
The solution to the dilemma discussed above is to not try to construct our estimate M̂ of M layer by layer, hoping that at each layer we can explore all reachable states of P despite making errors in earlier layers of M̂. Instead, we have to be able to go back and "fix" errors in our empirical estimates at earlier layers. This task is performed in our main algorithm, BaSeCAMP (Algorithm 3). For some K ∈ N, BaSeCAMP runs for a total of K iterations: at each iteration k ∈ [K], BaSeCAMP defines H general policies, π̄^{k,1}, . . . , π̄^{k,H} ∈ Π^gen (abbreviated π̄^{k,1:H}; step 4). The algorithm's overall goal is that, for some k, for each h ∈ [H], π̄^{k,h} explores all latent states at step h − L that are reachable by any policy. To ensure that this condition holds at some iteration, BaSeCAMP performs the following two main steps for each iteration k: first, it calls Algorithm 1 to construct an MDP, denoted M̂(k), using the policies π̄^{k,1:H}. Then, for each h ∈ [H], it passes the tuple (M̂(k), h) to BarySpannerPolicy, which returns as output a general policy, π^{k+1,h,0}.

Algorithm 1: ConstructMDP
1: procedure ConstructMDP(L, N0, N1, π^1, . . . , π^H)
2:   for 1 ≤ h ≤ H do
3:     Let π̂^h be the policy which follows π^h for the first max{h − L − 1, 0} steps and thereafter chooses uniformly random actions.
4:     Draw N0 independent trajectories from the policy π̂^h.
5:     Denote the data from the i-th trajectory by a^i_{1:H−1}, o^i_{2:H}, for i ∈ [N0].
6:     Set z^i_h = (a^i_{h−L:h−1}, o^i_{h−L+1:h}) for all i ∈ [N0], h ∈ [H].
7:     // Construct the transitions P^{M̂}_h(z_{h+1} | z_h, a_h) as follows:
8:     for each z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z, a_h ∈ A do
9:       // Define P^{M̂}_h(· | z_h, a_h) to be the empirical distribution of z^i_{h+1} | z^i_h, a^i_h, as follows:
10:      For o_{h+1} ∈ O, define ϕ(o_{h+1}) := |{i : (a^i_{max{1,h−L}:h}, o^i_{max{2,h−L+1}:h+1}) = (a_{max{1,h−L}:h}, o_{max{2,h−L+1}:h+1})}|.
11:      if Σ_{o_{h+1}} ϕ(o_{h+1}) ≥ N1 then
12:        Set P^{M̂}_h((a_{h−L+1:h}, o_{h−L+2:h+1}) | z_h, a_h) := ϕ(o_{h+1}) / Σ_{o′_{h+1}} ϕ(o′_{h+1}) for all o_{h+1} ∈ O.
13:        Set R^{M̂}_h(z_h, a_h) := R^P_h(o^i_h) for some i with o^i_h = o_h.   // R^P_h(o^i_h) is observed.
14:      else
15:        Let P^{M̂}_h(· | z_h, a_h) put all its mass on (a_{h−L+1:h}, (o_{h−L+2:h}, o_sink)).
16:     for each z_h = (a_{h−L:h−1}, o_{h−L+1:h}) ∈ Z̄ \ Z and a_h ∈ A do
17:       Let P^{M̂}_h(· | z_h, a_h) put all its mass on (a_{h−L+1:h}, (o_{h−L+2:h}, o_sink)).
18:   Let M̂ denote the MDP (Z̄, H, A, R^{M̂}, P^{M̂}).
19:   return the MDP M̂, which we denote by M̂(π^{1:H}).

Algorithm 2: BarySpannerPolicy
1: procedure BarySpannerPolicy(M̂, h)   // M̂ is an MDP on state space Z̄ with horizon H; h ∈ [H]
2:   if h ≤ L then return an arbitrary general policy.
3:   Let O be the linear optimization oracle which, given r ∈ R^O, returns argmax_{π ∈ Π^markov_Z̄} ⟨r, d^{π,M̂}_{O,h−L}⟩ and max_{π ∈ Π^markov_Z̄} ⟨r, d^{π,M̂}_{O,h−L}⟩.   // Note that O can be implemented in time Õ(|Z̄| · H · O) using dynamic programming.
4:   Using the algorithm of [AK08] with oracle O, compute policies {π^1, . . . , π^O} so that {d^{π^i,M̂}_{O,h−L} : i ∈ [O]} is a 2-approximate barycentric spanner of {d^{π,M̂}_{O,h−L} : π ∈ Π^markov_Z̄}.   // This algorithm requires only O(O² log O) calls to O.
5:   return the general policy (1/O) · Σ_{i=1}^{O} π^i.
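Step 3 of BarySpannerPolicy reduces the linear optimization over observation distributions to standard backward induction. The following is one possible sketch of that oracle; the data layout (states[h] as the reachable window states at step h, nested transition dicts) is our own assumption, not the paper's.

```python
def dp_oracle(P, states, actions, target_step, r):
    """Given a reward vector r over observations, compute a Markov policy
    of the window MDP maximizing the expected value of r at the observation
    emitted at `target_step`, via backward induction. P[h][z][a] is a dict
    {z_next: prob}; obs_of(z) reads the last observation in window state z."""
    def obs_of(z):
        _, window_obs = z
        return window_obs[-1] if window_obs else None

    def q_value(h, z, a, V):
        # Expected continuation value of playing a in window state z at step h.
        return sum(p * V.get(z2, 0.0)
                   for z2, p in P.get(h, {}).get(z, {}).get(a, {}).items())

    # Terminal values: the reward of the observation emitted at the target step.
    V = {z: r.get(obs_of(z), 0.0) for z in states[target_step]}
    policy = {}
    for h in range(target_step - 1, 0, -1):
        V_h = {}
        for z in states[h]:
            vals = {a: q_value(h, z, a, V) for a in actions}
            best_a = max(vals, key=vals.get)
            policy[(h, z)] = best_a
            V_h[z] = vals[best_a]
        V = V_h
    return policy, V  # V maps step-1 states to the optimal objective value
```

Running this oracle O(O² log O) times inside the [AK08] spanner routine yields the policies π^1, . . . , π^O of step 4.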
Then, policies π^{k+1,h} are produced (step 7) by averaging π^{k+1,h′,0} over all h′ ≥ h. The policies π^{k+1,h} are then mixed into the policies π̄^{k+1,h} for the next iteration k + 1. It follows from properties of BarySpannerPolicy that, in the event that the policies π̄^{k,h} are not sufficiently exploratory, one of the new policies π^{k+1,h} visits some latent state (s, h′) ∈ S × [H] which was not previously visited by π̄^{k,1:H} with significant probability. Thus, after a total of K = O(SH) iterations k, it follows that we must have visited all (reachable) latent states of the POMDP. At the end of these K iterations, BaSeCAMP computes an optimal policy for each M̂(k) and returns the best of them (as evaluated on fresh trajectories drawn from P; step 12).

Algorithm 3: BaSeCAMP (Barycentric Spanner policy Cover with Approximate MDP)
1: procedure BaSeCAMP(L, N0, N1, α, β, K)
2:   Initialize π^{1,1}, . . . , π^{1,H} to be arbitrary policies.
3:   for k ∈ [K] do
4:     For each h ∈ [H], set π̄^{k,h} = (1/k) Σ_{k′=1}^{k} π^{k′,h}.
5:     Run ConstructMDP(L, N0, N1, π̄^{k,1:H}) and let its output be M̂(k).
6:     For each h ∈ [H], let π^{k+1,h,0} be the output of BarySpannerPolicy(M̂(k), h).
7:     For each h ∈ [H], define π^{k+1,h} := (1/(H − h + 1)) Σ_{h′=h}^{H} π^{k+1,h′,0}.
8:   // Choose the best optimal policy amongst all M̂(k)
9:   for k ∈ [K] do
10:    Let π^k_⋆ denote an optimal policy of M̂(k).
11:    Execute π^k_⋆ for 100·H²·log(K/β)/α² trajectories and let the mean reward across them be r̂^k.
12:  Let k^⋆ = argmax_{k ∈ [K]} r̂^k.
13:  return π^{k^⋆}_⋆.

5 Proof Outline
We now briefly outline the proof of Theorem 3.1; further details may be found in the appendix. The high-level idea of the proof is to show that the algorithm BaSeCAMP makes a given amount of progress for each iteration k, as specified in the following lemma:

Lemma 5.1 ("Progress lemma": informal version of Lemma I.2). Fix any iteration k in Algorithm 3, step 3. Then, for some parameters δ, φ with α ≫ δ ≫ φ > 0, one of the following statements holds:
1. Any (s, h) with d^{P,π̄^{k,h}}_{S,h−L}(s) < φ satisfies d^{P,π}_{S,h−L}(s) ≤ δ for all general policies π.
2. There is some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ·H²S, yet, for all k′ > k, d^{P,π̄^{k′,h}}_{S,h−L}(s) ≥ φ·H²S.

Given Lemma 5.1, the proof of Theorem 3.1 is fairly straightforward. In particular, each (h, s) ∈ [H] × S can only appear as the specified pair in item 2 of the lemma for a single iteration k. Thus, as long as K > HS, item 1 must hold for some value of k, say k^⋆ ∈ [K]. In turn, it is not difficult to show from this that the MDP M̂(k^⋆) is a good approximation of P, in the sense that for any general policy π, the values of π in M̂(k^⋆) and in P are close (Lemma H.3). Thus, the optimal policy π^{k^⋆}_⋆ of M̂(k^⋆) will be a near-optimal policy of P, and steps 9 through 12 of BaSeCAMP will identify either the policy π^{k^⋆}_⋆ or some other policy π^{k′}_⋆ which has even higher reward on P.

Proof of the progress lemma. The bulk of the proof of Theorem 3.1 consists of the proof of Lemma 5.1, which we proceed to outline. Suppose that item 1 of the lemma does not hold, meaning that there is some π ∈ Π^gen and some (h, s) ∈ [H] × S so that d^{P,π̄^{k,h}}_{S,h−L}(s) < φ yet d^{P,π}_{S,h−L}(s) > δ; i.e., π̄^{k,h} does not explore (s, h − L), but the policy π does. Roughly speaking, BaSeCAMP ensures that item 2 holds in this case, using the following two steps:
(A) First, we show that ⟨e_s, O†_{h−L} · d^{M̂(k),π}_{O,h−L}⟩ ≥ δ′, where δ′ is some parameter satisfying δ ≫ δ′ ≫ φ.
In words, the estimate of the underlying state distribution provided by M̂(k) also has the property that some policy π visits (s, h − L) with non-negligible probability (namely, δ′). While this statement would be straightforward if M̂(k) were a close approximation of P, this is not necessarily the case (indeed, if it were the case, then item 1 of Lemma 5.1 would hold). To circumvent this issue, we introduce a family of intermediate POMDPs indexed by H′ ∈ [H] and denoted P_{φ,H′}(π̄^{k,1:H}), which we call truncated POMDPs. Roughly speaking, the truncated POMDP P_{φ,H′}(π̄^{k,1:H}) diverts transitions away from all states at step H′ − L which π̄^{k,H′} does not visit with probability at least φ. This modification is made to allow Theorem 2.1 to be applied to P_{φ,H′}(π̄^{k,1:H}) and any general policy π. By doing so, we may show a one-sided error bound between P_{φ,H′}(π̄^{k,1:H}) and M̂(k) (Lemma G.4) which, importantly, holds even when the policies π̄^{k,1:H} may fail to explore some states. It is this one-sided error bound which implies the lower bound on ⟨e_s, O†_{h−L} · d^{M̂(k),π}_{O,h−L}⟩.
(B) Second, we show that the policy π^{k+1,h,0} produced by BarySpannerPolicy in step 6 explores (s, h − L) with sufficient probability. To do so, we first use Lemma 4.1 to conclude that ⟨e_s, O†_{h−L} · d^{M̂(k),π^{k+1,h,0}}_{O,h−L}⟩ ≥ δ′/(4O²). The more challenging step is to use this fact to conclude a lower bound on d^{P,π^{k+1,h,0}}_{S,h−L}(s); unfortunately, the one-sided error bound between P and M̂(k) that we used in the previous paragraph goes in the wrong direction here. The solution is to use Lemma H.3, which has the following consequence: either d^{P,π^{k+1,h,0}}_{S,h−L}(s) is not too small, or else the policy π^{k+1,h,0} visits some state at a step prior to h − L which was not sufficiently explored by any of the policies π̄^{k,1:H} (see Section C for further details). In either case π^{k+1,h,0} visits a state that was not previously explored, and the fact that π^{k+1,h,0} is mixed into π̄^{k′,h′} for k′ > k, h′ ≤ h (steps 4, 7 of BaSeCAMP) allows us to conclude item 2 of Lemma 5.1.

6 Conclusion
In this paper we have demonstrated the first quasipolynomial-time (and quasipolynomial-sample) algorithm for learning observable POMDPs. Several interesting directions for future work remain:
• It is straightforward to show that a γ-observable POMDP is Ω(γ/√S) weakly-revealing in the sense of [LCSJ22, Assumption 1]. Thus, the results of [JKKL20, LCSJ22] imply that γ-observable POMDPs can be learned with polynomially many samples, albeit by computationally inefficient algorithms. Thus, as discussed following Theorem 3.1, it is natural to wonder whether we can achieve the best of both worlds: is there a quasipolynomial-time algorithm that only needs polynomially many samples, or can one show a computational-statistical tradeoff?
• It is also natural to ask whether an analogue of Assumption 1.1 for the ℓ2 norm, namely that ‖O_h b − O_h b′‖_2 ≥ γ‖b − b′‖_2 for all h ∈ [H], is sufficient for computationally efficient learnability. Even the planning version of this question (where the parameters of the POMDP are known and the problem is to find a near-optimal policy) is open.

Acknowledgments and Disclosure of Funding
N.G. is supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Fellowship. A.M. is supported by a Microsoft Trustworthy AI Grant, NSF Large CCF-1565235, a David and Lucile Packard Fellowship and an ONR Young Investigator Award. D.R. is supported by an Akamai Presidential Fellowship and a U.S. DoD NDSEG Fellowship.
1. What is the focus and contribution of the paper on learning tabular POMDPs?
2. What are the strengths of the proposed algorithm, particularly in terms of computational complexity?
3. What are the weaknesses of the paper, especially regarding sample complexity and comparisons with other works?
4. How does the reviewer assess the limitations of the paper's assumptions and applications?
5. Do you have any suggestions for improving the paper or combining its techniques with others?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a novel algorithm for learning tabular POMDPs based on belief contraction and barycentric spanners. Both its sample complexity and computational complexity scale exponentially w.r.t. the observability parameter 1/γ and quasi-polynomially w.r.t. S, A, O, H and 1/α.

Strengths And Weaknesses
This is a nice paper with interesting results and novel analysis. I enjoyed reading it.
Strengths:
- To my knowledge, this is the first quasi-polynomial-time algorithm for learning tabular POMDPs under reasonable assumptions.
- The application of barycentric spanners is novel and the analysis is nontrivial.
Weaknesses:
- The sample complexity is exponential in 1/γ and quasi-polynomial in all other parameters. If I understand correctly, the algorithms in [LCSJ22] only require a polynomial number of samples to learn a near-optimal policy under the same observability condition. (Nonetheless, the algorithms in [LCSJ22] seem to require exponential time to implement, as pointed out in this work.) I believe these differences should be pointed out explicitly in the main paper.
- The observability assumption can never be satisfied when the number of states is larger than the number of observations. Any comments on how to address this limitation and generalize the results in this paper?

[LCSJ22] Qinghua Liu, Alan Chung, Csaba Szepesvári, and Chi Jin. When is partially observable reinforcement learning not scary? arXiv preprint arXiv:2204.08967, 2022.

Questions
- Given that polynomial sample complexity is achievable [LCSJ22], do you think one can combine the techniques in this paper and [LCSJ22] to obtain an algorithm with polynomial sample complexity and quasi-polynomial computation?
- The belief contraction result hints that a POMDP can be approximated by an MDP by viewing the most recent M steps of observations and actions as states. So I am curious: instead of using barycentric spanners, can we simply run UCRL or other tabular MDP algorithms on this MDP?

Limitations
NA
NIPS
Title: Recursive Reinforcement Learning

Abstract
Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with the call stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning – a model-free RL algorithm for RMDPs – and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions.

1 Introduction
Reinforcement learning [36] (RL) is a stochastic-approximation-based approach to optimization, where learning agents rely on scalar reward signals from the environment to converge to an optimal behavior. Watkins's seminal approach [41] to RL, known as Q-learning, judiciously combines exploration/exploitation with dynamic programming to provide guaranteed convergence [40] to optimal behaviors in environments modeled as Markov decision processes (MDPs) with finite state and action spaces. RL has also been applied to MDPs with uncountable state and action spaces, although convergence guarantees for such environments require strong regularity assumptions. Modern variants of Q-learning (and other tabular RL algorithms) harness the universal approximability and ease-of-training rendered by deep neural networks [18] to discover creative solutions to problems traditionally considered beyond the reach of AI [30, 39, 33]. These RL algorithms are designed with a flat Markovian view of the environment in the form of a "state, action, reward, and next state" interface [9] in every interaction with the learning agent, where the states/actions may come from infinite sets. When such infinitude presents itself in the form of finitely represented recursive structures, the inability of the RL algorithms to handle structured environments means that the structure present in the environment is not available to the RL algorithm to generalize its learning upon. The work of [41] already provides a roadmap for hierarchically structured environments; since then, considerable progress has been made in developing algorithms for hierarchical RL [6, 13, 37, 31] with varying optimality guarantees. Still, hierarchical MDPs are expressively equivalent to finite-state MDPs, although they may be exponentially more succinct (Lemma 1). Thus, hierarchical RL algorithms are inapplicable in the presence of unbounded recursion. On the other hand, recursion occurs naturally in human reasoning [10], mathematics and computation [34, 35], and physical environments [29].
Recursion is a powerful cognitive tool enabling a divide-and-conquer strategy [11] for problem solving (e.g., the tower of Hanoi, depth-first search) and, consequently, recursive solutions enhance explainability in the form of intuitive inductive proofs of correctness. Unlike flat representations, the structure exposed by recursive definitions enables generalizability. Recursive concepts, such as recursive functions and data structures, provide scaffolding for efficient and transparent algorithms. Finally, models of physical environments express the system evolution in the form of recursive equations. We posit that the inability of RL algorithms to handle recursion is an obstacle to their applicability, explainability, and generalizability. This paper aims to fill the gap by studying recursive Markov decision processes [16] as environment models in reinforcement learning. We dub this setting recursive reinforcement learning (RRL).

MDPs with Recursion. A recursive Markov decision process (RMDP) [16] is a finite collection of special MDPs, called component MDPs, with special entry and exit nodes that take the role of input parameter and return value, respectively. The states of component MDPs may either be standard MDP states, or they may be "boxes" with input and output ports; these boxes are mapped to other component MDPs (possibly, the component itself) with some matching of the entry and exit nodes. An RMDP where every component has only one exit is called a 1-exit RMDP; otherwise, we call it a general or multi-exit RMDP. Single-exit RMDPs are strictly less expressive than general RMDPs, as they are equivalent to functions without any return value. Nonetheless, 1-exit RMDPs are more expressive than finite-state MDPs [32] and relate closely to controlled branching processes [17].

Example 1 (Cloud Computing). As an example of a recursive MDP, consider the Boolean program shown in Figure 2. This example (inspired by [21]) models a cloud computing scenario to compute a task T, depicted as the component T. Here, a decision maker is faced with two choices: either she can choose to execute the task monolithically (with a cost of 8 units), or she can choose to decompose the task into three S tasks. The process of decomposition and later combining the results costs 0.5 units. Each task S can either be executed on a fast, but unreliable, server that costs 1 unit, where with probability 0.4 the server may crash and require a recomputation of the task. When executed on a reliable server, the task S costs 1.5 units; however, the task may be interrupted by a higher-priority task, and the decision maker will be compensated for this delay by earning a 0.2 unit reward. During the interrupt service routine H, there is a choice to upgrade the priority of the task for a cost of 0.2 units. Otherwise, the interrupt service routine costs 0.01 units (due to added delay), and the interrupt service routine itself can be interrupted, causing the service routine to be re-executed in addition to that of the new interrupt service routine. The goal of the RL agent is to compute an optimal policy that maximizes the total reward to termination. This example is represented as a recursive MDP in Figure 1. This RMDP has three components, T, S, and H. The component T has three boxes b1, b2, and b3, all mapped to component S. The components T and S both have single entry and exit nodes, while the component H has two exit nodes. Removing the thick (maroon) transitions and the exit u7 makes the RMDP 1-exit.
Recursive MDPs strictly generalize finite-state MDPs and hierarchical MDPs, and semantically encode countable-state MDPs with the context encoding a stack frame of unbounded depth. RMDPs generalize several well-known stochastic models including stochastic context-free grammars [28, 26] and multi-type branching processes [22, 38]. Moreover, RMDPs are expressively equivalent to probabilistic pushdown systems (MDPs with unbounded stacks), and can model probabilistic programs with unrestricted recursion. Given their expressive power, it is not surprising that reachability and termination problems for general RMDPs are undecidable. Single-exit RMDPs, on the other hand, are considerably more well-behaved, with decidable termination [16] and total reward optimization under a positive reward restriction [15]. Exploiting these properties, a convergent RL algorithm [21] has been proposed for 1-exit RMDPs with a positive cost restriction. However, to the best of our knowledge, no RL algorithm exists for general RMDPs.

Applications of Recursive RL. Next, we present some paradigmatic applications of recursive RL.

– Probabilistic Program Synthesis. As shown in Example 1, RMDPs can model procedural probabilistic Boolean programs. Hence, recursive RL can be used for program synthesis in unknown, uncertain environments. Boolean abstractions of programs [4] are particularly suited to modeling as RMDPs. Potential applications include program verification [4, 5, 14] and program synthesis [19].

– Context-Free Reward Machines. Recently, reward machines [23] have been proposed to model non-Markovian reward signals in RL. In this setting, a regular language over the observation sequences of the MDP, extended with reward outputs (a Mealy machine), is used to encode the reward signal, and the RL algorithms operate on the finite MDP given by the product of the MDP with the reward machine. Following the Chomsky hierarchy, context-free grammars or pushdown automata can be used to provide more expressive reward schemes than regular languages. This results in context-free reward machines: reward machines with an unbounded stack. As an example of such a more expressive reward language, consider a grid-world with a reachability objective and some designated charging stations, where one unit of dwell time charges the unbounded-capacity battery by one unit. If every action discharges the battery by one unit, the reward scheme to reach the target location without ever draining the battery cannot be captured by a regular reward machine. On the other hand, this reward signal can be captured with an RMDP, where charging by one unit amounts to calling a component and discharging amounts to returning from the component, such that the length of the call stack encodes the battery level; draining the battery then corresponds to emptying the stack before the target is reached. More generally, any context-free requirement over finite-state MDPs can be captured using general RMDPs (see the monitor sketch after this list).

– Stochastic Context-Free Grammars. Stochastic CFGs and branching decision processes can capture a structured evolution of a system. These can be used for modeling disease spread, population dynamics, and natural languages. RRL can be used to learn optimal behavior in systems expressed using such stochastic grammars.
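The battery encoding referenced in the context-free reward machines item above can be sketched as a pushdown monitor. The class below is purely illustrative (the names are ours, not the paper’s): the stack height is the battery level, charging pushes a frame (a call), each action pops one (a return), and a pop on the empty stack signals that the battery drained.

class BatteryMonitor:
    """Stack height encodes the battery level of the grid-world example."""

    def __init__(self):
        self.stack = []

    def charge(self):
        # One unit of dwell time at a charging station: "calling a component".
        self.stack.append(1)

    def act(self):
        # Any battery-consuming action: "returning from a component".
        if not self.stack:
            return "violated"      # battery drained before the target was reached
        self.stack.pop()
        return "ok"

m = BatteryMonitor()
m.charge(); m.charge()
print(m.act(), m.act(), m.act())   # ok ok violated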
Overview. We begin the technical presentation by providing the formal definition of RMDPs and the total reward problem, which we show to be undecidable in general. We then develop PAC learning results under mild restrictions. In Section 3, we develop Recursive Q-learning, a model-free RL algorithm for RMDPs. In Section 4, we show that Recursive Q-learning converges to an optimal solution in the single-exit setting. Section 5 then demonstrates the empirical performance of Recursive Q-learning.

2 Recursive Markov Decision Processes

A Markov decision process M is a tuple (A, S, T, r) where A is the set of actions, S is the set of states, T : S × A → D(S) is the probabilistic transition function, and r : S × A → R is the reward function. We say that an MDP M is finite if both S and A are finite. For any state s ∈ S, A(s) denotes the set of actions that may be selected in state s. A recursive Markov decision process (RMDP) [16] is a tuple M = (M1, . . . , Mk), where each component Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi, ri) consists of:

– A set Ai of actions;
– A set Ni of nodes, with a distinguished subset Eni of entry nodes and a (disjoint) subset Exi of exit nodes (we assume an arbitrary but fixed ordering on Exi and Eni);
– A set Bi of boxes along with a mapping Yi : Bi → {1, . . . , k} that assigns to every box (the index of) a component. To each box b ∈ Bi, we associate a set of call ports, Callb = {(b, en) | en ∈ EnYi(b)}, and a set of return ports, Retb = {(b, ex) | ex ∈ ExYi(b)};
– We let Calli = ∪b∈Bi Callb and Reti = ∪b∈Bi Retb, and let Qi = Ni ∪ Calli ∪ Reti be the set of all nodes, call ports, and return ports; we refer to these as the vertices of component Mi.
– A transition function δi : Qi × Ai → D(Qi), where δi(u, a)(v) is the probability of a transition from the source u ∈ (Ni \ Exi) ∪ Reti to the destination v ∈ (Ni \ Eni) ∪ Calli; we often write p(v|u, a) for δi(u, a)(v).
– A reward function ri : Qi × Ai → R assigning to each transition its reward.

We assume that the sets of boxes B1, . . . , Bk and the sets of nodes N1, N2, . . . , Nk are mutually disjoint. We use the symbols N, B, A, Q, En, Ex, δ to denote the union of the corresponding symbols over all components. We say that an RMDP is finite if k and all Ai, Ni, and Bi are finite.

An execution of an RMDP begins at an entry node of some component and, depending upon the sequence of input actions, the state evolves naturally like an MDP according to the transition distributions. However, when the execution reaches a call port of a box, this box is stored on a stack of pending calls, and the execution continues from the corresponding entry node of the component mapped to that box. When an exit node of a component is encountered, if the stack of pending calls is empty then the run terminates; otherwise, the execution pops the box from the top of the stack and jumps to the return port of the just-popped box corresponding to the just-reached exit of the component. The semantics of an RMDP is thus an infinite-state MDP, whose states are pairs consisting of a sequence of boxes, called the context, mimicking the stack of pending calls, and the current vertex. The height of the call stack is incremented (decremented) when a call (return) is made. A stack height of 0 refers to termination, while the empty stack has height 1.
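As a minimal data-type rendering of this definition (a sketch under our own naming, not the authors’ code), each component carries its actions, nodes, ordered entries and exits, the box-to-component map Yi, and the transition and reward tables:

from dataclasses import dataclass
from typing import Dict, Hashable, List, Set, Tuple

Vertex = Hashable   # a node, a call port (box, entry), or a return port (box, exit)
Action = Hashable

@dataclass
class Component:
    actions: Set[Action]                                            # Ai
    nodes: Set[Hashable]                                            # Ni
    entries: List[Hashable]                                         # Eni (ordered)
    exits: List[Hashable]                                           # Exi (ordered)
    boxes: Dict[Hashable, int]                                      # Bi with the map Yi: box -> component index
    delta: Dict[Tuple[Vertex, Action], List[Tuple[float, Vertex]]]  # δi as (probability, successor) lists
    reward: Dict[Tuple[Vertex, Action], float]                      # ri

@dataclass
class RMDP:
    components: List[Component]                                     # M1, ..., Mk

Call and return ports need not be stored explicitly: they are derivable as the pairs (b, en) and (b, ex) over the entries and exits of the component that a box is mapped to.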
The semantics of a recursive MDP M = (M1, . . . , Mk) with Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi, ri) is the (infinite-state) MDP [[M]] = (AM, SM, TM, rM) where

– AM = ∪_{i=1}^{k} Ai is the set of actions;
– SM ⊆ B∗ × Q is the set of states, each consisting of a stack context and the current vertex;
– TM : SM × AM → D(SM) is the transition function such that, for s = (〈κ〉, q) ∈ SM and action a ∈ AM, the distribution TM(s, a) is defined as:
  1. if the vertex q is a call port, i.e., q = (b, en) ∈ Call, then TM(s, a)(〈κ, b〉, en) = 1;
  2. if the vertex q is an exit node, i.e., q = ex ∈ Ex, then if κ = 〈∅〉 the process terminates, and otherwise TM(s, a)(〈κ′〉, (b, ex)) = 1, where (b, ex) ∈ Retb and κ = 〈κ′, b〉;
  3. otherwise, TM(s, a)(〈κ〉, q′) = δ(q, a)(q′).
– The reward function rM : SM × AM → R is such that, for s = (〈κ〉, q) ∈ SM and action a ∈ AM, the reward rM(s, a) is zero if q is either a call port or an exit node, and otherwise rM(s, a) = r(q, a).

We call the maximum absolute one-step reward the diameter of an RMDP and denote it by rmax = max_{s,a} |r(s, a)|.
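Using the Component/RMDP sketch above, the three transition rules translate directly into a step function; a state is a triple of the stack of pending (component index, box) pairs, the current component index, and the current vertex, and None marks termination. This is an illustrative simulator under our assumed encoding, not the paper’s implementation.

import random

def step(rmdp, state, a):
    """One transition of [[M]]: a call port pushes (rule 1), an exit node pops (rule 2),
    and any other vertex, including return ports, follows delta (rule 3)."""
    stack, i, q = state
    comp = rmdp.components[i]
    if isinstance(q, tuple) and q[0] in comp.boxes:
        b, port = q
        j = comp.boxes[b]
        if port in rmdp.components[j].entries:           # rule 1: call port (b, en)
            return (stack + ((i, b),), j, port), 0.0
        # otherwise q is a return port (b, ex): an ordinary source handled by rule 3
    elif q in comp.exits:                                # rule 2: exit node
        if not stack:
            return None, 0.0                             # empty context: the run terminates
        j, b = stack[-1]
        return (stack[:-1], j, (b, q)), 0.0              # jump to the return port (b, q)
    succ = comp.delta[(q, a)]                            # rule 3: internal transition
    q2 = random.choices([v for _, v in succ], weights=[p for p, _ in succ])[0]
    return (stack, i, q2), comp.reward[(q, a)]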
Given the semantics of an RMDP M as an (infinite) MDP [[M]], the concepts of strategies (also called policies) as well as positional strategies are well defined. We distinguish a special class of strategies, called stackless strategies, that are deterministic and do not depend on the history or the stack context at all. We are interested in computing strategies σ that maximize the expected total reward. Given an RMDP M, a strategy σ determines sequences Xi and Yi of random variables denoting the ith state and action of the MDP [[M]]. The total reward under strategy σ and its optimal value are respectively defined as

ETotalMσ(s) = lim_{N→∞} EMσ(s) { Σ_{1≤i≤N} r(X_{i−1}, Y_i) },    ETotalM(s) = sup_σ ETotalMσ(s).

For an RMDP M and a state s, a strategy σ is called proper if the expected number of steps taken by M before termination when starting at s is finite. To ensure that the limit above exists, as the sum of rewards can otherwise oscillate arbitrarily, we assume the following.

Assumption 1 (Proper Policy Assumption). All strategies are proper for all states.

We call an RMDP that satisfies Assumption 1 a proper RMDP. This assumption is akin to the proper policy assumptions [7] often posed on stochastic shortest path problems, and it ensures that the total expected reward is finite. The expected total reward optimization problem over proper RMDPs subsumes the discounted optimization problem over finite-state MDPs, since discounting with a factor λ is analogous to terminating with probability 1 − λ at every step [36]. The properness assumption on RMDPs can be enforced by introducing an appropriate discounting (see Appendix F).

Undecidability. Given an RMDP M, an initial node q, and a threshold D, the strategy existence problem is to decide whether there exists a strategy in [[M]] with value greater than or equal to D when starting at the initial state (〈∅〉, q), i.e., at the entry node q with an empty context.

Theorem 1 (Undecidability of the Strategy Existence Problem). Given a proper RMDP and a threshold D, deciding whether there exists a strategy with expected value greater than D is undecidable.

PAC-learnability. Although it is undecidable to determine whether or not a strategy can exceed some threshold in a proper RMDP, the problem of ε-approximating the optimal value is decidable when the parameters co, µ, and b (defined below) are known. Our approach to PAC-learnability [1] is to learn the distribution of the transition function δ well enough and then produce an approximate, but not necessarily efficient, evaluation of our learned model. To allow PAC-learnability, we need a further nuanced notion of ε-proper policies. A policy is called ε-proper if it terminates with a uniform bound on the expected number of steps for all M′ that differ from M only in the transition function, where Σ_{q∈S, a∈A, r∈S} |δM(q, a)(r) − δM′(q, a)(r)| ≤ ε (we then say that M′ is ε-close to M), and where the support of δM′(q, a) is a subset of the support of δM(q, a) for all q ∈ S and a ∈ A. An RMDP is called ε-proper if all strategies are ε-proper for all states of the RMDP.

Assumption 2 (PAC-learnability). We restrict our attention to ε-proper RMDPs. We further require that all policies have a falling expected stack height. Namely, we require for all M′ ε-close to M and all policies σ that the expected stack height in step k is bounded by co − µ · Σ_{i=1}^{k} p_run^{M′σ}(i), where co ≥ 1 is an offset, µ ∈ (0, 1] is the decline per step, and p_run^{M′σ}(k) is the likelihood that the RMDP M′ with strategy σ is still running after k steps. We finally require that the absolute expected value of every strategy is bounded: |ETotalM′σ((〈∅〉, q))| ≤ b for some b.

Theorem 2. For every ε-proper RMDP with parameters co, µ, and b, ETotalM(s) is PAC-learnable.

These parameters can be replaced by discounting. Indeed, our proofs start with discounted rewards and then relax the assumptions to allow for using undiscounted rewards. Using a discount factor λ translates to the parameters b = d/(1 − λ), co = 1 + 1/(1 − λ), and µ = 1 − λ.

3 Recursive Q-Learning for Multi-Exit RMDPs

While RMDPs with multiple exits come with undecidability results, they are the interesting cases, as they represent systems with an arbitrary call stack. We suggest an abstraction that turns them into a fixed-size data structure, which is well suited for neural networks. Given a proper recursive MDP M = (M1, . . . , Mk) with Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi, ri) and semantics [[M]] = (AM, SM, TM, rM), the optimal total expected reward can be captured by the following equations OPTrecur(M). For every κ ∈ B∗ and q ∈ Q:

y(〈κ〉, q) =
  y(〈κ, b〉, en)                                              if q = (b, en) ∈ Call;
  0                                                           if q ∈ Ex and κ = 〈∅〉;
  y(〈κ′〉, (b, q))                                            if q ∈ Ex, (b, q) ∈ Retb, and κ = 〈κ′, b〉;
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) y(〈κ〉, q′) }   otherwise.

These equations capture the optimality equations on the underlying infinite MDP [[M]]. It is straightforward to see that, if these equations admit a solution, then the solution equals the optimal total expected reward [32]. Moreover, an optimal policy can be computed by choosing the actions that maximize the right-hand side. However, since the state space is countably infinite and has an intricate structure, an algorithm to compute a solution to these equations is not immediate. To make it accessible to learning, we abstract the call stack 〈κ, b〉 to its exit value, i.e., the total expected reward from the exit nodes of the box b under the stack context 〈κ〉. Note that when a box is called, the value of each of its exits may still be unknown, but it is (for a given strategy) fixed. Naturally, if two stack contexts 〈κ, b〉 and 〈κ′, b〉 achieve the same expected total reward from each exit of the box b, then both the optimal strategy and the expected total reward are the same.
This simple but precise and effective abstraction of stacks by exit values allows us to consider the following optimality equations OPTcont(M). For every 1 ≤ i ≤ k, q ∈ Qi, and v ∈ R^{|Exi|}:

x(v, q) =
  x(v′, en) with v′ = (x(v, q′))_{q′∈Retb}                    if q = (b, en) ∈ Call;
  v(q)                                                        if q ∈ Exi;
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) x(v, q′) }     otherwise.

Here v is a vector where v(ex) is the (expected) reward that we get once we reach exit ex of the current component. Informally, when a box is called, this vector is updated with the current estimates of the reward that we get once the box is exited: the ex entry of the vector v′ = (x(v, q′))_{q′∈Retb} is x(v, (b, ex)), which is the value that we achieve from the return port (b, ex). This continuous-space abstraction of the countably infinite space of stack contexts enables the application of deep feedforward neural networks [18] with a finite state encoding in RL. It also provides an elegant connection to the smoothness of differences in exit values: if all exit costs are changed by less than ε, then the cost of each state within a box changes by less than ε, too. The following theorem connects both versions of the optimality equations.

Theorem 3 (Fixed Point). If y is a fixed point of OPTrecur and x is a fixed point of OPTcont, then y(〈∅〉, q) = x(0, q). Moreover, any policy optimal from (0, q) is also optimal from (〈∅〉, q).

We design a generalization of the Q-learning algorithm [40] for recursive MDPs based on the optimality equations OPTcont, shown in Algorithm 1. We implement several optimizations in our algorithm. We assume implicit transitions from the call and return ports of a box to the corresponding entry and exit nodes of the components. A further optimization is achieved by applying a dimension reduction to the representation of the exit-value vector v, normalizing these values in such a way that one of the exits has value 0. This normalization does not affect optimal strategies: when two stacks incur costs that agree up to a common offset across the exits, the optimal strategy is the same, with the difference in value being exactly this offset. While the convergence of Algorithm 1 is not guaranteed for general multi-exit RMDPs, the algorithm converges for the special cases of deterministic proper RMDPs and 1-exit RMDPs (Section 4). For the deterministic multi-exit case, the observation is straightforward, as the properness assumption reduces the semantics to a directed acyclic graph, and the correct values are eventually propagated backwards from the leaves.

Theorem 4. Tabular Recursive Q-learning converges to the optimal values for deterministic proper multi-exit RMDPs with a learning rate of 1 when all state-action pairs are visited infinitely often.

Algorithm 1: Recursive Q-learning
  Initialize Q(s, v, a) arbitrarily
  while not converged do
    v ← 0; stack ← ∅
    Sample trajectory τ ∼ {(s, a, r, s′), . . .}
    for (s, a, r, s′) in τ do
      Update αi according to the learning-rate schedule
      if a box was entered then
        {sexit1, . . . , sexitn} ← getExits(s′)
        v′ ← [max_{a′∈A(sexit1)} Q(sexit1, v, a′), . . . , max_{a′∈A(sexitn)} Q(sexitn, v, a′)]
        v′min ← min(v′); v′ ← v′ − v′min
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + max_{a′∈A(s′)} Q(s′, v′, a′) + v′min)
        stack.push(v); v ← v′
      else if a box was exited then
        {sexit1, . . . , sexitn} ← getExits(s)
        Set k such that s′ = sexitk
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + v(k))
        v ← stack.pop()
      else
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + max_{a′∈A(s′)} Q(s′, v, a′))
  return Q

4 Convergence of Recursive Q-Learning for Proper 1-exit RMDPs

Recall that a proper 1-exit RMDP is a proper RMDP where, for each component Mi, the set of exits Exi is a singleton. For this special case, we show that the Recursive Q-learning algorithm converges to the optimal strategy. The optimality equations OPTcont(M) (similar to [15]) can be simplified in the case of 1-exit RMDPs; their unique fixed-point solution gives the optimal values of the total reward objective. For every q ∈ Q:

x(q) =
  x(en) + x((b, ex))                                          if q = (b, en) ∈ Call, where {ex} = ExY(b);
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) x(q′) }        otherwise.

We now denote the system of all these equations in vector form as x̄ = F(x̄). Given a 1-exit RMDP, we can easily construct its associated equation system in linear time.

Theorem 5 (Unique Fixed Point). The vector consisting of the ETotalM(q) values is the unique fixed point of F. Moreover, a solution of these equations provides optimal stackless strategies.

Note that for the 1-exit setting, Algorithm 1 simplifies to Algorithm 2, since v is always 0 and v′min is precisely the maximum Q-value at the exit. The convergence of the Recursive Q-learning algorithm for 1-exit RMDPs follows from Theorem 5 and stochastic approximation [40, 8].

Theorem 6. Algorithm 2 converges to the optimal values in 1-exit RMDPs when the learning rates satisfy Σ_{i=0}^{∞} αi = ∞ and Σ_{i=0}^{∞} αi² < ∞, and all state-action pairs are visited infinitely often.

In order to show efficient PAC learnability for an ε-proper 1-exit RMDP M, it suffices to know an upper bound on the expected number of steps taken by M when starting at any vertex with empty stack content; this bound will be denoted by K.

Theorem 7 (Efficient PAC Learning for 1-Exit RMDPs). For every ε-proper 1-exit RMDP with diameter rmax and expected time to termination at most K, ETotalM(s) is efficiently PAC-learnable.

Algorithm 2: Recursive Q-learning (1-exit special case)
  Initialize Q(s, a) arbitrarily
  while not converged do
    Sample trajectory τ ∼ {(s, a, r, s′), . . .}
    for (s, a, r, s′) in τ do
      Update αi according to the learning-rate schedule
      if a box was entered then
        sexit ← getExit(s′)
        Q(s, a) ← (1 − αi) Q(s, a) + αi (r + max_{a′∈A(s′)} Q(s′, a′) + max_{a′∈A(sexit)} Q(sexit, a′))
      else if a box was exited then
        Q(s, a) ← (1 − αi) Q(s, a) + αi r
      else
        Q(s, a) ← (1 − αi) Q(s, a) + αi (r + max_{a′∈A(s′)} Q(s′, a′))
  return Q
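For concreteness, here is a tabular rendering of Algorithm 2 in Python. The environment interface (env.reset(), env.actions(s), env.step(a) returning the next state, reward, an event flag for box entry/exit, and a done flag, plus env.exit_of(s) for the exit node of the entered component) is our own assumption for this sketch, not an API from the paper.

from collections import defaultdict
import random

def recursive_q_1exit(env, episodes=10_000, alpha=0.1, eps=0.1):
    """Tabular Recursive Q-learning for 1-exit RMDPs (Algorithm 2)."""
    Q = defaultdict(float)

    def best(s):
        return max(Q[(s, a)] for a in env.actions(s))

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            a = random.choice(acts) if random.random() < eps \
                else max(acts, key=lambda a: Q[(s, a)])
            s2, r, event, done = env.step(a)
            if event == "call":                       # entered a box
                target = r + best(s2) + best(env.exit_of(s2))
            elif event == "return":                   # exited a box
                target = r
            else:                                     # ordinary transition
                target = r + (0.0 if done else best(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

A constant learning rate is used for brevity; the convergence conditions of Theorem 6 require a decaying schedule.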
5 Experiments

We implemented Algorithm 1 in tabular form as well as with a neural network. For the tabular implementation, we quantized the vector v directly after its computation to ensure that the Q-table remains small and discrete. For the neural network implementation, we used the techniques of DQN [30], replay buffers and target networks, for additional stability. The details of this implementation can be found in the appendix. We consider three examples: one to demonstrate the application of Recursive Q-learning for synthesizing probabilistic programs, one to demonstrate convergence in the single-exit setting, and one to demonstrate the use of a context-free reward machine. We compare Recursive Q-learning to Q-learning where the RMDP is treated as an MDP by the agent, i.e., the agent treats stack calls and returns as if they were normal MDP transitions.

5.1 Cloud computing

The cloud computing example, introduced in Example 1, is a recursive probabilistic program with decision points for an RL agent to optimize over. The optimal strategy is to select the reliable server and to never upgrade. This strategy produces an expected total reward of −5.3425. Figure 3 shows that tabular Recursive Q-learning with discretization quickly converges to the optimal solution on this multi-exit RMDP, while Q-learning oscillates around a suboptimal policy.

5.2 Infinite spelunking

[Figure: the two level types (1 and 2) of the cave, with fall-in positions I and the climbing-equipment location E marked; partial descending trajectories are shown.]

Consider a single-exit RMDP gridworld with two box types, 1 and 2, shown at the bottom of the figure to the right. These box types are the two types of levels in an infinitely deep cave. When falling or descending to another level, the level type switches. Passing over a trap, shown in red, results in the agent teleporting to a random position and falling with probability 0.5. The agent has fallen into the cave at the position denoted by I without climbing equipment. However, there is climbing equipment in one of the types of levels at a known location denoted by E. The agent has four move directions, north, east, south, and west, as well as an additional action to descend further or ascend. Until the climbing equipment is obtained, the agent can only descend. Once the climbing equipment is obtained, the traps no longer affect the agent, and the agent can ascend only from the position where it fell down. With probability 0.01, the agent ascends from the current level with the climbing gear. This has the effect of box-wise discounting with discount factor 0.99. The agent’s objective is to leave the cave from where it fell in as soon as possible. The reward is −1 on every step. There are two main strategies to consider. The first strategy tries to obtain the climbing gear by going over the traps. This strategy leads to an unbounded number of possible levels, since the traps may repeatedly trigger. The second strategy avoids the traps entirely. The figure to the right shows partial descending trajectories from these strategies, with the actions shown in green, the trap teleportations shown in blue, and the locations the agent fell down from shown as small black squares. Which strategy is better depends on the probability of the traps triggering. With a trap probability of 0.5, the optimal strategy is to try to reach the climbing equipment by going over the traps. Figure 1 shows the convergence of tabular Recursive Q-learning for 1-exit RMDPs to this optimal strategy, while the strategy learned by Q-learning does not improve.

5.3 Palindrome gridworld

To demonstrate the ability to incorporate context-free objectives, consider a 3 × 3 gridworld with a goal cell in the center and a randomly selected initial state. The agent has four move actions, north, east, south, and west, and a special control action. The objective of the agent is to reach the goal cell while forming an action sequence that is a palindrome of even length. What makes this possible is that when the agent performs an action that pushes against a wall of the gridworld, no movement occurs. To monitor the progress of the property, we compose this MDP with a nondeterministic pushdown automaton. The agent must use its special action to determine when to resolve the nondeterminism in the pushdown automaton. Additionally, the agent uses its special action to declare the end of its action sequence. To ensure properness, the agent’s selected action is corrupted into the special action with probability 0.01. The agent is given a reward of 50 upon success, −5 when the agent selects an action that causes the pushdown automaton to enter a rejecting sink, and −1 on all other timesteps. Figure 3 shows the convergence of Deep Recursive Q-learning to an optimal strategy on this example, while DQN fails to find a good strategy.
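The nondeterministic pushdown monitor for the even-length palindrome objective can be sketched as follows; the agent’s special action resolves the automaton’s nondeterminism by declaring the midpoint on its first use and the end of the sequence on its second. Class and method names are our own illustration, not the paper’s implementation.

class PalindromeMonitor:
    """Pushdown monitor: push the first half of the action sequence, pop on the second."""

    def __init__(self):
        self.stack, self.second_half, self.rejected = [], False, False

    def move(self, a):                       # a in {"N", "E", "S", "W"}
        if self.rejected:
            return
        if not self.second_half:
            self.stack.append(a)             # first half: push the move
        elif self.stack and self.stack[-1] == a:
            self.stack.pop()                 # second half: pop on a matching move
        else:
            self.rejected = True             # mismatch: rejecting sink

    def special(self):
        if not self.second_half:
            self.second_half = True          # first use: declare the midpoint
            return None
        return not self.rejected and not self.stack   # second use: accept iff stack empty

m = PalindromeMonitor()
m.move("N"); m.move("E")
m.special()                                  # midpoint declared after "NE"
m.move("E"); m.move("N")
print(m.special())                           # True: "NEEN" is an even-length palindrome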
6 Related Work

Hierarchical RL is an approach for learning policies on MDPs that introduces a hierarchy in policy space. There are three prevalent approaches to specify this hierarchy. The options framework [37] represents the hierarchies as policies, each with a starting and a termination condition. The hierarchy of abstract machines (HAM) framework [31] represents the policy as a hierarchical composition of nondeterministic finite-state machines. Finally, the MAXQ framework [13] represents the hierarchy using a programmatic representation with finite-range variables and a strict hierarchy among modules. Semi-Markov decision processes (SMDPs) and hierarchical MDPs are fundamental models that appear in the context of hierarchical RL. SMDPs generalize MDPs with timed actions. The RL algorithms for SMDPs are based on a natural generalization of the Bellman equations to accommodate timed actions. Hierarchical MDPs model bounded recursion and can be solved by flattening to an MDP, or by producing policies that are only optimal locally. Recursive MDPs model unbounded recursion in the environment space. The orthogonality of recursion in environment space and in policy space means they are complementary: one can consider applying ideas from hierarchical RL to find a policy in an RMDP. The authors of [3] proposed using partially specified programs with recursion to constrain the policy space, but only considered bounded recursion. Hierarchical MDPs [37, 31, 13] and factored MDPs [12, 20] indeed offer compact representations of finite MDPs. These representations can be exponentially more succinct. However, note that finite instances of these formalisms are not any more expressive than finite MDPs, as such instances can always be rewritten as finite MDPs. On the other hand, the recursive MDPs studied in this paper are strictly more expressive than finite MDPs. Even 1-exit RMDPs may not be expressible as finite MDPs, due to a potentially unbounded stack, but remarkably they can be solved exactly with a finite tabular model-free reinforcement learning algorithm (Theorem 6), without needing to resort to ad-hoc approximations of the unbounded stack configurations. Context-free grammars in RL for the optimization of molecules have been considered before by introducing a bound on the recursion depth to induce a finite MDP [25, 42]. Combining context-free grammars and reward machines was proposed as a future research direction in [24]; context-free reward machines are first described in this paper. Recursive MDPs have been studied outside the RL setting [16], including results for 1-exit RMDP termination [16] and total reward optimization under a positive reward restriction [15]. A convergent model-free RL algorithm for 1-exit RMDPs with a positive cost restriction was proposed in [21]. The results on undecidability and PAC-learnability, the introduction of the Recursive Q-learning algorithm, the convergence results for Recursive Q-learning, and its deep learning extension are novel contributions of this paper.
7 Conclusion

Reinforcement learning has so far primarily considered Markov decision processes (MDPs). Although extremely expressive, this formalism may require “flattening” a more natural representation that contains recursion. In this paper we examine the use of recursive MDPs (RMDPs) in reinforcement learning, a setting we call recursive reinforcement learning. A recursive MDP is a collection of MDP components, where the components can recursively invoke one another. This allows the introduction of an unbounded stack. We propose abstracting this discrete stack with a continuous abstraction in the form of the costs of the exits of a component. Using this abstraction, we introduce Recursive Q-learning, a model-free reinforcement learning algorithm for RMDPs. We prove that tabular Recursive Q-learning converges to an optimal solution on finite 1-exit RMDPs, even though the underlying MDP has an infinite state space. We demonstrate the potential of our approach on a set of examples that includes probabilistic program synthesis, a single-exit RMDP, and an MDP composed with a context-free property.

Acknowledgments. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 864075 (CAESAR) and 956123 (FOCETA). This work is supported in part by the National Science Foundation (NSF) grant CCF-2009022 and by NSF CAREER award CCF-2146563. This work utilized the Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
1. Can you clarify the statement regarding 1-exit RMDPs being strictly less expressive than general RMDPs?
2. What is the relation between RMDPs and policy program sketches for programmable reinforcement learning agents?
3. Can you provide a clearer explanation of context-free reward machines and how the palindrome gridworld can be represented as an RMDP?
4. Can you mention earlier in the paper that the full state of an RMDP corresponds to the current state of the active component MDP conjoined with the stack of all component MDPs called so far?
5. Can you explain why strategies (and runs) terminology is used instead of policies?
6. Can you define Ex and En and explain the notation used in the optimality equations?
7. Can you elaborate on how Algorithm 1 attains convergence to the optimum in the Cloud Computing example despite the lack of theoretical guarantee and use of discretization?
8. How does discretization impact convergence in the general case for recursive Q-learning?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper introduces recursive MDPs (RMDPs) as a formalism for representing a large class of infinite-state MDPs as a (finite) set of component MDPs that may recursively call each other. The authors first present an example RMDP (a probabilistic program with holes, representing action choice points), then formally define RMDPs and their translation to infinite-state MDPs, such that each state in an RMDP corresponds to a node in the currently active component MDP, paired with the stack of component MDPs called so far. The authors next derive a number of theoretical results about RMDPs under variants of the restriction that all policies terminate in a finite number of steps in expectation (achieved by adding discounting to an RMDP). In particular, they show that ϵ-proper RMDPs are PAC-learnable, derive a generalized Bellman equation for proper RMDPs, and further derive a sound and effective continuous-space abstraction of RMDPs that preserves the optimal value function and optimal policy. In this abstraction, an RMDP state is abstracted to a tuple of a continuous vector and the node of the currently active component, enabling the application of deep reinforcement learning approaches that require finite-dimensional state encodings. Using the above results, the authors derive a recursive Q-learning algorithm that they prove to converge in the special cases of 1-exit RMDPs and deterministic multi-exit RMDPs. They also develop a deep Q-learning variant, presented in the Appendix. They then evaluate these algorithms on a number of example RMDPs across different application contexts, showing that they achieve convergence and higher total reward, unlike non-recursive (deep) Q-learning approaches.

Strengths And Weaknesses

This was a really interesting (albeit theoretically dense) paper that introduces a set of representational and algorithmic ideas that are (to my knowledge) original, and likely to be of interest and significance to a number of communities who might wish to apply reinforcement learning to environments that exhibit recursive dynamics (probabilistic programs, environments with non-Markovian task rewards, etc.). By taking the general principle of the "infinite use of finite means" and applying it to MDPs, RMDPs extend the space of sequential decision problems that can be succinctly modeled. Beyond that representational insight, the authors derive a number of useful theoretical results that can be applied to a large class of RMDPs (most of the restrictions are satisfied by the common modeling assumption of a discount factor), and further show that RMDPs admit a finite-dimensional continuous-space abstraction, thereby deriving the nice result that many RMDPs can be optimally solved by finite policies that do not depend on the state of the call stack or state history, while also enabling the use of deep RL approaches. While the derived Q-learning algorithms unfortunately are not proven to converge in a large number of other cases (multi-exit RMDPs), they empirically appear to outperform non-recursive algorithms even in those settings. Even if the authors do not prove everything one might like, all in all this seems to me like a good start for a paper that also introduces the concept of RMDPs. My main piece of feedback is that the exposition and presentation was unclear or confusing to me at points. The example and applications at the start helped, but at least a number of statements later on that seemed obvious to the authors were not at all clear to me.
I'll highlight these in my questions and suggestions below. (Due to time constraints, I'll also note that I only briefly skimmed the proofs in the Appendix.) Separately, in discussing related work, I was surprised by the lack of discussion of programmable reinforcement learning agents [1], which provides a formalism for potentially recursive policies, specified as subroutines with choice points that may call each other. While recursive policies are not the same thing as recursive MDPs, it struck me that they may be similar or equivalent in some contexts, especially in the Cloud Computing / probabilistic program synthesis example, where the structure of a policy program sketch corresponds exactly to the structure of the RMDP itself. I think it would be good to cite work in this field, and discuss at least some of these connections. Apart from that, I enjoyed reading this paper, and I think a number of research communities will benefit from its publication.

Questions

The following statement on Lines 54-55 initially confused me: "The 1-exit RMDPs are strictly less expressive than general RMDPs as they are equivalent to functions without any return value." It took me a while to realize that having 1 exit means you can only return a constant value, which is hence equivalent to not having a return value at all. Perhaps this can be clarified?

Upon seeing Figure 2, it seemed to me that RMDPs are very similar to policy program sketches for programmable reinforcement learning agents. What is the relation between these two formalisms?

The explanation of context-free reward machines, especially the battery example, was confusing to me (I still don't understand it). Relatedly, it was not obvious to me how the palindrome gridworld can be represented as an RMDP. I think it would be good to show the formal translation of MDPs with context-free reward machines to RMDPs somewhere in the Appendix.

Until I read Lines 150 to 164, I was confused by what a full RMDP state actually is. I think it'd be good to mention somewhere earlier, e.g., during the informal introduction, that the full state of an RMDP just corresponds to the current state of the active component MDP, conjoined with the stack of all component MDPs called so far. This also makes it easier to see how RMDPs have an infinite state space.

In lines 165 to 170, the language of "strategies" (and also "runs", later on) was unfamiliar to me -- it seems like this terminology is more common in some subfields, whereas "policy" is more standard in others. I think it would be good to briefly explain that strategies are just policies that may be history-dependent.

When defining the optimality equations, the terms Ex and En are not defined anywhere earlier in the paper. I assumed that Ex is just the union of all exit nodes across component MDPs, but it's not defined on Line 137.

The notation for the equations defining OPTcont(M) was confusing. First, I was confused by the fact that v seems to have varying dimension depending on what component you're in, because in the second branch, it says that q ∈ Ex rather than q ∈ Exi. The notation used for the first branch is also hard to understand -- I think more explanation below in English would be helpful here.

Given that a continuous-space abstraction is used for (tabular) recursive Q-learning, isn't it actually impossible to implement using a Q-table?
I understand that Algorithm 2 for single-exit RMDPs avoids this, because you don't actually need to store the exit values anywhere, but for the general case, discretization is required. What impact does this have on convergence?

Relatedly, the Cloud Computing example is a stochastic multi-exit RMDP not covered by any of the theoretical results. Yet Algorithm 1 seems to attain convergence to the optimum, despite the lack of a theoretical guarantee, and despite the use of discretization. How was this possible, and how was the true optimal strategy found and justified?

Minor comments: Line 66: "taks" should be "task".

Limitations

The authors have discussed limitations of their approach in the assumptions they make for their theoretical results. That said, there are a few questions re discretization for tabular learning that I think should also be answered or at least mentioned, as noted above.
NIPS
Title Recursive Reinforcement Learning Abstract Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner’s ingenuity in designing a suitable “flat” representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with call-stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning— a model-free RL algorithm for RMDPs—and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions. 1 Introduction Reinforcement learning [36] (RL) is a stochastic approximation based approach to optimization, where learning agents rely on scalar reward signals from the environment to converge to an optimal behavior. Watkins’s seminal approach [41] to RL, known as Q-learning, judiciously combines exploration/exploitation with dynamic programming to provide guaranteed convergence [40] to optimal behaviors in environments modeled as Markov decision processes (MDPs) with finite state and action spaces. RL has also been applied to MDPs with uncountable state and action spaces, although convergence guarantees for such environments require strong regularity assumptions. Modern variants of Q-learning (and other tabular RL algorithms) harness the universal approximability and ease-of-training rendered by deep neural networks [18] to discover creative solutions to problems traditionally considered beyond the reach of AI [30, 39, 33]. These RL algorithms are designed with a flat Markovian view of the environment in the form of a “state, action, reward, and next state” interface [9] in every interaction with the learning agent, where the states/actions may come from infinite sets. When such infinitude presents itself in the form of finitely represented recursive structures, the inability of the RL algorithms to handle structured environments means that the structure present in the environment is not available to the RL algorithm to generalize its learning upon. The work of [41] already provides a roadmap for hierarchically structured environments; since then, considerable progress has been made in developing algorithms for hierarchical RL [6, 13, 37, 31] with varying optimality guarantees. Still, the hierarchical MDPs ∗Authors listed alphabetically. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). are expressively equivalent to finite-state MDPs, although they may be exponentially more succinct (Lemma 1). Thus, hierarchical RL algorithms are inapplicable in the presence of unbounded recursion. On the other hand, recursion occurs naturally in human reasoning [10], mathematics and computation [34, 35], and physical environments [29]. 
Recursion is a powerful cognitive tool in enabling a divide-and-conquer strategy [11] to problem solving (e.g., tower of Hanoi, depth-first search) and, consequently, recursive solutions enhance explainability in the form of intuitive inductive proofs of correctness. Unlike flat representations, the structure exposed by recursive definitions enables generalizability. Recursive concepts, such as recursive functions and data structures, provide scaffolding for efficient and transparent algorithms. Finally, the models of physical environments express the system evolution in the form of recursive equations. We posit that the lack of RL algorithms in handling recursion is an obstacle to their applicability, explainability, and generalizability. This paper aims to fill the gap by studying recursive Markov decision processes [16] as environment models in reinforcement learning. We dub this setting recursive reinforcement learning (RRL). MDPs with Recursion. A recursive Markov Decision Process (RMDP) [16] is a finite collection of special MDPs, called component MDPs, with special entry and exit nodes that take the role of input parameter and return value, respectively. The states of component MDPs may either be the standard MDP states, or they may be “boxes” with input and output ports; these boxes are mapped to other component MDPs (possibly, the component itself) with some matching of the entry and exit nodes. An RMDP where every component has only one exit is called a 1-exit RMDP, otherwise we call it a general or multi-exit RMDP. Single-exit RMDPs are strictly less expressive than general RMDPs as they are equivalent to functions without any return value. Nonetheless, 1-exit RMDPs are more expressive than finite-state MDPs [32] and relate closely to controlled branching processes [17]. Example 1 (Cloud Computing). As an example of recursive MDP, consider the Boolean program shown in Figure 2. This example (inspired from [21]) models a cloud computing scenario to compute a task T depicted as the component T . Here, a decision maker is faced with two choices: either she can choose to execute the task monolithically (with a cost of 8 units) or chose to decompose the task into three S tasks. The process of decomposition and later combining the results cost 0.5 units. Each task S can either be executed on a fast, but unreliable server that costs 1 unit, but with probability 0.4 the server may crash, and require a recomputation of the task. When executed on a reliable server, the task S costs 1.5 units, however the task may be interrupted by a higher-priority task and the decision maker will be compensated for this delay by earning a 0.2 unit reward. During the interrupt service routine H , there is a choice to upgrade the priority of the task for a cost of 0.2 units. Otherwise, the interrupt service routing costs 0.01 unit (due to added delay) and the interrupt service routine itself can be interrupted, causing the service routine to be re-executed in addition to that of the new interrupt service routine. The goal of the RL agent is to compute an optimal policy that maximize the total reward to termination. This example is represented as a recursive MDP in Figure 1. This RMDP has three components T , S, and H . The component T has three boxes b1, b2, and b3 all mapped to components S. The component T and S both have single entry and exit nodes, while the component H has two exit nodes. Removing the thick (maroon) transitions and the exit u7 makes the RMDP 1-exit. 
The edges specify both, the name of the action and the corresponding reward. While the component T is non-stochastic, components S and H both have stochastic transitions depicted by the grey circle. Recursive MDPs strictly generalize finite-state MDPs and hierarchical MDPs, and semantically encode countable state MDPs with the context encoding the stack frame of unbounded depth. RMDPs generalize several well-known stochastic models including stochastic context-free grammars [28, 26] and multi-type branching processes [22, 38]. Moreover, RMDPs are expressively equivalent to proba- bilistic pushdown systems (MDPs with unbounded stacks), and can model probabilistic programs with unrestricted recursion. Given their expressive power, it is not surprising that reachability and termination problems for general RMDPs are undecidable. Single-exit RMDPs, on the other hand, are considerably more well-behaved with decidable termination [16] and total reward optimization under positive reward restriction [15]. Exploiting these properties, a convergent RL algorithm [21] has been proposed for 1-exit RMDPs with positive cost restriction. However, to the best of our knowledge, no RL algorithm exists for general RMDPs. Applications of Recursive RL. Next, we present some paradigmatic applications of recursive RL. – Probabilistic Program Synthesis. As shown in Example 1, RMDPs can model procedural probabilistic Boolean program. Hence, the recursive RL can be used for program synthesis in unknown, uncertain environments. Boolean abstractions of programs [4] are particularly suited to modeling as RMDPs. Potential applications include program verification [4, 5, 14] and program synthesis [19]. – Context-Free Reward Machines. Recently, reward machines [23] have been proposed to model non-Markovian reward signals in RL. In this setting, a regular language extended with the reward signals (Mealy machines) over the observation sequences of the MDP is used to encode reward signals. In this setting the RL algorithms operate on the finite MDP given by the product of the MDP with the reward machine. Following the Chomsky hierarchy, context-free grammars or pushdown automata can be used to provide more expressive reward schemes than regular languages. This results in context-free reward machines: reward machines with an unbounded stack. As an example of such a more expressive reward language, consider a grid-world with a reachability objective with some designated charging stations, where 1-unit dwell-time charges the unbounded capacity battery by 1-unit. If every action discharges the battery by 1-unit, the reward scheme to reach the target location without ever draining the battery cannot be captured by a regular reward machine. On the other hand, this reward signal can be captured with an RMDP, where charging by 1-unit amounts to calling a component and discharging amounts to returning from the component such that the length of the call stack encodes the battery level. More generally, any context-free requirement over finite-state MDPs can be captured using general RMDPs. – Stochastic Context-Free Grammars. Stochastic CFGs and branching decision processes can capture a structured evolution of a system. These can be used for modeling disease spread, population dynamics, and natural languages. RRL can be used to learn optimal behavior in systems expressed using such stochastic grammars. Overview. 
We begin the technical presentation by providing the formal definition of RMDPs and the total reward problem: which we show to be undecidable in general. We then develop PAC learning results under mild restrictions. In Section 3, we develop Recursive Q-learning, a model-free RL algorithm for RMDPs. In Section 4, we show that Recursive Q-learning converges to an optimal solution in the single-exit setting. Section 5 then demonstrates the empirical performance of Recursive Q-learning. 2 Recursive Markov Decision Processes A Markov decision processM is a tuple (A,S, T, r) where A is the set of actions, S is the set of states, T : S ×A→ D(S) is the probabilistic transition function, and r : S ×A→ R is the reward function. We say that an MDPM is finite if both S and A are finite. For any state s ∈ S, A(s) denotes the set of actions that may be selected in state s. A recursive Markov decision process (RMDP) [16] is a tuple M = (M1, . . . ,Mk), where each component Mi = (Ai, Ni, Bi, Yi,Eni,Exi, δi) consists of: – A set Ai of actions; – A set Ni of nodes, with a distinguished subset Eni of entry nodes and a (disjoint) subset Exi of exit nodes (we assume an arbitrary but fixed ordering on Exi and Eni); – A set Bi of boxes along with a mapping Yi : Bi 7→ {1, . . . , k} that assigns to every box (the index of) a component. To each box b ∈ Bi, we associate a set of call ports, Callb = {(b, en) | en ∈ EnY (b)}, and a set of return ports, Retb = {(b, ex) | ex ∈ ExY (b)}; – we let Calli = ∪b∈BiCallb, Reti = ∪b∈BiRetb, and let Qi = Ni ∪Calli ∪Reti be the set of all nodes, call ports and return ports; we refer to these as the vertices of component Mi. – A transition function δi : Qi × Ai → D(Qi), where, for each tuple δi(u, a)(v) = p is the transition probability of a transition from the source u ∈ (Ni \Exi)∪Reti to the destination v ∈ (Ni \ Eni) ∪ Calli; we often write p(v|u, a) for δi(u, a)(v). – A reward function ri : Qi ×Ai → R is the reward associated with transitions. We assume that the set of boxes B1, . . . , Bk and set of nodes N1, N2, . . . , Nk are mutually disjoint. We use symbols N,B,A,Q,En,Ex, δ to denote the union of the corresponding symbols over all components. We say that an RMDP is finite if k and all Ai, Ni and Bi are finite. An execution of an RMDP begins at an entry node of some component and, depending upon the sequence of input actions, the state evolves naturally like an MDP according to the transition distributions. However, when the execution reaches an entry port of a box, this box is stored on a stack of pending calls, and the execution continues naturally from the corresponding entry node of the component mapped to that box. When an exit node of a component is encountered, and if the stack of pending calls is empty then the run terminates; otherwise, it pops the box from the top of the stack and jumps to the exit port of the just popped box corresponding to the just reached exit of the component. The semantics of an RMDP is an infinite state MDP, whose states are pairs consisting of a sequence of boxes, called the context, mimicking the stack of pending calls and the current vertex. The height of the call stack is incremented (decremented) when a call (return) is made. A stack height of 0 refers to termination, while the empty stack has height 1. The semantics of a recursive MDP M = (M1, . . . 
,Mk) with Mi = (Ai, Ni, Bi, Yi,Eni,Exi, δi, ri) are given as a (infinite-state) MDP [[M ]] = (AM , SM , TM , rM ) where – AM = ∪ki=1Ai is the set of actions; – SM ⊆ B∗×Q is the set of states, consisting of the stack context and the current node; – TM : SM×AM → D(SM ) is the transition function such that for s = (〈κ〉, q) ∈ SM and action a ∈ AM , the distribution δM (s, a) is defined as: 1. if the vertex q is a call port, i.e. q = (b, en) ∈ Call, then δM (s, a)(〈κ, b〉, en) = 1; 2. if the vertex q is an exit node, i.e. q = ex ∈ Ex, then if κ = 〈∅〉 then the process terminates and otherwise δM (s, a)(〈κ′〉, (b, ex)) = 1 where (b, ex) ∈ Ret(b) and κ = 〈κ′, b〉; 3. otherwise, δM (s, a)(〈κ〉, q′) = δ(q, a)(q′). – the reward function rM : SM × AM → R is such that for s = (〈κ〉, q) ∈ SM and action a ∈ AM , the reward rM (s, a) is zero if q is either a call port or the exit node, and otherwise rM (s, a)(〈κ〉, q′) = r(q, a)(q′). We call the maximum value of the absolute one-step reward the diameter of an RMDP and denote it by rmax = maxs,a |r(s, a)|. Given the semantics of an RMDP M as an (infinite) MDP [[M ]], the concepts of strategies (also called policies) as well as positional strategies are well defined. We distinguish a special class of strategies—called stackless strategies—that are deterministic and do not depend on the history or the stack context at all. We are interested in computing strategies σ that maximize the expected total reward. Given RMDP M , a strategy σ determines sequences Xi and Yi of random variables denoting the ith state and action of the MDP [[M ]]. The total reward under strategy σ and its optimal value are respectively defined as ETotalMσ (s) = lim N→∞ EMσ (s) {∑ 1≤i≤N r(Xi−1, Yi) } , ETotalM (s) = sup σ ETotalMσ (s). For an RMDP M and a state s, a strategy σ is called proper if the expected number of steps taken by M before termination when starting at s is finite. To ensure that the limit above exists, as the sum of rewards can otherwise oscillate arbitrarily, we assume the following. Assumption 1 (Proper Policy Assumption). All strategies are proper for all states. We call an RMDP that satisfies Assumption 1 a proper RMDP. This assumption is akin to proper policy assumptions [7] often posed on the stochastic shortest path problems, and ensures that the total expected reward is finite. The expected total reward optimization problem over proper RMDPs subsumes the discounted optimization problem over finite-state MDPs since discounting with a factor λ is analogous to terminating with probability 1−λ at every step [36]. The properness assumption on RMDPs can be enforced by introducing an appropriate discounting (see Appendix F). Undecidability. Given an RMDP M , an initial node v, and a threshold D, the strategy existence problem is to decide whether there exists a strategy in [[M ]] with value greater than or equal to D when starting at the initial state (〈∅〉, q), i.e., at some entry node q with an empty context. Theorem 1 (Undecidability of the Strategy Existence Problem). Given a proper RMDP and a threshold D, deciding whether there exists a strategy with expected value greater than D is undecidable. PAC-learnability. Although it is undecidable to determine whether or not a strategy can exceed some threshold in a proper RMDP, the problem of ε-approximating the optimal value is decidable when parameters co, λ and b (defined below) are known. 
Our approach to PAC-learnability [1] is to learn the distribution of the transition function δ well enough and then produce an approximate, but not necessarily efficient, evaluation of our learned model. To allow PAC-learnability, we need a further nuanced notion of ε-proper policies. A policy is called ε-proper, if it terminates with a uniform bound on the expected number of steps for allM′ that differ fromM only in the transition function, where ∑ q∈S,a∈A,r∈S |δM(q, a)(r)− δM′(q, a)(r)| ≤ ε (we then say thatM′ is ε-close toM), and where the support of δM ′(q, a) is a subset of the support of δM (q, a) for all q ∈ S and a ∈ A. An RMDP is called ε-proper, if all strategies are ε-proper for M for all states of the RMDP. Assumption 2 (PAC-learnability). We restrict our attention to ε-proper RMDPs. We further require that all policies have a falling expected stack height. Namely, we require for all M′ ε-close to M and all policies σ that the expected stack height in step k is bounded by some function co − µ · ∑k i=1 p M′σ run (k), where co ≥ 1 is an offset, µ ∈]0, 1] is the decline per step, and p M′σ run (k) is the likelihood that the RMDPM′ with strategy σ is still running after k steps. We finally require that the absolute expected value from every strategy is bounded: ∣∣ETotalM′σ ((〈∅〉, q))∣∣ ≤ b for some b. Theorem 2. For every ε-proper RMDP with parameters co, µ, and b, ETotalM(s) is PAC-learnable. These parameters can be replaced by discounting. Indeed, our proofs start with discounted rewards, and then relax the assumptions to allow for using undiscounted rewards. Using a discount factor λ translates to parameters b = d1−λ , co = 1 + 1 1−λ , and µ = 1−λ. 3 Recursive Q-Learning for Multi-Exit RMDPs While RMDPs with multiple exits come with undecidability results, they are the interesting cases as they represent systems with an arbitrary call stack. We suggest an abstraction that turns them into a fixed size data structure, which is well suited for neural networks. Given a proper recursive MDP M = (M1, . . . ,Mk) with Mi = (Ai, Ni, Bi, Yi,Eni,Exi, δi, ri) with semantics [[M ]] = (AM , SM , TM , rM ), the optimal total expected reward can be captured by the following equations OPTrecur(M). For every κ ∈ B∗ and q ∈ Q: y(〈κ〉, q) = y(〈κ, b〉, en) q=(b, en) ∈ Call 0 q ∈ Ex, κ = 〈∅〉 y(〈κ′〉, (b, q)) q ∈ Ex, (b, q) ∈ Ret(b), κ=〈κ′, b〉 max a∈A(q) { r(q, a)+ ∑ q′∈Q p(q′|q, a)y(〈κ〉, q′) } otherwise. These equations capture the optimality equations on the underlying infinite MDP [[M ]]. It is straightforward to see that, if these equations admit a solution, then the solution equals the optimal total expected reward [32]. Moreover, an optimal policy can be computed by choosing the actions that maximize the right-hand-side. However, since the state space is countably infinite and has an intricate structure, an algorithm to compute a solution to these equations is not immediate. To make it accessible to learning, we abstract the call stack 〈κ, b〉 to its exit value, i.e. the total expected reward from the exit nodes of the box b, under the stack context 〈κ〉. Note that when a box is called, the value of each of its exits may still be unknown, but it is (for a given strategy) fixed. Naturally, if two stack contexts 〈κ, b〉 and 〈κ′, b〉 achieve the same expected total reward from each exit of the block b, then both the optimal strategy and the expected total reward, are the same. 
This simple but precise and effective abstraction of stacks with exit values allows us to consider the following optimality equations OPTcont(M). For every 1 ≤ i ≤ k, q ∈ Qi, and v ∈ R^{|Exi|}:

x(v, q) =
  x(v′, en) with v′ = (x(v, q′))_{q′∈Retb}                     if q = (b, en) ∈ Call
  v(q)                                                          if q ∈ Ex
  max_{a∈A(q)} { r(q, a) + ∑_{q′∈Q} p(q′|q, a) · x(v, q′) }     otherwise.

Here v is a vector where v(ex) is the (expected) reward that we get once we reach exit ex of the current component. Informally, when a box is called, this vector is updated with the current estimates of the reward that we get once the box is exited. The ex entry of the vector v′ = (x(v, q′))_{q′∈Retb} is x(v, (b, ex)), which is the value that we achieve from the return port (b, ex). This continuous-space abstraction of the countably infinite space of stack contexts enables the application of deep feedforward neural networks [18] with a finite state encoding in RL. It also provides an elegant connection to the smoothness of differences to exit values: if all exit costs are changed by less than ε, then the cost of each state within a box changes by less than ε, too. The following theorem connects both versions of the optimality equations.
Theorem 3 (Fixed Point). If y is a fixed point of OPTrecur and x is a fixed point of OPTcont, then y(〈∅〉, q) = x(0, q). Moreover, any policy optimal from (0, q) is also optimal from (〈∅〉, q).
We design a generalization of the Q-learning algorithm [40] for recursive MDPs based on the optimality equations OPTcont, shown in Algorithm 1. We implement several optimizations in our algorithm. We assume implicit transitions from the entry and exit ports of a box to the corresponding entry and exit nodes of the components. A further optimization is achieved by applying a dimension reduction to the representation of the exit-value vector v, normalizing these values in such a way that one of the exits has value 0. This normalization does not affect optimal strategies: when two stacks incur similar costs, in that they have the same offset between the cost of each exit, the optimal strategy is still the same, with the difference in cost being this offset. While the convergence of Algorithm 1 is not guaranteed for general multi-exit RMDPs, the algorithm converges for the special cases of deterministic proper RMDPs and 1-exit RMDPs (Section 4). For the deterministic multi-exit case, the observation is straightforward, as the properness assumption reduces the semantics to a directed acyclic graph, and the correct values are eventually propagated from the leaves backwards.
Theorem 4. Tabular Recursive Q-learning converges to the optimal values for deterministic proper multi-exit RMDPs with a learning rate of 1 when all state-action pairs are visited infinitely often.

Algorithm 1: Recursive Q-learning
1   Initialize Q(s, v, a) arbitrarily
2   while not converged do
3       v ← 0
4       stack ← ∅
5       Sample trajectory τ ∼ {(s, a, r, s′), . . .}
6       for (s, a, r, s′) in τ do
7           Update α_i according to the learning rate schedule
8           if entered box then
9               {s_exit1, . . . , s_exitn} ← getExits(s′)
10              v′ ← [max_{a′∈A(s_exit1)} Q(s_exit1, v, a′), . . . , max_{a′∈A(s_exitn)} Q(s_exitn, v, a′)]
11              v′_min ← min(v′)
12              v′ ← v′ − v′_min
13              Q(s, v, a) ← (1 − α_i) Q(s, v, a) + α_i (r + max_{a′∈A(s′)} Q(s′, v′, a′) + v′_min)
14              stack.push(v)
15              v ← v′
16          else if exited box then
17              {s_exit1, . . . , s_exitn} ← getExits(s)
18              Set k such that s′ = s_exitk
19              Q(s, v, a) ← (1 − α_i) Q(s, v, a) + α_i (r + v(k))
20              v ← stack.pop()
21          else
22              Q(s, v, a) ← (1 − α_i) Q(s, v, a) + α_i (r + max_{a′∈A(s′)} Q(s′, v, a′))
23          end
24      end
25  end
26  return Q

4 Convergence of Recursive Q-Learning for Proper 1-exit RMDPs

Recall that a proper 1-exit RMDP is a proper RMDP where, for each component Mi, the set of exits Exi is a singleton. For this special case, we show that the recursive Q-learning algorithm converges to the optimal strategy. The optimality equations OPTcont(M) (similar to [15]) can be simplified in the case of 1-exit RMDPs, and the unique fixed-point solution of the simplified equations gives the optimal values of the total reward objective. For every q ∈ Q:

x(q) =
  x(en) + x((b, ex))                                            if q = (b, en) ∈ Call, where ex is the unique exit in Ex_{Y(b)}
  max_{a∈A(q)} { r(q, a) + ∑_{q′∈Q} p(q′|q, a) · x(q′) }        otherwise.

We now denote the system of all these equations in vector form as x̄ = F(x̄). Given a 1-exit RMDP, we can easily construct its associated equation system above in linear time.
Theorem 5 (Unique Fixed Point). The vector consisting of the ETotal^M(q) values is the unique fixed point of F. Moreover, a solution of these equations provides optimal stackless strategies.
Note that for the 1-exit setting, Algorithm 1 simplifies to Algorithm 2, since v is always 0 and v′_min is always the maximum Q-value for the exit. The convergence of the recursive Q-learning algorithm for 1-exit RMDPs follows from Theorem 5 and stochastic approximation [40, 8].
Theorem 6. Algorithm 2 converges to the optimal values in 1-exit RMDPs when the learning rates satisfy ∑_{i=0}^{∞} α_i = ∞ and ∑_{i=0}^{∞} α_i² < ∞, and all state-action pairs are visited infinitely often.
In order to show efficient PAC-learnability for an ε-proper 1-exit RMDP M, it suffices to know an upper bound on the expected number of steps taken by M when starting at any vertex with the empty stack content, which we denote by K.
Theorem 7 (Efficient PAC Learning for 1-Exit RMDPs). For every ε-proper 1-exit RMDP with diameter rmax and expected time to terminate at most K, ETotal^M(s) is efficiently PAC-learnable.

Algorithm 2: Recursive Q-learning (1-exit special case)
1   Initialize Q(s, a) arbitrarily
2   while not converged do
3       Sample trajectory τ ∼ {(s, a, r, s′), . . .}
4       for (s, a, r, s′) in τ do
5           Update α_i according to the learning rate schedule
6           if entered box then
7               s_exit ← getExit(s′)
8               Q(s, a) ← (1 − α_i) Q(s, a) + α_i (r + max_{a′∈A(s′)} Q(s′, a′) + max_{a′∈A(s_exit)} Q(s_exit, a′))
9           else if exited box then
10              Q(s, a) ← (1 − α_i) Q(s, a) + α_i · r
11          else
12              Q(s, a) ← (1 − α_i) Q(s, a) + α_i (r + max_{a′∈A(s′)} Q(s′, a′))
13          end
14      end
15  end
16  return Q

5 Experiments

We implemented Algorithm 1 in tabular form as well as with a neural network. For the tabular implementation, we quantized the vector v directly after its computation to ensure that the Q-table remains small and discrete. For the neural network implementation we used the techniques from DQN [30], replay buffers and target networks, for additional stability. The details of this implementation can be found in the appendix. We consider three examples: one to demonstrate the application of Recursive Q-learning to synthesizing probabilistic programs, one to demonstrate convergence in the single-exit setting, and one to demonstrate the use of a context-free reward machine. We compare Recursive Q-learning to Q-learning where the RMDP is treated as an MDP by the agent, i.e., the agent treats stack calls and returns as if they were normal MDP transitions.
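For concreteness, the following is a minimal tabular Python sketch of Algorithm 2, the 1-exit special case, as one might run it in experiments of this kind. The episodic environment interface (reset, actions, step returning a tag for box entry/exit, exit_of) is a hypothetical stand-in, not part of the paper's artifact.

    import random
    from collections import defaultdict

    def recursive_q_learning_1exit(env, episodes=10000, alpha=0.1, eps=0.1):
        """Tabular Recursive Q-learning for proper 1-exit RMDPs (Algorithm 2).
        env.reset() -> initial vertex; env.actions(s) -> available actions;
        env.step(a) -> (s2, r, tag, done) with tag in {"internal",
        "entered_box", "exited_box"}; env.exit_of(s2) -> the unique exit
        of the box just entered.
        """
        Q = defaultdict(float)
        best = lambda s: max(Q[(s, a)] for a in env.actions(s))
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                acts = list(env.actions(s))
                a = (random.choice(acts) if random.random() < eps
                     else max(acts, key=lambda a: Q[(s, a)]))
                s2, r, tag, done = env.step(a)
                if tag == "entered_box":     # value inside the box plus value from its exit
                    target = r + best(s2) + best(env.exit_of(s2))
                elif tag == "exited_box":    # in the 1-exit case the exit value is 0
                    target = r
                else:                        # ordinary internal transition
                    target = r + (0.0 if done else best(s2))
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s2
        return Q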
5.1 Cloud computing

The cloud computing example, introduced in Example 1, is a recursive probabilistic program with decision points for an RL agent to optimize over. The optimal strategy is to select the reliable server and to never upgrade. This strategy produces an expected total reward of −5.3425. Figure 3 shows that tabular Recursive Q-learning with discretization quickly converges to the optimal solution on this multi-exit RMDP, while Q-learning oscillates around a suboptimal policy.

5.2 Infinite spelunking

[Figure: the two level types, 1 and 2, of the infinitely deep cave, with fall-in positions marked I and the climbing-equipment location marked E.]

Consider a single-exit RMDP gridworld with two box types, 1 and 2, shown at the bottom of the figure to the right. These box types are the two types of levels in an infinitely deep cave. When falling or descending to another level, the level type switches. Passing over a trap, shown in red, results in the agent teleporting to a random position and falling with probability 0.5. The agent has fallen into the cave at the position denoted by I without climbing equipment. However, there is climbing equipment in one of the types of levels at a known location denoted by E. The agent has four move directions—north, east, south, west—as well as an additional action to descend further or ascend. Until the climbing equipment is obtained, the agent can only descend. Once the climbing equipment is obtained, the traps no longer affect the agent and the agent can ascend only from the position where it fell down. With probability 0.01 the agent ascends from the current level with the climbing gear. This has the effect of box-wise discounting with discount factor 0.99. The agent's objective is to leave the cave from where it fell in as soon as possible. The reward is −1 on every step. There are two main strategies to consider. The first strategy tries to obtain the climbing gear by going over the traps. This strategy leads to an unbounded number of possible levels, since the traps may repeatedly trigger. The second strategy avoids the traps entirely. The figure to the right shows partial descending trajectories from these strategies, with the actions shown in green, the trap teleportations shown in blue, and the locations the agent fell down from shown as small black squares. Which strategy is better depends on the probability of the traps triggering. With a trap probability of 0.5, the optimal strategy is to try and reach the climbing equipment by going over the traps. Figure 1 shows the convergence of tabular Recursive Q-learning for 1-exit RMDPs to this optimal strategy, while the strategy learned by Q-learning does not improve.

5.3 Palindrome gridworld

To demonstrate the ability to incorporate context-free objectives, consider a 3 × 3 gridworld with a goal cell in the center and a randomly selected initial state. The agent has four move actions—north, east, south, west—and a special control action. The objective of the agent is to reach the goal cell while forming an action sequence that is a palindrome of even length. What makes this possible is that when the agent performs an action that pushes against a wall of the gridworld, no movement occurs. To monitor the progress of the property, we compose this MDP with a nondeterministic pushdown automaton. The agent must use its special action to determine when to resolve the nondeterminism in the pushdown automaton. Additionally, the agent uses its special action to declare the end of its action sequence.
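To give a flavor of the pushdown monitor involved, here is a hedged sketch of one possible construction; the paper's exact automaton may differ, and whether the special actions themselves count towards the palindrome is an assumption of this sketch. The monitor pushes the move actions until the agent's first special action guesses the midpoint, then pops and matches until a second special action declares the end, accepting exactly when the stack is empty.

    class PalindromeMonitor:
        """Pushdown monitor for even-length action palindromes, with the
        nondeterminism (the midpoint guess) resolved by the agent's special
        action.  A sketch under the assumptions stated above, not the
        paper's construction."""
        def __init__(self):
            self.stack, self.phase = [], "push"

        def step(self, action):
            """Returns "ok", "accept" (success), or "reject" (rejecting sink)."""
            if action == "special":
                if self.phase == "push":
                    self.phase = "pop"       # agent declares the midpoint
                    return "ok"
                # second special action declares the end of the sequence
                return "accept" if not self.stack else "reject"
            if self.phase == "push":
                self.stack.append(action)    # first half: remember the moves
                return "ok"
            if self.stack and self.stack.pop() == action:
                return "ok"                  # second half: must mirror the first
            return "reject"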
To ensure properness, the agent's selected action is corrupted into the special action with probability 0.01. The agent is given a reward of 50 upon success, −5 when the agent selects an action that causes the pushdown automaton to enter a rejecting sink, and −1 on all other timesteps. Figure 3 shows the convergence of Deep Recursive Q-learning to an optimal strategy on this example, while DQN fails to find a good strategy.

6 Related Work

Hierarchical RL is an approach for learning policies on MDPs that introduces a hierarchy in policy space. There are three prevalent approaches to specifying this hierarchy. The options framework [37] represents these hierarchies as policies, each with a starting and a termination condition. The hierarchy of abstract machines (HAM) framework [31] represents the policy as a hierarchical composition of nondeterministic finite-state machines. Finally, the MAXQ framework [13] represents the hierarchy using a programmatic representation with finite-range variables and a strict hierarchy among modules. Semi-Markov decision processes (SMDPs) and hierarchical MDPs are fundamental models that appear in the context of hierarchical RL. SMDPs generalize MDPs with timed actions. The RL algorithms for SMDPs are based on a natural generalization of the Bellman equations to accommodate timed actions. Hierarchical MDPs model bounded recursion and can be solved by flattening to an MDP, or by producing policies that are only locally optimal. Recursive MDPs model unbounded recursion in the environment space. The orthogonality of recursion in environment space and in policy space means they are complementary—one can consider applying ideas in hierarchical RL to find a policy in an RMDP. The authors of [3] proposed using partially specified programs with recursion to constrain the policy space, but only considered bounded recursion. Hierarchical MDPs [37, 31, 13] and factored MDPs [12, 20] indeed offer compact representations of finite MDPs. These representations can be exponentially more succinct. However, note that finite instances of these formalisms are not any more expressive than finite MDPs, as instances of these formalisms can always be rewritten as a finite MDP. On the other hand, the recursive MDPs studied in this paper are strictly more expressive than finite MDPs. Even 1-exit RMDPs may not be expressible as finite MDPs, due to a potentially unbounded stack, but remarkably they can be solved exactly with a finite tabular model-free reinforcement learning algorithm (Theorem 6), without needing to resort to ad-hoc approximations of the unbounded stack configurations. Context-free grammars in RL for the optimization of molecules have been considered before, by introducing a bound on the recursion depth to induce a finite MDP [25, 42]. Combining context-free grammars and reward machines was proposed as a future research direction in [24]; context-free reward machines are first described in this paper. Recursive MDPs have been studied outside the RL setting [16], including results for 1-exit RMDP termination [16] and total reward optimization under the positive reward restriction [15]. A convergent model-free RL algorithm for 1-exit RMDPs with the positive cost restriction was proposed in [21]. The results on undecidability and PAC-learnability, the introduction of the Recursive Q-learning algorithm, the convergence results for Recursive Q-learning, and its deep learning extension are novel contributions of this paper.
7 Conclusion

Reinforcement learning has so far primarily considered Markov decision processes (MDPs). Although extremely expressive, this formalism may require "flattening" a more expressive representation that contains recursion. In this paper we examine the use of recursive MDPs (RMDPs) in reinforcement learning—a setting we call recursive reinforcement learning. A recursive MDP is a collection of MDP components, where the components can recursively invoke one another. This allows the introduction of an unbounded stack. We propose abstracting this discrete stack with a continuous abstraction in the form of the costs of the exits of a component. Using this abstraction, we introduce Recursive Q-learning—a model-free reinforcement learning algorithm for RMDPs. We prove that tabular Recursive Q-learning converges to an optimal solution on finite 1-exit RMDPs, even though the underlying MDP has an infinite state space. We demonstrate the potential of our approach on a set of examples that includes probabilistic program synthesis, a single-exit RMDP, and an MDP composed with a context-free property.

Acknowledgments. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 864075 (CAESAR) and 956123 (FOCETA). This work is supported in part by the National Science Foundation (NSF) grant CCF-2009022 and by NSF CAREER award CCF-2146563. This work utilized the Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
1. What is the focus and contribution of the paper regarding reinforcement learning in recursive Markov decision processes?
2. What are the strengths and weaknesses of the proposed approach, particularly in its relationship with existing reinforcement learning methods?
3. Do you have any concerns or questions regarding the theoretical assertions made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential biases in the paper's approach or assumptions that need to be addressed?
Summary Of The Paper

This paper studies reinforcement learning in recursive Markov decision processes (RMDPs). RMDPs were proposed a few years ago, and the authors cited one paper in which reinforcement learning for a special type of RMDPs (1-exit RMDPs with a positive reward restriction) was presented. The goal of the current paper is to study reinforcement learning in more general RMDPs. The paper offers both theoretical and empirical contributions. A number of theoretical assertions are made, e.g., about undecidability or PAC-learnability. The RMDPs can model infinite state spaces, but the algorithms for solving them have to deal with infinite quantities in one way or the other. For this reason, an approximate algorithm for RMDPs is proposed in this paper. Empirical results are presented and the new method is compared with standard flat Q-learning.

Strengths And Weaknesses

This paper offers a competent discussion of RMDPs and adds useful extensions to reinforcement learning algorithms for solving these models. If we disregard the broader reinforcement learning literature, this paper looks like a strong contribution. This work on RL in RMDPs is, however, not placed correctly in the context of the existing methods in the reinforcement learning community. The biggest issue is that the current narrative of criticism of flat reinforcement learning muddles the waters, which makes it impossible to see how this work can be placed in the context of the existing literature. In several parts of the paper, the authors refer to flat RL, saying that flat RL cannot cope with certain properties of the tasks that can be addressed by RMDPs. The problem is that there exist powerful methods in the RL literature that can cope with structured domains. Hierarchical reinforcement learning or representations with state features can exploit the fact that the state space is composed of a number of state features. This makes the empirical results in this paper unfair, since the comparisons should be against hierarchical or factored algorithms. Comparisons with flat Q-learning are not sufficient, since there exist algorithms that are at least as intelligent as the methods proposed here. It is certainly the authors' responsibility to find out how exactly their new method relates to the existing methods. The RMDP is an interesting model, but solving it has its own challenges. For example, to solve the example presented in Fig. 2, it would be possible to use an RL algorithm with state features, and to use an integer variable to count how many times S() was executed. This would introduce a possibly infinite number of states in the representation, but the size of the stack has exactly the same problem, since S() can be executed infinitely many times. So, the stack will grow too. The relationship of this method with the existing RL literature is not clear, and it is difficult to judge whether the method adds anything that is not used in the current reinforcement learning methods.

Questions

Line 36 criticises HRL for not being able to model infinite recursion. But the new method has the same problem, since the stack can be infinite too. Is this claim justified then?

Example 1 is great. Have you tried to find a hierarchical or maybe a PDDL-like method in RL that could model recursion? Having an integer variable seems to be sufficient.

I am not sure why HRL or factored RL methods cannot cope with the example presented in lines 94 to 108. Could you please clarify?

Why is the new definition of proper strategies required? In standard MDPs (see [26]), discounting is used to cope with summations over infinite sets. The authors talk about the discount factor in the appendix, but the definition of the proper policies seems to be redundant. I believe that the entire theory could be presented without this concept. It adds a lot of confusion.

Why are the comparisons only against flat Q-learning?

I am not sure why Q-learning oscillates in Section 5.1. Given a sufficient amount of exploration, Q-learning is guaranteed to find an optimal policy. If Q-learning is put in a disadvantaged position here, the authors should implement Q-learning in a way that will allow it to use the same information as their method based on RMDPs.

It would be useful if the paper were more explicit about which concepts are known and which were proposed in this paper.

Limitations

n.a.
1. What is the focus of the paper regarding MDPs?
2. What are the strengths of the proposed approach, particularly in terms of originality and significance?
3. What are the weaknesses of the paper, especially regarding the experimental domains?
4. Do you have any questions regarding the applicability of the method to specific types of MDPs?
5. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents an analysis of a special class of infinite MDPs, recursive MDPs. Recursive MDPs are represented as a collection of components, which have nodes and boxes. Boxes call other components and have associated "call ports" and "return ports". Actions in each component affect transitions to nodes and boxes. A component reward function is defined over component transitions. The components then implicitly define an infinite MDP whose (countable) state space is defined by pairs of box stacks and nodes. Importantly, the stack acts as a context analogous to a stack in a pushdown automaton and can grow infinitely (this is the sense in which recursion is implemented). The authors analyze RMDPs under two assumptions: 1 - all RMDP strategies have finite return (this can be enforced with a discount) and 2 - RMDPs are epsilon-proper and all policies have a falling expected stack height. They then report a recursive Q-learning algorithm for multi-exit RMDPs based on an abstraction of the call stack. Specifically, call-stacks are equivalent if they share an exit-value--that is, the vector of total expected rewards from the exit nodes of a box given the rest of the stack. Theorem 3 states that the fixed point of the abstract/continuous version of the RMDP is equivalent to that of the original RMDP, which makes it possible to apply algorithms that operate over a fixed-size state representation (e.g., standard deep RL methods). Their algorithm (alg 1) is not guaranteed to converge for general multi-exit RMDPs, but they report a version (alg 2) that converges for single-exit RMDPs (single-exit RMDPs only have components with a single exit node, are more expressive than finite MDPs, and are related to "controlled branching processes"). Experiments are then reported on 3 domains, including one in which a DQN-based learning algorithm is used. Compared to using standard Q-learning/DQN, their recursive Q-learning algorithm does better. Strengths And Weaknesses Originality The paper provides an analysis of a novel and interesting class of MDPs Quality The types of analyses reported are not in my main field of study, so I cannot comment on whether they meet technical standards typical for this subfield, but I found them to be a sensible way to approach the problem Clarity Given the technical nature of the paper, I thought the main ideas and results were presented clearly Significance The authors note in the introduction that recursive RL can be applied to probabilistic program synthesis, context-free reward machines, and stochastic context-free grammars. These are important domains, but the main test cases to which they apply their method are mainly toy domains. Questions I only have a few clarifying questions: Were the domains 5.1 and 5.3 general RMDPs? It would be helpful if this were stated explicitly. Stackless strategies are mentioned on line 166 and then in Theorem 5 in relation to recursive Q-learning applied to 1-exit RMDPs. Do the other results apply to general stack-based strategies? Limitations The authors have been up front about where their method applies and its limitations.
NIPS
Title Recursive Reinforcement Learning Abstract Recursion is the fundamental paradigm to finitely describe potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner’s ingenuity in designing a suitable “flat” representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with call-stack playing the role of the pushdown stack), and can model probabilistic programs with recursive procedural calls. We introduce Recursive Q-learning—a model-free RL algorithm for RMDPs—and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions. 1 Introduction Reinforcement learning [36] (RL) is a stochastic approximation based approach to optimization, where learning agents rely on scalar reward signals from the environment to converge to an optimal behavior. Watkins’s seminal approach [41] to RL, known as Q-learning, judiciously combines exploration/exploitation with dynamic programming to provide guaranteed convergence [40] to optimal behaviors in environments modeled as Markov decision processes (MDPs) with finite state and action spaces. RL has also been applied to MDPs with uncountable state and action spaces, although convergence guarantees for such environments require strong regularity assumptions. Modern variants of Q-learning (and other tabular RL algorithms) harness the universal approximability and ease-of-training rendered by deep neural networks [18] to discover creative solutions to problems traditionally considered beyond the reach of AI [30, 39, 33]. These RL algorithms are designed with a flat Markovian view of the environment in the form of a “state, action, reward, and next state” interface [9] in every interaction with the learning agent, where the states/actions may come from infinite sets. When such infinitude presents itself in the form of finitely represented recursive structures, the inability of the RL algorithms to handle structured environments means that the structure present in the environment is not available to the RL algorithm to generalize its learning upon. The work of [41] already provides a roadmap for hierarchically structured environments; since then, considerable progress has been made in developing algorithms for hierarchical RL [6, 13, 37, 31] with varying optimality guarantees. Still, the hierarchical MDPs are expressively equivalent to finite-state MDPs, although they may be exponentially more succinct (Lemma 1). Thus, hierarchical RL algorithms are inapplicable in the presence of unbounded recursion. On the other hand, recursion occurs naturally in human reasoning [10], mathematics and computation [34, 35], and physical environments [29].
Recursion is a powerful cognitive tool in enabling a divide-and-conquer strategy [11] to problem solving (e.g., tower of Hanoi, depth-first search) and, consequently, recursive solutions enhance explainability in the form of intuitive inductive proofs of correctness. Unlike flat representations, the structure exposed by recursive definitions enables generalizability. Recursive concepts, such as recursive functions and data structures, provide scaffolding for efficient and transparent algorithms. Finally, the models of physical environments express the system evolution in the form of recursive equations. We posit that the lack of RL algorithms in handling recursion is an obstacle to their applicability, explainability, and generalizability. This paper aims to fill the gap by studying recursive Markov decision processes [16] as environment models in reinforcement learning. We dub this setting recursive reinforcement learning (RRL). MDPs with Recursion. A recursive Markov Decision Process (RMDP) [16] is a finite collection of special MDPs, called component MDPs, with special entry and exit nodes that take the role of input parameter and return value, respectively. The states of component MDPs may either be the standard MDP states, or they may be “boxes” with input and output ports; these boxes are mapped to other component MDPs (possibly, the component itself) with some matching of the entry and exit nodes. An RMDP where every component has only one exit is called a 1-exit RMDP; otherwise we call it a general or multi-exit RMDP. Single-exit RMDPs are strictly less expressive than general RMDPs as they are equivalent to functions without any return value. Nonetheless, 1-exit RMDPs are more expressive than finite-state MDPs [32] and relate closely to controlled branching processes [17]. Example 1 (Cloud Computing). As an example of a recursive MDP, consider the Boolean program shown in Figure 2. This example (inspired by [21]) models a cloud computing scenario to compute a task T depicted as the component T. Here, a decision maker is faced with two choices: she can either choose to execute the task monolithically (with a cost of 8 units) or choose to decompose the task into three S tasks. The process of decomposition and later combining the results costs 0.5 units. Each task S can be executed on a fast but unreliable server that costs 1 unit; with probability 0.4 the server may crash, requiring a recomputation of the task. When executed on a reliable server, the task S costs 1.5 units; however, the task may be interrupted by a higher-priority task, and the decision maker will be compensated for this delay by earning a 0.2 unit reward. During the interrupt service routine H, there is a choice to upgrade the priority of the task for a cost of 0.2 units. Otherwise, the interrupt service routine costs 0.01 units (due to added delay) and the interrupt service routine itself can be interrupted, causing the original service routine to be re-executed in addition to that of the new interrupt. The goal of the RL agent is to compute an optimal policy that maximizes the total reward to termination. This example is represented as a recursive MDP in Figure 1. This RMDP has three components T, S, and H. The component T has three boxes b1, b2, and b3, all mapped to component S. Components T and S both have single entry and exit nodes, while the component H has two exit nodes. Removing the thick (maroon) transitions and the exit u7 makes the RMDP 1-exit.
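For concreteness, the recursive control flow of Example 1 can be sketched as a small probabilistic Python program. This is an illustration, not code from the paper: the interrupt probability P_INTERRUPT is a hypothetical parameter (the text does not state it), and the control flow after an interrupt is simplified.

```python
import random

P_INTERRUPT = 0.1  # hypothetical interrupt probability; not stated in the text

def T(monolithic, fast_server, upgrade):
    """Task T: run monolithically (cost 8) or decompose into three S tasks (cost 0.5)."""
    if monolithic:
        return -8.0
    return -0.5 + sum(S(fast_server, upgrade) for _ in range(3))  # boxes b1, b2, b3

def S(fast_server, upgrade):
    """Task S: fast unreliable server (cost 1, crash w.p. 0.4) or reliable server (cost 1.5)."""
    if fast_server:
        crash = S(fast_server, upgrade) if random.random() < 0.4 else 0.0
        return -1.0 + crash              # a crash forces a recomputation of S
    reward = -1.5
    if random.random() < P_INTERRUPT:    # a higher-priority task arrives
        reward += 0.2 + H(upgrade)       # 0.2 compensation, then the handler H runs
    return reward

def H(upgrade):
    """Interrupt handler H: upgrade priority (cost 0.2) or risk a nested interrupt."""
    if upgrade:
        return -0.2
    reward = -0.01
    if random.random() < P_INTERRUPT:    # the handler itself is interrupted
        reward += H(upgrade) + H(upgrade)  # serve the new interrupt, then re-execute
    return reward

# Monte Carlo estimate of the expected total reward under one fixed policy:
returns = [T(monolithic=False, fast_server=False, upgrade=False) for _ in range(10_000)]
print(sum(returns) / len(returns))
```

Each function corresponds to a component, each call to entering a box, and each return to reaching an exit; the Python call stack plays the role of the RMDP's stack of pending calls.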
The edges specify both the name of the action and the corresponding reward. While the component T is non-stochastic, components S and H both have stochastic transitions depicted by the grey circle. Recursive MDPs strictly generalize finite-state MDPs and hierarchical MDPs, and semantically encode countable-state MDPs with the context encoding the stack frame of unbounded depth. RMDPs generalize several well-known stochastic models including stochastic context-free grammars [28, 26] and multi-type branching processes [22, 38]. Moreover, RMDPs are expressively equivalent to probabilistic pushdown systems (MDPs with unbounded stacks), and can model probabilistic programs with unrestricted recursion. Given their expressive power, it is not surprising that reachability and termination problems for general RMDPs are undecidable. Single-exit RMDPs, on the other hand, are considerably more well-behaved, with decidable termination [16] and total reward optimization under positive reward restriction [15]. Exploiting these properties, a convergent RL algorithm [21] has been proposed for 1-exit RMDPs with positive cost restriction. However, to the best of our knowledge, no RL algorithm exists for general RMDPs. Applications of Recursive RL. Next, we present some paradigmatic applications of recursive RL. – Probabilistic Program Synthesis. As shown in Example 1, RMDPs can model procedural probabilistic Boolean programs. Hence, recursive RL can be used for program synthesis in unknown, uncertain environments. Boolean abstractions of programs [4] are particularly suited to modeling as RMDPs. Potential applications include program verification [4, 5, 14] and program synthesis [19]. – Context-Free Reward Machines. Recently, reward machines [23] have been proposed to model non-Markovian reward signals in RL. In this setting, a regular language extended with the reward signals (Mealy machines) over the observation sequences of the MDP is used to encode reward signals. In this setting, the RL algorithms operate on the finite MDP given by the product of the MDP with the reward machine. Following the Chomsky hierarchy, context-free grammars or pushdown automata can be used to provide more expressive reward schemes than regular languages. This results in context-free reward machines: reward machines with an unbounded stack. As an example of such a more expressive reward language, consider a grid-world with a reachability objective and some designated charging stations, where one unit of dwell time charges the unbounded-capacity battery by one unit. If every action discharges the battery by one unit, the reward scheme to reach the target location without ever draining the battery cannot be captured by a regular reward machine. On the other hand, this reward signal can be captured with an RMDP, where charging by one unit amounts to calling a component and discharging amounts to returning from the component, such that the length of the call stack encodes the battery level. More generally, any context-free requirement over finite-state MDPs can be captured using general RMDPs. – Stochastic Context-Free Grammars. Stochastic CFGs and branching decision processes can capture a structured evolution of a system. These can be used for modeling disease spread, population dynamics, and natural languages. RRL can be used to learn optimal behavior in systems expressed using such stochastic grammars. Overview.
We begin the technical presentation by providing the formal definition of RMDPs and the total reward problem, which we show to be undecidable in general. We then develop PAC learning results under mild restrictions. In Section 3, we develop Recursive Q-learning, a model-free RL algorithm for RMDPs. In Section 4, we show that Recursive Q-learning converges to an optimal solution in the single-exit setting. Section 5 then demonstrates the empirical performance of Recursive Q-learning. 2 Recursive Markov Decision Processes A Markov decision process M is a tuple (A, S, T, r) where A is the set of actions, S is the set of states, T : S × A → D(S) is the probabilistic transition function, and r : S × A → R is the reward function. We say that an MDP M is finite if both S and A are finite. For any state s ∈ S, A(s) denotes the set of actions that may be selected in state s. A recursive Markov decision process (RMDP) [16] is a tuple M = (M1, . . . , Mk), where each component Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi) consists of:
– A set Ai of actions;
– A set Ni of nodes, with a distinguished subset Eni of entry nodes and a (disjoint) subset Exi of exit nodes (we assume an arbitrary but fixed ordering on Exi and Eni);
– A set Bi of boxes along with a mapping Yi : Bi → {1, . . . , k} that assigns to every box (the index of) a component. To each box b ∈ Bi, we associate a set of call ports, Call_b = {(b, en) | en ∈ En_Y(b)}, and a set of return ports, Ret_b = {(b, ex) | ex ∈ Ex_Y(b)};
– we let Call_i = ∪_{b∈Bi} Call_b, Ret_i = ∪_{b∈Bi} Ret_b, and let Qi = Ni ∪ Call_i ∪ Ret_i be the set of all nodes, call ports, and return ports; we refer to these as the vertices of component Mi;
– A transition function δi : Qi × Ai → D(Qi), where δi(u, a)(v) = p is the probability of a transition from the source u ∈ (Ni \ Exi) ∪ Ret_i to the destination v ∈ (Ni \ Eni) ∪ Call_i; we often write p(v|u, a) for δi(u, a)(v);
– A reward function ri : Qi × Ai → R associated with transitions.
We assume that the sets of boxes B1, . . . , Bk and the sets of nodes N1, N2, . . . , Nk are mutually disjoint. We use symbols N, B, A, Q, En, Ex, δ to denote the union of the corresponding symbols over all components. We say that an RMDP is finite if k and all Ai, Ni, and Bi are finite. An execution of an RMDP begins at an entry node of some component and, depending upon the sequence of input actions, the state evolves naturally like an MDP according to the transition distributions. However, when the execution reaches an entry port of a box, this box is stored on a stack of pending calls, and the execution continues naturally from the corresponding entry node of the component mapped to that box. When an exit node of a component is encountered, and if the stack of pending calls is empty, then the run terminates; otherwise, it pops the box from the top of the stack and jumps to the exit port of the just popped box corresponding to the just reached exit of the component. The semantics of an RMDP is an infinite state MDP, whose states are pairs consisting of a sequence of boxes, called the context, mimicking the stack of pending calls and the current vertex. The height of the call stack is incremented (decremented) when a call (return) is made. A stack height of 0 refers to termination, while the empty stack has height 1. The semantics of a recursive MDP M = (M1, . . . , Mk)
with Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi, ri) are given as an (infinite-state) MDP [[M]] = (AM, SM, TM, rM) where
– AM = ∪_{i=1}^{k} Ai is the set of actions;
– SM ⊆ B* × Q is the set of states, consisting of the stack context and the current node;
– TM : SM × AM → D(SM) is the transition function such that for s = (⟨κ⟩, q) ∈ SM and action a ∈ AM, the distribution δM(s, a) is defined as: 1. if the vertex q is a call port, i.e. q = (b, en) ∈ Call, then δM(s, a)(⟨κ, b⟩, en) = 1; 2. if the vertex q is an exit node, i.e. q = ex ∈ Ex, then if κ = ⟨∅⟩ the process terminates, and otherwise δM(s, a)(⟨κ′⟩, (b, ex)) = 1 where (b, ex) ∈ Ret_b and κ = ⟨κ′, b⟩; 3. otherwise, δM(s, a)(⟨κ⟩, q′) = δ(q, a)(q′);
– the reward function rM : SM × AM → R is such that for s = (⟨κ⟩, q) ∈ SM and action a ∈ AM, the reward rM(s, a) is zero if q is either a call port or an exit node, and otherwise rM(s, a) = r(q, a).
We call the maximum value of the absolute one-step reward the diameter of an RMDP and denote it by r_max = max_{s,a} |r(s, a)|. Given the semantics of an RMDP M as an (infinite) MDP [[M]], the concepts of strategies (also called policies) as well as positional strategies are well defined. We distinguish a special class of strategies—called stackless strategies—that are deterministic and do not depend on the history or the stack context at all. We are interested in computing strategies σ that maximize the expected total reward. Given an RMDP M, a strategy σ determines sequences Xi and Yi of random variables denoting the ith state and action of the MDP [[M]]. The total reward under strategy σ and its optimal value are respectively defined as
ETotal^σ_M(s) = lim_{N→∞} E^σ_{M,s} [ Σ_{1≤i≤N} r(X_{i−1}, Y_i) ],   ETotal_M(s) = sup_σ ETotal^σ_M(s).
For an RMDP M and a state s, a strategy σ is called proper if the expected number of steps taken by M before termination when starting at s is finite. To ensure that the limit above exists, as the sum of rewards can otherwise oscillate arbitrarily, we assume the following. Assumption 1 (Proper Policy Assumption). All strategies are proper for all states. We call an RMDP that satisfies Assumption 1 a proper RMDP. This assumption is akin to proper policy assumptions [7] often posed on the stochastic shortest path problems, and ensures that the total expected reward is finite. The expected total reward optimization problem over proper RMDPs subsumes the discounted optimization problem over finite-state MDPs, since discounting with a factor λ is analogous to terminating with probability 1−λ at every step [36]. The properness assumption on RMDPs can be enforced by introducing an appropriate discounting (see Appendix F). Undecidability. Given an RMDP M, an initial node v, and a threshold D, the strategy existence problem is to decide whether there exists a strategy in [[M]] with value greater than or equal to D when starting at the initial state (⟨∅⟩, q), i.e., at some entry node q with an empty context. Theorem 1 (Undecidability of the Strategy Existence Problem). Given a proper RMDP and a threshold D, deciding whether there exists a strategy with expected value greater than D is undecidable. PAC-learnability. Although it is undecidable to determine whether or not a strategy can exceed some threshold in a proper RMDP, the problem of ε-approximating the optimal value is decidable when the parameters c_o, µ, and b (defined below) are known.
Our approach to PAC-learnability [1] is to learn the distribution of the transition function δ well enough and then produce an approximate, but not necessarily efficient, evaluation of our learned model. To allow PAC-learnability, we need a further nuanced notion of ε-proper policies. A policy is called ε-proper if it terminates with a uniform bound on the expected number of steps for all M′ that differ from M only in the transition function, where Σ_{q∈S, a∈A, r∈S} |δ_M(q, a)(r) − δ_M′(q, a)(r)| ≤ ε (we then say that M′ is ε-close to M), and where the support of δ_M′(q, a) is a subset of the support of δ_M(q, a) for all q ∈ S and a ∈ A. An RMDP is called ε-proper if all strategies are ε-proper for M for all states of the RMDP. Assumption 2 (PAC-learnability). We restrict our attention to ε-proper RMDPs. We further require that all policies have a falling expected stack height. Namely, we require for all M′ ε-close to M and all policies σ that the expected stack height in step k is bounded by some function c_o − µ · Σ_{i=1}^{k} p_run^{M′,σ}(i), where c_o ≥ 1 is an offset, µ ∈ (0, 1] is the decline per step, and p_run^{M′,σ}(k) is the likelihood that the RMDP M′ with strategy σ is still running after k steps. We finally require that the absolute expected value from every strategy is bounded: |ETotal^σ_{M′}((⟨∅⟩, q))| ≤ b for some b. Theorem 2. For every ε-proper RMDP with parameters c_o, µ, and b, ETotal_M(s) is PAC-learnable. These parameters can be replaced by discounting. Indeed, our proofs start with discounted rewards, and then relax the assumptions to allow for using undiscounted rewards. Using a discount factor λ translates to parameters b = d/(1−λ), c_o = 1 + 1/(1−λ), and µ = 1−λ. 3 Recursive Q-Learning for Multi-Exit RMDPs While RMDPs with multiple exits come with undecidability results, they are the interesting cases as they represent systems with an arbitrary call stack. We suggest an abstraction that turns them into a fixed-size data structure, which is well suited for neural networks. Given a proper recursive MDP M = (M1, . . . , Mk) with Mi = (Ai, Ni, Bi, Yi, Eni, Exi, δi, ri) and semantics [[M]] = (AM, SM, TM, rM), the optimal total expected reward can be captured by the following equations OPTrecur(M). For every κ ∈ B* and q ∈ Q:
y(⟨κ⟩, q) =
  y(⟨κ, b⟩, en)        if q = (b, en) ∈ Call;
  0                    if q ∈ Ex and κ = ⟨∅⟩;
  y(⟨κ′⟩, (b, q))      if q ∈ Ex, (b, q) ∈ Ret_b, and κ = ⟨κ′, b⟩;
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) y(⟨κ⟩, q′) }   otherwise.
These equations capture the optimality equations on the underlying infinite MDP [[M]]. It is straightforward to see that, if these equations admit a solution, then the solution equals the optimal total expected reward [32]. Moreover, an optimal policy can be computed by choosing the actions that maximize the right-hand side. However, since the state space is countably infinite and has an intricate structure, an algorithm to compute a solution to these equations is not immediate. To make it accessible to learning, we abstract the call stack ⟨κ, b⟩ to its exit value, i.e. the total expected reward from the exit nodes of the box b under the stack context ⟨κ⟩. Note that when a box is called, the value of each of its exits may still be unknown, but it is (for a given strategy) fixed. Naturally, if two stack contexts ⟨κ, b⟩ and ⟨κ′, b⟩ achieve the same expected total reward from each exit of the box b, then both the optimal strategy and the expected total reward are the same.
This simple but precise and effective abstraction of stacks with exit values allows us to consider the following optimality equations OPTcont(M). For every 1 ≤ i ≤ k, q ∈ Qi, v ∈ R^{|Exi|}:
x(v, q) =
  x(v′, en), where v′ = (x(v, q′))_{q′∈Ret_b}   if q = (b, en) ∈ Call;
  v(q)                                          if q ∈ Ex;
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) x(v, q′) }   otherwise.
Here v is a vector where v(ex) is the (expected) reward that we get once we reach exit ex of the current component. Informally, when a box is called, this vector is updated with the current estimates of the reward that we get once the box is exited. The ex entry of the vector v′ = (x(v, q′))_{q′∈Ret_b} is x(v, (b, ex)), which is the value that we achieve from exit (b, ex). This continuous-space abstraction of the countably infinite space of stack contexts enables the application of deep feedforward neural networks [18] with a finite state encoding in RL. It also provides an elegant connection to the smoothness of differences to exit values: if all exit costs are changed by less than ε, then the cost of each state within a box changes by less than ε, too. The following theorem connects both versions of the optimality equations. Theorem 3 (Fixed Point). If y is a fixed point of OPTrecur and x is a fixed point of OPTcont, then y(⟨∅⟩, q) = x(0, q). Moreover, any policy optimal from (0, q) is also optimal from (⟨∅⟩, q). We design a generalization of the Q-learning algorithm [40] for recursive MDPs based on the optimality equations OPTcont, shown in Algorithm 1. We implement several optimizations in our algorithm. We assume implicit transitions from the entry and exit ports of a box to the corresponding entry and exit nodes of the components. A further optimization is achieved by applying a dimension reduction on the representation of the exit-value vector v, normalizing these values in such a way that one of the exits has value 0. This normalization does not affect optimal strategies: when two stacks incur similar costs in that they have the same offset between the cost of each exit, the optimal strategy is still the same, with the difference in cost being this offset. While the convergence of Algorithm 1 is not guaranteed for general multi-exit RMDPs, the algorithm converges for the special cases of deterministic proper RMDPs and 1-exit RMDPs (Section 4). For the deterministic multi-exit case, the observation is straightforward as the properness assumption reduces the semantics to a directed acyclic graph, and the correct values are eventually propagated from the leaves backwards. Theorem 4. Tabular Recursive Q-learning converges to the optimal values for deterministic proper multi-exit RMDPs with a learning rate of 1 when all state-action pairs are visited infinitely often.
Algorithm 1: Recursive Q-learning
  Initialize Q(s, v, a) arbitrarily
  while not converged do
    v ← 0; stack ← ∅
    sample trajectory τ ∼ {(s, a, r, s′), . . .}
    for (s, a, r, s′) in τ do
      update αi according to the learning rate schedule
      if entered box then
        {s_exit1, . . . , s_exitn} ← getExits(s′)
        v′ ← [max_{a′∈A(s_exit1)} Q(s_exit1, v, a′), . . . , max_{a′∈A(s_exitn)} Q(s_exitn, v, a′)]
        v′_min ← min(v′); v′ ← v′ − v′_min
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + max_{a′∈A(s′)} Q(s′, v′, a′) + v′_min)
        stack.push(v); v ← v′
      else if exited box then
        {s_exit1, . . . , s_exitn} ← getExits(s)
        set k such that s′ = s_exitk
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + v(k))
        v ← stack.pop()
      else
        Q(s, v, a) ← (1 − αi) Q(s, v, a) + αi (r + max_{a′∈A(s′)} Q(s′, v, a′))
  return Q
4 Convergence of Recursive Q-Learning for Proper 1-exit RMDPs Recall that a proper 1-exit RMDP is a proper RMDP where, for each component Mi, the set of exits Exi is a singleton. For this special case, we show that the recursive Q-learning algorithm converges to the optimal strategy. The optimality equations OPTcont(M) (similar to [15]) can be simplified in the case of 1-exit RMDPs, whose unique fixed-point solution gives us the optimal values of the total reward objective. For every q ∈ Q:
x(q) =
  x(en) + x((b, ex))   if q = (b, en) ∈ Call, where ex is the unique exit in Ex_Y(b);
  max_{a∈A(q)} { r(q, a) + Σ_{q′∈Q} p(q′|q, a) x(q′) }   otherwise.
We now denote the system of all these equations in vector form as x̄ = F(x̄). Given a 1-exit RMDP, we can easily construct its associated equation system above in linear time. Theorem 5 (Unique Fixed Point). The vector consisting of the ETotal_M(q) values is the unique fixed point of F. Moreover, a solution of these equations provides optimal stackless strategies. Note that for the 1-exit setting, Algorithm 1 simplifies to Algorithm 2, since v is always 0 and v_min is always the maximum Q-value for the exit. The convergence of the recursive Q-learning algorithm for 1-exit RMDPs follows from Theorem 5 and stochastic approximation [40, 8]. Theorem 6. Algorithm 2 converges to the optimal values in 1-exit RMDPs when the learning rates satisfy Σ_{i=0}^{∞} αi = ∞ and Σ_{i=0}^{∞} αi² < ∞, and all state-action pairs are visited infinitely often. In order to show efficient PAC learnability for an ε-proper 1-exit RMDP M, it suffices to know an upper bound on the expected number of steps taken by M when starting at any vertex with the empty stack content, which we denote by K. Theorem 7 (Efficient PAC Learning for 1-Exit RMDPs). For every ε-proper 1-exit RMDP with diameter r_max and expected time to terminate ≤ K, ETotal_M(s) is efficiently PAC-learnable.
Algorithm 2: Recursive Q-learning (1-exit special case)
  Initialize Q(s, a) arbitrarily
  while not converged do
    sample trajectory τ ∼ {(s, a, r, s′), . . .}
    for (s, a, r, s′) in τ do
      update αi according to the learning rate schedule
      if entered box then
        s_exit ← getExit(s′)
        Q(s, a) ← (1 − αi) Q(s, a) + αi (r + max_{a′∈A(s′)} Q(s′, a′) + max_{a′∈A(s_exit)} Q(s_exit, a′))
      else if exited box then
        Q(s, a) ← (1 − αi) Q(s, a) + αi · r
      else
        Q(s, a) ← (1 − αi) Q(s, a) + αi (r + max_{a′∈A(s′)} Q(s′, a′))
  return Q
5 Experiments We implemented Algorithm 1 in tabular form as well as with a neural network. For the tabular implementation, we quantized the vector v directly after its computation to ensure that the Q-table remains small and discrete. For the neural network implementation we used the techniques used in DQN [30], replay buffers and target networks, for additional stability. The details of this implementation can be found in the appendix. We consider three examples: one to demonstrate the application of Recursive Q-learning for synthesizing probabilistic programs, one to demonstrate convergence in the single-exit setting, and one to demonstrate the use of a context-free reward machine. We compare Recursive Q-learning to Q-learning where the RMDP is treated as an MDP by the agent, i.e., the agent treats stack calls and returns as if they were normal MDP transitions.
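To connect Algorithm 2 to code, the following is a minimal tabular sketch of the 1-exit update rule. The environment interface (reset, step, actions, exit_of, and the 'call'/'return' events) is hypothetical: it stands in for the box entry/exit bookkeeping that an RMDP simulator would expose, and states are assumed hashable.

```python
from collections import defaultdict
import random

def recursive_q_learning_1exit(env, episodes=10_000, alpha=0.1, eps=0.1):
    """Tabular Recursive Q-learning for 1-exit RMDPs (a sketch of Algorithm 2)."""
    Q = defaultdict(float)  # keyed by (state, action)

    def greedy(s):
        return max(env.actions(s), key=lambda a: Q[s, a])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.choice(env.actions(s)) if random.random() < eps else greedy(s)
            r, s2, event, done = env.step(a)
            if event == 'call':
                # Entering a box: bootstrap with the value of the callee's entry
                # plus the value of its unique exit node (Algorithm 2, entered-box case).
                s_exit = env.exit_of(s2)
                target = r + Q[s2, greedy(s2)] + Q[s_exit, greedy(s_exit)]
            elif event == 'return':
                # Leaving a component: the continuation value was already folded
                # in at call time, so only the one-step reward remains.
                target = r
            else:
                target = r + (0.0 if done else Q[s2, greedy(s2)])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```

The key design point, mirroring Algorithm 2, is that the value of a component's unique exit is added to the target at call time, so the learner never needs to represent the unbounded stack explicitly.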
5.1 Cloud computing The cloud computing example, introduced in Example 1, is a recursive probabilistic program with decision points for an RL agent to optimize over. The optimal strategy is to select the reliable server and to never upgrade. This strategy produces an expected total reward of −5.3425. Figure 3 shows that tabular Recursive Q-learning with discretization quickly converges to the optimal solution on this multi-exit RMDP while Q-learning oscillates around a suboptimal policy. 5.2 Infinite spelunking Consider a single-exit RMDP gridworld with two box types, 1 and 2, shown at the bottom of the figure to the right. These box types are the two types of levels in an infinitely deep cave. When falling or descending to another level, the level type switches. Passing over a trap, shown in red, results in the agent teleporting to a random position and falling with probability 0.5. The agent has fallen into the cave at the position denoted by I without climbing equipment. However, there is climbing equipment in one of the types of levels at a known location denoted by E. The agent has four move directions—north, east, south, west—as well as an additional action to descend further or ascend. Until the climbing equipment is obtained, the agent can only descend. Once the climbing equipment is obtained, the traps no longer affect the agent and the agent can ascend only from the position where it fell down. With probability 0.01 the agent ascends from the current level with the climbing gear. This has the effect of box-wise discounting with discount factor 0.99. The agent’s objective is to leave the cave from where it fell in as soon as possible. The reward is −1 on every step. There are two main strategies to consider. The first strategy tries to obtain the climbing gear by going over the traps. This strategy leads to an unbounded number of possible levels since the traps may repeatedly trigger. The second strategy avoids the traps entirely. The figure to the right shows partial descending trajectories from these strategies, with the actions shown in green, the trap teleportations shown in blue, and the locations the agent fell from shown as small black squares. Which strategy is better depends on the probability of the traps triggering. With a trap probability of 0.5, the optimal strategy is to try and reach the climbing equipment by going over the traps. Figure 1 shows the convergence of tabular Recursive Q-learning for 1-exit RMDPs to this optimal strategy while the strategy learned by Q-learning does not improve. 5.3 Palindrome gridworld To demonstrate the ability to incorporate context-free objectives, consider a 3 × 3 gridworld with a goal cell in the center and a randomly selected initial state. The agent has four move actions—north, east, south, west—and a special control action. The objective of the agent is to reach the goal cell while forming an action sequence that is a palindrome of even length. What makes this possible is that when the agent performs an action that pushes against a wall of the gridworld, no movement occurs. To monitor the progress of the property, we compose this MDP with a nondeterministic pushdown automaton. The agent must use its special action to determine when to resolve the nondeterminism in the pushdown automaton. Additionally, the agent uses its special action to declare the end of its action sequence.
To ensure properness, the agent’s selected action is corrupted into the special action with probability 0.01. The agent is given a reward of 50 upon success, −5 when the agent selects an action that causes the pushdown automaton to enter a rejecting sink, and −1 on all other timesteps. Figure 3 shows the convergence of Deep Recursive Q-learning to an optimal strategy on this example, while DQN fails to find a good strategy. 6 Related Work Hierarchical RL is an approach for learning policies on MDPs that introduces a hierarchy in policy space. There are three prevalent approaches to specify this hierarchy. The options framework [37] represents these hierarchies as policies each with a starting and termination condition. The hierarchy of abstract machines (HAM) framework [31] represents the policy as a hierarchical composition of nondeterministic finite-state machines. Finally, the MAXQ framework [13] represents the hierarchy using a programmatic representation with finite range variables and strict hierarchy among modules. Semi-Markov decision processes (SMDPs) and hierarchical MDPs are fundamental models that appear in the context of hierarchical RL. SMDPs generalize MDPs with timed actions. The RL algorithms for SMDPs are based on a natural generalization of the Bellman equations to accommodate timed actions. Hierarchical MDPs model bounded recursion and can be solved by flattening to an MDP, or by producing policies that are only optimal locally. Recursive MDPs model unbounded recursion in the environment space. The orthogonality of recursion in environment space and in policy space means they are complementary—one can consider applying ideas in hierarchical RL to find a policy in an RMDP. The authors of [3] proposed using partially specified programs with recursion to constrain the policy space, but only considered bounded recursion. Hierarchical MDPs [37, 31, 13] and factored MDPs [12, 20] indeed offer compact representations of finite MDPs. These representations can be exponentially more succinct. However, note that finite instances of these formalisms are not any more expressive than finite MDPs, as instances of these formalisms can always be rewritten as a finite MDP. On the other hand, recursive MDPs studied in this paper are strictly more expressive than finite MDPs. Even 1-exit RMDPs may not be expressible as finite MDPs, due to a potentially unbounded stack, but remarkably they can be solved exactly with a finite tabular model-free reinforcement learning algorithm (Theorem 6) without needing to resort to ad-hoc approximations of the unbounded stack configurations. The use of context-free grammars in RL for the optimization of molecules has been considered before by introducing a bound on the recursion depth to induce a finite MDP [25, 42]. Combining context-free grammars and reward machines was proposed as a future research direction in [24]; context-free reward machines are first described in this paper. Recursive MDPs have been studied outside the RL setting [16], including results for 1-exit RMDP termination [16] and total reward optimization under positive reward restriction [15]. A convergent model-free RL algorithm for 1-exit RMDPs with positive cost restriction was proposed in [21]. The results on undecidability, on PAC-learnability, the introduction of the algorithm Recursive Q-learning, convergence results for Recursive Q-learning, and its deep learning extension are novel contributions of this paper.
7 Conclusion Reinforcement learning so far has primarily considered Markov decision processes (MDPs). Although extremely expressive, this formalism may require “flattening” a more expressive representation that contains recursion. In this paper we examine the use of recursive MDPs (RMDPs) in reinforcement learning—a setting we call recursive reinforcement learning. A recursive MDP is a collection of MDP components whose components can recursively invoke one another. This allows the introduction of an unbounded stack. We propose abstracting this discrete stack with a continuous abstraction in the form of the costs of the exits of a component. Using this abstraction, we introduce Recursive Q-learning—a model-free reinforcement learning algorithm for RMDPs. We prove that tabular Recursive Q-learning converges to an optimal solution on finite 1-exit RMDPs, even though the underlying MDP has an infinite state space. We demonstrate the potential of our approach on a set of examples that includes probabilistic program synthesis, a single-exit RMDP, and an MDP composed with a context-free property. Acknowledgments. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 864075 (CAESAR) and 956123 (FOCETA). This work is supported in part by the National Science Foundation (NSF) grant CCF-2009022 and by NSF CAREER award CCF-2146563. This work utilized the Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
1. What is the main contribution of the paper regarding Q-learning and MDPs? 2. What are the strengths and weaknesses of the proposed method in exploiting recursive MDPs? 3. Do you have any questions regarding the formal definition of recursive MDPs or the proposed method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any potential applications or tradeoffs involved in using the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The main idea of this paper is to formulate Q-learning on a type of MDP with a recursive structure, akin to the "call stack" of operating systems. The two arguments are that (1) many realistic MDPs have this sort of structure, and that (2) an RL algorithm designed to exploit this structure will perform better than algorithms without such knowledge. The paper includes some theoretical results about when such recursive MDPs are learnable (similar to standard computability results) and then proposes a variant of Q-learning for solving these recursive MDPs. The variant of Q-learning looks like standard Q-learning, with extra bookkeeping to keep track of the stack trace. On three didactic problems, the proposed method outperforms RL methods that do not exploit the recursive problem nature. Strengths And Weaknesses Strengths The paper is generally well written, with compelling motivation for why some problems should be treated as recursive MDPs. The potential applications discussed on page 3 are great for explaining the motivation for the recursive MDP. The experiments in Fig. 3 nicely illustrate three applications. The paper is novel to the best of my knowledge. Formalizing the connections between MDPs and computability seems potentially impactful. Weaknesses The tradeoffs involved in the paper are a bit unclear. See questions below. The proposed method was a bit hard to follow. Are there some special cases where the proposed method is equivalent to Q-learning (e.g., if the observation is augmented with the stack trace)? The formal definition of recursive MDPs was complicated and hard to follow (esp. L124 -- L135) Minor comments L1 -- I didn't understand this sentence L3 "they must rely on the practitioner's ingenuity" -- Nicely put! L23 "regularity assumptions" -- Cite. L32 "to generalize its learning upon" -- I didn't understand this sentence L41 "Unlike ... enables generalizability" -- Where is this demonstrated/proven? L83 "single-exit" -- Be consistent with "1-exit" and "single-exit" Fig 1, Fig 2 -- I found these figures pretty hard to parse. L94 -- It wasn't clear to me why the charging example requires recursion. Shouldn't it be possible to formulate this as a simple, flat MDP? L127 -- L129 -- I found this discussion confusing. L318 -- If this task requires memory, then using a Markovian Q-learning policy as a baseline seems a bit unfair. Questions Some questions that would be good to address: Is it easier or harder for users to define a recursive MDP than a flat MDP? What, precisely, is the relationship with hierarchical RL? Is the recursive MDP a special case of hierarchical RL where the structure is manually provided? If the recursion depth is limited (say, to 3 levels), how beneficial is the proposed framework over standard Q-learning? Note that today's computers seem to work just fine with limited recursion depth. Limitations I did not see any explicit sections on limitations.
NIPS
Title Probing the Compositionality of Intuitive Functions Abstract How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition. 1 Introduction Function learning underlies many intuitive judgments, such as the perception of time, space and number. All of these tasks require the construction of mental representations that map inputs to outputs. Since the space of such mappings is infinite, inductive biases are necessary to constrain the plausible inferences. What is the nature of human inductive biases over functions? It has been suggested that Gaussian processes (GPs) provide a good characterization of these inductive biases [15]. As we describe more formally below, GPs are distributions over functions that can encode properties such as smoothness, linearity, periodicity, and other inductive biases indicated by research on human function learning [5, 3]. Lucas et al. [15] showed how Bayesian inference with GP priors can unify previous rule-based and exemplar-based theories of function learning [18]. A major unresolved question is how people deal with complex functions that are not easily captured by any simple GP. Insight into this question is provided by the observation that many complex functions encountered in the real world can be broken down into compositions of simpler functions [6, 11]. We pursue this idea theoretically and experimentally, by first defining a hypothetical compositional grammar for intuitive functions (based on [6]) and then investigating whether this grammar quantitatively predicts human function learning performance. We compare the compositional model to a flexible non-compositional model (the spectral mixture representation proposed by [21]). Both models use Bayesian inference to reason about functions, but differ in their inductive biases. We show that (a) participants prefer compositional pattern extrapolations in both forced choice and manual drawing tasks; (b) samples elicited from participants’ priors over functions are more consistent with the compositional grammar; and (c) participants perceive compositional functions as more predictable than non-compositional ones. Taken together, these findings provide support for the compositional nature of intuitive functions. 2 Gaussian process regression as a theory of intuitive function learning A GP is a collection of random variables, any finite subset of which is jointly Gaussian-distributed (see [18] for an introduction). A GP can be expressed as a distribution over functions: f ∼ GP(m, k), where m(x) = E[f(x)] is a mean function modeling the expected output of the function given input x, and k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))] is a kernel function modeling the covariance between points.
Intuitively, the kernel encodes an inductive bias about the expected smoothness of functions drawn from the GP. To simplify exposition, we follow standard convention in assuming a constant mean of 0. Conditional on data D = {X, y}, where y_n ∼ N(f(x_n), σ²), the posterior predictive distribution for a new input x∗ is Gaussian with mean and variance given by:
E[f(x∗) | D] = k∗ᵀ (K + σ²I)⁻¹ y,   (1)
V[f(x∗) | D] = k(x∗, x∗) − k∗ᵀ (K + σ²I)⁻¹ k∗,   (2)
where K is the N × N matrix of covariances evaluated at each input in X and k∗ = [k(x₁, x∗), . . . , k(x_N, x∗)]. As pointed out by Griffiths et al. [10] (see also [15]), the predictive distribution can be viewed as an exemplar (similarity-based) model of function learning [5, 16], since it can be written as a linear combination of the covariance between past and current inputs:
f(x∗) = Σ_{n=1}^{N} α_n k(x_n, x∗)   (3)
with α = (K + σ²I)⁻¹ y. Equivalently, by Mercer’s theorem any positive definite kernel can be expressed as an outer product of feature vectors:
k(x, x′) = Σ_{d=1}^{∞} λ_d φ_d(x) φ_d(x′),   (4)
where {φ_d(x)} are the eigenfunctions of the kernel and {λ_d} are the eigenvalues. The posterior predictive mean is a linear combination of the features, which from a psychological perspective can be thought of as encoding “rules” mapping inputs to outputs [4, 14]. Thus, a GP can be expressed as both an exemplar (similarity-based) model and a feature (rule-based) model, unifying the two dominant classes of function learning theories in cognitive science [15]. 3 Structure learning with Gaussian processes So far we have assumed a fixed kernel function. However, humans can adapt to a wide variety of structural forms [13, 8], suggesting that they have the flexibility to learn the kernel function from experience. The key question addressed in this paper is what space of kernels humans are optimizing over—how rich is their representational vocabulary? This vocabulary will in turn act as an inductive bias, making some functions easier to learn, and other functions harder to learn. Broadly speaking, there are two approaches to parameterizing the kernel space: a fixed functional form with continuous parameters, or a combinatorial space of functional forms. These approaches are not mutually exclusive; indeed, the success of the combinatorial approach depends on optimizing the continuous parameters for each form. Nonetheless, this distinction is useful because it allows us to separate different forms of functional complexity. A function might have internal structure such that when this structure is revealed, the apparent functional complexity is significantly reduced. For example, a function composed of many piecewise linear segments might have a long description length under a typical continuous parametrization (e.g., the radial basis kernel described below), because it violates the smoothness assumptions of the prior. However, conditional on the changepoints between segments, the function can be decomposed into independent parts each of which is well-described by a simple continuous parametrization. If internally structured functions are “natural kinds,” then the combinatorial approach may be a good model of human intuitive functions. In the rest of this section, we describe three kernel parameterizations. The first two are continuous, differing in their expressiveness. The third one is combinatorial, allowing it to capture complex patterns by composing simpler kernels. For all kernels, we take the standard approach of choosing the parameter values that optimize the log marginal likelihood.
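Before turning to specific kernels, equations (1)–(3) are compact enough to state in a few lines of numpy. This is a generic sketch of GP posterior prediction, not code from the paper; the kernel choice and hyperparameters are placeholders.

```python
import numpy as np

def rbf(x, y, theta=1.0, ell=1.0):
    """Radial basis kernel (equation (5) below): theta^2 * exp(-|x - x'|^2 / (2 l^2))."""
    return theta**2 * np.exp(-np.subtract.outer(x, y)**2 / (2.0 * ell**2))

def gp_posterior(x_train, y_train, x_star, kernel, noise=0.1):
    """Posterior predictive mean and variance, equations (1) and (2)."""
    K = kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    k_star = kernel(x_train, x_star)            # N x M cross-covariances
    alpha = np.linalg.solve(K, k_star)          # (K + sigma^2 I)^{-1} k_*
    mean = alpha.T @ y_train                                            # eq. (1)
    var = np.diag(kernel(x_star, x_star)) - np.sum(k_star * alpha, axis=0)  # eq. (2)
    return mean, var

# Toy usage mirroring the extrapolation setup: train on [0, 7], predict on (7, 10].
x = np.linspace(0.0, 7.0, 50)
y = np.sin(x) + 0.1 * np.random.randn(x.size)
mu, v = gp_posterior(x, y, np.linspace(7.1, 10.0, 30), rbf)
```

The posterior mean is exactly the similarity-weighted sum of equation (3), which is why the same computation supports both the exemplar and the rule reading of GP regression.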
3.1 Radial basis kernel The radial basis kernel is a commonly used kernel in machine learning applications, embodying the assumption that the covariance between function values decays exponentially with input distance:
k(x, x′) = θ² exp(−|x − x′|² / (2l²)),   (5)
where θ is a scaling parameter and l is a length-scale parameter. This kernel assumes that the same smoothness properties apply globally for all inputs. It provides a standard baseline to compare with more expressive kernels. 3.2 Spectral mixture kernel The second approach is based on the fact that any stationary kernel can be expressed as an integral using Bochner’s theorem. Letting τ = |x − x′| ∈ R^P, then
k(τ) = ∫_{R^P} e^{2πi sᵀτ} ψ(ds).   (6)
If ψ has a density S(s), then S is the spectral density of k; S and k are thus Fourier duals [18]. This means that a spectral density fully defines the kernel and that furthermore every stationary kernel can be expressed as a spectral density. Wilson & Adams [21] showed that the spectral density can be approximated by a mixture of Q Gaussians, such that
k(τ) = Σ_{q=1}^{Q} w_q Π_{p=1}^{P} exp(−2π² τ_p² υ_q^{(p)}) cos(2π τ_p µ_q^{(p)})   (7)
Here, the qth component has mean vector µ_q = (µ_q^{(1)}, . . . , µ_q^{(P)}) and a covariance matrix M_q = diag(υ_q^{(1)}, . . . , υ_q^{(P)}). The result is a non-parametric approach to Gaussian process regression, in which complex kernels are approximated by mixtures of simpler ones. This approach is appealing when simpler kernels fail to capture functional structure. Its main drawback is that because structure is captured implicitly via the spectral density, the building blocks are psychologically less intuitive: humans appear to have preferences for linear [12] and periodic [1] functions, which are not straightforwardly encoded in the spectral mixture (though of course the mixture can approximate these functions). Since the spectral kernel has been successfully applied to reverse engineer human kernels [22], it is a useful reference of comparison to more structured compositional approaches. 3.3 Compositional kernel As positive semidefinite kernels are closed under addition and multiplication, we can create richly structured and interpretable kernels from well-understood base components. For example, by summing kernels, we can model the data as a superposition of independent functions. Figure 1 shows an example of how different kernels (radial basis, linear, periodic) can be combined. Table 1 summarizes the kernels used in our grammar. Many other compositional grammars are possible. For example, we could have included a more diverse set of kernels, and other composition operators (e.g., convolution, scaling) that generate valid kernels. However, we believe that our simple grammar is a useful starting point, since the components are intuitive and likely to be psychologically plausible. For tractability, we fix the maximum number of combined kernels to 3. Additionally, we do not allow for repetition of kernels in order to restrict the complexity of the kernel space (a short code sketch of such compositions follows below). 4 Experiment 1: Extrapolation The first experiment assessed whether people prefer compositional over non-compositional extrapolations. In experiment 1a, functions were sampled from a compositional GP and different extrapolations (mean predictions) were produced using each of the aforementioned kernels. Participants were then asked to choose among the 3 different extrapolations for a given function (see Figure 2).
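As a companion to Section 3.3, the sketch below (referenced there) shows how base kernels compose under addition and multiplication. The particular parameterizations, such as the standard exponentiated-sine form of the periodic kernel, are common choices and not necessarily those of the paper's grammar.

```python
import numpy as np

def tau(x, y):
    """Pairwise input differences for one-dimensional inputs."""
    return np.subtract.outer(x, y)

def rbf(x, y, ell=1.0):
    return np.exp(-tau(x, y)**2 / (2.0 * ell**2))

def linear(x, y, c=0.0):
    return np.multiply.outer(x - c, y - c)

def periodic(x, y, p=1.0, ell=1.0):
    # One standard parameterization of a periodic kernel.
    return np.exp(-2.0 * np.sin(np.pi * np.abs(tau(x, y)) / p)**2 / ell**2)

# Sums and products of positive semidefinite kernels are again valid kernels,
# so compositions such as "linear trend plus locally smoothed periodicity"
# can be written directly:
def composed(x, y):
    return linear(x, y) + periodic(x, y, p=2.0) * rbf(x, y, ell=5.0)
```

Multiplying the periodic kernel by a radial basis kernel, as in the last line, is the same device used in Experiment 2b below to model quasi-periodic real-world data.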
In detail, the outputs for x_learn = [0, 0.1, . . . , 7] were used as a training set to which all three kernels were fitted and then used to generate predictions for the test set x_test = [7.1, 7.2, . . . , 10]. Their mean predictions were then used to generate one plot for every approach that showed the learned input as a blue line and the extrapolation as a red line. The procedure was repeated for 20 different compositional functions. 52 participants (mean age = 36.15, SD = 9.11) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to select one of 3 extrapolations (displayed as red lines) they thought best completed a given blue line. Results showed that participants chose compositional predictions 69%, spectral mixture predictions 17%, and radial basis predictions 14% of the time. Overall, the compositional predictions were chosen significantly more often than the other two (χ² = 591.2, p < 0.01) as shown in Figure 3a. In experiment 1b, again 20 functions were sampled but this time from a spectral mixture kernel, and 65 participants (mean age = 30, SD = 9.84) were asked to choose among either compositional or spectral mixture extrapolations and received $0.5 as before. Results (displayed in Figure 3b) showed that participants again chose compositional extrapolations more frequently (68% vs. 32%, χ² = 172.8, p < 0.01), even if the ground truth happened to be generated by a spectral mixture kernel. Thus, people seem to prefer compositional over non-compositional extrapolations in forced-choice extrapolation tasks. 5 Markov chain Monte Carlo with people In a second set of experiments, we assessed participants’ inductive biases directly using a Markov chain Monte Carlo with People (MCMCP) approach [19]. Participants accept or reject proposed extrapolations, effectively simulating a Markov chain whose stationary distribution is in this case the posterior predictive. Extrapolations from all possible kernel combinations (up to 3 combined kernels) were generated and stored a priori. These were then used to generate plots of different proposal extrapolations (as in the previous experiment). On each trial, participants chose between their most recently accepted extrapolation and a new proposal. 5.1 Experiment 2a: Compositional ground truth In the first MCMCP experiment, we sampled functions from compositional kernels. Eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon’s Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4. In all cases participants’ subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP (ρ = 0.91, p < .01). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels. 5.2 Experiment 2b: Naturalistic functions The second MCMCP experiment assessed what structures people converged to when faced with real-world data.
51 participants with an average age of 32.55 (SD = 12.14) were recruited via Amazon Mechanical Turk and received $1 for their participation. The functions were an airline passenger data set, volcano CO2 emission data, the number of gym memberships over 5 years, and the number of times people googled the band “Wham!” over the last 8 years; all shown in Figure 5a. Participants were not told any information about the data set (including input and output descriptions) beyond the input-output pairs. As periodicity in the real world is rarely ever purely periodic, we adapted the periodic component of the grammar by multiplying a periodic kernel with a radial basis kernel, thereby locally smoothing the periodic part of the function (see http://learning.eng.cam.ac.uk/carl/mauna for an example). Apart from the different training sets, the procedure was identical to the last experiment. Results are shown in Figure 5b, demonstrating that participants converged to intuitively plausible patterns. In particular, for both the volcano and the airline passenger data, participants converged to compositions resembling those found in previous analyses [6]. The correlation between the mean proportion of accepted predictions and the inverse standardized marginal likelihoods of the different kernels was again significantly positive (ρ = 0.83, p < .01). 6 Experiment 3: Manual function completion In the next experiment, we let participants draw the functions underlying observed data manually. As all of the prior experiments asked participants to judge between “pre-generated” predictions of functions, we wanted to compare this to how participants generate predictions themselves. On each round of the experiment, functions were sampled from the compositional grammar, the number of points to be presented on each trial was sampled uniformly between 100 and 200, and the noise variance was sampled uniformly between 0 and 25. Finally, the size of an unobserved region of the function was sampled to lie between 5 and 50. Participants were asked to manually draw the function best describing observed data and to inter- and extrapolate this function in two unobserved regions. A screenshot of the experiment is shown in Figure 6. 36 participants with a mean age of 30.5 (SD = 7.15) were recruited from Amazon Mechanical Turk and received $2 for their participation. Participants were asked to draw lines in a cloud of dots that they thought best described the given data. To facilitate this process, participants placed black dots into the cloud, which were then automatically connected by a black line based on a cubic Bézier smoothing curve. They were asked to place the first dot on the left boundary and the final dot on the right boundary of the graph. In between, participants were allowed to place as many dots as they liked (from left to right) and could remove previously placed dots. There were 50 trials in total. We assessed the average root mean squared distance between participants’ predictions (the line they drew) and the mean predictions of each kernel given the data participants had seen, for both interpolation and extrapolation areas. Results are shown in Figure 7. The mean distance from participants’ drawings was significantly higher for the spectral mixture kernel than for the compositional kernel in both interpolation (86.96 vs. 58.33, t(1291.1) = −6.3, p < .001) and extrapolation areas (110.45 vs. 83.91, t(1475.7) = 6.39, p < 0.001).
The radial basis kernel produced distances similar to the compositional kernel in interpolation (55.8), but predicted participants' drawings significantly worse in extrapolation areas (97.9, t(1459.9) = 3.26, p < 0.01).

7 Experiment 4: Assessing predictability

Compositional patterns might also affect the way in which participants perceive functions a priori [20]. To assess this, we asked participants to judge how well they thought they could predict 40 different functions that were similar on many measures, such as their spectral entropy and their average wavelet distance to each other, but 20 of which were sampled from a compositional kernel and 20 from a spectral mixture kernel. Figure 8 shows a screenshot of the experiment. 50 participants with a mean age of 32 (SD = 7.82) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to rate the predictability of different functions. On each trial, participants were shown a total of nj ∈ {50, 60, . . . , 100} randomly sampled input-output points of a given function and asked to judge how well they thought they could predict the output for a randomly sampled input point on a scale from 0 (not at all) to 100 (very well). Afterwards, they rated which of two functions was easier to predict (Figure 8) on a scale from -100 (left graph is definitely easier to predict) to 100 (right graph is definitely easier to predict). As shown in Figure 9, compositional functions were perceived as more predictable than spectral functions both in isolation (t(948) = 11.422, p < 0.01) and in paired comparisons (t(499) = 13.502, p < 0.01). Perceived predictability increased with the number of observed outputs (r = 0.23, p < 0.01), and the larger the number of observations, the larger the difference between compositional and spectral mixture functions (r = 0.14, p < 0.01).

8 Discussion

In this paper, we probed human intuitions about functions and found that these intuitions are best described as compositional. We operationalized compositionality using a grammar over kernels within a GP regression framework and found that people prefer extrapolations based on compositional kernels over alternatives such as a spectral mixture or the standard radial basis kernel. Two Markov chain Monte Carlo with People experiments revealed that participants converge to extrapolations consistent with the compositional kernels. These findings were replicated when people manually drew the functions underlying observed data. Moreover, participants perceived compositional functions as more predictable than non-compositional (but otherwise similar) ones. The work presented here is connected to several lines of previous research, most importantly that of Lucas et al. [15], which introduced GP regression as a model of human function learning, and Wilson et al. [22], which attempted to reverse-engineer the human kernel using a spectral mixture. We see our work as complementary: we need both a theory to describe how people make sense of structure and a method to indicate what the final structure might look like when represented as a kernel. Our approach also ties together neatly with past attempts to model structure in other cognitive domains such as motion perception [9] and decision making [7]. Our work can be extended in a number of ways. First, it is desirable to explore the space of base kernels and composition operators more thoroughly, since the grammar used in our analyses is elementary and probably too simple.
Second, the compositional approach could be used in traditional function learning paradigms (e.g., [5, 14]) as well as in active input selection paradigms [17]. Another interesting avenue for future research would be to explore the broader implications of compositional function representations. For example, evidence suggests that statistical regularities reduce perceived numerosity [23] and increase memory capacity [2]; these tasks can therefore provide clues about the underlying representations. If compositional functions alter number perception or memory performance to a greater extent than alternative functions, that suggests that our theory extends beyond simple function learning.
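For concreteness, the MCMCP procedure from Section 5 can be summarized in a short accept/reject loop. The following is a minimal sketch, not the authors' implementation; show_choice is a hypothetical stand-in for the experimental interface that displays the current and proposed extrapolation plots and returns the participant's choice (0 for current, 1 for proposal).

import random

def run_mcmcp_block(extrapolations, show_choice, n_trials=30):
    """Simulate one MCMCP block; `extrapolations` maps kernel labels to stored plots."""
    labels = list(extrapolations)
    current = random.choice(labels)        # initialize the chain at a random kernel
    visited = []
    for _ in range(n_trials):
        proposal = random.choice(labels)   # uniform proposal over pre-generated stimuli
        # The participant's binary choice is the accept/reject step; in the
        # limit, the chain's stationary distribution is the posterior predictive.
        if show_choice(extrapolations[current], extrapolations[proposal]) == 1:
            current = proposal             # proposal accepted: the chain moves
        visited.append(current)
    return visited[-5:]                    # the paper analyzes the last 5 trials per block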
1. What is the main contribution of the paper in terms of decision-making processes?
2. What are the strengths of the proposed framework in combining Bayesian regression and Gaussian process kernels?
3. What are the weaknesses of the paper regarding its experimental design and presentation?
4. How does the reviewer assess the clarity and quality of the paper's content?
Review
The authors formalize the idea of human decisions over functions (compositional vs. non-compositional) in a framework that combines Bayesian regression and Gaussian process kernels. They provide an extensive set of experiments that compare subjects' decisions in different tasks against a kernel-based grammar set. The presentation is very well written. However, one part is missing regarding the experiments: which control mechanism was used? Two minor points: the x-axis is difficult to read in Figure 3, and the text in Figure 7 is quite small to read.
1. What is the main contribution of the paper regarding functional relationships between continuous variables?
2. What are the strengths of the paper, particularly in its central idea and evaluation through behavioral experiments?
3. What are the weaknesses of the paper, especially regarding the tasks used in the experiments?
4. How do the results of the paper compare to traditional function learning tasks?
5. What is the role of periodicity in the paper's findings, and how might this impact the generalizability of the results?
6. Are there any questions or concerns regarding the models used in the paper, such as the grammar over kernels and the probability associated with it?
Review
This paper provides a detailed analysis of people's inferences about the functional relationship between continuous variables. The authors argue for a "compositional" representation of functions, where periodicity and linearity can combine with local similarity to produce relationships, as in some recent machine learning models. Four experiments provide support for this approach. This is a very nice paper, with a clear central idea and extensive evaluation through behavioral experiments. However, there are a few weaknesses:
1. Most of the human function learning literature has used tasks in which people never visualize data or functions. This is also the case in naturalistic settings where function learning takes place, where we have to form a continuous mapping between variables from experience. All of the tasks used in this paper involved presenting people with data in the form of a scatterplot or functional relationship and asking them to evaluate lines applied to those axes. This task is more akin to data analysis than the traditional function learning task, and much less naturalistic. This distinction matters because performance in the two tasks is likely to be quite different. In the standard function learning task, it is quite hard to get people to learn periodic functions without other cues to periodicity. Many of the effects in this paper seem to be driven by periodic functions, suggesting that they may not hold if traditional tasks were used. I don't think this is a major problem if it is clearly acknowledged and it is made clear that the goal is to evaluate whether data-analysis systems using compositional functions match human intuitions about data analysis. But it is important if the paper is intended to be primarily about function learning in relation to the psychological literature, which has focused on a very different task.
2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a level similar to the explicitly compositional model?
3. Some of the details of the models are missing. In particular, the grammar over kernels is not explained in any detail, making it hard to understand how this approach is applied in practice. Presumably there are also probabilities associated with the grammar that define a hypothesis space of kernels? How is inference performed?
1. What is the focus of the paper regarding human inductive bias in function extrapolation and interpolation?
2. What are the strengths of the proposed approach, particularly in comparing different types of kernels?
3. What are the weaknesses of the paper, especially regarding the choice of real-world data and kernel compositions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
The authors explore human inductive bias in function extrapolation and interpolation and compare it to the inductive biases incurred by different types of kernels. Compositions of kernels, i.e., sums and products of linear, Gaussian, and periodic kernels, seem to capture human extra-/interpolations more immediately and naturally than spectral kernels. A number of psychophysics experiments on humans were conducted using Amazon Mechanical Turk, and the results are presented in the paper. The compositional kernel seems to be more closely adapted to the human Occam's razor than the other proposed variants. This seems to be the case whether or not the functions to be inter-/extrapolated are generated from a compositional kernel. That is interesting. The question is whether the comparison to the reference kernels is fair. An extra-/interpolation drawn using a spectral kernel, which is much more global, will probably look "wrong" most of the time. Please show some examples of extrapolations generated on the real-world data. Comparing to a harder null might thus have been appropriate. The Human Kernel paper has some functions that may not be that easy to extrapolate using the compositional kernel either (although obviously better than the alternative proposed). All in all, the work done seems solid and the results are there, but one may still worry about hidden biases in the choice of real-world data and the types of kernel compositions (why 3? why Gaussian + linear + periodic?).
NIPS
Title Probing the Compositionality of Intuitive Functions Abstract How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition. 1 Introduction Function learning underlies many intuitive judgments, such as the perception of time, space and number. All of these tasks require the construction of mental representations that map inputs to outputs. Since the space of such mappings is infinite, inductive biases are necessary to constrain the plausible inferences. What is the nature of human inductive biases over functions? It has been suggested that Gaussian processes (GPs) provide a good characterization of these inductive biases [15]. As we describe more formally below, GPs are distributions over functions that can encode properties such as smoothness, linearity, periodicity, and other inductive biases indicated by research on human function learning [5, 3]. Lucas et al. [15] showed how Bayesian inference with GP priors can unify previous rule-based and exemplar-based theories of function learning [18]. A major unresolved question is how people deal with complex functions that are not easily captured by any simple GP. Insight into this question is provided by the observation that many complex functions encountered in the real world can be broken down into compositions of simpler functions [6, 11]. We pursue this idea theoretically and experimentally, by first defining a hypothetical compositional grammar for intuitive functions (based on [6]) and then investigating whether this grammar quantitatively predicts human function learning performance. We compare the compositional model to a flexible non-compositional model (the spectral mixture representation proposed by [21]). Both models use Bayesian inference to reason about functions, but differ in their inductive biases. We show that (a) participants prefer compositional pattern extrapolations in both forced choice and manual drawing tasks; (b) samples elicited from participants’ priors over functions are more consistent with the compositional grammar; and (c) participants perceive compositional functions as more predictable than non-compositional ones. Taken together, these findings provide support for the compositional nature of intuitive functions. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2 Gaussian process regression as a theory of intuitive function learning A GP is a collection of random variables, any finite subset of which are jointly Gaussian-distributed (see [18] for an introduction). A GP can be expressed as a distribution over functions: f ∼ GP(m, k), where m(x) = E[f(x)] is a mean function modeling the expected output of the function given input x, and k(x,x′) = E [(f(x)−m(x))(f(x′)−m(x′))] is a kernel function modeling the covariance between points. 
Intuitively, the kernel encodes an inductive bias about the expected smoothness of functions drawn from the GP. To simplify exposition, we follow standard convention in assuming a constant mean of 0. Conditional on data $\mathcal{D} = \{X, \mathbf{y}\}$, where $y_n \sim \mathcal{N}(f(\mathbf{x}_n), \sigma^2)$, the posterior predictive distribution for a new input $\mathbf{x}_*$ is Gaussian with mean and variance given by:

$\mathbb{E}[f(\mathbf{x}_*) \mid \mathcal{D}] = \mathbf{k}_*^\top (K + \sigma^2 I)^{-1} \mathbf{y}$,   (1)
$\mathbb{V}[f(\mathbf{x}_*) \mid \mathcal{D}] = k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}_*^\top (K + \sigma^2 I)^{-1} \mathbf{k}_*$,   (2)

where $K$ is the $N \times N$ matrix of covariances evaluated at each input in $X$ and $\mathbf{k}_* = [k(\mathbf{x}_1, \mathbf{x}_*), \ldots, k(\mathbf{x}_N, \mathbf{x}_*)]$. As pointed out by Griffiths et al. [10] (see also [15]), the predictive distribution can be viewed as an exemplar (similarity-based) model of function learning [5, 16], since it can be written as a linear combination of the covariance between past and current inputs:

$f(\mathbf{x}_*) = \sum_{n=1}^{N} \alpha_n k(\mathbf{x}_n, \mathbf{x}_*)$,   (3)

with $\boldsymbol{\alpha} = (K + \sigma^2 I)^{-1} \mathbf{y}$. Equivalently, by Mercer's theorem any positive definite kernel can be expressed as an outer product of feature vectors:

$k(\mathbf{x}, \mathbf{x}') = \sum_{d=1}^{\infty} \lambda_d \phi_d(\mathbf{x}) \phi_d(\mathbf{x}')$,   (4)

where $\{\phi_d(\mathbf{x})\}$ are the eigenfunctions of the kernel and $\{\lambda_d\}$ are the eigenvalues. The posterior predictive mean is a linear combination of the features, which from a psychological perspective can be thought of as encoding "rules" mapping inputs to outputs [4, 14]. Thus, a GP can be expressed as both an exemplar (similarity-based) model and a feature (rule-based) model, unifying the two dominant classes of function learning theories in cognitive science [15].

3 Structure learning with Gaussian processes

So far we have assumed a fixed kernel function. However, humans can adapt to a wide variety of structural forms [13, 8], suggesting that they have the flexibility to learn the kernel function from experience. The key question addressed in this paper is what space of kernels humans are optimizing over: how rich is their representational vocabulary? This vocabulary will in turn act as an inductive bias, making some functions easier to learn, and other functions harder to learn. Broadly speaking, there are two approaches to parameterizing the kernel space: a fixed functional form with continuous parameters, or a combinatorial space of functional forms. These approaches are not mutually exclusive; indeed, the success of the combinatorial approach depends on optimizing the continuous parameters for each form. Nonetheless, this distinction is useful because it allows us to separate different forms of functional complexity. A function might have internal structure such that when this structure is revealed, the apparent functional complexity is significantly reduced. For example, a function composed of many piecewise linear segments might have a long description length under a typical continuous parametrization (e.g., the radial basis kernel described below), because it violates the smoothness assumptions of the prior. However, conditional on the changepoints between segments, the function can be decomposed into independent parts each of which is well-described by a simple continuous parametrization. If internally structured functions are "natural kinds," then the combinatorial approach may be a good model of human intuitive functions. In the rest of this section, we describe three kernel parameterizations. The first two are continuous, differing in their expressiveness. The third one is combinatorial, allowing it to capture complex patterns by composing simpler kernels. For all kernels, we take the standard approach of choosing the parameter values that optimize the log marginal likelihood.
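To make Eqs. (1)-(3) and the marginal-likelihood criterion concrete, the following is a minimal NumPy sketch of GP prediction using the radial basis kernel defined in Eq. (5) below. The function names and the toy sine data are illustrative stand-ins, not taken from the paper's materials.

```python
import numpy as np

def rbf_kernel(X1, X2, theta=1.0, length=1.0):
    """Radial basis kernel of Eq. (5): theta^2 * exp(-|x - x'|^2 / (2 l^2))."""
    sq_dist = (X1[:, None] - X2[None, :]) ** 2
    return theta**2 * np.exp(-sq_dist / (2.0 * length**2))

def gp_predict(X, y, X_star, kernel, sigma=0.1):
    """Posterior predictive mean (Eq. 1) and variance (Eq. 2)."""
    Ky = kernel(X, X) + sigma**2 * np.eye(len(X))   # K + sigma^2 I
    k_star = kernel(X, X_star)                      # N x M cross-covariances
    alpha = np.linalg.solve(Ky, y)                  # exemplar weights of Eq. (3)
    mean = k_star.T @ alpha
    v = np.linalg.solve(Ky, k_star)
    var = np.diag(kernel(X_star, X_star)) - np.sum(k_star * v, axis=0)
    return mean, var

def log_marginal_likelihood(X, y, kernel, sigma=0.1):
    """log p(y | X) = -1/2 y^T Ky^{-1} y - 1/2 log|Ky| - N/2 log(2 pi)."""
    Ky = kernel(X, X) + sigma**2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(Ky)
    return (-0.5 * y @ np.linalg.solve(Ky, y)
            - 0.5 * logdet - 0.5 * len(X) * np.log(2 * np.pi))

# Toy usage: condition on noisy observations on [0, 7], extrapolate to (7, 10].
X = np.linspace(0, 7, 71)
y = np.sin(X) + 0.1 * np.random.randn(len(X))
mu, var = gp_predict(X, y, np.linspace(7.1, 10, 30), rbf_kernel)
```

The kernel parameters (and the noise level) would then be chosen by maximizing log_marginal_likelihood, mirroring the standard approach described above.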
3.1 Radial basis kernel

The radial basis kernel is a commonly used kernel in machine learning applications, embodying the assumption that the covariance between function values decays exponentially with input distance:

$k(\mathbf{x}, \mathbf{x}') = \theta^2 \exp\left(-\frac{|\mathbf{x} - \mathbf{x}'|^2}{2l^2}\right)$,   (5)

where $\theta$ is a scaling parameter and $l$ is a length-scale parameter. This kernel assumes that the same smoothness properties apply globally for all inputs. It provides a standard baseline to compare with more expressive kernels.

3.2 Spectral mixture kernel

The second approach is based on the fact that any stationary kernel can be expressed as an integral using Bochner's theorem. Letting $\boldsymbol{\tau} = |\mathbf{x} - \mathbf{x}'| \in \mathbb{R}^P$, then

$k(\boldsymbol{\tau}) = \int_{\mathbb{R}^P} e^{2\pi i \mathbf{s}^\top \boldsymbol{\tau}} \psi(d\mathbf{s})$.   (6)

If $\psi$ has a density $S(\mathbf{s})$, then $S$ is the spectral density of $k$; $S$ and $k$ are thus Fourier duals [18]. This means that a spectral density fully defines the kernel and that furthermore every stationary kernel can be expressed as a spectral density. Wilson & Adams [21] showed that the spectral density can be approximated by a mixture of $Q$ Gaussians, such that

$k(\boldsymbol{\tau}) = \sum_{q=1}^{Q} w_q \prod_{p=1}^{P} \exp\left(-2\pi^2 \tau_p^2 \upsilon_q^{(p)}\right) \cos\left(2\pi \tau_p \mu_q^{(p)}\right)$.   (7)

Here, the $q$th component has mean vector $\boldsymbol{\mu}_q = (\mu_q^{(1)}, \ldots, \mu_q^{(P)})$ and a covariance matrix $M_q = \mathrm{diag}(\upsilon_q^{(1)}, \ldots, \upsilon_q^{(P)})$. The result is a non-parametric approach to Gaussian process regression, in which complex kernels are approximated by mixtures of simpler ones. This approach is appealing when simpler kernels fail to capture functional structure. Its main drawback is that because structure is captured implicitly via the spectral density, the building blocks are psychologically less intuitive: humans appear to have preferences for linear [12] and periodic [1] functions, which are not straightforwardly encoded in the spectral mixture (though of course the mixture can approximate these functions). Since the spectral kernel has been successfully applied to reverse engineer human kernels [22], it is a useful reference for comparison to more structured compositional approaches.

3.3 Compositional kernel

As positive semidefinite kernels are closed under addition and multiplication, we can create richly structured and interpretable kernels from well-understood base components. For example, by summing kernels, we can model the data as a superposition of independent functions. Figure 1 shows an example of how different kernels (radial basis, linear, periodic) can be combined. Table 1 summarizes the kernels used in our grammar. Many other compositional grammars are possible. For example, we could have included a more diverse set of kernels, and other composition operators (e.g., convolution, scaling) that generate valid kernels. However, we believe that our simple grammar is a useful starting point, since the components are intuitive and likely to be psychologically plausible. For tractability, we fix the maximum number of combined kernels to 3. Additionally, we do not allow for repetition of kernels in order to restrict the complexity of the kernel space.
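The closure property that licenses the grammar is easy to see in code. Below is a small sketch of the three base kernels together with the + and × composition operators. The paper's Table 1 is not reproduced here, so the linear and periodic parameterizations shown are standard textbook choices assumed for illustration.

```python
import numpy as np

def rbf(X1, X2, theta=1.0, length=1.0):
    """Radial basis kernel, Eq. (5): smooth, globally homogeneous functions."""
    return theta**2 * np.exp(-(X1[:, None] - X2[None, :]) ** 2 / (2 * length**2))

def linear(X1, X2, theta=1.0):
    """A common linear kernel: samples are straight lines through the origin."""
    return theta**2 * np.outer(X1, X2)

def periodic(X1, X2, theta=1.0, period=1.0, length=1.0):
    """A standard periodic (exp-sine-squared) kernel: repeating structure."""
    d = np.abs(X1[:, None] - X2[None, :])
    return theta**2 * np.exp(-2 * np.sin(np.pi * d / period) ** 2 / length**2)

def add(k1, k2):
    """Sum of kernels: data modeled as a superposition of independent functions."""
    return lambda X1, X2: k1(X1, X2) + k2(X1, X2)

def mul(k1, k2):
    """Product of kernels: one component modulates the other."""
    return lambda X1, X2: k1(X1, X2) * k2(X1, X2)

# Two expressions from the grammar (at most 3 components, no repetition):
lin_plus_per = add(linear, periodic)   # a trend with repeating deviations
local_periodic = mul(periodic, rbf)    # periodicity that drifts locally, cf. Experiment 2b
```

Sampling functions from a GP with lin_plus_per versus rbf alone makes the grammar's inductive bias visible: the former produces structured trends with repeating deviations, the latter generic smooth wiggles.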
4 Experiment 1: Extrapolation

The first experiment assessed whether people prefer compositional over non-compositional extrapolations. In experiment 1a, functions were sampled from a compositional GP and different extrapolations (mean predictions) were produced using each of the aforementioned kernels. Participants were then asked to choose among the 3 different extrapolations for a given function (see Figure 2). In detail, the outputs for $x_{\text{learn}} = [0, 0.1, \ldots, 7]$ were used as a training set to which all three kernels were fitted and then used to generate predictions for the test set $x_{\text{test}} = [7.1, 7.2, \ldots, 10]$. Their mean predictions were then used to generate one plot for every approach that showed the learned input as a blue line and the extrapolation as a red line. The procedure was repeated for 20 different compositional functions. 52 participants (mean age = 36.15, SD = 9.11) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to select one of 3 extrapolations (displayed as red lines) they thought best completed a given blue line. Results showed that participants chose compositional predictions 69%, spectral mixture predictions 17%, and radial basis predictions 14% of the time. Overall, the compositional predictions were chosen significantly more often than the other two ($\chi^2 = 591.2$, $p < 0.01$), as shown in Figure 3a. In experiment 1b, 20 functions were again sampled, but this time from a spectral mixture kernel, and 65 participants (mean age = 30, SD = 9.84) were asked to choose between compositional and spectral mixture extrapolations, receiving $0.5 as before. Results (displayed in Figure 3b) showed that participants again chose compositional extrapolations more frequently (68% vs. 32%, $\chi^2 = 172.8$, $p < 0.01$), even though the ground truth was generated by a spectral mixture kernel. Thus, people seem to prefer compositional over non-compositional extrapolations in forced-choice extrapolation tasks.

5 Markov chain Monte Carlo with people

In a second set of experiments, we assessed participants' inductive biases directly using a Markov chain Monte Carlo with People (MCMCP) approach [19]. Participants accept or reject proposed extrapolations, effectively simulating a Markov chain whose stationary distribution is in this case the posterior predictive. Extrapolations from all possible kernel combinations (up to 3 combined kernels) were generated and stored a priori. These were then used to generate plots of different proposal extrapolations (as in the previous experiment). On each trial, participants chose between their most recently accepted extrapolation and a new proposal.

5.1 Experiment 2a: Compositional ground truth

In the first MCMCP experiment, we sampled functions from compositional kernels. Eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon's Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4. In all cases participants' subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP ($\rho = 0.91$, $p < .01$). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels.
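To see why such a chain has the posterior predictive as its stationary distribution: in the MCMCP framework of [19], an idealized chooser follows Luce's choice rule over the two displayed options, accepting a proposal with probability p(proposal) / (p(proposal) + p(current)). This is the Barker acceptance function, which leaves the target distribution invariant under a symmetric proposal. A minimal simulation under that assumption, with hypothetical probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcmcp_chain(posterior_probs, n_trials=30):
    """Simulate an idealized MCMCP participant over a discrete set of
    pre-generated extrapolations. posterior_probs[i] is the unnormalized
    posterior predictive mass of extrapolation i; acceptance follows the
    Barker rule p_new / (p_new + p_cur), so the chain's stationary
    distribution is the normalized posterior predictive."""
    n = len(posterior_probs)
    state = rng.integers(n)            # random initial extrapolation
    visits = np.zeros(n)
    for _ in range(n_trials):
        proposal = rng.integers(n)     # proposals drawn uniformly, as in the paper
        p_new, p_cur = posterior_probs[proposal], posterior_probs[state]
        if rng.random() < p_new / (p_new + p_cur):
            state = proposal
        visits[state] += 1
    return visits / n_trials

# Hypothetical unnormalized posterior mass over four candidate kernels.
probs = np.array([0.5, 0.2, 0.2, 0.1])
print(mcmcp_chain(probs, n_trials=10000))  # approaches probs / probs.sum()
```

With many trials the visit frequencies approach the normalized target, which is the sense in which the participants' accepted extrapolations sample from their posterior predictive.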
5.2 Experiment 2b: Naturalistic functions

The second MCMCP experiment assessed what structures people converged to when faced with real-world data. 51 participants with an average age of 32.55 (SD = 12.14) were recruited via Amazon Mechanical Turk and received $1 for their participation. The functions were an airline passenger data set, volcano CO2 emission data, the number of gym memberships over 5 years, and the number of times people googled the band "Wham!" over the last 8 years, all shown in Figure 5a. Participants were not told any information about the data set (including input and output descriptions) beyond the input-output pairs. As real-world functions are rarely purely periodic, we adapted the periodic component of the grammar by multiplying a periodic kernel with a radial basis kernel, thereby locally smoothing the periodic part of the function (see http://learning.eng.cam.ac.uk/carl/mauna for an example). Apart from the different training sets, the procedure was identical to the last experiment. Results are shown in Figure 5b, demonstrating that participants converged to intuitively plausible patterns. In particular, for both the volcano and the airline passenger data, participants converged to compositions resembling those found in previous analyses [6]. The correlation between the mean proportion of accepted predictions and the inverse standardized marginal likelihoods of the different kernels was again significantly positive ($\rho = 0.83$, $p < .01$).

6 Experiment 3: Manual function completion

In the next experiment, we let participants draw the functions underlying observed data manually. As all of the prior experiments asked participants to choose between pre-generated predictions of functions, we wanted to compare this to how participants generate predictions themselves. On each round of the experiment, functions were sampled from the compositional grammar, the number of points to be presented on each trial was sampled uniformly between 100 and 200, and the noise variance was sampled uniformly between 0 and 25. Finally, the size of an unobserved region of the function was sampled to lie between 5 and 50. Participants were asked to manually draw the function best describing the observed data and to inter- and extrapolate this function in two unobserved regions. A screenshot of the experiment is shown in Figure 6. 36 participants with a mean age of 30.5 (SD = 7.15) were recruited from Amazon Mechanical Turk and received $2 for their participation. Participants were asked to draw lines in a cloud of dots that they thought best described the given data. To facilitate this process, participants placed black dots into the cloud, which were then automatically connected by a black line based on a cubic Bezier smoothing curve. They were asked to place the first dot on the left boundary and the final dot on the right boundary of the graph. In between, participants were allowed to place as many dots as they liked (from left to right) and could remove previously placed dots. There were 50 trials in total. We assessed the average root mean squared distance between participants' predictions (the line they drew) and the mean predictions of each kernel given the data participants had seen, for both interpolation and extrapolation areas. Results are shown in Figure 7. The mean distance from participants' drawings was significantly higher for the spectral mixture kernel than for the compositional kernel in both interpolation (86.96 vs. 58.33, $t(1291.1) = -6.3$, $p < .001$) and extrapolation areas (110.45 vs. 83.91, $t(1475.7) = 6.39$, $p < 0.001$). The radial basis kernel produced distances similar to those of the compositional kernel in interpolation (55.8), but predicted participants' drawings significantly worse in extrapolation areas (97.9, $t(1459.9) = 3.26$, $p < 0.01$).
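The distance measure used here is straightforward; the following is a sketch of how the per-region comparison might be computed, with hypothetical stand-in arrays in place of the actual drawings and model predictions, which are not reproduced here.

```python
import numpy as np

def region_rmse(drawn_y, model_y, mask):
    """Root mean squared distance between a participant's (resampled) drawing
    and a kernel's posterior mean prediction, restricted to one region."""
    diff = drawn_y[mask] - model_y[mask]
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical common grid with an unobserved interior (interpolation) region
# and an unobserved right-hand (extrapolation) region.
x = np.linspace(0, 10, 200)
interp_mask = (x > 4) & (x < 5)
extrap_mask = x > 9

drawn = np.sin(x)              # stand-in for a Bezier-smoothed drawing resampled on x
comp_mean = np.sin(x) + 0.1    # stand-in for the compositional kernel's posterior mean
print(region_rmse(drawn, comp_mean, interp_mask),
      region_rmse(drawn, comp_mean, extrap_mask))
```

Averaging such per-trial distances over participants and trials, separately per kernel and per region, yields the interpolation and extrapolation comparisons reported above.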
7 Experiment 4: Assessing predictability

Compositional patterns might also affect the way in which participants perceive functions a priori [20]. To assess this, we asked participants to judge how well they thought they could predict 40 different functions that were similar on many measures, such as their spectral entropy and their average wavelet distance to each other, but 20 of which were sampled from a compositional and 20 from a spectral mixture kernel. Figure 8 shows a screenshot of the experiment. 50 participants with a mean age of 32 (SD = 7.82) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to rate the predictability of different functions. On each trial participants were shown a total of $n_j \in \{50, 60, \ldots, 100\}$ randomly sampled input-output points of a given function and asked to judge how well they thought they could predict the output for a randomly sampled input point on a scale of 0 (not at all) to 100 (very well). Afterwards, they had to rate which of two functions was easier to predict (Figure 8) on a scale from -100 (left graph is definitely easier to predict) to 100 (right graph is definitely easier to predict). As shown in Figure 9, compositional functions were perceived as more predictable than spectral functions in isolation ($t(948) = 11.422$, $p < 0.01$) and in paired comparisons ($t(499) = 13.502$, $p < 0.01$). Perceived predictability increased with the number of observed outputs ($r = 0.23$, $p < 0.01$), and the difference between compositional and spectral mixture functions grew with the number of observations ($r = 0.14$, $p < 0.01$).

8 Discussion

In this paper, we probed human intuitions about functions and found that these intuitions are best described as compositional. We operationalized compositionality using a grammar over kernels within a GP regression framework and found that people prefer extrapolations based on compositional kernels over other alternatives, such as a spectral mixture or the standard radial basis kernel. Two Markov chain Monte Carlo with people experiments revealed that participants converge to extrapolations consistent with the compositional kernels. These findings were replicated when people manually drew the functions underlying observed data. Moreover, participants perceived compositional functions as more predictable than non-compositional, but otherwise similar, ones. The work presented here is connected to several lines of previous research, most importantly that of Lucas et al. [15], which introduced GP regression as a model of human function learning, and Wilson et al. [22], which attempted to reverse-engineer the human kernel using a spectral mixture. We see our work as complementary; we need both a theory to describe how people make sense of structure and a method to indicate what the final structure might look like when represented as a kernel. Our approach also ties together neatly with past attempts to model structure in other cognitive domains such as motion perception [9] and decision making [7]. Our work can be extended in a number of ways. First, it is desirable to more thoroughly explore the space of base kernels and composition operators, since we used an elementary grammar in our analyses that is probably too simple.
Second, the compositional approach could be used in traditional function learning paradigms (e.g., [5, 14]) as well as in active input selection paradigms [17]. Another interesting avenue for future research would be to explore the broader implications of compositional function representations. For example, evidence suggests that statistical regularities reduce perceived numerosity [23] and increase memory capacity [2]; these tasks can therefore provide clues about the underlying representations. If compositional functions alter number perception or memory performance to a greater extent than alternative functions, that suggests that our theory extends beyond simple function learning.
1. What is the main contribution of the paper regarding human function learning?
2. What are the strengths of the paper, particularly in its use of modern tools?
3. What are the weaknesses of the paper, especially in terms of the support for compositionality?
4. How does the reviewer assess the clarity and detail of the paper's explanations?
5. What are the reviewer's questions regarding the paper's methodology and results?
Review
Review The authors study the inductive bias of human function learning and find evidence that this bias exhibits compositional structure. I think this should be accepted. Function learning has a long tradition of study in cognitive science and this paper brings modern tools -- Gaussian Processes and MCMC with people -- to bear on the problem. The formal setup and experiments are sensible and decently explained, and the results are interesting. I do think that the overall support for compositionality is a bit less than the authors assert, e.g.:
- Figure 3: The extrapolation distributions don't look *that* good in Lin + Per 3, Lin + Per 4, or Lin x Per.
- Figure 4: It's odd that l+p extrapolations weren't accepted more.
- Figure 6: The RBF kernel is competitive for interpolation.
So the strength of the claims should be scaled back a bit. In addition, I wasn't totally clear on some details of MCMCP. Some questions:
- I would like to understand why the stationary distribution is the posterior predictive -- can you give intuition for this? Or show a sketch in a supplement?
- Why not plot the distributions implied by the Markov chains and compare them to the actual posterior predictives?
- What exactly is the inverse marginal likelihood?
Update: I read the rebuttal and reviews and my assessment is largely unchanged; I'm in favor of this paper.
1. Can you provide more background information on why function learning is essential for human intuition?
2. Can you elaborate further on the concepts mentioned briefly in paragraphs 1-3?
3. Why did the authors choose these particular kernels for their proposed compositional kernel?
4. How was the error bar calculated for each plot?
5. Are there any typos or inconsistencies in the paper that need to be addressed?
Review
Review The paper focuses on understanding human intuition about functions. The authors first define a hypothetical grammar for intuitive functions and then investigate how well this grammar predicts human function learning performance. The authors describe three different kernel parametrizations and provide data from experiments run through Amazon Mechanical Turk. They find that people seem to prefer extrapolations based on compositional kernels over other alternatives. Please provide some more background about the importance of function learning in intuition. Specify in more detail the concepts briefly introduced in paragraphs 1 to 3. What are the reasons behind choosing the specified kernels for the proposed compositional kernel? How is the error bar calculated for each plot? Typo 1: On page two, last paragraph, "second" should be replaced with "third". Typo 2: Inconsistency in reporting the number of participants (written sometimes as a numeral and sometimes as a word).
NIPS
Title Probing the Compositionality of Intuitive Functions Abstract How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition. 1 Introduction Function learning underlies many intuitive judgments, such as the perception of time, space and number. All of these tasks require the construction of mental representations that map inputs to outputs. Since the space of such mappings is infinite, inductive biases are necessary to constrain the plausible inferences. What is the nature of human inductive biases over functions? It has been suggested that Gaussian processes (GPs) provide a good characterization of these inductive biases [15]. As we describe more formally below, GPs are distributions over functions that can encode properties such as smoothness, linearity, periodicity, and other inductive biases indicated by research on human function learning [5, 3]. Lucas et al. [15] showed how Bayesian inference with GP priors can unify previous rule-based and exemplar-based theories of function learning [18]. A major unresolved question is how people deal with complex functions that are not easily captured by any simple GP. Insight into this question is provided by the observation that many complex functions encountered in the real world can be broken down into compositions of simpler functions [6, 11]. We pursue this idea theoretically and experimentally, by first defining a hypothetical compositional grammar for intuitive functions (based on [6]) and then investigating whether this grammar quantitatively predicts human function learning performance. We compare the compositional model to a flexible non-compositional model (the spectral mixture representation proposed by [21]). Both models use Bayesian inference to reason about functions, but differ in their inductive biases. We show that (a) participants prefer compositional pattern extrapolations in both forced choice and manual drawing tasks; (b) samples elicited from participants’ priors over functions are more consistent with the compositional grammar; and (c) participants perceive compositional functions as more predictable than non-compositional ones. Taken together, these findings provide support for the compositional nature of intuitive functions. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 2 Gaussian process regression as a theory of intuitive function learning A GP is a collection of random variables, any finite subset of which are jointly Gaussian-distributed (see [18] for an introduction). A GP can be expressed as a distribution over functions: f ∼ GP(m, k), where m(x) = E[f(x)] is a mean function modeling the expected output of the function given input x, and k(x,x′) = E [(f(x)−m(x))(f(x′)−m(x′))] is a kernel function modeling the covariance between points. 
Intuitively, the kernel encodes an inductive bias about the expected smoothness of functions drawn from the GP. To simplify exposition, we follow standard convention in assuming a constant mean of 0. Conditional on data D = {X,y}, where yn ∼ N (f(xn), σ2), the posterior predictive distribution for a new input x∗ is Gaussian with mean and variance given by: E[f(x?)|D] = k>? (K+ σ2I)−1y (1) V[f(x?)|D] = k(x?,x?)− k>? (K+ σ2I)−1k?, (2) where K is the N × N matrix of covariances evaluated at each input in X and k? = [k(x1,x∗), . . . , k(xN ,x∗)]. As pointed out by Griffiths et al. [10] (see also [15]), the predictive distribution can be viewed as an exemplar (similarity-based) model of function learning [5, 16], since it can be written as a linear combination of the covariance between past and current inputs: f(x∗) = N∑ n=1 αnk(xn,x?) (3) with α = (K + σ2I)−1y. Equivalently, by Mercer’s theorem any positive definite kernel can be expressed as an outer product of feature vectors: k(x,x′) = ∞∑ d=1 λdφd(x)φd(x ′), (4) where {φd(x)} are the eigenfunctions of the kernel and {λd} are the eigenvalues. The posterior predictive mean is a linear combination of the features, which from a psychological perspective can be thought of as encoding “rules” mapping inputs to outputs [4, 14]. Thus, a GP can be expressed as both an exemplar (similarity-based) model and a feature (rule-based) model, unifying the two dominant classes of function learning theories in cognitive science [15]. 3 Structure learning with Gaussian processes So far we have assumed a fixed kernel function. However, humans can adapt to a wide variety of structural forms [13, 8], suggesting that they have the flexibility to learn the kernel function from experience. The key question addressed in this paper is what space of kernels humans are optimizing over—how rich is their representational vocabulary? This vocabulary will in turn act as an inductive bias, making some functions easier to learn, and other functions harder to learn. Broadly speaking, there are two approaches to parameterizing the kernel space: a fixed functional form with continuous parameters, or a combinatorial space of functional forms. These approaches are not mutually exclusive; indeed, the success of the combinatorial approach depends on optimizing the continuous parameters for each form. Nonetheless, this distinction is useful because it allows us to separate different forms of functional complexity. A function might have internal structure such that when this structure is revealed, the apparent functional complexity is significantly reduced. For example, a function composed of many piecewise linear segments might have a long description length under a typical continuous parametrization (e.g., the radial basis kernel described below), because it violates the smoothness assumptions of the prior. However, conditional on the changepoints between segments, the function can be decomposed into independent parts each of which is well-described by a simple continuous parametrization. If internally structured functions are “natural kinds,” then the combinatorial approach may be a good model of human intuitive functions. In the rest of this section, we describe three kernel parameterizations. The first two are continuous, differing in their expressiveness. The third one is combinatorial, allowing it to capture complex patterns by composing simpler kernels. For all kernels, we take the standard approach of choosing the parameter values that optimize the log marginal likelihood. 
3.1 Radial basis kernel The radial basis kernel is a commonly used kernel in machine learning applications, embodying the assumption that the covariance between function values decays exponentially with input distance: k(x,x′) = θ2 exp ( −|x− x ′|2 2l2 ) , (5) where θ is a scaling parameter and l is a length-scale parameter. This kernel assumes that the same smoothness properties apply globally for all inputs. It provides a standard baseline to compare with more expressive kernels. 3.2 Spectral mixture kernel The second approach is based on the fact that any stationary kernel can be expressed as an integral using Bochner’s theorem. Letting τ = |x− x′| ∈ RP , then k(τ ) = ∫ RP e2πis >τψ(ds). (6) If ψ has a density S(s), then S is the spectral density of k; S and k are thus Fourier duals [18]. This means that a spectral density fully defines the kernel and that furthermore every stationary kernel can be expressed as a spectral density. Wilson & Adams [21] showed that the spectral density can be approximated by a mixture of Q Gaussians, such that k(τ ) = Q∑ q=1 wq P∏ p=1 exp ( −2π2τ2pυpq ) cos ( 2πτpµ (p) q ) (7) Here, the qth component has mean vector µq = ( µ (1) q , . . . , µ (P ) q ) and a covariance matrix Mq = diag ( υ (1) q , . . . , υ (P ) q ) . The result is a non-parametric approach to Gaussian process re- gression, in which complex kernels are approximated by mixtures of simpler ones. This approach is appealing when simpler kernels fail to capture functional structure. Its main drawback is that because structure is captured implicitly via the spectral density, the building blocks are psychologically less intuitive: humans appear to have preferences for linear [12] and periodic [1] functions, which are not straightforwardly encoded in the spectral mixture (though of course the mixture can approximate these functions). Since the spectral kernel has been successfully applied to reverse engineer human kernels [22], it is a useful reference of comparison to more structured compositional approaches. 3.3 Compositional kernel As positive semidefinite kernels are closed under addition and multiplication, we can create richly structured and interpretable kernels from well understood base components. For example, by summing kernels, we can model the data as a superposition of independent functions. Figure 1 shows an example of how different kernels (radial basis, linear, periodic) can be combined. Table 1 summarizes the kernels used in our grammar. Many other compositional grammars are possible. For example, we could have included a more diverse set of kernels, and other composition operators (e.g., convolution, scaling) that generate valid kernels. However, we believe that our simple grammar is a useful starting point, since the components are intuitive and likely to be psychologically plausible. For tractability, we fix the maximum number of combined kernels to 3. Additionally, we do not allow for repetition of kernels in order to restrict the complexity of the kernel space. . 4 Experiment 1: Extrapolation The first experiment assessed whether people prefer compositional over non-compositional extrapolations. In experiment 1a, functions were sampled from a compositional GP and different extrapolations (mean predictions) were produced using each of the aforementioned kernels. Participants were then asked to choose among the 3 different extrapolations for a given function (see Figure 2). 
In detail, the outputs for xlearn = [0, 0.1, · · · , 7] were used as a training set to which all three kernels were fitted and then used to generate predictions for the test set xtest = [7.1, 7.2, · · · , 10]. Their mean predictions were then used to generate one plot for every approach that showed the learned input as a blue line and the extrapolation as a red line. The procedure was repeated for 20 different compositional functions. 52 participants (mean age=36.15, SD = 9.11) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to select one of 3 extrapolations (displayed as red lines) they thought best completed a given blue line. Results showed that participants chose compositional predictions 69%, spectral mixture predictions 17%, and radial basis predictions 14% of the time. Overall, the compositional predictions were chosen significantly more often than the other two (χ2 = 591.2, p < 0.01) as shown in Figure 3a. In experiment 1b, again 20 functions were sampled but this time from a spectral mixture kernel and 65 participants (mean age=30, SD = 9.84) were asked to choose among either compositional or spectral mixture extrapolations and received $0.5 as before. Results (displayed in Figure 3b) showed that participants again chose compositional extrapolations more frequently (68% vs. 32%, χ2 = 172.8, p < 0.01), even if the ground truth happened to be generated by a spectral mixture kernel. Thus, people seem to prefer compositional over non-compositional extrapolations in forced choice extrapolation tasks. 5 Markov chain Monte Carlo with people In a second set of experiments, we assessed participants’ inductive biases directly using a Markov chain Monte Carlo with People (MCMCP) approach [19]. Participants accept or reject proposed extrapolations, effectively simulating a Markov chain whose stationary distribution is in this case the posterior predictive. Extrapolations from all possible kernel combinations (up to 3 combined kernels) were generated and stored a priori. These were then used to generate plots of different proposal extrapolations (as in the previous experiment). On each trial, participants chose between their most recently accepted extrapolation and a new proposal. 5.1 Experiment 2a: Compositional ground truth In the first MCMCP experiment, we sampled functions from compositional kernels. Eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon’s Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4. In all cases participants’ subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP (ρ = 0.91, p < .01). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels. 5.2 Experiment 2b: Naturalistic functions The second MCMCP experiment assessed what structures people converged to when faced with real world data. 
5.1 Experiment 2a: Compositional ground truth

In the first MCMCP experiment, we sampled functions from compositional kernels. Eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4. In all cases, participants' subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP (ρ = 0.91, p < 0.01). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels.

5.2 Experiment 2b: Naturalistic functions

The second MCMCP experiment assessed what structures people converge to when faced with real-world data. 51 participants with an average age of 32.55 (SD = 12.14) were recruited via Amazon Mechanical Turk and received $1 for their participation. The functions were an airline passenger data set, volcano CO2 emission data, the number of gym memberships over 5 years, and the number of times people googled the band "Wham!" over the last 8 years; all are shown in Figure 5a. Participants were not told any information about the data sets (including input and output descriptions) beyond the input-output pairs. As periodicity in the real world is rarely exact, we adapted the periodic component of the grammar by multiplying a periodic kernel with a radial basis kernel, thereby locally smoothing the periodic part of the function.¹ Apart from the different training sets, the procedure was identical to the previous experiment. Results are shown in Figure 5b, demonstrating that participants converged to intuitively plausible patterns. In particular, for both the volcano and the airline passenger data, participants converged to compositions resembling those found in previous analyses [6]. The correlation between the mean proportion of accepted predictions and the inverse standardized marginal likelihoods of the different kernels was again significantly positive (ρ = 0.83, p < 0.01).

¹See the following page for an example: http://learning.eng.cam.ac.uk/carl/mauna

6 Experiment 3: Manual function completion

In the next experiment, we let participants draw the functions underlying observed data manually. As all of the prior experiments asked participants to judge between "pre-generated" predictions of functions, we wanted to compare this to how participants generate predictions themselves. On each round of the experiment, a function was sampled from the compositional grammar, the number of points to be presented on each trial was sampled uniformly between 100 and 200, and the noise variance was sampled uniformly between 0 and 25. Finally, the size of an unobserved region of the function was sampled to lie between 5 and 50. Participants were asked to manually draw the function best describing the observed data and to inter- and extrapolate this function in two unobserved regions. A screenshot of the experiment is shown in Figure 6. 36 participants with a mean age of 30.5 (SD = 7.15) were recruited from Amazon Mechanical Turk and received $2 for their participation. Participants were asked to draw lines in a cloud of dots that they thought best described the given data. To facilitate this process, participants placed black dots into the cloud, which were then automatically connected by a black line based on a cubic Bezier smoothing curve. They were asked to place the first dot on the left boundary and the final dot on the right boundary of the graph. In between, participants were allowed to place as many dots as they liked (from left to right) and could remove previously placed dots. There were 50 trials in total. We assessed the average root mean squared distance between participants' predictions (the lines they drew) and the mean predictions of each kernel given the data participants had seen, for both interpolation and extrapolation areas. Results are shown in Figure 7. The mean distance from participants' drawings was significantly higher for the spectral mixture kernel than for the compositional kernel in both interpolation (86.96 vs. 58.33, t(1291.1) = −6.3, p < 0.001) and extrapolation areas (110.45 vs. 83.91, t(1475.7) = 6.39, p < 0.001).
The radial basis kernel produced distances similar to the compositional kernel in interpolation (55.8) but predicted participants' drawings significantly worse in extrapolation areas (97.9, t(1459.9) = 3.26, p < 0.01).

7 Experiment 4: Assessing predictability

Compositional patterns might also affect the way in which participants perceive functions a priori [20]. To assess this, we asked participants to judge how well they thought they could predict 40 different functions that were similar on many measures, such as their spectral entropy and their average wavelet distance to each other, but 20 of which were sampled from a compositional and 20 from a spectral mixture kernel. Figure 8 shows a screenshot of the experiment. 50 participants with a mean age of 32 (SD = 7.82) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to rate the predictability of different functions. On each trial, participants were shown a total of n_j ∈ {50, 60, ..., 100} randomly sampled input-output points of a given function and asked to judge how well they thought they could predict the output for a randomly sampled input point, on a scale of 0 (not at all) to 100 (very well). Afterwards, they had to rate which of two functions was easier to predict (Figure 8) on a scale from -100 (left graph is definitely easier to predict) to 100 (right graph is definitely easier to predict). As shown in Figure 9, compositional functions were perceived as more predictable than spectral functions both in isolation (t(948) = 11.422, p < 0.01) and in paired comparisons (t(499) = 13.502, p < 0.01). Perceived predictability increased with the number of observed outputs (r = 0.23, p < 0.01), and the larger the number of observations, the larger the difference between compositional and spectral mixture functions (r = 0.14, p < 0.01).

8 Discussion

In this paper, we probed human intuitions about functions and found that these intuitions are best described as compositional. We operationalized compositionality using a grammar over kernels within a GP regression framework and found that people prefer extrapolations based on compositional kernels over alternatives such as a spectral mixture or the standard radial basis kernel. Two Markov chain Monte Carlo with people experiments revealed that participants converge to extrapolations consistent with the compositional kernels. These findings were replicated when people manually drew the functions underlying observed data. Moreover, participants perceived compositional functions as more predictable than otherwise similar non-compositional ones.

The work presented here is connected to several lines of previous research, most importantly that of Lucas et al. [15], which introduced GP regression as a model of human function learning, and Wilson et al. [22], which attempted to reverse-engineer the human kernel using a spectral mixture. We see our work as complementary; we need both a theory to describe how people make sense of structure and a method to indicate what the final structure might look like when represented as a kernel. Our approach also ties together neatly with past attempts to model structure in other cognitive domains such as motion perception [9] and decision making [7].

Our work can be extended in a number of ways. First, it is desirable to more thoroughly explore the space of base kernels and composition operators, since we used an elementary grammar in our analyses that is probably too simple.
Second, the compositional approach could be used in traditional function learning paradigms (e.g., [5, 14]) as well as in active input selection paradigms [17]. Another interesting avenue for future research would be to explore the broader implications of compositional function representations. For example, evidence suggests that statistical regularities reduce perceived numerosity [23] and increase memory capacity [2]; these tasks can therefore provide clues about the underlying representations. If compositional functions alter number perception or memory performance to a greater extent than alternative functions, this would suggest that our theory extends beyond simple function learning.
1. What is the main contribution of the paper regarding hypothesis space for learning functions?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical quality and experimental design?
3. How does the reviewer assess the novelty and potential impact of the paper?
4. Are there any concerns or suggestions regarding the clarity and presentation of the paper's content?
Review
Review

This paper tests the hypothesis that people's hypothesis space for learning functions is compositional. Function learning is formalized as Bayesian regression with functional forms given by compositional Gaussian process kernels (summing and multiplying radial basis, linear, and periodic kernels). A spectral mixture kernel provides an alternative hypothesis. A series of experiments demonstrates that people prefer to extrapolate functions (given points on the function) using compositional structure and find compositional functions to be more predictable.

Technical quality

The number of experiments reported in the paper is impressive. Moreover, the use of the MCMCP method to probe people's posterior belief distribution is interesting. However, because Experiments 1a, 2a, and 3 use compositional ground truth, the conclusions that can be drawn from these experiments seem limited. It is not clear if people prefer compositional structures in these experiments because the data are generated from them or because they always prefer compositional structures. Furthermore, why are the compositional kernels not compared to the spectral mixture kernel in Experiments 2a and 2b? Perhaps the most convincing demonstration that people prefer compositional structure is Experiment 1b, in which compositional extrapolations are chosen more often even when the ground truth is the spectral mixture. I wonder if it would be possible to incorporate a prior over the compositional structures that expresses people's preference for linear and periodic functions and (presumably) simpler combinations, and look at the full posterior belief distribution rather than just the likelihood? A small comment on Experiment 3: I don't think many participants would find drawing a function using a Bezier curve very intuitive. What did the MTurk participants think of this? Maybe this experiment could be conducted on touchscreen devices instead.

Novelty/originality

This work introduces the compositional kernel and includes several novel and interesting behavioral experiments.

Potential impact

This paper will probably be of interest to many cognitive scientists and some computer scientists as well. However, it seems somewhat unsurprising from the outset that people prefer to think of functions as consisting of compositional structure rather than as a spectral density (unless there is strong evidence to believe the latter). The paper would be improved if some reasons to think otherwise were presented at the beginning.

Clarity and presentation

The paper is well-written and clear. However, it would be helpful to include in the Introduction some examples of why function learning is important for an organism. I did not fully understand the math in Sections 2, 3.1, and 3.2, but I appreciated that the authors tried to provide intuitive explanations for the mathematical formalisms at every step. One small comment is that in lines 43-45, there are two symbols, x_asterisk and x_star, and it's not clear what they each denote (or if they're actually the same symbol).
NIPS
Title Learning Collaborative Policies to Solve NP-hard Routing Problems

Abstract Recently, deep reinforcement learning (DRL) frameworks have shown potential for solving NP-hard routing problems such as the traveling salesman problem (TSP) without problem-specific expert knowledge. Although DRL can be used to solve complex problems, DRL frameworks still struggle to compete with state-of-the-art heuristics, showing a substantial performance gap. This paper proposes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find a near-optimum solution using two iterative DRL policies: the seeder and the reviser. The seeder generates candidate solutions (seeds) that are as diverse as possible while exploring the full combinatorial action space (i.e., the sequence of assignment actions). To this end, we train the seeder's policy using a simple yet effective entropy regularization reward that encourages the seeder to find diverse solutions. The reviser, on the other hand, modifies each candidate solution generated by the seeder; it partitions the full trajectory into sub-tours and simultaneously revises each sub-tour to minimize its traveling distance. Thus, the reviser is trained to improve the candidate solutions' quality, focusing on the reduced solution space (which is beneficial for exploitation). Extensive experiments demonstrate that the proposed two-policies collaboration scheme improves over the single-policy DRL framework on various NP-hard routing problems, including TSP, the prize collecting TSP (PCTSP), and the capacitated vehicle routing problem (CVRP).

1 Introduction

Routing is a combinatorial optimization problem and one of the prominent topics in discrete mathematics and computational theory. Among routing problems, the traveling salesman problem (TSP) is a canonical example. TSP can be applied to real-world problems in various engineering fields, such as robot routing, biology, and electrical design automation (EDA) [1, 2, 3, 4, 5], by expanding its constraints and objectives to real-world settings; the resulting problems are called TSP variants. However, TSP and its variants are NP-hard, making it challenging to design an exact solver [6].

Due to NP-hardness, solvers of TSP-like problems rely on mixed-integer linear programming (MILP) solvers [7] and handcrafted heuristics [8, 9]. Although they often provide remarkable performance on target problems, these conventional approaches have several limitations. Firstly, in the case of MILP solvers, the objective functions and constraints must be formulated in linear form, but many real-world routing applications, including biology and EDA, have non-linear objectives. Secondly, handcrafted heuristics rely on expert knowledge of the target problem and are therefore hard to transfer to other problems; whenever the target problem changes, the algorithm must be re-designed.

Deep reinforcement learning (DRL) routing frameworks [10, 11, 12] have been proposed to tackle the limitations of conventional approaches. One of the benefits of DRL is that the reward can be any value, even one from a black-box simulator; therefore, DRL can overcome the limitations of MILP in real-world applications. Moreover, DRL frameworks can design solvers automatically, relying less on handcrafted components. We note that the main objective of our research is not outperforming problem-specific solvers like Concorde [9], a TSP solver.
Our problem-solving strategy based on DRL, however, ultimately focuses on practical applications¹, including intelligent transportation [13], biological sequence design [14], routing on electrical devices [15], and device placement [16, 17]. Therefore, this paper evaluates the performance of DRL frameworks on TSP-like problems as a benchmark for potential applicability to practical applications, covering speed, optimality, scalability, and extensibility to other problems. TSP-like problems are excellent benchmarks as they have various baselines to compare with and can easily be modeled and evaluated.

Contribution. This paper presents a novel DRL scheme, coined learning collaborative policies (LCP), a hierarchical solving protocol with two policies: the seeder and the reviser. The seeder generates various candidate solutions (seeds), each of which is iteratively revised by the reviser to generate fine-tuned solutions. Having diversified candidate solutions is important, as it gives a better chance of finding the best solution among them. Thus, the seeder is dedicated to exploring the full combinatorial action space (i.e., the sequence of assignment actions) so that it can provide candidate solutions that are as diverse as possible. Exploring the full combinatorial action space is important because solution quality fluctuates strongly with the composition of the solution; however, such exploration is inherently difficult due to the enormous number of possible solutions. Therefore, this study provides an effective exploration strategy based on an entropy maximization scheme.

The reviser modifies each candidate solution generated by the seeder. The reviser is dedicated to exploiting the policy (i.e., derived knowledge about the problem) to improve the quality of the candidate solutions. The reviser partitions the full trajectory into sub-tours and revises each sub-tour to minimize its traveling distance in a parallel manner. This scheme provides two advantages: (a) searching over the restricted solution space can be more effective because the reward signal corresponding to a sub-tour is less variable than that of the full trajectory when using reinforcement learning to derive a policy, and (b) searching over the sub-tours of seeds can be parallelized to expedite the revising process.

The most significant advantage of our method is that the reviser can re-evaluate diversified but underrated candidates from the seeder without dropping them early. Since the seeder explores the full trajectory, it may make mistakes in local sub-trajectories; it is therefore essential to correct such mistakes locally to improve solution quality. The proposed revising scheme parallelizes the revising process by decomposing the full solution and locally updating the decomposed segments. This allows the reviser to search a larger solution space in a single inference than conventional local search (i.e., the reviser needs fewer iterations than conventional 2-opt [18] or DRL-based 2-opt [19]), consequently reducing computing costs. Therefore, we can keep all candidates without eliminating them early to save computation.

The proposed method is architecture-agnostic and can be applied to various neural architectures. The seeder and the reviser can be parameterized with any neural architecture; this research utilizes AM [12], a representative DRL model for combinatorial optimization, to parameterize both. A high-level sketch of the resulting inference loop is given below.
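The protocol just described (sample M seeds, revise each I times, keep the best) can be summarized in a few lines. This is our own reconstruction under assumed placeholder names (seeder.sample, reviser.revise, tour_length), not the authors' implementation.

```python
# Minimal sketch of the LCP inference loop described above; `seeder` and
# `reviser` stand for the two trained policies, and all names are placeholders.

def lcp_solve(instance, seeder, reviser, tour_length, M=1280, I=5):
    candidates = seeder.sample(instance, n=M)      # M diverse full tours (seeds)
    for _ in range(I):                             # I revision rounds in series
        # each tour is split into segments and all M*K segments are revised
        # in parallel by the reviser; shown per-tour here for clarity
        candidates = [reviser.revise(tour, instance) for tour in candidates]
    # final answer: the best revised candidate
    return min(candidates, key=lambda tour: tour_length(tour, instance))
```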
According to the experimental results, LCP improves the target neural architecture AM [12] and outperforms competitive DRL frameworks on TSP, PCTSP, and CVRP (N = 20, 50, 100, 500, where N is the number of nodes) as well as on real-world problems from TSPLIB [20]. Moreover, through extensive ablation studies, we show that the proposed techniques, including the entropy regularization scheme and the revision scheme, clearly contribute to the performance improvement.

2 Related Works

There have been continuous advances in DRL frameworks for solving various routing problems. A DRL framework can generate solvers that do not rely on ground-truth labels of the target problem and can thus be applied to unexplored problems. DRL-based approaches can be categorized into two parts: constructive heuristics and improvement heuristics. We survey these two categories and the currently emerging hybrid approaches combining machine learning (ML) with conventional solvers.

¹These works [13, 14, 15, 16, 17] are inspired by DRL frameworks [10, 12] on combinatorial optimization.

2.1 DRL-based Constructive Heuristics

Bello et al. [10] introduced an actor-critic algorithm with a policy parameterized by the pointer network [21]. They proposed a constructive Markov decision process (MDP), where the action is defined as choosing one of the un-served nodes to visit given a partial solution; the policy is trained to add nodes sequentially until a complete solution is produced. Later DRL-based constructive heuristics focused on the design of the neural architecture while preserving the constructive MDP [10]. Khalil et al. [11] proposed a DRL framework with a graph embedding structure. Nazari et al. [22], Deudon et al. [23], and Kool et al. [12] redesigned the pointer network [21] using the transformer [24] and trained it with a policy gradient method [25]. The AM by Kool et al. [12] reports substantial results on various NP-hard routing problems, including TSP, PCTSP, CVRP, and the orienteering problem (OP), at high speed.

AM-variants. After the success of the AM, many studies extended it, and several engineering fields and industries have applied AM to their domains. For example, Liao et al. [4] proposed a routing algorithm for circuits using AM. Other works focus on increasing the performance of AM on classic routing problems like TSP through simple techniques. Kwon et al. [26] proposed POMO, an effective reinforcement learning method for AM, with a new RL baseline that reduces the training variance of AM using problem-specific properties of TSP and CVRP, together with an effective post-processing algorithm for TSP and CVRP. However, their method is problem-specific because it relies on domain properties of TSP and CVRP (e.g., it cannot directly be applied to PCTSP). Xin et al. [27] proposed an AM-style DRL model, MDAM, for NP-hard routing problems; it learns multiple AM decoders and derives various solutions through them. Their goal of increasing solution diversity is similar to ours; however, our study differs in that it increases the entropy of a single decoder and corrects the mistakes of the diverse solutions through a reviser.

2.2 DRL-based Improvement Heuristics

Unlike the constructive MDP, DRL-based improvement heuristics are designed to improve a completed solution iteratively.
Most research on DRL-based improvement heuristics is inspired by classical local search algorithms such as 2-opt [18] and the large neighborhood search (LNS) [28]. Chen et al. [29] proposed a DRL-based local search framework, termed NeuRewriter, that shows promising performance on CVRP and job scheduling problems. Wu et al. [30] and Costa et al. [31] proposed DRL-based TSP solvers that learn 2-opt moves. Their methods improve randomly generated solutions, unlike the method of Chen et al. [29], which rewrites a solution given by a conventional heuristic solver. Hottung & Tierney [32] proposed a novel search method for VRP, inspired by LNS, that repeatedly destroys and repairs a solution; it gives promising performance on CVRP. Improvement heuristic approaches generally show better performance than constructive heuristics but are usually slower. In the case of TSP, the number of neural network inferences of a constructive heuristic equals the number of cities to visit, whereas the number of inferences of an improvement heuristic is generally much larger.

2.3 Hybrid Approaches with Conventional Solvers

Recently, several studies on hybrid approaches with conventional solvers have shown promising performance. Lu et al. [33] proposed a hybrid method in which a policy is learned to control improvement operators (handcrafted heuristics). Significantly, it outperforms LKH3, which is widely considered a high bar in the machine learning (ML) community. Joshi et al. [34] combined a graph neural network (GNN) model with beam search; the GNN is trained with supervised learning to generate a heat map of candidate nodes, which then reduces the search space for improvement heuristics. Similarly, Fu et al. [35] combined a supervised GNN model with Monte Carlo tree search (MCTS), and Kool et al. [36] combined a supervised GNN model with dynamic programming. These methods achieve significant performance, showing that ML can effectively collaborate with conventional operations research (OR) methods. The research scopes of hybrid approaches and DRL-based methods are different: hybrid approaches can surpass classical solvers on target tasks by collaborating with them, but they inherit the classical solvers' poor extensibility to other tasks. DRL-based methods can be applied to various real-world tasks, including unexplored ones, without a classical solver. This paper investigates DRL-based NP-hard routing methods without the help of classical solvers.

3 Formulation of Routing Problems

This section explains the Markov decision process (MDP) formulation for the 2D Euclidean TSP as a representative example; the MDP formulations for the other problems are described in Appendix A.1. The main objective of TSP is to find the shortest Hamiltonian cycle. A TSP instance can be represented as a sequence of N nodes in 2D Euclidean space, $s = \{x_i\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^2$. The solution of TSP can then be represented as a permutation $\pi$ of the input sequence:

$\pi = \bigcup_{t=1}^{N} \{\pi_t\}, \quad \pi_t \in \{1, \dots, N\}, \quad \pi_{t_1} \neq \pi_{t_2} \ \text{if} \ t_1 \neq t_2$

The objective is to minimize the tour length

$L(\pi|s) = \sum_{t=1}^{N-1} \|x_{\pi_{t+1}} - x_{\pi_t}\|_2 + \|x_{\pi_N} - x_{\pi_1}\|_2$
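For concreteness, the tour-length objective can be computed as in the short NumPy sketch below; this is our own minimal example with illustrative names, not the authors' code.

```python
import numpy as np

def tour_length(pi, s):
    """Tour length L(pi|s) for a TSP instance s (N x 2 array of coordinates)
    and a permutation pi of node indices, including the closing edge."""
    coords = np.asarray(s)[np.asarray(pi)]   # reorder the nodes along the tour
    diffs = np.diff(coords, axis=0)          # the N-1 consecutive edges
    closing = coords[0] - coords[-1]         # the edge back to the start node
    return np.linalg.norm(diffs, axis=1).sum() + np.linalg.norm(closing)
```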
We then formulate the constructive MDP of TSP.

State. The state of the MDP is a partial solution of TSP, i.e., the sequence of previously selected actions $\pi_{1:t-1}$.

Action. The action is defined as selecting one of the un-served nodes. It is therefore represented as $\pi_t$, where $\pi_t \in \{1, \dots, N\} \setminus \{\pi_{1:t-1}\}$.

Cumulative Reward. We define the cumulative reward for a solution (a sequence of assignments) of problem instance s as the negative of the tour length: $-L(\pi|s)$.

Constructive Policy. Finally, we define the constructive policy $p(\pi|s)$ that generates a solution $\pi$ from a TSP graph s. The constructive policy is decomposed as

$p(\pi|s) = \prod_{t=1}^{N} p_{\theta}(\pi_t|\pi_{1:t-1}, s)$

where $p_{\theta}(\pi_t|\pi_{1:t-1}, s)$ is a single-step assignment policy parameterized by θ.

4 Learning Collaborative Policies

This section describes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find a near-optimum solution using two hierarchical steps: the seeding process and the revising process (see Figure 1 for details). In the seeding process, the seeder policy $p_S$ generates M diversified candidate solutions. In the revising process, the reviser policy $p_R$ rewrites each candidate solution I times to minimize its tour length. The final solution is then selected as the best among the M revised (updated) candidate solutions. See the pseudo-code in Appendix A.4 for a detailed technical explanation.

4.1 Seeding Process

The seeder generates candidate solutions that are as diverse as possible while exploring the full combinatorial action space. To this end, the seeder is trained as follows.

Solution space. The solution space of the seeder is a set of full-trajectory solutions $\{\pi^{(1)}, \dots, \pi^{(M)}\}$, where M, termed the sample width, is the number of candidate solutions generated by the seeder.

Policy structure. The seeder is a constructive policy, as defined in Section 3:

$p_S(\pi|s) = \prod_{t=1}^{N} p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$

The segment policy $p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$, parameterized by $\theta_S$, is derived from AM [12].

Entropy Reward. To force the seeder policy $p_S$ to sample diverse solutions, we train $p_S$ so that its entropy H is maximized. To this end, we use the reward $R_S$ defined as

$R_S = H\left(\pi \sim \prod_{t=1}^{N} p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\right) \approx \sum_{t=1}^{N} w_t H\left(\pi_t \sim p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\right)$  (1)

The entropy of the constructive policy is an appropriate measure of solution diversity, but computing it exactly is intractable because the search space has size N!. Therefore, we approximate it as a weighted sum of the entropies of the segment policies $p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$ evaluated at different time steps. We use a linear scheduler (time-varying weights) $w_t = \frac{N-t}{N_w}$ to boost exploration at the earlier stages of composing a solution; higher randomness, imposed by a higher weight $w_t$ at the early stages, tends to generate more diversified full trajectories later. The normalizing factor $N_w$ is a hyperparameter.

Training scheme. To train the seeder, we use the REINFORCE [25] algorithm with the rollout baseline b introduced by Kool et al. [12]. The gradient of the objective function is

$\nabla J(\theta_S|s) = \mathbb{E}_{\pi \sim p_S}\left[\left(L(\pi|s) - \alpha R_S(p_{S_{1:N}}, \pi) - b(s)\right)\nabla \log(p_S)\right]$  (2)

where α is a hyperparameter for $R_S$ and $p_{S_{1:N}}$ is the sequence of segment policies $\{p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\}_{t=1}^{N}$. We use the ADAM [37] optimizer to obtain the optimal parameter $\theta^*$ that minimizes the objective function.
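The approximated entropy bonus of Eq. (1) with its linear schedule is easy to sketch. The code below is our own reconstruction under assumed names (step_probs, etc.), not the authors' implementation; it computes the weighted entropy term from the per-step distributions produced while the seeder decodes one tour.

```python
import torch

# Sketch of the linearly scheduled entropy reward R_S of Eq. (1).
# `step_probs` is a list of N per-step distributions over the remaining nodes,
# collected during one decoding pass of the seeder; names are illustrative.

def entropy_reward(step_probs):
    N = len(step_probs)
    N_w = N * (N + 1) / 2                  # normalizer: sum_{i=1}^{N} i
    reward = 0.0
    for t, probs in enumerate(step_probs, start=1):
        w_t = (N - t) / N_w                # linear schedule: larger weight early on
        entropy = -(probs * torch.log(probs + 1e-10)).sum()
        reward = reward + w_t * entropy
    return reward                          # added to the REINFORCE reward with weight alpha, Eq. (2)
```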
4.2 Revision Process

In the revision process, given a candidate solution, the reviser decomposes it into K segments and simultaneously finds the optimum routing sequence for each segment of each candidate solution. The reviser repeats this revising process I times to find the best updated candidate solution. To be specific, the reviser sequentially updates the candidate solutions (I times) by repeatedly decomposing the full trajectories computed in the previous iteration into segments and revising these segments to produce M updated full-trajectory solutions. To sum up, the reviser solves M × K segments in parallel (M: number of candidate solutions, K: number of segments in each candidate solution), I times in series. A simplified sketch of one revision round is given below.

The proposed scheme has advantages over conventional local search methods and DRL-based improvement heuristics: it searches a larger solution space in a single inference and therefore needs fewer iterations I. For example, 2-opt and DRL-2opt [19] search an O(N²) solution space (O(MN²) if parallelized), while the reviser searches O(MK × l!), which is much larger (when the number of nodes l in a segment is big enough) in a single inference. Hence, we can reduce the number of iterations I significantly compared to 2-opt or DRL-2opt [19], thus expediting the solution search (see Appendix E).

Solution space. The solution space of the reviser is a partial segment of a full-trajectory solution, represented as $\pi_{k+1:k+l}$, where k is the starting index and l is the number of nodes in the segment. For details of assigning segments, including k and l, see Appendix A.3.

Policy structure. The reviser is a constructive policy:

$p_R(\pi_{k+1:k+l}|s) = \prod_{t=1}^{l} p_{\theta_R}(\pi_{k+t}|\pi_{k:k+t-1}, \pi_{k+l+1}, s)$

The segment policy $p_{\theta_R}$, parameterized by $\theta_R$, has a form similar to that of AM [12]. Here, $\pi_k$ and $\pi_{k+l+1}$ indicate the starting point and the destination point of the partial segment, respectively (see the red points in Figure 3). We modify the context embedding vector $h^{(N)}_{(c)} = [\bar{h}^{(N)}, h^{(N)}_{\pi_{t-1}}, h^{(N)}_{\pi_1}]$ of AM, which was designed for solving TSP. Here, h is a high-dimensional embedding vector from the transformer-based encoder, and N is the number of multi-head attention layers. $\bar{h}^{(N)}$ is the mean of the entire embedding, $h^{(N)}_{\pi_{t-1}}$ is the embedding of the previously selected node, and $h^{(N)}_{\pi_1}$ is the embedding of the first node. However, since the destination of the reviser is $\pi_{k+l+1}$ rather than the first node $\pi_1$, we replace the embedding of the first node $h^{(N)}_{\pi_1}$ with the embedding of the last node $h^{(N)}_{\pi_{k+l+1}}$, giving the context embedding $h^{(N)}_{(c)} = [\bar{h}^{(N)}, h^{(N)}_{\pi_{k+t-1}}, h^{(N)}_{\pi_{k+l+1}}]$.

Revision Reward. The reward is the negative of the partial tour length $L_R(\pi_{k+1:k+l}|s) = \sum_{t=1}^{l+1} \|x_{\pi_{k+t}} - x_{\pi_{k+t-1}}\|_2$.

Training scheme. The training process is mostly the same as described in Section 4.1, except that we replace the length term L with $L_R$ and set α = 0 to remove the entropy reward $R_S$ when training the reviser. Note that the seeder and the reviser are trained separately.
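The decomposition step at the heart of the revision process can be sketched as follows for a single candidate tour: the tour is cut into K segments, each segment's interior is re-ordered by the reviser between its fixed endpoints, and the segments are stitched back together. This is a simplified reconstruction under assumed names (reviser.reorder, etc.); the paper's implementation batches all M × K segments in parallel.

```python
# Sketch of one revision round for a single candidate tour (names illustrative).
# `reviser.reorder` re-orders a segment given its fixed start and destination;
# len(tour) is assumed divisible by K for simplicity.

def revise_once(tour, instance, reviser, K):
    seg_len = len(tour) // K
    new_tour = []
    for k in range(0, len(tour), seg_len):
        segment = tour[k:k + seg_len]
        start = tour[k - 1]                        # fixed start point pi_k (wraps cyclically)
        end = tour[(k + seg_len) % len(tour)]      # fixed destination pi_{k+l+1}
        new_tour.extend(reviser.reorder(segment, start, end, instance))
    return new_tour
```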
5 Experiments

This section reports the experimental results² of the LCP scheme on TSP, PCTSP, and CVRP (N = 20, 50, 100, 500, where N is the number of nodes). We also report several ablation studies in Section 5.3 and Appendices B-F, and we evaluate performance on real-world TSPs from TSPLIB in Appendix G.

²See the source code at https://github.com/alstn12088/LCP

Training Hyperparameters. Throughout the entire training process of the seeder and the reviser, we use exactly the same hyperparameters as Kool et al. [12], except that the training batch size of our seeder is 1024. To train the seeder's policy, we set α = 0.5 in Eq. (2) and $N_w = \sum_{i=1}^{N} i$ for the linear weight $w_t = \frac{N-t}{N_w}$ used in entropy scheduling. Details of the experimental setting, including hyperparameters, dataset configuration, and run-time evaluation, are described in Appendix A.5.

5.1 Target Problems and Baselines

We evaluate the performance of LCP on three routing problems: TSP, PCTSP, and CVRP. Brief explanations follow; detailed descriptions are in Appendix A.1.

Traveling salesman problem (TSP). TSP is the problem of finding the shortest Hamiltonian cycle over given node sequences.

Prize collecting traveling salesman problem (PCTSP). In PCTSP [38], each node has a prize and a penalty. The goal is to visit nodes collecting at least a minimum total prize (constraint) while minimizing the tour length plus the penalties of unvisited nodes.

Capacitated vehicle routing problem (CVRP). In CVRP [39], each node has a demand, and a vehicle must terminate its tour when the total demand limit is exceeded (constraint). The objective is to minimize the tour length.

For the baselines, we use two types of algorithms: conventional heuristics and DRL-based solvers. For the conventional heuristics, we use Gurobi [7] (a commercial optimization solver) and OR-Tools [40] for all three problems. In Table 1, Gurobi (t) indicates time-limited Gurobi whose running time is restricted to at most t, and OR-Tools (t) is OR-Tools with additional local search over a duration of t. For problem-specific heuristics, we use Concorde [9] for TSP, iterated local search (ILS) [12] for PCTSP, and LKH3 [41] for CVRP.

For the DRL-based baselines, we concentrate on the ability of the LCP scheme to improve over AM. Validating that the two-policies collaboration scheme outperforms the single-policy scheme (i.e., AM) is a crucial part of this research; thus, the most important metric for performance evaluation is the improvement of AM + LCP over vanilla AM. We also reproduced other competitive DRL frameworks, namely the currently emerging improvement heuristics. We exclude recently proposed AM-style constructive heuristics, including POMO [26] and MDAM [27], because they are potential collaborators with LCP rather than competitors (e.g., POMO + LCP is possible). The evaluation baselines in Table 1 are set up as follows:

TSP. We follow the baseline settings of Kool et al. [12] and Costa et al. [19]. The DRL baselines include S2V-DQN [11], EAN [23], GAT-T [30], DRL-2opt [19], and AM [12]. We show the results of S2V-DQN and EAN as reported by Kool et al. [12] and the results of GAT-T as reported by Costa et al. [19]. We directly reproduce the two most competitive DRL frameworks among the baselines, AM and DRL-2opt, on our machine to make a fair speed comparison.

PCTSP. We follow the baseline setting of Kool et al. [12] and reproduce AM [12] as the DRL baseline.

CVRP. We follow the baseline setting of Hottung & Tierney [32]. We report the result of RL [22] based on Hottung & Tierney [32], and we reproduce AM [12] and NLNS [32].

5.2 Performance Evaluation

In this section, we report the performance of LCP on small-scale problems (N = 20, 50, 100) in Table 1 and then provide a time-performance trade-off analysis that includes large-scale problems (N = 500). We note that time-performance analysis is significant because any method can find an optimal solution given an infinite time budget. From the analysis, we identify a specific time region, called the winner region, where LCP performs best in terms of both speed and performance.

Performance evaluation on N = 20, 50, 100.
Our method outperforms all the DRL baselines and OR-Tools on TSP, PCTSP, and CVRP, as clearly shown in Table 1. Note that for TSP (N = 100) we applied two types of revisers, denoted LCP and LCP*, respectively; the details are described in Appendix A.4 with pseudo-code. LCP and LCP* outperform DRL-2opt, the current state-of-the-art DRL-based improvement heuristic, for N = 20, 50, 100, surpassing it by 0.33% for N = 100. On PCTSP, LCP outperforms AM in less time. Our method (AM + LCP {640, 1}) outperforms OR-Tools (10s) with 4× and 2× faster speed for N = 50, 100, respectively. Compared to ILS, our method (AM + LCP {1280, 5}) underperforms by 1.0% but is 11× faster for N = 100. For CVRP, our method outperforms the competitive DRL frameworks.

Time-performance analysis on N = 100, 500. Figure 4 presents the time-performance analysis. We cannot control the speed of Concorde, ILS, and LKH3, but we can control the speed of the DRL solvers by adjusting the sample width M or the number of iterations I. For PCTSP, we can change the speed of OR-Tools by managing the time allowed for additional local search. Our scheme clearly outperforms the DRL solvers in terms of both speed and performance. For PCTSP (N = 100, 500) and CVRP (N = 500), our method achieves the winner region for t < 10, i.e., it performs best in that time region among all baseline solvers (for CVRP (N = 100), our method achieves the winner region for t < 5).

Performance on TSPLIB [20] data: see Appendix G.

5.3 Ablation Study

In this section, we conduct an ablation study on the LCP components; further ablation studies are left to Appendices B-F.

Ablation study of collaborative policies. In Table 2, we ablate three significant components of LCP and show the experimental results for every case. Vanilla AM, having none of the LCP components, performs the poorest. In contrast, the collaboration of the seeder trained with linearly scheduled entropy and the reviser shows the best performance. The experimental results thus empirically validate our proposal of hierarchically collaborating two policies and demonstrate the effectiveness of the linearly scheduled entropy term presented in Section 4.1 and Figure 2.

Ablation study of entropy regularization: see Appendix B. Ablation study of SoftMax temperature: see Appendix C. Ablation study of the application of LCP to the pointer network [10, 21]: see Appendix D. Comparison of the reviser with other improvement heuristics: see Appendix E. Training convergence of the seeder and the reviser with different PyTorch seeds: see Appendix F.

6 Discussion

In this paper, we proposed a novel DRL scheme, learning collaborative policies (LCP). Extensive experiments demonstrate that our two-policies collaboration algorithm (i.e., LCP) outperforms conventional single-policy DRL frameworks, including AM [12], on various NP-hard routing problems, such as TSP, PCTSP, and CVRP. We highlight that LCP is a reusable scheme that can solve various problems. The neural architecture of the seeder and the reviser proposed in this paper is derived from AM [12]; it can be substituted by other architectures, such as the pointer network [10, 21] or AM-style architectures including POMO [26] and MDAM [27]. As further studies on neural architectures for combinatorial optimization are carried out, the seeder and the reviser can be improved further.
Also, LCP can be directly applied to other combinatorial optimization tasks, including TSP with time windows (TSPTW), the orienteering problem (OP), multiple TSP (mTSP), variations of the vehicle routing problem (VRP), and other practical applications.

Further Works. We made an important first step: a two-policies collaboration, in which each policy specializes in exploration or exploitation, can improve conventional single-policy systems on combinatorial optimization tasks. An important direction for further research is introducing more sophisticated strategies to explore or exploit the combinatorial solution space. New exploration strategies going beyond the proposed approximate entropy maximization scheme are needed, and it is also necessary to investigate more effective exploitation strategies beyond the proposed revision scheme.

Acknowledgements and Disclosure of Funding

This research is supported in part by the KAIST undergraduate research program (URP), 2019. We thank Hankook Lee and Prof. Jinwoo Shin for building part of this project in the URP. We thank Joonsang Park, Keeyoung Son, Hyunwook Park, Haeyeon Rachel Kim, and our anonymous reviewers for feedback and discussions.
1. What is the focus of the paper regarding deep reinforcement learning and routing problems?
2. What are the strengths of the proposed methodology involving two DRL agents?
3. What are the reviewer's concerns regarding the approach's applicability to other optimization domains?
4. How does the reviewer assess the effectiveness of the seeder-reviser methodology, and what suggestions do they have for improvement?
5. How does the reviewer evaluate the clarity and depth of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

Deep reinforcement learning (DRL) has recently been applied to a variety of optimization problems, outperforming traditional approaches. This paper investigates the application of DRL in the context of routing problems such as the classical travelling salesman problem. The methodology proposed involves two DRL agents: the seeder, which is tasked with generating candidates for the solution that provide good coverage of the solution space, and the reviser, which is tasked with attaining better solutions based on the reduced solution space provided by the seeder. To generate diverse candidates, the seeder utilizes an entropy regularization reward. The authors present experiments indicating that their framework, called learning collaborative policies (LCP), improves over previous DRL frameworks for various routing challenges (e.g., TSP and capacitated vehicle routing).

Review

Thank you for submitting to NeurIPS 2021. The paper is reasonably well written and the results are, overall, clearly explained. I found the approach presented for addressing routing problems (seeder + reviser) interesting. I wonder what makes routing problems special? Do you expect the same approach to attain good results in other optimization domains? Or is there something about the structure of TSP and the other routing problems discussed that is of special significance here? I encourage the authors to elaborate on this point.

I think that the paper could also benefit from more discussion of why the approach produces good results. Specifically, the seeder-reviser methodology is somewhat reminiscent of traditional local search via genetic algorithms, simulated annealing, etc., where many random solutions are generated and these are combined in various ways to generate better solutions. I realize that the argument is that the generation of such candidate solutions by the seeder is better because of the entropy regularization reward, and that employing DRL by the reviser somehow helps it generalize better, but I would have appreciated more discussion/investigation of this point.

Post author response: Thank you for your responses to my concerns. I still view this as a borderline paper but am adjusting my score to 6.
NIPS
Title Learning Collaborative Policies to Solve NP-hard Routing Problems Abstract Recently, deep reinforcement learning (DRL) frameworks have shown potential for solving NP-hard routing problems such as the traveling salesman problem (TSP) without problem-specific expert knowledge. Although DRL can be used to solve complex problems, DRL frameworks still struggle to compete with state-of-the-art heuristics showing a substantial performance gap. This paper proposes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find the near-optimum solution using two iterative DRL policies: the seeder and reviser. The seeder generates as diversified candidate solutions as possible (seeds) while being dedicated to exploring over the full combinatorial action space (i.e., sequence of assignment action). To this end, we train the seeder’s policy using a simple yet effective entropy regularization reward to encourage the seeder to find diverse solutions. On the other hand, the reviser modifies each candidate solution generated by the seeder; it partitions the full trajectory into sub-tours and simultaneously revises each sub-tour to minimize its traveling distance. Thus, the reviser is trained to improve the candidate solution’s quality, focusing on the reduced solution space (which is beneficial for exploitation). Extensive experiments demonstrate that the proposed two-policies collaboration scheme improves over single-policy DRL framework on various NP-hard routing problems, including TSP, prize collecting TSP (PCTSP), and capacitated vehicle routing problem (CVRP). 1 Introduction Routing is a combinatorial optimization problem, one of the prominent fields in discrete mathematics and computational theory. Among routing problems, the traveling salesman problem (TSP) is a canonical example. TSP can be applied to real-world problems in various engineering fields, such as robot routing, biology, and electrical design automation (EDA) [1, 2, 3, 4, 5] by expanding constraints and objectives to real-world settings : coined TSP variants are expanded version of TSP. However, TSP and its variants are NP-hard, making it challenging to design an exact solver [6]. Due to NP-hardness, solvers of TSP-like problems rely on mixed-integer linear programming (MILP) solvers [7] and handcrafted heuristics [8, 9]. Although they often provide a remarkable performance on target problems, the conventional approaches have several limitations. Firstly, in the case of MILP solvers, the objective functions and constraints must be formulated into linear forms, but many real-world routing applications, including biology and EDA, have a non-linear objective. Secondly, handcrafted heuristics rely on expert knowledge on target problems, thus hard to solve other problems. That is, whenever the target problem changes, the algorithm must also be re-designed. Deep reinforcement learning (DRL)-routing frameworks [10, 11, 12] is proposed to tackle the limitation of conventional approaches. One of the benefits of DRL is that reward of DRL can be any 35th Conference on Neural Information Processing Systems (NeurIPS 2021). value, even from a black-box simulator; therefore, DRL can overcome the limitations of MILP on real-world applications. Moreover, DRL frameworks can automatically design solvers relying less on a handcrafted manner. We note that the main objective of our research is not outperforming problem-specific solvers like the Concorde [9], a TSP solver. 
Our problem-solving strategy based on DRL, however, ultimately focuses on practical applications1 including intelligent transportation [13], biological sequence design [14], routing on electrical device [15] and device placement [16, 17]. Therefore, this paper evaluates the performance of DRL frameworks on TSP-like problems as a benchmark for potential applicability to practical applications, including speed, optimality, scalability, and expand-ability to other problems. TSP-like problems are excellent benchmarks as they have various baselines to compare with and can easily be modeled and evaluated. Contribution. This paper presents a novel DRL scheme, coined learning collaborative policies (LCP), a hierarchical solving protocol with two policies: seeder and reviser. The seeder generates various candidate solutions (seeds), each of which will be iteratively revised by the reviser to generate fine-tuned solutions. Having diversified candidate solutions is important, as it gives a better chance to find the best solution among them. Thus, the seeder is dedicated to exploring the full combinatorial action space (i.e., sequence of assignment action) so that it can provide as diversified candidate solutions as possible. It is important to explore over the full combinatorial action space because the solution quality highly fluctuates depending on its composition; however, exploring over the combinatorial action space is inherently difficult due to its inevitably many possible solutions. Therefore, this study provides an effective exploration strategy applying an entropy maximization scheme. The reviser modifies each candidate solution generated by the seeder. The reviser is dedicated to exploiting the policy (i.e., derived knowledge about the problem) to improve the quality of the candidate solution. The reviser partitions the full trajectory into sub-tours and revises each subtour to minimize its traveling distance in a parallel manner. This scheme provides two advantages: (a) searching over the restricted solution space can be more effective because the reward signal corresponding to the sub-tour is less variable than that of the full trajectory when using reinforcement learning to derive a policy, and (b) searching over sub-tours of seeds can be parallelized to expedite the revising process. The most significant advantage of our method is that the reviser can re-evaluate diversified but underrated candidates from the seeder without dropping it out early. Since the seeder explores the full trajectory, there may be a mistake in the local sub-trajectory. Thus, it is essential to correct such mistakes locally to improve the solution quality. The proposed revising scheme parallelizes revising process by decomposing the full solution and locally updating the decomposed solution. Thus it allows the revisers to search over larger solution space in a single inference than conventional local search (i.e., number of iteration of the reviser is smaller than that of conventional local search 2-opt [18], or DRL-based 2-opt [19]), consequently reducing computing costs. Therefore, we can keep the candidates without eliminating them early because of computing costs. The proposed method is an architecture-agnostic method, which can be applied to various neural architectures. The seeder and reviser can be parameterized with any neural architecture; this research utilizes AM [12], the representative DRL model on combinatorial optimization, to parameterize the seeder and the reviser. 
According to the experimental results, the LCP improves the target neural architecture AM [12], and outperforms competitive DRL frameworks on TSP, PCTSP, and CVRP (N = 20, 50, 100, 500, N : number of nodes) and real-world problems in TSPLIB [20]. Moreover, by conducting extensive ablation studies, we show proposed techniques, including entropy regularization scheme and revision scheme, clearly contribute to the performance improvement. 2 Related Works There have been continuous advances in DRL frameworks for solving various routing problems. DRL framework can generate solvers that do not rely on the ground-truth label of target problems: it can be applied to un-explored problems. DRL-based approaches can be categorized into two parts; 1These works [13, 14, 15, 16, 17] are inspired by DRL frameworks [10, 12] on combinatorial optimization constructive heuristics and improvement heuristics. We survey these two categories and current emerging hybrid approaches of machine learning (ML) with conventional solvers. 2.1 DRL-based Constructive Heuristics Bello et al. [10] introduced an actor-critic algorithm with a policy parameterized by the pointer network [21]. They proposed a constructive Markov decision process (MDP), where the action is defined as choosing one of the un-served nodes to visit, given a partial solution; the policy is trained to add a node to provide a complete solution sequentially. Later, DRL-based constructive heuristics were developed to design the architecture of neural networks while preserving the constructive MDP [10]. Khalil et al. [11] proposed a DRL framework with a graph embedding structure. Nazari et al. [22], Duedon et al. [23] and Kool et al. [12] redesigned the pointer network [21] using the transformer [24] and trained it with a policy gradient method [25]. The AM by Kool et al. [12] reports substantial results on various NP-hard routing problems, including TSP, PCTSP, CVRP, and orienteering problem (OP) in high-speed computation. AM-variants. After the meaningful success of the AM, many studies are expanded from the AM. Many engineering fields and industries apply AM into their domain. For example, Liao et al. [4] proposed a routing algorithm for the circuit using AM. Some researches focus on increasing the performances of AM on classic routing problems like TSP by simple techniques. Kwon et al. [26] proposed the POMO, effective reinforcement learning method for AM. They proposed a new RL baseline that can reduce the training variance of AM using the problem-specific property of TSP and CVRP. In addition, they presented an effective post-processing algorithm for TSP and CVRP. However, their proposed method has a limitation in that it is problem-specific because it uses the domain properties of TSP and CVRP (e.g., their method is limited to be applied to PCTSP.). Xin et al. [27] proposed AM-style DRL-model, MDAM, for NP-hard routing problems. Their method learns multiple AM decoders and derives various solutions through the multiple decoders. The goal of increasing the solution diversity is similar to our research. However, our study is different where it increases the entropy of a single decoder and improves the mistakes of various solutions through a reviser. 2.2 DRL-based Improvement Heuristics Unlike the constructive MDP, DRL-based improvement heuristics are designed to improve the completed solution iteratively. 
Most researches on DRL-based improvement heuristics are inspired by classical local search algorithms such as 2-opt [18] and the large neighborhood search (LNS) [28]. Chen et al. [29] proposed a DRL-based local search framework, termed NeuRewriter, that shows a promising performance on CVRP and job scheduling problems. Wu et al. [30], and Costa et al. [31] proposed a DRL-based TSP solver by learning the 2-opt. Their method improves the randomly generated solutions, unlike the method of Chen et al. [29] rewrites a solution given by a conventional heuristic solver. Hottung & Tierney [32] proposed a novel search method of VRP that destroys and repairs a solution repeatably inspired LNS. Their method gives promising performances on CVRP. Improvement heuristic approaches generally show better performance than constructive heuristics but are usually slower than constructive heuristics. In the case of TSP, the number of neural network’s inferences of constructive heuristics is the same as the number of cities to visit. However, the number of inferences of the improvement heuristics is generally much larger. 2.3 Hybrid Approaches with Conventional Solvers There are several studies on hybrid approaches with conventional solvers having promising performance recently. Lu et al. [33] proposed a hybrid method, where the policy is learned to control improvement operators (handcrafted heuristic). Significantly, they outperforms the LKH3, which is widely considered as mountain to climb in machine learning (ML) communities. Joshi et al. [34] combined graph neural network (GNN) model with the beam search algorithm. They trained the GNN with supervised learning for generating a hit map of candidate nodes. Then trained GNN reduces a searching space for improvement heuristics. Similarly, Fu et al. [35] combined supervised GNN model with Monte Carlo tree search (MCTS) and Kool et al. [36] combined supervised GNN model with dynamic programming. Their method achieves significant performances, showing ML method can effectively collaborate with conventional operational research (OR) methods. The research scope of hybrid approaches and DRL-based methods is different. Hybrid approaches can overcome classical solvers in target tasks by collaborating with the classical solvers. However, hybrid approaches have inherited limitations from classical solvers that are poor expandability to other tasks. The DRL-based method can be applied to various real-world tasks without a classic solver; we can also utilize DRL-based to unexplored tasks. This paper investigates the DRL-based NP-hard routing method without the help of classical solvers. 3 Formulation of Routing Problems This section explains the Markov decision process (MDP) formulation for the given 2D Euclidean TSP as a representative example. The formulation of MDP for other problems is described in Appendix A.1. The main objective of TSP is to find the shortest path of the Hamiltonian cycle. The TSP graph can be represented as a sequence of N nodes in 2D Euclidean space, s = {xi}Ni=1, where xi ∈ R2. Then, the solution of TSP can be represented as the permutation π of input sequences: π = t=N⋃ t=1 {πt}, πt ∈ {1, ..., N}, πt1 ̸= πt2 if t1 ̸= t2 The objective is minimizing the tour length L(π|s) = ∑N−1 t=1 ||xπt+1 − xπt ||2 + ||xπN − xπ1 ||2. Then, we formulate the constructive Markov decision process (MDP) of TSP. State. State of MDP is represented as a partial solution of TSP or a sequence of previously selected actions: π1:t−1. Action. 
An action is defined as selecting one of the un-served nodes. Therefore, the action is represented as $\pi_t$, where $\pi_t \in \{1, \dots, N\} \setminus \{\pi_{1:t-1}\}$.

Cumulative Reward. We define the cumulative reward of a solution (a sequence of assignments) for problem instance $s$ as the negative of the tour length: $-L(\pi|s)$.

Constructive Policy. Finally, we define the constructive policy $p(\pi|s)$ that generates a solution $\pi$ from a TSP graph $s$. The constructive policy $p(\pi|s)$ is decomposed as:

$$p(\pi|s) = \prod_{t=1}^{N} p_\theta(\pi_t|\pi_{1:t-1}, s),$$

where $p_\theta(\pi_t|\pi_{1:t-1}, s)$ is a single-step assignment policy parameterized by $\theta$.

4 Learning Collaborative Policies

This section describes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find a near-optimum solution using two hierarchical steps: the seeding process and the revising process (see Figure 1 for details). In the seeding process, the seeder policy $p_S$ generates $M$ diversified candidate solutions. In the revising process, the reviser policy $p_R$ re-writes each candidate solution $I$ times to minimize its tour length. The final solution is then selected as the best among the $M$ revised (updated) candidate solutions. See the pseudo-code in Appendix A.4 for a detailed technical explanation.

4.1 Seeding Process

The seeder generates candidate solutions as diversified as possible while being dedicated to exploring the full combinatorial action space. To this end, the seeder is designed and trained as follows.

Solution space. The solution space of the seeder is the set of full-trajectory solutions $\{\pi^{(1)}, \dots, \pi^{(M)}\}$, where $M$ is the number of candidate solutions from the seeder, termed the sample width.

Policy structure. The seeder is a constructive policy, as defined in section 3:

$$p_S(\pi|s) = \prod_{t=1}^{N} p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$$

The segment policy $p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$, parameterized by $\theta_S$, is derived from the AM [12].

Entropy Reward. To force the seeder policy $p_S$ to sample diverse solutions, we train $p_S$ such that its entropy $H$ is maximized. To this end, we use the reward $R_S$ defined as:

$$R_S = H\!\left(\pi \sim \prod_{t=1}^{N} p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\right) \approx \sum_{t=1}^{N} w_t H\!\left(\pi_t \sim p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\right) \quad (1)$$

The entropy of the constructive policy is an appropriate measure of solution diversity. However, computing it exactly is intractable because the search space is too large ($N!$). Therefore, we approximate it as a weighted sum of the entropies of the segment policies $p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)$ evaluated at different time steps. We use a linear scheduler (time-varying weights) $w_t = \frac{N-t}{N_w}$ to boost exploration at the earlier stages of composing a solution; the higher randomness imposed by a larger weight $w_t$ at an early stage tends to generate more diversified full trajectories later. $N_w$ is the normalizing factor, which is a hyperparameter.

Training scheme. To train the seeder, we use the REINFORCE [25] algorithm with the rollout baseline $b$ introduced by Kool et al. [12]. The gradient of the objective function is then expressed as follows:

$$\nabla J(\theta_S|s) = \mathbb{E}_{\pi \sim p_S}\!\left[\left(L(\pi|s) - \alpha R_S(p_{S_{1:N}}, \pi) - b(s)\right) \nabla \log(p_S)\right] \quad (2)$$

Note that $\alpha$ is the hyperparameter for $R_S$ and $p_{S_{1:N}}$ is the sequence of segment policies $\{p_{\theta_S}(\pi_t|\pi_{1:t-1}, s)\}_{t=1}^{N}$. We use the Adam [37] optimizer to obtain the optimal parameter $\theta^*$ that minimizes the objective function.
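To make the training scheme concrete, the following is a minimal PyTorch-style sketch of one seeder update under Eqs. (1)-(2). It is a sketch under stated assumptions, not the authors' implementation: the seeder.rollout interface (returning the sampled tours, per-step log-probabilities, and per-step entropies) and the baseline_rollout greedy baseline are hypothetical placeholders, and the linear schedule instantiates $N_w = \sum_{i=1}^{N} i$ as in the hyperparameter settings of Section 5.

import torch

def tour_length(tour, coords):
    # coords: (B, N, 2) node coordinates; tour: (B, N) permutation indices -> L(pi|s).
    ordered = coords.gather(1, tour.unsqueeze(-1).expand(-1, -1, 2))
    nxt = torch.roll(ordered, shifts=-1, dims=1)  # pairs x_{pi_t} with x_{pi_{t+1}}, closing the cycle
    return (ordered - nxt).norm(dim=-1).sum(dim=1)  # (B,)

def seeder_update(seeder, optimizer, coords, baseline_rollout, alpha=0.5):
    # One REINFORCE step with rollout baseline and entropy reward (Eqs. 1-2).
    N = coords.shape[1]
    tour, log_probs, step_entropies = seeder.rollout(coords)  # each (B, N); assumed interface

    # Approximate entropy reward R_S (Eq. 1): weighted sum of per-step entropies
    # with the linear schedule w_t = (N - t) / N_w, N_w = sum_{i=1}^{N} i.
    t = torch.arange(1, N + 1, dtype=coords.dtype, device=coords.device)
    w = (N - t) / (N * (N + 1) / 2)
    R_S = (w * step_entropies).sum(dim=1)  # (B,)

    # The advantage acts as a fixed reward signal, hence the detach().
    L = tour_length(tour, coords)
    b = tour_length(baseline_rollout(coords), coords)
    advantage = (L - alpha * R_S - b).detach()

    loss = (advantage * log_probs.sum(dim=1)).mean()  # surrogate whose gradient estimates Eq. (2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The same update can serve for the reviser by replacing the tour length with the partial segment length $L_R$ and setting alpha = 0, as described next in Section 4.2.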
4.2 Revision Process

In the revision process, given a candidate solution, the reviser decomposes it into $K$ segments and simultaneously finds the optimal routing sequence for each segment of each candidate solution. The reviser repeats this revising process $I$ times to find the best updated candidate solution. To be specific, the reviser sequentially updates the candidate solutions ($I$ times) by repeatedly decomposing the full trajectories computed in the previous iteration into segments and revising these segments to produce $M$ updated full-trajectory solutions. To sum up, the reviser solves $M \times K$ segments in parallel ($M$: number of candidate solutions, $K$: number of segments in each candidate solution), $I$ times in series.

The proposed scheme has advantages over conventional local search methods and DRL-based improvement heuristics: it searches a larger solution space in a single inference and therefore needs fewer iterations $I$. For example, 2-opt and DRL-2opt [19] search an $O(N^2)$ solution space ($O(MN^2)$ if parallelizable), while the reviser searches $O(MK \times l!)$, which is much larger (when the number of nodes $l$ in a segment is big enough), in a single inference. Hence we can reduce the number of iterations $I$ significantly compared to 2-opt or DRL-2opt [19], thus expediting the solution search (see Appendix E).

Solution space. The solution space of the reviser is a partial segment of a full-trajectory solution, represented as $\pi_{k+1:k+l}$, where $k$ is the starting index and $l$ is the number of nodes in the segment. For details on assigning segments, including $k$ and $l$, see Appendix A.3.

Policy structure. The reviser is a constructive policy:

$$p_R(\pi_{k+1:k+l}|s) = \prod_{t=1}^{l} p_{\theta_R}(\pi_{k+t}|\pi_{k:k+t-1}, \pi_{k+l+1}, s)$$

The segment policy $p_{\theta_R}$, parameterized by $\theta_R$, has a form similar to that of the AM [12]. Here, $\pi_k$ and $\pi_{k+l+1}$ indicate the starting point and the destination point of the partial segment, respectively (see the red points in Figure 3). We modify the context embedding vector $h^{(N)}_{(c)} = [\bar{h}^{(N)}, h^{(N)}_{\pi_{t-1}}, h^{(N)}_{\pi_1}]$ of the AM, which was designed for solving TSP. Here, $h$ is a high-dimensional embedding vector from the transformer-based encoder, $N$ is the number of multi-head attention layers, $\bar{h}^{(N)}$ is the mean of the entire embedding, $h^{(N)}_{\pi_{t-1}}$ is the embedding of the previously selected node, and $h^{(N)}_{\pi_1}$ is the embedding of the first node. However, since the destination of the reviser is $\pi_{k+l+1}$, not the first node $\pi_1$, we replace the embedding of the first node $h^{(N)}_{\pi_1}$ with the embedding of the last node $h^{(N)}_{\pi_{k+l+1}}$, giving the context embedding $h^{(N)}_{(c)} = [\bar{h}^{(N)}, h^{(N)}_{\pi_{k+t-1}}, h^{(N)}_{\pi_{k+l+1}}]$.

Revision Reward. The reward is the negative of the partial tour length $L_R(\pi_{k+1:k+l}|s) = \sum_{t=1}^{l+1} \lVert x_{\pi_{k+t}} - x_{\pi_{k+t-1}} \rVert_2$.

Training scheme. The training process is mostly the same as described in section 4.1, except that we replace the length term $L$ with $L_R$ and set $\alpha = 0$ to remove the entropy reward $R_S$ when training the reviser. Note that the seeder and the reviser are trained separately.
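To tie the two processes together before the experiments, below is a minimal sketch of the seed-and-revise inference loop (cf. Figure 1 and the pseudo-code in Appendix A.4). The seeder_sample and revise_segment callables stand in for the trained AM-style policies, and the fixed interior segment length l is a simplifying assumption; the actual segment assignment (Appendix A.3) may differ.

def lcp_solve(coords, seeder_sample, revise_segment, tour_cost, M=640, I=5, l=10):
    # Seeding: draw M diversified candidate tours (lists of node indices) from p_S.
    candidates = [seeder_sample(coords) for _ in range(M)]

    # Revising: I rounds in series; within a round, all M x K segments are
    # independent, so the paper batches them in parallel (shown sequentially here).
    for _ in range(I):
        for m, tour in enumerate(candidates):
            N = len(tour)
            revised, k = [tour[0]], 0
            while k + l + 1 < N:
                start, end = tour[k], tour[k + l + 1]  # fixed segment endpoints
                interior = revise_segment(coords, start, tour[k + 1 : k + l + 1], end)
                revised.extend(interior + [end])       # re-ordered interior nodes
                k += l + 1
            revised.extend(tour[k + 1 :])              # leftover tail shorter than l, unchanged
            candidates[m] = revised

    # Final solution: the best revised candidate under the tour-length objective.
    return min(candidates, key=lambda t: tour_cost(t, coords))

Note the design choice this sketch reflects: only the interior nodes of each segment are re-ordered, while the endpoints stay fixed, so every revised candidate remains a valid full tour.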
5 Experiments

This section reports the experimental results² of the LCP scheme on TSP, PCTSP, and CVRP (N = 20, 50, 100, 500; N: number of nodes). We also report several ablation studies in section 5.3 and Appendices B-F, and we evaluate performance on real-world TSPs from the TSPLIB in Appendix G.

Training Hyperparameters. Throughout the entire training process of the seeder and reviser, we use exactly the same hyperparameters as Kool et al. [12], except that the training batch size of our seeder is 1024. To train the seeder's policy, we set $\alpha = 0.5$ in (2) and $N_w = \sum_{i=1}^{N} i$ for the linear weight $w_t = \frac{N-t}{N_w}$ of the entropy schedule. Details of the experimental setting, including hyperparameters, dataset configuration, and run-time evaluation, are described in Appendix A.5.

²See the source code at https://github.com/alstn12088/LCP

5.1 Target Problems and Baselines

We evaluate the performance of LCP on three routing problems: TSP, PCTSP, and CVRP. We give a brief explanation of each; detailed descriptions are in Appendix A.1.

Travelling salesman problem (TSP). TSP is the problem of finding the shortest Hamiltonian cycle over given node sequences.

Prize collecting travelling salesman problem (PCTSP). In PCTSP [38], each node has a prize and a penalty. The goal is to visit nodes collecting at least a minimum total prize (constraint) while minimizing the tour length plus the penalties of unvisited nodes.

Capacitated vehicle routing problem (CVRP). In CVRP [39], each node has a demand, and a vehicle must terminate its tour when the total demand limit is exceeded (constraint). The objective is to minimize the tour length.

For the baseline algorithms, we use two types of methods: conventional heuristics and DRL-based solvers. For the conventional heuristics, we use Gurobi [7] and OR-Tools [40] (commercial optimization solvers) for all three problems. In Table 1, Gurobi (t) indicates time-limited Gurobi whose running time is restricted to at most t, and OR-Tools (t) is OR-Tools allowed additional local search over a duration of t. For problem-specific heuristics, we use Concorde [9] for TSP, iterative local search (ILS) [12] for PCTSP, and LKH3 [41] for CVRP.

For the DRL-based baselines, we concentrate on the ability of the LCP scheme to improve over the AM. Validating that the two-policies collaboration scheme outperforms the single-policy scheme (i.e., the AM) is a crucial part of this research; thus, the most important metric is the improvement of AM + LCP over vanilla AM. We also reproduced other competitive DRL frameworks, namely the currently emerging improvement heuristics. We exclude the recently proposed AM-style constructive heuristics, including POMO [26] and MDAM [27], because they are candidate collaborators with LCP rather than competitors (e.g., POMO + LCP is possible). The evaluation of the baselines in Table 1 is set up as follows:

TSP. We follow the baseline settings of Kool et al. [12] and Costa et al. [19]. We use DRL baselines including S2V-DQN [11], EAN [23], GAT-T [30], DRL-2opt [19], and AM [12]. We show the results of S2V-DQN and EAN as reported by Kool et al. [12] and the results of GAT-T as reported by Costa et al. [19]. We directly reproduce the two most competitive DRL frameworks among the baselines, AM and DRL-2opt, on our machine to make a fair speed comparison.

PCTSP. We follow the baseline setting of Kool et al. [12] and reproduce AM [12] as the DRL baseline.

CVRP. We follow the baseline setting of Hottung & Tierney [32]. We report the result of RL [22] based on Hottung & Tierney [32], and we reproduce AM [12] and NLNS [32].

5.2 Performance Evaluation

In this section, we report the performance of LCP on small-scale problems (N = 20, 50, 100) in Table 1. We then provide a time-performance trade-off analysis that includes large-scale problems (N = 500). We note that time-performance analysis is important because any method can find an optimal solution given an infinite time budget. From the analysis, we can identify a specific time region, called the winner region, where LCP performs best in terms of both speed and performance.

Performance evaluation on N = 20, 50, 100.
Our method outperforms all the DRL baselines and OR-Tools on TSP, PCTSP, and CVRP, as clearly shown in Table 1. Note that for TSP (N = 100) we applied two types of revisers, denoted LCP and LCP*, respectively; the details are described in Appendix A.4 with pseudo-code. Our LCP and LCP* outperform DRL-2opt, the current state-of-the-art DRL-based improvement heuristic, for N = 20, 50, 100, surpassing it by 0.33% for N = 100. On PCTSP, LCP outperforms AM in less time. Our method (AM + LCP {640, 1}) outperforms OR-Tools (10s) with 4x and 2x faster speed for N = 50 and 100, respectively. Compared to ILS, our method (AM + LCP {1280, 5}) underperforms by 1.0% but is 11x faster for N = 100. For CVRP, our method outperforms the competitive DRL frameworks.

Time-performance analysis on N = 100, 500. Figure 4 presents the time-performance analysis. We cannot control the speed of Concorde, ILS, and LKH3, but we can control the speed of the DRL solvers by adjusting the sample width M or the number of iterations I. For PCTSP, we can change the speed of OR-Tools by managing the time allowed for additional local search. Our scheme clearly outperforms the DRL solvers in terms of both speed and performance. For PCTSP (N = 100, 500) and CVRP (N = 500), our method achieves the winner region for t < 10, i.e., it performs best among all baseline solvers within that time region (for CVRP (N = 100), our method achieves the winner region for t < 5).

Performance on TSPLIB [20] data: see Appendix G.

5.3 Ablation Study

In this section, we conduct an ablation study on the LCP components; further ablation studies are given in Appendices B-F.

Ablation study of collaborative policies. In Table 2, we ablate the three major components of LCP and show the experimental results for every case. Vanilla AM, with none of the LCP components, performs the worst. The collaboration of a seeder trained with linearly scheduled entropy and the reviser shows the best performance. The experimental results therefore empirically validate our proposal of hierarchically collaborating policies and demonstrate the effectiveness of the linearly scheduled entropy term described in section 4.1 and Figure 2.

Ablation study of entropy regularization: see Appendix B. Ablation study of SoftMax temperature: see Appendix C. Ablation study of applying LCP to the pointer network [10, 21]: see Appendix D. Comparison of the reviser with other improvement heuristics: see Appendix E. Training convergence of the seeder and reviser under different PyTorch seeds: see Appendix F.

6 Discussion

In this paper, we proposed a novel DRL scheme, learning collaborative policies (LCP). Extensive experiments demonstrate that our two-policies collaboration algorithm (i.e., LCP) outperforms conventional single-policy DRL frameworks, including the AM [12], on various NP-hard routing problems such as TSP, PCTSP, and CVRP. We highlight that LCP is a reusable scheme that can solve various problems. The neural architecture of the seeder and reviser proposed in this paper is derived from the AM [12]; it can be substituted by other architectures, such as the pointer network [10, 21] and AM-style architectures including POMO [26] and MDAM [27]. If further studies on neural architectures for combinatorial optimization are carried out, the seeder and reviser can be improved further.
Also, LCP can be directly applied to other combinatorial optimization tasks, including TSP with time windows (TSPTW), the orienteering problem (OP), multiple TSP (mTSP), variations of the vehicle routing problem (VRP), and other practical applications.

Further Works. We made an important first step: a two-policies collaboration in which each policy, specializing in exploration or exploitation, can improve conventional single-policy systems on combinatorial optimization tasks. An important direction for further research is introducing more sophisticated strategies to explore or exploit the combinatorial solution space. New exploration strategies that improve on the proposed approximate entropy maximization scheme are needed, and it is also necessary to investigate more effective exploitation strategies beyond the proposed revision scheme.

Acknowledgements and Disclosure of Funding

This research is supported in part by the KAIST undergraduate research program (URP), 2019. We thank Hankook Lee and Prof. Jinwoo Shin for building part of this project in the URP. We thank Joonsang Park, Keeyoung Son, Hyunwook Park, Haeyeon Rachel Kim, and our anonymous reviewers for feedback and discussions.
1. What is the main contribution of the paper, and how does it compare to other optimization methods?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its scalability and real-world applicability?
3. Do you have any questions or concerns about the writing style, figures, and tables in the paper?
4. How does the reviewer assess the novelty and significance of the paper's content?
5. Are there any suggestions for improving the paper or its presentation?
Summary Of The Paper Review
Summary Of The Paper

This paper introduces Learning Collaborative Policies (LCP) for learning to optimize TSP-style routing problems. The goal of this was not to outperform all other optimizers, but to outperform RL optimizers, which are usually less effective than traditional approaches, but also are more scalable to task-variation. This makes this class of optimizers more suitable for real-world problems. LCP uses 2 policies to optimize a TSP: a seeder, and a reviser. The seeder generates many valid but diverse solutions (ensured with an entropy maximization term in the reward). The reviser learns to relax small sections of each seed to create more optimal solutions. Revision is repeated a set number of times and the best solution is given.

Review

Strengths:
• Elegant idea.
  - Macro-optimization through learned seeding + micro-optimization through revision.
• Beat baseline RL algorithms in 3 TSP variants with 20, 50, and 100 nodes.
  - Variants seemed representative (educated guess).
• Real-world applicability + scalability.
  - Some of these applications are mentioned.
• Figures are well-made and understandable.
• Ablation study is good.
• Reproducibility info including hyperparameters is mentioned and indexed in the appendix. Code will also be provided in the final version.

Weaknesses:
• Unprofessional writing.
  - Most starkly, "policies" is misspelled in the title.
• At times, information is not given in an easy-to-understand way.
  - E.g. lines 147-152, 284-289.
• Captions of figures do not help elucidate what is going on in the figure. This problem is mitigated by the quality of the figures, but it still makes it much harder to understand the pipeline of LCP and its components. More emphasis on that pipeline would help with the understanding.
• 100 nodes seem like a small maximum test size for TSP problems (though this is an educated guess). Many real-world problems have thousands or tens of thousands of nodes.
• Increase in optimality is either not very significant, or not presented to highlight its significance. It would be better to put the improvement into perspective.
• Blank spaces in Table 1 are unclear.

Opportunities:
• It would be good to describe why certain choices were made. For example, why is the REINFORCE algorithm used for training versus something like PPO? I presume it has to do with the attention model paper this one iterates on, but clarification would be good.
• More real-world uses of the algorithm could be included to better understand the societal impact, including details on how LCP could be integrated well.

The paper lacks a high degree of polish and professionalism, but its formatting (e.g. bolded inline subsubsections) and figures are its saving grace. The tables are also well structured, if a bit cluttered --- values are small and bolding is indistinct. This paper does a good job of giving this information and promises open-source code on publication. Overall, the paper and its presentation have several problems, but the idea seems elegant and useful.
NIPS
Title Learning Collaborative Policies to Solve NP-hard Routing Problems

Abstract

Recently, deep reinforcement learning (DRL) frameworks have shown potential for solving NP-hard routing problems such as the traveling salesman problem (TSP) without problem-specific expert knowledge. Although DRL can be used to solve complex problems, DRL frameworks still struggle to compete with state-of-the-art heuristics, showing a substantial performance gap. This paper proposes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find the near-optimum solution using two iterative DRL policies: the seeder and the reviser. The seeder generates as diversified candidate solutions (seeds) as possible while being dedicated to exploring the full combinatorial action space (i.e., the sequence of assignment actions). To this end, we train the seeder's policy using a simple yet effective entropy regularization reward that encourages the seeder to find diverse solutions. On the other hand, the reviser modifies each candidate solution generated by the seeder; it partitions the full trajectory into sub-tours and simultaneously revises each sub-tour to minimize its traveling distance. Thus, the reviser is trained to improve the candidate solution's quality, focusing on the reduced solution space (which is beneficial for exploitation). Extensive experiments demonstrate that the proposed two-policies collaboration scheme improves over the single-policy DRL framework on various NP-hard routing problems, including TSP, prize collecting TSP (PCTSP), and the capacitated vehicle routing problem (CVRP).

1 Introduction

Routing is a combinatorial optimization problem, one of the prominent fields in discrete mathematics and computational theory. Among routing problems, the traveling salesman problem (TSP) is a canonical example. TSP can be applied to real-world problems in various engineering fields, such as robot routing, biology, and electrical design automation (EDA) [1, 2, 3, 4, 5], by expanding its constraints and objectives to real-world settings; the so-called TSP variants are expanded versions of TSP. However, TSP and its variants are NP-hard, making it challenging to design an exact solver [6].

Due to NP-hardness, solvers of TSP-like problems rely on mixed-integer linear programming (MILP) solvers [7] and handcrafted heuristics [8, 9]. Although they often provide remarkable performance on target problems, the conventional approaches have several limitations. Firstly, in the case of MILP solvers, the objective functions and constraints must be formulated in linear form, but many real-world routing applications, including biology and EDA, have non-linear objectives. Secondly, handcrafted heuristics rely on expert knowledge of the target problem and are thus hard to transfer to other problems. That is, whenever the target problem changes, the algorithm must be re-designed.

Deep reinforcement learning (DRL) routing frameworks [10, 11, 12] have been proposed to tackle the limitations of conventional approaches. One of the benefits of DRL is that its reward can be any value, even one from a black-box simulator; therefore, DRL can overcome the limitations of MILP in real-world applications. Moreover, DRL frameworks can automatically design solvers in a less handcrafted manner. We note that the main objective of our research is not to outperform problem-specific solvers like Concorde [9], a TSP solver.
Our DRL-based problem-solving strategy, however, ultimately targets practical applications¹, including intelligent transportation [13], biological sequence design [14], routing on electrical devices [15], and device placement [16, 17]. Therefore, this paper evaluates the performance of DRL frameworks on TSP-like problems as a benchmark for potential applicability to practical applications in terms of speed, optimality, scalability, and expandability to other problems. TSP-like problems are excellent benchmarks, as they have various baselines to compare with and can easily be modeled and evaluated.

Contribution. This paper presents a novel DRL scheme, coined learning collaborative policies (LCP), a hierarchical solving protocol with two policies: seeder and reviser. The seeder generates various candidate solutions (seeds), each of which will be iteratively revised by the reviser to generate fine-tuned solutions. Having diversified candidate solutions is important, as it gives a better chance of finding the best solution among them. Thus, the seeder is dedicated to exploring the full combinatorial action space (i.e., the sequence of assignment actions) so that it can provide as diversified candidate solutions as possible. It is important to explore the full combinatorial action space because the solution quality fluctuates strongly depending on its composition; however, exploring the combinatorial action space is inherently difficult due to its inevitably many possible solutions. Therefore, this study provides an effective exploration strategy applying an entropy maximization scheme.

The reviser modifies each candidate solution generated by the seeder. The reviser is dedicated to exploiting the policy (i.e., derived knowledge about the problem) to improve the quality of the candidate solution. The reviser partitions the full trajectory into sub-tours and revises each sub-tour to minimize its traveling distance in a parallel manner. This scheme provides two advantages: (a) searching over the restricted solution space can be more effective because the reward signal corresponding to a sub-tour is less variable than that of the full trajectory when using reinforcement learning to derive a policy, and (b) searching over sub-tours of seeds can be parallelized to expedite the revising process.

The most significant advantage of our method is that the reviser can re-evaluate diversified but underrated candidates from the seeder without dropping them out early. Since the seeder explores the full trajectory, there may be mistakes in local sub-trajectories; it is essential to correct such mistakes locally to improve the solution quality. The proposed revising scheme parallelizes the revising process by decomposing the full solution and locally updating the decomposed parts. It thus allows the reviser to search over a larger solution space in a single inference than conventional local search (i.e., the number of iterations of the reviser is smaller than that of conventional local search, 2-opt [18], or DRL-based 2-opt [19]), consequently reducing computing costs. Therefore, we can keep the candidates without eliminating them early to save computation.

The proposed method is architecture-agnostic and can be applied to various neural architectures. The seeder and reviser can be parameterized with any neural architecture; this research utilizes the AM [12], a representative DRL model for combinatorial optimization, to parameterize the seeder and the reviser.
1. What is the main contribution of the paper regarding hierarchical strategies for routing problems?
2. What are the strengths of the proposed LCP framework, particularly in its ability to work efficiently across various problems?
3. Do you have any concerns about the weighting mechanism used in the entropy-regularized reward?
4. How do fixed parameters in Table 5 affect the comparison of provided results?
5. Can similar results be achieved for real instances, and how do characteristics of random instances impact results?
6. Was the performance of the seeder evaluated, and what kind of ablation test was used to evaluate the reviser's performance?
7. How does the revision process of candidate seeds impact LCP's ability to generate solutions?
8. Are there any investigations into LCP with smaller l (i.e., L) generating worse solutions than Drl2-opt?
9. Can the advantages of LCP be highlighted by comparing it to other RL-based solvers like AM and NLNS in terms of search space?
10. Why were relatively small values of I used for PCTSP and CVRP, while TSP used larger values? Can the concept of these used parameter values be added to the paper?
Summary Of The Paper Review
Summary Of The Paper

The paper addresses hard routing problems with a hierarchical strategy consisting of two DRL policies called the seeder and the reviser. Together with the two policies, the entropy-regularized reward is adopted to assess the diversity of candidate solutions. The reviser divides the tour into subtours and optimizes their lengths. Experimental results suggest that the proposed LCP framework works on TSP, PCTSP, and CVRP instances.

Review

As the authors mentioned, the proposed LCP framework is not designed to outperform problem-specific solvers (e.g., Concorde for TSP). Therefore, the authors formulate the routing problems as an MDP, and the task of the DRL policies is to sequentially design a new task (i.e., select one of the unserved tasks). Since the next task to be taken depends on the current status (and implicitly on the already assigned tasks), the MDP-based approach seems reasonable. In addition, the reward-based study covers various problem tasks. The idea of the seeder (preparing diverse solutions) and the reviser (updating solutions) is a kind of traditional approach to solve combinatorial problems, but the RL-based formulation seems to work efficiently on various problems. Technically the paper seems to be sound, but the structure of the paper is a bit hard to follow, although the authors revised the paper upon submission, because the paper covers branches of classes and shows various experimental comparisons (most of which are included in the Appendix).

Weights for diverse candidate routes. As far as I understand the proposed formulation correctly, the seeder follows a parameterized policy from the previous work [11]. Therefore, the important part of the LCP seems to come from the entropy-regularized reward to search diverse areas of the whole solution space, which is supported by the reviser part. Although the entropy is hard to compute exactly, a weighted sum over the selected routings is adopted, and the weights at the early stage are set higher. This weighting mechanism seems interesting for generating diverse solutions. Although this weighting scheme is a hyper-parameter (or hand-crafted setting), the experimental results indicated that the difference in weights does not affect the result too much (e.g., Table 5 of Appendix B). My concerns are: (a) Do the fixed parameters in Table 5 (e.g., N of TSP, I=10, T=2, ...) affect the comparison of the provided results? (b) Can we have similar results for real instances? As far as I know, the characteristics of random TSP (or other class) instances are hard to tune, and in some cases the results differ from those obtained on real instances. (c) In Appendix E, the performance of the reviser is evaluated by a kind of ablation test using Drl2-Opt. Did the authors evaluate the performance of the seeder (i.e., some heuristics + reviser)? If this kind of comparison is useless for some reason, please clarify it to highlight the advantage of the proposed method (or, for example, does Appendix D cover this kind of experiment?).

Revise. The revision process of candidate seeds is essential for LCP. The key background for designing the reviser is that solving small routing instances is easy (l = L-2). As noted in Sec. 4.2, DRL-2opt searches an O(N^2) solution space but the LCP searches O(MK l!). (d) I suspect that LCP with smaller l (i.e., L) could generate worse solutions than Drl2-opt. Is this correct, or did the authors investigate this case? (e) The authors mentioned only Drl-2Opt in Sec. 4.2.
Since the proposed LCP can be applied to PCTSP and CVRP, some RL-based solvers (e.g., AM, NLNS) should also be discussed from the viewpoint of search space to highlight the advantage of LCP, if possible. In addition to l, the parameter I should be carefully examined in my opinion. For PCTSP and CVRP, relatively small I is used (1, 5), while TSP uses I = 10 and 45. Why are such small values used? Please explain the rationale behind these parameter values if possible.
NIPS
Action is defined as selecting one of un-served tasks. Therefore, action is represented as πt where the πt ∈ {{1, ..., N} \ {π1:t−1}}. Cumulative Reward. We define cumulative reward for solution (a sequence of assignments) from problem instance s as negative of tourlength: −L(π|s). Constructive Policy. Finally we define constructive policy p(π|s) that generates a solution π from TSP graph s. The constructive policy p(π|s) is decomposed as: p(π|s) = t=N∏ t=1 pθ(πt|π1:t−1, s) Where pθ(πt|π1:t−1, s) is a single-step assignment policy parameterized by parameter θ. 4 Learning Collaborative Policies This section describes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find the near-optimum solution using two hierarchical steps, seeding process and revising process (see Figure 1 for detail). In the seeding process, the seeder policy pS generates M number of diversified candidate solutions. In the revising process, the reviser policy pR re-writes each candidate solution I times to minimize the tour length of the candidate. The final solution is then selected as the best solution among M revised (updated) candidate solutions. See pseudo-code in Appendix A.4 for a detailed technical explanation. 4.1 Seeding Process The seeder generates as diversified candidate solutions as possible while being dedicated to exploring the full combinatorial action space. To this end, the seeder is trained to solve the following problems. Solution space. Solution space of seeder is a set of full trajectory solutions : {π(1), ...,π(M)}. The M is the number of candidate solutions from the seeder: termed sample width. Policy structure. Seeder is a constructive policy, as defined in section 3 as follows: pS(π|s) = t=N∏ t=1 pθS (πt|π1:t−1, s) The segment policy pθS (πt|π1:t−1, s), parameterized by θS , is derived form AM [12]. Entropy Reward. To force the seeder policy pS to sample diverse solutions, we trained pS such that the entropy H of pS to be maximized. To this end, we use the reward RS defined as: RS = H ( π ∼ t=N∏ t=1 pθS (πt|π1:t−1, s) ) ≈ N∑ t=1 wtH (πt ∼ pθS (πt|π1:t−1, s)) (1) The entropy of constructive policy is appropriate for measuring solution diversity. However, computing the entropy of constructive policy is intractable because search space is too large: N !. Therefore, we approximate it as a weighted sum of the entropy of segment policies pθS (πt|π1:t−1, s) evaluated at different time step. We use a linear scheduler (time-varying weights) wt = N−tNw to boost exploration at the earlier stage of composing a solution; higher randomness imposed by the higher weight wt at the early stage tends to generate more diversified full trajectories later. The Nw is the normalizing factor, which is a hyperparameter. Training scheme. To train the seeder, we use the REINFORCE [25] algorithm with rollout baseline b introduced by Kool et al. [12]. Then the gradient of each objective function is expressed as follows: ∇J(θS |s) = Eπ∼pS [(L(π|s)− αRS(pS1:N ,π)− b(s))∇log(pS)] (2) Note that the α is hyperparameter for RS and pS1:N is the sequence of segment polices {pθS (πt|π1:t−1, s)}Nt=1. We use the ADAM [37] optimizer to obtain the optimal parameter θ∗ that minimizes the objective function. 4.2 Revision Process In the revision process, given a candidate solution, the reviser decomposes the candidate solution into K segments and simultaneously finds the optimum routing sequence for each segment of each candidate solution. 
The reviser repeats this revising process I times to find the best-updated candidate solution. To be specific, the reviser sequentially updates candidate solutions (I times) by repeatably decomposing the full trajectories computed from the previous iteration into segments and revising the segments to produce M updated full trajectory solutions. To sum up, reviser solves M ×K segments in parallel (M : number of candidate solutions, K: number of the segment in each candidate solution), I times in series. The proposed scheme has advantages over conventional local search methods or DRL-based improvement heuristics. It searches larger solution spaces in a single inference; therefore, it reduces iteration I . For example, 2-opt and DRL-2opt [19] search O(N2) solution space (if it is parallelizable, O(MN2)), while the reviser searches O(MK × l!) which is much larger (when the number of nodes of the segment l is big enough) in a single inference. Hence we can reduce the number of iterations I significantly compared to 2-opt, or DRL-2opt [19], thus expediting the speed of the solution search (see Appendix E). Solution space. Solution space of reviser is a partial segment of full trajectory solution represented as πk+1:k+l. The k is starting index, and l is the number of nodes of the segment. For details of assigning segment including k and l, see Appendix A.3. Policy structure. Reviser is a constructive policy as follows: pR(πk+1:k+l|s) = t=l∏ t=1 pθR(πk+t|πk:k+t−1, πk+l+1, s) The segment policy pθR , parameterized by θR, is in the similar form with that of AM [12]. Each πk and πk+l+1 indicate the starting point and the destination point of the partial segment, respectively (see red-points in Figure 3). We modify the context embedding vector h(N)(c) = [h̄ (N), h (N) πt−1 , h (N) π1 ] of AM, which is designed for solving TSP. Hence, h is a high dimensional embedding vector from the transformer-based encoder, and N is the number of multi-head attention layers. h̄(N) is the mean of the entire embedding, h (N) πt−1 is the embedding of previously selected nodes, and h (N) π1 is the embedding of the first node. However, since the destination of reviser is πk+l+1, not the first node π1 , we change the embedding of the first node h(N)π1 to be the embedding of the last node h (N) πk+l+1 for the context embedding as h (N) (c) = [h̄ (N), h (N) πk+t−1 , h (N) πk+l+1 ]. Revision Reward: negative of partial tour length LR(πk+1:k+l|s) = ∑l+1 t=1 ||xπk+t − xπk+t−1 ||2. Training scheme. The training process is mostly the same as described in section 4.1, except that we have modified the length term L to LR, and set α = 0 to remove entropy reward RS for training the reviser. Note that the seeder and reviser are trained separately. 5 Experiments This section reports the experimental results2 of the LCP scheme on TSP, PCTSP, and CVRP (N = 20, 50, 100, 500, N : number of nodes). Also, we report several ablation studies in section 5.3 and Appendix B-F. We evaluate performance on real-world TSPs in the TSPLIB in Appendix G. Training Hyperparamters. Throughout the entire training process of the seeder and reviser, we have exactly the same hyperparameters as Kool et al. [12], except that the training batch size of our seeder is 1024. To train the seeder’s policy, we set α = 0.5 (2) and Nw = ∑N i=1 i for linear weight wt = N−t Nw for entropy scheduling. Details in the experimental setting, including hyperparameters, dataset configuration, and run time evaluation, are described in Appendix A.5. 
5.1 Target Problems and Baselines We evaluate the performance of LCP in solving the three routing problems: TSP, PCTSP, and CVRP. We provide a brief explanation of them. The detailed descriptions for these problems are in Appendix A.1. Travelling salesman problem (TSP). TSP is a problem to find the shortest Hamiltonian cycle given node sequences. Price collecting travelling salesman problem (PCTSP). PCTSP [38] is a problem, where each node has a prize and a penalty. The goal is to collect the nodes with at least a minimum total prize (constraint) and minimize tour length added with unvisited nodes’ penalties. Capacitated vehicle routing problem (CVRP). CVRP [39] is a problem where each node has a demand, while a vehicle must terminate the tour when the total demand limit is exceeded (constraint). The objective is to minimize the tour length. For the baseline algorithms, we use two types of algorithms: conventional heuristics and DRL-based solvers. For the conventional heuristics, we use Gurobi [7] (the commercial optimization solver), and the OR Tools [40] (the commercial optimization solver) for all three problems. In Table 1, Gurobi (t) indicates time-limited Gurobi whose running time is restricted below t. In addition, OR Tools (t) is the OR Tools that allows additional local search over a duration of t. For problem-specific heuristics, we use Concorde [9] for TSP, the iterative local search (ILS) [12] for PCTSP, and LKH3 [41] for CVRP. For the baselines using DRL-based solvers, we concentrated on the ability of the LCP scheme, which is improved performance over AM. Validating that the two-policies collaboration scheme outperforms the single-policy scheme (i.e., AM) is a crucial part of this research; thus, the most important metric for performance evaluation is improvement between vanilla AM the AM + LCP. Also, we reproduced other competitive DRL frameworks: current emerging improvement heuristics. We exclude recently proposed AM-style constructive heuristics, including the POMO [26] and MDAM [27] because they can be candidate collaborators with LCP, not competitors (e.g., POMO + LCP is possible). The detailed method for evaluation baselines in Table 1 is described as follows: TSP. We follow baseline setting of Kool et al. [12] and Costa et al. [19]. We set DRL baselines including the S2V-DQN [11], EAN [23], GAT-T [30], DRL-2opt [19], and AM [12]. We show the results of S2V-DQN and EAN reported by Kool et al. [12], and the results of GAT-T reported by Costa et al. [19]. Then we directly reproduce the two most competitive DRL frameworks among baselines, the AM and DRL-2opt, in our machine to make a fair comparison of the speed. PCTSP. We follow baseline setting of Kool et al. [12]. We reproduce AM [12] for DRL baseline. 2See source code in https://github.com/alstn12088/LCP CVRP. We follow baseline setting of Houttung & Tierney [32]. We report result of RL [22] based on Houttung & Tierney [32] and we reproduce AM [12] and NLNS [32]. 5.2 Performance Evaluation In this section, we report the performance of LCP on small-scale problems (N = 20, 50, 100) in Table 1. Then we provide a time-performance trade-off analysis including large-scale problems (N = 500). We note that time-performance analysis is significant because any method can find an optimal solution when given an infinite time budget. From the analysis, we can identify a specific time region, called winner region, where LCP performs the best in terms of both speed and performance. Performance evaluation on N = 20, 50, 100. 
Our method outperforms all the DRL baselines and OR-tools in TSP, PCTSP, and CVRP, as clearly shown in Table 1. Note that for TSP (N = 100), we applied two types of revisers, each of which is denoted LCP and LCP*, respectively. The details are described in Appendix A.4 with pseudo-code. Our LCP and LCP* outperforms DRL-2opt, the current state-of-the-art DRL-based improvement heuristic in N = 20, 50, 100, surpass 0.33% in N = 100. In PCTSP, LCP outperforms AM with less time. Our method (AM + LCP {640,1}) outperforms the OR-Tools (10s), with 4× and 2× faster speed in N = 50, 100 respectively. Compared to the ILS, our method (AM + LCP {1280,5}) underperforms by 1.0%, but has 11 × faster speed for N = 100 . For CVRP, our method outperforms competitive DRL frameworks. Time-performance analysis on N = 100, 500. In Figure 4, we describe the time-performance analysis. We cannot control the speed of the Concorde, ILS, and LKH3. We can control the speed of DRL solvers by adjusting sample width M or the number of iterations I . For PCTSP, we can change the speed of OR-tools by managing the time for additional local searches. Our scheme clearly outperforms DRL-solvers in terms of both speed and performance. For PCTSP (N = 100, 500) and CVRP (N = 500), our method achieves the winner region of t < 10, which is best performed in a specific time region among all kind of baseline solvers (for CVRP (N = 100), our method achieves the winner region of t < 5). Performance on TSPLIB [20] data: see Appendix G. 5.3 Ablation Study In this section, we conduct an ablation study on LCP components. We leave further ablation studies to Appendix B-F. Ablation study of collaborative policies. In Table 2, we ablate three significant components of LCP and show the experimental results for every case. In the case of vanilla AM, having none of LCP components, the performance is the poorest. On the other hand, collaboration of seeder trained with linearly scheduled-entropy and the reviser shows the best performance. Therefore, the experimental results empirically validate our proposal of hierarchically collaborating two policies and also demonstrate the effectiveness of using a linearly scheduled-entropy term shown in section 4.1 and Figure 2. Ablation study of entropy regularization: see Appendix B. Ablation study of SoftMax temperature: see Appendix C. Ablation study of application of LCP to pointer network [10, 21]: see Appendix D. Comparison with reviser and other improvement heuristics: see Appendix E. Training convergence of seeder and reviser in different PyTorch seeds: see Appendix F. 6 Discussion In this paper, we proposed a novel DRL scheme, learning collaborative policies (LCP). The extensive experiments demonstrate that our two-policies collaboration algorithm (i.e., LCP) outperforms conventional single-policy DRL frameworks, including AM [12], on various NP-hard routing problems, such as TSP, PCTSP, and CVRP. We highlight that LCP is a reusable scheme, can solve various problems. The neural architecture of the seeder and reviser proposed in this paper is derived from AM [12]. It can be substituted by other architectures, such as the pointer network [10, 21] and AM-style architectures including POMO [26] and MDAM [27]. If further studies on neural architecture for combinatorial optimization are carried out, the seeder and reviser can be improved further. 
Also, LCP can be directly applied to other combinatorial optimization tasks, including TSP with time windows (TSPTW), orienteering problem (OP), multiple TSP (mTSP), variations of the vehicle routing problem (VRP), and other practical applications. Further Works. We made an important first step: two-policies collaboration where each policy specializing in exploration or exploitation can improve conventional single-policy systems on combinatorial optimization tasks. The important direction of further research is introducing more sophisticated strategies to explore or exploit combinatorial solution space. New exploration strategies for overcoming the proposed approximated entropy maximization scheme are needed. Also, it is necessary to investigate more effective exploitation strategies beyond the proposed revision scheme. Acknowledgements and Disclosure of Funding This research is supported in part by the KAIST undergraduates research program (URP), 2019. We thank Hankook Lee and Prof. Jinwoo Shin for building part of this project in the URP. We thank Joonsang Park, Keeyoung Son, Hyunwook Park, Haeyeon Rachel Kim, and our anonymous reviewers for feedback and discussions.
1. What is the main contribution of the paper regarding collaborative policies for routing problems? 2. What are the strengths of the proposed approach, particularly in its fully deep learning pipeline and diversity enhancement? 3. Are there any concerns or questions regarding the generation of sub-problems and the training process of the revision policy? 4. How does the reviewer assess the significance and originality of the paper's contributions? 5. What are some potential ablation studies that could further improve the method's performance?
Summary Of The Paper Review
Summary Of The Paper This paper proposes to learn collaborative policies for routing problems. Collaborative policies firstly learn to generate many diversified and primitive solution candidates. Then for each solution candidate, they segment the original problem into several sub-problems. Another neural network model is trained to improve these sub-problems. The improved solutions of the sub-problems can be merged to form a new solution for the original problem. They teste their methods on the TSP, PCTSP, CVRP problems and achieve the best performance among all the baselines. Review Originality: I think the method is a smart combination of "Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances" Fu et al., "Deep Policy Dynamic Programming for Vehicle Routing Problems" Kool et al and "Learning 2-opt Heuristics for the Traveling Salesman Problem via Deep Reinforcement Learning" Costa et al. However, the related work section mentions these two works but does not discuss the relation to these three works. Quality: The main contribution of this paper is to make the entire pipeline fully deep learning and do not require any label during the training process. The paper uses a neural network model trained with DRL to generate the primitive solutions and proposes an entropy regularizer term to increase the diversity of the solutions. Previous works typically supervised train a model to predict the importance of each edge and create the sub-problems (solutions). For the improving phase, this paper uses another neural network model trained through DRL to improve the sub-solutions. While the previous works leverage DP algorithm, MCTS, or require labels. The experiment results look good to me. The method beats all the fully neural baselines with reasonable time. Clarity: The paper has some important points that I am confused about. How did you generate the sub-problems (segments) of a candidate solution? I assume it is fully random and make sure the points you selected are connected? I think this step is vital in the entire process but it is not explained. For training the revision policy, how did you generate the training instances? It is exactly the same as the one for seeder policy? Significance: The method is smart to leverage the previous ideas in fully deep learning and no label required way. Questions: I think one interesting ablation study can be: what if you use a supervised trained model to generate the segments/sub-problems like what they did in "Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances". I am wondering how the entropy regularize help compared with a supervised method. For the TSP problems, the size of the sub-problems is 10 vertices. In this case, why a revision model is required since you can compute the optimal solution very fast for only 10 vertices? What're the results then? What if you increase the size of the sub-problems (which can potentially be generalized to huge TSP problems).
NIPS
Title On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood Abstract We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given n independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error n−1/3. This result improves upon the previous best accuracy threshold of n−1/4 achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every 1-Lipschitz property when n−1/3. N/A We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given n independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error n−1/3. This result improves upon the previous best accuracy threshold of n−1/4 achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every 1-Lipschitz property when n−1/3. 1 Introduction Given n independent samples y1, ..., yn ∈ D from an unknown discrete distribution p ∈ ∆D the problem of estimating properties of p, e.g. entropy, distance to uniformity, support size and coverage are among the most fundamental in statistics and learning. Further, the problem of estimating symmetric properties of distributions p (i.e. properties invariant to label permutations) are well studied and have numerous applications [Cha84, BF93, CCG+12, TE87, Für05, KLR99, PBG+01, DS13, RCS+09, GTPB07, HHRB01]. Over the past decade, symmetric property estimation has been studied extensively and there have been many improvements to the time and sample complexity for estimating different properties, e.g. support [VV11b, WY15], coverage [ZVV+16, OSW16], entropy [VV11b, WY16, JVHW15], and distance to uniformity [VV11a, JHW16]. Towards unifying the attainment of computationallyefficient, sample-optimal estimators a striking work of [ADOS17] provided a universal plug-in approach based on a (approximate) profile maximum likelihood (PML) distribution, that (approximately) maximizes the likelihood of the observed profile (i.e. multiset of observed frequencies). Formally, [ADOS17] showed that given y1, ..., yn if there exists an estimator for a symmetric property f achieving accuracy and failure probability δ, then this PML-based plug-in approach achieves error 2 with failure probability δ exp (3 √ n). As the failure probability δ for many estimators for well-known properties (e.g. support size and coverage, entropy, and distance to uniformity) is roughly exp (− 2n), this result implied a sample optimal unified approach for estimating these properties when the estimation error n−1/4. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). This result of [ADOS17] laid the groundwork for a line of work on the study of computational and statistical aspects of PML-based approaches to symmetric property estimation. 
For example, follow up work of [HS21] improved the analysis of [ADOS17] and showed that the failure probability of PML is at most δ1−c exp(−n1/3+c), for any constant c > 0 and therefore it is sample optimal in the regime n−1/3. The condition n−1/3 on the optimality of PML is tight [Han21], in the sense that, PML is known to be not sample optimal in the regime n−1/3. In fact, no estimator (that obeys some mild conditions), is sample optimal for estimating all symmetric properties in the regime n−1/3; see Section 2 after Theorem 2.6 for details. We also remark that the statistical guarantees in [ADOS17, HS21] hold for any β-approximate PML1 for suitable values of β. In particular, [HS21] showed that any β-approximate PML for β > exp(−n1−c′) and any constant c′ > 0, has a failure probability of δ1−c exp(n1/3+c + n1−c′) for any constant c > 0. These results further imply a sample optimal estimator in the regime n−min(1/3,c′/2) for properties with failure probability less than exp(− 2n). Note that better approximation leads to a larger range of for which the estimator is sample optimal. Regarding computational aspects of PML, [CSS19a] provided the first efficient algorithm with a non-trivial approximation guarantee of exp(−n2/3 log n), which further implied a sample optimal universal estimator for n−1/6. This was then improved by [ACSS21] which showed how to efficiently compute PML to higher accuracy of exp(− √ n log n) thereby achieving a sample optimal universal estimator in the regime n−1/4. The current best polynomial time approximate PML algorithm by [ACSS20] achieves an accuracy of exp(−k log n), where k is the number of distinct observed frequencies. Although this result achieves better instance based statistical guarantees, in the worst case it still only implies a sample optimal universal estimator in the regime n−1/4. In light of these results, a key open problem is to close the gap between the regimes n−1/3 and n−1/4, where the former is the regime in which PML based estimators are statistically optimal and the later is the regime where efficient PML based estimators exist. In this work we ask: Is there an efficient approximate PML-based estimator that is sample optimal for n−1/3. In this paper, we answer this question in the affirmative. In particular, we give an efficient PML-based estimator that has failure probability at most δ1−c exp(n1/3+c + n1−c ′ ), and consequently is sample optimal in the regime n−1/3. As remarked, this result is tight in the sense that PML and a broad class of estimators are known to be not optimal in the regime n−1/3. To obtain this result we depart slightly from the previous approaches in [ADOS17, CSS19a, ACSS21]. Rather than directly compute an approximate PML distribution we compute a weaker notion of approximation which we show suffices to get us the desired universal estimator. We propose a notion of a β-weak approximate PML distribution inspired by [HS21] and show that an exp(−n1/3 log n)weak approximate PML achieves the desired failure probability of δ1−c exp(n1/3+c) for any constant c > 0. Further, we provide an efficient algorithm to compute an exp(−n1/3 log n)-weak approximate PML distribution. Our paper can be viewed as an efficient algorithmic instantiation of [HS21]. Ultimately, our algorithms use the convex relaxation presented in [CSS19a, ACSS21] and provide a new rounding algorithm. 
We differ from the previous best exp(−k log n) approximate PML algorithm [ACSS20] only in the matrix rounding procedure which controls the approximation guarantee. At a high level, the approximation guarantee for the rounding procedure in [ACSS20] is exponential in the sum of matrix dimensions. In the present work, we need to round a rectangular matrix with an approximation exponential in the smaller dimension, which may be infeasible for arbitrary matrices. Our key technical innovation is to introduce a swap operation (see Section 4.1) which facilitates such an approximation guarantee. In addition to a better approximation guarantee than [ACSS20], our algorithm also exhibits better run times (see Section 2). Organization: We introduce preliminaries in Section 1.1. In Section 2, we state our main results and also cover related work. In Section 3, we provide the convex relaxation to PML studied in [CSS19a, ACSS21]. Finally, in Section 4, we provide a proof sketch of our main computational result. Many proofs are then differed to the appendix. 1β-approximate PML is a distribution that achieves a multiplicative β-approximation to the PML objective. 1.1 Preliminaries General notation: For matrices A,B ∈ Rs×t, we use A ≤ B to denote that Aij ≤ Bij for all i ∈ [s] and j ∈ [t]. We let [a, b] and [a, b]R denote the interval ≥ a and ≤ b of integers and reals respectively. We use Õ(·), Ω̃(·) notation to hide all polylogarithmic factors in n and N . We let an bn to denote that an ∈ Ω(bnnc) or bn ∈ O(n−can), for some small constant c > 0. Throughout this paper, we assume we receive a sequence of n independent samples from a distribution p ∈ ∆D, where ∆D def= {q ∈ [0, 1]DR | ∥∥q∥∥ 1 = 1} is the set of all discrete distributions supported on domain D. Let Dn be the set of all length n sequences of elements of D and for yn ∈ Dn let yni denoting its ith element. Let f(yn, x) def= |{i ∈ [n] | yni = x}| and px be the frequency and probability of x ∈ D respectively. For a sequence yn ∈ Dn, let M = {f(yn, x)}x∈D\{0} be the set of all its non-zero distinct frequencies and m1,m2, . . . ,m|M| be these distinct frequencies. The profile of a sequence yn denoted φ = Φ(yn) is a vector in Z|M|, where φj def = |{x ∈ D | f(yn, x) = mj}| is the number of domain elements with frequency mj . We call n the length of profile φ and let Φn denote the set of all profiles of length n. The probability of observing sequence yn and profile φ for distribution p are P(p, yn) = ∏ x∈D p f(yn,x) x and P(p, φ) = ∑ {yn∈Dn | Φ(yn)=φ} P(p, yn). Profile maximum likelihood: A distribution pφ ∈ ∆D is a profile maximum likelihood (PML) distribution for profile φ ∈ Φn if pφ ∈ argmaxp∈∆DP(p, φ). Further, a distribution p β φ is a βapproximate PML distribution if P(pβφ, φ) ≥ β · P(pφ, φ). For a distribution p and a length n, let X be a random variable that takes value φ ∈ Φn with probability P (p, φ). We call H(X) (entropy of X) the profile entropy with respect to (p, n) and denote it by H(Φn,p). Probability discretization: Let R def= {ri}i∈[1,`] be a finite discretization of the probability space, where ri ∈ [0, 1]R and ` def = |R|. We call q ∈ [0, 1]DR a pseudo-distribution if ‖q‖1 ≤ 1 and a discrete pseudo-distribution with respect to R if all its entries are in R as well. We use ∆Dpseudo and ∆DR to denote the set of all pseudo-distributions and discrete pseudo-distributions with respect to R respectively. In our work, we use the following most commonly used [CSS19a, ACSS21, ACSS20] probability discretization set. 
For any α > 0, Rn,α def = {1} ∪ { 1 2n2 (1 + n−α)i | for all i ∈ Z≥0 such that 1 2n2 (1 + n−α)i ≤ 1 } . (1) For all probability terms defined involving distributions p, we extend those definitions to pseudo distributions q by replacing px with qx everywhere. See ?? for the definition of an estimator and optimal sample complexity. 2 Results Here we provide our main results. In our first result (Theorem 2.2), we show that a weaker notion of approximate PML suffices to obtain the desired universal estimator. Later we show that these weaker approximate PML distributions can be efficiently computed (Theorem 2.3). Definition 2.1. Given a profile φ, we call a distribution p′ ∈ ∆D β-approximate PML distribution with respect to R if P (p′, φ) ≥ β ·maxq∈∆DR P ( q ‖q‖1 , φ ) . The above definition generalizes β-approximate PML distributions which is simply the special case when R = [0, 1]R. Using our new definition, we show that for a specific choice of the discretization set Rn,1/3, a distribution p′ that is an approximate PML with respect to Rn,1/3 suffices to obtain a universal estimator; this result is formally stated below. Theorem 2.2 (Competitiveness of an approximate PML w.r.t R). For symmetric property f , suppose there exists an estimator f̂ that takes input a profile φ ∈ Φn drawn from p ∈ ∆D and satisfies, P ( |f(p)− f̂(φ)| ≥ ) ≤ δ , then for R = Rn,1/3 (See Equation (1)), a discrete pseudo distribution q′ ∈ ∆DR such that q′/‖q′‖1 is an exp(−O(|R| log n))-approximate PML distribution with respect to the R satisfies, P (∣∣∣∣f ( q′‖q′‖1 ) − f(p) ∣∣∣∣ ≥ 2 ) ≤ δ1−c exp(O(n1/3+c)), for any constant c > 0 . (2) The proof of the above theorem is implicit in the analysis of [HS21], however we provide a short simpler proof using their continuity lemma (Lemma 2 in [HS21]). Note that the bound on the failure probability we get is the same asymptotically as that of exact PML from [HS21], which is known to be tight [Han21]. Furthermore, to achieve such an improved failure probability bound all we need is an approximate PML distribution with respect to R, for some R which is of small size. Taking advantage of this fact and building upon [CSS19a, ACSS21], we provide a new rounding algorithm that outputs the desired approximate PML distribution with respect to R. Theorem 2.3 (Computation of an approximate PML w.r.t R). We provide an algorithm that given a probability discretization set R = Rn,α for α > 0 (See Equation (1)) and a profile φ with k distinct frequencies, runs in time Õ ( |R|+ nmin(k,|R|) ( min(|R|, n/k)kω + min(|R|, k)k2 )) , where ω < 2.373 is the current matrix multiplication constant [Wil12, Gal14, AW21] and returns a pseudo distribution q′ ∈ ∆DR such that, P ( q′ ‖q′‖1 , φ ) ≥ exp (−O(min(k, |R|) log n)) · max q∈∆DR P ( q ‖q‖1 , φ ) . When R = Rn,1, our algorithm computes an exp(−O(k log n)) approximate PML distribution, therefore our result is at least as good as the previous best known approximate PML algorithm due to [ACSS20]. In comparison to [ACSS20], our rounding algorithm is simpler and we suspect, more practical. We provide a more detailed comparison to it later in this section. Applications: Our main results have several applications which we discuss here. First note that, combining Theorem 2.2 and 2.3 immediately yields the following corollary. Corollary 2.4 (Efficient unified estimator). 
Given a profile φ ∈ Φn with k distinct frequencies, we can compute an approximate PML distribution q′ that satisfies Equation (2) in Theorem 2.2 in time Õ ( n min(k,n1/3) ( min(n1/3, n/k)kω + min(n1/3, k)k2 )) . For many symmetric properties the failure probability is exponentially small as stated below. Lemma 2.5 (Lemma 2 in [ADOS17], Theorem 3 in [HS21]). For distance to uniformity, entropy, support size and coverage, and sorted `1 distance there exists an estimator that is sample optimal and the failure probability is at most exp(− 2n1−α) for any constant α > 0. The above result combined with Corollary 2.4, immediately yields the following theorem. Theorem 2.6 (Efficient sample optimal unified estimator). There exists an efficient approximate PML-based estimator that for n−1/3 and symmetric properties such as, distance to uniformity, entropy, support size and coverage, and sorted `1 distance achieves optimal sample complexity and has failure probability upper bounded by exp(−n1/3). As our work computes an exp(−O(k log n)) approximate PML, we recover efficient version of Lemma 2.3 and Theorem 2.4 from [ACSS20]. The first result uses exp(−O(k log n)) approximate PML algorithm to efficiently implement an estimator that has better statistical guarantees based on profile entropy [HO20] (See Section 1.1). The second result provides an efficient implementation of the PseudoPML estimators [CSS19b, HO19]. Please refer to the respective papers for further details. Tightness of our result: Recall that [HS21] showed that the failure probability of an (approximate) PML based estimator is upper bounded by δ1−c exp(−n1/3+c), for any constant c > 0. This result further implied a sample optimal universal estimator in the regime n−1/3 for various symmetric properties (Theorem 2.6). In our work, we efficiently recover these results and a natural question to ask here is if these results can be improved. As remarked earlier, [Han21] showed that the condition for optimality of PML ( n−1/3) is in some sense tight. More formally, they showed that PML is not sample optimal in estimating every 1-Lipschitz property in the regime n−1/3. In fact, the results in [Han21] hold more broadly for any universal plug-in based estimator that outputs a distribution p̂ satisfying, max p∈∆D E‖p− p̂‖sorted1 ≤ A(n) √ k/n , where A(n) ≤ nγ for every γ > 0 and ‖p − q‖sorted1 def = minpermutations σ ‖p − qσ‖1 denotes the sorted `1 distance between p and q. In other words, if an estimator is based on a reasonably good estimate of the true distribution p (in terms of sorted-`1 distance), then it cannot be sample optimal for every 1-Lipschitz property. Furthermore, many well-known universal estimators including PML and LLM [HJW18] indeed provide a reasonably good estimate of the true distribution and therefore cannot be sample optimal in the regime n−1/3. Please refer to [Han21] for further details. Comparison to approximate PML algorithms: All prior provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] have two key steps: (Step 1) solve a convex approximation to the PML and (Step 2) round the (fractional) solution to a valid approximate PML distribution. A convex approximation to PML was first provided in [CSS19a] and a better analysis for it is shown in [ACSS21]. 
In particular, [CSS19a] and [ACSS21] showed that an integral optimal solution to step 1 approximates the PML up to accuracy exp(−n2/3 log n) and exp(−min(k, |R|) log n) respectively, where k and |R| are the number of distinct frequencies and distinct probability values respectively. In addition to the loss from convex approximation, the previous algorithms also incurred a loss in the rounding step (Step 2). The loss in the rounding step for the previous works is bounded by exp(−n2/3 log n) [CSS19a], exp(− √ n log n) [ACSS21] and exp(−k log n) [ACSS20]. In our work, we show that there exists a choice of R (=Rn,1/3) that is of small size (|R| ≤ n1/3) and suffices to get the desired universal estimator. As |R| ≤ n1/3, our approach only incur a loss of exp(−min(k, |R|) log n) ∈ exp(−n1/3 log n) in the convex approximation step (Step 1). Furthermore for the rounding step (Step 2), we provide a new simpler and a practical rounding algorithm with a better approximation loss of exp(−O(min(k, |R|) log n)) ∈ exp(−O(n−1/3 log n)). Regarding the run times, both [ACSS20] and ours have run times of the form Tsolve+Tsparsify+Tround, where the terms correspond to the time required to solve the convex program, sparsify and round a solution. In our algorithm, we pay the same cost as [ACSS20] for the first two steps but our run time guarantees are superior to theirs in the rounding step. In particular, the run time of [ACSS20] is shown as a large polynomial and perhaps not practical as their approach requires enumerating all the approximate min cuts. In contrast, our algorithm has a run time that is subquadratic. Other related work PML was introduced by [OSS+04]. Many heuristic approaches have been proposed to compute approximate PML, such as the EM algorithm in [OSS+04], an algebraic approaches in [ADM+10], Bethe approximation in [Von12] and [Von14], and a dynamic programming approach in [PJW17]. For the broad applicability of PML in property testing and to estimate other symmetric properties please refer to [HO19]. Please refer to [HO20] for details related to profile entropy. Other approaches for designing universal estimators are: [VV11b] based on [ET76], [HJW18] based on local moment matching, and variants of PML by [CSS19b, HO19] that weakly depend on the target property that we wish to estimate. Optimal sample complexities for estimating many symmetric properties were also obtained by constructing property specific estimators, e.g. support [VV11b, WY15], support coverage [OSW16, ZVV+16], entropy [VV11b, WY16, JVHW15], distance to uniformity [VV11a, JHW16], sorted `1 distance [VV11a, HJW18], Renyi entropy [AOST14, AOST17], KL divergence [BZLV16, HJW16] and others. Limitations of our work One of the limitations of all the provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] (including ours) is that they require the solution of a convex program that approximates the PML objective and all these previous works use the CVX solver which is not practical for large sample instances; note that our results hold for small error regimes which lead to such large sample instances. Therefore, designing a practical algorithm to solve the convex program is an important future research direction. As discussed above, local moment matching (LLM) based approach is another universal approach for property estimation. It is unclear which of the two (PML or LLM) can lead to practical algorithms. 3 Convex relaxation to PML Here we restate the convex program from [CSS19a] that approximates the PML objective. 
The current best analysis of this convex program is in [ACSS21]. We first describe the notation and later state several results from [CSS19a, ACSS21] that capture the guarantees of the convex program. Notation: For any matrices X ∈ Ra×c and Y ∈ Rb×c, we let concat(X,Y) denote the matrix W ∈ R(a+b)×c that satisfies, Wi,j = Xi,j for all i ∈ [a] and j ∈ [c] and Wa+i,j = Yij for all i ∈ [b] and j ∈ [c]. Recall we let R def= {ri}i∈[`] be a finite discretization of the probability space, where ri ∈ [0, 1]R and ` def = |R|. Let r ∈ [0, 1]`R be a vector whose i’th element is equal to ri. Lemma 3.1 (Lemma 4.4 in [CSS19a]). Let R = Rn,α for some α > 0. For any profile φ ∈ Φn and distribution p ∈ ∆D, there exists a pseudo distribution q ∈ ∆DR that satisfies P(p, φ) ≥ P(q, φ) ≥ exp (−αn− 6)P(p, φ) and therefore, max p∈∆D P(p, φ) ≥ max q∈∆DR P(q, φ) ≥ exp (−αn− 6) max p∈∆D P(p, φ) . For any probability discretization set R, profile φ and pseudo distribution q ∈ ∆DR , define: ZφR def = { X ∈ R`×[0,k]≥0 ∣∣∣ X1 ∈ Z`, [X>1]j = φj for all j ∈ [1, k] and r>X1 ≤ 1} , (3) Zφ,fracR def = { X ∈ R`×[0,k]≥0 ∣∣∣ [X>1]j = φj for all j ∈ [1, k] and r>X1 ≤ 1} . (4) The j’th column corresponds to frequency mj and we use m0 def = 0 to capture the unseen elements. Without loss of generality, we assume m0 < m1 < · · · < mk. Let Cij def = mj log ri for all i ∈ [`] and j ∈ [0, k]. The objective of the optimization problem is follows: for any X ∈ R`×[0,k]≥0 define, g(X) def= exp ( ∑ i∈[`],j∈[0,k] [CijXij − Xij log Xij ] + ∑ i∈[`] [X1]i log[X1]i ) . (5) For any q ∈ ∆DR , the function g(X) approximates the P(q, φ) term and is stated below. Lemma 3.2 (Theorem 6.7 and Lemma 6.9 in [ACSS21]). Let R be a probability discretization set. For any profile φ ∈ Φn with k distinct frequencies the following statements hold for α = min(k, |R|) log n: exp (−O(α)) · Cφ · maxX∈ZφR g(X) ≤ maxq∈∆DR P(q, φ) ≤ exp (O (α)) · Cφ · maxX∈ZφR g(X) and maxq∈∆DR P(q, φ) ≤ exp (O (min(k, |R|) log n))·Cφ ·maxX∈Zφ,fracR g(X) , where Cφ def = n!∏ j∈[1,k](mj !) φj is a term that only depends on the profile.2 The proof of concavity for the function g(X) and a running time analysis to solve the convex program are provided in [CSS19a]. For any X ∈ ZφR, a pseudo-distributions associated with it is defined below. Definition 3.3. For any X ∈ ZφR, the discrete pseudo-distribution qX associated with X and R is defined as follows: for arbitrary [X1]i number of domain elements assign probability ri. Further pX def = qX/‖qX‖1 is the distribution associated with X and R. Note that qX is a valid pseudo-distribution because of the third condition in Equation (3) and these pseudo distributions pX and qX satisfy the following lemma. Lemma 3.4 (Theorem 6.7 in [ACSS21]). Let R and φ ∈ Φn be a probability discretization set and a profile with k distinct frequencies. For any X ∈ ZφR, the discrete pseudo distribution qX and distribution pX associated with X and R satisfy: exp (−O(k log n))Cφ · g(X) ≤ P(qX, φ) ≤ P(pX, φ) . 2The theorem statement in [ACSS21] is only written with an approximation factor of exp(O(k logn)). However, their proof provides a stronger approximation factor which is upper bounded by the non-negative rank of the probability matrix, which in turn is upper bounded by the minimum of distinct frequencies and distinct probabilities. Therefore the theorem statement in [ACSS21] holds with a much stronger approximation guarantee of exp (O (min(k, |R|) logn)). 
4 Approximate PML algorithm Here we provide a proof sketch of Theorem 2.3 and provide a rounding algorithm that proves it. Our rounding algorithm takes as input a matrix X ∈ Zφ,fracR which may have fractional row sums and round it to integral values. This new rounded matrix Xfinal corresponds to our approximate PML distribution (See Definition 3.3). The description of our algorithm is as follows. Algorithm 1 ApproximatePML(φ,R = Rn,α) 1: Let X be any solution that satisfies, log g(X) ≥ maxY∈Zφ,fracR log g(Y)−O (min(k, |R|) log n). 2: X′ = sparsify(X). 3: (A,B) = swapmatrixround(X′). 4: (Xfinal,Rfinal) = create(A,B,R) 5: Let p′ be the distribution with respect to Xfinal and Rfinal (See Definition 3.3). 6: Return q = discretize(p′, φ,R) We now provide a guarantee for each of these lines of Algorithm 1. We later use these guarantees to prove our final theorem (Theorem 2.3). The guarantees of the approximate maximizer X computed in the first step of the algorithm are summarized in the following lemma. Lemma 4.1 ([CSS19a, ACSS21]). Line 1 of the algorithm can be implemented in Õ(|R|k2 + |R|2k) time and the approximate maximizer X satisfies: Cφ · g(X) ≥ exp (−O (min(k, |R|) log n)) maxq∈∆DR P(q, φ) . The guarantees of the second step of our algorithm are summarized in the following lemma. Please refer to [ACSS20] for the description of the procedure sparsify. We use this procedure so that we can assume |R| ≤ k + 1 as we can ignore the zero rows of the matrix X. Lemma 4.2 (Lemma 4.3 in [ACSS20]). For any X ∈ Zφ,fracR , the algorithm sparsify(X) runs in time Õ(|R| kω) and outputs X′ ∈ Zφ,fracR such that: g(X ′) ≥ g(X) and ∣∣{i ∈ [`] | [X′−→1 ]i > 0}∣∣ ≤ k+ 1 . To explain our next step, we need to define a new operation called the swap. Definition 4.3. Given a matrix A, indices i1 < i2, j1 < j2 and a parameter ≥ 0, the operation swap(A, i1, i2, j1, j2, ) outputs a matrix A′ that satisfies, A′ij = Ai,j + for i = i1, j = j1 Ai,j − for i = i1, j = j2 , Ai,j − for i = i2, j = j1 Ai,j + for i = i2, j = j2 , Aij otherwise. (6) Definition 4.4 (Swap distance). A′ is x-swap distance from A, if A′ can be obtained from A through a sequence of swap operations and the summation of the value ’s in these operations is at most x, i.e. there is a set of parameters {(i(s)1 , i (s) 2 , j (s) 1 , j (s) 2 , (s))}s∈[t], where ∑ s∈[t] (s) ≤ x, such that A(s) = swap(A(s−1), i(s)1 , i (s) 2 , j (s) 1 , j (s) 2 , (s)) for s ∈ [t], where A(0) = A and A(t) = A′. The following lemma directly follows from Definition 4.3 and Definition 4.4. Lemma 4.5. For any matrices A,A′ ∈ Rs×t, if A′ is x-swap distance from A for some x ≥ 0, then A′ −→ 1 = A −→ 1 and A′> −→ 1 = A> −→ 1 . Recall that our objective g(X) contains two terms: (1) the linear term ∑ i∈[`],j∈[0,k] CijXij and (2) the entropy term ∑ i∈[`][X −→ 1 ]i log[X −→ 1 ]i − ∑ i∈[`],j∈[0,k] Xij log Xij . The swap operation always increases the first term, and in the following lemma we bound the loss due to the second term. Lemma 4.6. If A′ ∈ R`×[0,k] is x-swap distance from A ∈ Zφ,fracR , then, A ′ ∈ Zφ,fracR and g(A ′) ≥ exp(−O(x log n))g(A). One of the main contributions of our work is the following lemma, where we repeatedly apply swap operation to recover a matrix A which exhibits several nice properties as stated below. Lemma 4.7. For any matrix A ∈ Rs×t (s ≤ t) that satisfies A>−→1 ∈ Zt≥0. The algorithm swapmatrixround runs in O(s2t) time and returns matrices A′ and B such that, • A′ is O(s)-swap distance from A, A′ −→ 1 = A −→ 1 and A′> −→ 1 = A> −→ 1 . 
• 0 ≤ Bij ≤ A′ij for all i ∈ [s] and j ∈ [t], B −→ 1 ∈ Zs≥0, B >−→1 ∈ Zt≥0 and ‖A ′−B‖1 ≤ O(s). The above lemma helps us modify our matrix X to a new matrix A that we can round using the create procedure. The guarantees of this procedure are summarized below. Lemma 4.8 (Lemma 6.13 in [ACSS21]). For any A ∈ Zφ,fracR ⊆ R `×[0,k] ≥0 and B ∈ R `×[0,k] ≥0 such that B ≤ A, B−→1 ∈ Z`, B>−→1 ∈ Z[0,k] and ‖A − B‖1 ≤ t. The algorithm create(A,B,R) runs in time O(`k) and returns a solution A′ and a probability discretization set R′ such that |R′| ≤ |R|+ min(k + 1, t), A′ ∈ ZφR′ and g(A ′) ≥ exp (−O (t log n)) g(A) . As our final goal is to return a distribution in ∆DR , we also use the following discretization lemma. Lemma 4.9. The function discretize takes as input a distribution p ∈ ∆D with `′ distinct probability values, a profile φ, a discretization set of the form R = Rn,α for some α > 0 and outputs a pseudo distribution q ∈ ∆DR such that: P ( q ‖q‖1 , φ ) ≥ exp(−O(min(k, |R|) + min(k, `′) + α2n) log n)P (p, φ) . In Section 5, we use the guarantees stated above for each line of Algorithm 1 to prove Theorem 2.3. The description of the function discretize is specified in the proof of Lemma 4.9. We describe the procedure swapmatrixround and provide a proof sketch of Lemma 4.7 in Section 4.1. 4.1 Description of swapmatrixround and comparison to [ACSS20] Here we describe the procedure swapmatrixround and compare our rounding algorithm to [ACSS20]. Both of [ACSS20] and our approximate PML algorithm have four main lines (1-4); we differ from [ACSS20] in the key Line 3. This line in [ACSS20] invokes a procedure called matrixround that takes as input a matrix A ∈ R`×[0,k] and outputs a matrix B ∈ R`×[0,k] such that: B ≤ A, B−→1 ∈ Z`≥0, B>−→1 ∈ Z[0,k]≥0 and ‖A−B‖1 ≤ O(`+ k). Such a matrix B is crucial as the procedure create uses B to round fractional row sums of matrix A to integral values. The error incurred in these two steps is at most exp(O(‖A− B‖1 log n)) ∈ exp(O((`+ k) log n)). As the procedure sparsify allows us to assume ` ≤ k+1, we get an exp(−k log n) approximate PML using [ACSS20]. However, the setting that we are interested in is when ` k; for instance when ` ∈ O(n1/3) and k ∈ Θ( √ n). In these settings, we desire an exp(−O(min(`, k) log n)) ∈ exp(−O(` log n)) approximate PML. In order to get such an improved approximation using [ACSS20], we need a matrix B satisfying the earlier mentioned inequalities along with ‖A−B‖1 ≤ O(min(k, `)). However, such a matrix B may not exist for arbitrary matrices A and the best guarantee any algorithm can achieve is ‖A− B‖1 ∈ O(`+ k). To overcome this, we introduce a new procedure called swapmatrixround that takes as input, a matrix A and transforms it to a new matrix A′ that satisfies: g(A′) ≥ exp(−O(min(k, `) log n))g(A). Furthermore, this transformed matrix A′ exhibits a matrix B that satisfies the guarantees: B ≤ A′, B−→1 ∈ Z`≥0, B >−→1 ∈ Zk≥0 and ‖A ′ − B‖1 ≤ O(`). These matrices A′ and B are nice in that we can invoke the procedure create, which would output a valid distribution with required guarantees. In the following we provide a description of the algorithm that finds these matrices A′ and B. Algorithm 2 swapmatrixround(A) 1: Let A(0) = A and D(0) = 0. 2: for r = 1 . . . ` do 3: (Y, j) = partialRound(A(r−1), r) 4: A(r) = roundiRow(Y, j, r). 5: D(r) = D(r−1) + Y− A(r). 6: end for 7: Return A′ = D(`) + A(`) and B = A(`). Our algorithm includes two main subroutines: partialRound and roundiRow. 
At each iteration i, the procedure partialRound considers row i and modifies it by repeatedly applying the swap operation. This modified row is nice as the procedure roundiRow can round this row to have an integral row sum while not affecting the rows in [i − 1]. By iterating through all rows, we get the required matrices A′ and B that satisfy the required guarantees. In the remainder, we formally state the guarantees achieved by the procedures partialRound and roundiRow. Lemma 4.10. The algorithm partialRound takes as inputs X ∈ R`×[0,k]≥0 and i ∈ [`−1] that satisfies the following, [X −→ 1 ]i′ ∈ Z≥0 for all i′ ∈ [1, i− 1] and [X> −→ 1 ]j ∈ Z≥0 for all j ∈ [0, k], and outputs a matrix Y ∈ R`×[0,k]≥0 and an index j′ such that: • Y is within 3-swap distance from X. • Yij′ ≥ o and ∑i−1 i′=1 Yi′j′ + Yij′ − o ∈ Z≥0, where o = [X −→ 1 ]i − b[X −→ 1 ]ic. Furthermore, the running time of the algorithm is O(`k). Note that by Lemma 4.5, if Y is within 3-swap distance from X, then Y−→1 = X−→1 and Y>−→1 = X>−→1 . Lemma 4.11. The algorithm roundiRow takes as inputs Y ∈ R`×[0,k]≥0 , an column index j ∈ [0, k] and a row index i ∈ [` − 1] such that: Y>−→1 ∈ Z[0,k]≥0 , Yij ≥ o and ∑i−1 i′=1 Yi′j + Yij − o ∈ Z≥0, where o = [Y −→ 1 ]i − b[Y −→ 1 ]ic. Outputs a matrix X ∈ R`×[0,k]≥0 such that, • X ≤ Y and ‖X− Y‖1 ≤ 1. • [X −→ 1 ]i′ = [Y −→ 1 ]i′ for all i′ ∈ [i− 1], [X −→ 1 ]i ∈ Z≥0, and X> −→ 1 ∈ Z[0,k]≥0 . We defer the description of all the missing procedures and proofs to appendix. 5 Proof of Main Result (Theorem 2.3) Here we put together the results from the previous sections to prove, Theorem 2.3. Proof of Theorem 2.3. Algorithm 1 achieves the guarantees of Theorem 2.3. In the remainder of the proof, we combine the guarantees of each step of the algorithm to prove the theorem. Toward this end, we first show the following two inequalities: Xfinal ∈ ZφRfinal and g(Xfinal) ≥ exp(−O(min(k, |R|) log n))g(X). By Lemma 4.1, the Line 1 of Algorithm 1 returns a solution X ∈ Zφ,fracR that satisfies, Cφ · g(X) ≥ exp (−O (min(k, |R|) log n)) max q∈∆DR P(q, φ) . (7) By Lemma 4.2, the Line 2 of Algorithm 1 takes input X and outputs X′ such that X′ ∈ Zφ,fracR and g(X ′) ≥ g(X), (8) and ∣∣{i ∈ [`] | [X′−→1 ]i > 0}∣∣ ≤ k + 1. As the matrix X′ has at most k + 1 non-zero rows, without loss of generality we can assume |R| ≤ k + 1 (by discarding zero rows). As matrix X′ ∈ Zφ,fracR , we have that X ′ has integral column sums and by invoking Lemma 4.7 with parameters s = |R| and t = k + 1, we get matrices A and B that satisfy guarantees of Lemma 4.7. As [A−→1 ]i = [X′ −→ 1 ]i for all i ∈ [`], [A> −→ 1 ]j = [X′> −→ 1 ]j for all j ∈ [0, k] and X′ ∈ Zφ,fracR , we immediately get that A ∈ Zφ,fracR . Further note that A is within O(|R|) = O(min(|R|, k))-swap distance from X′ and by Lemma 4.6 we get that g(A) ≥ exp(−O(min(|R|, k) log n))g(X′). To summarize, we showed the following inequalities, A ∈ Zφ,fracR and g(A) ≥ exp(−O(min(|R|, k) log n))g(X ′) . (9) Note that, Lemma 4.7 also outputs a matrix B that satisfies: B ≤ A, B−→1 ∈ Z`, B>−→1 ∈ Z[0,k] and ‖A − B‖1 ≤ O(min(|R|, k)). These matrices A and B satisfy the conditions of Lemma 4.8 with parameter value t = O(min(|R|, k)). Therefore, the procedure create takes in input matrices A,B and returns a solution (Xfinal,Rfinal) such that |Rfinal| ≤ |R|+ min(R, k) ≤ 2|R| and, Xfinal ∈ ZφRfinal and g(Xfinal) ≥ exp(−O(min(|R|, k) log n))g(A) . 
As X_final ∈ Z^φ_{R_final}, by Definition 3.3 and Lemma 3.2 the distribution p′ satisfies:
P(p′, φ) ≥ exp(−O(min(k, |R_final|) log n)) C_φ g(X_final)
≥ exp(−O(min(k, |R|) log n)) C_φ g(A)
≥ exp(−O(min(k, |R|) log n)) C_φ g(X′)
≥ exp(−O(min(k, |R|) log n)) C_φ g(X)
≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q, φ).
In the second inequality we used Equation (10) and |R_final| ≤ 2|R|. In the third, fourth and fifth inequalities, we used Equation (9), Equation (8) and Equation (7) respectively.

Recall that we need a distribution that approximately maximizes max_{q∈∆^D_R} P(q/‖q‖_1, φ) instead of just max_{q∈∆^D_R} P(q, φ). In the remainder of the proof we provide a procedure to output such a distribution. For any constant c > 0, let c·R := {c·r_i | r_i ∈ R}. For any q ∈ ∆^D_R, as ‖q‖_1 satisfies r_min ≤ ‖q‖_1 ≤ 1, we get that
max_{q∈∆^D_R} P(q/‖q‖_1, φ) = max_{c∈[1,1/r_min]_R} max_{q∈∆^D_{c·R}} P(q, φ). (11)
The above expression holds as the maximizer q* of the left hand side satisfies q* ∈ ∆^D_{(1/‖q*‖_1)·R}. Define C := {(1 + β)^i}_{i∈[a]} for some β ∈ o(1), where a ∈ O((1/β) log(1/r_min)) is such that r_min(1 + β)^a = 1. For any constant c ∈ [1, 1/r_min]_R, note that there exists a constant c′ ∈ C such that c(1 − β) ≤ c′ ≤ c. Furthermore, for any distribution q ∈ ∆^D_R with ‖q‖_1 = 1/c, note that the distribution q′ = c′q ∈ ∆^D_{c′·R} and satisfies: P(q/‖q‖_1, φ) = P(c·q, φ) = P((c/c′)q′, φ) = (c/c′)^n P(q′, φ). Therefore we get that P(q′, φ) = (c′/c)^n P(q/‖q‖_1, φ) ≥ (1 − β)^n P(q/‖q‖_1, φ) ≥ exp(−2βn) P(q/‖q‖_1, φ). Combining this analysis with Equation (11) we get that
max_{c∈C} max_{q∈∆^D_{c·R}} P(q, φ) ≥ exp(−2βn) max_{q∈∆^D_R} P(q/‖q‖_1, φ). (12)
For each c > 0, as |R| = |c·R|, our algorithm (Algorithm 1) returns a distribution p_c that satisfies
P(p_c, φ) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_{c·R}} P(q, φ).
Let p* be the distribution that achieves the maximum objective value of our convex program among the distributions {p_c}_{c∈C}. Then note that p* satisfies: P(p*, φ) ≥ exp(−O(min(k, |R|) log n) − 2βn) max_{q∈∆^D_R} P(q/‖q‖_1, φ). Substituting β = min(k, |R|)/n in the previous expression, we get
P(p*, φ) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q/‖q‖_1, φ).
As each of our distributions p_c (including p*) has the number of distinct probability values upper bounded by 2|R|, by applying Lemma 4.9 we get a pseudo-distribution q ∈ ∆^D_R with the desired guarantees. The final run time of our algorithm is O(|C| · T_1) ∈ O((n/min(k, |R|)) · T_1), where T_1 is the time to implement Algorithm 1. Further note that by Lemma 3.1, without loss of generality we can assume |R| ≤ n/k. As all the lines of Algorithm 1 are polynomial in n, our final running time follows from the run times of each line, and we conclude the proof.

Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their valuable feedback. Researchers on this project were supported by an Amazon Research Award, a Dantzig-Lieberman Operations Research Fellowship, a Google Faculty Research Award, a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research gift, a Simons-Berkeley Research Fellowship, a Simons Investigator Award, a Sloan Research Fellowship and a Stanford Data Science Scholarship.
1. What is the focus of the paper regarding computing an approximate profile maximum likelihood estimator?
2. What are the strengths of the proposed approach, particularly in bridging the gap between computationally efficient results and statistically optimal thresholds?
3. Do you have any concerns or suggestions regarding the technical writing in the paper, especially regarding Lemmas 4.8 and 4.9 and the rounding algorithm in the appendix?
4. How do the technical ideas in the paper potentially impact the theoretical machine learning and algorithms communities?
5. Are there any potential applications of these ideas to domains beyond the discrete setting studied in this field?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper concerns the computation of an approximate profile maximum likelihood estimator (APML) -- a plug-in estimator used to estimate symmetric properties of discrete distributions. Formally, one is given access to n iid samples from a discrete distribution over a set D, and the goal is to estimate symmetric properties of this distribution (properties independent of the permutation of the labels of the distribution). This is a broad class of properties which includes practically important examples such as entropy and support size. While information theoretically optimal estimators for specific properties were known previously, the past decade or so has witnessed much interest in the development of statistically and computationally efficient universal estimators for all symmetric properties. Instead of developing specialized approaches for specific properties, these approaches propose a unified framework for estimating all symmetric properties. Broadly speaking, there are two main approaches towards building such estimators -- one based on computing APML plug-in estimators and an alternative based on local moment matching. The local moment matching framework is both computationally and statistically efficient and is known to recover optimal sample complexity up to an accuracy threshold of ε ≫ n^{−1/3}. However, while the statistical limit of the APML approach has been established as the regime ε ≈ n^{−1/3}, with prior work demonstrating that no reasonable variant of APML succeeds below the n^{−1/3} threshold, the best known computationally efficient results only hold when ε ≫ n^{−1/4}. The goal of this paper is to bridge this gap by proposing a computationally efficient estimator achieving this n^{−1/3} threshold.

Technically, the APML approach first constructs a distribution over D from the observed data and estimates a symmetric property by evaluating the property on the fitted distribution. A naive approach to do this would be to use the distribution maximizing the likelihood of the observed sample -- i.e. the empirical distribution of the samples. However, this leads to sub-optimal sample complexity. The Profile Maximum Likelihood approach instead constructs the plug-in distribution by maximizing the likelihood of the observed profile of the samples, which consists of the multiset of frequencies observed in the sample. Unfortunately, this exact formulation, while statistically more efficient and optimal in several regimes (when ε ≫ n^{−1/3}), introduces several computational barriers. Fortunately, it suffices to compute an Approximate Profile Maximum Likelihood distribution, where one uses any distribution q which satisfies P(φ, q) ≥ β max_p P(φ, p), where φ is the observed profile, for β ≥ exp(−n^{1−δ}) with δ > 0, with the quality of approximation improving with δ (ε ≫ n^{−min(1/3, δ/2)}). Prior work had constructed APML distributions for δ = 1/2, leading to the sub-optimal n^{−1/4} threshold. Informally, this approach first re-parameterizes the distribution being optimized over by discretizing it over a pre-defined set of levels (at levels (1+α)^i/n² for i ∈ [0, ℓ]) and only storing the number of elements at a particular level of the distribution. Subsequently, a convex approximation to the PML objective is constructed over this re-parameterized space.
Unfortunately, these two steps result in a trade-off where refining the discretization (by lowering α) may better approximate the underlying distribution but worsens the quality of the convex relaxation. The combination of these two factors results in the optimal choice of α (in prior work) being n^{−1/2}, leading to the aforementioned sub-optimal results. The key technical idea of this paper is to effectively eliminate the first step (of approximating the true PML distribution by a discretized approximation) and instead only search for weak APML distributions, i.e. distributions satisfying P(φ, q) ≥ β max_{p∈∆^D_R} P(φ, p), where ∆^D_R consists of a discretized set of distributions. Here, we are only competitive with respect to the optimal discretized distribution as opposed to the unconditionally optimal distribution. While the objective value of the recovered distribution might be a poor approximation of the optimal PML objective, this nevertheless suffices to accurately estimate a broad class of symmetric properties. In fact, the recovered distribution is at best an exp(−n^{2/3})-approximate PML distribution, but nevertheless this still suffices to achieve the optimal ε ≫ n^{−1/3} threshold.

Constructing this estimator however requires a significant refinement of the rounding algorithm which converts the fractional solution of the convex program to a discrete integral solution corresponding to a true distribution. Informally, the novel regime studied in this paper results in a solution matrix with many fewer rows than columns, and the guarantee of the rounding algorithm is required to depend exponentially on only the smaller (row) dimension, while the previous approach only achieved the much looser row + column bound. The authors construct such a rounding algorithm, leading to their improved results.

I do have concerns about the technical writing in the paper. While the proof looks correct and the technical ideas are novel, there are some steps which could benefit from further exposition and others in need of correction. Specifically, Lemmas 4.8 and 4.9 in the paper are applied in a chained fashion in the proof of Lemma 4.5 (Lemma E.1 in the appendix, from lines 671-677). However, as stated, the preconditions of the application of Lemma 4.9 are not satisfied by the conclusion of Lemma 4.8. In particular, the condition that the columns sum to integral values is not satisfied as stated in Lemma 4.8. While this is probably true, since the construction of the trans operation (in Lemma C.1) underlying the transformation PartialRound is built off of the operation swap, which maintains row and column sums by definition, this should still be stated and clarified in the main text. Likewise, this also requires a more precise formulation of Lemma C.1. Additionally, the technical exposition could also be improved surrounding the definition of the rounding algorithm in the appendix. While the main idea behind the rounding algorithm is neat and intuitive, the notation is extremely cumbersome and makes the exposition hard to follow. Providing some intuition prior to the technical definitions in Appendices C-E would help greatly.

Overall, I quite like this paper. It makes significant progress on an important problem, essentially closing the gap between the best results achieved by prior computationally efficient APML-based estimators and a strong information theoretic barrier.
The technical ideas behind the improvement are novel and intricate and are potentially of broad interest to the theoretical machine learning and algorithms communities. While there are minor bugs in the technical statements and some of the technical writing could be improved, these seem quite easy to fix.

**** POST REBUTTAL UPDATE ****
I acknowledge the authors' response and will retain my current evaluation.

Strengths And Weaknesses
Strengths:
-- The paper makes strong progress on an important problem, essentially achieving optimal results for this particular approach
-- The technical ideas in the paper are novel, interesting and non-trivial
Weaknesses:
-- The technical exposition in the paper could be improved, both in the main text and the appendix (see main review).

Questions
It would be great if the authors could clarify some of the technical material in the paper and confirm whether my current understanding is correct. Additionally, I'm also curious about when this n^{−1/3} barrier may be circumvented for narrower classes of properties as opposed to all symmetric properties. Applications of such ideas to domains beyond the discrete setting studied in this field would also be interesting to see.

Limitations
Yes
NIPS
Title
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood

Abstract
We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given n independent samples. Our estimator is based on profile maximum likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error ε ≫ n^{−1/3}. This result improves upon the previous best accuracy threshold of ε ≫ n^{−1/4} achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation, as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every 1-Lipschitz property when ε ≪ n^{−1/3}.

1 Introduction

Given n independent samples y_1, ..., y_n ∈ D from an unknown discrete distribution p ∈ ∆^D, the problems of estimating properties of p, e.g. entropy, distance to uniformity, and support size and coverage, are among the most fundamental in statistics and learning. Further, the problem of estimating symmetric properties of distributions p (i.e. properties invariant to label permutations) is well studied and has numerous applications [Cha84, BF93, CCG+12, TE87, Für05, KLR99, PBG+01, DS13, RCS+09, GTPB07, HHRB01]. Over the past decade, symmetric property estimation has been studied extensively and there have been many improvements to the time and sample complexity for estimating different properties, e.g. support [VV11b, WY15], coverage [ZVV+16, OSW16], entropy [VV11b, WY16, JVHW15], and distance to uniformity [VV11a, JHW16].

Towards unifying the attainment of computationally efficient, sample-optimal estimators, a striking work of [ADOS17] provided a universal plug-in approach based on an (approximate) profile maximum likelihood (PML) distribution, which (approximately) maximizes the likelihood of the observed profile (i.e. the multiset of observed frequencies). Formally, [ADOS17] showed that given y_1, ..., y_n, if there exists an estimator for a symmetric property f achieving accuracy ε and failure probability δ, then this PML-based plug-in approach achieves error 2ε with failure probability δ · exp(3√n). As the failure probability δ for many estimators for well-known properties (e.g. support size and coverage, entropy, and distance to uniformity) is roughly exp(−ε²n), this result implied a sample optimal unified approach for estimating these properties when the estimation error ε ≫ n^{−1/4}.

This result of [ADOS17] laid the groundwork for a line of work on the study of computational and statistical aspects of PML-based approaches to symmetric property estimation.
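To make the central objects concrete, here is a small self-contained illustration of profiles and the profile likelihood P(p, φ). The code is ours, not the paper's; the brute-force enumeration is exponential in n and is meant only for toy inputs.

```python
from collections import Counter
from itertools import product
from math import prod

def profile(sample):
    """Profile of a sample: the multiset of its nonzero symbol frequencies,
    e.g. ('a', 'b', 'a') -> (1, 2)."""
    return tuple(sorted(Counter(sample).values()))

def profile_likelihood(p, phi, n):
    """P(p, phi): sum of P(p, y^n) over all length-n sequences y^n whose
    profile equals phi. Exponential in n -- toy illustration only."""
    domain = list(p)
    return sum(
        prod(p[y] for y in seq)              # P(p, y^n) = prod of symbol probs
        for seq in product(domain, repeat=n)
        if profile(seq) == phi
    )

phi = profile(['a', 'b', 'a'])               # -> (1, 2)
p = {'a': 0.5, 'b': 0.3, 'c': 0.2}
print(profile_likelihood(p, phi, n=3))       # likelihood of observing profile (1, 2)
```

The profile, and hence P(p, φ), is invariant to relabeling the domain, which is exactly why it is the right statistic for symmetric properties. With this picture in mind, we return to the line of work that built on [ADOS17].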
For example, follow-up work of [HS21] improved the analysis of [ADOS17] and showed that the failure probability of PML is at most δ^{1−c} exp(n^{1/3+c}) for any constant c > 0, and therefore it is sample optimal in the regime ε ≫ n^{−1/3}. The condition ε ≫ n^{−1/3} on the optimality of PML is tight [Han21], in the sense that PML is known to be not sample optimal in the regime ε ≪ n^{−1/3}. In fact, no estimator (that obeys some mild conditions) is sample optimal for estimating all symmetric properties in the regime ε ≪ n^{−1/3}; see Section 2 after Theorem 2.6 for details. We also remark that the statistical guarantees in [ADOS17, HS21] hold for any β-approximate PML¹ for suitable values of β. In particular, [HS21] showed that any β-approximate PML for β > exp(−n^{1−c′}) and any constant c′ > 0 has a failure probability of δ^{1−c} exp(n^{1/3+c} + n^{1−c′}) for any constant c > 0. These results further imply a sample optimal estimator in the regime ε ≫ n^{−min(1/3, c′/2)} for properties with failure probability less than exp(−ε²n). Note that a better approximation leads to a larger range of ε for which the estimator is sample optimal.

Regarding computational aspects of PML, [CSS19a] provided the first efficient algorithm with a non-trivial approximation guarantee of exp(−n^{2/3} log n), which further implied a sample optimal universal estimator for ε ≫ n^{−1/6}. This was then improved by [ACSS21], which showed how to efficiently compute PML to a higher accuracy of exp(−√n log n), thereby achieving a sample optimal universal estimator in the regime ε ≫ n^{−1/4}. The current best polynomial time approximate PML algorithm by [ACSS20] achieves an accuracy of exp(−k log n), where k is the number of distinct observed frequencies. Although this result achieves better instance-based statistical guarantees, in the worst case it still only implies a sample optimal universal estimator in the regime ε ≫ n^{−1/4}.

In light of these results, a key open problem is to close the gap between the regimes ε ≫ n^{−1/3} and ε ≫ n^{−1/4}, where the former is the regime in which PML-based estimators are statistically optimal and the latter is the regime where efficient PML-based estimators exist. In this work we ask: Is there an efficient approximate PML-based estimator that is sample optimal for ε ≫ n^{−1/3}?

In this paper, we answer this question in the affirmative. In particular, we give an efficient PML-based estimator that has failure probability at most δ^{1−c} exp(n^{1/3+c} + n^{1−c′}), and consequently is sample optimal in the regime ε ≫ n^{−1/3}. As remarked, this result is tight in the sense that PML and a broad class of estimators are known to be not optimal in the regime ε ≪ n^{−1/3}. To obtain this result we depart slightly from the previous approaches in [ADOS17, CSS19a, ACSS21]. Rather than directly compute an approximate PML distribution, we compute a weaker notion of approximation which we show suffices to get us the desired universal estimator. We propose a notion of a β-weak approximate PML distribution inspired by [HS21] and show that an exp(−n^{1/3} log n)-weak approximate PML achieves the desired failure probability of δ^{1−c} exp(n^{1/3+c}) for any constant c > 0. Further, we provide an efficient algorithm to compute an exp(−n^{1/3} log n)-weak approximate PML distribution. Our paper can be viewed as an efficient algorithmic instantiation of [HS21]. Ultimately, our algorithms use the convex relaxation presented in [CSS19a, ACSS21] and provide a new rounding algorithm.
We differ from the previous best exp(−k log n) approximate PML algorithm [ACSS20] only in the matrix rounding procedure, which controls the approximation guarantee. At a high level, the approximation guarantee for the rounding procedure in [ACSS20] is exponential in the sum of the matrix dimensions. In the present work, we need to round a rectangular matrix with an approximation exponential in the smaller dimension, which may be infeasible for arbitrary matrices. Our key technical innovation is to introduce a swap operation (see Section 4.1) which facilitates such an approximation guarantee. In addition to a better approximation guarantee than [ACSS20], our algorithm also exhibits better run times (see Section 2).

Organization: We introduce preliminaries in Section 1.1. In Section 2, we state our main results and also cover related work. In Section 3, we provide the convex relaxation to PML studied in [CSS19a, ACSS21]. Finally, in Section 4, we provide a proof sketch of our main computational result. Many proofs are deferred to the appendix.

¹ A β-approximate PML is a distribution that achieves a multiplicative β-approximation to the PML objective.

1.1 Preliminaries

General notation: For matrices A, B ∈ R^{s×t}, we use A ≤ B to denote that A_{ij} ≤ B_{ij} for all i ∈ [s] and j ∈ [t]. We let [a, b] and [a, b]_R denote the sets of integers and reals, respectively, that are at least a and at most b. We use Õ(·), Ω̃(·) notation to hide all polylogarithmic factors in n and N. We write a_n ≫ b_n to denote that a_n ∈ Ω(b_n n^c) or, equivalently, b_n ∈ O(n^{−c} a_n), for some small constant c > 0. Throughout this paper, we assume we receive a sequence of n independent samples from a distribution p ∈ ∆^D, where ∆^D := {q ∈ [0, 1]^D_R | ‖q‖_1 = 1} is the set of all discrete distributions supported on domain D. Let D^n be the set of all length-n sequences of elements of D, and for y^n ∈ D^n let y^n_i denote its i-th element. Let f(y^n, x) := |{i ∈ [n] | y^n_i = x}| and p_x be the frequency and probability of x ∈ D respectively. For a sequence y^n ∈ D^n, let M = {f(y^n, x) : x ∈ D} \ {0} be the set of all its non-zero distinct frequencies, and let m_1, m_2, ..., m_{|M|} be these distinct frequencies. The profile of a sequence y^n, denoted φ = Φ(y^n), is a vector in Z^{|M|}, where φ_j := |{x ∈ D | f(y^n, x) = m_j}| is the number of domain elements with frequency m_j. We call n the length of profile φ and let Φ^n denote the set of all profiles of length n. The probabilities of observing sequence y^n and profile φ for distribution p are
P(p, y^n) = ∏_{x∈D} p_x^{f(y^n,x)} and P(p, φ) = Σ_{{y^n∈D^n | Φ(y^n)=φ}} P(p, y^n).

Profile maximum likelihood: A distribution p_φ ∈ ∆^D is a profile maximum likelihood (PML) distribution for profile φ ∈ Φ^n if p_φ ∈ argmax_{p∈∆^D} P(p, φ). Further, a distribution p^β_φ is a β-approximate PML distribution if P(p^β_φ, φ) ≥ β · P(p_φ, φ). For a distribution p and a length n, let X be a random variable that takes value φ ∈ Φ^n with probability P(p, φ). We call H(X) (the entropy of X) the profile entropy with respect to (p, n) and denote it by H(Φ^n, p).

Probability discretization: Let R := {r_i}_{i∈[1,ℓ]} be a finite discretization of the probability space, where r_i ∈ [0, 1]_R and ℓ := |R|. We call q ∈ [0, 1]^D_R a pseudo-distribution if ‖q‖_1 ≤ 1, and a discrete pseudo-distribution with respect to R if all its entries are in R as well. We use ∆^D_pseudo and ∆^D_R to denote the set of all pseudo-distributions and of all discrete pseudo-distributions with respect to R, respectively. In our work, we use the following most commonly used [CSS19a, ACSS21, ACSS20] probability discretization set.
For any α > 0,
R_{n,α} := {1} ∪ { (1/(2n²))(1 + n^{−α})^i | for all i ∈ Z_{≥0} such that (1/(2n²))(1 + n^{−α})^i ≤ 1 }. (1)
For all probability terms defined involving distributions p, we extend those definitions to pseudo-distributions q by replacing p_x with q_x everywhere. See ?? for the definition of an estimator and optimal sample complexity.

2 Results

Here we provide our main results. In our first result (Theorem 2.2), we show that a weaker notion of approximate PML suffices to obtain the desired universal estimator. Later we show that these weaker approximate PML distributions can be efficiently computed (Theorem 2.3).

Definition 2.1. Given a profile φ, we call a distribution p′ ∈ ∆^D a β-approximate PML distribution with respect to R if P(p′, φ) ≥ β · max_{q∈∆^D_R} P(q/‖q‖_1, φ).

The above definition generalizes β-approximate PML distributions, which is simply the special case when R = [0, 1]_R. Using our new definition, we show that for a specific choice of the discretization set R_{n,1/3}, a distribution p′ that is an approximate PML with respect to R_{n,1/3} suffices to obtain a universal estimator; this result is formally stated below.

Theorem 2.2 (Competitiveness of an approximate PML w.r.t. R). For a symmetric property f, suppose there exists an estimator f̂ that takes as input a profile φ ∈ Φ^n drawn from p ∈ ∆^D and satisfies P(|f(p) − f̂(φ)| ≥ ε) ≤ δ. Then for R = R_{n,1/3} (see Equation (1)), a discrete pseudo-distribution q′ ∈ ∆^D_R such that q′/‖q′‖_1 is an exp(−O(|R| log n))-approximate PML distribution with respect to R satisfies
P(|f(q′/‖q′‖_1) − f(p)| ≥ 2ε) ≤ δ^{1−c} exp(O(n^{1/3+c})), for any constant c > 0. (2)

The proof of the above theorem is implicit in the analysis of [HS21]; however, we provide a short, simpler proof using their continuity lemma (Lemma 2 in [HS21]). Note that the bound on the failure probability we get is asymptotically the same as that of exact PML from [HS21], which is known to be tight [Han21]. Furthermore, to achieve such an improved failure probability bound, all we need is an approximate PML distribution with respect to R, for some R which is of small size. Taking advantage of this fact and building upon [CSS19a, ACSS21], we provide a new rounding algorithm that outputs the desired approximate PML distribution with respect to R.

Theorem 2.3 (Computation of an approximate PML w.r.t. R). We provide an algorithm that, given a probability discretization set R = R_{n,α} for α > 0 (see Equation (1)) and a profile φ with k distinct frequencies, runs in time
Õ(|R| + (n/min(k, |R|)) (min(|R|, n/k) k^ω + min(|R|, k) k²)),
where ω < 2.373 is the current matrix multiplication constant [Wil12, Gal14, AW21], and returns a pseudo-distribution q′ ∈ ∆^D_R such that
P(q′/‖q′‖_1, φ) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q/‖q‖_1, φ).

When R = R_{n,1}, our algorithm computes an exp(−O(k log n))-approximate PML distribution; therefore our result is at least as good as the previous best known approximate PML algorithm due to [ACSS20]. In comparison to [ACSS20], our rounding algorithm is simpler and, we suspect, more practical. We provide a more detailed comparison to it later in this section.

Applications: Our main results have several applications, which we discuss here. First note that combining Theorems 2.2 and 2.3 immediately yields the following corollary.

Corollary 2.4 (Efficient unified estimator).
Given a profile φ ∈ Φ^n with k distinct frequencies, we can compute an approximate PML distribution q′ that satisfies Equation (2) in Theorem 2.2 in time Õ((n/min(k, n^{1/3})) (min(n^{1/3}, n/k) k^ω + min(n^{1/3}, k) k²)).

For many symmetric properties the failure probability is exponentially small, as stated below.

Lemma 2.5 (Lemma 2 in [ADOS17], Theorem 3 in [HS21]). For distance to uniformity, entropy, support size and coverage, and sorted ℓ1 distance, there exists an estimator that is sample optimal and whose failure probability is at most exp(−ε²n^{1−α}) for any constant α > 0.

The above result combined with Corollary 2.4 immediately yields the following theorem.

Theorem 2.6 (Efficient sample optimal unified estimator). There exists an efficient approximate PML-based estimator that, for ε ≫ n^{−1/3} and symmetric properties such as distance to uniformity, entropy, support size and coverage, and sorted ℓ1 distance, achieves optimal sample complexity and has failure probability upper bounded by exp(−n^{1/3}).

As our work computes an exp(−O(k log n))-approximate PML, we recover efficient versions of Lemma 2.3 and Theorem 2.4 from [ACSS20]. The first result uses the exp(−O(k log n))-approximate PML algorithm to efficiently implement an estimator that has better statistical guarantees based on profile entropy [HO20] (see Section 1.1). The second result provides an efficient implementation of the PseudoPML estimators [CSS19b, HO19]. Please refer to the respective papers for further details.

Tightness of our result: Recall that [HS21] showed that the failure probability of an (approximate) PML-based estimator is upper bounded by δ^{1−c} exp(n^{1/3+c}), for any constant c > 0. This result further implied a sample optimal universal estimator in the regime ε ≫ n^{−1/3} for various symmetric properties (Theorem 2.6). In our work, we efficiently recover these results, and a natural question to ask here is whether these results can be improved. As remarked earlier, [Han21] showed that the condition for optimality of PML (ε ≫ n^{−1/3}) is in some sense tight. More formally, they showed that PML is not sample optimal in estimating every 1-Lipschitz property in the regime ε ≪ n^{−1/3}. In fact, the results in [Han21] hold more broadly for any universal plug-in based estimator that outputs a distribution p̂ satisfying
max_{p∈∆^D} E ‖p − p̂‖_1^{sorted} ≤ A(n) √(k/n),
where A(n) ≤ n^γ for every γ > 0 and ‖p − q‖_1^{sorted} := min_{permutations σ} ‖p − q_σ‖_1 denotes the sorted ℓ1 distance between p and q. In other words, if an estimator is based on a reasonably good estimate of the true distribution p (in terms of sorted ℓ1 distance), then it cannot be sample optimal for every 1-Lipschitz property. Furthermore, many well-known universal estimators, including PML and LMM [HJW18], indeed provide a reasonably good estimate of the true distribution and therefore cannot be sample optimal in the regime ε ≪ n^{−1/3}. Please refer to [Han21] for further details.

Comparison to approximate PML algorithms: All prior provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] have two key steps: (Step 1) solve a convex approximation to the PML, and (Step 2) round the (fractional) solution to a valid approximate PML distribution. A convex approximation to PML was first provided in [CSS19a], and a better analysis for it is shown in [ACSS21].
In particular, [CSS19a] and [ACSS21] showed that an integral optimal solution to Step 1 approximates the PML up to accuracy exp(−n^{2/3} log n) and exp(−min(k, |R|) log n) respectively, where k and |R| are the number of distinct frequencies and distinct probability values respectively. In addition to the loss from the convex approximation, the previous algorithms also incurred a loss in the rounding step (Step 2). The loss in the rounding step for the previous works is bounded by exp(−n^{2/3} log n) [CSS19a], exp(−√n log n) [ACSS21] and exp(−k log n) [ACSS20]. In our work, we show that there exists a choice of R (= R_{n,1/3}) that is of small size (|R| ≤ n^{1/3}) and suffices to get the desired universal estimator. As |R| ≤ n^{1/3}, our approach only incurs a loss of exp(−min(k, |R|) log n) ∈ exp(−n^{1/3} log n) in the convex approximation step (Step 1). Furthermore, for the rounding step (Step 2), we provide a new, simpler and practical rounding algorithm with a better approximation loss of exp(−O(min(k, |R|) log n)) ∈ exp(−O(n^{1/3} log n)).

Regarding the run times, both [ACSS20] and ours have run times of the form T_solve + T_sparsify + T_round, where the terms correspond to the time required to solve the convex program, and to sparsify and round a solution. In our algorithm, we pay the same cost as [ACSS20] for the first two steps, but our run time guarantees are superior to theirs in the rounding step. In particular, the run time of [ACSS20] is a large polynomial and perhaps not practical, as their approach requires enumerating all the approximate min cuts. In contrast, our algorithm has a run time that is subquadratic.

Other related work: PML was introduced by [OSS+04]. Many heuristic approaches have been proposed to compute approximate PML, such as the EM algorithm in [OSS+04], algebraic approaches in [ADM+10], Bethe approximation in [Von12] and [Von14], and a dynamic programming approach in [PJW17]. For the broad applicability of PML in property testing and to estimate other symmetric properties, please refer to [HO19]. Please refer to [HO20] for details related to profile entropy. Other approaches for designing universal estimators are: [VV11b] based on [ET76], [HJW18] based on local moment matching, and variants of PML by [CSS19b, HO19] that weakly depend on the target property that we wish to estimate. Optimal sample complexities for estimating many symmetric properties were also obtained by constructing property-specific estimators, e.g. support [VV11b, WY15], support coverage [OSW16, ZVV+16], entropy [VV11b, WY16, JVHW15], distance to uniformity [VV11a, JHW16], sorted ℓ1 distance [VV11a, HJW18], Renyi entropy [AOST14, AOST17], KL divergence [BZLV16, HJW16] and others.

Limitations of our work: One of the limitations of all the provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] (including ours) is that they require the solution of a convex program that approximates the PML objective, and all these previous works use the CVX solver, which is not practical for large sample instances; note that our results hold for small error regimes, which lead to such large sample instances. Therefore, designing a practical algorithm to solve the convex program is an important future research direction. As discussed above, the local moment matching (LMM) based approach is another universal approach for property estimation. It is unclear which of the two (PML or LMM) can lead to practical algorithms.

3 Convex relaxation to PML

Here we restate the convex program from [CSS19a] that approximates the PML objective.
The current best analysis of this convex program is in [ACSS21]. We first describe the notation and later state several results from [CSS19a, ACSS21] that capture the guarantees of the convex program.

Notation: For any matrices X ∈ R^{a×c} and Y ∈ R^{b×c}, we let concat(X, Y) denote the matrix W ∈ R^{(a+b)×c} that satisfies W_{i,j} = X_{i,j} for all i ∈ [a] and j ∈ [c], and W_{a+i,j} = Y_{ij} for all i ∈ [b] and j ∈ [c]. Recall we let R := {r_i}_{i∈[ℓ]} be a finite discretization of the probability space, where r_i ∈ [0, 1]_R and ℓ := |R|. Let r ∈ [0, 1]^ℓ_R be a vector whose i-th element is equal to r_i.

Lemma 3.1 (Lemma 4.4 in [CSS19a]). Let R = R_{n,α} for some α > 0. For any profile φ ∈ Φ^n and distribution p ∈ ∆^D, there exists a pseudo-distribution q ∈ ∆^D_R that satisfies P(p, φ) ≥ P(q, φ) ≥ exp(−αn − 6) P(p, φ), and therefore
max_{p∈∆^D} P(p, φ) ≥ max_{q∈∆^D_R} P(q, φ) ≥ exp(−αn − 6) max_{p∈∆^D} P(p, φ).

For any probability discretization set R, profile φ and pseudo-distribution q ∈ ∆^D_R, define:
Z^φ_R := { X ∈ R^{ℓ×[0,k]}_{≥0} | X1 ∈ Z^ℓ, [X^T 1]_j = φ_j for all j ∈ [1, k], and r^T X 1 ≤ 1 }, (3)
Z^{φ,frac}_R := { X ∈ R^{ℓ×[0,k]}_{≥0} | [X^T 1]_j = φ_j for all j ∈ [1, k], and r^T X 1 ≤ 1 }. (4)
The j-th column corresponds to frequency m_j, and we use m_0 := 0 to capture the unseen elements. Without loss of generality, we assume m_0 < m_1 < · · · < m_k. Let C_{ij} := m_j log r_i for all i ∈ [ℓ] and j ∈ [0, k]. The objective of the optimization problem is as follows: for any X ∈ R^{ℓ×[0,k]}_{≥0}, define
g(X) := exp( Σ_{i∈[ℓ], j∈[0,k]} [C_{ij} X_{ij} − X_{ij} log X_{ij}] + Σ_{i∈[ℓ]} [X1]_i log [X1]_i ). (5)
For any q ∈ ∆^D_R, the function g(X) approximates the P(q, φ) term, as stated below.

Lemma 3.2 (Theorem 6.7 and Lemma 6.9 in [ACSS21]). Let R be a probability discretization set. For any profile φ ∈ Φ^n with k distinct frequencies, the following statements hold for α = min(k, |R|) log n:
exp(−O(α)) · C_φ · max_{X∈Z^φ_R} g(X) ≤ max_{q∈∆^D_R} P(q, φ) ≤ exp(O(α)) · C_φ · max_{X∈Z^φ_R} g(X), and
max_{q∈∆^D_R} P(q, φ) ≤ exp(O(min(k, |R|) log n)) · C_φ · max_{X∈Z^{φ,frac}_R} g(X),
where C_φ := n! / ∏_{j∈[1,k]} (m_j!)^{φ_j} is a term that only depends on the profile.²

The proof of concavity for the function g(X) and a running time analysis to solve the convex program are provided in [CSS19a]. For any X ∈ Z^φ_R, a pseudo-distribution associated with it is defined below.

Definition 3.3. For any X ∈ Z^φ_R, the discrete pseudo-distribution q_X associated with X and R is defined as follows: for arbitrary [X1]_i number of domain elements, assign probability r_i. Further, p_X := q_X / ‖q_X‖_1 is the distribution associated with X and R.

Note that q_X is a valid pseudo-distribution because of the third condition in Equation (3), and these pseudo-distributions p_X and q_X satisfy the following lemma.

Lemma 3.4 (Theorem 6.7 in [ACSS21]). Let R and φ ∈ Φ^n be a probability discretization set and a profile with k distinct frequencies. For any X ∈ Z^φ_R, the discrete pseudo-distribution q_X and distribution p_X associated with X and R satisfy: exp(−O(k log n)) C_φ · g(X) ≤ P(q_X, φ) ≤ P(p_X, φ).

² The theorem statement in [ACSS21] is only written with an approximation factor of exp(O(k log n)). However, their proof provides a stronger approximation factor which is upper bounded by the non-negative rank of the probability matrix, which in turn is upper bounded by the minimum of the numbers of distinct frequencies and distinct probabilities. Therefore, the theorem statement in [ACSS21] holds with a much stronger approximation guarantee of exp(O(min(k, |R|) log n)).
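As a concrete sketch of the objects above (ours, not code from the paper), the grid R_{n,α} of Equation (1) and the objective g(X) of Equation (5) can be written as follows; we work with log g(X) for numerical stability and use the convention 0·log 0 = 0.

```python
import numpy as np

def discretization_set(n, alpha):
    """The grid R_{n,alpha} of Equation (1): the geometric levels
    (1 + n^-alpha)^i / (2 n^2) that are at most 1, together with 1."""
    r, levels = 1.0 / (2 * n**2), []
    while r <= 1.0:
        levels.append(r)
        r *= 1.0 + n**(-alpha)
    return np.array(levels + [1.0])

def log_g(X, r, m):
    """log g(X) from Equation (5) for X in R^{l x (k+1)}, grid values r
    (length l) and frequencies m (length k+1, with m[0] = 0)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        xlogx = np.where(X > 0, X * np.log(X), 0.0)   # X_ij log X_ij, 0 log 0 = 0
        rows = X.sum(axis=1)                          # [X1]_i, the row sums
        rlogr = np.where(rows > 0, rows * np.log(rows), 0.0)
    C = np.outer(np.log(r), m)                        # C_ij = m_j log r_i
    return (C * X - xlogx).sum() + rlogr.sum()
```

Line 1 of Algorithm 1 in the next section approximately maximizes exactly this concave log g over the polytope Z^{φ,frac}_R.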
4 Approximate PML algorithm

Here we provide a proof sketch of Theorem 2.3 and provide a rounding algorithm that proves it. Our rounding algorithm takes as input a matrix X ∈ Z^{φ,frac}_R, which may have fractional row sums, and rounds them to integral values. This new rounded matrix X_final corresponds to our approximate PML distribution (see Definition 3.3). The description of our algorithm is as follows.

Algorithm 1 ApproximatePML(φ, R = R_{n,α})
1: Let X be any solution that satisfies log g(X) ≥ max_{Y∈Z^{φ,frac}_R} log g(Y) − O(min(k, |R|) log n).
2: X′ = sparsify(X).
3: (A, B) = swapmatrixround(X′).
4: (X_final, R_final) = create(A, B, R).
5: Let p′ be the distribution with respect to X_final and R_final (see Definition 3.3).
6: Return q = discretize(p′, φ, R).

We now provide a guarantee for each of these lines of Algorithm 1. We later use these guarantees to prove our final theorem (Theorem 2.3). The guarantees of the approximate maximizer X computed in the first step of the algorithm are summarized in the following lemma.

Lemma 4.1 ([CSS19a, ACSS21]). Line 1 of the algorithm can be implemented in Õ(|R|k² + |R|²k) time, and the approximate maximizer X satisfies: C_φ · g(X) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q, φ).

The guarantees of the second step of our algorithm are summarized in the following lemma. Please refer to [ACSS20] for the description of the procedure sparsify. We use this procedure so that we can assume |R| ≤ k + 1, as we can ignore the zero rows of the matrix X.

Lemma 4.2 (Lemma 4.3 in [ACSS20]). For any X ∈ Z^{φ,frac}_R, the algorithm sparsify(X) runs in time Õ(|R| k^ω) and outputs X′ ∈ Z^{φ,frac}_R such that g(X′) ≥ g(X) and |{i ∈ [ℓ] | [X′1]_i > 0}| ≤ k + 1.

To explain our next step, we need to define a new operation called the swap.

Definition 4.3. Given a matrix A, indices i_1 < i_2, j_1 < j_2 and a parameter ε ≥ 0, the operation swap(A, i_1, i_2, j_1, j_2, ε) outputs a matrix A′ that satisfies
A′_{ij} = A_{ij} + ε for (i, j) = (i_1, j_1); A′_{ij} = A_{ij} − ε for (i, j) = (i_1, j_2); A′_{ij} = A_{ij} − ε for (i, j) = (i_2, j_1); A′_{ij} = A_{ij} + ε for (i, j) = (i_2, j_2); and A′_{ij} = A_{ij} otherwise. (6)

Definition 4.4 (Swap distance). A′ is within x-swap distance from A if A′ can be obtained from A through a sequence of swap operations and the sum of the values ε in these operations is at most x, i.e. there is a set of parameters {(i_1^(s), i_2^(s), j_1^(s), j_2^(s), ε^(s))}_{s∈[t]}, where Σ_{s∈[t]} ε^(s) ≤ x, such that A^(s) = swap(A^(s−1), i_1^(s), i_2^(s), j_1^(s), j_2^(s), ε^(s)) for s ∈ [t], where A^(0) = A and A^(t) = A′.

The following lemma directly follows from Definition 4.3 and Definition 4.4.

Lemma 4.5. For any matrices A, A′ ∈ R^{s×t}, if A′ is within x-swap distance from A for some x ≥ 0, then A′1 = A1 and A′^T 1 = A^T 1.

Recall that our objective g(X) contains two terms: (1) the linear term Σ_{i∈[ℓ], j∈[0,k]} C_{ij} X_{ij}, and (2) the entropy term Σ_{i∈[ℓ]} [X1]_i log[X1]_i − Σ_{i∈[ℓ], j∈[0,k]} X_{ij} log X_{ij}. The swap operation always increases the first term, and in the following lemma we bound the loss due to the second term.

Lemma 4.6. If A′ ∈ R^{ℓ×[0,k]} is within x-swap distance from A ∈ Z^{φ,frac}_R, then A′ ∈ Z^{φ,frac}_R and g(A′) ≥ exp(−O(x log n)) g(A).

One of the main contributions of our work is the following lemma, where we repeatedly apply the swap operation to recover a matrix A which exhibits several nice properties, as stated below.

Lemma 4.7. For any matrix A ∈ R^{s×t} (s ≤ t) that satisfies A^T 1 ∈ Z^t_{≥0}, the algorithm swapmatrixround runs in O(s²t) time and returns matrices A′ and B such that:
• A′ is within O(s)-swap distance from A, A′1 = A1 and A′^T 1 = A^T 1.
• 0 ≤ B_{ij} ≤ A′_{ij} for all i ∈ [s] and j ∈ [t], B1 ∈ Z^s_{≥0}, B^T 1 ∈ Z^t_{≥0}, and ‖A′ − B‖_1 ≤ O(s).

The above lemma helps us modify our matrix X to a new matrix A that we can round using the create procedure. The guarantees of this procedure are summarized below.

Lemma 4.8 (Lemma 6.13 in [ACSS21]). For any A ∈ Z^{φ,frac}_R ⊆ R^{ℓ×[0,k]}_{≥0} and B ∈ R^{ℓ×[0,k]}_{≥0} such that B ≤ A, B1 ∈ Z^ℓ, B^T 1 ∈ Z^{[0,k]} and ‖A − B‖_1 ≤ t, the algorithm create(A, B, R) runs in time O(ℓk) and returns a solution A′ and a probability discretization set R′ such that |R′| ≤ |R| + min(k + 1, t), A′ ∈ Z^φ_{R′} and g(A′) ≥ exp(−O(t log n)) g(A).

As our final goal is to return a distribution in ∆^D_R, we also use the following discretization lemma.

Lemma 4.9. The function discretize takes as input a distribution p ∈ ∆^D with ℓ′ distinct probability values, a profile φ, and a discretization set of the form R = R_{n,α} for some α > 0, and outputs a pseudo-distribution q ∈ ∆^D_R such that:
P(q/‖q‖_1, φ) ≥ exp(−O((min(k, |R|) + min(k, ℓ′) + α²n) log n)) P(p, φ).

In Section 5, we use the guarantees stated above for each line of Algorithm 1 to prove Theorem 2.3. The description of the function discretize is specified in the proof of Lemma 4.9. We describe the procedure swapmatrixround and provide a proof sketch of Lemma 4.7 in Section 4.1.

4.1 Description of swapmatrixround and comparison to [ACSS20]

Here we describe the procedure swapmatrixround and compare our rounding algorithm to [ACSS20]. Both [ACSS20] and our approximate PML algorithm have four main lines (1–4); we differ from [ACSS20] in the key Line 3. This line in [ACSS20] invokes a procedure called matrixround that takes as input a matrix A ∈ R^{ℓ×[0,k]} and outputs a matrix B ∈ R^{ℓ×[0,k]} such that: B ≤ A, B1 ∈ Z^ℓ_{≥0}, B^T 1 ∈ Z^{[0,k]}_{≥0} and ‖A − B‖_1 ≤ O(ℓ + k). Such a matrix B is crucial as the procedure create uses B to round the fractional row sums of the matrix A to integral values. The error incurred in these two steps is at most exp(O(‖A − B‖_1 log n)) ∈ exp(O((ℓ + k) log n)). As the procedure sparsify allows us to assume ℓ ≤ k + 1, we get an exp(−k log n) approximate PML using [ACSS20]. However, the setting that we are interested in is when ℓ ≪ k; for instance when ℓ ∈ O(n^{1/3}) and k ∈ Θ(√n). In these settings, we desire an exp(−O(min(ℓ, k) log n)) ∈ exp(−O(ℓ log n)) approximate PML. In order to get such an improved approximation using [ACSS20], we need a matrix B satisfying the earlier mentioned inequalities along with ‖A − B‖_1 ≤ O(min(k, ℓ)). However, such a matrix B may not exist for arbitrary matrices A, and the best guarantee any algorithm can achieve is ‖A − B‖_1 ∈ O(ℓ + k). To overcome this, we introduce a new procedure called swapmatrixround that takes as input a matrix A and transforms it to a new matrix A′ that satisfies: g(A′) ≥ exp(−O(min(k, ℓ) log n)) g(A). Furthermore, this transformed matrix A′ exhibits a matrix B that satisfies the guarantees: B ≤ A′, B1 ∈ Z^ℓ_{≥0}, B^T 1 ∈ Z^{[0,k]}_{≥0} and ‖A′ − B‖_1 ≤ O(ℓ). These matrices A′ and B are nice in that we can invoke the procedure create, which would output a valid distribution with the required guarantees. In the following we provide a description of the algorithm that finds these matrices A′ and B.

Algorithm 2 swapmatrixround(A)
1: Let A^(0) = A and D^(0) = 0.
2: for r = 1, ..., ℓ do
3:   (Y, j) = partialRound(A^(r−1), r)
4:   A^(r) = roundiRow(Y, j, r)
5:   D^(r) = D^(r−1) + Y − A^(r)
6: end for
7: Return A′ = D^(ℓ) + A^(ℓ) and B = A^(ℓ).

Our algorithm includes two main subroutines: partialRound and roundiRow.
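To fix ideas, here is a minimal Python skeleton of Algorithm 2; this is our sketch, not the paper's code. The callables partial_round and round_i_row stand in for the subroutines partialRound and roundiRow described next, whose implementations are deferred to the appendix.

```python
import numpy as np

def swap_matrix_round(A, partial_round, round_i_row):
    """Skeleton of Algorithm 2 (swapmatrixround). Returns (A_prime, B)
    with B = A^(ell) and A_prime = D^(ell) + A^(ell)."""
    ell = A.shape[0]
    A_r = A.astype(float).copy()   # A^(r), updated each iteration
    D = np.zeros_like(A_r)         # accumulates the mass removed by rounding
    for r in range(ell):           # rows indexed 0..ell-1 here, 1..ell in the paper
        Y, j = partial_round(A_r, r)   # fix up row r by swap operations
        A_r = round_i_row(Y, j, r)     # round row r's sum down to an integer
        D += Y - A_r                   # record the removed mass
    return D + A_r, A_r
```

Each iteration removes at most one unit of ℓ1 mass (Lemma 4.11), so ‖A′ − B‖_1 = ‖D^(ℓ)‖_1 ≤ ℓ ∈ O(s), matching the guarantee of Lemma 4.7.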
At each iteration i, the procedure partialRound considers row i and modifies it by repeatedly applying the swap operation. This modified row is nice in that the procedure roundiRow can round it to have an integral row sum while not affecting the rows in [i − 1]. By iterating through all rows, we get the required matrices A′ and B that satisfy the required guarantees. In the remainder, we formally state the guarantees achieved by the procedures partialRound and roundiRow.

Lemma 4.10. The algorithm partialRound takes as inputs X ∈ R^{ℓ×[0,k]}_{≥0} and i ∈ [ℓ − 1] that satisfy the following: [X1]_{i′} ∈ Z_{≥0} for all i′ ∈ [1, i − 1] and [X^T 1]_j ∈ Z_{≥0} for all j ∈ [0, k]. It outputs a matrix Y ∈ R^{ℓ×[0,k]}_{≥0} and an index j′ such that:
• Y is within 3-swap distance from X.
• Y_{ij′} ≥ o and Σ_{i′=1}^{i−1} Y_{i′j′} + Y_{ij′} − o ∈ Z_{≥0}, where o = [X1]_i − ⌊[X1]_i⌋.
Furthermore, the running time of the algorithm is O(ℓk).

Note that by Lemma 4.5, if Y is within 3-swap distance from X, then Y1 = X1 and Y^T 1 = X^T 1.

Lemma 4.11. The algorithm roundiRow takes as inputs Y ∈ R^{ℓ×[0,k]}_{≥0}, a column index j ∈ [0, k] and a row index i ∈ [ℓ − 1] such that: Y^T 1 ∈ Z^{[0,k]}_{≥0}, Y_{ij} ≥ o and Σ_{i′=1}^{i−1} Y_{i′j} + Y_{ij} − o ∈ Z_{≥0}, where o = [Y1]_i − ⌊[Y1]_i⌋. It outputs a matrix X ∈ R^{ℓ×[0,k]}_{≥0} such that:
• X ≤ Y and ‖X − Y‖_1 ≤ 1.
• [X1]_{i′} = [Y1]_{i′} for all i′ ∈ [i − 1], [X1]_i ∈ Z_{≥0}, and X^T 1 ∈ Z^{[0,k]}_{≥0}.

We defer the description of all the missing procedures and proofs to the appendix.

5 Proof of Main Result (Theorem 2.3)

Here we put together the results from the previous sections to prove Theorem 2.3.

Proof of Theorem 2.3. Algorithm 1 achieves the guarantees of Theorem 2.3. In the remainder of the proof, we combine the guarantees of each step of the algorithm to prove the theorem. Toward this end, we first show the following two inequalities: X_final ∈ Z^φ_{R_final} and g(X_final) ≥ exp(−O(min(k, |R|) log n)) g(X).

By Lemma 4.1, Line 1 of Algorithm 1 returns a solution X ∈ Z^{φ,frac}_R that satisfies
C_φ · g(X) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q, φ). (7)

By Lemma 4.2, Line 2 of Algorithm 1 takes input X and outputs X′ such that
X′ ∈ Z^{φ,frac}_R and g(X′) ≥ g(X), (8)
and |{i ∈ [ℓ] | [X′1]_i > 0}| ≤ k + 1. As the matrix X′ has at most k + 1 non-zero rows, without loss of generality we can assume |R| ≤ k + 1 (by discarding zero rows). As X′ ∈ Z^{φ,frac}_R, the matrix X′ has integral column sums, and by invoking Lemma 4.7 with parameters s = |R| and t = k + 1 we get matrices A and B that satisfy the guarantees of Lemma 4.7. As [A1]_i = [X′1]_i for all i ∈ [ℓ], [A^T 1]_j = [X′^T 1]_j for all j ∈ [0, k] and X′ ∈ Z^{φ,frac}_R, we immediately get that A ∈ Z^{φ,frac}_R. Further note that A is within O(|R|) = O(min(|R|, k))-swap distance from X′, and by Lemma 4.6 we get that g(A) ≥ exp(−O(min(|R|, k) log n)) g(X′). To summarize, we showed the following inequalities:
A ∈ Z^{φ,frac}_R and g(A) ≥ exp(−O(min(|R|, k) log n)) g(X′). (9)

Note that Lemma 4.7 also outputs a matrix B that satisfies: B ≤ A, B1 ∈ Z^ℓ, B^T 1 ∈ Z^{[0,k]} and ‖A − B‖_1 ≤ O(min(|R|, k)). These matrices A and B satisfy the conditions of Lemma 4.8 with parameter value t = O(min(|R|, k)). Therefore, the procedure create takes as input the matrices A, B and returns a solution (X_final, R_final) such that |R_final| ≤ |R| + min(|R|, k) ≤ 2|R| and
X_final ∈ Z^φ_{R_final} and g(X_final) ≥ exp(−O(min(|R|, k) log n)) g(A). (10)
As X_final ∈ Z^φ_{R_final}, by Definition 3.3 and Lemma 3.2 the distribution p′ satisfies:
P(p′, φ) ≥ exp(−O(min(k, |R_final|) log n)) C_φ g(X_final)
≥ exp(−O(min(k, |R|) log n)) C_φ g(A)
≥ exp(−O(min(k, |R|) log n)) C_φ g(X′)
≥ exp(−O(min(k, |R|) log n)) C_φ g(X)
≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q, φ).
In the second inequality we used Equation (10) and |R_final| ≤ 2|R|. In the third, fourth and fifth inequalities, we used Equation (9), Equation (8) and Equation (7) respectively.

Recall that we need a distribution that approximately maximizes max_{q∈∆^D_R} P(q/‖q‖_1, φ) instead of just max_{q∈∆^D_R} P(q, φ). In the remainder of the proof we provide a procedure to output such a distribution. For any constant c > 0, let c·R := {c·r_i | r_i ∈ R}. For any q ∈ ∆^D_R, as ‖q‖_1 satisfies r_min ≤ ‖q‖_1 ≤ 1, we get that
max_{q∈∆^D_R} P(q/‖q‖_1, φ) = max_{c∈[1,1/r_min]_R} max_{q∈∆^D_{c·R}} P(q, φ). (11)
The above expression holds as the maximizer q* of the left hand side satisfies q* ∈ ∆^D_{(1/‖q*‖_1)·R}. Define C := {(1 + β)^i}_{i∈[a]} for some β ∈ o(1), where a ∈ O((1/β) log(1/r_min)) is such that r_min(1 + β)^a = 1. For any constant c ∈ [1, 1/r_min]_R, note that there exists a constant c′ ∈ C such that c(1 − β) ≤ c′ ≤ c. Furthermore, for any distribution q ∈ ∆^D_R with ‖q‖_1 = 1/c, note that the distribution q′ = c′q ∈ ∆^D_{c′·R} and satisfies: P(q/‖q‖_1, φ) = P(c·q, φ) = P((c/c′)q′, φ) = (c/c′)^n P(q′, φ). Therefore we get that P(q′, φ) = (c′/c)^n P(q/‖q‖_1, φ) ≥ (1 − β)^n P(q/‖q‖_1, φ) ≥ exp(−2βn) P(q/‖q‖_1, φ). Combining this analysis with Equation (11) we get that
max_{c∈C} max_{q∈∆^D_{c·R}} P(q, φ) ≥ exp(−2βn) max_{q∈∆^D_R} P(q/‖q‖_1, φ). (12)
For each c > 0, as |R| = |c·R|, our algorithm (Algorithm 1) returns a distribution p_c that satisfies
P(p_c, φ) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_{c·R}} P(q, φ).
Let p* be the distribution that achieves the maximum objective value of our convex program among the distributions {p_c}_{c∈C}. Then note that p* satisfies: P(p*, φ) ≥ exp(−O(min(k, |R|) log n) − 2βn) max_{q∈∆^D_R} P(q/‖q‖_1, φ). Substituting β = min(k, |R|)/n in the previous expression, we get
P(p*, φ) ≥ exp(−O(min(k, |R|) log n)) max_{q∈∆^D_R} P(q/‖q‖_1, φ).
As each of our distributions p_c (including p*) has the number of distinct probability values upper bounded by 2|R|, by applying Lemma 4.9 we get a pseudo-distribution q ∈ ∆^D_R with the desired guarantees. The final run time of our algorithm is O(|C| · T_1) ∈ O((n/min(k, |R|)) · T_1), where T_1 is the time to implement Algorithm 1. Further note that by Lemma 3.1, without loss of generality we can assume |R| ≤ n/k. As all the lines of Algorithm 1 are polynomial in n, our final running time follows from the run times of each line, and we conclude the proof.

Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their valuable feedback. Researchers on this project were supported by an Amazon Research Award, a Dantzig-Lieberman Operations Research Fellowship, a Google Faculty Research Award, a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research gift, a Simons-Berkeley Research Fellowship, a Simons Investigator Award, a Sloan Research Fellowship and a Stanford Data Science Scholarship.
1. What is the focus and contribution of the paper regarding profile maximum likelihood?
2. What are the strengths and weaknesses of the proposed approach, particularly in its efficiency and theoretical limits?
3. Do you have any concerns or questions regarding the paper's content, such as definitions, references, and lemmas?
4. Are there any limitations or potential negative social impacts associated with the authors' work that they should address?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper studies the now classical method of profile maximum likelihood (PML) and provides an efficient estimator that works for tiny errors below n^{−1/4}, where n is the sample size.

Strengths And Weaknesses
The strength is that n^{−1/3} is one type of theoretical limit: for any plug-in estimator, a symmetric functional exists for which the plug-in estimator is suboptimal. This point is also a weakness, since prior work already provided efficient estimators for any error above n^{−1/4}; closing this tiny gap between n^{−1/3} and n^{−1/4} might be incremental. I like that the proofs are relatively short compared to prior works. However, this may rely on the fact that existing results can serve as lemmas. It is not entirely clear what a 'sample optimal universal estimator' means in the paper. I would say it's better if the authors can provide a concrete & mathematical definition for this. Also, it doesn't seem that [ADOS16] establishes the PML estimator's optimality beyond the four mentioned functionals. And [HO19] showed that PML is sample-optimal for a "Lipschitz" functional class, which should appear around line 28. Lemma 2.4 is simply incorrect: Theorem 3 in [HS20] shows the existence of a sublinear estimator with no sample optimality guarantees for functionals like 1-Lipschitz. It seems that [HO19] presented something close to the desired claim. I didn't check the proofs in the appendix. But besides the above error, looking at the references also raised concerns about this paper's rigorousness. For example, the citation for [HS20] provides no more than the publishing year (how about which journal/conference?). Maybe this would hint that the paper was completed in a rush.

Questions
It would be great if the authors could address the improper definitions, references, and lemmas mentioned above.

Limitations
Page 5 presents a section on the limitations of the proposed algorithm, as solving the involved convex program can be problematic in the large-sample regime. Maybe the authors can also describe some theoretical limitations, if any. I couldn't find much discussion on the potential negative social impact of the authors' work.
NIPS
Title On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood Abstract We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given n independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error n−1/3. This result improves upon the previous best accuracy threshold of n−1/4 achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every 1-Lipschitz property when n−1/3. N/A We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given n independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error n−1/3. This result improves upon the previous best accuracy threshold of n−1/4 achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every 1-Lipschitz property when n−1/3. 1 Introduction Given n independent samples y1, ..., yn ∈ D from an unknown discrete distribution p ∈ ∆D the problem of estimating properties of p, e.g. entropy, distance to uniformity, support size and coverage are among the most fundamental in statistics and learning. Further, the problem of estimating symmetric properties of distributions p (i.e. properties invariant to label permutations) are well studied and have numerous applications [Cha84, BF93, CCG+12, TE87, Für05, KLR99, PBG+01, DS13, RCS+09, GTPB07, HHRB01]. Over the past decade, symmetric property estimation has been studied extensively and there have been many improvements to the time and sample complexity for estimating different properties, e.g. support [VV11b, WY15], coverage [ZVV+16, OSW16], entropy [VV11b, WY16, JVHW15], and distance to uniformity [VV11a, JHW16]. Towards unifying the attainment of computationallyefficient, sample-optimal estimators a striking work of [ADOS17] provided a universal plug-in approach based on a (approximate) profile maximum likelihood (PML) distribution, that (approximately) maximizes the likelihood of the observed profile (i.e. multiset of observed frequencies). Formally, [ADOS17] showed that given y1, ..., yn if there exists an estimator for a symmetric property f achieving accuracy and failure probability δ, then this PML-based plug-in approach achieves error 2 with failure probability δ exp (3 √ n). As the failure probability δ for many estimators for well-known properties (e.g. support size and coverage, entropy, and distance to uniformity) is roughly exp (− 2n), this result implied a sample optimal unified approach for estimating these properties when the estimation error n−1/4. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). This result of [ADOS17] laid the groundwork for a line of work on the study of computational and statistical aspects of PML-based approaches to symmetric property estimation. 
For example, follow up work of [HS21] improved the analysis of [ADOS17] and showed that the failure probability of PML is at most δ1−c exp(−n1/3+c), for any constant c > 0 and therefore it is sample optimal in the regime n−1/3. The condition n−1/3 on the optimality of PML is tight [Han21], in the sense that, PML is known to be not sample optimal in the regime n−1/3. In fact, no estimator (that obeys some mild conditions), is sample optimal for estimating all symmetric properties in the regime n−1/3; see Section 2 after Theorem 2.6 for details. We also remark that the statistical guarantees in [ADOS17, HS21] hold for any β-approximate PML1 for suitable values of β. In particular, [HS21] showed that any β-approximate PML for β > exp(−n1−c′) and any constant c′ > 0, has a failure probability of δ1−c exp(n1/3+c + n1−c′) for any constant c > 0. These results further imply a sample optimal estimator in the regime n−min(1/3,c′/2) for properties with failure probability less than exp(− 2n). Note that better approximation leads to a larger range of for which the estimator is sample optimal. Regarding computational aspects of PML, [CSS19a] provided the first efficient algorithm with a non-trivial approximation guarantee of exp(−n2/3 log n), which further implied a sample optimal universal estimator for n−1/6. This was then improved by [ACSS21] which showed how to efficiently compute PML to higher accuracy of exp(− √ n log n) thereby achieving a sample optimal universal estimator in the regime n−1/4. The current best polynomial time approximate PML algorithm by [ACSS20] achieves an accuracy of exp(−k log n), where k is the number of distinct observed frequencies. Although this result achieves better instance based statistical guarantees, in the worst case it still only implies a sample optimal universal estimator in the regime n−1/4. In light of these results, a key open problem is to close the gap between the regimes n−1/3 and n−1/4, where the former is the regime in which PML based estimators are statistically optimal and the later is the regime where efficient PML based estimators exist. In this work we ask: Is there an efficient approximate PML-based estimator that is sample optimal for n−1/3. In this paper, we answer this question in the affirmative. In particular, we give an efficient PML-based estimator that has failure probability at most δ1−c exp(n1/3+c + n1−c ′ ), and consequently is sample optimal in the regime n−1/3. As remarked, this result is tight in the sense that PML and a broad class of estimators are known to be not optimal in the regime n−1/3. To obtain this result we depart slightly from the previous approaches in [ADOS17, CSS19a, ACSS21]. Rather than directly compute an approximate PML distribution we compute a weaker notion of approximation which we show suffices to get us the desired universal estimator. We propose a notion of a β-weak approximate PML distribution inspired by [HS21] and show that an exp(−n1/3 log n)weak approximate PML achieves the desired failure probability of δ1−c exp(n1/3+c) for any constant c > 0. Further, we provide an efficient algorithm to compute an exp(−n1/3 log n)-weak approximate PML distribution. Our paper can be viewed as an efficient algorithmic instantiation of [HS21]. Ultimately, our algorithms use the convex relaxation presented in [CSS19a, ACSS21] and provide a new rounding algorithm. 
We differ from the previous best exp(−k log n) approximate PML algorithm [ACSS20] only in the matrix rounding procedure, which controls the approximation guarantee. At a high level, the approximation guarantee for the rounding procedure in [ACSS20] is exponential in the sum of the matrix dimensions. In the present work, we need to round a rectangular matrix with an approximation exponential in the smaller dimension, which may be infeasible for arbitrary matrices. Our key technical innovation is to introduce a swap operation (see Section 4.1) which facilitates such an approximation guarantee. In addition to a better approximation guarantee than [ACSS20], our algorithm also exhibits better run times (see Section 2).

Organization: We introduce preliminaries in Section 1.1. In Section 2, we state our main results and also cover related work. In Section 3, we provide the convex relaxation to PML studied in [CSS19a, ACSS21]. Finally, in Section 4, we provide a proof sketch of our main computational result. Many proofs are deferred to the appendix.

¹β-approximate PML is a distribution that achieves a multiplicative β-approximation to the PML objective.

1.1 Preliminaries

General notation: For matrices A, B ∈ R^{s×t}, we use A ≤ B to denote that A_{ij} ≤ B_{ij} for all i ∈ [s] and j ∈ [t]. We let [a, b] and [a, b]_R denote the integers and the reals, respectively, that are at least a and at most b. We use Õ(·), Ω̃(·) notation to hide all polylogarithmic factors in n and N. We write a_n ≫ b_n to denote that a_n ∈ Ω(b_n n^c), or equivalently b_n ∈ O(n^{−c} a_n), for some small constant c > 0. Throughout this paper, we assume we receive a sequence of n independent samples from a distribution p ∈ ∆^D, where ∆^D def= {q ∈ [0, 1]^D_R | ‖q‖₁ = 1} is the set of all discrete distributions supported on domain D. Let D^n be the set of all length-n sequences of elements of D, and for y^n ∈ D^n let y^n_i denote its ith element. Let f(y^n, x) def= |{i ∈ [n] | y^n_i = x}| and p_x be the frequency and probability of x ∈ D, respectively. For a sequence y^n ∈ D^n, let M = {f(y^n, x)}_{x∈D} \ {0} be the set of all its non-zero distinct frequencies and m₁, m₂, ..., m_{|M|} be these distinct frequencies. The profile of a sequence y^n, denoted φ = Φ(y^n), is a vector in Z^{|M|}, where φ_j def= |{x ∈ D | f(y^n, x) = m_j}| is the number of domain elements with frequency m_j. We call n the length of profile φ and let Φ^n denote the set of all profiles of length n. The probabilities of observing sequence y^n and profile φ for a distribution p are

P(p, y^n) = ∏_{x∈D} p_x^{f(y^n, x)} and P(p, φ) = ∑_{y^n∈D^n : Φ(y^n)=φ} P(p, y^n).

Profile maximum likelihood: A distribution p_φ ∈ ∆^D is a profile maximum likelihood (PML) distribution for profile φ ∈ Φ^n if p_φ ∈ argmax_{p∈∆^D} P(p, φ). Further, a distribution p_φ^β is a β-approximate PML distribution if P(p_φ^β, φ) ≥ β · P(p_φ, φ). For a distribution p and a length n, let X be a random variable that takes value φ ∈ Φ^n with probability P(p, φ). We call H(X) (the entropy of X) the profile entropy with respect to (p, n) and denote it by H(Φ^n, p).

Probability discretization: Let R def= {r_i}_{i∈[ℓ]} be a finite discretization of the probability space, where r_i ∈ [0, 1]_R and ℓ def= |R|. We call q ∈ [0, 1]^D_R a pseudo-distribution if ‖q‖₁ ≤ 1, and a discrete pseudo-distribution with respect to R if all its entries are in R as well. We use ∆^D_pseudo and ∆^D_R to denote the set of all pseudo-distributions and of all discrete pseudo-distributions with respect to R, respectively. In our work, we use the following probability discretization set, the one most commonly used [CSS19a, ACSS21, ACSS20]: for any α > 0,

R_{n,α} def= {1} ∪ { (1/(2n²))(1 + n^{−α})^i | i ∈ Z_{≥0} such that (1/(2n²))(1 + n^{−α})^i ≤ 1 }.   (1)
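As a quick sanity check on the size of this grid, the following Python sketch (ours; the function name is illustrative) constructs R_{n,α} exactly as in Equation (1):

    def discretization_set(n, alpha):
        # Geometric grid from 1/(2 n^2) up to 1 with ratio (1 + n^{-alpha}),
        # plus the point 1 itself; its size grows roughly like n^alpha * log(n).
        base, ratio = 1.0 / (2 * n * n), 1.0 + n ** (-alpha)
        R, r = [], base
        while r <= 1.0:
            R.append(r)
            r *= ratio
        return R + [1.0]

    # For n = 10**4 and alpha = 1/3, len(discretization_set(n, alpha)) is a
    # few hundred, consistent with |R_{n,1/3}| being of order n^{1/3} up to
    # logarithmic factors.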
For all probability terms defined involving distributions p, we extend those definitions to pseudo-distributions q by replacing p_x with q_x everywhere. See the appendix for the definition of an estimator and of optimal sample complexity.

2 Results

Here we provide our main results. In our first result (Theorem 2.2), we show that a weaker notion of approximate PML suffices to obtain the desired universal estimator. Later we show that these weaker approximate PML distributions can be efficiently computed (Theorem 2.3).

Definition 2.1. Given a profile φ, we call a distribution p′ ∈ ∆^D a β-approximate PML distribution with respect to R if P(p′, φ) ≥ β · max_{q∈∆^D_R} P(q/‖q‖₁, φ).

The above definition generalizes β-approximate PML distributions, which are simply the special case when R = [0, 1]_R. Using our new definition, we show that for a specific choice of the discretization set, namely R_{n,1/3}, a distribution p′ that is an approximate PML with respect to R_{n,1/3} suffices to obtain a universal estimator; this result is formally stated below.

Theorem 2.2 (Competitiveness of an approximate PML w.r.t. R). For a symmetric property f, suppose there exists an estimator f̂ that takes as input a profile φ ∈ Φ^n drawn from p ∈ ∆^D and satisfies P(|f(p) − f̂(φ)| ≥ ε) ≤ δ. Then for R = R_{n,1/3} (see Equation (1)), a discrete pseudo-distribution q′ ∈ ∆^D_R such that q′/‖q′‖₁ is an exp(−O(|R| log n))-approximate PML distribution with respect to R satisfies

P( |f(q′/‖q′‖₁) − f(p)| ≥ 2ε ) ≤ δ^{1−c} exp(O(n^{1/3+c})), for any constant c > 0.   (2)

The proof of the above theorem is implicit in the analysis of [HS21]; however, we provide a short, simpler proof using their continuity lemma (Lemma 2 in [HS21]). Note that the bound on the failure probability we get is asymptotically the same as that of exact PML from [HS21], which is known to be tight [Han21]. Furthermore, to achieve such an improved failure probability bound, all we need is an approximate PML distribution with respect to R, for some R of small size. Taking advantage of this fact and building upon [CSS19a, ACSS21], we provide a new rounding algorithm that outputs the desired approximate PML distribution with respect to R.

Theorem 2.3 (Computation of an approximate PML w.r.t. R). We provide an algorithm that, given a probability discretization set R = R_{n,α} for α > 0 (see Equation (1)) and a profile φ with k distinct frequencies, runs in time

Õ( |R| + (n/min(k, |R|)) · ( min(|R|, n/k)·k^ω + min(|R|, k)·k² ) ),

where ω < 2.373 is the current matrix multiplication constant [Wil12, Gal14, AW21], and returns a pseudo-distribution q′ ∈ ∆^D_R such that

P(q′/‖q′‖₁, φ) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q/‖q‖₁, φ).

When R = R_{n,1}, our algorithm computes an exp(−O(k log n))-approximate PML distribution; therefore our result is at least as good as the previous best known approximate PML algorithm due to [ACSS20]. In comparison to [ACSS20], our rounding algorithm is simpler and, we suspect, more practical. We provide a more detailed comparison to it later in this section.

Applications: Our main results have several applications, which we discuss here. First note that combining Theorems 2.2 and 2.3 immediately yields the following corollary.

Corollary 2.4 (Efficient unified estimator).
Given a profile φ ∈ Φ^n with k distinct frequencies, we can compute an approximate PML distribution q′ that satisfies Equation (2) in Theorem 2.2 in time

Õ( (n/min(k, n^{1/3})) · ( min(n^{1/3}, n/k)·k^ω + min(n^{1/3}, k)·k² ) ).

For many symmetric properties the failure probability is exponentially small, as stated below.

Lemma 2.5 (Lemma 2 in [ADOS17], Theorem 3 in [HS21]). For distance to uniformity, entropy, support size and coverage, and sorted ℓ₁ distance, there exists an estimator that is sample optimal and whose failure probability is at most exp(−ε²n^{1−α}) for any constant α > 0.

The above result combined with Corollary 2.4 immediately yields the following theorem.

Theorem 2.6 (Efficient sample optimal unified estimator). There exists an efficient approximate PML-based estimator that, for ε ≫ n^{−1/3} and symmetric properties such as distance to uniformity, entropy, support size and coverage, and sorted ℓ₁ distance, achieves optimal sample complexity and has failure probability upper bounded by exp(−n^{1/3}).

As our work computes an exp(−O(k log n))-approximate PML, we recover efficient versions of Lemma 2.3 and Theorem 2.4 from [ACSS20]. The first result uses the exp(−O(k log n))-approximate PML algorithm to efficiently implement an estimator that has better statistical guarantees based on profile entropy [HO20] (see Section 1.1). The second result provides an efficient implementation of the PseudoPML estimators [CSS19b, HO19]. Please refer to the respective papers for further details.

Tightness of our result: Recall that [HS21] showed that the failure probability of an (approximate) PML-based estimator is upper bounded by δ^{1−c} exp(n^{1/3+c}), for any constant c > 0. This result further implied a sample optimal universal estimator in the regime ε ≫ n^{−1/3} for various symmetric properties (Theorem 2.6). In our work, we efficiently recover these results, and a natural question to ask here is whether these results can be improved. As remarked earlier, [Han21] showed that the condition for optimality of PML (ε ≫ n^{−1/3}) is in some sense tight. More formally, they showed that PML is not sample optimal in estimating every 1-Lipschitz property in the regime ε ≪ n^{−1/3}. In fact, the results in [Han21] hold more broadly for any universal plug-in based estimator that outputs a distribution p̂ satisfying

max_{p∈∆^D} E‖p − p̂‖₁^{sorted} ≤ A(n)·√(k/n),

where A(n) ≤ n^γ for every γ > 0 and ‖p − q‖₁^{sorted} def= min over permutations σ of ‖p − q_σ‖₁ denotes the sorted ℓ₁ distance between p and q. In other words, if an estimator is based on a reasonably good estimate of the true distribution p (in terms of sorted ℓ₁ distance), then it cannot be sample optimal for every 1-Lipschitz property. Furthermore, many well-known universal estimators, including PML and LMM [HJW18], indeed provide a reasonably good estimate of the true distribution and therefore cannot be sample optimal in the regime ε ≪ n^{−1/3}. Please refer to [Han21] for further details.

Comparison to approximate PML algorithms: All prior provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] have two key steps: (Step 1) solve a convex approximation to the PML, and (Step 2) round the (fractional) solution to a valid approximate PML distribution. A convex approximation to PML was first provided in [CSS19a], and a better analysis for it is shown in [ACSS21].
In particular, [CSS19a] and [ACSS21] showed that an integral optimal solution to Step 1 approximates the PML up to accuracy exp(−n^{2/3} log n) and exp(−min(k, |R|) log n), respectively, where k and |R| are the number of distinct frequencies and of distinct probability values, respectively. In addition to the loss from the convex approximation, the previous algorithms also incurred a loss in the rounding step (Step 2). The loss in the rounding step for the previous works is bounded by exp(−n^{2/3} log n) [CSS19a], exp(−√n log n) [ACSS21] and exp(−k log n) [ACSS20]. In our work, we show that there exists a choice of R (= R_{n,1/3}) that is of small size (|R| ∈ Õ(n^{1/3})) and suffices to get the desired universal estimator. As |R| ∈ Õ(n^{1/3}), our approach only incurs a loss of exp(−min(k, |R|) log n) ∈ exp(−Õ(n^{1/3}) log n) in the convex approximation step (Step 1). Furthermore, for the rounding step (Step 2), we provide a new, simpler and practical rounding algorithm with a better approximation loss of exp(−O(min(k, |R|) log n)) ∈ exp(−O(n^{1/3} log n)).

Regarding the run times, both [ACSS20] and our algorithm have run times of the form T_solve + T_sparsify + T_round, where the terms correspond to the time required to solve the convex program, and to sparsify and round a solution. In our algorithm, we pay the same cost as [ACSS20] for the first two steps, but our run time guarantees are superior to theirs in the rounding step. In particular, the run time of [ACSS20] is a large polynomial and perhaps not practical, as their approach requires enumerating all the approximate min-cuts. In contrast, our algorithm has a run time that is subquadratic.

Other related work: PML was introduced by [OSS+04]. Many heuristic approaches have been proposed to compute an approximate PML, such as the EM algorithm in [OSS+04], an algebraic approach in [ADM+10], Bethe approximation in [Von12] and [Von14], and a dynamic programming approach in [PJW17]. For the broad applicability of PML in property testing and for estimating other symmetric properties, please refer to [HO19]. Please refer to [HO20] for details related to profile entropy. Other approaches for designing universal estimators are: [VV11b] based on [ET76], [HJW18] based on local moment matching, and variants of PML by [CSS19b, HO19] that weakly depend on the target property that we wish to estimate. Optimal sample complexities for estimating many symmetric properties were also obtained by constructing property-specific estimators, e.g. support [VV11b, WY15], support coverage [OSW16, ZVV+16], entropy [VV11b, WY16, JVHW15], distance to uniformity [VV11a, JHW16], sorted ℓ₁ distance [VV11a, HJW18], Rényi entropy [AOST14, AOST17], KL divergence [BZLV16, HJW16] and others.

Limitations of our work: One limitation of all the provable approximate PML algorithms [CSS19a, ACSS21, ACSS20] (including ours) is that they require the solution of a convex program that approximates the PML objective, and all these previous works use the CVX solver, which is not practical for large sample instances; note that our results hold for small error regimes, which lead to such large sample instances. Therefore, designing a practical algorithm to solve the convex program is an important future research direction. As discussed above, the local moment matching (LMM) based approach is another universal approach for property estimation. It is unclear which of the two (PML or LMM) can lead to practical algorithms.

3 Convex relaxation to PML

Here we restate the convex program from [CSS19a] that approximates the PML objective.
The current best analysis of this convex program is in [ACSS21]. We first describe the notation and later state several results from [CSS19a, ACSS21] that capture the guarantees of the convex program.

Notation: For any matrices X ∈ R^{a×c} and Y ∈ R^{b×c}, we let concat(X, Y) denote the matrix W ∈ R^{(a+b)×c} that satisfies W_{i,j} = X_{i,j} for all i ∈ [a], j ∈ [c], and W_{a+i,j} = Y_{ij} for all i ∈ [b], j ∈ [c]. Recall that we let R def= {r_i}_{i∈[ℓ]} be a finite discretization of the probability space, where r_i ∈ [0, 1]_R and ℓ def= |R|. Let r ∈ [0, 1]^ℓ_R be the vector whose ith element is equal to r_i, and let 1 denote the all-ones vector.

Lemma 3.1 (Lemma 4.4 in [CSS19a]). Let R = R_{n,α} for some α > 0. For any profile φ ∈ Φ^n and distribution p ∈ ∆^D, there exists a pseudo-distribution q ∈ ∆^D_R that satisfies P(p, φ) ≥ P(q, φ) ≥ exp(−αn − 6)·P(p, φ), and therefore

max_{p∈∆^D} P(p, φ) ≥ max_{q∈∆^D_R} P(q, φ) ≥ exp(−αn − 6) · max_{p∈∆^D} P(p, φ).

For any probability discretization set R, profile φ and pseudo-distribution q ∈ ∆^D_R, define:

Z^φ_R def= { X ∈ R^{ℓ×[0,k]}_{≥0} | X·1 ∈ Z^ℓ, [X^⊤·1]_j = φ_j for all j ∈ [1, k], and r^⊤X·1 ≤ 1 },   (3)

Z^{φ,frac}_R def= { X ∈ R^{ℓ×[0,k]}_{≥0} | [X^⊤·1]_j = φ_j for all j ∈ [1, k], and r^⊤X·1 ≤ 1 }.   (4)

The jth column corresponds to frequency m_j, and we use m₀ def= 0 to capture the unseen elements. Without loss of generality, we assume m₀ < m₁ < · · · < m_k. Let C_{ij} def= m_j log r_i for all i ∈ [ℓ] and j ∈ [0, k]. The objective of the optimization problem is as follows: for any X ∈ R^{ℓ×[0,k]}_{≥0}, define

g(X) def= exp( ∑_{i∈[ℓ], j∈[0,k]} [C_{ij}X_{ij} − X_{ij} log X_{ij}] + ∑_{i∈[ℓ]} [X·1]_i log [X·1]_i ).   (5)

For any q ∈ ∆^D_R, the function g(X) approximates the term P(q, φ), as stated below.

Lemma 3.2 (Theorem 6.7 and Lemma 6.9 in [ACSS21]). Let R be a probability discretization set. For any profile φ ∈ Φ^n with k distinct frequencies, the following statements hold for α = min(k, |R|) log n:

exp(−O(α)) · C_φ · max_{X∈Z^φ_R} g(X) ≤ max_{q∈∆^D_R} P(q, φ) ≤ exp(O(α)) · C_φ · max_{X∈Z^φ_R} g(X), and

max_{q∈∆^D_R} P(q, φ) ≤ exp(O(min(k, |R|) log n)) · C_φ · max_{X∈Z^{φ,frac}_R} g(X),

where C_φ def= n! / ∏_{j∈[1,k]} (m_j!)^{φ_j} is a term that only depends on the profile.²

The proof of concavity for the function g(X) and a running time analysis to solve the convex program are provided in [CSS19a]. For any X ∈ Z^φ_R, an associated pseudo-distribution is defined below.

Definition 3.3. For any X ∈ Z^φ_R, the discrete pseudo-distribution q_X associated with X and R is defined as follows: to an arbitrary set of [X·1]_i domain elements, assign probability r_i. Further, p_X def= q_X/‖q_X‖₁ is the distribution associated with X and R.

Note that q_X is a valid pseudo-distribution because of the third condition in Equation (3), and these pseudo-distributions p_X and q_X satisfy the following lemma.

Lemma 3.4 (Theorem 6.7 in [ACSS21]). Let R and φ ∈ Φ^n be a probability discretization set and a profile with k distinct frequencies. For any X ∈ Z^φ_R, the discrete pseudo-distribution q_X and distribution p_X associated with X and R satisfy:

exp(−O(k log n)) · C_φ · g(X) ≤ P(q_X, φ) ≤ P(p_X, φ).

²The theorem statement in [ACSS21] is only written with an approximation factor of exp(O(k log n)). However, their proof provides a stronger approximation factor, which is upper bounded by the non-negative rank of the probability matrix, which in turn is upper bounded by the minimum of the number of distinct frequencies and of distinct probabilities. Therefore the theorem statement in [ACSS21] holds with a much stronger approximation guarantee of exp(O(min(k, |R|) log n))).
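Before moving on to the rounding algorithm, the following NumPy sketch (ours; variable names are illustrative) evaluates the concave objective log g(X) of Equation (5), i.e., the quantity that Line 1 of the algorithm in the next section approximately maximizes over Z^{φ,frac}_R:

    import numpy as np

    def log_g(X, m, r):
        # X: nonnegative matrix of shape (len(r), k+1); m = (m_0, ..., m_k)
        # with m_0 = 0; r = (r_1, ..., r_l). Uses the convention 0 log 0 = 0.
        X = np.asarray(X, dtype=float)
        C = np.outer(np.log(r), m)  # C_ij = m_j log r_i
        xlogx = lambda v: np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)
        row_sums = X.sum(axis=1)
        return (C * X - xlogx(X)).sum() + xlogx(row_sums).sum()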
4 Approximate PML algorithm

Here we provide a proof sketch of Theorem 2.3 and the rounding algorithm that proves it. Our rounding algorithm takes as input a matrix X ∈ Z^{φ,frac}_R, which may have fractional row sums, and rounds it to integral values. This new rounded matrix X_final corresponds to our approximate PML distribution (see Definition 3.3). The description of our algorithm is as follows.

Algorithm 1 ApproximatePML(φ, R = R_{n,α})
1: Let X be any solution that satisfies log g(X) ≥ max_{Y∈Z^{φ,frac}_R} log g(Y) − O(min(k, |R|) log n).
2: X′ = sparsify(X).
3: (A, B) = swapmatrixround(X′).
4: (X_final, R_final) = create(A, B, R).
5: Let p′ be the distribution with respect to X_final and R_final (see Definition 3.3).
6: Return q = discretize(p′, φ, R).

We now provide a guarantee for each of these lines of Algorithm 1. We later use these guarantees to prove our final theorem (Theorem 2.3). The guarantees of the approximate maximizer X computed in the first step of the algorithm are summarized in the following lemma.

Lemma 4.1 ([CSS19a, ACSS21]). Line 1 of the algorithm can be implemented in Õ(|R|k² + |R|²k) time, and the approximate maximizer X satisfies:

C_φ · g(X) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q, φ).

The guarantees of the second step of our algorithm are summarized in the following lemma. Please refer to [ACSS20] for the description of the procedure sparsify. We use this procedure so that we can assume |R| ≤ k + 1, as we can ignore the zero rows of the matrix X.

Lemma 4.2 (Lemma 4.3 in [ACSS20]). For any X ∈ Z^{φ,frac}_R, the algorithm sparsify(X) runs in time Õ(|R|k^ω) and outputs X′ ∈ Z^{φ,frac}_R such that g(X′) ≥ g(X) and |{i ∈ [ℓ] | [X′·1]_i > 0}| ≤ k + 1.

To explain our next step, we need to define a new operation called the swap.

Definition 4.3. Given a matrix A, indices i₁ < i₂, j₁ < j₂ and a parameter ε ≥ 0, the operation swap(A, i₁, i₂, j₁, j₂, ε) outputs a matrix A′ that satisfies

A′_{ij} = A_{ij} + ε for (i, j) = (i₁, j₁); A_{ij} − ε for (i, j) = (i₁, j₂); A_{ij} − ε for (i, j) = (i₂, j₁); A_{ij} + ε for (i, j) = (i₂, j₂); and A_{ij} otherwise.   (6)

Definition 4.4 (Swap distance). A′ is at x-swap distance from A if A′ can be obtained from A through a sequence of swap operations in which the sum of the values ε is at most x; i.e., there is a set of parameters {(i₁^(s), i₂^(s), j₁^(s), j₂^(s), ε^(s))}_{s∈[t]}, with ∑_{s∈[t]} ε^(s) ≤ x, such that A^(s) = swap(A^(s−1), i₁^(s), i₂^(s), j₁^(s), j₂^(s), ε^(s)) for s ∈ [t], where A^(0) = A and A^(t) = A′.

The following lemma directly follows from Definitions 4.3 and 4.4.

Lemma 4.5. For any matrices A, A′ ∈ R^{s×t}, if A′ is at x-swap distance from A for some x ≥ 0, then A′·1 = A·1 and A′^⊤·1 = A^⊤·1.

Recall that our objective g(X) contains two terms: (1) the linear term ∑_{i∈[ℓ], j∈[0,k]} C_{ij}X_{ij}, and (2) the entropy term ∑_{i∈[ℓ]} [X·1]_i log [X·1]_i − ∑_{i∈[ℓ], j∈[0,k]} X_{ij} log X_{ij}. The swap operation always increases the first term, and in the following lemma we bound the loss due to the second term.

Lemma 4.6. If A′ ∈ R^{ℓ×[0,k]} is at x-swap distance from A ∈ Z^{φ,frac}_R, then A′ ∈ Z^{φ,frac}_R and g(A′) ≥ exp(−O(x log n))·g(A).

One of the main contributions of our work is the following lemma, where we repeatedly apply the swap operation to recover a matrix A′ that exhibits several nice properties, as stated below.

Lemma 4.7. For any matrix A ∈ R^{s×t} (s ≤ t) that satisfies A^⊤·1 ∈ Z^t_{≥0}, the algorithm swapmatrixround runs in O(s²t) time and returns matrices A′ and B such that:
• A′ is at O(s)-swap distance from A, A′·1 = A·1 and A′^⊤·1 = A^⊤·1.
• 0 ≤ B_{ij} ≤ A′_{ij} for all i ∈ [s] and j ∈ [t], B·1 ∈ Z^s_{≥0}, B^⊤·1 ∈ Z^t_{≥0}, and ‖A′ − B‖₁ ≤ O(s).

The above lemma helps us modify our matrix X into a new matrix A that we can round using the create procedure. The guarantees of this procedure are summarized below.

Lemma 4.8 (Lemma 6.13 in [ACSS21]). Let A ∈ Z^{φ,frac}_R ⊆ R^{ℓ×[0,k]}_{≥0} and B ∈ R^{ℓ×[0,k]}_{≥0} be such that B ≤ A, B·1 ∈ Z^ℓ, B^⊤·1 ∈ Z^{[0,k]}, and ‖A − B‖₁ ≤ t. The algorithm create(A, B, R) runs in time O(ℓk) and returns a solution A′ and a probability discretization set R′ such that |R′| ≤ |R| + min(k + 1, t), A′ ∈ Z^φ_{R′}, and g(A′) ≥ exp(−O(t log n))·g(A).

As our final goal is to return a distribution in ∆^D_R, we also use the following discretization lemma.

Lemma 4.9. The function discretize takes as input a distribution p ∈ ∆^D with ℓ′ distinct probability values, a profile φ, and a discretization set of the form R = R_{n,α} for some α > 0, and outputs a pseudo-distribution q ∈ ∆^D_R such that:

P(q/‖q‖₁, φ) ≥ exp(−O((min(k, |R|) + min(k, ℓ′) + α²n) log n)) · P(p, φ).

In Section 5, we use the guarantees stated above for each line of Algorithm 1 to prove Theorem 2.3. The description of the function discretize is specified in the proof of Lemma 4.9. We describe the procedure swapmatrixround and provide a proof sketch of Lemma 4.7 in Section 4.1.

4.1 Description of swapmatrixround and comparison to [ACSS20]

Here we describe the procedure swapmatrixround and compare our rounding algorithm to [ACSS20]. Both [ACSS20] and our approximate PML algorithm have four main lines (1–4); we differ from [ACSS20] in the key Line 3. This line in [ACSS20] invokes a procedure called matrixround that takes as input a matrix A ∈ R^{ℓ×[0,k]} and outputs a matrix B ∈ R^{ℓ×[0,k]} such that B ≤ A, B·1 ∈ Z^ℓ_{≥0}, B^⊤·1 ∈ Z^{[0,k]}_{≥0}, and ‖A − B‖₁ ≤ O(ℓ + k). Such a matrix B is crucial, as the procedure create uses B to round the fractional row sums of matrix A to integral values. The loss incurred in these two steps is at most exp(−O(‖A − B‖₁ log n)) ∈ exp(−O((ℓ + k) log n)). As the procedure sparsify allows us to assume ℓ ≤ k + 1, we get an exp(−k log n)-approximate PML using [ACSS20]. However, the setting that we are interested in is when ℓ ≪ k; for instance, when ℓ ∈ O(n^{1/3}) and k ∈ Θ(√n). In these settings, we desire an exp(−O(min(ℓ, k) log n)) ∈ exp(−O(ℓ log n))-approximate PML. In order to get such an improved approximation using [ACSS20], we would need a matrix B satisfying the earlier mentioned inequalities along with ‖A − B‖₁ ≤ O(min(k, ℓ)). However, such a matrix B may not exist for arbitrary matrices A, and the best guarantee any algorithm can achieve is ‖A − B‖₁ ∈ O(ℓ + k). To overcome this, we introduce a new procedure called swapmatrixround that takes as input a matrix A and transforms it into a new matrix A′ satisfying g(A′) ≥ exp(−O(min(k, ℓ) log n))·g(A). Furthermore, this transformed matrix A′ exhibits a matrix B that satisfies the guarantees B ≤ A′, B·1 ∈ Z^ℓ_{≥0}, B^⊤·1 ∈ Z^{[0,k]}_{≥0}, and ‖A′ − B‖₁ ≤ O(ℓ). These matrices A′ and B are nice in that we can invoke the procedure create, which outputs a valid distribution with the required guarantees. In the following, we provide a description of the algorithm that finds these matrices A′ and B.

Algorithm 2 swapmatrixround(A)
1: Let A^(0) = A and D^(0) = 0.
2: for r = 1 . . . ℓ do
3:   (Y, j) = partialRound(A^(r−1), r)
4:   A^(r) = roundiRow(Y, j, r).
5:   D^(r) = D^(r−1) + Y − A^(r).
6: end for
7: Return A′ = D^(ℓ) + A^(ℓ) and B = A^(ℓ).

Our algorithm includes two main subroutines, partialRound and roundiRow; a minimal sketch of the swap primitive that they build on is given below.
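For concreteness, here is a small Python sketch (ours) of the swap primitive of Definition 4.3; the final assertions check the marginal-preservation property of Lemma 4.5.

    import numpy as np

    def swap(A, i1, i2, j1, j2, eps):
        # Add eps at (i1, j1) and (i2, j2); subtract eps at (i1, j2) and (i2, j1).
        B = np.array(A, dtype=float)
        B[i1, j1] += eps; B[i2, j2] += eps
        B[i1, j2] -= eps; B[i2, j1] -= eps
        return B

    A = np.array([[0.4, 0.6], [0.6, 0.4]])
    B = swap(A, 0, 1, 0, 1, 0.1)
    assert np.allclose(A.sum(axis=0), B.sum(axis=0))  # column sums unchanged
    assert np.allclose(A.sum(axis=1), B.sum(axis=1))  # row sums unchanged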
At each iteration i, the procedure partialRound considers row i and modifies it by repeatedly applying the swap operation. This modified row is nice in that the procedure roundiRow can round it to have an integral row sum while not affecting the rows in [i − 1]. By iterating through all rows, we get the required matrices A′ and B that satisfy the required guarantees. In the remainder, we formally state the guarantees achieved by the procedures partialRound and roundiRow.

Lemma 4.10. The algorithm partialRound takes as inputs X ∈ R^{ℓ×[0,k]}_{≥0} and i ∈ [ℓ − 1] satisfying [X·1]_{i′} ∈ Z_{≥0} for all i′ ∈ [1, i − 1] and [X^⊤·1]_j ∈ Z_{≥0} for all j ∈ [0, k], and outputs a matrix Y ∈ R^{ℓ×[0,k]}_{≥0} and an index j′ such that:
• Y is within 3-swap distance from X.
• Y_{ij′} ≥ o and ∑_{i′=1}^{i−1} Y_{i′j′} + Y_{ij′} − o ∈ Z_{≥0}, where o = [X·1]_i − ⌊[X·1]_i⌋.
Furthermore, the running time of the algorithm is O(ℓk).

Note that by Lemma 4.5, if Y is within 3-swap distance from X, then Y·1 = X·1 and Y^⊤·1 = X^⊤·1.

Lemma 4.11. The algorithm roundiRow takes as inputs Y ∈ R^{ℓ×[0,k]}_{≥0}, a column index j ∈ [0, k] and a row index i ∈ [ℓ − 1] such that Y^⊤·1 ∈ Z^{[0,k]}_{≥0}, Y_{ij} ≥ o and ∑_{i′=1}^{i−1} Y_{i′j} + Y_{ij} − o ∈ Z_{≥0}, where o = [Y·1]_i − ⌊[Y·1]_i⌋. It outputs a matrix X ∈ R^{ℓ×[0,k]}_{≥0} such that:
• X ≤ Y and ‖X − Y‖₁ ≤ 1.
• [X·1]_{i′} = [Y·1]_{i′} for all i′ ∈ [i − 1], [X·1]_i ∈ Z_{≥0}, and X^⊤·1 ∈ Z^{[0,k]}_{≥0}.

We defer the description of all the missing procedures and proofs to the appendix.

5 Proof of Main Result (Theorem 2.3)

Here we put together the results from the previous sections to prove Theorem 2.3.

Proof of Theorem 2.3. Algorithm 1 achieves the guarantees of Theorem 2.3. In the remainder of the proof, we combine the guarantees of each step of the algorithm to prove the theorem. Toward this end, we first show the following two inequalities: X_final ∈ Z^φ_{R_final} and g(X_final) ≥ exp(−O(min(k, |R|) log n))·g(X).

By Lemma 4.1, Line 1 of Algorithm 1 returns a solution X ∈ Z^{φ,frac}_R that satisfies

C_φ · g(X) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q, φ).   (7)

By Lemma 4.2, Line 2 of Algorithm 1 takes input X and outputs X′ such that

X′ ∈ Z^{φ,frac}_R and g(X′) ≥ g(X),   (8)

and |{i ∈ [ℓ] | [X′·1]_i > 0}| ≤ k + 1. As the matrix X′ has at most k + 1 non-zero rows, without loss of generality we can assume |R| ≤ k + 1 (by discarding zero rows). As the matrix X′ ∈ Z^{φ,frac}_R, we have that X′ has integral column sums, and by invoking Lemma 4.7 with parameters s = |R| and t = k + 1, we get matrices A and B that satisfy the guarantees of Lemma 4.7. As [A·1]_i = [X′·1]_i for all i ∈ [ℓ], [A^⊤·1]_j = [X′^⊤·1]_j for all j ∈ [0, k], and X′ ∈ Z^{φ,frac}_R, we immediately get that A ∈ Z^{φ,frac}_R. Further note that A is within O(|R|) = O(min(|R|, k))-swap distance from X′, and by Lemma 4.6 we get that g(A) ≥ exp(−O(min(|R|, k) log n))·g(X′). To summarize, we showed the following inequalities:

A ∈ Z^{φ,frac}_R and g(A) ≥ exp(−O(min(|R|, k) log n))·g(X′).   (9)

Note that Lemma 4.7 also outputs a matrix B that satisfies B ≤ A, B·1 ∈ Z^ℓ, B^⊤·1 ∈ Z^{[0,k]}, and ‖A − B‖₁ ≤ O(min(|R|, k)). These matrices A and B satisfy the conditions of Lemma 4.8 with parameter value t = O(min(|R|, k)). Therefore, the procedure create takes as input the matrices A, B and returns a solution (X_final, R_final) such that |R_final| ≤ |R| + min(|R|, k) ≤ 2|R| and

X_final ∈ Z^φ_{R_final} and g(X_final) ≥ exp(−O(min(|R|, k) log n))·g(A).   (10)
As X_final ∈ Z^φ_{R_final}, by Definition 3.3 and Lemma 3.2, the distribution p′ satisfies

P(p′, φ) ≥ exp(−O(min(k, |R_final|) log n))·C_φ·g(X_final)
≥ exp(−O(min(k, |R|) log n))·C_φ·g(A)
≥ exp(−O(min(k, |R|) log n))·C_φ·g(X′)
≥ exp(−O(min(k, |R|) log n))·C_φ·g(X)
≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q, φ).

In the second inequality we used Equation (10) and |R_final| ≤ 2|R|. In the third, fourth and fifth inequalities, we used Equations (9), (8) and (7), respectively.

Recall that we need a distribution that approximately maximizes max_{q∈∆^D_R} P(q/‖q‖₁, φ) instead of just max_{q∈∆^D_R} P(q, φ). In the remainder of the proof we provide a procedure to output such a distribution. For any constant c > 0, let c·R def= {c·r_i | r_i ∈ R}. For any q ∈ ∆^D_R, as ‖q‖₁ satisfies r_min ≤ ‖q‖₁ ≤ 1, we get that

max_{q∈∆^D_R} P(q/‖q‖₁, φ) = max_{c∈[1, 1/r_min]_R} max_{q∈∆^D_{c·R}} P(q, φ).   (11)

The above expression holds as the maximizer q* of the left hand side satisfies q* ∈ ∆^D_{(1/‖q*‖₁)·R}. Define C def= {(1 + β)^i}_{i∈[a]} for some β ∈ o(1), where a ∈ O((1/β) log(1/r_min)) is such that r_min(1 + β)^a = 1. For any constant c ∈ [1, 1/r_min]_R, note that there exists a constant c′ ∈ C such that c(1 − β) ≤ c′ ≤ c. Furthermore, for any distribution q ∈ ∆^D_R with ‖q‖₁ = 1/c, note that the distribution q′ = c′q ∈ ∆^D_{c′·R} and satisfies

P(q/‖q‖₁, φ) = P(c·q, φ) = P((c/c′)·q′, φ) = (c/c′)^n · P(q′, φ).

Therefore we get that P(q′, φ) = (c′/c)^n · P(q/‖q‖₁, φ) ≥ (1 − β)^n · P(q/‖q‖₁, φ) ≥ exp(−2βn)·P(q/‖q‖₁, φ). Combining this analysis with Equation (11), we get that

max_{c∈C} max_{q∈∆^D_{c·R}} P(q, φ) ≥ exp(−2βn) · max_{q∈∆^D_R} P(q/‖q‖₁, φ).   (12)

For each c > 0, as |R| = |c·R|, our algorithm (Algorithm 1) returns a distribution p_c that satisfies

P(p_c, φ) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_{c·R}} P(q, φ).

Let p* be the distribution that achieves the maximum objective value of our convex program among the distributions {p_c}_{c∈C}. Then note that p* satisfies P(p*, φ) ≥ exp(−O(min(k, |R|) log n) − 2βn) · max_{q∈∆^D_R} P(q/‖q‖₁, φ). Substituting β = min(k, |R|)/n in the previous expression, we get

P(p*, φ) ≥ exp(−O(min(k, |R|) log n)) · max_{q∈∆^D_R} P(q/‖q‖₁, φ).

As each of our distributions p_c (including p*) has its number of distinct probability values upper bounded by 2|R|, by applying Lemma 4.9 we get a pseudo-distribution q ∈ ∆^D_R with the desired guarantees. The final run time of our algorithm is O(|C|·T₁) ∈ Õ((n/min(k, |R|))·T₁), where T₁ is the time needed to implement Algorithm 1. Further note that by Lemma 3.1, without loss of generality we can assume |R| ≤ n/k. As all the lines of Algorithm 1 take time polynomial in n, our final running time follows from the run times of each line, and we conclude the proof.

Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their valuable feedback. Researchers on this project were supported by an Amazon Research Award, a Dantzig-Lieberman Operations Research Fellowship, a Google Faculty Research Award, a Microsoft Research Faculty Fellowship, NSF CAREER Award CCF-1844855, NSF Grant CCF-1955039, a PayPal research gift, a Simons-Berkeley Research Fellowship, a Simons Investigator Award, a Sloan Research Fellowship and a Stanford Data Science Scholarship.
Review

Summary Of The Paper
The submission considers the problem of estimating some symmetric properties of finitely-supported discrete distributions. The paper is interested in a very specific type of such methods, an "approximate profile maximum likelihood (PML) estimator", a variant of MLE optimized over a more compactly defined objective. The paper concerns a highly technical question: can any efficient, approximate PML estimator be sample competitive in the regime ϵ ≫ n^{−1/3}? Maybe I should summarize a few preliminaries as well. "Sample-competitive" for an estimator means that its failure probability matches the standard exp(−Ω(nϵ²)) failure probability. The exact PML estimator is shown to be sample-optimal in the ϵ ≫ n^{−1/3} regime, and cannot be sample-optimal below the n^{−1/3} threshold. Existing approximate PML estimators are shown to be sample optimal in the regime ϵ ≫ n^{−1/4}. Can they tighten this to n^{−1/3}? The authors answer this question positively, closing the problem (I guess). For this, the paper improves the algorithm proposed in [ACSS20, ACSS21]. The main technical innovation seems to be a certain swap operator, which is hard for me to follow further.

Strengths And Weaknesses
It seems that the paper is mostly an extension of [ACSS21], but I feel that the paper is not sufficiently self-contained. Overall the presentation is very hard to follow. It might be just because I am not familiar with the problem, but even taking that into account, in my opinion the authors should have taken more care of broader audiences, given that the scope of the paper is very narrow and specific. For instance, Sections 3 and 4 are extremely technical and very hard to follow. Most of the results (theorems and lemmas) seem to come from [ACSS21] without much explanation. I could not evaluate the merit of the paper further; I can only take the authors' claims on trust.

Questions
Can you comment explicitly in the introduction on why we care about an approximate PML estimator? It would also have been helpful to draw more contrast between this paper and the previous work in [ACSS21], e.g., why the previous work only applies when ϵ ≫ n^{−1/4}.

Limitations
I do not see any negative societal impact.
Title: Matrix Completion with Hierarchical Graph Side Information

Abstract: We consider a matrix completion problem that exploits social or item similarity graphs as side information. We develop a universal, parameter-free, and computationally efficient algorithm that starts with hierarchical graph clustering and then iteratively refines estimates of both the graph clustering and the matrix ratings. Under a hierarchical stochastic block model that well respects practically-relevant social graphs and a low-rank rating matrix model (to be detailed), we demonstrate that our algorithm achieves the information-theoretic limit on the number of observed matrix entries (i.e., the optimal sample complexity), derived by maximum likelihood estimation together with a lower-bound impossibility result. One consequence of this result is that exploiting the hierarchical structure of social graphs yields a substantial gain in sample complexity relative to an approach that simply identifies different groups without resorting to the relational structure across them. We conduct extensive experiments both on synthetic and real-world datasets to corroborate our theoretical results as well as to demonstrate significant performance improvements over other matrix completion algorithms that leverage graph side information.

1 Introduction

Recommender systems have been powerful in a widening array of applications for providing users with relevant items of their potential interest [1]. A prominent, well-known technique for operating such systems is low-rank matrix completion [2–18]: given partially observed entries of a matrix of interest, the goal is to predict the values of the missing entries. One challenge that arises in the big data era is the so-called cold start problem, in which high-quality recommendations are not feasible for new users/items that bear little or no information. One natural and popular way to address this challenge is to exploit other available side information. Motivated by the social homophily theory [19], according to which users within the same community are more likely to share similar preferences, social networks such as Facebook's friendship graph have often been employed to improve the quality of recommendation. While there has been a proliferation of social-graph-assisted recommendation algorithms [1, 20–40], few works were dedicated to developing theoretical insights on the usefulness of side information, and therefore the maximum gain due to side information has been unknown. A few recent efforts have been made from an information-theoretic perspective [41–44]. Ahn et al. [41] identified the maximum gain by characterizing the optimal sample complexity of matrix completion in the presence of graph side information under a simple setting in which there are two clusters and users within each cluster share the same ratings over items. A follow-up work [42] extended this to an arbitrary number of clusters while maintaining the same-rating-vector assumption per user in each cluster. While [41, 42] lay out the theoretical foundation for the problem, the assumption of a single rating vector per cluster limits the practicality of the considered model.
In an effort to make further progress on theoretical insights, and motivated by [45], we consider a more generalized setting in which each cluster exhibits another sub-clustering structure, with each sub-cluster (which we call a "group") being represented by a different rating vector that is nonetheless intimately related to the other rating vectors within the same cluster. More specifically, we focus on a hierarchical graph setting wherein users are categorized into two clusters, each of which comprises three groups in which rating vectors are broadly similar yet distinct, subject to a linear subspace of two basis vectors.

Contributions: Our contributions are twofold. First, we characterize the information-theoretic sharp threshold on the minimum number of observed matrix entries required for reliable matrix completion, as a function of the quantified quality (to be detailed) of the considered hierarchical graph side information. The second, yet more practically-appealing, contribution is to develop a computationally efficient algorithm that achieves the optimal sample complexity for a wide range of scenarios. One implication of this result is that our algorithm, fully utilizing the hierarchical graph structure, yields a significant gain in sample complexity compared to a simple variant of [41, 42] that does not exploit the relational structure across the rating vectors of groups. Technical novelty and algorithmic distinctions also come in the process of exploiting the hierarchical structure; see Remarks 2 and 3. Our experiments conducted on both synthetic and real-world datasets corroborate our theoretical results as well as demonstrate the efficacy of our proposed algorithm.

Related works: In addition to the initial works [41, 42], more generalized settings have been taken into consideration in distinct directions. Zhang et al. [43] explore a setting in which both social and item similarity graphs are given as side information, thus demonstrating a synergistic effect due to the availability of two graphs. Jo et al. [44] go beyond binary matrix completion to investigate a setting in which a matrix entry, say the (i, j)-entry, denotes the probability of user i picking item j as the most preferable, chosen from a known finite set of probabilities. Recently, a so-called dual problem has been explored in which clustering is performed with a partially observed matrix as side information [46, 47]. Ashtiani et al. [46] demonstrate that the use of side information given in the form of pairwise queries plays a crucial role in making an NP-hard clustering problem tractable via an efficient k-means algorithm. Mazumdar et al. [47] characterize the optimal sample complexity of clustering in the presence of similarity matrix side information together with the development of an efficient algorithm. One distinction of our work compared to [47] is that we are interested in both clustering and matrix completion, while [47] only focused on finding the clusters, from which the rating matrix cannot necessarily be inferred. Our problem can be viewed as the prominent low-rank matrix completion problem [1–4, 6–18], which has been considered notoriously difficult. Even for simple scenarios such as rank-1 or rank-2 matrix settings, the optimal sample complexity has been open for decades, although some upper and lower bounds have been derived. The matrix of our consideration in this work is of rank 4.
Hence, in this regard, we make progress on this long-standing open problem by exploiting the structural property posed by our considered application. The statistical model that we consider for the theoretical guarantees of our proposed algorithm relies on the Stochastic Block Model (SBM) [48] and its hierarchical counterpart [49–52], which have been shown to well respect many practically-relevant scenarios [53–56]. Also, our algorithm builds in part upon prominent clustering [57, 58] and hierarchical clustering [51, 52] algorithms, although it exhibits a notable distinction in other matrix-completion-related procedures together with their corresponding technical analyses.

Notations: Row vectors and matrices are denoted by lowercase and uppercase letters, respectively. Random matrices are denoted by boldface uppercase letters, while their realizations are denoted by uppercase letters. Sets are denoted by calligraphic letters. Let 0_{m×n} and 1_{m×n} be the all-zero and all-one matrices of dimension m × n, respectively. For an integer n ≥ 1, [n] indicates the set of integers {1, 2, . . . , n}. Let {0, 1}^n be the set of all binary vectors of length n. The Hamming distance between two binary vectors u and v is denoted by d_H(u, v) := ‖u ⊕ v‖₀, where ⊕ stands for the modulo-2 addition operator. Let 1[·] denote the indicator function. For a graph G = (V, E) and two disjoint subsets X and Y of V, e(X, Y) indicates the number of edges between X and Y.

2 Problem Formulation

Setting: Consider a rating matrix with n users and m items. Each user rates the m items by a binary vector, where 0/1 components denote "dislike"/"like", respectively. We assume that there are two clusters of users, say A and B. To capture the low rank of the rating matrix, we assume that each user's rating vector within a cluster lies in a linear subspace of two basis vectors. Specifically, let v_1^A ∈ F_2^{1×m} and v_2^A ∈ F_2^{1×m} be the two linearly-independent basis vectors of cluster A. Then the users in cluster A can be split into three groups (say G_1^A, G_2^A and G_3^A) based on their rating vectors. More precisely, we denote by G_i^A the set of users whose rating vector is v_i^A for i = 1, 2. Finally, the remaining users of cluster A form group G_3^A, and their rating vector is v_3^A = v_1^A ⊕ v_2^A (a linear combination of the basis vectors). Similarly, we have v_1^B, v_2^B and v_3^B = v_1^B ⊕ v_2^B for cluster B. For presentational simplicity, we assume equal-sized groups (each being of size n/6), although our algorithm (to be presented in Section 4) allows for any group size, and our theoretical guarantees (to be presented in Theorem 2) hold as long as the group sizes are order-wise the same. Let M ∈ F_2^{n×m} be the rating matrix wherein the ith row corresponds to user i's rating vector. We find the Hamming distance instrumental in expressing our main results (to be stated in Section 3) as well as in proving the main theorems. Let δ_g be the minimum normalized Hamming distance among distinct pairs of groups' rating vectors within the same cluster: δ_g = (1/m) min_{c∈{A,B}} min_{i≠j∈[3]} d_H(v_i^c, v_j^c). Also let δ_c be the counterpart w.r.t. pairs of rating vectors across different clusters: δ_c = (1/m) min_{i,j∈[3]} d_H(v_i^A, v_j^B), and define δ := {δ_g, δ_c}. We partition all the possible rating matrices into subsets depending on δ. Let M(δ) be the set of rating matrices subject to δ.

Problem of interest: Our goal is to estimate a rating matrix M ∈ M(δ) given two types of information: (1) partial ratings Y ∈ {0, 1, ∗}^{n×m}; (2) a graph, say a social graph G.
Here ∗ indicates no observation, and we denote the set of observed entries of Y by Ω, that is, Ω = {(r, c) ∈ [n] × [m] : Y_{rc} ≠ ∗}. Below is a list of assumptions made for the analysis of the optimal sample complexity (Theorem 1) and the theoretical guarantees of our proposed algorithm (Theorem 2), but not for the algorithm itself. We assume that each element of Y is observed with probability p ∈ [0, 1], independently of the others, and its observation can possibly be flipped with probability θ ∈ [0, 1/2). Let the social graph G = ([n], E) be an undirected graph, where E denotes the set of edges, each capturing the social connection between the two associated users. The set [n] of vertices is partitioned into two disjoint clusters, each being further partitioned into three disjoint groups. We assume that the graph follows the hierarchical stochastic block model (HSBM) [51, 59] with three types of edge probabilities: (i) α indicates the edge probability between two users in the same group; (ii) β denotes the one w.r.t. two users of different groups yet within the same cluster; (iii) γ is associated with two users of different clusters. We focus on realistic scenarios in which users within the same group (or cluster) are more likely to be connected, as per the social homophily theory [19]: α ≥ β ≥ γ.

Performance metric: Let ψ be a rating matrix estimator that takes (Y, G) as an input, yielding an estimate. As a performance metric, we consider the worst-case probability of error:

P_e^{(δ)}(ψ) := max_{M∈M(δ)} P[ψ(Y, G) ≠ M].   (1)

Note that M(δ) is the set of ground-truth matrices M subject to δ := {δ_g, δ_c}. Since the error probability may vary depending on different choices of M (i.e., some matrices may be harder to estimate), we employ a conventional minimax approach wherein the goal is to minimize the maximum error probability. We characterize the optimal sample complexity for reliable exact matrix recovery, concentrated around nmp∗ in the limit of n and m. Here p∗ indicates the sharp threshold on the observation probability: (i) above which the error probability can be made arbitrarily close to 0 in the limit; and (ii) under which P_e^{(δ)}(ψ) ↛ 0 regardless of the estimator.

3 Optimal sample complexity

We first present the optimal sample complexity characterized under the considered model. We find that an intuitive and insightful expression can be made via the quality of the hierarchical social graph, which can be quantified by the following: (i) I_g := (√α − √β)² represents the capability of separating distinct groups within a cluster; (ii) I_{c1} := (√α − √γ)² and I_{c2} := (√β − √γ)² capture the clustering capabilities of the social graph. Note that the larger these quantities, the easier grouping/clustering becomes. Our sample complexity result is formally stated below as a function of (I_g, I_{c1}, I_{c2}). As in [41], we make the same assumption on m and n that turns out to ease the proof via prominent large deviation theories: m = ω(log n) and log m = o(n). This assumption is also practically relevant as it rules out highly asymmetric matrices.

Theorem 1 (Information-theoretic limits). Assume that m = ω(log n) and log m = o(n). Let the item ratings be drawn from a finite field F_q. Let c and g denote the number of clusters and groups, respectively. Within each cluster, let the set of g rating vectors be spanned by any r ≤ g vectors in the same set. Define p∗ as

p∗ := 1/(√(1−θ) − √(θ/(q−1)))² · max{ (gc/(g−r+1))·(log m)/n, (log n − (n/(gc))·I_g)/(δ_g·m), (log n − (n/(gc))·I_{c1} − ((g−1)n/(gc))·I_{c2})/(δ_c·m) }.   (2)

Fix ε > 0.
If p ≥ (1 + ε)p∗, then there exists a sequence of estimators ψ satisfying lim_{n→∞} P_e^{(δ)}(ψ) = 0. Conversely, if p ≤ (1 − ε)p∗, then lim_{n→∞} P_e^{(δ)}(ψ) ≠ 0 for any ψ.

Setting (c, g, r, q) = (2, 3, 2, 2), the bound in (2) reduces to

p∗ = 1/(√(1−θ) − √θ)² · max{ 3·(log m)/n, (log n − (1/6)n·I_g)/(m·δ_g), (log n − (1/6)n·I_{c1} − (1/3)n·I_{c2})/(m·δ_c) },   (3)

which is the optimal sample complexity of the problem formulated in Section 2.

Proof. We provide the proof sketch for (c, g, r) = (2, 3, 2) and defer the complete proof for this case to the supplementary material. The extension to general (c, g, r) is a natural generalization of the analysis for the parameters (c, g, r) = (2, 3, 2). The achievability proof is based on maximum likelihood estimation (MLE). We first evaluate the likelihood for a given clustering/grouping of users and the corresponding rating matrix. We then show that if p ≥ (1 + ε)p∗, the likelihood is maximized only by the ground-truth rating matrix in the limit of n: lim_{n→∞} P_e^{(δ)}(ψ_ML) = 0. For the converse (impossibility) proof, we first establish a lower bound on the error probability, and show that it is minimized when employing the maximum likelihood estimator. Next we prove that if p is smaller than any of the three terms in the RHS of (3), then there exists another solution that yields a larger likelihood compared to the ground-truth matrix. More precisely, if p ≤ (1 − ε)·3 log m/((√(1−θ) − √θ)²·n), we can find a grouping with the only distinction in two user-item pairs relative to the ground truth, yet yielding a larger likelihood. Similarly, when p ≤ (1 − ε)(log n − (1/6)n·I_g)/((√(1−θ) − √θ)²·m·δ_g), consider two users in the same cluster yet from distinct groups such that the Hamming distance between their rating vectors is mδ_g. We can then show that a grouping in which their rating vectors are swapped provides a larger likelihood. Similarly, when p ≤ (1 − ε)(log n − (1/6)n·I_{c1} − (1/3)n·I_{c2})/((√(1−θ) − √θ)²·m·δ_c), we can swap the rating vectors of two users from different clusters with a Hamming distance of mδ_c, and get a greater likelihood. The technical distinctions w.r.t. the prior works [41, 42] are threefold: (i) the likelihood computation requires more involved combinatorial arguments due to the hierarchical structure; (ii) sophisticated upper/lower bounding techniques are developed in order to exploit the relational structure across different groups; (iii) delicate choices are made for the two users to be swapped in the converse proof.

We next present the second, yet more practically-appealing, contribution: our proposed algorithm in Section 4 achieves the information-theoretic limits. The algorithm's optimality is guaranteed for a certain yet wide range of scenarios in which graph information yields negligible clustering/grouping errors, formally stated below. We provide the proof outline in Section 4 throughout the description of the algorithm, leaving details to the supplementary material.

Theorem 2 (Theoretical guarantees of the proposed algorithm). Assume that m = ω(log n), log m = o(n), m = O(n), I_{c2} > 2 log n/n and I_g = ω(1/n). Then, as long as the sample size is beyond the optimal sample complexity in Theorem 1 (i.e., mnp > mnp∗), the algorithm presented in Section 4 with T = O(log n) iterations ensures that the worst-case error probability tends to 0 as n → ∞. That is, the algorithm returns M̂ such that P[M̂ = M] = 1 − o(1).

Theorem 1 establishes the optimal sample complexity (the number of entries of the rating matrix to be observed) to be mnp∗, where p∗ is given in (3); a small numerical sketch of the three terms in (3) is given below.
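The following Python sketch (ours; names are illustrative) evaluates the three terms inside the max of (3); whichever term is largest identifies the operating regime discussed next.

    import math

    def p_star_terms(n, m, theta, dg, dc, Ig, Ic1, Ic2):
        pref = 1.0 / (math.sqrt(1 - theta) - math.sqrt(theta)) ** 2
        t1 = 3 * math.log(m) / n                                   # perfect regime
        t2 = (math.log(n) - n * Ig / 6) / (m * dg)                 # grouping-limited
        t3 = (math.log(n) - n * Ic1 / 6 - n * Ic2 / 3) / (m * dc)  # clustering-limited
        return pref * max(t1, t2, t3), (t1, t2, t3)

    # For n = 1000, m = 500, theta = 0 and strong graph quality (large Ig,
    # Ic1, Ic2), t2 and t3 go negative, t1 dominates, and m*n*p_star equals
    # 3 m log m.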
The required sample complexity is a non-increasing function of δ_g and δ_c. This makes intuitive sense because increasing δ_g (or δ_c) yields more distinct rating vectors, thus ensuring easier grouping (or clustering). We emphasize three regimes depending on (I_g, I_{c1}, I_{c2}). The first refers to the so-called perfect clustering/grouping regime, in which (I_g, I_{c1}, I_{c2}) are large enough, thereby activating the 1st term in the max function. The second is the grouping-limited regime, in which the quantity I_g is not large enough, so that the 2nd term becomes dominant. The last is the clustering-limited regime, where the 3rd term is activated. A few observations are in order. For illustrative simplicity, we focus on the noiseless case, i.e., θ = 0.

Remark 1 (Perfect clustering/grouping regime). The optimal sample complexity reads 3m log m. This result is interesting. A naive generalization of [41, 42] requires 4m log m, as we have four rating vectors (v_1^A, v_2^A, v_1^B, v_2^B) to estimate and each requires m log m observations under our random sampling, due to the coupon-collecting effect. On the other hand, we exploit the relational structure across the rating vectors of different groups, reflected in v_3^A = v_1^A ⊕ v_2^A and v_3^B = v_1^B ⊕ v_2^B; and we find this serves to estimate (v_1^A, v_2^A, v_1^B, v_2^B) more efficiently, precisely by a factor-of-4/3 improvement, thus yielding 3m log m. This exploitation is reflected as novel technical contributions in the converse proof, as well as in the achievability proofs of MLE and the proposed algorithm.

Remark 2 (Grouping-limited regime). We find that the sample complexity (n log n − (1/6)n²·I_g)/δ_g in this regime coincides with that of [42]. This implies that exploiting the relational structure across different groups does not help improve the sample complexity when grouping information is not reliable.

Remark 3 (Clustering-limited regime). This is the most challenging scenario, which has not been explored by any prior works. The challenge is actually reflected in the complicated sample complexity formula: (n log n − (1/6)n²·I_{c1} − (1/3)n²·I_{c2})/δ_c. When β = γ, i.e., groups and clusters are not distinguishable, I_g = I_{c1} and I_{c2} = 0. Therefore, in this case, it indeed reduces to a 6-group setting: (n log n − (1/6)n²·I_g)/δ_c. The only distinction appears in the denominator: we read δ_c instead of δ_g due to the different rating vectors across clusters and groups. When I_{c2} ≠ 0, the formula becomes more complicated, reflecting a non-trivial technical contribution as well.

Fig. 1 depicts the different regimes of the optimal sample complexity as a function of (I_g, I_{c2}) for n = 1000, m = 500 and θ = 0. In Fig. 1a, where δ_g = 1/3 and δ_c = 1/6, the region depicted by diagonal stripes corresponds to the perfect clustering/grouping regime. Here, I_g and I_{c2} are large, and the graph information is rich enough to perfectly retrieve the clusters and groups. In this regime, the 1st term in (3) dominates. The region shown by dots corresponds to the grouping-limited regime, where the 2nd term in (3) is dominant. In this regime, the graph information suffices to exactly recover the clusters, but we need to rely on rating observations to exactly recover the groups. Finally, the 3rd term in (3) dominates in the region captured by horizontal stripes. This indicates the clustering-limited regime, where neither clustering nor grouping is exact without the side information of the rating vectors.
It is worth noting that in practically-relevant systems, where δ_c > δ_g (since rating vectors of users in the same cluster are expected to be more similar than those of users in different clusters), the third regime vanishes, as shown in Fig. 1b, where δ_g = 1/7 and δ_c = 1/6. It is straightforward to show that the third term in (3) is inactive whenever δ_c > δ_g. Fig. 1c compares the optimal sample complexity reported in (3), as a function of I_g, with that of [42]. The considered setting is n = 1000, m = 500, θ = 0, δ_g = 1/3, δ_c = 1/6, γ = 0.01 and I_{c2} = 0.002. Note that [42] leverages neither the hierarchical structure of the graph nor the linear dependency among the rating vectors. Thus, in the setting of [42], the problem formulated in Section 2 translates into a graph with six clusters with linearly independent rating vectors. Also, the minimum Hamming distance for [42] is δ_c. In Fig. 1c, we can see that the noticeable gain in the sample complexity of our result in the diagonal parts of the plot (the two regimes on the left side) is due to leveraging the hierarchical graph structure, while the improvement in the sample complexity in the flat part of the plot is a consequence of exploiting the linear dependency among the rating vectors within each cluster (see Remark 1).

4 Proposed Algorithm

We propose a computationally feasible matrix completion algorithm that achieves the optimal sample complexity characterized by Theorem 1. The proposed algorithm is motivated by a line of research on iterative algorithms that solve non-convex optimization problems [6, 58, 60–70]. The idea is to first find a good initial estimate, and then successively refine this estimate until the optimal solution is reached. This approach has been employed in several problems such as matrix completion [6, 60], community recovery [58, 61–63], rank aggregation [64], phase retrieval [65, 66], robust PCA [67], the EM algorithm [68], and rating estimation in crowdsourcing [69, 70]. In the following, we describe the proposed algorithm, which consists of four phases to recover the clusters, groups and rating vectors. Then, we discuss the computational complexity of the algorithm. Recall that Y ∈ {0, +1, ∗}^{n×m}. For the sake of tractable analysis, it is convenient to map Y to Z ∈ {−1, 0, +1}^{n×m}, where the mapping of the alphabet of Y is as follows: 0 ←→ +1, +1 ←→ −1 and ∗ ←→ 0. Under this mapping, the modulo-2 addition over {0, 1} in Y is represented by the multiplication of integers over {+1, −1} in Z. Also, note that all recovery guarantees are asymptotic, i.e., they hold with high probability as n → ∞. Throughout the design and analysis of the proposed algorithm, the number and size of the clusters and groups are assumed to be known.

4.1 Algorithm Description

Phase 1 (Exact Recovery of Clusters): We use the community detection algorithm in [57] on G to exactly recover the two clusters A and B. As proved in [57], the decomposition of the graph into two clusters is correct with high probability when I_{c2} > 2 log n/n.

Phase 2 (Almost Exact Recovery of Groups): The goal of Phase 2 is to decompose the set of users in cluster A (cluster B) into three groups, namely G_1^A, G_2^A, G_3^A (or G_1^B, G_2^B, G_3^B for cluster B). It is worth noting that the grouping at this stage is almost exact, and will be further refined in the next phases. To this end, we run a spectral clustering algorithm [58] on A and B separately.
Let $\hat{G}_i^x(0)$ denote the initial estimate of the $i$th group of cluster $x$ recovered by the Phase 2 algorithm, for $i \in [3]$ and $x \in \{A, B\}$. It is shown that the groups within each cluster are recovered with a vanishing fraction of error if $I_g = \omega(1/n)$. It is worth mentioning that other clustering algorithms [62, 71-77] can be employed for this phase. Examples include spectral clustering [62, 71-74], semidefinite programming (SDP) [75], the non-backtracking matrix spectrum [76], and belief propagation [77].

Phase 3 (Exact Recovery of Rating Vectors): We propose a novel algorithm that optimally recovers the rating vectors of the groups within each cluster. The algorithm is based on maximum likelihood (ML) decoding of users' ratings given the partial and noisy observations. For this model, ML decoding boils down to a counting rule: for each item, find the group with the maximum gap between the number of observed zeros and ones, and set the rating entry of this group to 0. The other two rating vectors are either both 0 or both 1 for this item, which is determined by the majority of the union of their observed entries. It turns out that the vector recovery is exact with probability $1 - o(1)$. This is one of the technical distinctions relative to the prior works [41, 42], which employ the simple majority voting rule under non-hierarchical SBMs. Define $\hat{v}_i^x$ as the estimate of the rating vector $v_i^x$, i.e., the output of the Phase 3 algorithm. Let the $c$th element of $v_i^x$ (resp. $\hat{v}_i^x$) be denoted by $v_i^x(c)$ (resp. $\hat{v}_i^x(c)$), for $i \in [3]$, $x \in \{A, B\}$ and $c \in [m]$. Let $Y_{r,c}$ be the entry of matrix $Y$ at row $r$ and column $c$, and $Z_{r,c}$ its mapping to $\{+1, 0, -1\}$. The pseudocode of the Phase 3 algorithm is given by Algorithm 1.

Algorithm 1 Exact Recovery of Rating Vectors
1: function VECRCV $(n, m, Z, \{\hat{G}_i^x(0) : i \in [3], x \in \{A, B\}\})$
2:   for $c \in [m]$ and $x \in \{A, B\}$ do
3:     for $i \in [3]$ do $\rho_{i,x}(c) \leftarrow \sum_{r \in \hat{G}_i^x(0)} Z_{r,c}$
4:     $j \leftarrow \arg\max_{i \in [3]} \rho_{i,x}(c)$
5:     $\hat{v}_j^x(c) \leftarrow 0$
6:     if $\sum_{i \in [3] \setminus \{j\}} \rho_{i,x}(c) \geq 0$ then
7:       for $i \in [3] \setminus \{j\}$ do $\hat{v}_i^x(c) \leftarrow 0$
8:     else
9:       for $i \in [3] \setminus \{j\}$ do $\hat{v}_i^x(c) \leftarrow 1$
10:  return $\{\hat{v}_i^x : i \in [3], x \in \{A, B\}\}$

Algorithm 2 Local Iterative Refinement of Groups (set flag = 0)
1: function REFINE $(\mathrm{flag}, n, m, T, Y, Z, G, \{(\hat{G}_i^x(0), \hat{v}_i^x) : i \in [3], x \in \{A, B\}\})$
2:   $\hat{\alpha} \leftarrow \frac{1}{6\binom{n/6}{2}} \left|\{(f, g) \in E : f, g \in G_i^x,\ x \in \{A, B\},\ i \in [3]\}\right|$
3:   $\hat{\beta} \leftarrow \frac{6}{n^2} \left|\{(f, g) \in E : f \in G_i^x,\ g \in G_j^x,\ x \in \{A, B\},\ i \in [3],\ j \in [3] \setminus \{i\}\}\right|$
4:   $\hat{\theta} \leftarrow |\{(r, c) \in \Omega : Y_{rc} \neq \hat{v}_i^x(c),\ r \in \hat{G}_i^x(0)\}| / |\Omega|$
5:   for $t \in [T]$ and $x \in \{A, B\}$ do
6:     for $i \in [3]$ do $\hat{G}_i^x(t) \leftarrow \emptyset$
7:     for $r \leftarrow 1$ to $n$ do
8:       $j \leftarrow \arg\max_{i \in [3]} |\{c : Y_{r,c} = \hat{v}_i^x(c)\}| \cdot \log\left(\frac{1-\hat{\theta}}{\hat{\theta}}\right) + e\left(\{r\}, \hat{G}_i^x(t-1)\right) \cdot \log\left(\frac{(1-\hat{\beta})\hat{\alpha}}{(1-\hat{\alpha})\hat{\beta}}\right)$
9:       $\hat{G}_j^x(t) \leftarrow \hat{G}_j^x(t) \cup \{r\}$
10:    if flag == 1 then
11:      $\{\hat{v}_i^x : i \in [3], x \in \{A, B\}\} \leftarrow$ VECRCV $(n, m, Z, \{\hat{G}_i^x(t) : i \in [3], x \in \{A, B\}\})$
12:  return $\{\hat{G}_i^x(T) : i \in [3], x \in \{A, B\}\}$, $\{\hat{v}_i^x : i \in [3], x \in \{A, B\}\}$

Phase 4 (Exact Recovery of Groups): Finally, the goal is to refine the groups that are almost recovered in Phase 2 into an exact grouping. To this end, we propose an iterative algorithm that locally refines the estimates of the user grouping within each cluster for $T$ iterations. Specifically, at each iteration, the affiliation of each user is updated to the group that yields the maximum local likelihood.
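The counting rule of Algorithm 1 is compact enough to render directly in NumPy. The sketch below (our own rendering, with our own function and variable names) processes one cluster; Z is the $n \times m$ matrix over $\{+1, 0, -1\}$ and groups holds the three estimated index sets from Phase 2.

```python
import numpy as np

def vec_rcv(Z, groups):
    """Algorithm 1 for one cluster: Z over {+1, 0, -1}, groups a list of
    three index arrays. Returns a 3-by-m binary estimate of the rating vectors."""
    m = Z.shape[1]
    v_hat = np.zeros((3, m), dtype=int)
    # rho[i, c] = sum of Z over the rows of group i, for each item c;
    # a positive sum means observed zeros outnumber observed ones.
    rho = np.stack([Z[g].sum(axis=0) for g in groups])
    for c in range(m):
        j = int(np.argmax(rho[:, c]))
        v_hat[j, c] = 0  # group with the maximum zero/one gap gets rating 0
        others = [i for i in range(3) if i != j]
        # the remaining two groups share the same bit, set by the majority
        # over the union of their observed entries
        v_hat[others, c] = 0 if rho[others, c].sum() >= 0 else 1
    return v_hat
```

Note how the XOR structure is used: once one group's bit is pinned to 0, the other two bits must be equal, so a single majority vote over their pooled observations decides both.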
This is determined based on (i) the number of edges between the user and the set of users belonging to that group, and (ii) the number of observed rating matrix entries of the user that coincide with the corresponding entries of the rating vector of that group. Algorithm 2 describes the pseudocode of the Phase 4 algorithm. Note that we do not assume knowledge of the model parameters $\alpha$, $\beta$ and $\theta$, but estimate them from $Y$ and $G$; i.e., the proposed algorithm is parameter-free.

In order to prove exact recovery of the groups after running Algorithm 2, we need to show that the number of misclassified users in each cluster strictly decreases with each iteration. More specifically, assuming that the previous phases are executed successfully, if we start with $\eta n$ misclassified users within one cluster, for some small $\eta > 0$, then one can show that we end up with $\frac{\eta}{2} n$ misclassified users with high probability as $n \to \infty$ after one iteration of refinement. Hence, running the local refinement for $T = \frac{\log(\eta n)}{\log 2}$ iterations within the groups of each cluster suffices to converge to the ground-truth assignments. The analysis of this phase follows that of [42, Theorem 2], in which the problem of recovering $K$ communities of possibly different sizes is studied. By considering the case of three equal-sized communities, the guarantee of exact recovery of the groups within each cluster readily follows when $T = O(\log n)$.

Remark 4. The iterative refinement in Algorithm 2 can be applied either to the groups only (flag = 0), or to the groups as well as the rating vectors (flag = 1). Even though the former is sufficient for reliable estimation of the rating matrix, we show through our simulation results in the following section that the latter achieves better performance for finite regimes of $n$ and $m$.

Remark 5. The problem is formulated under the finite-field model only as an initial step towards a more generalized and realistic algorithm. Fortunately, as with many theory-inspiring works, the process of characterizing the optimal sample complexity under this model also sheds insight into developing a universal algorithm applicable to a more general problem setting than the one considered for the theoretical analysis, provided slight algorithmic modifications are made. To demonstrate the universality of the algorithm, we consider a practical scenario in which ratings are real-valued (for which linear dependency between rating vectors is well-accepted) and the observation noise is Gaussian. In this setting, the detection problem (under the current model) is replaced by an estimation problem. Consequently, we update Algorithm 1 to incorporate an MLE of the rating vectors, and modify the local refinement criterion on Line 8 of Algorithm 2 to find the group that minimizes a properly-defined distance metric, such as the root mean squared error (RMSE), between the observed and estimated ratings. In Section 5, we conduct experiments under this setting and show that our algorithm achieves superior performance over the state-of-the-art algorithms.
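To make the Line 8 update concrete, here is a sketch (ours, not the paper's) of the per-user local-likelihood score. We encode $Y$ over $\{0, 1, -1\}$ with $-1$ marking the unobserved symbol $*$, an encoding choice of ours, and all names are hypothetical.

```python
import numpy as np

def refine_user(r, Y, v_hat, adj, groups, theta_hat, alpha_hat, beta_hat):
    """One per-user update (Line 8 of Algorithm 2), sketched with our own
    conventions: Y over {0, 1, -1} with -1 marking '*', v_hat the 3-by-m
    rating estimate, adj a boolean adjacency matrix, groups the three index
    arrays from the previous iteration."""
    w_rating = np.log((1 - theta_hat) / theta_hat)
    w_graph = np.log((1 - beta_hat) * alpha_hat / ((1 - alpha_hat) * beta_hat))
    observed = Y[r] != -1
    scores = []
    for i in range(3):
        agree = np.sum(Y[r, observed] == v_hat[i, observed])  # rating agreement
        edges = np.sum(adj[r, groups[i]])                     # e({r}, G_i(t-1))
        scores.append(agree * w_rating + edges * w_graph)
    return int(np.argmax(scores))  # new group affiliation of user r
```

The two log-weights show how the algorithm trades off the two evidence sources: a cleaner observation channel (small theta_hat) upweights rating agreement, while a sharper group/cross-group contrast (alpha_hat much larger than beta_hat) upweights edge counts.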
Finally, in each iteration of Phase 4, the affiliation update of user $r \in [n]$ requires reading the entries of the $r$th row of $Y$ and the edges connected to user $r$, which amounts to $O(|\Omega| + |E|)$ for each of the $T$ iterations, assuming an appropriate data structure. Hence, the overall computational complexity reads $\mathrm{poly}(n) + O(|\Omega| \log n)$.

Remark 6. The complexity bottleneck is in Phase 1 (exact clustering), as it relies upon [57, 78], exhibiting poly(n) runtime. This can be improved, without any performance degradation, by replacing the exact clustering in Phase 1 with almost exact clustering, yielding $O(|E| \log n)$ runtime [79]. In return, Phase 4 should be modified so that the local iterative refinement is applied to cluster affiliation as well as group affiliation and rating vectors. As a result, the improved overall runtime reads $O((|\Omega| + |E|) \log n)$.

5 Experimental Results

We first conduct Monte Carlo experiments to corroborate Theorem 1. Let $\alpha = \tilde{\alpha}\frac{\log n}{n}$, $\beta = \tilde{\beta}\frac{\log n}{n}$, and $\gamma = \tilde{\gamma}\frac{\log n}{n}$. We consider a setting where $\theta = 0.1$, $\tilde{\beta} = 10$, $\tilde{\gamma} = 0.5$, $\delta_g = \delta_c = 0.5$. The synthetic data is generated as per the model in Section 2. In Figs. 2a and 2b, we evaluate the performance of the proposed algorithm (with local iterative refinement of groups and rating vectors), and quantify the empirical success rate as a function of the normalized sample complexity, over $10^3$ randomly drawn realizations of rating vectors and hierarchical graphs. We vary $n$ and $m$, preserving the ratio $n/m = 3$. Fig. 2a depicts the case of $\tilde{\alpha} = 40$, which corresponds to the perfect clustering/grouping regime (Remark 1), while Fig. 2b depicts the case corresponding to the grouping-limited regime (Remark 2). In both figures, we observe a phase transition in the success rate at $p = p^\star$ (ideally a step function at $p = p^\star$ as $n$ and $m$ tend to infinity), and as we increase $n$ and $m$, the phase transition gets sharper. These figures corroborate Theorem 1 in different regimes when the graph side information is not scarce. Fig. 2c compares the performance of the proposed algorithm for $n = 3000$ and $m = 1000$ under two different strategies of local iterative refinement: (i) local refinement of groups only (flag = 0 in Algorithm 2); and (ii) local refinement of both groups and rating vectors (flag = 1 in Algorithm 2). It is clear that the second strategy outperforms the first in the finite regime of $n$ and $m$, which is consistent with Remark 4. Furthermore, the gap between the two versions shrinks as we gradually increase $\tilde{\alpha}$ (i.e., as the quality of the graph gradually improves).

Next, similar to [41-44], the performance of the proposed algorithm is assessed on semi-real data (a real graph but synthetic rating vectors). We consider a subgraph of the political blog network [80], which has been shown to exhibit a hierarchical structure [50]. In particular, we consider a tall matrix setting of $n = 381$ and $m = 200$ in order to investigate the gain in sample complexity due to the graph side information. The selected subgraph consists of two clusters of political parties, each of which comprises three groups. The three groups of the first cluster consist of 98, 34 and 103 users, while the three groups of the second cluster consist of 58, 68 and 20 users (we refer to the supplementary material for a t-SNE visualization of the selected subgraph). The corresponding rating vectors are generated such that the ratings are real-valued, drawn from $[0, 10]$, and the observations are corrupted by Gaussian noise with mean zero and a given variance $\sigma^2$.
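For reference, the observation model of this semi-real setting can be simulated in a few lines. This is a hedged sketch under our own assumptions: for brevity it draws i.i.d. ratings rather than the group-structured vectors of Section 2, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, sigma2 = 381, 200, 0.08, 0.5  # matches the setting of Fig. 2e

# Hypothetical ground-truth ratings in [0, 10]; the paper's vectors are
# group-structured, drawn i.i.d. here only to keep the sketch short.
M = rng.uniform(0, 10, size=(n, m))

# Each entry is observed independently with prob. p and corrupted by
# zero-mean Gaussian noise of variance sigma2; NaN marks '*'.
mask = rng.random((n, m)) < p
Y = np.where(mask, M + rng.normal(0, np.sqrt(sigma2), size=(n, m)), np.nan)

def rmse(M_hat, M):
    """Evaluation metric used in Section 5."""
    return np.sqrt(np.mean((M_hat - M) ** 2))
```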
We use the root mean square error (RMSE) as the evaluation metric, and assess the performance of the proposed algorithm against various recommendation algorithms, namely User Average, Item Average, User k-Nearest Neighbor (k-NN) [81], Item k-NN [81], TrustSVD [28], Biased Matrix Factorization (MF) [82], and Matrix Factorization with Social Regularization (SoReg) [24]. Note that [41, 42] are designed for rating matrices whose elements are drawn from a finite field, and hence they cannot be run under the practical scenario considered in this setting. In Fig. 2d, we compute the RMSE as a function of $p$, for fixed $\sigma^2 = 0.5$. Fig. 2e depicts the RMSE as a function of the normalized signal-to-noise ratio $1/\sigma^2$, for fixed $p = 0.08$. It is evident that the proposed algorithm achieves superior performance over the state-of-the-art algorithms for a wide range of observation probabilities and Gaussian noise variances, demonstrating its viability and efficiency in practical scenarios.

Finally, Table 1 demonstrates the computational efficiency of the proposed algorithm, reporting the runtimes of the recommendation algorithms for the experimental setting of Fig. 2d with $p = 0.1$, averaged over 20 trials. The proposed algorithm achieves a faster runtime than all other algorithms except User Average and Item Average; however, as shown in Fig. 2d, the RMSE performance of these two faster algorithms is inferior to that of the majority of the other algorithms.

Broader Impact

We emphasize two positive impacts of our work. First, it serves to enhance the performance of personalized recommender systems (one of the most influential commercial applications) with the aid of a social graph, which is often available in a variety of applications. Second, it promotes fairness among users by providing high-quality recommendations even to new users who have not rated any items before. One potential negative consequence of this work concerns user privacy: privacy may not be preserved in the process of exploiting the indirect information encoded in social graphs, even when direct information, such as user profiles, is protected.

Acknowledgments and Disclosure of Funding

The work of A. Elmahdy and S. Mohajer is supported in part by the National Science Foundation under Grants CCF-1617884 and CCF-1749981. The work of J. Ahn and C. Suh is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2018R1A1A1A05022889).
1. What is the main contribution of the paper regarding the binary version of the matrix completion problem?
2. What are the strengths of the proposed algorithm, particularly in utilizing side information?
3. What are the weaknesses of the paper, especially regarding its assumptions and experimental designs?
4. How does the reviewer assess the novelty and significance of the paper's results compared to prior works?
5. Are there any suggestions or recommendations for improving the paper, such as generalizing the model or providing more realistic experiments?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper studies a binary version of the matrix completion problem, where we are additionally given a graph that provides "side information". Specifically, there is an ideal ratings matrix (users, items), and there is a graph between the users. The ideal matrix is assumed to have a _very stylized_ structure: users are divided into precisely two clusters of equal size, each cluster is divided into three groups of equal size, and within each group all the users have _exactly the same_ rating vector. (There's an additional linear-algebraic dependence which I will ignore here.) The side information graph is assumed to be a block model whose parameters correspond to edge probability within groups, within a cluster but across groups, and across clusters (and these are in decreasing order). In this setting, given a partially observed and independently corrupted rating matrix, the paper gives an iterative refinement algorithm that can _perfectly_ reconstruct the ideal matrix with good probability. They give conditions under which the error probability goes to 0.

Strengths
- The algorithm is fairly clean, and the fact that perfect recovery is possible is nice.
- Taking advantage of side information in a provable way is a nice feature that is often difficult to establish.

Weaknesses
- The setting is a bit too stylized, and I found it very unconvincing. Perhaps the authors can generalize at least to a "k distinct vectors" and "t clusters" model instead of 2 vectors and 2 clusters.
- The boolean operations are also unrealistic. Is there an intuitive rationale for one group of people having rating vector x, another y, and a third having x XOR y?
- While there are technical improvements, the claimed substantial improvement over [39, 40] seems unconvincing.
- Experiments are primarily on synthetic data. The only real dataset appears to be one where the graph is real but the ratings are synthetic. Some explanation of this would be helpful.

Overall, I think that a slightly weaker result (in terms of imperfect recovery) that allows a richer model (e.g., more groups/clusters, not necessarily perfect alignment between the graph and the matrix) would be a welcome addition to the paper.
NIPS
Title Matrix Completion with Hierarchical Graph Side Information

Abstract We consider a matrix completion problem that exploits social or item similarity graphs as side information. We develop a universal, parameter-free, and computationally efficient algorithm that starts with hierarchical graph clustering and then iteratively refines estimates both on graph clustering and matrix ratings. Under a hierarchical stochastic block model that well respects practically-relevant social graphs and a low-rank rating matrix model (to be detailed), we demonstrate that our algorithm achieves the information-theoretic limit on the number of observed matrix entries (i.e., optimal sample complexity) that is derived by maximum likelihood estimation together with a lower-bound impossibility result. One consequence of this result is that exploiting the hierarchical structure of social graphs yields a substantial gain in sample complexity relative to the one that simply identifies different groups without resorting to the relational structure across them. We conduct extensive experiments both on synthetic and real-world datasets to corroborate our theoretical results as well as to demonstrate significant performance improvements over other matrix completion algorithms that leverage graph side information.

(Equal contribution; corresponding author: Changho Suh.)

1 Introduction

Recommender systems have been powerful in a widening array of applications for providing users with relevant items of their potential interest [1]. A prominent and well-known technique for operating such systems is low-rank matrix completion [2-18]: given partially observed entries of a matrix of interest, the goal is to predict the values of the missing entries. One challenge that arises in the big-data era is the so-called cold start problem, in which high-quality recommendations are not feasible for new users/items that bear little or no information. One natural and popular way to address the challenge is to exploit other available side information. Motivated by the social homophily theory [19] that users within the same community are more likely to share similar preferences, social networks such as Facebook's friendship graph have often been employed to improve the quality of recommendation. While there has been a proliferation of social-graph-assisted recommendation algorithms [1, 20-40], few works were dedicated to developing theoretical insights on the usefulness of side information, and therefore the maximum gain due to side information has been unknown. A few recent efforts have been made from an information-theoretic perspective [41-44]. Ahn et al. [41] identified the maximum gain by characterizing the optimal sample complexity of matrix completion in the presence of graph side information under a simple setting in which there are two clusters and users within each cluster share the same ratings over items. A follow-up work [42] extended this to an arbitrary number of clusters while maintaining the same-rating-vector assumption per user in each cluster. While [41, 42] lay out the theoretical foundation for the problem, the assumption of a single rating vector per cluster limits the practicality of the considered model.
In an effort to make further progress on theoretical insights, and motivated by [45], we consider a more generalized setting in which each cluster exhibits another sub-clustering structure, with each subcluster (or "group", as we call it) represented by a different rating vector that is nonetheless intimately related to the other rating vectors within the same cluster. More specifically, we focus on a hierarchical graph setting wherein users are categorized into two clusters, each of which comprises three groups whose rating vectors are broadly similar yet distinct, subject to a linear subspace of two basis vectors.

Contributions: Our contributions are twofold. First, we characterize the information-theoretic sharp threshold on the minimum number of observed matrix entries required for reliable matrix completion, as a function of the quantified quality (to be detailed) of the considered hierarchical graph side information. The second, yet more practically appealing, contribution is to develop a computationally efficient algorithm that achieves the optimal sample complexity for a wide range of scenarios. One implication of this result is that our algorithm, by fully utilizing the hierarchical graph structure, yields a significant gain in sample complexity compared to a simple variant of [41, 42] that does not exploit the relational structure across the rating vectors of groups. Technical novelty and algorithmic distinctions also arise in the process of exploiting the hierarchical structure; see Remarks 2 and 3. Our experiments conducted on both synthetic and real-world datasets corroborate our theoretical results as well as demonstrate the efficacy of our proposed algorithm.

Related works: In addition to the initial works [41, 42], more generalized settings have been considered in distinct directions. Zhang et al. [43] explore a setting in which both social and item similarity graphs are given as side information, thus demonstrating a synergistic effect due to the availability of two graphs. Jo et al. [44] go beyond binary matrix completion to investigate a setting in which a matrix entry, say the (i, j)-entry, denotes the probability of user i picking item j as the most preferable, chosen from a known finite set of probabilities. Recently, a so-called dual problem has been explored in which clustering is performed with a partially observed matrix as side information [46, 47]. Ashtiani et al. [46] demonstrate that the use of side information given in the form of pairwise queries plays a crucial role in making an NP-hard clustering problem tractable via an efficient k-means algorithm. Mazumdar et al. [47] characterize the optimal sample complexity of clustering in the presence of similarity-matrix side information, together with the development of an efficient algorithm. One distinction of our work compared to [47] is that we are interested in both clustering and matrix completion, while [47] focused only on finding the clusters, from which the rating matrix cannot necessarily be inferred. Our problem can be viewed as the prominent low-rank matrix completion problem [1-4, 6-18], which is considered notoriously difficult: even for simple scenarios such as rank-1 or rank-2 matrix settings, the optimal sample complexity has been open for decades, although some upper and lower bounds are derived. The matrix of our consideration in this work is of rank 4.
Hence, in this regard, we make progress on this long-standing open problem by exploiting the structural property posed by our considered application. The statistical model that we consider for the theoretical guarantees of our proposed algorithm relies on the Stochastic Block Model (SBM) [48] and its hierarchical counterpart [49-52], which have been shown to well respect many practically-relevant scenarios [53-56]. Our algorithm also builds in part upon prominent clustering [57, 58] and hierarchical clustering [51, 52] algorithms, although it exhibits notable distinctions in the other matrix-completion-related procedures together with their corresponding technical analyses.

Notations: Row vectors and matrices are denoted by lowercase and uppercase letters, respectively. Random matrices are denoted by boldface uppercase letters, while their realizations are denoted by uppercase letters. Sets are denoted by calligraphic letters. Let $0_{m\times n}$ and $1_{m\times n}$ be the all-zero and all-one matrices of dimension $m \times n$, respectively. For an integer $n \geq 1$, $[n]$ denotes the set of integers $\{1, 2, \ldots, n\}$. Let $\{0, 1\}^n$ be the set of all binary strings with $n$ digits. The Hamming distance between two binary vectors $u$ and $v$ is denoted by $d_H(u, v) := \|u \oplus v\|_0$, where $\oplus$ stands for the modulo-2 addition operator. Let $\mathbb{1}[\cdot]$ denote the indicator function. For a graph $G = (V, E)$ and two disjoint subsets $X$ and $Y$ of $V$, $e(X, Y)$ denotes the number of edges between $X$ and $Y$.

2 Problem Formulation

Setting: Consider a rating matrix with $n$ users and $m$ items. Each user rates the $m$ items by a binary vector, where the 0/1 components denote "dislike"/"like", respectively. We assume that there are two clusters of users, say $A$ and $B$. To capture the low rank of the rating matrix, we assume that each user's rating vector within a cluster lies in a linear subspace spanned by two basis vectors. Specifically, let $v_1^A \in \mathbb{F}_2^{1\times m}$ and $v_2^A \in \mathbb{F}_2^{1\times m}$ be the two linearly independent basis vectors of cluster $A$. Users in cluster $A$ can then be split into three groups (say $G_1^A$, $G_2^A$ and $G_3^A$) based on their rating vectors. More precisely, we denote by $G_i^A$ the set of users whose rating vector is $v_i^A$ for $i = 1, 2$. Finally, the remaining users of cluster $A$ form group $G_3^A$, and their rating vector is $v_3^A = v_1^A \oplus v_2^A$ (a linear combination of the basis vectors). Similarly, we have $v_1^B$, $v_2^B$ and $v_3^B = v_1^B \oplus v_2^B$ for cluster $B$. For presentational simplicity, we assume equal-sized groups (each being of size $n/6$), although our algorithm (to be presented in Section 4) allows for any group size, and our theoretical guarantees (to be presented in Theorem 2) hold as long as the group sizes are order-wise the same. Let $M \in \mathbb{F}_2^{n\times m}$ be the rating matrix whose $i$th row corresponds to user $i$'s rating vector. We find the Hamming distance instrumental in expressing our main results (to be stated in Section 3) as well as in proving the main theorems. Let $\delta_g$ be the minimum normalized Hamming distance among distinct pairs of groups' rating vectors within the same cluster: $\delta_g = \frac{1}{m}\min_{c\in\{A,B\}}\min_{i\neq j\in[3]} d_H(v_i^c, v_j^c)$. Also let $\delta_c$ be the counterpart w.r.t. pairs of rating vectors across different clusters: $\delta_c = \frac{1}{m}\min_{i,j\in[3]} d_H(v_i^A, v_j^B)$, and define $\delta := \{\delta_g, \delta_c\}$. We partition all possible rating matrices into subsets depending on $\delta$: let $\mathcal{M}(\delta)$ be the set of rating matrices subject to $\delta$.

Problem of interest: Our goal is to estimate a rating matrix $M \in \mathcal{M}(\delta)$ given two types of information: (1) partial ratings $Y \in \{0, 1, *\}^{n\times m}$; (2) a graph, say a social graph $G$.
Here $*$ indicates no observation, and we denote the set of observed entries of $Y$ by $\Omega$, that is, $\Omega = \{(r, c) \in [n] \times [m] : Y_{rc} \neq *\}$. Below is a list of assumptions made for the analysis of the optimal sample complexity (Theorem 1) and the theoretical guarantees of our proposed algorithm (Theorem 2), but not for the algorithm itself. We assume that each element of $Y$ is observed with probability $p \in [0, 1]$, independently of the others, and its observation can possibly be flipped with probability $\theta \in [0, \frac{1}{2})$. Let the social graph $G = ([n], E)$ be an undirected graph, where $E$ denotes the set of edges, each capturing the social connection between the two associated users. The set $[n]$ of vertices is partitioned into two disjoint clusters, each being further partitioned into three disjoint groups. We assume that the graph follows the hierarchical stochastic block model (HSBM) [51, 59] with three types of edge probabilities: (i) $\alpha$ denotes the edge probability between two users in the same group; (ii) $\beta$ denotes the one w.r.t. two users of different groups yet within the same cluster; and (iii) $\gamma$ is associated with two users of different clusters. We focus on realistic scenarios in which users within the same group (or cluster) are more likely to be connected, as per the social homophily theory [19]: $\alpha \geq \beta \geq \gamma$.

Performance metric: Let $\psi$ be a rating matrix estimator that takes $(Y, G)$ as input and yields an estimate. As a performance metric, we consider the worst-case probability of error:

$P_e^{(\delta)}(\psi) := \max_{M \in \mathcal{M}(\delta)} P[\psi(Y, G) \neq M]$.   (1)

Note that $\mathcal{M}(\delta)$ is the set of ground-truth matrices $M$ subject to $\delta := \{\delta_g, \delta_c\}$. Since the error probability may vary depending on different choices of $M$ (i.e., some matrices may be harder to estimate), we employ a conventional minimax approach wherein the goal is to minimize the maximum error probability. We characterize the optimal sample complexity for reliable exact matrix recovery, concentrated around $nmp^\star$ in the limit of $n$ and $m$. Here $p^\star$ indicates the sharp threshold on the observation probability: (i) above which the error probability can be made arbitrarily close to 0 in the limit; and (ii) below which $P_e^{(\delta)}(\psi)$ does not converge to 0, regardless of the estimator.

3 Optimal sample complexity

We first present the optimal sample complexity characterized under the considered model. We find that an intuitive and insightful expression can be made via the quality of the hierarchical social graph, which can be quantified by the following: (i) $I_g := (\sqrt{\alpha} - \sqrt{\beta})^2$ represents the capability of separating distinct groups within a cluster; (ii) $I_{c1} := (\sqrt{\alpha} - \sqrt{\gamma})^2$ and $I_{c2} := (\sqrt{\beta} - \sqrt{\gamma})^2$ capture the clustering capabilities of the social graph. Note that the larger these quantities, the easier the grouping/clustering. Our sample complexity result is formally stated below as a function of $(I_g, I_{c1}, I_{c2})$. As in [41], we make the same assumptions on $m$ and $n$ that turn out to ease the proof via prominent large deviation theories: $m = \omega(\log n)$ and $\log m = o(n)$. These assumptions are also practically relevant, as they rule out highly asymmetric matrices.

Theorem 1 (Information-theoretic limits). Assume that $m = \omega(\log n)$ and $\log m = o(n)$. Let the item ratings be drawn from a finite field $\mathbb{F}_q$. Let $c$ and $g$ denote the number of clusters and groups, respectively. Within each cluster, let the set of $g$ rating vectors be spanned by any $r \leq g$ vectors in the same set. Define $p^\star$ as

$p^\star := \frac{1}{\left(\sqrt{1-\theta} - \sqrt{\frac{\theta}{q-1}}\right)^2} \max\left\{\frac{gc}{g-r+1} \cdot \frac{\log m}{n},\ \frac{\log n - \frac{n}{gc} I_g}{\delta_g m},\ \frac{\log n - \frac{n}{gc} I_{c1} - \frac{(g-1)n}{gc} I_{c2}}{\delta_c m}\right\}$.   (2)

Fix $\epsilon > 0$.
If $p \geq (1+\epsilon)p^\star$, then there exists a sequence of estimators $\psi$ satisfying $\lim_{n\to\infty} P_e^{(\delta)}(\psi) = 0$. Conversely, if $p \leq (1-\epsilon)p^\star$, then $\lim_{n\to\infty} P_e^{(\delta)}(\psi) \neq 0$ for any $\psi$. Setting $(c, g, r, q) = (2, 3, 2, 2)$, the bound in (2) reduces to

$p^\star = \frac{1}{(\sqrt{1-\theta} - \sqrt{\theta})^2} \max\left\{\frac{3\log m}{n},\ \frac{\log n - \frac{1}{6} n I_g}{m\delta_g},\ \frac{\log n - \frac{1}{6} n I_{c1} - \frac{1}{3} n I_{c2}}{m\delta_c}\right\}$,   (3)

which is the optimal sample complexity of the problem formulated in Section 2.

Proof. We provide the proof sketch for $(c, g, r) = (2, 3, 2)$, deferring the complete proof to the supplementary material. The extension to general $(c, g, r)$ is a natural generalization of the analysis for the parameters $(c, g, r) = (2, 3, 2)$. The achievability proof is based on maximum likelihood estimation (MLE). We first evaluate the likelihood for a given clustering/grouping of users and the corresponding rating matrix. We then show that if $p \geq (1+\epsilon)p^\star$, the likelihood is maximized only by the ground-truth rating matrix in the limit of $n$: $\lim_{n\to\infty} P_e^{(\delta)}(\psi_{\mathrm{ML}}) = 0$. For the converse (impossibility) proof, we first establish a lower bound on the error probability, and show that it is minimized when employing the maximum likelihood estimator. Next, we prove that if $p$ is smaller than any of the three terms on the RHS of (3), then there exists another solution that yields a larger likelihood than the ground-truth matrix. More precisely, if $p \leq (1-\epsilon)\frac{3\log m}{(\sqrt{1-\theta}-\sqrt{\theta})^2 n}$, we can find a grouping with the only distinction in two user-item pairs relative to the ground truth, yet yielding a larger likelihood. Similarly, when $p \leq (1-\epsilon)\frac{\log n - \frac{1}{6} n I_g}{(\sqrt{1-\theta}-\sqrt{\theta})^2 m\delta_g}$, consider two users in the same cluster yet from distinct groups such that the Hamming distance between their rating vectors is $m\delta_g$; we can then show that a grouping in which their rating vectors are swapped provides a larger likelihood. Similarly, when $p \leq (1-\epsilon)\frac{\log n - \frac{1}{6} n I_{c1} - \frac{1}{3} n I_{c2}}{(\sqrt{1-\theta}-\sqrt{\theta})^2 m\delta_c}$, we can swap the rating vectors of two users from different clusters with a Hamming distance of $m\delta_c$, and obtain a greater likelihood. The technical distinctions w.r.t. the prior works [41, 42] are threefold: (i) the likelihood computation requires more involved combinatorial arguments due to the hierarchical structure; (ii) sophisticated upper/lower bounding techniques are developed in order to exploit the relational structure across different groups; and (iii) delicate choices are made for the two users to be swapped in the converse proof.

We next present the second, more practically appealing contribution: our proposed algorithm in Section 4 achieves the information-theoretic limits. The algorithm's optimality is guaranteed for a certain yet wide range of scenarios in which graph information yields negligible clustering/grouping errors, as formally stated below. We provide the proof outline in Section 4 throughout the description of the algorithm, leaving details to the supplementary material.

Theorem 2 (Theoretical guarantees of the proposed algorithm). Assume that $m = \omega(\log n)$, $\log m = o(n)$, $m = O(n)$, $I_{c2} > \frac{2\log n}{n}$ and $I_g = \omega(\frac{1}{n})$. Then, as long as the sample size exceeds the optimal sample complexity in Theorem 1 (i.e., $mnp > mnp^\star$), the algorithm presented in Section 4 with $T = O(\log n)$ iterations ensures that the worst-case error probability tends to 0 as $n \to \infty$. That is, the algorithm returns $\hat{M}$ such that $P[\hat{M} = M] = 1 - o(1)$.

Theorem 1 establishes the optimal sample complexity (the number of entries of the rating matrix to be observed) as $mnp^\star$, where $p^\star$ is given in (3).
1. What is the focus and contribution of the paper on matrix completion?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical guarantees?
3. What are the weaknesses of the paper, especially regarding its assumptions and lack of real-data experiments?
4. Do you have any concerns about the novel structure of graph side information?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a matrix completion algorithm for a binary matrix with graph side information. While this problem has been studied before, the paper assumes there is group structure inside each cluster. The paper gives the sample complexity for this kind of problem, depending on the graph structure and the rate of observation. The paper also proposes an algorithm for solving the problem, with a theoretical guarantee. Empirical results on synthetic and semi-synthetic data are provided.

Strengths
The paper proposes a new structure of graph side information. Rigorous theoretical bounds are provided for using this new structure of graph side information. The authors also propose an algorithm which is shown to be both theoretically and empirically tractable.

Weaknesses
The new structure needs a motivation. The problem of binary matrix completion with graph side information has already been studied. The novelty of the current paper is that, in addition to the classical clustering structure, it assumes one extra layer of clustering: clusters within clusters. How does such a structure agree with real applications? I see from the experimental part that no experiments on real data are given. Does this mean the problem is theoretically interesting but has little practical meaning? It is also not clear why such a performance measure is used. In Eq. 1, \psi is used to estimate the matrix. So, does the worst case mean there is no ground truth for the target matrix? Or that there is no constraint on the ground truth for the target matrix M? This part needs more explanation to make the metric reasonable. More discussion is needed on how the current theoretical guarantee on sample complexity compares to previous ones; otherwise, we cannot be sure the new structural information in the graph does help completion.

--------------------------------------------------

Not all questions have been successfully addressed (such as experiments on real data and a motivation for the problem regarding "binary" matrix completion). But I still think this paper can be accepted. I will keep my score.
NIPS
Title Matrix Completion with Hierarchical Graph Side Information Abstract We consider a matrix completion problem that exploits social or item similarity graphs as side information. We develop a universal, parameter-free, and computationally efficient algorithm that starts with hierarchical graph clustering and then iteratively refines estimates both on graph clustering and matrix ratings. Under a hierarchical stochastic block model that well respects practically-relevant social graphs and a low-rank rating matrix model (to be detailed), we demonstrate that our algorithm achieves the information-theoretic limit on the number of observed matrix entries (i.e., optimal sample complexity) that is derived by maximum likelihood estimation together with a lower-bound impossibility result. One consequence of this result is that exploiting the hierarchical structure of social graphs yields a substantial gain in sample complexity relative to the one that simply identifies different groups without resorting to the relational structure across them. We conduct extensive experiments both on synthetic and real-world datasets to corroborate our theoretical results as well as to demonstrate significant performance improvements over other matrix completion algorithms that leverage graph side information. 1 Introduction Recommender systems have been powerful in a widening array of applications for providing users with relevant items of their potential interest [1]. A prominent well-known technique for operating the systems is low-rank matrix completion [2–18]: Given partially observed entries of an interested matrix, the goal is to predict the values of missing entries. One challenge that arises in the big data era is the so-called cold start problem in which high-quality recommendations are not feasible for new users/items that bear little or no information. One natural and popular way to address the challenge is to exploit other available side information. Motivated by the social homophily theory [19] that users within the same community are more likely to share similar preferences, social networks such as Facebook’s friendship graph have often been employed to improve the quality of recommendation. While there has been a proliferation of social-graph-assisted recommendation algorithms [1, 20–40], few works were dedicated to developing theoretical insights on the usefulness of side information, and therefore the maximum gain due to side information has been unknown. A few recent efforts have been made from an information-theoretic perspective [41–44]. Ahn et al. [41] have identified the maximum gain by characterizing the optimal sample complexity of matrix completion in the presence of graph side information under a simple setting in which there are two clusters and users ∗Equal contribution. Corresponding author: Changho Suh. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. within each cluster share the same ratings over items. A follow-up work [42] extended to an arbitrary number of clusters while maintaining the same-rating-vector assumption per user in each cluster. While [41, 42] lay out the theoretical foundation for the problem, the assumption of the single rating vector per cluster limits the practicality of the considered model. 
In an effort to make further progress on theoretical insights, and motivated by [45], we consider a more generalized setting in which each cluster exhibits another sub-clustering structure, each subcluster (or what we call a “group”) being represented by a different rating vector yet intimately related to the other rating vectors within the same cluster. More specifically, we focus on a hierarchical graph setting wherein users are categorized into two clusters, each of which comprises three groups in which rating vectors are broadly similar yet distinct subject to a linear subspace of two basis vectors. Contributions: Our contributions are twofold. First, we characterize the information-theoretic sharp threshold on the minimum number of observed matrix entries required for reliable matrix completion, as a function of the quantified quality (to be detailed) of the considered hierarchical graph side information. The second yet more practically-appealing contribution is to develop a computationally efficient algorithm that achieves the optimal sample complexity for a wide range of scenarios. One implication of this result is that our algorithm fully utilizing the hierarchical graph structure yields a significant gain in sample complexity, compared to a simple variant of [41, 42] that does not exploit the relational structure across rating vectors of groups. Technical novelty and algorithmic distinctions also come in the process of exploiting the hierarchical structure; see Remarks 2 and 3. Our experiments conducted on both synthetic and real-world datasets corroborate our theoretical results as well as demonstrate the efficacy of our proposed algorithm. Related works: In addition to the initial works [41, 42], more generalized settings have been taken into consideration with distinct directions. Zhang et al. [43] explore a setting in which both social and item similarity graphs are given as side information, thus demonstrating a synergistic effect due to the availability of two graphs. Jo et al. [44] go beyond binary matrix completion to investigate a setting in which a matrix entry, say the (i, j)-entry, denotes the probability of user i picking up item j as the most preferable, yet chosen from a known finite set of probabilities. Recently a so-called dual problem has been explored in which clustering is performed with a partially observed matrix as side information [46, 47]. Ashtiani et al. [46] demonstrate that the use of side information given in the form of pairwise queries plays a crucial role in making an NP-hard clustering problem tractable via an efficient k-means algorithm. Mazumdar et al. [47] characterize the optimal sample complexity of clustering in the presence of similarity matrix side information together with the development of an efficient algorithm. One distinction of our work compared to [47] is that we are interested in both clustering and matrix completion, while [47] only focused on finding the clusters, from which the rating matrix cannot necessarily be inferred. Our problem can be viewed as the prominent low-rank matrix completion problem [1–4, 6–18] which has been considered notoriously difficult. Even for simple scenarios such as rank-1 or rank-2 matrix settings, the optimal sample complexity has been open for decades, although some upper and lower bounds are derived. The matrix of our consideration in this work is of rank 4.
Hence, in this regard, we could make progress on this long-standing open problem by exploiting the structural property posed by our considered application. The statistical model that we consider for the theoretical guarantees of our proposed algorithm relies on the Stochastic Block Model (SBM) [48] and its hierarchical counterpart [49–52], which have been shown to well respect many practically-relevant scenarios [53–56]. Also our algorithm builds in part upon prominent clustering [57, 58] and hierarchical clustering [51, 52] algorithms, although it exhibits a notable distinction in other matrix-completion-related procedures together with their corresponding technical analyses. Notations: Row vectors and matrices are denoted by lowercase and uppercase letters, respectively. Random matrices are denoted by boldface uppercase letters, while their realizations are denoted by uppercase letters. Sets are denoted by calligraphic letters. Let 0_{m×n} and 1_{m×n} be all-zero and all-one matrices of dimension m × n, respectively. For an integer n ≥ 1, [n] indicates the set of integers {1, 2, . . . , n}. Let {0, 1}^n be the set of all binary numbers with n digits. The Hamming distance between two binary vectors u and v is denoted by d_H(u, v) := ‖u ⊕ v‖_0, where ⊕ stands for the modulo-2 addition operator. Let 1[·] denote the indicator function. For a graph G = (V, E) and two disjoint subsets X and Y of V, e(X, Y) indicates the number of edges between X and Y. 2 Problem Formulation Setting: Consider a rating matrix with n users and m items. Each user rates m items by a binary vector, where 0/1 components denote “dislike”/“like” respectively. We assume that there are two clusters of users, say A and B. To capture the low rank of the rating matrix, we assume that each user’s rating vector within a cluster lies in a linear subspace spanned by two basis vectors. Specifically, let v_1^A ∈ F_2^{1×m} and v_2^A ∈ F_2^{1×m} be the two linearly-independent basis vectors of cluster A. Then the users in cluster A can be split into three groups (say G_1^A, G_2^A and G_3^A) based on their rating vectors. More precisely, we denote by G_i^A the set of users whose rating vector is v_i^A for i = 1, 2. Finally, the remaining users of cluster A form group G_3^A, and their rating vector is v_3^A = v_1^A ⊕ v_2^A (a linear combination of the basis vectors). Similarly we have v_1^B, v_2^B and v_3^B = v_1^B ⊕ v_2^B for cluster B. For presentational simplicity, we assume equal-sized groups (each being of size n/6), although our algorithm (to be presented in Section 4) allows for any group size, and our theoretical guarantees (to be presented in Theorem 2) hold as long as the group sizes are order-wise the same. Let M ∈ F_2^{n×m} be the rating matrix wherein the i-th row corresponds to user i’s rating vector. We find the Hamming distance instrumental in expressing our main results (to be stated in Section 3) as well as in proving the main theorems. Let δ_g be the minimum normalized Hamming distance among distinct pairs of group rating vectors within the same cluster: δ_g = (1/m) min_{x∈{A,B}} min_{i≠j∈[3]} d_H(v_i^x, v_j^x). Also let δ_c be the counterpart w.r.t. distinct pairs of rating vectors across different clusters: δ_c = (1/m) min_{i,j∈[3]} d_H(v_i^A, v_j^B), and define δ := {δ_g, δ_c}. We partition all the possible rating matrices into subsets depending on δ. Let M(δ) be the set of rating matrices subject to δ. Problem of interest: Our goal is to estimate a rating matrix M ∈ M(δ) given two types of information: (1) partial ratings Y ∈ {0, 1, ∗}^{n×m}; (2) a graph, say social graph G.
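To make the setting concrete, the following minimal NumPy sketch instantiates the rating model (the random choice of basis vectors, the parameter values, and all names are ours; in the paper δ_g and δ_c are fixed model parameters rather than random quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 600, 200  # users and items; n must be divisible by 6

def cluster_rating_vectors(rng, m):
    # Two random basis vectors over F_2 plus their XOR; for large m they are
    # linearly independent with high probability (assumed here).
    v1 = rng.integers(0, 2, size=m)
    v2 = rng.integers(0, 2, size=m)
    return [v1, v2, v1 ^ v2]

vecs_A = cluster_rating_vectors(rng, m)
vecs_B = cluster_rating_vectors(rng, m)

# Six equal-sized groups of n/6 users; row i of M is user i's rating vector.
M = np.vstack([np.tile(v, (n // 6, 1)) for v in vecs_A + vecs_B])

def d_H(u, v):
    return int(np.count_nonzero(u ^ v))

# Normalized Hamming distances delta_g (within a cluster) and delta_c (across clusters)
delta_g = min(d_H(vs[i], vs[j]) for vs in (vecs_A, vecs_B)
              for i in range(3) for j in range(i + 1, 3)) / m
delta_c = min(d_H(u, v) for u in vecs_A for v in vecs_B) / m
print(M.shape, delta_g, delta_c)
```

With M fixed, the observations Y are then obtained by sampling and flipping entries as described next.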
Here ∗ indicates no observation, and we denote the set of observed entries of Y by Ω, that is, Ω = {(r, c) ∈ [n] × [m] : Y_{rc} ≠ ∗}. Below is a list of assumptions made for the analysis of the optimal sample complexity (Theorem 1) and the theoretical guarantees of our proposed algorithm (Theorem 2), but not for the algorithm itself. We assume that each element of Y is observed with probability p ∈ [0, 1], independently from others, and its observation can possibly be flipped with probability θ ∈ [0, 1/2). Let the social graph G = ([n], E) be an undirected graph, where E denotes the set of edges, each capturing the social connection between two associated users. The set [n] of vertices is partitioned into two disjoint clusters, each being further partitioned into three disjoint groups. We assume that the graph follows the hierarchical stochastic block model (HSBM) [51, 59] with three types of edge probabilities: (i) α indicates the edge probability between two users in the same group; (ii) β denotes the one w.r.t. two users of different groups yet within the same cluster; (iii) γ is associated with two users of different clusters. We focus on realistic scenarios in which users within the same group (or cluster) are more likely to be connected, as per the social homophily theory [19]: α ≥ β ≥ γ. Performance metric: Let ψ be a rating matrix estimator that takes (Y, G) as an input, yielding an estimate. As a performance metric, we consider the worst-case probability of error: P_e^{(δ)}(ψ) := max_{M∈M(δ)} P[ψ(Y, G) ≠ M]. (1) Note that M(δ) is the set of ground-truth matrices M subject to δ := {δ_g, δ_c}. Since the error probability may vary depending on different choices of M (i.e., some matrices may be harder to estimate), we employ a conventional minimax approach wherein the goal is to minimize the maximum error probability. We characterize the optimal sample complexity for reliable exact matrix recovery, concentrated around nmp⋆ in the limit of n and m. Here p⋆ indicates the sharp threshold on the observation probability: (i) above which the error probability can be made arbitrarily close to 0 in the limit; (ii) under which P_e^{(δ)}(ψ) ↛ 0 no matter what. 3 Optimal sample complexity We first present the optimal sample complexity characterized under the considered model. We find that an intuitive and insightful expression can be made via the quality of the hierarchical social graph, which can be quantified by the following: (i) I_g := (√α − √β)² represents the capability of separating distinct groups within a cluster; (ii) I_{c1} := (√α − √γ)² and I_{c2} := (√β − √γ)² capture the clustering capabilities of the social graph. Note that the larger these quantities, the easier it is to do grouping/clustering. Our sample complexity result is formally stated below as a function of (I_g, I_{c1}, I_{c2}). As in [41], we make the same assumption on m and n that turns out to ease the proof via prominent large deviation theories: m = ω(log n) and log m = o(n). This assumption is also practically relevant as it rules out highly asymmetric matrices. Theorem 1 (Information-theoretic limits). Assume that m = ω(log n) and log m = o(n). Let the item ratings be drawn from a finite field F_q. Let c and g denote the number of clusters and groups, respectively. Within each cluster, let the set of g rating vectors be spanned by any r ≤ g vectors in the same set. Define p⋆ as p⋆ := (√(1−θ) − √(θ/(q−1)))^{−2} · max{ (gc/(g−r+1)) · (log m)/n, (log n − (n/(gc)) I_g) / (δ_g m), (log n − (n/(gc)) I_{c1} − ((g−1)n/(gc)) I_{c2}) / (δ_c m) }. (2) Fix ε > 0.
If p ≥ (1 + ε)p⋆, then there exists a sequence of estimators ψ satisfying lim_{n→∞} P_e^{(δ)}(ψ) = 0. Conversely, if p ≤ (1 − ε)p⋆, then lim_{n→∞} P_e^{(δ)}(ψ) ≠ 0 for any ψ. Setting (c, g, r, q) = (2, 3, 2, 2), the bound in (2) reduces to p⋆ = (√(1−θ) − √θ)^{−2} · max{ 3 log m / n, (log n − (1/6) n I_g) / (m δ_g), (log n − (1/6) n I_{c1} − (1/3) n I_{c2}) / (m δ_c) }, (3) which is the optimal sample complexity of the problem formulated in Section 2. Proof. We provide the proof sketch for (c, g, r) = (2, 3, 2). We defer the complete proof for (c, g, r) = (2, 3, 2) to the supplementary material. The extension to general (c, g, r) is a natural generalization of the analysis for the parameters (c, g, r) = (2, 3, 2). The achievability proof is based on maximum likelihood estimation (MLE). We first evaluate the likelihood for a given clustering/grouping of users and the corresponding rating matrix. We then show that if p ≥ (1 + ε)p⋆, the likelihood is maximized only by the ground-truth rating matrix in the limit of n: lim_{n→∞} P_e^{(δ)}(ψ_ML) = 0. For the converse (impossibility) proof, we first establish a lower bound on the error probability, and show that it is minimized when employing the maximum likelihood estimator. Next we prove that if p is smaller than any of the three terms on the RHS of (3), then there exists another solution that yields a larger likelihood, compared to the ground-truth matrix. More precisely, if p ≤ (1 − ε) · 3 log m / ((√(1−θ) − √θ)² n), we can find a grouping with the only distinction in two user-item pairs relative to the ground truth, yet yielding a larger likelihood. Similarly, when p ≤ (1 − ε)(log n − (1/6) n I_g) / ((√(1−θ) − √θ)² m δ_g), consider two users in the same cluster yet from distinct groups such that the Hamming distance between their rating vectors is mδ_g. We can then show that a grouping in which their rating vectors are swapped provides a larger likelihood. Similarly, when p ≤ (1 − ε)(log n − (1/6) n I_{c1} − (1/3) n I_{c2}) / ((√(1−θ) − √θ)² m δ_c), we can swap the rating vectors of two users from different clusters with a Hamming distance of mδ_c, and get a greater likelihood. The technical distinctions w.r.t. the prior works [41, 42] are threefold: (i) the likelihood computation requires more involved combinatorial arguments due to the hierarchical structure; (ii) sophisticated upper/lower bounding techniques are developed in order to exploit the relational structure across different groups; (iii) delicate choices are made for the two users to be swapped in the converse proof. We next present the second yet more practically-appealing contribution: our proposed algorithm in Section 4 achieves the information-theoretic limits. The algorithm optimality is guaranteed for a certain yet wide range of scenarios in which graph information yields negligible clustering/grouping errors, formally stated below. We provide the proof outline in Section 4 throughout the description of the algorithm, leaving details to the supplementary material. Theorem 2 (Theoretical guarantees of the proposed algorithm). Assume that m = ω(log n), log m = o(n), m = O(n), I_{c2} > 2 log n / n and I_g = ω(1/n). Then, as long as the sample size is beyond the optimal sample complexity in Theorem 1 (i.e., mnp > mnp⋆), the algorithm presented in Section 4 with T = O(log n) iterations ensures that the worst-case error probability tends to 0 as n → ∞. That is, the algorithm returns M̂ such that P[M̂ = M] = 1 − o(1). Theorem 1 establishes the optimal sample complexity (the number of entries of the rating matrix to be observed) to be mnp⋆, where p⋆ is given in (3).
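As a quick numerical reference, here is a small helper evaluating the threshold in (3); the example parameter values are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def p_star(n, m, theta, Ig, Ic1, Ic2, delta_g, delta_c):
    """Sharp threshold of Eq. (3), i.e., the case (c, g, r, q) = (2, 3, 2, 2)."""
    pre = 1.0 / (np.sqrt(1.0 - theta) - np.sqrt(theta)) ** 2
    t1 = 3.0 * np.log(m) / n                                          # perfect clustering/grouping
    t2 = (np.log(n) - n * Ig / 6.0) / (m * delta_g)                   # grouping-limited
    t3 = (np.log(n) - n * Ic1 / 6.0 - n * Ic2 / 3.0) / (m * delta_c)  # clustering-limited
    return pre * max(t1, t2, t3)

# Example: a noiseless setting in the spirit of Fig. 1
print(p_star(n=1000, m=500, theta=0.0,
             Ig=0.02, Ic1=0.05, Ic2=0.01, delta_g=1/3, delta_c=1/6))
```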
The required sample complexity is a non-increasing function of δ_g and δ_c. This makes intuitive sense because increasing δ_g (or δ_c) yields more distinct rating vectors, thus ensuring easier grouping (or clustering). We emphasize three regimes depending on (I_g, I_{c1}, I_{c2}). The first refers to the so-called perfect clustering/grouping regime in which (I_g, I_{c1}, I_{c2}) are large enough, thereby activating the 1st term in the max function. The second is the grouping-limited regime, in which the quantity I_g is not large enough so that the 2nd term becomes dominant. The last is the clustering-limited regime where the 3rd term is activated. A few observations are in order. For illustrative simplicity, we focus on the noiseless case, i.e., θ = 0. Remark 1 (Perfect clustering/grouping regime). The optimal sample complexity reads 3m log m. This result is interesting. A naive generalization of [41, 42] requires 4m log m, as we have four rating vectors (v_1^A, v_2^A, v_1^B, v_2^B) to estimate and each requires m log m observations under our random sampling, due to the coupon-collecting effect. On the other hand, we exploit the relational structure across the rating vectors of different groups, reflected in v_3^A = v_1^A ⊕ v_2^A and v_3^B = v_1^B ⊕ v_2^B; and we find this serves to estimate (v_1^A, v_2^A, v_1^B, v_2^B) more efficiently, precisely by a factor of 4/3 improvement, thus yielding 3m log m. This exploitation is reflected as novel technical contributions in the converse proof, as well as in the achievability proofs of MLE and the proposed algorithm. Remark 2 (Grouping-limited regime). We find that the sample complexity (n log n − (1/6) n² I_g)/δ_g in this regime coincides with that of [42]. This implies that exploiting the relational structure across different groups does not help improve the sample complexity when grouping information is not reliable. Remark 3 (Clustering-limited regime). This is the most challenging scenario, which has not been explored by any prior works. The challenge is actually reflected in the complicated sample complexity formula: (n log n − (1/6) n² I_{c1} − (1/3) n² I_{c2})/δ_c. When β = γ, i.e., groups and clusters are not distinguishable, I_g = I_{c1} and I_{c2} = 0. Therefore, in this case, it indeed reduces to a 6-group setting: (n log n − (1/6) n² I_g)/δ_c. The only distinction appears in the denominator. We read δ_c instead of δ_g due to different rating vectors across clusters and groups. When I_{c2} ≠ 0, it reads the complicated formula, reflecting a non-trivial technical contribution as well. Fig. 1 depicts the different regimes of the optimal sample complexity as a function of (I_g, I_{c2}) for n = 1000, m = 500 and θ = 0. In Fig. 1a, where δ_g = 1/3 and δ_c = 1/6, the region depicted by diagonal stripes corresponds to the perfect clustering/grouping regime. Here, I_g and I_{c2} are large, and graph information is rich enough to perfectly retrieve the clusters and groups. In this regime, the 1st term in (3) dominates. The region shown by dots corresponds to the grouping-limited regime, where the 2nd term in (3) is dominant. In this regime, graph information suffices to exactly recover the clusters, but we need to rely on rating observations to exactly recover the groups. Finally, the 3rd term in (3) dominates in the region captured by horizontal stripes. This indicates the clustering-limited regime, where neither clustering nor grouping is exact without the side information of the rating vectors.
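The regime map of Fig. 1 can be reproduced by checking which term of (3) attains the maximum over a grid of (I_g, I_{c2}). A sketch follows; it uses the identity I_{c1} = (√I_g + √I_{c2})², which holds under α ≥ β ≥ γ by the definitions of the three quantities (all names are ours):

```python
import numpy as np

n, m, theta = 1000, 500, 0.0
delta_g, delta_c = 1/3, 1/6  # the setting of Fig. 1a

def dominant_term(Ig, Ic2):
    Ic1 = (np.sqrt(Ig) + np.sqrt(Ic2)) ** 2
    terms = [3 * np.log(m) / n,                                        # perfect clustering/grouping
             (np.log(n) - n * Ig / 6) / (m * delta_g),                 # grouping-limited
             (np.log(n) - n * Ic1 / 6 - n * Ic2 / 3) / (m * delta_c)]  # clustering-limited
    return int(np.argmax(terms))

for Ig in np.linspace(0.0, 0.05, 6):
    print([dominant_term(Ig, Ic2) for Ic2 in np.linspace(0.0, 0.05, 6)])
```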
It is worth noting that in practically-relevant systems, where δ_c > δ_g (since rating vectors of users in the same cluster are expected to be more similar than those in different clusters), the third regime vanishes, as shown by Fig. 1b, where δ_g = 1/7 and δ_c = 1/6. It is straightforward to show that the third term in (3) is inactive whenever δ_c > δ_g. Fig. 1c compares the optimal sample complexity reported in (3), as a function of I_g, with that of [42]. The considered setting is n = 1000, m = 500, θ = 0, δ_g = 1/3, δ_c = 1/6, γ = 0.01 and I_{c2} = 0.002. Note that [42] leverages neither the hierarchical structure of the graph, nor the linear dependency among the rating vectors. Thus, the problem formulated in Section 2 translates to a graph with six clusters with linearly independent rating vectors in the setting of [42]. Also, the minimum Hamming distance for [42] is δ_c. In Fig. 1c, we can see that the noticeable gain in the sample complexity of our result in the diagonal parts of the plot (the two regimes on the left side) is due to leveraging the hierarchical graph structure, while the improvement in the sample complexity in the flat part of the plot is a consequence of exploiting the linear dependency among the rating vectors within each cluster (see Remark 1). 4 Proposed Algorithm We propose a computationally feasible matrix completion algorithm that achieves the optimal sample complexity characterized by Theorem 1. The proposed algorithm is motivated by a line of research on iterative algorithms that solve non-convex optimization problems [6, 58, 60–70]. The idea is to first find a good initial estimate, and then successively refine this estimate until the optimal solution is reached. This approach has been employed in several problems such as matrix completion [6, 60], community recovery [58, 61–63], rank aggregation [64], phase retrieval [65, 66], robust PCA [67], the EM algorithm [68], and rating estimation in crowdsourcing [69, 70]. In the following, we describe the proposed algorithm, which consists of four phases to recover clusters, groups and rating vectors. Then, we discuss the computational complexity of the algorithm. Recall that Y ∈ {0, 1, ∗}^{n×m}. For the sake of tractable analysis, it is convenient to map Y to Z ∈ {−1, 0, +1}^{n×m}, where the mapping of the alphabet of Y is as follows: 0 ←→ +1, 1 ←→ −1 and ∗ ←→ 0. Under this mapping, the modulo-2 addition over {0, 1} in Y is represented by the multiplication of integers over {+1, −1} in Z. Also, note that all recovery guarantees are asymptotic, i.e., they are characterized with high probability as n → ∞. Throughout the design and analysis of the proposed algorithm, the number and size of clusters and groups are assumed to be known. 4.1 Algorithm Description Phase 1 (Exact Recovery of Clusters): We use the community detection algorithm in [57] on G to exactly recover the two clusters A and B. As proved in [57], the decomposition of the graph into two clusters is correct with high probability when I_{c2} > 2 log n / n. Phase 2 (Almost Exact Recovery of Groups): The goal of Phase 2 is to decompose the set of users in cluster A (cluster B) into three groups, namely G_1^A, G_2^A, G_3^A (or G_1^B, G_2^B, G_3^B for cluster B). It is worth noting that grouping at this stage is almost exact, and will be further refined in the next phases. To this end, we run a spectral clustering algorithm [58] on A and B separately.
Let Ĝ_i^x(0) denote the initial estimate of the i-th group of cluster x that is recovered by the Phase 2 algorithm, for i ∈ [3] and x ∈ {A, B}. It is shown that the groups within each cluster are recovered with a vanishing fraction of errors if I_g = ω(1/n). It is worth mentioning that there are other clustering algorithms [62, 71–77] that can be employed for this phase. Examples include: spectral clustering [62, 71–74], semidefinite programming (SDP) [75], non-backtracking matrix spectrum [76], and belief propagation [77]. Phase 3 (Exact Recovery of Rating Vectors): We propose a novel algorithm that optimally recovers the rating vectors of the groups within each cluster. The algorithm is based on maximum likelihood (ML) decoding of users’ ratings from the partial and noisy observations. For this model, ML decoding boils down to a counting rule: for each item, find the group with the maximum gap between the number of observed zeros and ones, and set the rating entry of this group to 0. The other two rating vectors are either both 0 or both 1 for this item, which is determined based on the majority of the union of their observed entries. It turns out that the vector recovery is exact with probability 1 − o(1). This is one of the technical distinctions relative to the prior works [41, 42], which employ the simple majority voting rule under non-hierarchical SBMs. Define v̂_i^x as the estimated rating vector of v_i^x, i.e., the output of the Phase 3 algorithm. Let the c-th element of the rating vector v_i^x (or v̂_i^x) be denoted by v_i^x(c) (or v̂_i^x(c)), for i ∈ [3], x ∈ {A, B} and c ∈ [m]. Let Y_{r,c} be the entry of matrix Y at row r and column c, and Z_{r,c} its mapping to {+1, 0, −1}. The pseudocode of the Phase 3 algorithm is given by Algorithm 1.
Algorithm 1 Exact Recovery of Rating Vectors
1: function VECRCV(n, m, Z, {Ĝ_i^x(0) : i ∈ [3], x ∈ {A, B}})
2:   for c ∈ [m] and x ∈ {A, B} do
3:     for i ∈ [3] do ρ_{i,x}(c) ← Σ_{r ∈ Ĝ_i^x(0)} Z_{r,c}
4:     j ← arg max_{i∈[3]} ρ_{i,x}(c)
5:     v̂_j^x(c) ← 0
6:     if Σ_{i∈[3]\{j}} ρ_{i,x}(c) ≥ 0 then
7:       for i ∈ [3]\{j} do v̂_i^x(c) ← 0
8:     else
9:       for i ∈ [3]\{j} do v̂_i^x(c) ← 1
10:  return {v̂_i^x : i ∈ [3], x ∈ {A, B}}
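For concreteness, here is a minimal NumPy rendering of Algorithm 1 for a single cluster; the variable names and data layout are our own choices, and Z is assumed to carry +1/−1/0 for observed-zero/observed-one/unobserved entries, as defined above:

```python
import numpy as np

def vec_rcv(Z, groups):
    """Algorithm 1 for one cluster: Z is the n x m matrix over {+1, -1, 0} and
    `groups` a list of three arrays of user (row) indices; returns the three
    estimated rating vectors over {0, 1}."""
    m = Z.shape[1]
    V = np.zeros((3, m), dtype=int)
    # rho[i, c] = (#observed zeros - #observed ones) of group i on item c
    rho = np.stack([Z[g].sum(axis=0) for g in groups])
    j = rho.argmax(axis=0)  # group whose entry is decoded as 0 per item
    for c in range(m):
        others = [i for i in range(3) if i != j[c]]
        # the remaining two entries are equal; majority over their union decides
        bit = 0 if rho[others, c].sum() >= 0 else 1
        for i in others:
            V[i, c] = bit
        V[j[c], c] = 0
    return V
```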
Phase 4 (Exact Recovery of Groups): Finally, the goal is to refine the groups which are almost recovered in Phase 2, to obtain an exact grouping. To this end, we propose an iterative algorithm that locally refines the estimates of the user grouping within each cluster for T iterations. Specifically, at each iteration, the affiliation of each user is updated to the group that yields the maximum local likelihood. This is determined based on (i) the number of edges between the user and the set of users that belong to that group, and (ii) the number of observed rating matrix entries of the user that coincide with the corresponding entries of that group's rating vector. Algorithm 2 describes the pseudocode of the Phase 4 algorithm.
Algorithm 2 Local Iterative Refinement of Groups (set flag = 0)
1: function REFINE(flag, n, m, T, Y, Z, G, {(Ĝ_i^x(0), v̂_i^x) : i ∈ [3], x ∈ {A, B}})
2:   α̂ ← |{(f, g) ∈ E : f, g ∈ Ĝ_i^x(0), x ∈ {A, B}, i ∈ [3]}| / (6 · binom(n/6, 2))
3:   β̂ ← (6/n²) · |{(f, g) ∈ E : f ∈ Ĝ_i^x(0), g ∈ Ĝ_j^x(0), x ∈ {A, B}, i ∈ [3], j ∈ [3]\{i}}|
4:   θ̂ ← |{(r, c) ∈ Ω : Y_{rc} ≠ v̂_i^x(c), r ∈ Ĝ_i^x(0)}| / |Ω|
5:   for t ∈ [T] and x ∈ {A, B} do
6:     for i ∈ [3] do Ĝ_i^x(t) ← ∅
7:     for r ← 1 to n do
8:       j ← arg max_{i∈[3]} |{c : Y_{r,c} = v̂_i^x(c)}| · log((1−θ̂)/θ̂) + e({r}, Ĝ_i^x(t−1)) · log((1−β̂)α̂ / ((1−α̂)β̂))
9:       Ĝ_j^x(t) ← Ĝ_j^x(t) ∪ {r}
10:    if flag == 1 then
11:      {v̂_i^x : i ∈ [3], x ∈ {A, B}} ← VECRCV(n, m, Z, {Ĝ_i^x(t) : i ∈ [3], x ∈ {A, B}})
12:  return {Ĝ_i^x(T) : i ∈ [3], x ∈ {A, B}}, {v̂_i^x : i ∈ [3], x ∈ {A, B}}
Note that we do not assume knowledge of the model parameters α, β and θ, and estimate them using Y and G, i.e., the proposed algorithm is parameter-free. In order to prove the exact recovery of groups after running Algorithm 2, we need to show that the number of misclassified users in each cluster strictly decreases with each iteration of Algorithm 2. More specifically, assuming that the previous phases are executed successfully, if we start with ηn misclassified users within one cluster, for some small η > 0, then one can show that we end up with (η/2)n misclassified users with high probability as n → ∞ after one iteration of refinement. Hence, running the local refinement for T = log(ηn)/log 2 iterations within the groups of each cluster suffices to converge to the ground-truth assignments. The analysis of this phase follows the one in [42, Theorem 2], in which the problem of recovering K communities of possibly different sizes is studied. By considering the case of three equal-sized communities, the guarantee of exact recovery of the groups within each cluster readily follows when T = O(log n). Remark 4. The iterative refinement in Algorithm 2 can be applied only to the groups (when flag = 0), or to the groups as well as the rating vectors (for flag = 1). Even though the former is sufficient for reliable estimation of the rating matrix, we show, through our simulation results in the following section, that the latter achieves a better performance in finite regimes of n and m. Remark 5. The problem is formulated under the finite-field model only for the purpose of making an initial step towards a more generalized and realistic algorithm. Fortunately, as with many theory-inspiring works, the process of characterizing the optimal sample complexity under this model can also shed insights into developing a universal algorithm that is applicable to a general problem setting rather than the specific setting considered for the theoretical analysis, as long as slight algorithmic modifications are made. To demonstrate the universality of the algorithm, we consider a practical scenario in which ratings are real-valued (for which linear dependency between rating vectors is well-accepted) and the observation noise is Gaussian. In this setting, the detection problem (under the current model) is replaced by an estimation problem. Consequently, we update Algorithm 1 to incorporate an MLE of the rating vectors, and modify the local refinement criterion on Line 8 in Algorithm 2 to find the group that minimizes some properly-defined distance metric between the observed and estimated ratings, such as the root mean squared error (RMSE). In Section 5, we conduct experiments under the aforementioned setting, and show that our algorithm achieves superior performance over the state-of-the-art algorithms.
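Before turning to complexity, here is a sketch of one refinement sweep within a single cluster (the flag = 0 case of Algorithm 2, with the parameter estimates α̂, β̂, θ̂ assumed to be computed beforehand; all names are ours):

```python
import numpy as np

def refine_once(Y, obs_mask, Adj, groups, V, alpha_hat, beta_hat, theta_hat):
    """One local-refinement sweep (Line 8 of Algorithm 2) within a cluster.
    Y: n x m ratings (values at unobserved entries are ignored via obs_mask);
    Adj: 0/1 adjacency among the cluster's n users; groups: three index arrays
    from the previous iteration; V: 3 x m estimated rating vectors."""
    w_rate = np.log((1 - theta_hat) / theta_hat)
    w_edge = np.log((1 - beta_hat) * alpha_hat / ((1 - alpha_hat) * beta_hat))
    members = np.zeros((3, Adj.shape[0]), dtype=int)
    for i, g in enumerate(groups):
        members[i, g] = 1
    new_groups = [[] for _ in range(3)]
    for r in range(Adj.shape[0]):
        agree = ((Y[r] == V) & obs_mask[r]).sum(axis=1)  # rating agreements per group
        edges = members @ Adj[r]                         # edges from user r into each group
        j = int(np.argmax(agree * w_rate + edges * w_edge))
        new_groups[j].append(r)
    return [np.array(g, dtype=int) for g in new_groups]
```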
4.2 Computational Complexity One of the crucial aspects of the proposed algorithm is its computational efficiency. Phase 1 can be done in polynomial time in the number of vertices [57, 78]. Phase 2 can be done in O(|E| log n) using the power method [79]. Phase 3 requires a single pass over all entries of the observed matrix, which corresponds to O(|Ω|). Finally, in each iteration of Phase 4, the affiliation update of user r ∈ [n] requires reading the entries of the r-th row of Y and the edges connected to user r, which amounts to O(|Ω| + |E|) for each of the T iterations, assuming an appropriate data structure. Hence, the overall computational complexity reads poly(n) + O((|Ω| + |E|) log n). Remark 6. The complexity bottleneck is in Phase 1 (exact clustering), as it relies upon [57, 78], exhibiting poly(n) runtime. This can be improved, without any performance degradation, by replacing the exact clustering in Phase 1 with almost exact clustering, yielding O(|E| log n) runtime [79]. In return, Phase 4 should be modified so that the local iterative refinement is applied to cluster affiliation, as well as group affiliation and rating vectors. As a result, the improved overall runtime reads O((|Ω| + |E|) log n). 5 Experimental Results We first conduct Monte Carlo experiments to corroborate Theorem 1. Let α = α̃ log n / n, β = β̃ log n / n, and γ = γ̃ log n / n. We consider a setting where θ = 0.1, β̃ = 10, γ̃ = 0.5, and δ_g = δ_c = 0.5. The synthetic data is generated as per the model in Section 2. In Figs. 2a and 2b, we evaluate the performance of the proposed algorithm (with local iterative refinement of groups and rating vectors), and quantify the empirical success rate as a function of the normalized sample complexity, over 10³ randomly drawn realizations of rating vectors and hierarchical graphs. We vary n and m, preserving the ratio n/m = 3. Fig. 2a depicts the case of α̃ = 40, which corresponds to the perfect clustering/grouping regime (Remark 1). On the other hand, Fig. 2b depicts a case with a smaller α̃, which corresponds to the grouping-limited regime (Remark 2). In both figures, we observe a phase transition² in the success rate at p = p⋆, and as we increase n and m, the phase transition gets sharper. These figures corroborate Theorem 1 in different regimes when the graph side information is not scarce. Fig. 2c compares the performance of the proposed algorithm for n = 3000 and m = 1000 under two different strategies of local iterative refinement: (i) local refinement of groups only (set flag = 0 in Algorithm 2); and (ii) local refinement of both groups and rating vectors (set flag = 1 in Algorithm 2). It is clear that the second strategy outperforms the first in the finite regime of n and m, which is consistent with Remark 4. Furthermore, the gap between the two versions shrinks as we gradually increase α̃ (i.e., as the quality of the graph gradually improves). Next, similar to [41–44], the performance of the proposed algorithm is assessed on semi-real data (a real graph but synthetic rating vectors). We consider a subgraph of the political blog network [80], which is shown to exhibit a hierarchical structure [50]. In particular, we consider a tall matrix setting of n = 381 and m = 200 in order to investigate the gain in sample complexity due to the graph side information. The selected subgraph consists of two clusters of political parties, each of which comprises three groups. The three groups of the first cluster consist of 98, 34 and 103 users, while the three groups of the second cluster consist of 58, 68 and 20 users³. The corresponding rating vectors are generated such that the ratings are drawn from [0, 10] (i.e., real numbers), and the observations are corrupted by Gaussian noise with mean zero and a given variance σ². ²The transition is ideally a step function at p = p⋆ as n and m tend to infinity.
We use the root mean square error (RMSE) as the evaluation metric, and assess the performance of the proposed algorithm against various recommendation algorithms, namely User Average, Item Average, User k-Nearest Neighbor (k-NN) [81], Item k-NN [81], TrustSVD [28], Biased Matrix Factorization (MF) [82], and Matrix Factorization with Social Regularization (SoReg) [24]. Note that [41, 42] are designed to work for rating matrices whose elements are drawn from a finite field, and hence they cannot be run under the practical scenario considered in this setting. In Fig. 2d, we compute the RMSE as a function of p, for fixed σ² = 0.5. On the other hand, Fig. 2e depicts the RMSE as a function of the normalized signal-to-noise ratio 1/σ², for fixed p = 0.08. It is evident that the proposed algorithm achieves superior performance over the state-of-the-art algorithms for a wide range of observation probabilities and Gaussian noise variances, demonstrating its viability and efficiency in practical scenarios. Finally, Table 1 demonstrates the computational efficiency of the proposed algorithm, and reports the runtimes of the recommendation algorithms for the experimental setting of Fig. 2d and p = 0.1. The runtimes are averaged over 20 trials. The proposed algorithm achieves a faster runtime than all other algorithms except for User Average and Item Average. However, as shown in Fig. 2d, the performance of these faster algorithms, in terms of RMSE, is inferior to that of the majority of the other algorithms. ³We refer to the supplementary material for a visualization of the selected subgraph of the political blog network using the t-SNE algorithm. Broader Impact We emphasize two positive impacts of our work. First, it serves to enhance the performance of personalized recommender systems (one of the most influential commercial applications) with the aid of social graphs, which are often available in a variety of applications. Second, it achieves fairness among all users by providing high-quality recommendations even to new users who have not rated any items before. One negative consequence of this work concerns the privacy of users. User privacy may not be preserved in the process of exploiting indirect information posed in social graphs, even though direct information, such as user profiles, is protected. Acknowledgments and Disclosure of Funding The work of A. Elmahdy and S. Mohajer is supported in part by the National Science Foundation under Grants CCF-1617884 and CCF-1749981. The work of J. Ahn and C. Suh is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2018R1A1A1A05022889).
1. What is the focus and contribution of the paper on matrix completion with graph side information?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and experimental verification?
3. What are the weaknesses of the paper regarding its problem setting and computational complexity?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions
The authors consider a matrix completion problem with graph side information, where the graph forms a hierarchy. They consider the stochastic block model for analysis. Under a specific observation model and structure on the graph, the authors characterize the recovery probability of the underlying low-rank matrix. They also present an algorithm that is guaranteed to obtain the right solution under the same assumptions. The authors experimentally verify that the proposed method achieves superior results compared to a few baselines.
Strengths
Well-written paper, and the theoretical results are well explained with intuition. Both an information-theoretic bound and an algorithm that achieves the correct answer when the bounds are satisfied are presented. I'm not completely aware of some of the related works, so it's hard to tell how truly novel the theoretical results are. The experimental results are encouraging, but I'd prefer some runtime plots too.
Weaknesses
The problem setting seems contrived. Why two clusters, each with three groups, specifically? Can this be generalized? The polynomial time dependence in n for the method is potentially prohibitive. Are there ways this can be addressed? Algorithmic runtime is not discussed in enough depth. Especially in comparison to other methods that use graph-structured information and are highly scalable, and in light of the poly(n) complexity, this is something that should be addressed.
NIPS
Title Matrix Completion with Hierarchical Graph Side Information Abstract We consider a matrix completion problem that exploits social or item similarity graphs as side information. We develop a universal, parameter-free, and computationally efficient algorithm that starts with hierarchical graph clustering and then iteratively refines estimates both on graph clustering and matrix ratings. Under a hierarchical stochastic block model that well respects practically-relevant social graphs and a low-rank rating matrix model (to be detailed), we demonstrate that our algorithm achieves the information-theoretic limit on the number of observed matrix entries (i.e., optimal sample complexity) that is derived by maximum likelihood estimation together with a lower-bound impossibility result. One consequence of this result is that exploiting the hierarchical structure of social graphs yields a substantial gain in sample complexity relative to the one that simply identifies different groups without resorting to the relational structure across them. We conduct extensive experiments both on synthetic and real-world datasets to corroborate our theoretical results as well as to demonstrate significant performance improvements over other matrix completion algorithms that leverage graph side information. 1 Introduction Recommender systems have been powerful in a widening array of applications for providing users with relevant items of their potential interest [1]. A prominent well-known technique for operating the systems is low-rank matrix completion [2–18]: Given partially observed entries of an interested matrix, the goal is to predict the values of missing entries. One challenge that arises in the big data era is the so-called cold start problem in which high-quality recommendations are not feasible for new users/items that bear little or no information. One natural and popular way to address the challenge is to exploit other available side information. Motivated by the social homophily theory [19] that users within the same community are more likely to share similar preferences, social networks such as Facebook’s friendship graph have often been employed to improve the quality of recommendation. While there has been a proliferation of social-graph-assisted recommendation algorithms [1, 20–40], few works were dedicated to developing theoretical insights on the usefulness of side information, and therefore the maximum gain due to side information has been unknown. A few recent efforts have been made from an information-theoretic perspective [41–44]. Ahn et al. [41] have identified the maximum gain by characterizing the optimal sample complexity of matrix completion in the presence of graph side information under a simple setting in which there are two clusters and users ∗Equal contribution. Corresponding author: Changho Suh. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. within each cluster share the same ratings over items. A follow-up work [42] extended to an arbitrary number of clusters while maintaining the same-rating-vector assumption per user in each cluster. While [41, 42] lay out the theoretical foundation for the problem, the assumption of the single rating vector per cluster limits the practicality of the considered model. 
In an effort to make a further progress on theoretical insights, and motivated by [45], we consider a more generalized setting in which each cluster exhibits another sub-clustering structure, each subcluster (or that we call a “group”) being represented by a different rating vector yet intimately-related to other rating vectors within the same cluster. More specifically, we focus on a hierarchical graph setting wherein users are categorized into two clusters, each of which comprises three groups in which rating vectors are broadly similar yet distinct subject to a linear subspace of two basis vectors. Contributions: Our contributions are two folded. First we characterize the information-theoretic sharp threshold on the minimum number of observed matrix entries required for reliable matrix completion, as a function of the quantified quality (to be detailed) of the considered hierarchical graph side information. The second yet more practically-appealing contribution is to develop a computationally efficient algorithm that achieves the optimal sample complexity for a wide range of scenarios. One implication of this result is that our algorithm fully utilizing the hierarchical graph structure yields a significant gain in sample complexity, compared to a simple variant of [41, 42] that does not exploit the relational structure across rating vectors of groups. Technical novelty and algorithmic distinctions also come in the process of exploiting the hierarchical structure; see Remarks 2 and 3. Our experiments conducted on both synthetic and real-world datasets corroborate our theoretical results as well as demonstrate the efficacy of our proposed algorithm. Related works: In addition to the initial works [41, 42], more generalized settings have been taken into consideration with distinct directions. Zhang et al. [43] explore a setting in which both social and item similarity graphs are given as side information, thus demonstrating a synergistic effect due to the availability of two graphs. Jo et al. [44] go beyond binary matrix completion to investigate a setting in which a matrix entry, say (i, j)-entry, denotes the probability of user i picking up item j as the most preferable, yet chosen from a known finite set of probabilities. Recently a so-called dual problem has been explored in which clustering is performed with a partially observed matrix as side information [46, 47]. Ashtiani et al. [46] demonstrate that the use of side information given in the form of pairwise queries plays a crucial role in making an NP-hard clustering problem tractable via an efficient k-means algorithm. Mazumdar et al. [47] characterize the optimal sample complexity of clustering in the presence of similarity matrix side information together with the development of an efficient algorithm. One distinction of our work compared to [47] is that we are interested in both clustering and matrix completion, while [47] only focused on finding the clusters, from which the rating matrix cannot be necessarily inferred. Our problem can be viewed as the prominent low-rank matrix completion problem [1–4, 6–18] which has been considered notoriously difficult. Even for the simple scenarios such as rank-1 or rank-2 matrix settings, the optimal sample complexity has been open for decades, although some upper and lower bounds are derived. The matrix of our consideration in this work is of rank 4. 
Hence, in this regard, we could make a progress on this long-standing open problem by exploiting the structural property posed by our considered application. The statistical model that we consider for theoretical guarantees of our proposed algorithm relies on the Stochastic Block Model (SBM) [48] and its hierarchical counterpart [49–52] which have been shown to well respect many practically-relevant scenarios [53–56]. Also our algorithm builds in part upon prominent clustering [57,58] and hierarchical clustering [51,52] algorithms, although it exhibits a notable distinction in other matrix-completion-related procedures together with their corresponding technical analyses. Notations: Row vectors and matrices are denoted by lowercase and uppercase letters, respectively. Random matrices are denoted by boldface uppercase letters, while their realizations are denoted by uppercase letters. Sets are denoted by calligraphic letters. Let 0m×n and 1m×n be all-zero and all-one matrices of dimension m × n, respectively. For an integer n ≥ 1, [n] indicates the set of integers {1, 2, . . . , n}. Let {0, 1}n be the set of all binary numbers with n digits. The hamming distance between two binary vectors u and v is denoted by dH (u, v) := ‖u⊕ v‖0, where ⊕ stands for modulo-2 addition operator. Let 1 [·] denote the indicator function. For a graph G = (V,E) and two disjoint subsets X and Y of V , e (X,Y ) indicates the number of edges between X and Y . 2 Problem Formulation Setting: Consider a rating matrix with n users and m items. Each user rates m items by a binary vector, where 0/1 components denote “dislike”/“like” respectively. We assume that there are two clusters of users, say A and B. To capture the low-rank of the rating matrix, we assume that each user’s rating vector within a cluster lies in a linear subspace of two basis vectors. Specifically, let vA1 ∈ F1×m2 and vA2 ∈ F1×m2 be the two linearly-independent basis vectors of cluster A. Then users in Cluster A can be split into three groups (e.g., say GA1 , G A 2 and G A 3 ) based on their rating vectors. More precisely, we denote by GAi the set of users whose rating vector is v A i for i = 1, 2. Finally, the remaining users of cluster A from group GA3 , and their rating vector is v A 3 = v A 1 ⊕ vB2 (a linear combination of the basis vectors). Similarly we have vB1 , v B 2 and v B 3 = v B 1 ⊕ vB2 for cluster B. For presentational simplicity, we assume equal-sized groups (each being of size n/6), although our algorithm (to be presented in Section 4) allows for any group size, and our theoretical guarantees (to be presented in Theorem 2) hold as long as the group sizes are order-wise same. Let M ∈ Fn×m be a rating matrix wherein the ith row corresponds to user i’s rating vector. We find the Hamming distance instrumental in expressing our main results (to be stated in Section 3) as well as proving the main theorems. Let δg be the normalized Hamming distance among distinct pairs of group’s rating vectors within the same cluster: δg = 1m minc∈{A,B}mini,j∈[3] dH ( vci , v c j ) . Also let δc be the counterpart w.r.t. distinct pairs of rating vectors across different clusters: δc = 1 m mini,j∈[3] dH ( vAi , v B j ) , and define δ := {δg, δc}. We partition all the possible rating matrices into subsets depending on δ. LetM(δ) be the set of rating matrices subject to δ. Problem of interest: Our goal is to estimate a rating matrix M ∈ M(δ) given two types of information: (1) partial ratings Y ∈ {0, 1, ∗}n×m; (2) a graph, say social graph G. 
Here ∗ indicates no observation, and we denote the set of observed entries of Y by Ω, that is Ω = {(r, c) ∈ [n] × [m] : Yrc 6= ∗}. Below is a list of assumptions made for the analysis of the optimal sample complexity (Theorem 1) and theoretical guarantees of our proposed algorithm (Theorem 2), but not for the algorithm itself. We assume that each element of Y is observed with probability p ∈ [0, 1], independently from others, and its observation can possibly be flipped with probability θ ∈ [0, 12 ). Let social graph G = ([n], E) be an undirected graph, where E denotes the set of edges, each capturing the social connection between two associated users. The set [n] of vertices is partitioned into two disjoint clusters, each being further partitioned into three disjoint groups. We assume that the graph follows the hierarchical stochastic block model (HSBM) [51, 59] with three types of edge probabilities: (i) α indicates an edge probability between two users in the same group; (ii) β denotes the one w.r.t. two users of different groups yet within the same cluster; (iii) γ is associated with two users of different clusters. We focus on realistic scenarios in which users within the same group (or cluster) are more likely to be connected as per the social homophily theory [19]: α ≥ β ≥ γ. Performance metric: Let ψ be a rating matrix estimator that takes (Y,G) as an input, yielding an estimate. As a performance metric, we consider the worst-case probability of error: P (δ)e (ψ) := max M∈M(δ) P [ψ(Y,G) 6= M ] . (1) Note that M(δ) is the set of ground-truth matrices M subject to δ := {δg, δc}. Since the error probability may vary depending on different choices of M (i.e., some matrices may be harder to estimate), we employ a conventional minimax approach wherein the goal is to minimize the maximum error probability. We characterize the optimal sample complexity for reliable exact matrix recovery, concentrated around nmp? in the limit of n and m. Here p? indicates the sharp threshold on the observation probability: (i) above which the error probability can be made arbitrarily close to 0 in the limit; (ii) under which P (δ)e (ψ) 9 0 no matter what and whatsoever. 3 Optimal sample complexity We first present the optimal sample complexity characterized under the considered model. We find that an intuitive and insightful expression can be made via the quality of hierarchical social graph, which can be quantified by the following: (i) Ig := ( √ α −√β)2 represents the capability of separating distinct groups within a cluster; (ii) Ic1 := ( √ α − √γ)2 and Ic2 := ( √ β − √γ)2 capture the clustering capabilities of the social graph. Note that the larger the quantities, the easier to do grouping/clustering. Our sample complexity result is formally stated below as a function of (Ig, Ic1, Ic2). As in [41], we make the same assumption on m and n that turns out to ease the proof via prominent large deviation theories: m = ω(log n) and logm = o(n). This assumption is also practically relevant as it rules out highly asymmetric matrices. Theorem 1 (Information-theoretic limits). Assume that m = ω(log n) and logm = o(n). Let the item ratings be drawn from a finite field Fq. Let c and g denote the number of clusters and groups, respectively. Within each cluster, let the set of g rating vectors be spanned by any r ≤ g vectors in the same set. Define p? as p? := 1 (√ 1−θ− √ θ q−1 )2 max { gc g−r+1 logm n , log n− ngcIg δgm , log n− ngcIc1 − (g−1)n gc Ic2 δcm } . (2) Fix > 0. 
If p ≥ (1+ )p?, then there exists a sequence of estimators ψ satisfying limn→∞ P (δ)e (ψ) = 0. Conversely, if p ≤ (1− )p?, then limn→∞ P (δ)e (ψ) 6= 0 for anyψ. Setting (c, g, r, q) = (2, 3, 2, 2), the bound in (1) reduces to p? = 1 ( √ 1− θ − √ θ)2 max { 3 logm n , log n− 16nIg mδg , log n− 16nIc1 − 13nIc2 mδc } , (3) which is the optimal sample complexity of the problem formulated in Section 2. Proof. We provide the proof sketch for (c, g, r) = (2, 3, 2). We defer the complete proof for (c, g, r) = (2, 3, 2) to the supplementary material. The extension to general (c, g, r) is a natural generalization of the analysis for the parameters (c, g, r) = (2, 3, 2). The achievability proof is based on maximum likelihood estimation (MLE). We first evaluate the likelihood for a given clustering/grouping of users and the corresponding rating matrix. We then show that if p ≥ (1+ )p?, the likelihood is maximized only by the ground-truth rating matrix in the limit of n: limn→∞ P (δ) e (ψML) = 0. For the converse (impossibility) proof, we first establish a lower bound on the error probability, and show that it is minimized when employing the maximum likelihood estimator. Next we prove that if p is smaller than any of the three terms in the RHS of (3), then there exists another solution that yields a larger likelihood, compared to the ground-truth matrix. More precisely, if p ≤ (1− )3 logm ( √ 1−θ− √ θ)2n , we can find a grouping with the only distinction in two user-item pairs relative to the ground truth, yet yielding a larger likelihood. Similarly when p ≤ (1− )(logn− 1 6nIg) ( √ 1−θ− √ θ)2mδg , consider two users in the same cluster yet from distinct groups such that the hamming distance between their rating vectors is mδg . We can then show that a grouping in which their rating vectors are swapped provides a larger likelihood. Similarly when p ≤ (1− )(logn− 1 6nIc1− 1 3nIc2) ( √ 1−θ− √ θ)2mδc , we can swap the rating vectors of two users from different clusters with a hamming distance of mδc, and get a greater likelihood. The technical distinctions w.r.t. the prior works [41,42] are three folded: (i) the likelihood computation requires more involved combinatorial arguments due to the hierarchical structure; (ii) sophisticated upper/lower bounding techniques are developed in order to exploit the relational structure across different groups; (iii) delicate choices are made for two users to be swapped in the converse proof. We next present the second yet more practically-appealing contribution: Our proposed algorithm in Section 4 achieves the information-theoretic limits. The algorithm optimality is guaranteed for a certain yet wide range of scenarios in which graph information yields negligible clustering/grouping errors, formally stated below. We provide the proof outline in Section 4 throughout the description of the algorithm, leaving details in the supplementary material. Theorem 2 (Theoretical guarantees of the proposed algorithm). Assume thatm = ω(log n), logm = o(n), m = O(n), Ic2 > 2 lognn and Ig > ω( 1 n ). Then, as long as the sample size is beyond the optimal sample complexity in Theorem 1 (i.e., mnp > mnp?), then the algorithm presented in Section 4 with T = O(log n) iterations ensures the worse-case error probability tends to 0 as n→∞. That is, the algorithm returns M̂ such that P[M̂ = M ] = 1− o(1). Theorem 1 establishes the optimal sample complexity (the number of entries of the rating matrix to be observed) to be mnp?, where p? is given in (3). 
The required sample complexity is a non-increasing function of δg and δc. This makes an intuitive sense because increasing δg (or δc) yields more distinct rating vectors, thus ensuring easier grouping (or clustering). We emphasize three regimes depending on (Ig, Ic1, Ic2). The first refers to the so-called perfect clustering/grouping regime in which (Ig, Ic1, Ic2) are large enough, thereby activating the 1st term in the max function. The second is the grouping-limited regime, in which the quantity Ig is not large enough so that the 2nd term becomes dominant. The last is the clustering-limited regime where the 3rd term is activated. A few observations are in order. For illustrative simplicity, we focus on the noiseless case, i.e., θ = 0. Remark 1 (Perfect clustering/grouping regime). The optimal sample complexity reads 3m logm. This result is interesting. A naive generalization of [41,42] requires 4m logm, as we have four rating vectors (vA1 , v A 2 , v B 1 , v B 2 ) to estimate and each requires m logm observations under our random sampling, due to the coupon-collecting effect. On the other hand, we exploit the relational structure across rating vectors of different group, reflected in vA3 = v A 1 ⊕ vA2 and vB3 = vB1 ⊕ vB2 ; and we find this serves to estimate (vA1 , v A 2 , v B 1 , v B 2 ) more efficiently, precisely by a factor of 4 3 improvement, thus yielding 3m logm. This exploitation is reflected as novel technical contributions in the converse proof, as well as the achievability proofs of MLE and the proposed algorithm. Remark 2 (Grouping-limited regime). We find that the sample complexity n logn− 1 6n 2Ig δg in this regime coincides with that of [42]. This implies that exploiting the relational structure across different groups does not help improving sample complexity when grouping information is not reliable. Remark 3 (Clustering-limited regime). This is the most challenging scenario which has not been explored by any prior works. The challenge is actually reflected in the complicated sample complexity formula: n logn− 1 6n 2Ic1− 13n 2Ic2 δc . When β = γ, i.e., groups and clusters are not distinguishable, Ig = Ic1 and Ic2 = 0. Therefore, in this case, it indeed reduces to a 6-group setting: n logn− 16n 2Ig δc . The only distinction appears in the denominator. We read δc instead of δg due to different rating vectors across clusters and groups. When Ic2 6= 0, it reads the complicated formula, reflecting non-trivial technical contribution as well. Fig. 1 depicts the different regimes of the optimal sample complexity as a function of (Ig, Ic2) for n = 1000, m = 500 and θ = 0. In Fig. 1a, where δg = 13 and δc = 1 6 , the region depicted by diagonal stripes corresponds to the perfect clustering/grouping regime. Here, Ig and Ic2 are large, and graph information is rich enough to perfectly retrieve the clusters and groups. In this regime, the 1st term in (3) dominates. The region shown by dots corresponds to grouping-limited regime, where the 2nd term in (3) is dominant. In this regime, graph information suffices to exactly recover the clusters, but we need to rely on rating observation to exactly recover the groups. Finally, the 3rd term in (3) dominates in the region captured by horizontal stripes. This indicates the clustering-limited regimes, where neither clustering nor grouping is exact without the side information of the rating vectors. 
It is worth noting that in practically-relevant systems, where δc > δg (for rating vectors of users in the same cluster are expected to be more similar compared to those in a different cluster), the third regime vanishes, as shown by Fig. 1b, where δg = 17 and δc = 1 6 . It is straightforward to show that the third term in (3) is inactive whenever δc > δg. Fig. 1c compares the optimal sample complexity between the one reported in (3), as a function of Ig, and that of [42]. The considered setting is n=1000, m=500, θ=0, δg= 13 , δc= 1 6 , γ=0.01 and Ic2 =0.002. Note that [42] leverages neither the hierarchical structure of the graph, nor the linear dependency among the rating vectors. Thus, the problem formulated in Section 2 will be translated to a graph with six clusters with linearly independent rating vectors in the setting of [42]. Also, the minimum hamming distance for [42] is δc. In Fig. 1c, we can see that the noticeable gain in the sample complexity of our result in the diagonal parts of the plot (the two regimes on the left side) is due to leveraging the hierarchical graph structure, while the improvement in the sample complexity in the flat part of the plot is a consequence of exploiting the linear dependency among the rating vectors within each cluster (See Remark 1). 4 Proposed Algorithm We propose a computationally feasible matrix completion algorithm that achieves the optimal sample complexity characterized by Theorem 1. The proposed algorithm is motivated by a line of research on iterative algorithms that solve non-convex optimization problems [6, 58, 60–70]. The idea is to first find a good initial estimate, and then successively refine this estimate until the optimal solution is reached. This approach has been employed in several problems such as matrix completion [6, 60], community recovery [58, 61–63], rank aggregation [64], phase retrieval [65, 66], robust PCA [67], EM-algorithm [68], and rating estimation in crowdsourcing [69, 70]. In the following, we describe the proposed algorithm that consists of four phases to recover clusters, groups and rating vectors. Then, we discuss the computational complexity of the algorithm. Recall that Y ∈ {0,+1, ∗}n×m. For the sake of tractable analysis, it is convenient to map Y to Z ∈ {−1, 0,+1}n×m where the mapping of the alphabet of Y is as follows: 0←→ +1, +1←→ −1 and ∗ ←→ 0. Under this mapping, the modulo-2 addition over {0, 1} in Y is represented by the multiplication of integers over {+1,−1} in Z. Also, note that all recovery guarantees are asymptotic, i.e., they are characterized with high probability as n→∞. Throughout the design and analysis of the proposed algorithm, the number and size of clusters and groups are assumed to be known. 4.1 Algorithm Description Phase 1 (Exact Recovery of Clusters): We use the community detection algorithm in [57] on G to exactly recover the two clusters A and B. As proved in [57], the decomposition of the graph into two clusters is correct with high probability when Ic2 > 2 lognn . Phase 2 (Almost Exact Recovery of Groups): The goal of Phase 2 is to decompose the set of users in cluster A (cluster B) into three groups, namely GA1 , G A 2 , G A 3 (or G B 1 , G B 2 , G B 3 for cluster B). It is worth noting that grouping at this stage is almost exact, and will be further refined in the next phases. To this end, we run a spectral clustering algorithm [58] on A and B separately. 
Let $\hat{G}_i^x(0)$ denote the initial estimate of the $i$-th group of cluster $x$ recovered by the Phase 2 algorithm, for $i \in [3]$ and $x \in \{A, B\}$. It is shown that the groups within each cluster are recovered with a vanishing fraction of errors if $I_g = \omega(1/n)$. It is worth mentioning that other clustering algorithms [62, 71-77] can be employed for this phase. Examples include spectral clustering [62, 71-74], semidefinite programming (SDP) [75], the non-backtracking matrix spectrum [76], and belief propagation [77].

Phase 3 (Exact Recovery of Rating Vectors): We propose a novel algorithm that optimally recovers the rating vectors of the groups within each cluster. The algorithm is based on maximum likelihood (ML) decoding of the users' ratings from the partial and noisy observations. For this model, ML decoding boils down to a counting rule: for each item, find the group with the maximum gap between the number of observed zeros and ones, and set the rating entry of this group to 0. The other two rating vectors are either both 0 or both 1 for this item, which is determined by the majority of the union of their observed entries. It turns out that the vector recovery is exact with probability $1 - o(1)$. This is one of the technical distinctions relative to the prior works [41, 42], which employ the simple majority voting rule under non-hierarchical SBMs. Define $\hat{v}_i^x$ as the estimated rating vector of $v_i^x$, i.e., the output of the Phase 3 algorithm. Let the $c$-th element of the rating vector $v_i^x$ (or $\hat{v}_i^x$) be denoted by $v_i^x(c)$ (or $\hat{v}_i^x(c)$), for $i \in [3]$, $x \in \{A, B\}$ and $c \in [m]$. Let $Y_{r,c}$ be the entry of matrix $Y$ at row $r$ and column $c$, and $Z_{r,c}$ be its mapping to $\{+1, 0, -1\}$. The pseudocode of the Phase 3 algorithm is given by Algorithm 1.

Algorithm 1 Exact Recovery of Rating Vectors
1: function VECRCV(n, m, Z, {Ĝ_i^x(0) : i ∈ [3], x ∈ {A,B}})
2:   for c ∈ [m] and x ∈ {A,B} do
3:     for i ∈ [3] do ρ_{i,x}(c) ← Σ_{r ∈ Ĝ_i^x(0)} Z_{r,c}
4:     j ← argmax_{i ∈ [3]} ρ_{i,x}(c)
5:     v̂_j^x(c) ← 0
6:     if Σ_{i ∈ [3]\{j}} ρ_{i,x}(c) ≥ 0 then
7:       for i ∈ [3]\{j} do v̂_i^x(c) ← 0
8:     else
9:       for i ∈ [3]\{j} do v̂_i^x(c) ← 1
10:  return {v̂_i^x : i ∈ [3], x ∈ {A,B}}
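A direct NumPy transcription of Algorithm 1 might look as follows; the array layout ($Z$ over $\{-1, 0, +1\}$, with `groups[x][i]` holding the user indices of $\hat{G}_i^x(0)$) and the helper name are ours.

```python
import numpy as np

def vec_rcv(Z, groups):
    """Phase 3 (Algorithm 1): ML recovery of the rating vectors.
    Z is the n x m matrix over {-1, 0, +1}; groups[x][i] lists the users of
    group i in cluster x. Returns v_hat[x][i] in {0, 1}^m."""
    m = Z.shape[1]
    v_hat = {x: [np.zeros(m, dtype=int) for _ in range(3)] for x in groups}
    for x in groups:
        # rho[i, c] = sum of Z over the users of group i
        # (number of observed zeros minus number of observed ones)
        rho = np.stack([Z[groups[x][i], :].sum(axis=0) for i in range(3)])
        j = rho.argmax(axis=0)                     # group with the largest gap
        for c in range(m):
            others = [i for i in range(3) if i != j[c]]
            v_hat[x][j[c]][c] = 0                  # the "winning" group rates 0
            # remaining two groups: majority over the union of their entries
            bit = 0 if rho[others, c].sum() >= 0 else 1
            for i in others:
                v_hat[x][i][c] = bit
    return v_hat
```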
Algorithm 2 Local Iterative Refinement of Groups (set flag = 0)
1: function REFINE(flag, n, m, T, Y, Z, G, {(Ĝ_i^x(0), v̂_i^x) : i ∈ [3], x ∈ {A,B}})
2:   α̂ ← |{(f,g) ∈ E : f,g ∈ Ĝ_i^x(0), x ∈ {A,B}, i ∈ [3]}| / (6 · binom(n/6, 2))
3:   β̂ ← (6/n²) · |{(f,g) ∈ E : f ∈ Ĝ_i^x(0), g ∈ Ĝ_j^x(0), x ∈ {A,B}, i ∈ [3], j ∈ [3]\{i}}|
4:   θ̂ ← |{(r,c) ∈ Ω : Y_{r,c} ≠ v̂_i^x(c), r ∈ Ĝ_i^x(0)}| / |Ω|
5:   for t ∈ [T] and x ∈ {A,B} do
6:     for i ∈ [3] do Ĝ_i^x(t) ← ∅
7:     for r ← 1 to n do
8:       j ← argmax_{i ∈ [3]} |{c : Y_{r,c} = v̂_i^x(c)}| · log((1−θ̂)/θ̂) + e({r}, Ĝ_i^x(t−1)) · log((1−β̂)α̂ / ((1−α̂)β̂))
9:       Ĝ_j^x(t) ← Ĝ_j^x(t) ∪ {r}
10:    if flag == 1 then
11:      {v̂_i^x : i ∈ [3], x ∈ {A,B}} ← VECRCV(n, m, Z, {Ĝ_i^x(t) : i ∈ [3], x ∈ {A,B}})
12:  return {Ĝ_i^x(T) : i ∈ [3], x ∈ {A,B}}, {v̂_i^x : i ∈ [3], x ∈ {A,B}}

Here binom(n/6, 2) denotes the number of user pairs within one group, and e({r}, S) counts the edges between user r and the set S.

Phase 4 (Exact Recovery of Groups): Finally, the goal is to refine the groups, which are almost exactly recovered in Phase 2, to obtain an exact grouping. To this end, we propose an iterative algorithm that locally refines the estimates of the user grouping within each cluster for $T$ iterations. Specifically, at each iteration, the affiliation of each user is updated to the group that yields the maximum local likelihood. This likelihood is determined based on (i) the number of edges between the user and the set of users belonging to that group, and (ii) the number of observed rating matrix entries of the user that coincide with the corresponding entries of the rating vector of that group. Algorithm 2 describes the pseudocode of the Phase 4 algorithm. Note that we do not assume knowledge of the model parameters $\alpha$, $\beta$ and $\theta$, but estimate them using $Y$ and $G$; i.e., the proposed algorithm is parameter-free.

In order to prove the exact recovery of groups after running Algorithm 2, we need to show that the number of misclassified users in each cluster strictly decreases with each iteration. More specifically, assuming that the previous phases are executed successfully, if we start with $\eta n$ misclassified users within one cluster, for some small $\eta > 0$, then one can show that we end up with $\frac{\eta}{2}n$ misclassified users with high probability as $n \to \infty$ after one iteration of refinement. Hence, running the local refinement for $T = \frac{\log(\eta n)}{\log 2}$ iterations within the groups of each cluster suffices to converge to the ground-truth assignments. The analysis of this phase follows that of [42, Theorem 2], in which the problem of recovering $K$ communities of possibly different sizes is studied. By considering the case of three equal-sized communities, the guarantee of exact recovery of the groups within each cluster readily follows when $T = O(\log n)$.

Remark 4. The iterative refinement in Algorithm 2 can be applied either to the groups only (flag = 0), or to the groups as well as the rating vectors (flag = 1). Even though the former is sufficient for reliable estimation of the rating matrix, we show, through the simulation results in the following section, that the latter achieves better performance in finite regimes of $n$ and $m$.

Remark 5. The problem is formulated under the finite-field model only as an initial step towards a more general and realistic algorithm. Fortunately, as with many theory-driven works, the process of characterizing the optimal sample complexity under this model also sheds insight on developing a universal algorithm that applies to more general problem settings, as long as slight algorithmic modifications are made. To demonstrate the universality of the algorithm, we consider a practical scenario in which ratings are real-valued (for which linear dependency between rating vectors is well-accepted) and the observation noise is Gaussian. In this setting, the detection problem (under the current model) is replaced by an estimation problem. Consequently, we update Algorithm 1 to incorporate an MLE of the rating vectors, and we modify the local refinement criterion on line 8 of Algorithm 2 to find the group that minimizes a properly-defined distance metric between the observed and estimated ratings, such as the root mean squared error (RMSE). In Section 5, we conduct experiments under this setting and show that our algorithm achieves superior performance over state-of-the-art algorithms.
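One refinement pass of Algorithm 2 (line 8) can be sketched as below, assuming the parameter estimates α̂, β̂, θ̂ have already been computed as in lines 2-4; the encoding of unobserved entries of Y as -1 and the helper name are our choices.

```python
import numpy as np

def refine_once(Y, adj, groups, v_hat, alpha, beta, theta, x):
    """One refinement pass over the users of cluster x (Algorithm 2, line 8).
    Each user moves to the group maximizing the local likelihood, combining
    rating agreement and edge counts. Unobserved entries of Y are encoded
    as -1, so they never match a 0/1 rating entry."""
    w_rate = np.log((1 - theta) / theta)
    w_edge = np.log((1 - beta) * alpha / ((1 - alpha) * beta))
    users = np.concatenate([groups[x][i] for i in range(3)])
    new_groups = [[] for _ in range(3)]
    for r in users:
        scores = []
        for i in range(3):
            agree = np.sum(Y[r, :] == v_hat[x][i])   # observed matches
            edges = adj[r, groups[x][i]].sum()       # e({r}, G_i^x)
            scores.append(agree * w_rate + edges * w_edge)
        new_groups[int(np.argmax(scores))].append(r)
    return [np.array(g, dtype=int) for g in new_groups]
```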
4.2 Computational Complexity

One of the crucial aspects of the proposed algorithm is its computational efficiency. Phase 1 can be done in polynomial time in the number of vertices [57, 78]. Phase 2 can be done in $O(|E|\log n)$ using the power method [79]. Phase 3 requires a single pass over all entries of the observed matrix, which corresponds to $O(|\Omega|)$. Finally, in each iteration of Phase 4, the affiliation update of user $r \in [n]$ requires reading the entries of the $r$-th row of $Y$ and the edges connected to user $r$, which amounts to $O(|\Omega| + |E|)$ for each of the $T$ iterations, assuming an appropriate data structure. Hence, the overall computational complexity reads $\mathrm{poly}(n) + O(|\Omega|\log n)$.

Remark 6. The complexity bottleneck is Phase 1 (exact clustering), as it relies upon [57, 78], which exhibit $\mathrm{poly}(n)$ runtime. This can be improved, without any performance degradation, by replacing the exact clustering in Phase 1 with almost exact clustering, yielding $O(|E|\log n)$ runtime [79]. In return, Phase 4 should be modified so that the local iterative refinement is applied to the cluster affiliations as well as to the group affiliations and rating vectors. As a result, the improved overall runtime reads $O((|\Omega| + |E|)\log n)$.

5 Experimental Results

We first conduct Monte Carlo experiments to corroborate Theorem 1. Let $\alpha = \tilde{\alpha}\frac{\log n}{n}$, $\beta = \tilde{\beta}\frac{\log n}{n}$, and $\gamma = \tilde{\gamma}\frac{\log n}{n}$. We consider a setting where $\theta = 0.1$, $\tilde{\beta} = 10$, $\tilde{\gamma} = 0.5$, $\delta_g = \delta_c = 0.5$. The synthetic data is generated as per the model in Section 2. In Figs. 2a and 2b, we evaluate the performance of the proposed algorithm (with local iterative refinement of groups and rating vectors), and quantify the empirical success rate as a function of the normalized sample complexity, over $10^3$ randomly drawn realizations of rating vectors and hierarchical graphs. We vary $n$ and $m$, preserving the ratio $n/m = 3$. Fig. 2a depicts the case of $\tilde{\alpha} = 40$, which corresponds to the perfect clustering/grouping regime (Remark 1). Fig. 2b, on the other hand, depicts a smaller value of $\tilde{\alpha}$, which corresponds to the grouping-limited regime (Remark 2). In both figures, we observe a phase transition in the success rate at $p = p^\star$ (ideally a step function at $p = p^\star$ as $n$ and $m$ tend to infinity), and as we increase $n$ and $m$ the transition gets sharper. These figures corroborate Theorem 1 in different regimes when the graph side information is not scarce. Fig. 2c compares the performance of the proposed algorithm for $n = 3000$ and $m = 1000$ under two different strategies of local iterative refinement: (i) local refinement of the groups only (flag = 0 in Algorithm 2); and (ii) local refinement of both the groups and the rating vectors (flag = 1 in Algorithm 2). It is clear that the second strategy outperforms the first in the finite regime of $n$ and $m$, which is consistent with Remark 4. Furthermore, the gap between the two versions shrinks as we gradually increase $\tilde{\alpha}$ (i.e., as the quality of the graph gradually improves).
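For reference, synthetic instances for such Monte Carlo runs can be generated along the following lines. This is a minimal sketch of our reading of the model of Section 2 (two clusters, three equal-sized groups each, edge probabilities α/β/γ, ratings observed with probability p and flipped with probability θ); the construction of v2 at normalized Hamming distance roughly δg from v1, and all helper names, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_instance(n, m, p, theta, alpha, beta, gamma, delta_g=0.5):
    """Two clusters x three groups of n/6 users each (n assumed divisible
    by 6); hierarchical SBM graph plus partially observed noisy ratings."""
    v = {}
    for x in ("A", "B"):
        v1 = rng.integers(0, 2, m)
        v2 = (v1 + (rng.random(m) < delta_g)) % 2   # keep v1, v2 ~delta_g apart
        v[x] = [v1, v2, (v1 + v2) % 2]              # v3 = v1 XOR v2
    labels = [(x, i) for x in ("A", "B") for i in range(3) for _ in range(n // 6)]
    # graph: edge prob. alpha within a group, beta across groups of a cluster,
    # gamma across clusters
    P = np.empty((n, n))
    for r, (xr, ir) in enumerate(labels):
        for s, (xs, js) in enumerate(labels):
            P[r, s] = alpha if (xr, ir) == (xs, js) else beta if xr == xs else gamma
    adj = np.triu(rng.random((n, n)) < P, 1).astype(int)
    adj += adj.T
    # ratings: observe each entry w.p. p, flip it w.p. theta; -1 marks '*'
    Y = np.full((n, m), -1)
    for r, (x, i) in enumerate(labels):
        obs = rng.random(m) < p
        Y[r, obs] = (v[x][i][obs] + (rng.random(obs.sum()) < theta)) % 2
    return Y, adj, labels, v
```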
Next, similar to [41-44], the performance of the proposed algorithm is assessed on semi-real data (a real graph but synthetic rating vectors). We consider a subgraph of the political blog network [80], which is shown to exhibit a hierarchical structure [50]. In particular, we consider a tall matrix setting with $n = 381$ and $m = 200$ in order to investigate the gain in sample complexity due to the graph side information. The selected subgraph consists of two clusters of political parties, each of which comprises three groups. The three groups of the first cluster consist of 98, 34 and 103 users, while the three groups of the second cluster consist of 58, 68 and 20 users (we refer to the supplementary material for a visualization of the selected subgraph using the t-SNE algorithm). The corresponding rating vectors are generated such that the ratings are drawn from $[0, 10]$ (i.e., real numbers), and the observations are corrupted by Gaussian noise with mean zero and a given variance $\sigma^2$.

We use the root mean square error (RMSE) as the evaluation metric, and assess the performance of the proposed algorithm against various recommendation algorithms, namely User Average, Item Average, User k-Nearest Neighbor (k-NN) [81], Item k-NN [81], TrustSVD [28], Biased Matrix Factorization (MF) [82], and Matrix Factorization with Social Regularization (SoReg) [24]. Note that [41, 42] are designed to work for rating matrices whose elements are drawn from a finite field, and hence they cannot be run under the practical scenario considered in this setting. In Fig. 2d, we compute the RMSE as a function of $p$, for fixed $\sigma^2 = 0.5$. Fig. 2e, on the other hand, depicts the RMSE as a function of the normalized signal-to-noise ratio $1/\sigma^2$, for fixed $p = 0.08$. It is evident that the proposed algorithm achieves superior performance over the state-of-the-art algorithms for a wide range of observation probabilities and Gaussian noise variances, demonstrating its viability and efficiency in practical scenarios.

Finally, Table 1 demonstrates the computational efficiency of the proposed algorithm, reporting the runtimes of the recommendation algorithms for the experimental setting of Fig. 2d with $p = 0.1$. The runtimes are averaged over 20 trials. The proposed algorithm achieves a faster runtime than all other algorithms except for User Average and Item Average. However, as shown in Fig. 2d, the performance of these faster algorithms, in terms of RMSE, is inferior to that of the majority of the other algorithms.

Broader Impact

We emphasize two positive impacts of our work. First, it serves to enhance the performance of personalized recommender systems (one of the most influential commercial applications) with the aid of a social graph, which is often available in a variety of applications. Second, it achieves fairness among users by providing high-quality recommendations even to new users who have not rated any items before. One negative consequence of this work concerns the privacy of users. User privacy may not be preserved in the process of exploiting the indirect information posed in social graphs, even though direct information, such as user profiles, is protected.

Acknowledgments and Disclosure of Funding

The work of A. Elmahdy and S. Mohajer is supported in part by the National Science Foundation under Grants CCF-1617884 and CCF-1749981. The work of J. Ahn and C. Suh is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2018R1A1A1A05022889).
1. What is the focus and contribution of the paper regarding binary matrix completion?
2. What are the strengths of the proposed approach, particularly in terms of its applicability and theoretical bounds?
3. What are the weaknesses of the paper, especially regarding its problem setting and experimental evaluation?
4. How does the reviewer assess the significance and practicality of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper considers a binary matrix completion problem with side information and proposes a hierarchical clustering method with refinements to achieve efficient and accurate matrix completion. The information-theoretic limit on the number of observed matrix entries is provided. Empirical analyses on synthetic and semi-real datasets showed that the proposed method can outperform several baseline methods.
Strengths
The theoretical contribution of this paper is to provide the optimal sample complexity bound for exact recovery of the binary matrix completion problem. The proposed method is not limited to the recommendation problem but is also applicable to clustering problems with side information.
Weaknesses
The problem setting of this paper seems to be impractical. The authors assumed that the users are divided into two clusters and each cluster only contains three groups. This may not be a very practical setting for real recommender systems. The experiments are conducted on synthetic datasets. Although some experiments are conducted on the Poliblog dataset, the ratings are synthetic. It would be more interesting to see how the performance of the proposed method differs from that of state-of-the-art recommendation algorithms on real recommendation tasks.
-------------------------
I have read the authors' rebuttal and will keep my original scores for this paper.
NIPS
Title Rethinking the compositionality of point clouds through regularization in the hyperbolic space

Abstract Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such a part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-the-art supervised models for point cloud classification. (Code of the project: https://github.com/diegovalsesia/HyCoRe)

1 Introduction

Is the whole more than the sum of its parts? While philosophers have been debating such a deep question since the time of Aristotle, we can certainly say that understanding and capturing the relationship between parts as constituents of whole complex structures is of paramount importance in building models of reality. In this paper, we turn our attention to the compositional nature of 3D objects, represented as point clouds, where simple parts can be assembled to form progressively more complex shapes. Indeed, the complex geometry of an object can be better understood by unraveling the implicit hierarchy of its parts. Such a hierarchy can be intuitively captured by a tree where nodes close to the root represent basic universal shapes, which become progressively more complex as we approach the whole-object leaves. Transforming an object into another requires swapping parts by traversing the tree up to a common ancestor part. It is thus clear that a model extracting features that claim to capture the nature of 3D objects needs to incorporate such a hierarchy. In recent years, point cloud processing methods have been devised to extract complex geometric information from points and neighborhoods. Architectures like graph neural networks [1] compose the features extracted by local receptive fields with sophisticated geometric priors [2] exploiting locality and self-similarity, while a different school of thought argues that simple architectures, such as PointMLP [3] and SimpleView [4], with limited geometric priors are nevertheless very effective. This raises the question of whether prior knowledge about the data is being exploited effectively. In this sense, works such as PointGLR [5], Info3D [6] and DCGLR [7] recognized the need to reason about local and global interactions in the feature extraction process. In particular, their claim is that maximizing the mutual information between parts and whole objects leads to an understanding of local and global relations. Although these methods present compelling results for unsupervised feature extraction, they still fall short of providing significant improvements when finetuned with supervision. In our work, we argue that those methods do not fulfill their promise of capturing the part-whole relationship because they are unable to represent the tree-like nature of the compositional hierarchy. Indeed, their fundamental weakness lies in the use of spaces that are either flat (Euclidean) or with positive curvature (spherical).
However, it is known that only spaces with negative curvature (hyperbolic) are able to embed tree structures with low distortion [8]. This is due to the fact that the volume of the Euclidean space grows only as a power of its radius rather than exponentially, limiting the representation capacity for tree-like data with an exponential number of leaves. This unique characteristic has inspired many researchers to represent hierarchical relations in many domains, from natural language processing [9], [10] to computer vision [11], [12]. However, the use of such principles for point clouds and 3D data is still unexplored. The main contributions of this paper are the following:

• we propose a novel regularizer for supervised training of point cloud classification models that promotes the part-whole hierarchy of compositionality in the hyperbolic space;
• this regularizer can be applied to any state-of-the-art architecture with a simple modification of its head to perform classification with hyperbolic layers in the regularized space, coupled with Riemannian optimization [13];
• we observe a significant improvement in the performance of a number of popular architectures, including state-of-the-art techniques, surpassing the currently known best results on two different datasets;
• we are the first to experimentally observe the desired part-whole hierarchy, noticing that geodesics in hyperbolic space between whole objects pass through common part ancestors.

2 Related work

Point Cloud Analysis. Point cloud data are sets of multiple points, and, in recent years, several deep neural networks have been studied to process them. Early works adapted models for images through 2D projections [14], [15]. Later, PointNet [16] established new models working directly on the raw set of 3D coordinates by exploiting shared architectures invariant to point permutations. Originally, PointNet independently processed individual points through a shared MLP. To improve performance, PointNet++ [17] exploited spatial correlation by using a hierarchical feature learning paradigm. Other methods [18], [19], [20] treat point clouds as graphs and exploit operators defined over irregular sets to capture relations among points and their neighbors at different resolutions. This is the case of DGCNN [21], where the EdgeConv graph convolution operation aggregates features supported on neighborhoods defined by a nearest-neighbor graph dynamically computed in the feature space. Recently, PointMLP [3] revisited PointNet++ to include the concept of residual connections. Through this simple model, the authors show that sophisticated geometric models are not essential to obtain state-of-the-art performance.

Part Compositionality. Successfully capturing the semantics of 3D objects represented as point clouds requires learning interactions between local and global information and, in particular, the compositional nature of 3D objects as constructed from local parts. Indeed, some works have focused on capturing global-local reasoning in point cloud processing. One of the first and most representative works is PointGLR [5]. In this work, the authors map local features at different levels within the network to a common hypersphere where the global feature embedding is made close to such local embeddings. This is the first approach towards modeling the similarities of parts (local features) and whole objects (global features).
The use of a hypersphere as an embedding space for similarity promotion traces its roots to metric learning works for face recognition [22]. In addition to the global-local embedding, PointGLR added two other pretext tasks, namely normal estimation and self-reconstruction, to further promote the learning of highly discriminative features. Our work significantly differs from PointGLR in multiple ways: i) a positive-curvature manifold such as the hypersphere is unable to accurately embed hierarchies (tree-like structures), hence our adoption of the hyperbolic space; ii) we actively promote a continuous embedding of part-whole hierarchies by penalizing the hyperbolic norm of parts proportionally to their number of points (a proxy for part complexity); iii) we move the classification head of the model to the hyperbolic space to exploit our regularized geometry. A further limitation of PointGLR is the implicit assumption of a model generating progressive hierarchies (e.g., via expanding receptive fields) in the intermediate layers. In contrast, our work can be readily adopted by any state-of-the-art model with just a replacement of the final layers. Other works revisit global-local relations using maximization of mutual information between different views [6], clustering and contrastive learning [23], distillation with contrast [7], and self-similarity and contrastive learning with hard negative samples [24]. Although most of these works include a contrastive strategy, they differ in the way they contrast the positive and negative samples and in the details of the self-supervision procedures, e.g., the contrastive loss and the point cloud augmentations. We also notice that most of these works focus on unsupervised learning, and, while they show that the features learned in this manner are highly discriminative, they are mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision. These approaches differ from the one followed in this paper, where we focus on regularizing a fully supervised method, and we show improvements upon the supervised baselines that do not adopt our regularizer.

Hyperbolic Learning. The intuition that the hyperbolic space is crucial to embed hierarchical structures comes from the work of Sarkar [8], who proved that trees can be embedded in the hyperbolic space with arbitrarily low distortion. This inspired several works investigating how various frameworks of representation learning can be reformulated in non-Euclidean manifolds. In particular, [9], [13] and [10] were some of the first works to explore hyperbolic representation learning, introducing Riemannian adaptive optimization, Poincaré embeddings and hyperbolic neural networks for natural language processing. The new mathematical formalism introduced by Ganea et al. [10] was decisive in demonstrating the effectiveness of hyperbolic variants of neural network layers compared to their Euclidean counterparts. Generalizations to other data, such as images [25] and graphs [26], with the corresponding hyperbolic variants of the main operations, like graph convolution [26] and gyroplane convolution [12], have also been studied. In the context of unsupervised learning, new objectives in the hyperbolic space force the models to include the implicit hierarchical structure of the data, leading to better clustering in the embedding space [12], [11]. To the best of our knowledge, no work has yet focused on hyperbolic representations for point clouds.
Indeed, 3D objects present an intrinsic hierarchy where whole objects are made of parts of different sizes. While the smallest parts may be shared across different object classes, the larger the parts, the more class-specific they become. This fits consistently with the structure of a tree, where simple fundamental parts are shared ancestors of complex objects, and hence we show how the hyperbolic space can fruitfully capture this data prior.

3 Method

In this section we present our proposed method, named HyCoRe (Hyperbolic Compositional Regularizer). An overview is presented in Fig. 1. At a high level, HyCoRe enhances any state-of-the-art neural network model for point cloud classification by 1) replacing its last layers with layers performing transformations in the hyperbolic space (see Sec. 3.2), and 2) regularizing the classification loss to induce a desirable configuration of the hyperbolic feature space where embeddings of parts both follow a hierarchy and cluster according to class labels.

3.1 Compositional Hierarchy in 3D Point Clouds

The objective of HyCoRe is to regularize the feature space produced by a neural network so that it captures the compositional structure of the 3D point cloud at different levels. In particular, we notice that there exists a hierarchy where small parts (e.g., simple structures like disks, squares, triangles) composed of few points are universal ancestors of more complex shapes included in many different objects. As these structures are composed into more complex parts with more points, they progressively become more specific to an object or class. This hierarchy can be mathematically represented by a tree, as depicted in Fig. 2, where a simple cylinder can be the ancestor of both pieces of a chair and of a table. While the leaves in the tree are whole objects, thus belonging to a specific class, their ancestors are progressively more universal the higher up in the hierarchy they sit. At this point, it is important to note that the graph distance between leaves is determined by the shortest path passing through the first common ancestor for objects in the same or similar classes, while objects from significantly dissimilar classes have their shortest path passing through the root of the hierarchy. In order to ensure that we can embed this tree structure in a feature space, we need a space that preserves the geometrical properties of trees, and especially the graph distance. In particular, the embedding space must be able to accommodate the exponential volume growth of a tree along its radius. Flat Euclidean space does not provide this, leading to high errors when embedding trees, even in high dimensions; on the contrary, a classic result by Sarkar [8] shows that the hyperbolic space, a Riemannian manifold with negative curvature, does support exponentially increasing volumes and can embed trees with arbitrarily low distortion. Indeed, the geodesic (shortest path) between two points in this space passes through points closer to the origin, mimicking the behavior of the distance defined over a tree. In particular, we will focus on the Poincaré ball model of the hyperbolic space. Since the hyperbolic space is a non-Euclidean manifold, it cannot benefit from conventional vector representations and linear algebra. As a consequence, classical neural networks cannot operate in such a space. However, we will use extensions [10] of classic layers defined through the concept of gyrovector spaces.
3.2 Hyperbolic Space and Neural Networks

The hyperbolic space is a Riemannian manifold with constant negative curvature. The curvature determines the metric of the space via

$g^R = (\lambda_x^c)^2 g^E$, with $\lambda_x^c = \frac{2}{1 + c\|x\|^2}$,   (1)

where $g^R$ is the metric tensor of a generic Riemannian manifold, $\lambda_x^c$ is the conformal factor, which depends on the curvature $c$ and on the point $x$ at which it is evaluated, and $g^E$ is the metric tensor of the Euclidean space $\mathbb{R}^n$, i.e., the identity tensor $I_n$. Note how the metric depends on the coordinates (through $\|x\|$) for $c \neq 0$, and how $c = 0$ yields a constant multiple of $g^E$, i.e., the Euclidean space is a flat Riemannian manifold with zero curvature. Spaces with $c > 0$ are spherical, and those with $c < 0$ are hyperbolic. The Poincaré ball in $n$ dimensions $\mathbb{D}^n$ is a hyperbolic space with $c = -1$, and it is isometric to other models such as the Lorentz model. The distance and norm are defined as

$d_{\mathbb{D}}(x, y) = \cosh^{-1}\left(1 + 2\frac{\|x - y\|^2}{(1 - \|x\|^2)(1 - \|y\|^2)}\right)$, $\|x\|_{\mathbb{D}} = 2\tanh^{-1}(\|x\|)$.   (2)

Since the Poincaré ball is a Riemannian manifold, for each point $x \in \mathbb{D}^n$ we can define a logarithmic map $\log_x : \mathbb{D}^n \to T_x\mathbb{D}^n$ that maps points from the Poincaré ball to the corresponding tangent space $T_x\mathbb{D}^n \cong \mathbb{R}^n$, and an exponential map $\exp_x : T_x\mathbb{D}^n \to \mathbb{D}^n$ that does the opposite. These operations [10] are fundamental to move from one space to the other and vice versa. The formalism that generalizes tensor operations to the hyperbolic space is the gyrovector space, where addition, scalar multiplication, vector-matrix multiplication and other operations are redefined as Möbius operations and work in Riemannian manifolds with curvature $c$. These become the basic blocks of hyperbolic neural networks. In particular, we will use the hyperbolic feed-forward (FF) layer (also known as the Möbius layer). In the Euclidean case, a FF layer consists of a matrix $M : \mathbb{R}^n \to \mathbb{R}^m$ that linearly projects the input $x \in \mathbb{R}^n$ to the feature space $\mathbb{R}^m$, a translation by a bias addition, i.e., $y + b$ with $y, b \in \mathbb{R}^m$, and, finally, a pointwise non-linearity $\phi : \mathbb{R}^m \to \mathbb{R}^m$. Matrix multiplication, bias addition and pointwise non-linearity are replaced by Möbius operations in the gyrovector space and become

$y = M^{\otimes_c}(x) = \frac{1}{\sqrt{c}}\tanh\left(\frac{\|Mx\|}{\|x\|}\tanh^{-1}(\sqrt{c}\|x\|)\right)\frac{Mx}{\|Mx\|}$,   (3)

$z = y \oplus_c b = \exp_y^c\left(\frac{\lambda_0^c}{\lambda_y^c}\log_0^c(b)\right)$, $\phi^{\otimes_c}(z) = \exp_0^c(\phi(\log_0^c(z)))$,   (4)

where $M$ and $b$ are the matrix and vector defined above, and $c$ is the magnitude of the curvature. Note that when $c \to 0$ we recover the Euclidean feed-forward layer. An interesting property of the Möbius layer is that it is highly nonlinear; indeed, the bias addition in hyperbolic space becomes a nonlinear mapping, since geodesics are curved paths in non-flat manifolds.
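In practice these operations are provided by libraries such as geoopt [29]; for illustration, a minimal PyTorch transcription of the distance and norm of Eq. (2) and the Möbius matrix-vector product of Eq. (3) is sketched below (the numerical clamping constants are our choices).

```python
import torch

def poincare_dist(x, y, eps=1e-5):
    """Geodesic distance in the Poincare ball (Eq. 2), batched over rows."""
    sq = torch.sum((x - y) ** 2, dim=-1)
    nx = torch.sum(x ** 2, dim=-1).clamp(max=1 - eps)
    ny = torch.sum(y ** 2, dim=-1).clamp(max=1 - eps)
    return torch.acosh(1 + 2 * sq / ((1 - nx) * (1 - ny)))

def poincare_norm(x, eps=1e-5):
    """Hyperbolic norm (Eq. 2): ||x||_D = 2 atanh(||x||)."""
    return 2 * torch.atanh(torch.norm(x, dim=-1).clamp(max=1 - eps))

def mobius_matvec(M, x, c=1.0, eps=1e-5):
    """Mobius matrix-vector product (Eq. 3); x is (batch, n), M is (m, n)."""
    xn = torch.norm(x, dim=-1, keepdim=True).clamp(min=eps)
    mx = x @ M.T
    mxn = torch.norm(mx, dim=-1, keepdim=True).clamp(min=eps)
    sc = torch.tanh(mxn / xn * torch.atanh((c ** 0.5) * xn.clamp(max=1 - eps)))
    return sc * mx / (mxn * (c ** 0.5))
```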
3.3 Hyperbolic Compositional Regularization

Armed with the formalism introduced in the previous section, we are ready to formulate our HyCoRe framework, anticipated in Fig. 1. Consider a point cloud $P_N$ as a set of 3D points $p \in \mathbb{R}^3$ with $N$ elements. We use any state-of-the-art point cloud processing network as a feature extraction backbone $E : \mathbb{R}^{N \times 3} \to \mathbb{R}^m$ to encode $P_N$ in the corresponding feature space. We then apply an exponential map $\exp_x^c : \mathbb{R}^m \to \mathbb{D}^m$ to map the Euclidean feature vector into the hyperbolic space, followed by a Möbius layer $H : \mathbb{D}^m \to \mathbb{D}^f$ that projects the hyperbolic vector onto an $f$-dimensional Poincaré ball. This is the hyperbolic embedding of the whole point cloud $P_N$, i.e., $z_{whole} = H(\exp(E(P_N))) \in \mathbb{D}^f$. We repeat the same procedure for a sub-part of $P_N$, which we call $P_{N'}$, with $N' < N$ points, to create the part embedding $z_{part} = H(\exp(E(P_{N'}))) \in \mathbb{D}^f$ in the same feature space. We now want to regularize the feature space to induce the previously mentioned properties, namely the part-whole hierarchy and clustering according to the class labels. This is performed by defining the following triplet regularizers:

$R_{hier}(z^+_{whole}, z^+_{part}) = \max(0, -\|z^+_{whole}\|_{\mathbb{D}} + \|z^+_{part}\|_{\mathbb{D}} + \gamma/N')$   (5)

$R_{contr}(z^+_{whole}, z^+_{part}, z^-_{part}) = \max(0, d_{\mathbb{D}}(z^+_{whole}, z^+_{part}) - d_{\mathbb{D}}(z^+_{whole}, z^-_{part}) + \delta)$   (6)

where $z^+_{whole}$ and $z^+_{part}$ are the hyperbolic representations of the whole and a part of the same point cloud, while $z^-_{part}$ is the embedding of a part of a different point cloud from a different class.

The $R_{hier}$ regularizer in Eq. (5) induces the compositional part-whole hierarchy by promoting part embeddings to lie closer to the center of the Poincaré ball and whole embeddings to be closer to its edge. In particular, we use a variable margin $\gamma/N'$ that depends on the number of points $N'$ of the part $P_{N'}$. This means that shapes composed of few points (hence simple, universal shapes) will be far from the whole-object representations and have a low hyperbolic norm (near the center). On the other hand, embeddings of larger parts will be progressively closer to the edge of the Poincaré ball, depending on the part size. Since geodesics between two points pass closer to the ball center (Fig. 3), the structure we impose on the space allows a geodesic between two whole objects to visit their common part ancestors. This regularization thus mimics a continuous version of a part-whole tree embedded in the Poincaré ball.

The $R_{contr}$ regularizer in Eq. (6) promotes correct clustering of objects and parts in the hyperbolic space. In particular, parts and wholes of the same point cloud are promoted to be close, while a part from a different class is mapped far from the corresponding whole. It ensures that the parts of a point cloud of a different class are far in terms of geodesic distance. $\delta$ is a margin hyperparameter that controls the degree of separation between positive and negative samples. The two regularizers are included in the final loss as

$L = L_{CE} + \alpha R_{contr} + \beta R_{hier}$   (7)

where $L_{CE}$ is the conventional classification loss (e.g., cross-entropy) evaluated on the whole objects. The classification head is a hyperbolic Möbius layer followed by a softmax. In principle, one could argue that $L_{CE}$ alone could promote correct clustering according to class labels, rendering $R_{contr}$ redundant. However, several works [10] have noticed that the Möbius-softmax hyperbolic head is weaker than its Euclidean counterpart. We thus found it more effective to evaluate $L_{CE}$ on the whole objects only, and to use $R_{contr}$ as a metric penalty that explicitly considers geodesic distances to ensure correct clustering of both parts and whole objects. At each training iteration of HyCoRe we sample shapes with a random $N'$ varying within a predefined range. A part is defined as the $N'$ nearest neighbors of a random point. In future work, it would be interesting to explore alternative definitions of parts, e.g., using part labels if available; at the moment we only consider the definition via spatial neighbors to avoid extra labeling requirements.
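The regularizers of Eqs. (5)-(7) translate almost verbatim into code. The sketch below reuses the poincare_norm and poincare_dist helpers from the previous snippet; the batching convention and the default hyperparameter values (taken from Sec. 4.1) are our assumptions.

```python
import torch
import torch.nn.functional as F

def hycore_loss(logits, target, z_whole, z_part_pos, z_part_neg, n_part,
                alpha=0.01, beta=0.01, gamma=1000.0, delta=4.0):
    """Total training objective of Eq. (7): cross-entropy on whole objects
    plus the compositional regularizers of Eqs. (5) and (6).
    z_* are (B, f) embeddings in the Poincare ball; n_part is the number of
    points N' of each positive part."""
    # Eq. (5): parts get a smaller hyperbolic norm than wholes, with a
    # variable margin gamma / N' that shrinks for larger, more specific parts
    r_hier = F.relu(poincare_norm(z_part_pos) - poincare_norm(z_whole)
                    + gamma / n_part).mean()
    # Eq. (6): triplet margin loss in geodesic distance
    r_contr = F.relu(poincare_dist(z_whole, z_part_pos)
                     - poincare_dist(z_whole, z_part_neg) + delta).mean()
    return F.cross_entropy(logits, target) + alpha * r_contr + beta * r_hier
```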
4 Experimental results

4.1 Experimental setting

We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,311 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). We apply our method on top of multiple classification architectures, namely the widely popular DGCNN and PointNet++ baselines, as well as the recent state-of-the-art PointMLP model. We substitute the standard classifier with its hyperbolic version (Möbius + softmax), as shown in Fig. 1. We use f = 256 features to be comparable with the official implementations in the Euclidean space; we then test the model over different embedding dimensions in the ablation study. Moreover, we set α = β = 0.01, γ = 1000 and δ = 4. For the number of points N′ of each part, we select a random number between 200 and 600, and for the whole object a random number between 800 and 1024, to ensure better flexibility of the learned model with respect to part sizes. We train the models using Riemannian SGD optimization. Our implementation is in PyTorch and we use geoopt [29] for the hyperbolic operations. Models are trained on an Nvidia A6000 GPU.
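Part extraction and the optimizer setup described above might look as follows; the backbone placeholder is ours, and the geoopt optimizer call follows its documented interface.

```python
import torch
import geoopt

def sample_part(points, n_part):
    """A part is the n_part nearest neighbors of a randomly chosen seed point.
    points: (N, 3) tensor of a single point cloud."""
    seed = points[torch.randint(len(points), (1,))]    # (1, 3)
    dist = torch.cdist(seed, points).squeeze(0)        # (N,)
    idx = dist.topk(n_part, largest=False).indices
    return points[idx]

# per Sec. 4.1: parts use a random N' in [200, 600], wholes in [800, 1024]
points = torch.rand(1024, 3)
part = sample_part(points, int(torch.randint(200, 601, (1,))))

model = torch.nn.Linear(3, 40)  # placeholder for a HyCoRe-equipped backbone
optimizer = geoopt.optim.RiemannianSGD(model.parameters(), lr=0.1, momentum=0.9)
```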
4.2 Main Results

Table 1 shows the results for ModelNet40 classification. The first part of the table reports well-known and state-of-the-art supervised models. We retrained PointNet++, DGCNN and the state-of-the-art PointMLP as baselines, noting some documented difficulty [38] with exactly reproducing the official results. In addition, the second part of the table reports the performance of methods [33], [34], [7] proposing self-supervised pretraining techniques, after supervised finetuning. Concerning PointGLR [5], the method most similar to HyCoRe, we ensure a fair comparison by using only the L2G embedding loss and not the pretext tasks of normal estimation and reconstruction. Finally, the last part of the table presents the results with HyCoRe applied to the selected baselines. We can see that the proposed method achieves substantial gains not only over the randomly initialized models, but also over the finetuned models. When applied to PointMLP, HyCoRe exceeds the state-of-the-art performance on ModelNet40. Moreover, it is interesting to notice that the embedding framework of PointGLR is not particularly effective without the pretext tasks. This is due to the unsuitability of the spherical space for embedding hierarchical information, as explained in Sec. 2, and it is indeed not far from the results we obtain with our method in Euclidean space.

Table 2 reports the classification results on the ScanObjectNN dataset. Also in this case, HyCoRe significantly improves the baseline DGCNN, making it comparable with state-of-the-art methods such as SimpleView [4], PRANet [35] and MVTN [36]. In addition, PointMLP, which holds the state of the art for this dataset, is further improved by our method and reaches an impressive overall accuracy of 87.2%, substantially outperforming all previous approaches. Although the authors of [3] claim that classification performance has reached a saturation point, we show that including novel regularizers in the training process can still lead to significant gains. This demonstrates that the proposed method leverages novel ideas, complementary to what is exploited by existing architectures, and is thus able to boost the performance even of state-of-the-art methods. It is also remarkable that an older, yet still popular, architecture like DGCNN is able to outperform complex and sophisticated models such as the Point Transformer when regularized by HyCoRe.

In addition, to further prove that enforcing the hierarchy between parts is useful to build better clusters, we show in Fig. 4 a 2D visualization with UMAP of the hyperbolic representations for the ModelNet40 data. Colors denote classes, big points whole objects and small points parts. Besides the clear clustering according to class labels, it is fascinating to notice the emergence of the part-whole hierarchy, with part objects closer to the center of the disk. Importantly, some parts bridge multiple classes, such as the ones in the bottom-right zoom; i.e., they are found along a geodesic connecting two class clusters, serving as common ancestors. This can happen because some simple parts having roughly the same shape appear with multiple class labels during training, and the net effect of $R_{contr}$ is to position them midway across the classes. The tree-likeness of the hyperbolic space can also be seen in the visualization in Fig. 2 (right). There we embed shapes with a gradually larger number of points, up to the whole object made of 1024 points. We can notice that the parts are moved towards the disk edge as more points are added. Furthermore, a quantitative analysis of the part-whole hierarchy is shown in Table 6. Here we calculated the hyperbolic norms of compositions of labeled parts. We can see that, as the parts are assembled with other parts, their hyperbolic norms grow, up to the whole object, which is pushed close to the ball edge.

4.3 Ablation study

In the following we show an ablation study focusing on the DGCNN backbone and the ScanObjectNN dataset. The dataset selection is motivated by the fact that it is a real dataset, able to provide more stable and representative results compared to ModelNet40. We first compare HyCoRe with its Euclidean version (EuCoRe) to investigate the effectiveness of the hyperbolic space. The basic principles and losses are the same, but in EuCoRe distances and network layers are defined in the Euclidean space. Table 3 shows the results. With Hype-DGCNN we indicate the hyperbolic version of DGCNN, as represented in Fig. 1, but without any regularization, serving as a baseline to assess the individual effect of the regularizer. We also test the models over different numbers of embedding dimensions. We can see that EuCoRe only provides a modest improvement, underlining the importance of the hyperbolic space. We also notice that the hyperbolic baseline struggles to be on par with its Euclidean counterpart, as observed by many recent works [25], [10]. However, when regularized with HyCoRe, we observe significant gains, even in low dimensions. This leaves an open research question of whether better hyperbolic baselines could be built, so that HyCoRe starts from a less disadvantaged point. In Table 4 we ablate HyCoRe by removing one of the two regularizers. We can see that the combination of the two provides the overall best gain. In order to study the effect of different space curvatures c, Table 5 evaluates HyCoRe from the standard curvature 1 down to 0.01.

Table 5: Performance vs. curvature of the Poincaré ball (average accuracy, %).
Curvature c      1      0.5    0.1    0.01
Hype-DGCNN       76.5   76.9   76.6   76.9
DGCNN+HyCoRe     80.2   79.4   78.7   78.5
We remark that some works [25], [39] report significant improvements when c is very low (e.g., 0.001), but this is counter-intuitive, since the hyperbolic space then resembles an almost flat manifold. On the contrary, we do see improved results at higher curvatures. Since HyCoRe constrains the network to learn the relations between parts and the whole object, we claim that, at the end of the training process, the model should be better able to classify coarser objects. In Figs. 5a and 5b we show the test accuracy of DGCNN on ModelNet40 when presented with a uniformly subsampled point cloud and with a small, randomly chosen, spatially-contiguous part, respectively. Indeed, we notice that HyCoRe provides a gain of up to 20 percentage points for very sparse point clouds, and is also able to successfully detect the object from smaller parts. For a fair comparison, we also report the baseline DGCNN with training augmented by random crops of parts. Even though this augmentation is useful to improve accuracy, HyCoRe is more effective, demonstrating the importance of compositional reasoning.

5 Conclusions

Although deep learning in the hyperbolic space is in its infancy, in this paper we showed how it can successfully capture the hierarchical nature of 3D point clouds, boosting the performance of state-of-the-art models for classification. Reasoning about the relations between objects and the parts that compose them leads not only to better results but also to more robust and explainable models. In the future, it would be interesting to explore different ways of defining parts, based not on spatial nearest neighbors but rather on more semantic constructions. One important extension is to adapt HyCoRe to segmentation. Since segmentation aims to classify single points and the corresponding parts, contrary to classification, the part embeddings should be placed near the boundary of the Poincaré ball, where there is more room to correctly cluster them, and the whole objects (made by composition of parts) near the origin. We could exploit part labels to this end, or investigate unsupervised settings where the part hierarchy emerges naturally.

Acknowledgments and Disclosure of Funding

Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http://www.hpc.polito.it). This research received no external funding.
1. What is the focus and contribution of the paper regarding 3D shape classification?
2. What are the strengths of the proposed approach, particularly in terms of regularization techniques?
3. What are the weaknesses of the paper, especially regarding the experimental scope?
4. Do you have any concerns about extending the approach to semantic segmentation tasks?
5. Are there any limitations to the method that the author should discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes learning shape representations for the 3D shape classification task by regularizing embeddings in hyperbolic space. The intuition on which this paper is based is that complex shapes can be made by combining simpler parts, and this composition can be explained by a tree-like hierarchy. The paper proposes regularizing shape embeddings such that the simplest and most basic parts are embedded at the root level and entire shapes are embedded at the leaf level, where embeddings are defined in hyperbolic space. Specifically, a shape embedding is first mapped to hyperbolic space and then a Möbius layer is applied to project it to the Poincaré ball. The paper proposes two regularizations: the first encourages whole-shape embeddings to be close to the leaf level and part-level embeddings to be close to the root. The second regularization encourages a shape and its parts to be close to each other in embedding space and far from embeddings of parts from different shapes. This approach consistently improves the performance of the shape classification task across several neural network architectures.
Strengths And Weaknesses
Strengths
The paper is clearly written and explains the motivation of the proposed approach well. The proposed approach is well described. All experiment details are provided, ensuring reproducibility. The proposed approach consistently improves performance across several neural network architectures, ensuring generalizability. All ablations are provided, showing that all components are pertinent.
Weakness
The main weakness I see is that only shape classification is chosen to benchmark the approach. Specifically, since the embeddings are better aware of the composition, they could shine in the part segmentation task.
Questions
Is it easy to extend this approach to the semantic segmentation task? If yes, why did the authors not include those experiments? If not, can the authors include some discussion on this matter?
Limitations
Yes.
NIPS
Title Rethinking the compositionality of point clouds through regularization in the hyperbolic space Abstract Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-art supervised models for point cloud classification. 1 Introduction Is the whole more than the sum of its parts? While philosophers have been debating such deep question since the time of Aristotle, we can certainly say that understanding and capturing the relationship between parts as constituents of whole complex structures is of paramount importance in building models of reality. In this paper, we turn our attention to the compositional nature of 3D objects, represented as point clouds, where simple parts can be assembled to form progressively more complex shapes. Indeed, the complex geometry of an object can be better understood by unraveling the implicit hierarchy of its parts. Such hierarchy can be intuitively captured by a tree where nodes close to the root represent basic universal shapes, which become progressively more complex as we approach the whole-object leaves. Transforming an object into another requires swapping parts by traversing the tree up to a common ancestor part. It is thus clear that a model extracting features, that claim to capture the nature of 3D objects, needs to incorporate such hierarchy. In the last years, point cloud processing methods have tried to devise methods to extract complex geometric information from points and neighborhoods. Architectures like graph neural networks [1] compose the features extracted by local receptive fields, with sophisticated geometric priors [2] exploiting locality and self-similarity, while a different school of thought argues that simple architectures, such as PointMLP [3] and SimpleView [4], with limited geometric priors are nevertherless very effective. It thus raises a question whether prior knowledge about the data is being exploited effectively. In this sense, works such as PointGLR [5], Info3D [6] and DCGLR [7] recognized the need to reason about local and global interactions in the feature extraction process. In particular, their claim is that ∗Code of the project: https://github.com/diegovalsesia/HyCoRe 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximizing the mutual information between parts and whole objects leads to understanding of local and global relations. Although these methods present compelling results for unsupervised feature extraction, they still fall short of providing significant improvements when finetuned with supervision. In our work, we argue that those methods do not fulfill their promise of capturing the part-whole relationship because they are unable to represent the tree-like nature of the compositional hierarchy. Indeed, their fundamental weakness lies in the use of spaces that are either flat (Euclidean) or with positive curvature (spherical). 
However, it is known that only spaces with negative curvature (hyperbolic) are able to embed tree structures with low distortion [8]. This is due to the fact that the volume of the Euclidean space grows only as a power of its radius rather than exponentially, limiting the representation capacity of tree-like data with an exponential number of leaves. This unique characteristic has inspired many researchers to represent hierarchical relations in many domains, from natural language processing [9],[10] to computer vision [11] ,[12]. However, the use of such principles for point clouds and 3D data is still unexplored. The main contributions of this paper lie in the following aspects: • we propose a novel regularizer to supervised training of point cloud classification models that promotes the part-whole hierarchy of compositionality in the hyperbolic space; • this regularizer can be applied to any state-of-art architecture with a simple modification of its head to perform classification with hyperbolic layers in the regularized space, coupled with Riemannian optimization [13]; • we observe a significant improvement in the performance of a number of popular architectures, including state-of-the-art techniques, surpassing the currently known best results on two different datasets; • we are the first to experimentally observe the desired part-whole hierarchy, by noticing that the geodesics in hyperbolic space between whole objects pass through common part ancestors. 2 Related work Point Cloud Analysis Point cloud data are sets of multiple points and, in recent years, several deep neural networks have been studied to process them. Early works adapted models for images through 2D projections [14], [15]. Later, PointNet [16] established new models working directly on the raw set of 3D coordinates by exploiting shared architectures invariant to points permutation. Originally, PointNet independently processed individual points through a shared MLP. To improve performance, PointNet++ [17] exploited spatial correlation by using a hierarchical feature learning paradigm. Other methods [18], [19], [20], treat point clouds as a graph and exploit operators defined over irregular sets to capture relations among points and their neighbors at different resolutions. This is the case of DGCNN [21], where the EdgeConv graph convolution operation aggregates features supported on neighborhoods as defined by a nearest neighbor graph dynamically computed in the feature space. Recently, PointMLP [3] revisits PointNet++ to include the concept of residual connections. Through this simple model, the authors show that sophisticated geometric models are not essential to obtain state-of-the-art performance. Part Compositionality Successfully capturing the semantics of 3D objects represented as point clouds requires to learn interactions between local and global information, and, in particular, the compositional nature of 3D objects as constructed from local parts. Indeed, some works have focused on capturing global-local reasoning in point cloud processing. One of the first and most representative works is PointGLR [5]. In this work, the authors map local features at different levels within the network to a common hypersphere where the global features embedding is made close to such local embeddings. This is the first approach towards modeling the similarities of parts (local features) and whole objects (global features). 
The use of a hypersphere as embedding space for similarity promotion traces its roots in metric learning works for face recognition [22]. In addition to the global-local embedding, PointGLR added two other pretext tasks, namely normal estimation and self-reconstruction, to further promote learning of highly discriminative features. Our work significantly differs from PointGLR in multiple ways: i) a positive curvature manifold such as the hypersphere is unable to accurately embed hierarchies (tree-like structures), hence our adoption of the hyperbolic space; ii) we actively promote a continuous embedding of part-whole hierarchies by penalizing the hyperbolic norm of parts proportionally to their number of points (a proxy for part complexity); iii) we move the classification head of the model to the hyperbolic space to exploit our regularized geometry. A further limitation of PointGLR is the implicit assumption of a model generating progressive hierarchies (e.g. via expanding receptive fields) in the intermediate layers. In contrast, our work can be readily adopted by any state-of-the-art model with just a replacement of the final layers. Other works revisit the global-local relations using maximization of mutual information between different views [6], clustering and contrastive learning [23], distillation with constrast [7], self-similarity and contrastive learning with hard negative samples [24]. Although most of these works include the contrastive strategy, they differ in the way they contrast the positive and negative samples and in the details of the self-supervision procedures, e.g., contrastive loss and point cloud augmentations. We also notice that most these works focus on unsupervised learning, and, while they show that the features learned in this manner are highly discriminative, they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision. These approaches differ from the one followed in this paper, where we focus on regularization of a fully supervised method, and we show improvements upon the supervised baselines that do not adopt our regularizer. Hyperbolic Learning The intuition that the hyperbolic space is crucial to embed hierarchical structures comes from the work of Sarkar [8] who proved that trees can be embedded in the hyperbolic space with arbitrarily low distortion. This inspired several works which investigated how various frameworks of representation learning can be reformulated in non-Euclidean manifolds. In particular, [9] [13] and [10] were some of the first works to explore hyperbolic representation learning by introducing Riemannian adaptive optimization, Poincarè embeddings and hyperbolic neural networks for natural language processing. The new mathematical formalism introduced by Ganea et al. [10] was decisive to demonstrate the effectiveness of hyperbolic variants of neural network layers compared to the Euclidean counterparts. Generalizations to other data, such as images [25] and graphs [26] with the corresponding hyperbolic variants of the main operations like graph convolution [26] and gyroplane convolution [12] have also been studied. In the context of unsupervised learning, new objectives in the hyperbolic space force the models to include the implicit hierarchical structure of the data leading to a better clustering in the embedding space [12], [11]. To the best of our knowledge, no work has yet focused on hyperbolic representations for point clouds. 
Indeed, 3D objects present an intrinsic hierarchy where whole objects are made of parts of different sizes. While the smallest parts may be shared across different object classes, the larger the parts, the more class-specific they become. This consistently fits the structure of a tree, where simple fundamental parts are shared ancestors of complex objects, and hence we show how the hyperbolic space can fruitfully capture this data prior.

3 Method

In this section we present our proposed method, named HyCoRe (Hyperbolic Compositional Regularizer). An overview is presented in Fig. 1. At a high level, HyCoRe enhances any state-of-the-art neural network model for point cloud classification by 1) replacing its last layers with layers performing transformations in the hyperbolic space (see Sec. 3.2), and 2) regularizing the classification loss to induce a desirable configuration of the hyperbolic feature space where embeddings of parts both follow a hierarchy and cluster according to class labels.

3.1 Compositional Hierarchy in 3D Point Clouds

The objective of HyCoRe is to regularize the feature space produced by a neural network so that it captures the compositional structure of the 3D point cloud at different levels. In particular, we notice that there exists a hierarchy where small parts (e.g., simple structures like disks, squares, triangles) composed of few points are universal ancestors of more complex shapes included in many different objects. As these structures are composed into more complex parts with more points, they progressively become more specific to an object or class. This hierarchy can be mathematically represented by a tree, as depicted in Fig. 2, where a simple cylinder can be the ancestor of both pieces of a chair or a table. While the leaves in the tree are whole objects, thus belonging to a specific class, their ancestors are progressively more universal the higher up in the hierarchy they sit. At this point, it is important to note that the graph distance between leaves is determined by the shortest path passing through the first common ancestor for objects in the same or similar classes, while objects from significantly dissimilar classes have their shortest path passing through the root of the hierarchy. In order to ensure that we can embed this tree structure in a feature space, we need a space that preserves the geometrical properties of trees and especially the graph distance. In particular, the embedding space must be able to accommodate the exponential volume growth of a tree along its radius. A classic result by Sarkar [8] showed that flat Euclidean space does not provide this, leading to high errors when embedding trees, even in high dimensions. On the contrary, the hyperbolic space, a Riemannian manifold with negative curvature, does support exponentially increasing volumes and can embed trees with arbitrarily low distortion. Indeed, the geodesic (shortest path) between two points in this space passes through points closer to the origin, mimicking the behavior of distance defined over a tree. In particular, we will focus on the Poincaré ball model of hyperbolic space. Since hyperbolic space is a non-Euclidean manifold, it cannot benefit from conventional vector representations and linear algebra. As a consequence, classical neural networks cannot operate in such a space. However, we will use extensions [10] of classic layers defined through the concept of gyrovector spaces.
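To make the tree-like behavior of geodesics concrete, the short sketch below (plain PyTorch; the distance is the Poincaré distance of Eq. (2) in the next subsection) checks numerically that, for two points near opposite sides of the ball boundary, the direct geodesic distance almost equals the sum of the two distances to the origin — just as the distance between two leaves of a tree is the sum of their distances to a common ancestor. The function name is ours, for illustration only.

```python
import torch

def poincare_dist(x, y, eps=1e-9):
    """Geodesic distance on the Poincare ball with c = -1 (Eq. (2))."""
    sq = torch.sum((x - y) ** 2)
    den = (1 - torch.sum(x ** 2)) * (1 - torch.sum(y ** 2))
    return torch.acosh(1 + 2 * sq / (den + eps))

x = torch.tensor([0.95, 0.0])   # near the boundary
y = torch.tensor([-0.95, 0.0])  # near the boundary, opposite side
o = torch.zeros(2)              # the origin, playing the role of the root

print(poincare_dist(x, y))                        # ~7.33
print(poincare_dist(x, o) + poincare_dist(o, y))  # ~7.33: the geodesic hugs the origin
```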
3.2 Hyperbolic Space and Neural Networks

The hyperbolic space is a Riemannian manifold with constant negative curvature. The curvature determines the metric of the space through the conformal factor:

$$ g^R = (\lambda^c_x)^2 \, g^E, \qquad \lambda^c_x = \frac{2}{1 + c\lVert x\rVert^2} \tag{1} $$

where g^R is the metric tensor of a generic Riemannian manifold, λ^c_x is the conformal factor, which depends on the curvature c and on the point x at which it is evaluated, and g^E is the metric tensor of the Euclidean space R^n, i.e., the identity tensor I_n. Note how the metric depends on the coordinates (through ∥x∥) for c ≠ 0, and how c = 0 yields a constant multiple of g^E, i.e., the Euclidean space is a flat Riemannian manifold with zero curvature. Spaces with c > 0 are spherical, and spaces with c < 0 are hyperbolic. The Poincaré ball in n dimensions, D^n, is a hyperbolic space with c = −1, and it is isometric to other models such as the Lorentz model. The distance and norm are defined as:

$$ d_{\mathbb{D}}(x, y) = \cosh^{-1}\!\left(1 + 2\,\frac{\lVert x - y\rVert^2}{(1 - \lVert x\rVert^2)(1 - \lVert y\rVert^2)}\right), \qquad \lVert x\rVert_{\mathbb{D}} = 2\tanh^{-1}(\lVert x\rVert) \tag{2} $$

Since the Poincaré ball is a Riemannian manifold, for each point x ∈ D^n we can define a logarithmic map log_x : D^n → T_x D^n, which maps points from the Poincaré ball to the corresponding tangent space T_x D^n ≅ R^n, and an exponential map exp_x : T_x D^n → D^n, which does the opposite. These operations [10] are fundamental to move from one space to the other and vice versa. The formalism that generalizes tensor operations to the hyperbolic space is the gyrovector space, where addition, scalar multiplication, vector-matrix multiplication and other operations are redefined as Möbius operations that work in Riemannian manifolds with curvature c. These become the basic blocks of hyperbolic neural networks. In particular, we will use the hyperbolic feed-forward (FF) layer (also known as the Möbius layer). In the Euclidean case, a FF layer needs a matrix M : R^n → R^m that linearly projects the input x ∈ R^n to the feature space R^m, a translation via bias addition, i.e., y + b with y, b ∈ R^m, and finally a pointwise non-linearity ϕ : R^m → R^m. Matrix multiplication, bias and pointwise non-linearity are replaced by Möbius operations in the gyrovector space and become:

$$ y = M^{\otimes_c}(x) = \frac{1}{\sqrt{c}} \tanh\!\left(\frac{\lVert Mx\rVert}{\lVert x\rVert}\tanh^{-1}\!\big(\sqrt{c}\,\lVert x\rVert\big)\right)\frac{Mx}{\lVert Mx\rVert} \tag{3} $$

$$ z = y \oplus_c b = \exp^c_y\!\left(\frac{\lambda^c_0}{\lambda^c_y}\log^c_0(b)\right), \qquad \phi^{\otimes_c}(z) = \exp^c_0\!\big(\phi(\log^c_0(z))\big) \tag{4} $$

where M and b are the same matrix and vector defined above, and c is the magnitude of the curvature. Note that when c → 0 we recover the Euclidean feed-forward layer. An interesting property of the Möbius layer is that it is highly nonlinear; indeed, the bias addition in hyperbolic space becomes a nonlinear mapping, since geodesics are curved paths in non-flat manifolds.

3.3 Hyperbolic Compositional Regularization

Armed with the formalism introduced in the previous section, we are ready to formulate our HyCoRe framework, anticipated in Fig. 1. Consider a point cloud P_N as a set of 3D points p ∈ R^3 with N elements. We use any state-of-the-art point cloud processing network as a feature extraction backbone E : R^{N×3} → R^m to encode P_N in the corresponding feature space. At this point we apply an exponential map exp^c_0 : R^m → D^m to map the Euclidean feature vector into the hyperbolic space, and then a Möbius layer H : D^m → D^f to project the hyperbolic vector into an f-dimensional Poincaré ball. This is the hyperbolic embedding of the whole point cloud P_N, i.e., z_whole = H(exp(E(P_N))) ∈ D^f.
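For reference, here is a minimal PyTorch sketch of the Möbius operations in Eqs. (3)–(4): the matrix-vector product, the gyrovector (bias) addition — the closed form equivalent to the exp/log expression in Eq. (4) — and a hyperbolic feed-forward layer assembled from them. The names are ours and the layer is a simplified illustration; in practice, libraries such as geoopt provide tested, numerically stable versions of these primitives.

```python
import torch

def mobius_matvec(M, x, c=1.0, eps=1e-9):
    """Moebius matrix-vector product, Eq. (3); c is the curvature magnitude."""
    sqrt_c = c ** 0.5
    x_norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    Mx = x @ M.t()
    Mx_norm = Mx.norm(dim=-1, keepdim=True).clamp_min(eps)
    arg = torch.atanh((sqrt_c * x_norm).clamp(max=1 - 1e-5))
    return torch.tanh(Mx_norm / x_norm * arg) * Mx / (sqrt_c * Mx_norm)

def mobius_add(x, y, c=1.0):
    """Gyrovector addition x (+)_c y, the closed form of the bias step in Eq. (4)."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = (1 + 2 * c * xy + c ** 2 * x2 * y2).clamp_min(1e-9)
    return num / den

class MobiusLinear(torch.nn.Module):
    """Hyperbolic feed-forward layer: Moebius matvec followed by bias addition."""
    def __init__(self, d_in, d_out, c=1.0):
        super().__init__()
        self.M = torch.nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.b = torch.nn.Parameter(torch.zeros(d_out))  # stays inside the ball if kept small
        self.c = c

    def forward(self, x):  # x is assumed to lie in the Poincare ball
        return mobius_add(mobius_matvec(self.M, x, self.c), self.b, self.c)
```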
We repeat the same procedure for a sub-part of P_N, which we call P_N′ with a number of points N′ < N, to create the part embedding z_part = H(exp(E(P_N′))) ∈ D^f in the same feature space as before. We now want to regularize the feature space to induce the previously mentioned properties, namely the part-whole hierarchy and clustering according to the class labels. This is performed by defining the following triplet regularizers:

$$ R_{\mathrm{hier}}(z^{+}_{\mathrm{whole}}, z^{+}_{\mathrm{part}}) = \max\!\left(0,\; -\lVert z^{+}_{\mathrm{whole}}\rVert_{\mathbb{D}} + \lVert z^{+}_{\mathrm{part}}\rVert_{\mathbb{D}} + \gamma/N'\right) \tag{5} $$

$$ R_{\mathrm{contr}}(z^{+}_{\mathrm{whole}}, z^{+}_{\mathrm{part}}, z^{-}_{\mathrm{part}}) = \max\!\left(0,\; d_{\mathbb{D}}(z^{+}_{\mathrm{whole}}, z^{+}_{\mathrm{part}}) - d_{\mathbb{D}}(z^{+}_{\mathrm{whole}}, z^{-}_{\mathrm{part}}) + \delta\right) \tag{6} $$

where z⁺_whole and z⁺_part are the hyperbolic representations of the whole and a part from the same point cloud, while z⁻_part is the embedding of a part of a different point cloud from a different class. The R_hier regularizer in Eq. (5) induces the compositional part-whole hierarchy by promoting part embeddings to lie closer to the center of the Poincaré ball and whole embeddings to lie closer to the edge. In particular, we use a variable margin γ/N′ that depends on the number of points N′ of the part P_N′. This means that shapes composed of few points (hence simple, universal shapes) will be far from the whole-object representation and will have a lower hyperbolic norm (near the center). On the other hand, embeddings of larger parts will be progressively closer to the edge of the Poincaré ball, depending on the part size. Since geodesics between two points pass closer to the ball center (Fig. 3), the structure we impose on the space allows a geodesic between two whole objects to visit their common part ancestors. This regularization thus mimics a continuous version of a part-whole tree embedded in the Poincaré ball. The R_contr regularizer in Eq. (6) promotes correct clustering of objects and parts in the hyperbolic space. In particular, parts and the whole of the same point cloud are promoted to be close, while a part from a different class is mapped far from that whole. It ensures that the parts of a point cloud of a different class are far in terms of geodesic distance. δ is a margin hyperparameter that controls the degree of separation between positive and negative samples. The two regularizers are included in the final loss as:

$$ L = L_{CE} + \alpha R_{\mathrm{contr}} + \beta R_{\mathrm{hier}} \tag{7} $$

where L_CE is the conventional classification loss (e.g., cross-entropy) evaluated on the whole objects. The classification head is a hyperbolic Möbius layer followed by softmax. In principle, one could argue that L_CE could already promote correct clustering according to class labels, rendering R_contr redundant. However, several works [10] have noticed that the Möbius-softmax hyperbolic head is weaker than its Euclidean counterpart. We thus found it more effective to evaluate L_CE on the whole objects only, and to use R_contr as a metric penalty that explicitly considers geodesic distances to ensure correct clustering of both parts and whole objects. At each iteration of training with HyCoRe we sample shapes with a random N′ varying within a predefined range. A part is defined as the N′ nearest neighbors of a random point; a minimal implementation is sketched below. In future work, it would be interesting to explore alternative definitions of parts, e.g., using part labels if available, but at the moment we only consider definition via spatial neighbors to avoid extra labeling requirements.
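The following sketch shows what Eqs. (5)–(7) and the nearest-neighbor part sampling might look like in PyTorch. Function names and batching conventions are ours; `gamma`, `delta`, `alpha` and `beta` mirror the hyperparameters in the text, and the embeddings are assumed to come from a backbone plus a hyperbolic head as described above.

```python
import torch

def poincare_dist(x, y, eps=1e-9):
    """Batched Poincare distance, Eq. (2), over the last dimension."""
    sq = ((x - y) ** 2).sum(-1)
    den = ((1 - (x ** 2).sum(-1)) * (1 - (y ** 2).sum(-1))).clamp_min(eps)
    return torch.acosh(1 + 2 * sq / den)

def hyp_norm(z, eps=1e-5):
    """Poincare norm, Eq. (2): ||z||_D = 2 atanh(||z||)."""
    return 2 * torch.atanh(z.norm(dim=-1).clamp(max=1 - eps))

def r_hier(z_whole, z_part, n_part, gamma=1000.0):
    """Eq. (5): parts toward the center, wholes toward the edge, margin gamma/N'."""
    return torch.relu(-hyp_norm(z_whole) + hyp_norm(z_part) + gamma / n_part).mean()

def r_contr(z_whole, z_part_pos, z_part_neg, delta=4.0):
    """Eq. (6): triplet penalty in geodesic distance."""
    return torch.relu(poincare_dist(z_whole, z_part_pos)
                      - poincare_dist(z_whole, z_part_neg) + delta).mean()

def sample_part(points, n_prime):
    """A part = the n' nearest neighbors of a random seed point; points: (B, N, 3)."""
    seed = points[:, torch.randint(points.shape[1], (1,))]  # (B, 1, 3), shared seed index
    idx = ((points - seed) ** 2).sum(-1).topk(n_prime, largest=False).indices
    return torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))

# Eq. (7): loss = ce(logits, labels) + alpha * r_contr(...) + beta * r_hier(...)
```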
4 Experimental results

4.1 Experimental setting

We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,331 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). We apply our method to multiple classification architectures, namely the widely popular DGCNN and PointNet++ baselines, as well as the recent state-of-the-art PointMLP model. We substitute the standard classifier with its hyperbolic version (Möbius+softmax), as shown in Fig. 1. We use f = 256 features to be comparable to the official implementations in the Euclidean space, and then test the model over different embedding dimensions in the ablation study. Moreover, we set α = β = 0.01, γ = 1000 and δ = 4. For the number of points of each part N′, we select a random number between 200 and 600, and for the whole object a random number between 800 and 1024, to ensure better flexibility of the learned model to part sizes. We train the models using Riemannian SGD optimization. Our implementation is in PyTorch and we use geoopt [29] for the hyperbolic operations. Models are trained on an Nvidia A6000 GPU.

4.2 Main Results

Table 1 shows the results for ModelNet40 classification. In the first part we report well-known and state-of-the-art supervised models. We retrained PointNet++, DGCNN and the state-of-the-art PointMLP as baselines, noting some documented difficulty [38] with exactly reproducing the official results. In addition, the second part of the table reports the performance of methods [33], [34], [7] proposing self-supervised pretraining techniques, after supervised finetuning. Concerning PointGLR [5], the method most similar to HyCoRe, we ensure a fair comparison by using only the L2G embedding loss and not the pretext tasks of normal estimation and reconstruction. Finally, the last part of the table presents the results with HyCoRe applied to the selected baselines. We can see that the proposed method achieves substantial gains not only compared to the randomly initialized models, but also compared to the finetuned models. When applied to PointMLP, HyCoRe exceeds the state-of-the-art performance on ModelNet40. Moreover, it is interesting to notice that the embedding framework of PointGLR is not particularly effective without the pretext tasks. This is due to the unsuitability of the spherical space for embedding hierarchical information, as explained in Sec. 2, and it is indeed not far from the results we obtain with our method in Euclidean space.

Table 5: Performance vs. curvature of the Poincaré ball (Average Accuracy, %)

Curvature c      1      0.5    0.1    0.01
Hype-DGCNN       76.5   76.9   76.6   76.9
DGCNN+HyCoRe     80.2   79.4   78.7   78.5

Table 2 reports the classification results on the ScanObjectNN dataset. Also in this case, HyCoRe significantly improves the baseline DGCNN, making it comparable with state-of-the-art methods such as SimpleView [4], PRANet [35] and MVTN [36]. In addition, PointMLP, which holds the state of the art for this dataset, is further improved by our method and reaches an impressive overall accuracy of 87.2%, substantially outperforming all previous approaches. Although the authors in [3] claim that classification performance has reached a saturation point, we show that including novel regularizers in the training process can still lead to significant gains.
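To make the setup concrete, a sketch of the overall wiring with geoopt follows. geoopt does provide `PoincareBall`, `ManifoldParameter` and `RiemannianSGD`, but the head shown here is our simplified illustration rather than the authors' exact code, and signatures should be checked against the installed version; `backbone` stands for any encoder such as DGCNN or PointMLP.

```python
import torch
import geoopt  # pip install geoopt

ball = geoopt.PoincareBall(c=1.0)  # c is the curvature magnitude

class HyperbolicHead(torch.nn.Module):
    """Euclidean backbone features -> exp map -> Moebius layer in D^f (f = 256)."""
    def __init__(self, d_in, d_hyp=256):
        super().__init__()
        self.M = torch.nn.Parameter(torch.empty(d_hyp, d_in))
        torch.nn.init.xavier_uniform_(self.M)
        self.b = geoopt.ManifoldParameter(torch.zeros(d_hyp), manifold=ball)

    def forward(self, feat):               # feat: (B, d_in), Euclidean
        z = ball.expmap0(feat)             # map onto the Poincare ball
        z = ball.mobius_matvec(self.M, z)  # Eq. (3)
        return ball.mobius_add(z, self.b)  # bias addition, Eq. (4)

# z_whole = head(backbone(P))       # whole-object embedding
# z_part  = head(backbone(P_part))  # part embedding, same weights
# Riemannian SGD updates ManifoldParameters on the manifold, Euclidean ones as usual:
# opt = geoopt.optim.RiemannianSGD(model.parameters(), lr=0.1, momentum=0.9)
```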
This demonstrates that the proposed method leverages novel ideas, complementary to what is exploited by existing architectures, and is thus able to boost the performance even of state-of-the-art methods. It is also remarkable that an older, yet still popular, architecture like DGCNN is able to outperform complex and sophisticated models such as the Point Transformer when regularized by HyCoRe.

In addition, to further prove that enforcing the hierarchy between parts is useful to build better clusters, we show in Fig. 4 a 2D visualization with UMAP of the hyperbolic representations for the ModelNet40 data. Colors denote classes, large points whole objects, and small points parts. Besides the clear clustering according to class labels, it is fascinating to notice the emergence of the part-whole hierarchy, with parts closer to the center of the disk. Importantly, some parts bridge multiple classes, such as the ones in the bottom-right zoom, i.e., they are found along a geodesic connecting two class clusters, serving as common ancestors. This can happen due to the fact that some simple parts having roughly the same shape appear with multiple class labels during training, and the net effect of R_contr is to position them midway between the classes. The tree-likeness of the hyperbolic space can also be seen in the visualization in Fig. 2 (right). There we embed shapes with a gradually larger number of points, up to the whole object made of 1024 points. We can notice that the parts are moved towards the disk edge as more points are added. Furthermore, a quantitative analysis of the part-whole hierarchy is shown in Table 6. Here we calculated the hyperbolic norms of compositions of labeled parts. We can see that, as the parts are assembled with other parts, their hyperbolic norms grow, up to the whole object, which is pushed close to the ball edge.

4.3 Ablation study

In the following we show an ablation study focusing on the DGCNN backbone and the ScanObjectNN dataset. The dataset selection is motivated by the fact that it is a real dataset, able to provide more stable and representative results compared to ModelNet40. We first compare HyCoRe with its Euclidean version (EuCoRe) to investigate the effectiveness of the hyperbolic space. The basic principles and losses are the same, but in EuCoRe distances and network layers are defined in the Euclidean space. Table 3 shows the results. With Hype-DGCNN we indicate the hyperbolic version of DGCNN, as represented in Fig. 1, but without any regularization, serving as a baseline to assess the individual effect of the regularizer. We also test the models over a different number of embedding dimensions. We can see that EuCoRe only provides a modest improvement, underlining the importance of the hyperbolic space. We also notice that the hyperbolic baseline struggles to be on par with its Euclidean counterpart, as observed in many recent works [25], [10]. However, when regularized with HyCoRe, we observe significant gains, even in low dimensions. This also leaves an open research question as to whether better hyperbolic baselines could be built so that HyCoRe starts from a less disadvantaged point. In Table 4 we ablate HyCoRe by removing one of the two regularizers. We can see that the combination of the two provides the overall best gain. In order to study the effect of different space curvatures c, Table 5 evaluates HyCoRe from the standard curvature 1 down to 0.01.
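The norm-growth analysis behind Table 6 and Fig. 2 (right) is easy to reproduce: embed parts of increasing size and track their hyperbolic norms, which should grow toward the ball edge. Here `model` stands for a trained backbone plus hyperbolic head and `sample_part` for the sampler sketched earlier; both are placeholders for illustration.

```python
import torch

# points: a (1, 1024, 3) point cloud from the test set
for n_prime in (128, 256, 512, 1024):
    part = sample_part(points, n_prime)  # part of growing size
    z = model(part)                      # embedding in the Poincare ball
    norm = 2 * torch.atanh(z.norm(dim=-1).clamp(max=1 - 1e-5))
    print(n_prime, norm.item())          # expected: the norm increases with part size
```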
We remark that some works [25], [39] report significant improvements when c is very low (e.g., 0.001), but this is counter-intuitive since the hyperbolic space then resembles an almost flat manifold. On the contrary, we do see improved results at higher curvatures. Since HyCoRe constrains the network to learn the relations between parts and the whole object, we claim that, at the end of the training process, the model should be better able to classify coarser objects. In Figs. 5a and 5b we show the test accuracy of DGCNN on ModelNet40 when presented with a uniformly subsampled point cloud and with a small, randomly chosen and spatially contiguous part, respectively. Indeed, we can notice that HyCoRe provides a gain of up to 20 percentage points for very sparse point clouds, and is also able to successfully detect the object from smaller parts. For a fair comparison, we also report the baseline DGCNN with training augmented by random crops of parts. Even though the augmentation is useful to improve accuracy, HyCoRe is more effective, demonstrating the importance of compositional reasoning.

5 Conclusions

Although deep learning in the hyperbolic space is in its infancy, in this paper we showed how it can successfully capture the hierarchical nature of 3D point clouds, boosting the performance of state-of-the-art models for classification. Reasoning about the relations between objects and the parts that compose them leads not only to better results but also to more robust and explainable models. In the future, it would be interesting to explore different ways of defining parts, based not on spatial nearest neighbors but rather on more semantic constructions. One important extension is to adapt HyCoRe to segmentation. Since segmentation aims to classify single points and the corresponding parts, contrary to classification, the part embeddings should be placed near the boundary of the Poincaré ball, where there is more space to correctly cluster them, and the whole objects (made by composition of parts) near the origin. We could exploit the labels of the parts to this end, or investigate unsupervised settings where the part hierarchy emerges naturally.

Acknowledgments and Disclosure of Funding

Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http://www.hpc.polito.it). This research received no external funding.
1. What is the main contribution of the paper regarding part-whole hierarchy for 3D point clouds? 2. What are the strengths of the proposed regularization method in utilizing hyperbolic space? 3. Do you have any concerns or questions regarding the definition of part-whole hierarchy in the paper? 4. How does the reviewer assess the comparison between the proposed method and other baseline methods? 5. What are the limitations of the paper regarding its claims and experiments?
Summary Of The Paper

The paper proposes to utilize hyperbolic space to learn a part-whole hierarchy for 3D point clouds. The main idea is to regularize in the hyperbolic space: if a part and a whole represent the same object they are pulled together, and if they are from different objects the regularization pushes them to a larger distance (similar to a triplet or contrastive loss). The paper applies the proposed regularization to many different network architectures (e.g., DGCNN, PointNet++, PointMLP) and improves the baselines across two datasets (ModelNet40 and ScanObjectNN).

Strengths And Weaknesses

Strengths:

Utilizing hyperbolic space for regularizing a part-whole hierarchy is a new idea. The geodesic distance in the hyperbolic space is naturally suited to the tree structure of a part-whole hierarchy, and defining the regularization in this space makes more sense than defining it in the Euclidean space (although I have some questions below).

The proposed regularization is agnostic to the network architecture, and the paper experiments with multiple different backbones for point cloud classification, each achieving an improvement over the compared baselines on two datasets.

The paper is well written and easy to understand.

Weaknesses:

(minor point) The definition of the part-whole hierarchy seems controversial. In the paper, the part is defined as the ancestor of the whole object, and different objects can share the same ancestor. This is not intuitive: one object is composed of multiple different parts, which means that, looking at the path from part to whole, it is not a tree structure (one child node strictly below one parent node). An intuitive way of defining a part-whole hierarchy is that the whole is the ancestor of the parts, and parts can be shared among many different objects. This is also how PartNet [a] (a dataset of part-whole hierarchies for 3D objects) is created, and it is the part-whole hierarchy described in the seminal work [b].

Overclaiming. I do not agree with the authors that the paper is learning to "promote the part-whole hierarchy of compositionality in the hyperbolic space". Specifically, the only thing the paper proposes is a regularization of the feature space, so how the object is composed of different parts is not clear; there is no explicit way of representing an object as a hierarchical tree using the proposed method, let alone its compositionality. Besides, the paper defines "parts" by subsampling a small local region of the object (Line 188); this is not a semantic part but only a subgroup of the object, and the definition of the part-whole hierarchy is controversial as noted above.

The comparison is not fair. I appreciate that the paper compares multiple network architectures and shows improvement over the compared baselines. However, since the main idea is a regularization loss in the hyperbolic space, a fair comparison should apply the same regularization loss in the Euclidean space. The compared results only use a supervised loss for pretraining and fine-tune with the classification loss; what about using both losses for fine-tuning? It was mentioned in Line 95 that "they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision", but there is no evidence to support this.

Since the paper proposes to learn compositionality with a part-whole hierarchy, a more convincing experiment would be running on a dataset that contains part-whole hierarchies (e.g., PartNet [a]) and providing a quantitative analysis of the learnt hierarchy vs. other baselines that learn part-whole hierarchies.

(minor point) In the Fig. 4 visualization, the clustering does not seem great. If we focus on the light green color, its points are spread over many regions of the graph; does this mean the clustering of the light green class is poor?

[a] PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, Hao Su.

[b] How to represent part-whole hierarchies in a neural network. Geoffrey Hinton.

Questions

See Weaknesses section above.

Limitations

See Weaknesses section above.
1. What is the focus and contribution of the paper regarding point cloud analysis? 2. What are the strengths of the proposed approach, particularly in promoting the part-whole hierarchy? 3. What are the weaknesses of the paper, especially regarding its claims and experiments? 4. Do you have any concerns about the technical aspects of the proposed method, such as the regularizer layer? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

This paper presents a method for promoting the part-whole hierarchy in the learned feature space. In particular, it proposes to embed the features of a point cloud encoder into hyperbolic space. The part-whole hierarchy is enhanced through an explicit regularizer. This key idea is backed by the theory that hyperbolic space (space with negative curvature) is the only space that can embed tree structures with low distortion. A regularizer is proposed for supervised training of point cloud classification models. It can be applied to existing architectures with a simple modification. A performance boost across a number of popular architectures is reported in the experiments.

Strengths And Weaknesses

Strengths

The idea of promoting a part-whole hierarchy in the hyperbolic space provides a novel perspective for learning discriminative features of point clouds, which I believe is beneficial to the community. Though it is only verified on the task of classification, I think it could be valuable for more tasks that require in-depth analysis of part hierarchies, e.g., matching incomplete point clouds or point cloud generation from parts.

The experiments have shown steady performance improvement from applying the proposed method to existing mainstream backbones.

Nice visualization of the formed feature space for a better understanding of the effectiveness of the proposed approach.

Code is provided for reproduction.

Weaknesses

Since hyperbolic learning is new to the point cloud analysis community, more insight into the technical part should be given to help the audience comprehend why the proposed method works as expected. I point out what should be improved in the Questions section.

The paper claims the proposed method can be applied to any existing architecture. However, the results section only shows it applied to a small subset of the methods that are compared. This makes me wonder if the results are cherry-picked. Can it provide a performance boost to arbitrary architectures?

Questions

It is not clear to me why Equation (5) promotes part embeddings to lie closer to the center while whole embeddings lie closer to the edge. The main reason is that Equation (5) only specifies constraints on the distance between the embeddings of the whole and the part; it only encourages the part and whole features to stay apart. Why is the part feature (instead of the whole feature) guaranteed to be pushed to the center? What is the technical insight behind it?

Why is hierarchy learning beneficial to the task of classification? In the end, only the whole shape is used for classification, and the parts are not used as input. What is the advantage of the proposed method over a naive method that simply applies contrastive learning to the feature space of the whole shape?

Limitations

No limitations or potential negative societal impacts are provided in the paper. I would recommend the authors provide some failure cases or limitations in the rebuttal (if any).
NIPS
Title Rethinking the compositionality of point clouds through regularization in the hyperbolic space
Abstract
Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-the-art supervised models for point cloud classification. (Code of the project: https://github.com/diegovalsesia/HyCoRe)
1 Introduction
Is the whole more than the sum of its parts? While philosophers have been debating such a deep question since the time of Aristotle, we can certainly say that understanding and capturing the relationship between parts as constituents of whole complex structures is of paramount importance in building models of reality. In this paper, we turn our attention to the compositional nature of 3D objects, represented as point clouds, where simple parts can be assembled to form progressively more complex shapes. Indeed, the complex geometry of an object can be better understood by unraveling the implicit hierarchy of its parts. Such a hierarchy can be intuitively captured by a tree where nodes close to the root represent basic universal shapes, which become progressively more complex as we approach the whole-object leaves. Transforming an object into another requires swapping parts by traversing the tree up to a common ancestor part. It is thus clear that a model extracting features that claim to capture the nature of 3D objects needs to incorporate such a hierarchy. In recent years, point cloud processing methods have been devised to extract complex geometric information from points and neighborhoods. Architectures like graph neural networks [1] compose the features extracted by local receptive fields with sophisticated geometric priors [2] exploiting locality and self-similarity, while a different school of thought argues that simple architectures, such as PointMLP [3] and SimpleView [4], with limited geometric priors are nevertheless very effective. This raises the question of whether prior knowledge about the data is being exploited effectively. In this sense, works such as PointGLR [5], Info3D [6] and DCGLR [7] recognized the need to reason about local and global interactions in the feature extraction process. In particular, their claim is that maximizing the mutual information between parts and whole objects leads to understanding of local and global relations. Although these methods present compelling results for unsupervised feature extraction, they still fall short of providing significant improvements when finetuned with supervision. In our work, we argue that those methods do not fulfill their promise of capturing the part-whole relationship because they are unable to represent the tree-like nature of the compositional hierarchy. Indeed, their fundamental weakness lies in the use of spaces that are either flat (Euclidean) or with positive curvature (spherical).
However, it is known that only spaces with negative curvature (hyperbolic) are able to embed tree structures with low distortion [8]. This is due to the fact that the volume of the Euclidean space grows only as a power of its radius rather than exponentially, limiting the representation capacity of tree-like data with an exponential number of leaves. This unique characteristic has inspired many researchers to represent hierarchical relations in many domains, from natural language processing [9],[10] to computer vision [11],[12]. However, the use of such principles for point clouds and 3D data is still unexplored. The main contributions of this paper lie in the following aspects:
• we propose a novel regularizer for the supervised training of point cloud classification models that promotes the part-whole hierarchy of compositionality in the hyperbolic space;
• this regularizer can be applied to any state-of-the-art architecture with a simple modification of its head to perform classification with hyperbolic layers in the regularized space, coupled with Riemannian optimization [13];
• we observe a significant improvement in the performance of a number of popular architectures, including state-of-the-art techniques, surpassing the currently known best results on two different datasets;
• we are the first to experimentally observe the desired part-whole hierarchy, by noticing that the geodesics in hyperbolic space between whole objects pass through common part ancestors.
2 Related work
Point Cloud Analysis Point cloud data are sets of multiple points and, in recent years, several deep neural networks have been studied to process them. Early works adapted models for images through 2D projections [14], [15]. Later, PointNet [16] established new models working directly on the raw set of 3D coordinates by exploiting shared architectures invariant to point permutations. Originally, PointNet independently processed individual points through a shared MLP. To improve performance, PointNet++ [17] exploited spatial correlation by using a hierarchical feature learning paradigm. Other methods [18], [19], [20] treat point clouds as a graph and exploit operators defined over irregular sets to capture relations among points and their neighbors at different resolutions. This is the case of DGCNN [21], where the EdgeConv graph convolution operation aggregates features supported on neighborhoods as defined by a nearest neighbor graph dynamically computed in the feature space. Recently, PointMLP [3] revisits PointNet++ to include the concept of residual connections. Through this simple model, the authors show that sophisticated geometric models are not essential to obtain state-of-the-art performance.
Part Compositionality Successfully capturing the semantics of 3D objects represented as point clouds requires learning interactions between local and global information and, in particular, the compositional nature of 3D objects as constructed from local parts. Indeed, some works have focused on capturing global-local reasoning in point cloud processing. One of the first and most representative works is PointGLR [5]. In this work, the authors map local features at different levels within the network to a common hypersphere where the global feature embedding is made close to such local embeddings. This is the first approach towards modeling the similarities of parts (local features) and whole objects (global features).
The use of a hypersphere as embedding space for similarity promotion traces its roots to metric learning works for face recognition [22]. In addition to the global-local embedding, PointGLR added two other pretext tasks, namely normal estimation and self-reconstruction, to further promote the learning of highly discriminative features. Our work significantly differs from PointGLR in multiple ways: i) a positive curvature manifold such as the hypersphere is unable to accurately embed hierarchies (tree-like structures), hence our adoption of the hyperbolic space; ii) we actively promote a continuous embedding of part-whole hierarchies by penalizing the hyperbolic norm of parts proportionally to their number of points (a proxy for part complexity); iii) we move the classification head of the model to the hyperbolic space to exploit our regularized geometry. A further limitation of PointGLR is the implicit assumption of a model generating progressive hierarchies (e.g., via expanding receptive fields) in the intermediate layers. In contrast, our work can be readily adopted by any state-of-the-art model with just a replacement of the final layers. Other works revisit the global-local relations using maximization of mutual information between different views [6], clustering and contrastive learning [23], distillation with contrast [7], and self-similarity and contrastive learning with hard negative samples [24]. Although most of these works include the contrastive strategy, they differ in the way they contrast the positive and negative samples and in the details of the self-supervision procedures, e.g., contrastive loss and point cloud augmentations. We also notice that most of these works focus on unsupervised learning, and, while they show that the features learned in this manner are highly discriminative, they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision. These approaches differ from the one followed in this paper, where we focus on the regularization of a fully supervised method, and we show improvements upon the supervised baselines that do not adopt our regularizer.
Hyperbolic Learning The intuition that the hyperbolic space is crucial to embed hierarchical structures comes from the work of Sarkar [8], who proved that trees can be embedded in the hyperbolic space with arbitrarily low distortion. This inspired several works which investigated how various frameworks of representation learning can be reformulated in non-Euclidean manifolds. In particular, [9], [13] and [10] were some of the first works to explore hyperbolic representation learning by introducing Riemannian adaptive optimization, Poincaré embeddings and hyperbolic neural networks for natural language processing. The new mathematical formalism introduced by Ganea et al. [10] was decisive in demonstrating the effectiveness of hyperbolic variants of neural network layers compared to their Euclidean counterparts. Generalizations to other data, such as images [25] and graphs [26], with the corresponding hyperbolic variants of the main operations like graph convolution [26] and gyroplane convolution [12], have also been studied. In the context of unsupervised learning, new objectives in the hyperbolic space force the models to include the implicit hierarchical structure of the data, leading to better clustering in the embedding space [12], [11]. To the best of our knowledge, no work has yet focused on hyperbolic representations for point clouds.
Indeed, 3D objects present an intrinsic hierarchy where whole objects are made of parts of different sizes. While the smallest parts may be shared across different object classes, the larger the parts the more class-specific they become. This fits consistently with the structure of a tree where simple fundamental parts are shared ancestors of complex objects, and hence we show how the hyperbolic space can fruitfully capture this data prior.
3 Method
In this section we present our proposed method, named HyCoRe (Hyperbolic Compositional Regularizer). An overview is presented in Fig. 1. At a high level, HyCoRe enhances any state-of-the-art neural network model for point cloud classification by 1) replacing its last layers with layers performing transformations in the hyperbolic space (see Sec. 3.2), and 2) regularizing the classification loss to induce a desirable configuration of the hyperbolic feature space where embeddings of parts both follow a hierarchy and cluster according to class labels.
3.1 Compositional Hierarchy in 3D Point Clouds
The objective of HyCoRe is to regularize the feature space produced by a neural network so that it captures the compositional structure of the 3D point cloud at different levels. In particular, we notice that there exists a hierarchy where small parts (e.g., simple structures like disks, squares, triangles) composed of few points are universal ancestors of more complex shapes included in many different objects. As these structures are composed into more complex parts with more points, they progressively become more specific to an object or class. This hierarchy can be mathematically represented by a tree, as depicted in Fig. 2, where a simple cylinder can be the ancestor of both pieces of a chair or a table. While the leaves in the tree are whole objects, thus belonging to a specific class, their ancestors are progressively more universal the higher up in the hierarchy they sit. At this point, it is important to note that the graph distance between leaves is determined by the shortest path passing through the first common ancestor for objects in the same or similar classes, while objects from significantly dissimilar classes have the shortest path passing through the root of the hierarchy. In order to ensure that we can embed this tree structure in a feature space, we need a space that preserves the geometrical properties of trees, and especially the graph distance. In particular, the embedding space must be able to accommodate the exponential volume growth of a tree along its radius. A classic result by Sarkar [8] showed that flat Euclidean space does not provide this, leading to high errors when embedding trees, even in high dimensions. On the contrary, the hyperbolic space, a Riemannian manifold with negative curvature, does support exponentially increasing volumes and can embed trees with arbitrarily low distortion. Indeed, the geodesic (shortest path) between two points in this space passes through points closer to the origin, mimicking the behavior of distance defined over a tree. In particular, we will focus on the Poincaré ball model of hyperbolic space. Since hyperbolic space is a non-Euclidean manifold, it cannot benefit from conventional vector representations and linear algebra. As a consequence, classical neural networks cannot operate in such a space. However, we will use extensions [10] of classic layers defined through the concept of gyrovector spaces.
3.2 Hyperbolic Space and Neural Networks
The hyperbolic space is a Riemannian manifold with constant negative curvature. The curvature determines the metric of a space by the following formula:
$$g^R = (\lambda^c_x)^2 g^E, \qquad \lambda^c_x = \frac{2}{1 + c\|x\|^2} \tag{1}$$
where $g^R$ is the metric tensor of a generic Riemannian manifold, $\lambda^c_x$ is the conformal factor that depends on the curvature $c$ and on the point $x$ at which it is calculated, and $g^E$ is the metric tensor of the Euclidean space $\mathbb{R}^n$, i.e., the identity tensor $I_n$. Note how the metric depends on the coordinates (through $\|x\|$) for $c \neq 0$, and how $c = 0$ yields a metric proportional to $g^E$, i.e., the Euclidean space is a flat Riemannian manifold with zero curvature. Spaces with $c > 0$ are spherical, and those with $c < 0$ hyperbolic. The Poincaré ball in $n$ dimensions $\mathbb{D}^n$ is a hyperbolic space with $c = -1$, and it is isometric to other models such as the Lorentz model. The distance and norm are defined as:
$$d_{\mathbb{D}}(x,y) = \cosh^{-1}\!\left(1 + 2\,\frac{\|x-y\|^2}{(1-\|x\|^2)(1-\|y\|^2)}\right), \qquad \|x\|_{\mathbb{D}} = 2\tanh^{-1}(\|x\|) \tag{2}$$
Since the Poincaré ball is a Riemannian manifold, for each point $x \in \mathbb{D}^n$ we can define a logarithmic map $\log_x : \mathbb{D}^n \to T_x\mathbb{D}^n$ that maps points from the Poincaré ball to the corresponding tangent space $T_x\mathbb{D}^n \cong \mathbb{R}^n$, and an exponential map $\exp_x : T_x\mathbb{D}^n \to \mathbb{D}^n$ that does the opposite. These operations [10] are fundamental to move from one space to the other and vice versa. The formalism that generalizes tensor operations to the hyperbolic space is called the gyrovector space, where addition, scalar multiplication, vector-matrix multiplication and other operations are redefined as Möbius operations and work in Riemannian manifolds with curvature $c$. These become the basic blocks of hyperbolic neural networks. In particular, we will use the hyperbolic feed-forward (FF) layer (also known as the Möbius layer). Considering the Euclidean case, for a FF layer we need a matrix $M : \mathbb{R}^n \to \mathbb{R}^m$ to linearly project the input $x \in \mathbb{R}^n$ to the feature space $\mathbb{R}^m$; additionally, a translation made by a bias addition, i.e., $y + b$ with $y, b \in \mathbb{R}^m$; and finally a pointwise non-linearity $\phi : \mathbb{R}^m \to \mathbb{R}^m$. Matrix multiplication, bias addition and pointwise non-linearity are replaced by Möbius operations in the gyrovector space and become:
$$y = M^{\otimes_c}(x) = \frac{1}{\sqrt{c}} \tanh\!\left(\frac{\|Mx\|}{\|x\|} \tanh^{-1}(\sqrt{c}\,\|x\|)\right) \frac{Mx}{\|Mx\|} \tag{3}$$
$$z = y \oplus_c b = \exp^c_y\!\left(\frac{\lambda^c_0}{\lambda^c_y} \log^c_0(b)\right), \qquad \phi^{\otimes_c}(z) = \exp^c_0\!\left(\phi(\log^c_0(z))\right) \tag{4}$$
where $M$ and $b$ are the same matrix and vector defined above, and $c$ is the magnitude of the curvature. Note that when $c \to 0$ we recover the Euclidean feed-forward layer. An interesting property of the Möbius layer is that it is highly nonlinear; indeed, the bias addition in hyperbolic space becomes a nonlinear mapping since geodesics are curved paths in non-flat manifolds.
3.3 Hyperbolic Compositional Regularization
Armed with the formalism introduced in the previous section, we are ready to formulate our HyCoRe framework, anticipated in Fig. 1. Consider a point cloud $P_N$ as a set of 3D points $p \in \mathbb{R}^3$ with $N$ elements. We use any state-of-the-art point cloud processing network as a feature extraction backbone $E : \mathbb{R}^{N \times 3} \to \mathbb{R}^m$ to encode $P_N$ into the corresponding feature space. At this point we apply an exponential map $\exp^c_0 : \mathbb{R}^m \to \mathbb{D}^m$ to map the Euclidean feature vector into the hyperbolic space, and then a Möbius layer $H : \mathbb{D}^m \to \mathbb{D}^f$ to project the hyperbolic vector into an $f$-dimensional Poincaré ball. This is the hyperbolic embedding of the whole point cloud $P_N$, i.e., $z_{whole} = H(\exp^c_0(E(P_N))) \in \mathbb{D}^f$.
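To make Eqs. (1)-(4) concrete, here is a minimal sketch of the Poincaré-ball operations in plain PyTorch. This is our own illustration, not the authors' code (the paper's implementation relies on geoopt); the helper names and the numerical-stability epsilons are our choices.

```python
import torch

def poincare_dist(x, y, eps=1e-5):
    # Geodesic distance on the Poincare ball, Eq. (2):
    # d_D(x, y) = arccosh(1 + 2 ||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))
    num = 2 * (x - y).pow(2).sum(-1)
    den = ((1 - x.pow(2).sum(-1)) * (1 - y.pow(2).sum(-1))).clamp_min(eps)
    return torch.acosh(1 + num / den)

def poincare_norm(x, eps=1e-5):
    # Hyperbolic norm, Eq. (2): ||x||_D = 2 artanh(||x||)
    return 2 * torch.atanh(x.norm(dim=-1).clamp(max=1 - eps))

def expmap0(v, c=1.0, eps=1e-5):
    # Exponential map at the origin: tangent (Euclidean) vectors to the ball.
    sqrt_c = c ** 0.5
    vn = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * vn) * v / (sqrt_c * vn)

def mobius_matvec(M, x, c=1.0, eps=1e-5):
    # Mobius matrix-vector product, Eq. (3):
    # (1/sqrt(c)) tanh(||Mx||/||x|| * artanh(sqrt(c)||x||)) Mx/||Mx||
    sqrt_c = c ** 0.5
    xn = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    Mx = x @ M.t()
    Mxn = Mx.norm(dim=-1, keepdim=True).clamp_min(eps)
    arg = torch.atanh((sqrt_c * xn).clamp(max=1 - eps))
    return torch.tanh(Mxn / xn * arg) * Mx / (sqrt_c * Mxn)

# Sanity check: a geodesic through the origin decomposes additively,
# mirroring tree distances that pass through a common ancestor.
a, b = torch.tensor([0.9, 0.0]), torch.tensor([-0.9, 0.0])
o = torch.zeros(2)
assert torch.isclose(poincare_dist(a, b), poincare_dist(a, o) + poincare_dist(o, b))
```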
We repeat the same procedure for a sub-part of $P_N$, which we call $P_{N'}$ with a number of points $N' < N$, to create the part embedding $z_{part} = H(\exp^c_0(E(P_{N'}))) \in \mathbb{D}^f$ in the same feature space as before. We now want to regularize the feature space to induce the previously mentioned properties, namely the part-whole hierarchy and clustering according to the class labels. This is performed by defining the following triplet regularizers:
$$R_{hier}(z^+_{whole}, z^+_{part}) = \max\!\left(0, -\|z^+_{whole}\|_{\mathbb{D}} + \|z^+_{part}\|_{\mathbb{D}} + \gamma/N'\right) \tag{5}$$
$$R_{contr}(z^+_{whole}, z^+_{part}, z^-_{part}) = \max\!\left(0, d_{\mathbb{D}}(z^+_{whole}, z^+_{part}) - d_{\mathbb{D}}(z^+_{whole}, z^-_{part}) + \delta\right) \tag{6}$$
where $z^+_{whole}$ and $z^+_{part}$ are the hyperbolic representations of the whole and a part from the same point cloud, while $z^-_{part}$ is the embedding of a part of a different point cloud from a different class. The $R_{hier}$ regularizer in Eq. (5) induces the compositional part-whole hierarchy by promoting part embeddings to lie closer to the center of the Poincaré ball and whole embeddings to lie closer to the edge. In particular, we use a variable margin $\gamma/N'$ that depends on the number of points $N'$ of the part $P_{N'}$. This means that shapes composed of few points (hence simple universal shapes) will be far from the whole-object representation and will have a lower hyperbolic norm (near the centre). On the other hand, embeddings of larger parts will be progressively closer to the edge of the Poincaré ball, depending on the part size. Since geodesics between two points pass closer to the ball center (Fig. 3), the structure we impose on the space allows geodesics between two whole objects to visit common part ancestors. This regularization thus mimics a continuous version of a part-whole tree embedded in the Poincaré ball. The $R_{contr}$ regularizer in Eq. (6) promotes correct clustering of objects and parts in the hyperbolic space. In particular, parts and wholes of the same point cloud are promoted to be close, while a part from a different class is mapped far apart from the whole. It ensures that the parts of a point cloud of a different class are far in terms of geodesic distance. $\delta$ is a margin hyperparameter that controls the degree of separation between positive and negative samples. The two regularizers are included in the final loss as follows:
$$L = L_{CE} + \alpha R_{contr} + \beta R_{hier} \tag{7}$$
where $L_{CE}$ is the conventional classification loss (e.g., cross-entropy) evaluated on the whole objects. The classification head is a hyperbolic Möbius layer followed by softmax. In principle, one could argue that $L_{CE}$ could already promote correct clustering according to class labels, rendering $R_{contr}$ redundant. However, several works [10] have noticed that the Möbius-softmax hyperbolic head is weaker than its Euclidean counterpart. We thus found it more effective to evaluate $L_{CE}$ on the whole objects only, and use $R_{contr}$ as a metric penalty that explicitly considers geodesic distances to ensure correct clustering of both parts and whole objects. At each iteration of training with HyCoRe we sample shapes with a random $N'$ varying within a predefined range. A part is defined as the $N'$ nearest neighbors of a random point. In future work, it would be interesting to explore alternative definitions for parts, e.g., using part labels if available; at the moment we only adopt the definition via spatial neighbors to avoid extra labeling requirements.
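The regularizers of Eqs. (5)-(7) and the part-sampling rule translate almost directly into code. The sketch below reuses the helpers from the previous snippet; the function names and the mean reduction over the batch are our own assumptions, with the margins set to the values reported in Sec. 4.1.

```python
import torch

def r_hier(z_whole, z_part, n_prime, gamma=1000.0):
    # Eq. (5): parts are pushed toward the ball center, wholes toward the
    # edge, with a margin gamma / N' that shrinks as the part grows.
    return torch.relu(-poincare_norm(z_whole) + poincare_norm(z_part) + gamma / n_prime).mean()

def r_contr(z_whole, z_part_pos, z_part_neg, delta=4.0):
    # Eq. (6): triplet margin on geodesic distances. A part of the same
    # cloud stays near its whole; a part from another class is pushed away.
    d_pos = poincare_dist(z_whole, z_part_pos)
    d_neg = poincare_dist(z_whole, z_part_neg)
    return torch.relu(d_pos - d_neg + delta).mean()

def sample_part(points, n_prime):
    # Sec. 3.3: a part is the N' nearest neighbors of a random seed point.
    seed = points[torch.randint(points.shape[0], (1,))]
    d2 = (points - seed).pow(2).sum(-1)
    idx = d2.topk(n_prime, largest=False).indices
    return points[idx]

# Total loss of Eq. (7), with alpha = beta = 0.01 as in Sec. 4.1:
# loss = ce(logits, labels) + 0.01 * r_contr(zw, zp, zn) + 0.01 * r_hier(zw, zp, n_prime)
```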
4 Experimental results
4.1 Experimental setting
We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,311 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). We apply our method over multiple classification architectures, namely the widely popular DGCNN and PointNet++ baselines, as well as the recent state-of-the-art PointMLP model. We substitute the standard classifier with its hyperbolic version (Möbius + softmax), as shown in Fig. 1. We use f = 256 features to be comparable to the official implementations in the Euclidean space, and we then test the model over different embedding dimensions in the ablation study. Moreover, we set α = β = 0.01, γ = 1000 and δ = 4. For the number of points of each part N′, we select a random number between 200 and 600, and for the whole object a random number between 800 and 1024, to ensure better flexibility of the learned model to part sizes. We train the models using Riemannian SGD optimization. Our implementation is in PyTorch and we use geoopt [29] for the hyperbolic operations. Models are trained on an Nvidia A6000 GPU.
4.2 Main Results
Table 1 shows the results for ModelNet40 classification. In the first part we report well-known and state-of-the-art supervised models. We retrained PointNet++, DGCNN and the state-of-the-art PointMLP as baselines, noting some documented difficulty [38] with exactly reproducing the official results.

Table 5: Performance vs. curvature of the Poincaré ball (Average Accuracy, %)
Curvature c      1      0.5    0.1    0.01
Hype-DGCNN       76.5   76.9   76.6   76.9
DGCNN+HyCoRe     80.2   79.4   78.7   78.5

In addition, the second part of the table reports the performance of methods [33], [34], [7] proposing self-supervised pretraining techniques, after supervised finetuning. Concerning PointGLR [5], the most similar method to HyCoRe, we ensure a fair comparison by using only the L2G embedding loss and not the pretext tasks of normal estimation and reconstruction. Finally, the last part of the table presents the results with HyCoRe applied to the selected baselines. We can see that the proposed method achieves substantial gains not only compared to the randomly initialized models, but also compared to the finetuned models. When applied to PointMLP, HyCoRe exceeds the state-of-the-art performance on ModelNet40. Moreover, it is interesting to notice that the embedding framework of PointGLR is not particularly effective without the pretext tasks. This is due to the unsuitability of the spherical space for embedding hierarchical information, as explained in Sec. 2, and it is indeed not far from the results we obtain with our method in Euclidean space. Table 2 reports the classification results on the ScanObjectNN dataset. Also in this case, HyCoRe significantly improves the baseline DGCNN, making it comparable with state-of-the-art methods such as SimpleView [4], PRANet [35] and MVTN [36]. In addition, PointMLP, which holds the state of the art for this dataset, is further improved by our method and reaches an impressive overall accuracy of 87.2%, substantially outperforming all the previous approaches. Although the authors in [3] claim that classification performance has reached a saturation point, we show that including novel regularizers in the training process can still lead to significant gains.
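A plausible way to wire a backbone to the hyperbolic head with geoopt and Riemannian SGD, as described above, is sketched below. ToyBackbone is a deliberately simplified stand-in for DGCNN/PointNet++/PointMLP, and the Euclidean classifier on log-mapped features is a simplification of the Möbius-softmax head; none of this is the official implementation.

```python
import torch
import geoopt

class ToyBackbone(torch.nn.Module):
    # Stand-in for DGCNN / PointNet++ / PointMLP: shared point-wise MLP + max pool.
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 128), torch.nn.ReLU(), torch.nn.Linear(128, out_dim))

    def forward(self, pts):          # pts: (B, N, 3), any N
        return self.mlp(pts).max(dim=1).values

class HyperbolicHead(torch.nn.Module):
    def __init__(self, in_dim=256, f=256, num_classes=40, c=1.0):
        super().__init__()
        self.ball = geoopt.PoincareBall(c=c)
        self.weight = torch.nn.Parameter(torch.empty(f, in_dim))
        torch.nn.init.xavier_uniform_(self.weight)
        self.bias = geoopt.ManifoldParameter(torch.zeros(f), manifold=self.ball)
        self.classifier = torch.nn.Linear(f, num_classes)  # simplified head

    def forward(self, feat):
        x = self.ball.expmap0(feat)                   # lift features to the ball
        z = self.ball.mobius_matvec(self.weight, x)   # Mobius layer, Eq. (3)
        z = self.ball.projx(self.ball.mobius_add(z, self.bias))
        logits = self.classifier(self.ball.logmap0(z))
        return z, logits

backbone, head = ToyBackbone(), HyperbolicHead()
optimizer = geoopt.optim.RiemannianSGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.1, momentum=0.9)
```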
This demonstrates that the proposed method leverages novel ideas, complementary to what is exploited by existing architectures, and is thus able to boost the performance even of state-of-the-art methods. It is also remarkable that an older, yet still popular, architecture like DGCNN is able to outperform complex and sophisticated models such as the Point Transformer when regularized by HyCoRe. In addition, to further prove that enforcing the hierarchy between parts is useful to build better clusters, we show in Fig. 4 a 2D visualization with UMAP of the hyperbolic representations for the ModelNet40 data. Colors denote classes, big points whole objects, and small points parts. Besides the clear clustering according to class labels, it is fascinating to notice the emergence of the part-whole hierarchy, with part objects closer to the center of the disk. Importantly, some parts bridge multiple classes, such as the ones in the bottom-right zoom, i.e., they are found along a geodesic connecting two class clusters, serving as common ancestors. This can happen due to the fact that some simple parts having roughly the same shape appear with multiple class labels during training, and the net effect of R_contr is to position them midway across the classes. The tree-likeness of the hyperbolic space can also be seen in the visualization in Fig. 2 (right). There we embed shapes with a gradually larger number of points, up to the whole object made of 1024 points. We can notice that the parts are moved towards the disk edge as more points are added. Furthermore, a quantitative analysis of the part-whole hierarchy is shown in Table 6. Here we calculated the hyperbolic norms of compositions of labeled parts. We can see that, as the parts are assembled with other parts, their hyperbolic norms grow, up to the whole object that is pushed close to the ball edge.
4.3 Ablation study
In the following we show an ablation study focusing on the DGCNN backbone and the ScanObjectNN dataset. The dataset selection is motivated by the fact that it is a real dataset, able to provide more stable and representative results compared to ModelNet40. We first compare HyCoRe with its Euclidean version (EuCoRe) to investigate the effectiveness of the hyperbolic space. The basic principles and losses are the same, but in EuCoRe distances and network layers are defined in the Euclidean space. Table 3 shows the results. With Hype-DGCNN we indicate the hyperbolic version of DGCNN, as represented in Fig. 1, but without any regularization, serving as a baseline to assess the individual effect of the regularizer. We also test the models over a different number of embedding dimensions. We can see that EuCoRe only provides a modest improvement, underlining the importance of the hyperbolic space. We also notice that the hyperbolic baseline struggles to be on par with its Euclidean counterpart, as observed by many recent works [25], [10]. However, when regularized with HyCoRe, we observe significant gains, even in low dimensions. This also leaves an open research question as to whether better hyperbolic baselines could be built so that HyCoRe starts from a less disadvantaged point. In Table 4 we ablate HyCoRe by removing one of the two regularizers. We can see that the combination of the two provides the overall best gain. In order to study the effect of different space curvatures c, Table 5 evaluates HyCoRe from the standard curvature 1 down to 0.01.
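Before moving on, the Table 6-style analysis of growing hyperbolic norms mentioned above can be reproduced in spirit with a few lines on top of the earlier sketches (sample_part, backbone, head and poincare_norm are the illustrative helpers defined previously; after HyCoRe training the norms should increase with part size).

```python
import torch

cloud = torch.randn(1024, 3)  # placeholder for one test point cloud
with torch.no_grad():
    for n_prime in [200, 400, 600, 800, 1024]:
        part = sample_part(cloud, n_prime)        # nested parts of growing size
        z, _ = head(backbone(part.unsqueeze(0)))  # hyperbolic embedding
        print(n_prime, poincare_norm(z).item())   # expected: increasing norms
```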
We remark that some works [25],[39] report significant improvements when c is very low (e.g., 0.001), but this is counter-intuitive since the hyperbolic space then resembles an almost flat manifold. On the contrary, we do see improved results at higher curvatures. Since HyCoRe constrains the network to learn the relations between parts and the whole object, we claim that, at the end of the training process, the model should be better able to classify coarser objects. In Figs. 5a and 5b we show the test accuracy of DGCNN on ModelNet40 when presented with a uniformly subsampled point cloud and with a small, randomly chosen, spatially-contiguous part, respectively. Indeed, we can notice that HyCoRe provides a gain of up to 20 percentage points for very sparse point clouds, and it is also able to successfully detect the object from smaller parts. For a fair comparison, we also report the baseline DGCNN with training augmented by random crops of parts. Even though the augmentation is useful to improve accuracy, HyCoRe is more effective, demonstrating the importance of compositional reasoning.
5 Conclusions
Although deep learning in the hyperbolic space is in its infancy, in this paper we showed how it can successfully capture the hierarchical nature of 3D point clouds, boosting the performance of state-of-the-art models for classification. Reasoning about the relations between objects and the parts that compose them leads not only to better results but also to more robust and explainable models. In the future, it would be interesting to explore different ways of defining parts, not based on spatial nearest neighbors but rather on more semantic constructions. One important extension is to adapt HyCoRe to segmentation. Since segmentation aims to classify single points and the corresponding parts, contrary to classification, the part embeddings should be placed on the boundary of the Poincaré ball, where there is more space to correctly cluster them, and the whole objects (made by composition of parts) near the origin. We could exploit the labels of the parts to this end, or investigate unsupervised settings where the part hierarchy emerges naturally.
Acknowledgments and Disclosure of Funding
Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http://www.hpc.polito.it). This research received no external funding.
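As a closing illustration, the robustness protocol behind Figs. 5a/5b is simple to replicate: evaluate a trained model on clouds with fewer and fewer points. The sketch below assumes the backbone and head from the earlier snippets and a standard (points, labels) test loader; the point budgets are illustrative assumptions, not the paper's exact settings.

```python
import torch

@torch.no_grad()
def accuracy_at_density(backbone, head, test_loader, n_keep):
    correct = total = 0
    for pts, labels in test_loader:                  # pts: (B, N, 3)
        idx = torch.randperm(pts.shape[1])[:n_keep]  # uniform random subsampling
        _, logits = head(backbone(pts[:, idx]))
        correct += (logits.argmax(-1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# e.g., sweep the point budget as in Fig. 5a:
# for n in [1024, 512, 256, 128, 64, 32]:
#     print(n, accuracy_at_density(backbone, head, test_loader, n))
```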
1. What is the focus and contribution of the paper regarding point cloud classification? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of the use of hyperbolic neural networks and regularization? 3. Do you have any concerns or questions regarding the definition and application of the part-whole hierarchy in the paper? 4. How does the author justify the design of the contrastive and hierarchical loss functions? 5. Can the author provide versions of DGCNN+HyCoRe without R_hier? 6. How can the visualization in Figure 4 be improved to better demonstrate the quality of the learned "part-whole hierarchy"? 7. Is there anything else that the author would like to add or clarify in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. To do this, it employs a hyperbolic neural network [10] and introduces losses to regularize the part-whole hierarchy.
Strengths And Weaknesses
Strengths:
• It seems the proposed use of a hyperbolic neural network with regularization is able to improve the accuracy of different point cloud classification backbones. The experiments demonstrated the benefit of the proposed work.
Weaknesses:
• The intuition of the proposed method looks handwavy to me. In the literature, when talking about a part-whole hierarchy, it almost always refers to a single object consisting of multiple subparts, and the subparts further dissolve into smaller parts. The parts need not be the same, and most times they are assumed to be a mixture of different parts. However, in this work, it is in an unusual opposite direction, i.e., different objects share a single "atom" part and the full objects are "grown" piece by piece in sequential order (as shown in Figure 2). Such a definition of the hierarchy looks suspicious to me, as there shouldn't be a unique sequential order / hierarchy defining how an object instance is composed in piecewise order, and it makes less sense to require that different objects correspond to a single "common part ancestor". This is in contrast to embedding WordNet (in the original hyperbolic neural network paper), where the word hierarchy is uniquely defined according to categorical relationships.
• Also, the way the authors designed the contrastive and hierarchical losses (Equation 5) is not fully justified by the authors' definition of the part-whole hierarchy. The loss in Equation 5 only enforces the embedding of the whole object to differ from the embedding of parts; in other words, it only distinguishes the last level of the hierarchy from the remaining levels, but there is no regularization to distinguish subparts between intermediate levels. Looking at the ablation in Table 4, it does seem that the hierarchical loss makes little difference relative to the baseline. Could the authors also supply a version of DGCNN+HyCoRe without R_hier?
• The visualization in Figure 4 is not that helpful either: it is hard to tell the size of each point when everything is densely packed, and it is not straightforward to see the quality of the learned "part-whole hierarchy" beyond the clustering.
Overall, I feel "hierarchy" may not be a good explanation of what the authors actually did. Probably just part-whole contrastive learning (without the hierarchical part) is more appropriate.
Questions
(1) Further discussion of the definition of the part-whole hierarchy and its difference from the object-part hierarchy usually defined in the literature.
(2) Comments on whether R_hier is sufficient to regularize a hierarchy with multiple levels, and also supply the result of DGCNN+HyCoRe without R_hier.
(3) Improve the visualization of Figure 4; give more visual examples of point cloud & embedding pairs.
See the discussions above for explanations.
Limitations
I could not find a limitations section in the draft.
NIPS
Title Rethinking the compositionality of point clouds through regularization in the hyperbolic space Abstract Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-art supervised models for point cloud classification. 1 Introduction Is the whole more than the sum of its parts? While philosophers have been debating such deep question since the time of Aristotle, we can certainly say that understanding and capturing the relationship between parts as constituents of whole complex structures is of paramount importance in building models of reality. In this paper, we turn our attention to the compositional nature of 3D objects, represented as point clouds, where simple parts can be assembled to form progressively more complex shapes. Indeed, the complex geometry of an object can be better understood by unraveling the implicit hierarchy of its parts. Such hierarchy can be intuitively captured by a tree where nodes close to the root represent basic universal shapes, which become progressively more complex as we approach the whole-object leaves. Transforming an object into another requires swapping parts by traversing the tree up to a common ancestor part. It is thus clear that a model extracting features, that claim to capture the nature of 3D objects, needs to incorporate such hierarchy. In the last years, point cloud processing methods have tried to devise methods to extract complex geometric information from points and neighborhoods. Architectures like graph neural networks [1] compose the features extracted by local receptive fields, with sophisticated geometric priors [2] exploiting locality and self-similarity, while a different school of thought argues that simple architectures, such as PointMLP [3] and SimpleView [4], with limited geometric priors are nevertherless very effective. It thus raises a question whether prior knowledge about the data is being exploited effectively. In this sense, works such as PointGLR [5], Info3D [6] and DCGLR [7] recognized the need to reason about local and global interactions in the feature extraction process. In particular, their claim is that ∗Code of the project: https://github.com/diegovalsesia/HyCoRe 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximizing the mutual information between parts and whole objects leads to understanding of local and global relations. Although these methods present compelling results for unsupervised feature extraction, they still fall short of providing significant improvements when finetuned with supervision. In our work, we argue that those methods do not fulfill their promise of capturing the part-whole relationship because they are unable to represent the tree-like nature of the compositional hierarchy. Indeed, their fundamental weakness lies in the use of spaces that are either flat (Euclidean) or with positive curvature (spherical). 
However, it is known that only spaces with negative curvature (hyperbolic) are able to embed tree structures with low distortion [8]. This is due to the fact that the volume of the Euclidean space grows only as a power of its radius rather than exponentially, limiting the representation capacity of tree-like data with an exponential number of leaves. This unique characteristic has inspired many researchers to represent hierarchical relations in many domains, from natural language processing [9],[10] to computer vision [11] ,[12]. However, the use of such principles for point clouds and 3D data is still unexplored. The main contributions of this paper lie in the following aspects: • we propose a novel regularizer to supervised training of point cloud classification models that promotes the part-whole hierarchy of compositionality in the hyperbolic space; • this regularizer can be applied to any state-of-art architecture with a simple modification of its head to perform classification with hyperbolic layers in the regularized space, coupled with Riemannian optimization [13]; • we observe a significant improvement in the performance of a number of popular architectures, including state-of-the-art techniques, surpassing the currently known best results on two different datasets; • we are the first to experimentally observe the desired part-whole hierarchy, by noticing that the geodesics in hyperbolic space between whole objects pass through common part ancestors. 2 Related work Point Cloud Analysis Point cloud data are sets of multiple points and, in recent years, several deep neural networks have been studied to process them. Early works adapted models for images through 2D projections [14], [15]. Later, PointNet [16] established new models working directly on the raw set of 3D coordinates by exploiting shared architectures invariant to points permutation. Originally, PointNet independently processed individual points through a shared MLP. To improve performance, PointNet++ [17] exploited spatial correlation by using a hierarchical feature learning paradigm. Other methods [18], [19], [20], treat point clouds as a graph and exploit operators defined over irregular sets to capture relations among points and their neighbors at different resolutions. This is the case of DGCNN [21], where the EdgeConv graph convolution operation aggregates features supported on neighborhoods as defined by a nearest neighbor graph dynamically computed in the feature space. Recently, PointMLP [3] revisits PointNet++ to include the concept of residual connections. Through this simple model, the authors show that sophisticated geometric models are not essential to obtain state-of-the-art performance. Part Compositionality Successfully capturing the semantics of 3D objects represented as point clouds requires to learn interactions between local and global information, and, in particular, the compositional nature of 3D objects as constructed from local parts. Indeed, some works have focused on capturing global-local reasoning in point cloud processing. One of the first and most representative works is PointGLR [5]. In this work, the authors map local features at different levels within the network to a common hypersphere where the global features embedding is made close to such local embeddings. This is the first approach towards modeling the similarities of parts (local features) and whole objects (global features). 
The use of a hypersphere as embedding space for similarity promotion traces its roots in metric learning works for face recognition [22]. In addition to the global-local embedding, PointGLR added two other pretext tasks, namely normal estimation and self-reconstruction, to further promote learning of highly discriminative features. Our work significantly differs from PointGLR in multiple ways: i) a positive curvature manifold such as the hypersphere is unable to accurately embed hierarchies (tree-like structures), hence our adoption of the hyperbolic space; ii) we actively promote a continuous embedding of part-whole hierarchies by penalizing the hyperbolic norm of parts proportionally to their number of points (a proxy for part complexity); iii) we move the classification head of the model to the hyperbolic space to exploit our regularized geometry. A further limitation of PointGLR is the implicit assumption of a model generating progressive hierarchies (e.g. via expanding receptive fields) in the intermediate layers. In contrast, our work can be readily adopted by any state-of-the-art model with just a replacement of the final layers. Other works revisit the global-local relations using maximization of mutual information between different views [6], clustering and contrastive learning [23], distillation with constrast [7], self-similarity and contrastive learning with hard negative samples [24]. Although most of these works include the contrastive strategy, they differ in the way they contrast the positive and negative samples and in the details of the self-supervision procedures, e.g., contrastive loss and point cloud augmentations. We also notice that most these works focus on unsupervised learning, and, while they show that the features learned in this manner are highly discriminative, they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision. These approaches differ from the one followed in this paper, where we focus on regularization of a fully supervised method, and we show improvements upon the supervised baselines that do not adopt our regularizer. Hyperbolic Learning The intuition that the hyperbolic space is crucial to embed hierarchical structures comes from the work of Sarkar [8] who proved that trees can be embedded in the hyperbolic space with arbitrarily low distortion. This inspired several works which investigated how various frameworks of representation learning can be reformulated in non-Euclidean manifolds. In particular, [9] [13] and [10] were some of the first works to explore hyperbolic representation learning by introducing Riemannian adaptive optimization, Poincarè embeddings and hyperbolic neural networks for natural language processing. The new mathematical formalism introduced by Ganea et al. [10] was decisive to demonstrate the effectiveness of hyperbolic variants of neural network layers compared to the Euclidean counterparts. Generalizations to other data, such as images [25] and graphs [26] with the corresponding hyperbolic variants of the main operations like graph convolution [26] and gyroplane convolution [12] have also been studied. In the context of unsupervised learning, new objectives in the hyperbolic space force the models to include the implicit hierarchical structure of the data leading to a better clustering in the embedding space [12], [11]. To the best of our knowledge, no work has yet focused on hyperbolic representations for point clouds. 
Indeed, 3D objects present an intrinsic hierarchy where whole objects are made by parts of different size. While the smallest parts may be shared across different object classes, the larger the parts the more class-specific they become. This consistently fits with the structure of a tree where simple fundamental parts are shared ancestors of complex objects and hence we show how the hyperbolic space can fruitfully capture this data prior. 3 Method In this section we present our proposed method, named HyCoRe (Hyperbolic Compositional Regularizer). An overview is presented in Fig. 1. At a high level, HyCoRe enhances any state-of-the-art neural network model for point cloud classification by 1) replacing its last layers with layers performing transformations in the hyperbolic space (see Sec. 3.2), and 2) regularizing the classification loss to induce a desirable configuration of the hyperbolic feature space where embeddings of parts both follow a hierarchy and cluster according to class labels. 3.1 Compositional Hierarchy in 3D Point Clouds The objective of HyCoRe is to regularize the feature space produced by a neural network so that it captures the compositional structure of the 3D point cloud at different levels. In particular, we notice that there exists a hierarchy where small parts (e.g., simple structures like disks, squares, triangles) composed of few points are universal ancestors to more complex shapes included in many different objects. As these structures are composed into more complex parts with more points, they progressively become more specific to an object or class. This hierarchy can be mathematically represented by a tree, as depicted in Fig.2 where a simple cylinder can be the ancestor of both pieces of a chair or a table. While the leaves in the tree are whole objects, thus belonging to a specific class, their ancestors are progressively more universal the higher up in the hierarchy they sit. At this point, it is important noting that the graph distance between leaves is determined by the shortest path passing through the first common ancestor for objects in the same or similar classes, while objects from significantly dissimilar classes have the shortest path passing through the root of the hierarchy. In order to ensure that we can embed this tree structure in a feature space, we need a space that preserves the geometrical properties of trees and especially the graph distance. In particular, the embedding space must be able to accommodate the exponential volume growth of a tree along its radius. A classic result by Sarkar [8] showed that flat Euclidean space does not provide this, leading to high errors when embedding trees, even in high dimensions. On the contrary, the hyperbolic space, a Riemannian manifold with negative curvature, does support exponentially increasing volumes and can embed trees with arbitrarily low distortion. Indeed, the geodesic (shortest path) between two points in this space does pass through points closer to the origin, mimicking the behavior of distance defined over a tree. In particular, we will focus on the Poincarè ball model of hyperbolic space. Since hyperbolic space is a non-Euclidean manifold, it cannot benefit from conventional vector representations and linear algebra. As a consequence, classical neural networks cannot operate in such a space. However, we will use extensions [10] of classic layers defined through the concept of gyrovector spaces. 
3.2 Hyperbolic Space and Neural Networks The hyperbolic space is a Riemannian manifold with constant negative curvature. The curvature determines the metric of a space by the following formula: gR = (λ c x) 2gE = 2 1 + c∥x∥2 gE (1) where gR is the metric tensor of a generic Riemannian manifold, λcx is the conformal factor that depends on the curvature c and on the point x on which is calculated, and gE is the metric tensor of the Euclidean space Rn, i.e., the identity tensor In. Note how the metric depends on the coordinates (through ∥x∥) for c ̸= 0, and how c = 0 yields gR = 2gE , i.e., the Euclidean space is a flat Riemannian manifold with zero curvature. Spaces with c > 0 are spherical, and with c < 0 hyperbolic. The Poincarè Ball in n dimensions Dn is a hyperbolic space with c = −1, and it is isometric to other models such as the Lorentz model. The distance and norm are defined as: dD(x,y) = cosh −1 ( 1 + 2 ∥x− y∥2 (1− ∥x∥2)(1− ∥y∥2) ) ) , ∥x∥D = 2 tanh−1 (∥x∥) (2) Since the Poincarè Ball is a Riemannian manifold, for each point x ∈ Dn we can define a logarithmic map logx : Dn → TxDn that maps points from the Poincarè Ball to the corresponding tangent space TxDn ∈ Rn, and an exponential map expx : TxDn → Dn that does the opposite. These operations [10] are fundamental to move from one space to the other and viceversa. The formalism to generalize tensor operations in the hyperbolic space is called the gyrovector space, where addition, scalar multiplication, vector-matrix multiplication and other operations are redefined as Möbius operations and work in Riemannian manifolds with curvature c. These become the basic blocks of the hyperbolic neural networks. In particular, we will use the hyperbolic feed forward (FF) layer (also known as Möbius layer). Considering the Euclidean case, for a FF layer, we need a matrix M : Rn → Rm to linearly project the input x ∈ Rn to the feature space Rm, and, additionally, a translation made by a bias addition, i.e., y+ b with y,b ∈ Rm and, finally, a pointwise non-linearity ϕ : Rm → Rm. Matrix multiplication, bias and pointwise non-linearity are replaced by Möbius operations in the gyrovector space and become: y = M⊗c(x) = 1√ c tanh ( ∥Mx∥ ∥x∥ tanh−1( √ c∥x∥) ) Mx ∥x∥ (3) z = y ⊕c b = expcy ( λc0 λcy logc0(b) ) , ϕ⊗c(z) = expcz (ϕ(log c 0(z))) (4) where M and b are the same matrix and vector defined above, c is the magnitude of the curvature. Note that when c → 0 we recover the Euclidean feed-forward layer. An interesting property of the Möbius layer is that it is highly nonlinear; indeed the bias addition in hyperbolic space becomes a nonlinear mapping since geodesics are curved paths in non-flat manifolds. 3.3 Hyperbolic Compositional Regularization Armed with the formalism introduced in the previous section, we are ready to formulate our HyCoRe framework, anticipated in Fig. 1. Consider a point cloud PN as a set of 3D points p ∈ R3 with N elements. We use any state-of-the-art point cloud processing network as a feature extraction backbone E : RN×3 → Rm to encode PN in the corresponding feature space. At this point we apply an exponential map expcx : Rm → Dm to map the Euclidean feature vector into the hyperbolic space and then a Möbius layer H : Dm → Df to project the hyperbolic vector in an f -dimensional Poincarè ball. This is the hyperbolic embedding of the whole point cloud PN , i.e., zwhole = H(exp(E(PN ))) ∈ Df . 
We repeat the same procedure for a sub-part of PN , which we call PN ′ with a number of points N ′ < N , to create the part embedding zpart = H(exp(E(PN ′))) ∈ Df in the same feature space as before. We now want to regularize the feature space to induce the previously mentioned properties, namely the part-whole hierarchy and clustering according to the class labels. This is performed by defining the following triplet regularizers: Rhier(z + whole, z + part) = max(0,−∥z+whole∥D + ∥z + part∥D + γ/N ′) (5) Rcontr(z + whole, z + part, z − part) = max ( 0, dD(z + whole, z + part)− dD(z+whole, z − part) + δ ) (6) where z+whole and z + part are the hyperbolic representation of the whole and a part from the same point cloud, while z−part is the embedding of a part of a different point cloud from a different class. The Rhier regularizer in Eq. (5) induces the compositional part-whole hierarchy by promoting part embeddings to lie closer to the center of the Poincarè ball and whole embeddings to be closer to the edge. In particular, we use a variable margin γ/N ′ that depends on the number of points N ′ of the part PN ′ . This means that shapes composed by few points (hence simple universal shapes) will be far from the whole object representation and with lower hyperbolic norm (near the centre). On the other hand, embeddings of larger parts will be progressively closer to the edge of the Poincarè ball, depend- ing on the part size. Since geodesics between two points pass closer to the ball center (Fig. 3), this structure we impose to the space allows to visit common part ancestors while traversing a geodesic between two whole objects. This regularization thus mimics a continuous version of a part-whole tree embedded in the Poincarè ball. The Rcontr regularizer in Eq. (6) promotes correct clustering of objects and parts in the hyperbolic space. In particular, parts and whole of the same point cloud are promoted to be close while a part from a different class is mapped far apart with respect to the other whole. It ensures that the parts of a point cloud of a different class are far in terms of geodesic distance. δ is a margin hyperparameter to control the degree of separation between positive and negative samples. The two regularizations are included in the final loss in this way: L = LCE + αRcontr + βRhier (7) where LCE is the conventional classification loss (e.g., cross-entropy) evaluated on the whole objects. The classification head is a hyperbolic Möbius layer followed by softmax. In principle, one could argue that LCE could already promote correct clustering according to class labels, rendering Rcontr redundant. However, several works [10] have noticed that the Möbius-softmax hyperbolic head is weaker than its Euclidean counterpart. We thus found it more effective to evaluate LCE on the whole objects only, and use Rcontr as a metric penalty that explicitly considers geodesic distances to ensure correct clustering of both parts and whole objects. At each iteration of training with HyCoRe we sample shapes with a random N ′ varying within a predefined range. A part is defined as the N ′ nearest neighbors of a random point. In future work, it would be interesting to explore alternative definitions for parts, e.g., using part labels if available but, at the moment, we only address definition via spatial neighbors to avoid extra labeling requirements. 
Method AA(%) OA(%) 4 Experimental results 4.1 Experimental setting We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,331 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). We apply our method over multiple classification architectures, namely the widely popular DGCNN and PointNet++ baselines, as well as the recent state-of-the-art PointMLP model. We substitute the standard classifier with its hyperbolic version (Möbius+softmax), as shown in Fig. 1. We use f = 256 features to be comparable to the official implementations in the Euclidean space, then we test the model over different embedding dimensions in the ablation study. Moreover, we set α = β = 0.01, γ = 1000 and δ = 4. For the number of points of each part N ′, we select a random number between 200 and 600, and for the whole object a random number between 800 and 1024 to ensure better flexibility of the learned to model to part sizes. We train the models using Riemannian SGD optimization. Our implementation is on Pytorch and we use geoopt [29] for the hyperbolic operations. Models are trained on an Nvidia A6000 GPU. 4.2 Main Results Table 1 shows the results for ModelNet40 classification. In the first part we report well-known and state-of-the-art supervised models. We retrained PointNet++, DGCNN and the state-of-theart PointMLP as baselines, noting some documented difficulty [38] with exactly reproducing the Table 5: Performance vs. curvature of the Poincarè Ball Average Accuracy (%) Curvature c 1 0.5 0.1 0.01 Hype-DGCNN 76.5 76.9 76.6 76.9 DGCNN+HyCoRe 80.2 79.4 78.7 78.5 official results. In addition, the second part of the table reports the performance of methods [33], [34], [7] proposing self-supervised pretraining techniques, after supervised finetuning. Concerning PointGLR [5], the most similar method to HyCoRe, we ensure a fair comparison by using only the L2G embedding loss and not the pretext tasks of normal estimation and reconstruction. Finally, the last part of the table presents the results with HyCoRe applied to the selected baselines. We can see that the proposed method achieves substantial gains not only compared to the randomly initialized models, but also compared to the finetuned models. When applied to the PointMLP, HyCoRe exceed the state-of-the-art performance on ModelNet40. Moreover, it is interesting to notice that the embedding framework of PointGLR is not particularly effective without the pretext tasks. This is due to the unsuitability of the spherical space to embed hierarchical information, as explained in Sec. 2, and it is indeed not far from results we obtain with our method in Euclidean space. Table 2 reports the classification results on the ScanObjectNN dataset. Also in this case, HyCoRe significantly improves the baseline DGCNN leading it to be comparable with the state-of-the-art methods such as SimpleView [4], PRANet [35] and MVTN [36]. In addition, PointMLP that holds the state of the art for this dataset, is further improved by our method and reaches an impressive overall accuracy of 87.2 %, substantially outperforming all the previous approaches. Although the authors in [3] claim that classification performance has reached a saturation point, we show that including novel regularizers in the training process can still lead to significant gains. 
These gains demonstrate that the proposed method leverages novel ideas, complementary to what is exploited by existing architectures, and is thus able to boost the performance even of state-of-the-art methods. It is also remarkable that an older, yet still popular, architecture like DGCNN is able to outperform complex and sophisticated models such as the Point Transformer when regularized by HyCoRe.

In addition, to further prove that enforcing the hierarchy between parts is useful to build better clusters, we show in Fig. 4 a 2D visualization with UMAP of the hyperbolic representations for the ModelNet40 data. Colors denote classes, large points whole objects and small points parts. Besides the clear clustering according to class labels, it is fascinating to notice the emergence of the part-whole hierarchy, with part objects closer to the center of the disk. Importantly, some parts bridge multiple classes, such as the ones in the bottom-right zoom, i.e., they are found along a geodesic connecting two class clusters, serving as common ancestors. This can happen because some simple parts that have roughly the same shape appear with multiple class labels during training, and the net effect of $R_{contr}$ is to position them midway between the classes. The tree-likeness of the hyperbolic space can also be seen in the visualization in Fig. 2 (right). There we embed shapes with a gradually larger number of points, up to the whole object made of 1024 points. We can notice that the parts are moved towards the disk edge as more points are added. Furthermore, a quantitative analysis of the part-whole hierarchy is shown in Table 6. Here we calculated the hyperbolic norms of compositions of labeled parts. We can see that, as the parts are assembled with other parts, their hyperbolic norms grow, up to the whole object, which is pushed close to the ball edge.

4.3 Ablation study
In the following we show an ablation study focusing on the DGCNN backbone and the ScanObjectNN dataset. The dataset selection is motivated by the fact that it is a real dataset, able to provide more stable and representative results than ModelNet40. We first compare HyCoRe with its Euclidean version (EuCoRe) to investigate the effectiveness of the hyperbolic space. The basic principles and losses are the same, but in EuCoRe distances and network layers are defined in the Euclidean space. Table 3 shows the results. With Hype-DGCNN we indicate the hyperbolic version of DGCNN, as represented in Fig. 1, but without any regularization, serving as a baseline to assess the individual effect of the regularizer. We also test the models over different numbers of embedding dimensions. We can see that EuCoRe only provides a modest improvement, underlining the importance of the hyperbolic space. We also notice that the hyperbolic baseline struggles to be on par with its Euclidean counterpart, as observed in many recent works [25], [10]. However, when regularized with HyCoRe, we observe significant gains, even in low dimensions. This also leaves an open research question: whether better hyperbolic baselines could be built so that HyCoRe starts from a less disadvantaged point. In Table 4 we ablate HyCoRe by removing one of the two regularizers. We can see that the combination of the two provides the overall best gain. In order to study the effect of different space curvatures $c$, Table 5 evaluates HyCoRe from the standard curvature 1 down to 0.01.
We remark that some works [25], [39] report significant improvements when $c$ is very low (e.g., 0.001), but this is counter-intuitive since the hyperbolic space then resembles an almost flat manifold. On the contrary, we do see improved results at higher curvatures. Since HyCoRe constrains the network to learn the relations between parts and whole objects, we claim that, at the end of the training process, the model should be better able to classify coarser objects. In Figs. 5a and 5b we show the test accuracy of DGCNN on ModelNet40 when presented with a uniformly subsampled point cloud and with a small, randomly chosen, spatially-contiguous part, respectively. Indeed, we can notice that HyCoRe provides a gain of up to 20 percentage points for very sparse point clouds, and is also able to successfully detect the object from smaller parts. For a fair comparison, we also report the baseline DGCNN with training augmented by random crops of parts. Even though the augmentation is useful to improve accuracy, HyCoRe is more effective, demonstrating the importance of compositional reasoning.

5 Conclusions
Although deep learning in the hyperbolic space is in its infancy, in this paper we showed how it can successfully capture the hierarchical nature of 3D point clouds, boosting the performance of state-of-the-art models for classification. Reasoning about the relations between objects and the parts that compose them leads not only to better results but also to more robust and explainable models. In the future, it would be interesting to explore different ways of defining parts, based not on spatial nearest neighbors but rather on more semantic constructions. One important extension is to adapt HyCoRe to segmentation. Since segmentation aims to classify single points and the corresponding parts, contrary to classification, the part embeddings should be placed at the boundary of the Poincaré ball, where there is more space to correctly cluster them, and the whole objects (made by composition of parts) near the origin. We could exploit the labels of the parts to this end, or investigate unsupervised settings where the part hierarchy emerges naturally.

Acknowledgments and Disclosure of Funding
Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http://www.hpc.polito.it). This research received no external funding.
1. What is the novel contribution of the paper regarding latent space representation for point clouds?
2. How does the proposed method encourage hierarchical relationships among point clouds?
3. What are the strengths and weaknesses of the proposed approach, particularly in terms of training procedures and claimed hierarchy?
4. Are there any concerns regarding contamination of the feature space or centralizing partial point clouds?
5. How would segmentation tasks benefit from utilizing partial information in the suggested way?
6. Have the authors considered using other datasets for visualization, such as human models with easily identifiable parts?
7. What are the limitations of the proposal, including preparing partial shapes and lacking a framework for shape segmentation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors present a novel representation for the latent space of point clouds to capture and take advantage of the hierarchical nature of the underlying shapes. The proposed method encourages point clouds to be projected to a hyperbolic space, where common simple structures are forced to the center of the space and complex shapes are moved to the edge. To achieve this they propose a module that can be added onto existing frameworks. The module takes the feature vector of a set of points representing a shape and projects it to a Poincaré ball. To achieve the proposed hierarchical representation, they also extract partial point clouds from the same shape and from another shape as input and use them in the regularization term. The constraints push the partial point clouds to the center, while keeping the parts of the same point cloud close and pushing back parts from other point clouds. The regularization strategy establishes a more effective latent feature space, as seen from the classification accuracies on the benchmark datasets ModelNet40 and ScanObjectNN, as well as from the visualization of the resulting space, which shows that features from different classes are kept apart.

Strengths And Weaknesses
Strengths
The representation is very novel and interesting, as it attempts to take advantage of the hyperbolic space in order to establish hierarchical relationships that start from common simple parts and extend to holistic characteristic shapes. The authors thoroughly explain the geometric background of the proposal, and define sound and practical constraints that drive data to the desired respective positions in the hierarchy. The training procedure of preparing partial shapes to establish this hierarchy is also unique.

The resulting feature space, as illustrated in Fig. 4, suggests that the constraints on the feature space have effectively mapped the latent features of even partial data to the desired locations.

The fact that the proposed module can be placed on top of existing point cloud analysis backbone networks is a huge advantage. By employing this representation, the conventional methods gain massive boosts in classification accuracy, which can be observed from Tables 1 and 2.

Weaknesses
The training procedure seems to require effort. From the text, the partial shapes have to be prepared at each epoch just to calculate the triplet loss defined in the paper. As the process is rather random, we can easily imagine the training process taking much longer than most existing frameworks. The burden of the training process seems to be excluded from the experimental section.

It is doubtful whether the claimed hierarchy is actually achieved by the proposal. The colormap in Fig. 4 seems to be well-organized near the edge of the ball, but rather random near the middle. The interpolation results in Fig. 2 of the appendix, despite the efforts to incorporate various partial shapes in the training phase, also aren't as convincing as desired. They are definitely smaller in size, but the claimed shape commonality is difficult to observe.

Despite the efforts to consider parts of shapes, it is disappointing that the authors do not include any point segmentation task, which is deeply related to local information of shapes, in the evaluation. One can easily imagine using the z_whole feature as the global feature, which can be concatenated with conventional point-wise features to conduct such a task.
Questions
I wondered if the partial point clouds would contaminate the feature space of the backbone network. Have the authors tried freezing the backbone networks so that the backbone's feature computation is not affected by the triplet loss?

I also wondered what would happen if the partial point clouds created in each epoch for the constraints were centralized. It would seem to lead to a more robust common part ancestor, but could make it difficult to establish the hierarchy between whole and partial shapes.

If segmentation were conducted in the way suggested in the section above, what would the results be? It is a shame that partial information is only utilized to map the point clouds into the hyperbolic space. All the effort to include such data does not seem to be fully exploited.

Have the authors attempted to use other sets of data for visualization? Although slightly out of context, it may have been better to use a simpler set of targets for the visualization task, such as humans in different poses. Human models have parts that can be used as the partial data, and commonality and transformation are easier to observe.

Limitations
The authors do mention the procedure of preparing the partial shapes as the main limitation of the proposal. The method currently takes a random point and collects the N′ nearest points to define one partial shape. Using segment data could improve the partial representation of the proposed feature space. Also, I believe the lack of a framework to use the obtained feature for shape segmentation is a drawback, as the authors go so far as to include parts of shapes as input to the proposed framework.
NIPS
Title Ultrahyperbolic Representation Learning

Abstract
In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space, which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the nondegenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations.

1 Introduction
In most machine learning applications, data representations lie on a smooth manifold [16] and the training procedure is optimized with an iterative algorithm such as line search or trust region methods [20]. In most cases, the smooth manifold is Riemannian, which means that it is equipped with a positive definite metric. Due to the positive definiteness of the metric, the negative of the (Riemannian) gradient is a descent direction that can be exploited to iteratively minimize some objective function [1]. The choice of metric on the Riemannian manifold determines how relations between points are quantified. The most common Riemannian manifold is the flat Euclidean space, which has constant zero curvature, where distances between points are measured along straight lines. An intuitive example of a non-Euclidean Riemannian manifold is the spherical model (i.e. representations lie on a sphere), which has constant positive curvature and is used for instance in face recognition [25, 26]. On the sphere, geodesic distances are a function of angles. Similarly, Riemannian spaces of constant negative curvature are called hyperbolic [23]. Such spaces were shown by Gromov to be well suited to represent tree-like structures [10]. The machine learning community has adopted these spaces to learn tree-like graphs [5] and hierarchical data structures [11, 18, 19], and also to compute means in tree-like shapes [6, 7]. In this paper, we consider a class of pseudo-Riemannian manifolds of constant nonzero curvature [28] not previously considered in machine learning. These manifolds not only generalize the hyperbolic and spherical geometries mentioned above, but also contain hyperbolic and spherical submanifolds and can therefore describe relationships specific to those geometries. The difference is that we consider the larger class of pseudo-Riemannian manifolds where the considered nondegenerate metric tensor need not be positive definite. Optimizing a cost function on our non-flat ultrahyperbolic space requires a descent direction method that follows a path along the curved manifold. We achieve this by employing tools from differential geometry such as geodesics and exponential maps. The theoretical contributions of this paper are two-fold: (1) explicit methods to calculate dissimilarities and (2) general optimization tools on pseudo-Riemannian manifolds of constant nonzero curvature.

2 Pseudo-Hyperboloids
Notation: We denote points on a smooth manifold $\mathcal{M}$ [16] by boldface Roman characters $x \in \mathcal{M}$.
$T_x\mathcal{M}$ is the tangent space of $\mathcal{M}$ at $x$ and we write tangent vectors $\xi \in T_x\mathcal{M}$ in boldface Greek fonts. $\mathbb{R}^d$ is the (flat) $d$-dimensional Euclidean space; it is equipped with the (positive definite) dot product denoted by $\langle \cdot, \cdot \rangle$ and defined as $\langle x, y \rangle = x^\top y$. The $\ell_2$-norm of $x$ is $\|x\| = \sqrt{\langle x, x \rangle}$. $\mathbb{R}^d_* = \mathbb{R}^d \setminus \{0\}$ is the Euclidean space with the origin removed.

Pseudo-Riemannian manifolds: A smooth manifold $\mathcal{M}$ is pseudo-Riemannian (also called semi-Riemannian [21]) if it is equipped with a pseudo-Riemannian metric tensor (named "metric" for short in differential geometry). The pseudo-Riemannian metric $g_x : T_x\mathcal{M} \times T_x\mathcal{M} \to \mathbb{R}$ at some point $x \in \mathcal{M}$ is a nondegenerate symmetric bilinear form. Nondegeneracy means that if for a given $\xi \in T_x\mathcal{M}$ and for all $\zeta \in T_x\mathcal{M}$ we have $g_x(\xi, \zeta) = 0$, then $\xi = 0$. If the metric is also positive definite (i.e. $\forall \xi \in T_x\mathcal{M}$, $g_x(\xi, \xi) > 0$ iff $\xi \neq 0$), then it is Riemannian. Riemannian geometry is a special case of pseudo-Riemannian geometry where the metric is positive definite. In general, this is not the case and non-Riemannian manifolds distinguish themselves by having some non-vanishing tangent vectors $\xi \neq 0$ that satisfy $g_x(\xi, \xi) \leq 0$. We refer the reader to [2, 21, 28] for details.

Pseudo-hyperboloids generalize spherical and hyperbolic manifolds to the class of pseudo-Riemannian manifolds. Let us note $d = p + q + 1 \in \mathbb{N}$ the dimensionality of some pseudo-Euclidean space where each vector is written $x = (x_0, x_1, \cdots, x_{q+p})^\top$. That space is denoted by $\mathbb{R}^{p,q+1}$ when it is equipped with the following scalar product (i.e. nondegenerate symmetric bilinear form [21]):

$\forall a = (a_0, \cdots, a_{q+p})^\top, b = (b_0, \cdots, b_{q+p})^\top, \quad \langle a, b \rangle_q = -\sum_{i=0}^{q} a_i b_i + \sum_{j=q+1}^{p+q} a_j b_j = a^\top G b, \quad (1)$

where $G = G^{-1} = I_{q+1,p}$ is the $d \times d$ diagonal matrix with the first $q+1$ diagonal elements equal to $-1$ and the remaining $p$ equal to $1$. Since $\mathbb{R}^{p,q+1}$ is a vector space, we can identify the tangent space to the space itself by means of the natural isomorphism $T_x\mathbb{R}^{p,q+1} \approx \mathbb{R}^{p,q+1}$. Using the terminology of special relativity, $\mathbb{R}^{p,q+1}$ has $q+1$ time dimensions and $p$ space dimensions. A pseudo-hyperboloid is the following submanifold of codimension one (i.e. hypersurface) in $\mathbb{R}^{p,q+1}$:

$\mathcal{Q}^{p,q}_\beta = \{ x = (x_0, x_1, \cdots, x_{p+q})^\top \in \mathbb{R}^{p,q+1} : \|x\|^2_q = \beta \}, \quad (2)$

where $\beta \in \mathbb{R}_*$ is a nonzero real number and the function $\|\cdot\|^2_q$ given by $\|x\|^2_q = \langle x, x \rangle_q$ is the associated quadratic form of the scalar product. It is equivalent to work with either $\mathcal{Q}^{p,q}_\beta$ or $\mathcal{Q}^{q+1,p-1}_{-\beta}$ as they are interchangeable via an anti-isometry (see supp. material). For instance, the unit $q$-sphere $\mathcal{S}^q = \{ x \in \mathbb{R}^{q+1} : \|x\| = 1 \}$ is anti-isometric to $\mathcal{Q}^{0,q}_{-1}$, which is then spherical. In the literature, the set $\mathcal{Q}^{p,q}_\beta$ is called a "pseudo-sphere" when $\beta > 0$ and a "pseudo-hyperboloid" when $\beta < 0$. In the rest of the paper, we only consider the pseudo-hyperbolic case (i.e. $\beta < 0$). Moreover, for any $\beta < 0$, $\mathcal{Q}^{p,q}_\beta$ is homothetic to $\mathcal{Q}^{p,q}_{-1}$, so the value of $\beta$ can be considered to be $-1$. We can obtain the spherical and hyperbolic geometries by constraining all the elements of the space dimensions of a pseudo-hyperboloid to be zero or constraining all the elements of the time dimensions except one to be zero, respectively. Pseudo-hyperboloids then generalize spheres and hyperboloids. The pseudo-hyperboloids that we consider in this paper are hard to visualize as they live in ambient spaces of dimension higher than 3. In Fig. 1, we show iso-surfaces of a projection of the 3-dimensional pseudo-hyperboloid $\mathcal{Q}^{2,1}_{-1}$ (embedded in $\mathbb{R}^{2,2}$) into $\mathbb{R}^3$ along its first time dimension.
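For illustration, the scalar product of Eq. (1) and the membership condition of Eq. (2) are only a few lines of PyTorch; the helper names here are ours, not part of the paper.

```python
import torch

def scalar_q(a, b, q):
    """Pseudo-Euclidean scalar product <a,b>_q of Eq. (1): the first q+1
    coordinates (time) enter with a minus sign, the remaining p (space)
    with a plus sign, i.e. a^T G b with G = I_{q+1,p}."""
    sign = torch.ones(a.shape[-1])
    sign[: q + 1] = -1.0
    return (a * sign * b).sum(-1)

def on_pseudo_hyperboloid(x, q, beta=-1.0, tol=1e-6):
    """Membership test for Q^{p,q}_beta of Eq. (2): ||x||_q^2 == beta."""
    return (scalar_q(x, x, q) - beta).abs() < tol
```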
Metric tensor and tangent space: The metric tensor at $x \in \mathcal{Q}^{p,q}_\beta$ is $g_x(\cdot, \cdot) = \langle \cdot, \cdot \rangle_q$ where $g_x : T_x\mathcal{Q}^{p,q}_\beta \times T_x\mathcal{Q}^{p,q}_\beta \to \mathbb{R}$. By using the isomorphism $T_x\mathbb{R}^{p,q+1} \approx \mathbb{R}^{p,q+1}$ mentioned above, the tangent space of $\mathcal{Q}^{p,q}_\beta$ at $x$ can be defined as $T_x\mathcal{Q}^{p,q}_\beta = \{ \xi \in \mathbb{R}^{p,q+1} : \langle x, \xi \rangle_q = 0 \}$ for all $\beta \neq 0$. Finally, the orthogonal projection of an arbitrary $d$-dimensional vector $z$ onto $T_x\mathcal{Q}^{p,q}_\beta$ is:

$\Pi_x(z) = z - \frac{\langle z, x \rangle_q}{\langle x, x \rangle_q} x. \quad (3)$

3 Measuring Dissimilarity on Pseudo-Hyperboloids
This section introduces the differential geometry tools necessary to quantify dissimilarities/distances between points on $\mathcal{Q}^{p,q}_\beta$. Measuring dissimilarity is an important task in machine learning and has many applications (e.g. in metric learning [29]).

Intrinsic geometry: The intrinsic geometry of the hypersurface $\mathcal{Q}^{p,q}_\beta$ embedded in $\mathbb{R}^{p,q+1}$ (i.e. the geometry perceived by the inhabitants of $\mathcal{Q}^{p,q}_\beta$ [21]) derives solely from its metric tensor applied to tangent vectors to $\mathcal{Q}^{p,q}_\beta$. For instance, it can be used to measure the arc length of a tangent vector joining two points along a geodesic and define their geodesic distance. Before considering geodesic distances, we consider extrinsic distances (i.e. distances in the ambient space $\mathbb{R}^{p,q+1}$). Since $\mathbb{R}^{p,q+1}$ is isomorphic to its tangent space, tangent vectors to $\mathbb{R}^{p,q+1}$ are naturally identified with points. Using the quadratic form of Eq. (1), the extrinsic distance between two points $a, b \in \mathcal{Q}^{p,q}_\beta$ is:

$d_q(a, b) = \sqrt{\left| \|a - b\|^2_q \right|} = \sqrt{\left| \|a\|^2_q + \|b\|^2_q - 2\langle a, b \rangle_q \right|} = \sqrt{\left| 2\beta - 2\langle a, b \rangle_q \right|}. \quad (4)$

This distance is a good proxy for the geodesic distance $d_\gamma(\cdot, \cdot)$, which we introduce below, if it preserves distance relations: $d_\gamma(a,b) < d_\gamma(c,d)$ iff $d_q(a,b) < d_q(c,d)$. This relation is satisfied for two special cases of pseudo-hyperboloids for which the geodesic distance is well known:

• Spherical manifold ($\mathcal{Q}^{0,q}_\beta$): If $p = 0$, the geodesic distance $d_\gamma(a,b) = \sqrt{|\beta|} \cos^{-1}\left(\frac{\langle a, b \rangle_q}{\beta}\right)$ is called the spherical distance. In practice, the cosine similarity $\frac{\langle \cdot, \cdot \rangle_q}{\beta}$ is often considered instead of $d_\gamma(\cdot, \cdot)$ since it satisfies $d_\gamma(a,b) < d_\gamma(c,d)$ iff $\langle a,b \rangle_q < \langle c,d \rangle_q$ iff $d_q(a,b) < d_q(c,d)$.

• Hyperbolic manifold (upper sheet of the two-sheet hyperboloid $\mathcal{Q}^{p,0}_\beta$): If $q = 0$, the geodesic distance $d_\gamma(a,b) = \sqrt{|\beta|} \cosh^{-1}\left(\frac{\langle a, b \rangle_q}{\beta}\right)$ with $a_0 > 0$ and $b_0 > 0$ is called the Poincaré distance [19]. The (extrinsic) Lorentzian distance was shown to be a good proxy in hyperbolic geometry [11].

For the ultrahyperbolic case (i.e. $q \geq 1$ and $p \geq 2$), the distance relations are not preserved: $d_\gamma(a,b) < d_\gamma(c,d) \not\Leftrightarrow d_q(a,b) < d_q(c,d)$. We then need to consider only geodesic distances. This section introduces closed-form expressions for geodesic distances on ultrahyperbolic manifolds.

Geodesics: Informally, a geodesic is a curve joining points on a manifold $\mathcal{M}$ that minimizes some "effort" depending on the metric. More precisely, let $I \subseteq \mathbb{R}$ be a (maximal) interval containing 0. A geodesic $\gamma : I \to \mathcal{M}$ maps a real value $t \in I$ to a point on the manifold $\mathcal{M}$. It is a curve on $\mathcal{M}$ defined by its initial point $\gamma(0) = x \in \mathcal{M}$ and initial tangent vector $\gamma'(0) = \xi \in T_x\mathcal{M}$, where $\gamma'(t)$ is the derivative of $\gamma$ at $t$. By analogy with physics, $t$ is considered as a time value. Intuitively, one can think of the curve as the trajectory over time of a ball being pushed from a point $x$ at $t = 0$ with initial velocity $\xi$ and constrained to roll on the manifold. We denote this curve explicitly by $\gamma_{x \to \xi}(t)$ unless the dependence is obvious from the context. For this curve to be a geodesic, its acceleration has to be zero: $\forall t \in I, \gamma''(t) = 0$.
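Continuing the sketch (with scalar_q as defined above), the tangent projection of Eq. (3) and the extrinsic distance of Eq. (4) translate directly; again the names are ours.

```python
import torch

def scalar_q(a, b, q):  # as in the previous sketch
    sign = torch.ones(a.shape[-1])
    sign[: q + 1] = -1.0
    return (a * sign * b).sum(-1)

def project_tangent(x, z, q):
    """Orthogonal projection of an ambient vector z onto T_x Q^{p,q}_beta, Eq. (3)."""
    return z - scalar_q(z, x, q) / scalar_q(x, x, q) * x

def extrinsic_dist(a, b, q, beta=-1.0):
    """Ambient-space distance of Eq. (4); a faithful proxy for the geodesic
    distance only in the spherical (p = 0) and hyperbolic (q = 0) cases."""
    return (2 * beta - 2 * scalar_q(a, b, q)).abs().sqrt()
```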
This zero-acceleration condition is a second-order ordinary differential equation that has a unique solution for a given set of initial conditions [17]. The interval $I$ is said to be maximal if it cannot be extended to a larger interval. In the case of $\mathcal{Q}^{p,q}_\beta$, we have $I = \mathbb{R}$ and $I$ is then maximal.

Geodesic of $\mathcal{Q}^{p,q}_\beta$: As we show in the supp. material, the geodesics of $\mathcal{Q}^{p,q}_\beta$ are a combination of the hyperbolic, flat and spherical cases. The nature of the geodesic $\gamma_{x \to \xi}$ depends on the sign of $\langle \xi, \xi \rangle_q$. For all $t \in \mathbb{R}$, the geodesic $\gamma_{x \to \xi}$ of $\mathcal{Q}^{p,q}_\beta$ with $\beta < 0$ is written:

$\gamma_{x \to \xi}(t) = \begin{cases} \cosh\left(\frac{t\sqrt{|\langle \xi, \xi \rangle_q|}}{\sqrt{|\beta|}}\right) x + \frac{\sqrt{|\beta|}}{\sqrt{|\langle \xi, \xi \rangle_q|}} \sinh\left(\frac{t\sqrt{|\langle \xi, \xi \rangle_q|}}{\sqrt{|\beta|}}\right) \xi & \text{if } \langle \xi, \xi \rangle_q > 0 \\ x + t\xi & \text{if } \langle \xi, \xi \rangle_q = 0 \\ \cos\left(\frac{t\sqrt{|\langle \xi, \xi \rangle_q|}}{\sqrt{|\beta|}}\right) x + \frac{\sqrt{|\beta|}}{\sqrt{|\langle \xi, \xi \rangle_q|}} \sin\left(\frac{t\sqrt{|\langle \xi, \xi \rangle_q|}}{\sqrt{|\beta|}}\right) \xi & \text{if } \langle \xi, \xi \rangle_q < 0 \end{cases} \quad (5)$

We recall that $\langle \xi, \xi \rangle_q = 0$ does not imply $\xi = 0$. The geodesics are an essential ingredient to define a mapping known as the exponential map. See Fig. 2 (left) for a depiction of these three types of geodesics, and Fig. 2 (right) for a depiction of the other quantities introduced in this section.

Exponential map: Exponential maps are a way of collecting all of the geodesics of a pseudo-Riemannian manifold $\mathcal{M}$ into a unique differentiable mapping. Let $D_x \subseteq T_x\mathcal{M}$ be the set of tangent vectors $\xi$ such that $\gamma_{x \to \xi}$ is defined at least on the interval $[0, 1]$. This allows us to uniquely define the exponential map $\exp_x : D_x \to \mathcal{M}$ such that $\exp_x(\xi) = \gamma_{x \to \xi}(1)$. The manifold $\mathcal{Q}^{p,q}_\beta$ is geodesically complete; the domain of its exponential map is then $D_x = T_x\mathcal{Q}^{p,q}_\beta$. Using Eq. (5) with $t = 1$, we obtain an exponential map of the entire tangent space to the manifold:

$\forall \xi \in T_x\mathcal{Q}^{p,q}_\beta, \quad \exp_x(\xi) = \gamma_{x \to \xi}(1). \quad (6)$

We make the important observation that the image of the exponential map does not necessarily cover the entire manifold: not all points on a manifold are connected by a geodesic. This is the case for our pseudo-hyperboloids. Namely, for a given point $x \in \mathcal{Q}^{p,q}_\beta$ there exist points $y$ that are not in the image of the exponential map (i.e. there does not exist a tangent vector $\xi$ such that $y = \exp_x(\xi)$).

Logarithm map: We provide a closed-form expression of the logarithm map for pseudo-hyperboloids. Let $U_x \subseteq \mathcal{Q}^{p,q}_\beta$ be some neighborhood of $x$. The logarithm map $\log_x : U_x \to T_x\mathcal{Q}^{p,q}_\beta$ is defined as the inverse of the exponential map on $U_x$ (i.e. $\log_x = \exp_x^{-1}$). We propose:

$\forall y \in U_x, \quad \log_x(y) = \begin{cases} \frac{\cosh^{-1}\left(\frac{\langle x, y \rangle_q}{\beta}\right)}{\sqrt{\left(\frac{\langle x, y \rangle_q}{\beta}\right)^2 - 1}} \left( y - \frac{\langle x, y \rangle_q}{\beta} x \right) & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} < -1 \\ y - x & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} = -1 \\ \frac{\cos^{-1}\left(\frac{\langle x, y \rangle_q}{\beta}\right)}{\sqrt{1 - \left(\frac{\langle x, y \rangle_q}{\beta}\right)^2}} \left( y - \frac{\langle x, y \rangle_q}{\beta} x \right) & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} \in (-1, 1) \end{cases} \quad (7)$

By substituting $\xi = \log_x(y)$ into Eq. (6), one can verify that our formulas are the inverse of the exponential map. The set $U_x = \{ y \in \mathcal{Q}^{p,q}_\beta : \langle x, y \rangle_q < |\beta| \}$ is called a normal neighborhood of $x \in \mathcal{Q}^{p,q}_\beta$ since for all $y \in U_x$, there exists a geodesic from $x$ to $y$ such that $\log_x(y) = \gamma'_{x \to \log_x(y)}(0)$. We show in the supp. material that the logarithm map is not defined if $\langle x, y \rangle_q \geq |\beta|$.

Proposed dissimilarity: We define our dissimilarity function based on the general notions of arc length and radius function on pseudo-Riemannian manifolds, which we recall in the next paragraph (see details in Chapter 5 of [21]). This corresponds to the geodesic distance in the Riemannian case. Let $U_x$ be a normal neighborhood of $x \in \mathcal{M}$ with $\mathcal{M}$ pseudo-Riemannian. The radius function $r_x : U_x \to \mathbb{R}$ is defined as $r_x(y) = \sqrt{|g_x(\log_x(y), \log_x(y))|}$ where $g_x$ is the metric at $x$. If $\sigma_{x \to \xi}$ is the radial geodesic from $x$ to $y \in U_x$ (i.e. $\xi = \log_x(y)$), then the arc length of $\sigma_{x \to \xi}$ equals $r_x(y)$.
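Below is a single-vector sketch of the exponential map (Eqs. 5-6), logarithm map (Eq. 7) and radius function, with no batching and no numerical safeguards; it assumes 1-D tensors and PyTorch >= 1.7 (for torch.acosh), and the function names are ours.

```python
import torch

def scalar_q(a, b, q):  # as in the previous sketches
    sign = torch.ones(a.shape[-1])
    sign[: q + 1] = -1.0
    return (a * sign * b).sum(-1)

def expmap(x, xi, q, beta=-1.0):
    """exp_x(xi) = gamma_{x->xi}(1) on Q^{p,q}_beta, Eqs. (5)-(6)."""
    s = scalar_q(xi, xi, q)                    # sign selects the branch
    if torch.isclose(s, torch.zeros(())):
        return x + xi                          # null tangent vector: flat case
    r = s.abs().sqrt() / abs(beta) ** 0.5
    if s > 0:
        return torch.cosh(r) * x + torch.sinh(r) / r * xi
    return torch.cos(r) * x + torch.sin(r) / r * xi

def logmap(x, y, q, beta=-1.0):
    """log_x(y) on Q^{p,q}_beta, Eq. (7); defined only for <x,y>_q < |beta|."""
    a = scalar_q(x, y, q) / beta               # note beta < 0
    u = y - a * x                              # component "orthogonal" to x
    if a > 1:                                  # i.e. <x,y>_q / |beta| < -1
        return torch.acosh(a) / (a * a - 1).sqrt() * u
    if torch.isclose(a, torch.ones(())):
        return y - x
    return torch.acos(a) / (1 - a * a).sqrt() * u  # a in (-1, 1)

def radius(x, y, q):
    """Radius function r_x(y) = sqrt(|<log_x(y), log_x(y)>_q|), i.e. the
    arc length of the radial geodesic (this is Eq. (8) below)."""
    xi = logmap(x, y, q)
    return scalar_q(xi, xi, q).abs().sqrt()
```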
Based on this radius function, we then define the geodesic "distance" between $x \in \mathcal{Q}^{p,q}_\beta$ and $y \in U_x$ as the arc length of $\sigma_{x \to \log_x(y)}$:

$d_\gamma(x, y) = \sqrt{\left| \|\log_x(y)\|^2_q \right|} = \begin{cases} \sqrt{|\beta|} \cosh^{-1}\left(\frac{\langle x, y \rangle_q}{\beta}\right) & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} < -1 \\ 0 & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} = -1 \\ \sqrt{|\beta|} \cos^{-1}\left(\frac{\langle x, y \rangle_q}{\beta}\right) & \text{if } \frac{\langle x, y \rangle_q}{|\beta|} \in (-1, 1) \end{cases} \quad (8)$

It is important to note that our "distance" is not a distance metric. However, it satisfies the axioms of a symmetric premetric: (i) $d_\gamma(x,y) = d_\gamma(y,x) \geq 0$ and (ii) $d_\gamma(x,x) = 0$. These conditions are sufficient to quantify the notion of nearness via a $\rho$-ball centered at $x$: $B^\rho_x = \{ y : d_\gamma(x,y) < \rho \}$. In general, topological spaces provide a qualitative (not necessarily quantitative) way to detect "nearness" through the concept of a neighborhood at a point [15]. Something is true "near $x$" if it is true in the neighborhood of $x$ (e.g. in $B^\rho_x$). Our premetric is similar to metric learning methods [13, 14, 29] that learn a Mahalanobis-like distance pseudo-metric parameterized by a positive semi-definite matrix. Pairs of distinct points can have zero "distance" if the matrix is not positive definite. However, unlike classic metric learning, we can have triplets $(x, y, z)$ that satisfy $d_\gamma(x,y) = d_\gamma(x,z) = 0$ but $d_\gamma(y,z) > 0$ (e.g. $x = (1,0,0,0)^\top$, $y = (1,1,1,0)^\top$, $z = (1,1,0,1)^\top$ in $\mathcal{Q}^{2,1}_{-1}$). Since the logarithm map is not defined if $\langle x, y \rangle_q \geq |\beta|$, we propose to use the following continuous approximation defined on the whole manifold instead:

$\forall x \in \mathcal{Q}^{p,q}_\beta, y \in \mathcal{Q}^{p,q}_\beta, \quad D_\gamma(x, y) = \begin{cases} d_\gamma(x, y) & \text{if } \langle x, y \rangle_q \leq 0 \\ \sqrt{|\beta|} \left( \frac{\pi}{2} + \frac{\langle x, y \rangle_q}{|\beta|} \right) & \text{otherwise} \end{cases} \quad (9)$

To the best of our knowledge, the explicit formulation of the logarithm map for $\mathcal{Q}^{p,q}_\beta$ in Eq. (7) and its corresponding radius function in Eq. (8) to define a dissimilarity function are novel. We have also proposed a linear approximation to evaluate dissimilarity when the logarithm map is not defined, but other choices are possible. For instance, when a geodesic does not exist, a standard way in differential geometry to calculate curves is to consider broken geodesics. One might consider instead the dissimilarity $d_\gamma(x,-x) + d_\gamma(-x,y) = \pi\sqrt{|\beta|} + d_\gamma(-x,y)$ if $\log_x(y)$ is not defined, since $-x \in \mathcal{Q}^{p,q}_\beta$ and $\log_{-x}(y)$ is defined. This interesting problem is left for future research.

4 Ultrahyperbolic Optimization
In this section we present optimization frameworks to optimize any differentiable function defined on $\mathcal{Q}^{p,q}_\beta$. Our goal is to compute descent directions on the ultrahyperbolic manifold. We consider two approaches. In the first approach, we map our representation from Euclidean space to ultrahyperbolic space. This is similar to the approach taken by [11] in hyperbolic space. In the second approach, we optimize using gradients defined directly in the pseudo-Riemannian tangent space. We propose a novel descent direction which guarantees the minimization of some cost function.

4.1 Euclidean optimization via a differentiable mapping onto $\mathcal{Q}^{p,q}_\beta$
Our first method maps Euclidean representations that lie in $\mathbb{R}^d$ to the pseudo-hyperboloid $\mathcal{Q}^{p,q}_\beta$, and the chain rule is exploited to perform standard gradient descent. To this end, we construct a differentiable mapping $\varphi : \mathbb{R}^{q+1}_* \times \mathbb{R}^p \to \mathcal{Q}^{p,q}_\beta$. The image of a point already on $\mathcal{Q}^{p,q}_\beta$ under the mapping $\varphi$ is itself: $\forall x \in \mathcal{Q}^{p,q}_\beta, \varphi(x) = x$. Let $\mathcal{S}^q = \{ x \in \mathbb{R}^{q+1} : \|x\| = 1 \}$ denote the unit $q$-sphere. We first introduce the following diffeomorphisms:

Theorem 4.1 (Diffeomorphisms). For any $\beta < 0$, there is a diffeomorphism $\psi : \mathcal{Q}^{p,q}_\beta \to \mathcal{S}^q \times \mathbb{R}^p$. Let us note $x = \binom{t}{s} \in \mathcal{Q}^{p,q}_\beta$ with $t \in \mathbb{R}^{q+1}_*$ and $s \in \mathbb{R}^p$, and let us note $z = \binom{u}{v} \in \mathcal{S}^q \times \mathbb{R}^p$ where $u \in \mathcal{S}^q$ and $v \in \mathbb{R}^p$.
The mapping $\psi$ and its inverse $\psi^{-1}$ are formulated as (see proofs in supp. material):

$\psi(x) = \begin{pmatrix} \frac{1}{\|t\|} t \\ \frac{1}{\sqrt{|\beta|}} s \end{pmatrix} \quad \text{and} \quad \psi^{-1}(z) = \sqrt{|\beta|} \begin{pmatrix} \sqrt{1 + \|v\|^2}\, u \\ v \end{pmatrix}. \quad (10)$

With these mappings, any vector $x \in \mathbb{R}^{q+1}_* \times \mathbb{R}^p$ can be mapped to $\mathcal{Q}^{p,q}_\beta$ via $\varphi = \psi^{-1} \circ \psi$. $\varphi$ is differentiable everywhere except when $x_0 = \cdots = x_q = 0$, which should never occur in practice. It can therefore be optimized using standard gradient methods.

4.2 Pseudo-Riemannian optimization
We now introduce a novel method to optimize any differentiable function $f : \mathcal{Q}^{p,q}_\beta \to \mathbb{R}$ defined on the pseudo-hyperboloid. As we show below, the (negative of the) pseudo-Riemannian gradient is not a descent direction. We propose a simple and efficient way to calculate a descent direction.

Pseudo-Riemannian gradient: Since $x \in \mathcal{Q}^{p,q}_\beta$ also lies in the Euclidean ambient space $\mathbb{R}^d$, the function $f$ has a well defined Euclidean gradient $\nabla f(x) = (\partial f(x)/\partial x_0, \cdots, \partial f(x)/\partial x_{p+q})^\top \in \mathbb{R}^d$. The gradient of $f$ in the pseudo-Euclidean ambient space $\mathbb{R}^{p,q+1}$ is $G^{-1}\nabla f(x) = G\nabla f(x) \in \mathbb{R}^{p,q+1}$. Since $\mathcal{Q}^{p,q}_\beta$ is a submanifold of $\mathbb{R}^{p,q+1}$, the pseudo-Riemannian gradient $Df(x) \in T_x\mathcal{Q}^{p,q}_\beta$ of $f$ on $\mathcal{Q}^{p,q}_\beta$ is the orthogonal projection of $G\nabla f(x)$ onto $T_x\mathcal{Q}^{p,q}_\beta$ (see Chapter 4 of [21]):

$Df(x) = \Pi_x(G\nabla f(x)) = G\nabla f(x) - \frac{\langle G\nabla f(x), x \rangle_q}{\langle x, x \rangle_q} x = G\nabla f(x) - \frac{\langle \nabla f(x), x \rangle}{\langle x, x \rangle_q} x. \quad (11)$

This gradient forms the foundation of our descent method optimizer, as will be shown in Eq. (13).

Iterative optimization: Our goal is to iteratively decrease the value of the function $f$ by following some descent direction. Since $\mathcal{Q}^{p,q}_\beta$ is not a vector space, we do not "follow the descent direction" by adding the descent direction multiplied by a step size, as this would result in a new point that does not necessarily lie on $\mathcal{Q}^{p,q}_\beta$. Instead, to remain on the manifold, we use our exponential map defined in Eq. (6). This is a standard way to optimize on Riemannian manifolds [1]. Given a step size $t > 0$, one step of descent along a tangent vector $\zeta \in T_x\mathcal{Q}^{p,q}_\beta$ is given by:

$y = \exp_x(t\zeta) \in \mathcal{Q}^{p,q}_\beta. \quad (12)$

Descent direction: We now explain why the negative of the pseudo-Riemannian gradient is not a descent direction. Our explanation extends Chapter 3 of [20], which gives the criteria for a tangent vector $\zeta$ to be a descent direction when the domain of the optimized function is a Euclidean space. By using the properties described in Section 3, we know that for all $t \in \mathbb{R}$ and all $\xi \in T_x\mathcal{Q}^{p,q}_\beta$, we have the equalities $\exp_x(t\xi) = \gamma_{x \to t\xi}(1) = \gamma_{x \to \xi}(t)$, so we can equivalently fix $t$ to 1 and choose the scale of $\xi$ appropriately. By exploiting Taylor's first-order approximation, there exists some small enough tangent vector $\zeta \neq 0$ (i.e. with $\exp_x(\zeta)$ belonging to a convex neighborhood of $x$ [4, 8]) that satisfies the following conditions: $\gamma_{x \to \zeta}(0) = x \in \mathcal{Q}^{p,q}_\beta$, $\gamma'_{x \to \zeta}(0) = \zeta \in T_x\mathcal{Q}^{p,q}_\beta$, $\gamma_{x \to \zeta}(1) = y \in \mathcal{Q}^{p,q}_\beta$, and the function $f \circ \gamma_{x \to \zeta} : \mathbb{R} \to \mathbb{R}$ can be approximated at $t = 1$ by:

$f(y) = f \circ \gamma_{x \to \zeta}(1) \simeq f \circ \gamma_{x \to \zeta}(0) + (f \circ \gamma_{x \to \zeta})'(0) = f(x) + \langle Df(x), \zeta \rangle_q, \quad (13)$

where we use the following properties: $\forall t, (f \circ \gamma)'(t) = df(\gamma'(t)) = g_{\gamma(t)}(Df(\gamma(t)), \gamma'(t))$ (see details on pages 11, 15 and 85 of [21]), $df$ is the differential of $f$ and $\gamma$ is a geodesic. To be a descent direction at $x$ (i.e. so that $f(y) < f(x)$), the search direction $\zeta$ has to satisfy $\langle Df(x), \zeta \rangle_q < 0$. However, choosing $\zeta = -\eta Df(x)$, where $\eta > 0$ is a step size, might increase the function value if the scalar product $\langle \cdot, \cdot \rangle_q$ is not positive definite. If $p + q \geq 1$, then $\langle \cdot, \cdot \rangle_q$ is positive definite only if $q = 0$ (see details in supp. material), and it is negative definite iff $p = 0$ since $\langle \cdot, \cdot \rangle_q = -\langle \cdot, \cdot \rangle$ in this case.
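Eq. (11) in code, together with the failure mode just discussed; the helpers are the ones from the earlier sketches and the names are ours.

```python
import torch

def scalar_q(a, b, q):  # as in the previous sketches
    sign = torch.ones(a.shape[-1])
    sign[: q + 1] = -1.0
    return (a * sign * b).sum(-1)

def pseudo_grad(x, egrad, q):
    """Pseudo-Riemannian gradient Df(x) of Eq. (11): multiply the Euclidean
    gradient by G (flip the time components), then project onto T_x Q
    with the Eq. (3) projection."""
    g = egrad.clone()
    g[: q + 1] = -g[: q + 1]                                 # G @ egrad
    return g - scalar_q(g, x, q) / scalar_q(x, x, q) * x     # Pi_x(G egrad)

# Note: <Df(x), Df(x)>_q can be negative or zero even when Df(x) != 0,
# so -Df(x) alone does not guarantee descent; the direction chi of
# Eq. (14) / Algorithm 1 below fixes this.
```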
A simple solution would be to choose $\zeta = \pm\eta Df(x)$ depending on the sign of $\langle Df(x), \zeta \rangle_q$, but $\langle Df(x), \zeta \rangle_q$ might be equal to 0 even if $Df(x) \neq 0$ when $\langle \cdot, \cdot \rangle_q$ is indefinite. The optimization algorithm might then be stuck on a level set of $f$, which is problematic.

Algorithm 1: Pseudo-Riemannian optimization on $\mathcal{Q}^{p,q}_\beta$
input: differentiable function $f : \mathcal{Q}^{p,q}_\beta \to \mathbb{R}$ to be minimized, some initial value of $x \in \mathcal{Q}^{p,q}_\beta$
1: while not converged do
2:   Calculate $\nabla f(x)$  ▷ i.e. the Euclidean gradient of $f$ at $x$ in the Euclidean ambient space
3:   $\chi \leftarrow \Pi_x(G\,\Pi_x(G\nabla f(x)))$  ▷ see Eq. (14)
4:   $x \leftarrow \exp_x(-\eta\chi)$  ▷ where $\eta > 0$ is a step size (e.g. determined with line search)
5: end while

Proposed solution: To ensure that $\zeta \in T_x\mathcal{Q}^{p,q}_\beta$ is a descent direction, we propose a simple expression that satisfies $\langle Df(x), \zeta \rangle_q < 0$ if $Df(x) \neq 0$ and $\langle Df(x), \zeta \rangle_q = 0$ otherwise. We propose to formulate $\zeta = -\eta\Pi_x(GDf(x)) \in T_x\mathcal{Q}^{p,q}_\beta$, and we define the following tangent vector $\chi = -\frac{1}{\eta}\zeta$:

$\chi = \Pi_x(GDf(x)) = \nabla f(x) - \frac{\langle \nabla f(x), x \rangle}{\langle x, x \rangle_q} Gx - \frac{\langle \nabla f(x), x \rangle_q}{\langle x, x \rangle_q} x + \frac{\|x\|^2 \langle \nabla f(x), x \rangle}{\langle x, x \rangle^2_q} x. \quad (14)$

The tangent vector $\zeta$ is a descent direction because $\langle Df(x), \zeta \rangle_q = -\eta\langle Df(x), \chi \rangle_q$ is nonpositive:

$\langle Df(x), \chi \rangle_q = \|\nabla f(x)\|^2 - 2\frac{\langle \nabla f(x), x \rangle \langle \nabla f(x), x \rangle_q}{\langle x, x \rangle_q} + \frac{\langle \nabla f(x), x \rangle^2 \|x\|^2}{\langle x, x \rangle^2_q} \quad (15)$

$= \left\| G\nabla f(x) - \frac{\langle \nabla f(x), x \rangle}{\langle x, x \rangle_q} x \right\|^2 = \|Df(x)\|^2 \geq 0. \quad (16)$

We also have $\langle Df(x), \chi \rangle_q = \|Df(x)\|^2 = 0$ iff $Df(x) = 0$ (i.e. $x$ is a stationary point). It is worth noting that $Df(x) = 0$ implies $\chi = \Pi_x(G0) = 0$. Moreover, $\chi = 0$ implies that $\|Df(x)\|^2 = \langle Df(x), 0 \rangle_q = 0$. We then have $\chi = 0$ iff $Df(x) = 0$. Our proposed algorithm for the minimization problem $\min_{x \in \mathcal{Q}^{p,q}_\beta} f(x)$ is illustrated in Algorithm 1. Following generic Riemannian optimization algorithms [1], at each iteration it first computes the descent direction $-\chi \in T_x\mathcal{Q}^{p,q}_\beta$, then decreases the function by applying the exponential map defined in Eq. (6). It is worth noting that our proposed descent method can be applied to any differentiable function $f : \mathcal{Q}^{p,q}_\beta \to \mathbb{R}$, not only to those that exploit the distance introduced in Section 3. Interestingly, our method can also be seen as a preconditioning technique [20] where the descent direction is obtained by preconditioning the pseudo-Riemannian gradient $Df(x)$ with the matrix $P_x = \left[ G - \frac{1}{\langle x, x \rangle_q} xx^\top \right] \in \mathbb{R}^{d \times d}$. In other words, we have $\chi = P_x Df(x) = \Pi_x(GDf(x))$. In the more general setting of pseudo-Riemannian manifolds, another preconditioning technique was proposed in [8]. The method in [8] requires performing a Gram-Schmidt process at each iteration to obtain an (ordered [28]) orthonormal basis of the tangent space at $x$ w.r.t. the induced quadratic form of the manifold. However, the Gram-Schmidt process is unstable and has algorithmic complexity that is cubic in the dimensionality of the tangent space. On the other hand, our method is more stable and its algorithmic complexity is linear in the dimensionality of the tangent space.

5 Experiments
We now experimentally validate our proposed optimization methods and the effectiveness of our dissimilarity function. Our main experimental results can be summarized as follows:
• Both optimizers introduced in Section 4 decrease some objective function $f : \mathcal{Q}^{p,q}_\beta \to \mathbb{R}$. While both optimizers manage to learn high-dimensional representations that satisfy the problem-dependent training constraints, only the pseudo-Riemannian optimizer satisfies all the constraints in lower-dimensional spaces. This is because it exploits the underlying metric of the manifold.
• Hyperbolic representations are popular in machine learning as they are well suited to represent hierarchical trees [10, 18, 19].
On the other hand, hierarchical datasets whose graph contains cycles cannot be represented using trees. Therefore, we propose to represent such graphs using our ultrahyperbolic representations. An important example are community graphs such as Zachary's karate club [30] that contain leaders. Because our ultrahyperbolic representations are more flexible than hyperbolic representations, we believe that our representations are better suited for these non-tree-like hierarchical structures.

Graph: Our ultrahyperbolic representations describe graph-structured datasets. Each dataset is an undirected weighted graph $G = (V, E)$ with node set $V = \{v_i\}_{i=1}^n$ and edge set $E = \{e_k\}_{k=1}^m$. Each edge $e_k$ is weighted by an arbitrary capacity $c_k \in \mathbb{R}_+$ that models the strength of the relationship between nodes. The higher the capacity $c_k$, the stronger the relationship between the nodes connected by $e_k$.

Learned representations: Our problem formulation is inspired by hyperbolic representation learning approaches [18, 19] where the nodes of a tree (i.e. a graph without cycles) are represented in hyperbolic space. The hierarchical structure of the tree is then reflected by the order of distances between its nodes. More precisely, a node representation is learned so that each node is closer to its descendants and ancestors in the tree (w.r.t. the hyperbolic distance) than to any other node. For example, in a hierarchy of words, ancestors and descendants are hypernyms and hyponyms, respectively. Our goal is to learn a set of $n$ points $x_1, \cdots, x_n \in \mathcal{Q}^{p,q}_\beta$ (embeddings) from a given graph $G$. The presence of cycles in the graph makes it difficult to determine ancestors and descendants. For this reason, we introduce for each pair of nodes $(v_i, v_j) = e_k \in E$ the set of "weaker" pairs that have lower capacity: $\mathcal{W}(e_k) = \{e_l : c_k > c_l\} \cup \{(v_a, v_b) : (v_a, v_b) \notin E\}$. Our goal is to learn representations such that pairs $(v_i, v_j)$ with higher capacity have their representations $(x_i, x_j)$ closer to each other than weaker pairs. Following [18], we formulate our problem as:

$\min_{x_1, \cdots, x_n \in \mathcal{Q}^{p,q}_\beta} \sum_{(v_i, v_j) = e_k \in E} -\log \frac{\exp(-d(x_i, x_j)/\tau)}{\sum_{(v_a, v_b) \in \mathcal{W}(e_k) \cup \{e_k\}} \exp(-d(x_a, x_b)/\tau)} \quad (17)$

where $d$ is the chosen dissimilarity function (e.g. $D_\gamma(\cdot, \cdot)$ defined in Eq. (9)) and $\tau > 0$ is a fixed temperature parameter. The formulation of Eq. (17) is classic in the metric learning literature [3, 12, 27] and corresponds to optimizing some order on the learned distances via a softmax function.

Implementation details: We coded our approach in PyTorch [22], which automatically calculates the Euclidean gradient $\nabla f(x_i)$. Initially, a random set of vectors $\{z_i\}_{i=1}^n$ is generated close to the positive pole $(\sqrt{|\beta|}, 0, \cdots, 0) \in \mathcal{Q}^{p,q}_\beta$, with every coordinate perturbed uniformly by a random value in the interval $[-\epsilon, \epsilon]$, where $\epsilon > 0$ is chosen small enough so that $\|z_i\|^2_q < 0$. We set $\beta = -1$, $\epsilon = 0.1$ and $\tau = 10^{-2}$. Initial embeddings are generated as follows: $\forall i, x_i = \sqrt{|\beta|}\, \frac{z_i}{\sqrt{|\|z_i\|^2_q|}} \in \mathcal{Q}^{p,q}_\beta$.

Zachary's karate club dataset [30] is a social network graph of a karate club comprised of $n = 34$ nodes, each representing a member of the karate club. The club was split due to a conflict between instructor "Mr. Hi" (node $v_1$) and administrator "John A" (node $v_n$). The remaining members now have to decide whether to join the new club created by $v_1$ or not. In [30], Zachary defines a matrix of relative strengths of the friendships in the karate club, called $C \in \{0, 1, \cdots, 7\}^{n \times n}$, that depends on various criteria.
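The initialization and the per-edge loss of Eq. (17) described above can be sketched as follows (single-edge version; the shapes and helper names are our assumptions, not the authors' code).

```python
import torch

def scalar_q(a, b, q):  # as in the earlier sketches, batched over rows
    sign = torch.ones(a.shape[-1])
    sign[: q + 1] = -1.0
    return (a * sign * b).sum(-1)

def initial_embeddings(n, p, q, beta=-1.0, eps=0.1):
    """Perturb the positive pole (sqrt|beta|, 0, ..., 0) uniformly in
    [-eps, eps], then rescale each row onto Q^{p,q}_beta."""
    d = p + q + 1
    z = torch.zeros(n, d)
    z[:, 0] = abs(beta) ** 0.5
    z = z + torch.empty(n, d).uniform_(-eps, eps)
    s = scalar_q(z, z, q)                 # must be < 0 for small enough eps
    return abs(beta) ** 0.5 * z / s.abs().sqrt().unsqueeze(-1)

def loss_eq17(d_pos, d_weak, tau=1e-2):
    """Eq. (17) for one edge e_k: d_pos is d(x_i, x_j) for the edge itself,
    d_weak the dissimilarities of the weaker pairs W(e_k)."""
    logits = -torch.cat([d_pos.view(1), d_weak]) / tau
    return -torch.log_softmax(logits, dim=0)[0]
```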
We note that the matrix $C$ is not symmetric and has 7 different pairs $(v_i, v_j)$ for which $C_{ij} \neq C_{ji}$. Since our dissimilarity function is symmetric, we consider the symmetric matrix $S = C + C^\top$ instead. The value of $S_{ij}$ is the capacity/weight assigned to the edge joining $v_i$ and $v_j$, and there is no edge between $v_i$ and $v_j$ if $S_{ij} = 0$. Fig. 3 (left) illustrates the 34 nodes of the dataset; an edge joining the nodes $v_i$ and $v_j$ is drawn iff $S_{ij} \neq 0$. The level of a node in the hierarchy corresponds approximately to its height in the figure.

Optimizers: We validate that our optimizers introduced in Section 4 decrease the cost function. First, we consider the simple unweighted case where every edge weight is 1. For each edge $e_k \in E$, $\mathcal{W}(e_k)$ is then the set of pairs of nodes that are not connected. In other words, Eq. (17) learns node representations with the property that every connected pair of nodes has a smaller distance than non-connected pairs. We use this condition as a stopping criterion for our algorithm. Fig. 3 (right) illustrates the loss values of Eq. (17) as a function of the number of iterations with the Euclidean gradient descent (Section 4.1) and our pseudo-Riemannian optimizer (introduced in Section 4.2). In each test, we vary the number of time dimensions $q + 1$ while the ambient space is of fixed dimensionality $d = p + q + 1 = 10$. We omit the case $q = 0$ since it corresponds to the (hyperbolic) Riemannian case already considered in [11, 19]. Both optimizers decrease the function and manage to satisfy all the expected distance relations. We note that when we use $-Df(x)$ instead of $-\chi$ as a search direction, the algorithm does not converge. Moreover, our pseudo-Riemannian optimizer manages to learn representations that satisfy all the constraints for low-dimensional manifolds such as $\mathcal{Q}^{4,1}_{-1}$ and $\mathcal{Q}^{4,2}_{-1}$, while the optimizer introduced in Section 4.1 does not. Consequently, we only use the pseudo-Riemannian optimizer in the following results.

Hierarchy extraction: To quantitatively evaluate our approach, we apply it to the problem of predicting the high-level nodes in the hierarchy from the weighted matrix $S$ given as supervision. We consider the challenging low-dimensional setting where all the learned representations lie on a 4-dimensional manifold (i.e. $p + q + 1 = 5$). Hyperbolic distances are known to grow exponentially as we get further from the origin. Therefore, the sum of distances $\delta_i = \sum_{j=1}^n d(x_i, x_j)$ of a node $v_i$ to all other nodes is a good indication of importance. Intuitively, high-level nodes will be closer to most nodes than low-level nodes. We then sort the scores $\delta_1, \cdots, \delta_n$ in ascending order and report the ranks of the two leaders $v_1$ and $v_n$ (in no particular order) in the first two rows of Table 1, averaged over 5 different initializations/runs. Leaders tend to have a smaller $\delta_i$ score with ultrahyperbolic distances than with Euclidean, hyperbolic or spherical distances. Instead of using $\delta_i$ for hyperbolic representations, the importance of a node $v_i$ can be evaluated by using the Euclidean norm of its embedding $x_i$ as a proxy [11, 18, 19]. This is because high-level nodes of a tree in hyperbolic space are usually closer to the origin than low-level nodes. Not surprisingly, this proxy leads to worse performance ($8.6 \pm 2.3$ and $18.6 \pm 4.9$) as the relationships are not those of a tree. Since hierarchy levels are hard to compare for low-level nodes, we select the 10 (or 5) most influential members based on the score $s_i = \sum_{j=1}^n S_{ij}$.
The corresponding nodes are 34, 1, 33, 3, 2, 32, 24, 4, 9, 14 (in that order). Spearman's rank correlation coefficient [24] between the selected scores $s_i$ and the corresponding $\delta_i$ is reported in Table 1 and shows the relevance of our representations. Due to lack of space, we report in the supp. material similar experiments on a larger hierarchical dataset [9] that describes co-authorship from papers published at NIPS from 1988 to 2003.

6 Conclusion
We have introduced ultrahyperbolic representations. Our representations lie on a pseudo-Riemannian manifold of constant nonzero curvature which generalizes hyperbolic and spherical geometries and includes them as submanifolds. Any relationship described in those geometries can then be described with our representations, which are more flexible. We have introduced new optimization tools and experimentally shown that our representations can extract hierarchies in graphs that contain cycles.

Broader Impact
We introduce a novel way of representing relationships between data points by considering the geometry of non-Riemannian manifolds of constant nonzero curvature. The relationships between data points are described by a dissimilarity function that we introduce, which exploits the structure of the manifold. It is more flexible than the distance metrics used in the hyperbolic and spherical geometries often employed in machine learning and computer vision. Nonetheless, since the problems involving our representations are not straightforward to optimize, we propose novel optimization algorithms that can potentially benefit the machine learning, computer vision and natural language processing communities. Indeed, our method is application agnostic and could extend existing frameworks. Our contribution is mainly theoretical, but we have included one practical application. Similarly to hyperbolic representations, which are popular for representing tree-like data, we have shown that our representations are well adapted to the more general case of hierarchical graphs with cycles. These graphs appear in many different fields of research such as medicine, molecular biology and the social sciences. For example, an ultrahyperbolic representation of proteins might assist in understanding their complicated folding mechanisms. Moreover, these representations could assist in analyzing features of social media such as discovering new trends and leading "connectors". The impact of community detection for commercial or political advertising is already known in social networking services. We foresee that our method will have many more graph-based practical applications. We know of very few applications outside of general relativity that use pseudo-Riemannian geometry. We hope that our research will stimulate other applications in machine learning and related fields. Finally, although we have introduced a novel descent direction for our optimization algorithm, future research could study and improve its rate of convergence.

Acknowledgments and Disclosure of Funding
We thank Jonah Philion, Guojun Zhang and the anonymous reviewers for helpful feedback on early versions of this manuscript. This article was entirely funded by NVIDIA corporation. Marc Law and Jos Stam completed this work from home during the COVID-19 pandemic.
1. What is the focus and contribution of the paper on geometric computations?
2. What are the strengths of the proposed approach, particularly in terms of its application to optimization on manifolds?
3. What are the weaknesses of the paper regarding its explanations and clarity?
4. Are there any concerns or suggestions for improving the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents a generalization of the hyperbolic and spherical manifolds. The main contribution of the paper is to show the computation of geodesics, exponential, and logarithm maps that allow optimization on this manifold. The experiment results look solid. Strengths A good work on a particular manifold, especially for optimization on the manifold. It details relevant developments. Weaknesses The paper could benefit from a few more explanations as mentioned in the detailed feedback.
NIPS
Title Ultrahyperbolic Representation Learning Abstract In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the nondegenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations. 1 Introduction In most machine learning applications, data representations lie on a smooth manifold [16] and the training procedure is optimized with an iterative algorithm such as line search or trust region methods [20]. In most cases, the smooth manifold is Riemannian, which means that it is equipped with a positive definite metric. Due to the positive definiteness of the metric, the negative of the (Riemannian) gradient is a descent direction that can be exploited to iteratively minimize some objective function [1]. The choice of metric on the Riemannian manifold determines how relations between points are quantified. The most common Riemannian manifold is the flat Euclidean space, which has constant zero curvature and the distances between points are measured by straight lines. An intuitive example of non-Euclidean Riemannian manifold is the spherical model (i.e. representations lie on a sphere) that has constant positive curvature and is used for instance in face recognition [25, 26]. On the sphere, geodesic distances are a function of angles. Similarly, Riemannian spaces of constant negative curvature are called hyperbolic [23]. Such spaces were shown by Gromov to be well suited to represent tree-like structures [10]. The machine learning community has adopted these spaces to learn tree-like graphs [5] and hierarchical data structures [11, 18, 19], and also to compute means in tree-like shapes [6, 7]. In this paper, we consider a class of pseudo-Riemannian manifolds of constant nonzero curvature [28] not previously considered in machine learning. These manifolds not only generalize the hyperbolic and spherical geometries mentioned above, but also contain hyperbolic and spherical submanifolds and can therefore describe relationships specific to those geometries. The difference is that we consider the larger class of pseudo-Riemannian manifolds where the considered nondegenerate metric tensor need not be positive definite. Optimizing a cost function on our non-flat ultrahyperbolic space requires a descent direction method that follows a path along the curved manifold. We achieve this by employing tools from differential geometry such as geodesics and exponential maps. The theoretical contributions in this paper are two-fold: (1) explicit methods to calculate dissimilarities and (2) general optimization tools on pseudo-Riemannian manifolds of constant nonzero curvature. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 2 Pseudo-Hyperboloids Notation: We denote points on a smooth manifoldM [16] by boldface Roman characters x ∈M. 
TxM is the tangent space ofM at x and we write tangent vectors ξ ∈ TxM in boldface Greek fonts. Rd is the (flat) d-dimensional Euclidean space, it is equipped with the (positive definite) dot product denoted by 〈·, ·〉 and defined as 〈x,y〉 = x>y. The `2-norm of x is ‖x‖ = √ 〈x,x〉. Rd∗ = Rd\{0} is the Euclidean space with the origin removed. Pseudo-Riemannian manifolds: A smooth manifoldM is pseudo-Riemannian (also called semiRiemannian [21]) if it is equipped with a pseudo-Riemannian metric tensor (named “metric” for short in differential geometry). The pseudo-Riemannian metric gx : TxM× TxM→ R at some point x ∈M is a nondegenerate symmetric bilinear form. Nondegeneracy means that if for a given ξ ∈ TxM and for all ζ ∈ TxM we have gx(ξ, ζ) = 0, then ξ = 0. If the metric is also positive definite (i.e. ∀ξ ∈ TxM, gx(ξ, ξ) > 0 iff ξ 6= 0), then it is Riemannian. Riemannian geometry is a special case of pseudo-Riemannian geometry where the metric is positive definite. In general, this is not the case and non-Riemannian manifolds distinguish themselves by having some non-vanishing tangent vectors ξ 6= 0 that satisfy gx(ξ, ξ) ≤ 0. We refer the reader to [2, 21, 28] for details. Pseudo-hyperboloids generalize spherical and hyperbolic manifolds to the class of pseudoRiemannian manifolds. Let us note d = p+ q+ 1 ∈ N the dimensionality of some pseudo-Euclidean space where each vector is written x = (x0, x1, · · · , xq+p)>. That space is denoted by Rp,q+1 when it is equipped with the following scalar product (i.e. nondegenerate symmetric bilinear form [21]): ∀a = (a0, · · · , aq+p)> , b = (b0, · · · , bq+p)> , 〈a,b〉q = − q∑ i=0 aibi+ p+q∑ j=q+1 ajbj = a >Gb, (1) where G = G−1 = Iq+1,p is the d× d diagonal matrix with the first q+ 1 diagonal elements equal to −1 and the remaining p equal to 1. Since Rp,q+1 is a vector space, we can identify the tangent space to the space itself by means of the natural isomorphism TxRp,q+1 ≈ Rp,q+1. Using the terminology of special relativity, Rp,q+1 has q + 1 time dimensions and p space dimensions. A pseudo-hyperboloid is the following submanifold of codimension one (i.e. hypersurface) in Rp,q+1: Qp,qβ = { x = (x0, x1, · · · , xp+q)> ∈ Rp,q+1 : ‖x‖2q = β } , (2) where β ∈ R∗ is a nonzero real number and the function ‖ · ‖2q given by ‖x‖2q = 〈x,x〉q is the associated quadratic form of the scalar product. It is equivalent to work with eitherQp,qβ orQ q+1,p−1 −β as they are interchangeable via an anti-isometry (see supp. material). For instance, the unit q-sphere Sq = { x ∈ Rq+1 : ‖x‖ = 1 } is anti-isometric to Q0,q−1 which is then spherical. In the literature, the set Qp,qβ is called a “pseudo-sphere” when β > 0 and a “pseudo-hyperboloid” when β < 0. In the rest of the paper, we only consider the pseudo-hyperbolic case (i.e. β < 0). Moreover, for any β < 0, Qp,qβ is homothetic to Q p,q −1, the value of β can then be considered to be −1. We can obtain the spherical and hyperbolic geometries by constraining all the elements of the space dimensions of a pseudo-hyperboloid to be zero or constraining all the elements of the time dimensions except one to be zero, respectively. Pseudo-hyperboloids then generalize spheres and hyperboloids. The pseudo-hyperboloids that we consider in this paper are hard to visualize as they live in ambient spaces with dimension higher than 3. In Fig. 1, we show iso-surfaces of a projection of the 3-dimensional pseudo-hyperboloid Q2,1−1 (embedded in R2,2) into R3 along its first time dimension. 
Metric tensor and tangent space: The metric tensor at x ∈ Qp,qβ is gx(·, ·) = 〈·, ·〉q where gx : TxQp,qβ × TxQ p,q β → R. By using the isomorphism TxRp,q+1 ≈ Rp,q+1 mentioned above, the tangent space of Qp,qβ at x can be defined as TxQ p,q β = { ξ ∈ Rp,q+1 : 〈x, ξ〉q = 0 } for all β 6= 0. Finally, the orthogonal projection of an arbitrary d-dimensional vector z onto TxQp,qβ is: Πx(z) = z− 〈z,x〉q 〈x,x〉q x. (3) 3 Measuring Dissimilarity on Pseudo-Hyperboloids This section introduces the differential geometry tools necessary to quantify dissimilarities/distances between points on Qp,qβ . Measuring dissimilarity is an important task in machine learning and has many applications (e.g. in metric learning [29]). Intrinsic geometry: The intrinsic geometry of the hypersurface Qp,qβ embedded in Rp,q+1 (i.e. the geometry perceived by the inhabitants of Qp,qβ [21]) derives solely from its metric tensor applied to tangent vectors to Qp,qβ . For instance, it can be used to measure the arc length of a tangent vector joining two points along a geodesic and define their geodesic distance. Before considering geodesic distances, we consider extrinsic distances (i.e. distances in the ambient space Rp,q+1). Since Rp,q+1 is isomorphic to its tangent space, tangent vectors to Rp,q+1 are naturally identified with points. Using the quadratic form of Eq. (1), the extrinsic distance between two points a,b ∈ Qp,qβ is: dq(a,b) = √ |‖a− b‖2q| = √ |‖a‖2q + ‖b‖2q − 2〈a,b〉q| = √ |2β − 2〈a,b〉q|. (4) This distance is a good proxy for the geodesic distance dγ(·, ·), that we introduce below, if it preserves distance relations: dγ(a,b) < dγ(c,d) iff dq(a,b) < dq(c,d). This relation is satisfied for two special cases of pseudo-hyperboloids for which the geodesic distance is well known: • Spherical manifold (Q0,qβ ): If p = 0, the geodesic distance dγ(a,b) = √ |β| cos−1 ( 〈a,b〉q β ) is called spherical distance. In practice, the cosine similarity 〈·,·〉qβ is often considered instead of dγ(·, ·) since it satisfies dγ(a,b) < dγ(c,d) iff 〈a,b〉q < 〈c,d〉q iff dq(a,b) < dq(c,d). • Hyperbolic manifold (upper sheet of the two-sheet hyperboloid Qp,0β ): If q = 0, the geodesic distance dγ(a,b) = √ |β| cosh−1 ( 〈a,b〉q β ) with a0 > 0 and b0 > 0 is called Poincaré distance [19]. The (extrinsic) Lorentzian distance was shown to be a good proxy in hyperbolic geometry [11]. For the ultrahyperbolic case (i.e. q ≥ 1 and p ≥ 2), the distance relations are not preserved: dγ(a,b) < dγ(c,d) 6⇐⇒ dq(a,b) < dq(c,d). We then need to consider only geodesic distances. This section introduces closed-form expressions for geodesic distances on ultrahyperbolic manifolds. Geodesics: Informally, a geodesic is a curve joining points on a manifoldM that minimizes some “effort” depending on the metric. More precisely, let I ⊆ R be a (maximal) interval containing 0. A geodesic γ : I →M maps a real value t ∈ I to a point on the manifoldM. It is a curve onM defined by its initial point γ(0) = x ∈M and initial tangent vector γ′(0) = ξ ∈ TxM where γ′(t) is the derivative of γ at t. By analogy with physics, t is considered as a time value. Intuitively, one can think of the curve as the trajectory over time of a ball being pushed from a point x at t = 0 with initial velocity ξ and constrained to roll on the manifold. We denote this curve explicitly by γx→ξ(t) unless the dependence is obvious from the context. For this curve to be a geodesic, its acceleration has to be zero: ∀t ∈ I, γ′′(t) = 0. 
This condition is a second-order ordinary differential equation that has a unique solution for a given set of initial conditions [17]. The interval I is said to be maximal if it cannot be extended to a larger interval. In the case of Q^{p,q}_β, we have I = R and I is then maximal.

Geodesic of Q^{p,q}_β: As we show in the supp. material, the geodesics of Q^{p,q}_β are a combination of the hyperbolic, flat and spherical cases. The nature of the geodesic γ_{x→ξ} depends on the sign of ⟨ξ, ξ⟩_q. For all t ∈ R, the geodesic γ_{x→ξ} of Q^{p,q}_β with β < 0 is written:

γ_{x→ξ}(t) =
  cosh( t √|⟨ξ, ξ⟩_q| / √|β| ) x + ( √|β| / √|⟨ξ, ξ⟩_q| ) sinh( t √|⟨ξ, ξ⟩_q| / √|β| ) ξ    if ⟨ξ, ξ⟩_q > 0
  x + t ξ                                                                                  if ⟨ξ, ξ⟩_q = 0
  cos( t √|⟨ξ, ξ⟩_q| / √|β| ) x + ( √|β| / √|⟨ξ, ξ⟩_q| ) sin( t √|⟨ξ, ξ⟩_q| / √|β| ) ξ      if ⟨ξ, ξ⟩_q < 0   (5)

We recall that ⟨ξ, ξ⟩_q = 0 does not imply ξ = 0. The geodesics are an essential ingredient to define a mapping known as the exponential map. See Fig. 2 (left) for a depiction of these three types of geodesics, and Fig. 2 (right) for a depiction of the other quantities introduced in this section.

Exponential map: Exponential maps are a way of collecting all of the geodesics of a pseudo-Riemannian manifold M into a unique differentiable mapping. Let D_x ⊆ T_x M be the set of tangent vectors ξ such that γ_{x→ξ} is defined at least on the interval [0, 1]. This allows us to uniquely define the exponential map exp_x : D_x → M such that exp_x(ξ) = γ_{x→ξ}(1). The manifold Q^{p,q}_β is geodesically complete, so the domain of its exponential map is D_x = T_x Q^{p,q}_β. Using Eq. (5) with t = 1, we obtain an exponential map of the entire tangent space to the manifold:

∀ξ ∈ T_x Q^{p,q}_β,   exp_x(ξ) = γ_{x→ξ}(1).   (6)

We make the important observation that the image of the exponential map does not necessarily cover the entire manifold: not all points on a manifold are connected by a geodesic. This is the case for our pseudo-hyperboloids. Namely, for a given point x ∈ Q^{p,q}_β there exist points y that are not in the image of the exponential map (i.e. there does not exist a tangent vector ξ such that y = exp_x(ξ)).

Logarithm map: We provide a closed-form expression of the logarithm map for pseudo-hyperboloids. Let U_x ⊆ Q^{p,q}_β be some neighborhood of x. The logarithm map log_x : U_x → T_x Q^{p,q}_β is defined as the inverse of the exponential map on U_x (i.e. log_x = exp_x^{−1}). We propose, for all y ∈ U_x:

log_x(y) =
  ( cosh⁻¹(⟨x, y⟩_q / β) / √((⟨x, y⟩_q / β)² − 1) ) ( y − (⟨x, y⟩_q / β) x )    if ⟨x, y⟩_q / |β| < −1
  y − x                                                                         if ⟨x, y⟩_q / |β| = −1
  ( cos⁻¹(⟨x, y⟩_q / β) / √(1 − (⟨x, y⟩_q / β)²) ) ( y − (⟨x, y⟩_q / β) x )     if ⟨x, y⟩_q / |β| ∈ (−1, 1)   (7)

By substituting ξ = log_x(y) into Eq. (6), one can verify that our formulas are the inverse of the exponential map. The set U_x = { y ∈ Q^{p,q}_β : ⟨x, y⟩_q < |β| } is called a normal neighborhood of x ∈ Q^{p,q}_β since for all y ∈ U_x, there exists a geodesic from x to y such that log_x(y) = γ′_{x→log_x(y)}(0). We show in the supp. material that the logarithm map is not defined if ⟨x, y⟩_q ≥ |β|.

Proposed dissimilarity: We define our dissimilarity function based on the general notions of arc length and radius function on pseudo-Riemannian manifolds, which we recall in the next paragraph (see details in Chapter 5 of [21]). This corresponds to the geodesic distance in the Riemannian case. Let U_x be a normal neighborhood of x ∈ M with M pseudo-Riemannian. The radius function r_x : U_x → R is defined as r_x(y) = √|g_x(log_x(y), log_x(y))| where g_x is the metric at x. If σ_{x→ξ} is the radial geodesic from x to y ∈ U_x (i.e. ξ = log_x(y)), then the arc length of σ_{x→ξ} equals r_x(y).
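A possible NumPy transcription of the three-branch geodesic/exponential map of Eqs. (5)–(6) and the logarithm map of Eq. (7) is sketched below, reusing scalar_product from the earlier sketch; the tolerance handling is an implementation choice, not part of the paper. Note that, since β < 0, the condition ⟨x, y⟩_q / |β| < −1 of Eq. (7) corresponds to c = ⟨x, y⟩_q / β > 1.

```python
def exp_map(x, xi, q, beta=-1.0, t=1.0):
    # Geodesic gamma_{x -> xi}(t) of Eq. (5); exp_x(xi) is the case t = 1 (Eq. (6)).
    s = scalar_product(xi, xi, q)
    if abs(s) < 1e-12:                 # null tangent vector: straight line
        return x + t * xi
    r = np.sqrt(abs(s)) / np.sqrt(abs(beta))
    if s > 0:                          # hyperbolic-type branch
        return np.cosh(t * r) * x + (np.sinh(t * r) / r) * xi
    return np.cos(t * r) * x + (np.sin(t * r) / r) * xi   # spherical-type branch

def log_map(x, y, q, beta=-1.0, eps=1e-9):
    # Logarithm map of Eq. (7); only defined on the normal neighborhood
    # <x, y>_q < |beta|, i.e. c = <x, y>_q / beta > -1 (recall beta < 0).
    c = scalar_product(x, y, q) / beta
    if c <= -1.0 + eps:
        raise ValueError("log_x(y) undefined: <x, y>_q >= |beta|")
    u = y - c * x                      # component "orthogonal" to x
    if c > 1.0 + eps:                  # <x, y>_q / |beta| < -1
        return (np.arccosh(c) / np.sqrt(c * c - 1.0)) * u
    if abs(c - 1.0) <= eps:            # <x, y>_q / |beta| = -1, i.e. y = x
        return y - x
    return (np.arccos(c) / np.sqrt(1.0 - c * c)) * u      # c in (-1, 1)
```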
We then define the geodesic “distance” between x ∈ Q^{p,q}_β and y ∈ U_x as the arc length of σ_{x→log_x(y)}:

d_γ(x, y) = √|‖log_x(y)‖²_q| =
  √|β| cosh⁻¹(⟨x, y⟩_q / β)    if ⟨x, y⟩_q / |β| < −1
  0                            if ⟨x, y⟩_q / |β| = −1
  √|β| cos⁻¹(⟨x, y⟩_q / β)     if ⟨x, y⟩_q / |β| ∈ (−1, 1)   (8)

It is important to note that our “distance” is not a distance metric. However, it satisfies the axioms of a symmetric premetric: (i) d_γ(x, y) = d_γ(y, x) ≥ 0 and (ii) d_γ(x, x) = 0. These conditions are sufficient to quantify the notion of nearness via a ρ-ball centered at x: B^ρ_x = { y : d_γ(x, y) < ρ }. In general, topological spaces provide a qualitative (not necessarily quantitative) way to detect “nearness” through the concept of a neighborhood at a point [15]. Something is true “near x” if it is true in the neighborhood of x (e.g. in B^ρ_x). Our premetric is similar to metric learning methods [13, 14, 29] that learn a Mahalanobis-like distance pseudo-metric parameterized by a positive semi-definite matrix. Pairs of distinct points can have zero “distance” if the matrix is not positive definite. However, unlike classic metric learning, we can have triplets (x, y, z) that satisfy d_γ(x, y) = d_γ(x, z) = 0 but d_γ(y, z) > 0 (e.g. x = (1, 0, 0, 0)^⊤, y = (1, 1, 1, 0)^⊤, z = (1, 1, 0, 1)^⊤ in Q^{2,1}_{−1}).

Since the logarithm map is not defined if ⟨x, y⟩_q ≥ |β|, we propose to use the following continuous approximation defined on the whole manifold instead:

∀x, y ∈ Q^{p,q}_β,   D_γ(x, y) = d_γ(x, y) if ⟨x, y⟩_q ≤ 0,   and   D_γ(x, y) = √|β| ( π/2 + ⟨x, y⟩_q / |β| ) otherwise.   (9)

To the best of our knowledge, the explicit formulation of the logarithm map for Q^{p,q}_β in Eq. (7) and its corresponding radius function in Eq. (8) used to define a dissimilarity function are novel. We have also proposed a linear approximation to evaluate dissimilarity when the logarithm map is not defined, but other choices are possible. For instance, when a geodesic does not exist, a standard way in differential geometry to calculate curves is to consider broken geodesics. One might consider instead the dissimilarity d_γ(x, −x) + d_γ(−x, y) = π√|β| + d_γ(−x, y) if log_x(y) is not defined, since −x ∈ Q^{p,q}_β and log_{−x}(y) is defined. This interesting problem is left for future research.

4 Ultrahyperbolic Optimization

In this section we present optimization frameworks to optimize any differentiable function defined on Q^{p,q}_β. Our goal is to compute descent directions on the ultrahyperbolic manifold. We consider two approaches. In the first approach, we map our representation from Euclidean space to ultrahyperbolic space. This is similar to the approach taken by [11] in hyperbolic space. In the second approach, we optimize using gradients defined directly in the pseudo-Riemannian tangent space. We propose a novel descent direction which guarantees the minimization of some cost function.

4.1 Euclidean optimization via a differentiable mapping onto Q^{p,q}_β

Our first method maps Euclidean representations that lie in R^d to the pseudo-hyperboloid Q^{p,q}_β, and the chain rule is exploited to perform standard gradient descent. To this end, we construct a differentiable mapping φ : R^{q+1}_* × R^p → Q^{p,q}_β. The image of a point already on Q^{p,q}_β under the mapping φ is itself: ∀x ∈ Q^{p,q}_β, φ(x) = x. Let S^q = { x ∈ R^{q+1} : ‖x‖ = 1 } denote the unit q-sphere. We first introduce the following diffeomorphisms:

Theorem 4.1 (Diffeomorphisms). For any β < 0, there is a diffeomorphism ψ : Q^{p,q}_β → S^q × R^p. Let us note x = (t; s) ∈ Q^{p,q}_β with t ∈ R^{q+1}_* and s ∈ R^p, and let us note z = (u; v) ∈ S^q × R^p where u ∈ S^q and v ∈ R^p.
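The distance of Eq. (8) and the everywhere-defined approximation of Eq. (9) translate directly into code. The sketch below again reuses scalar_product; since d_γ only depends on x and y through c = ⟨x, y⟩_q / β, the logarithm map itself is not needed:

```python
def geodesic_distance(x, y, q, beta=-1.0, eps=1e-9):
    # Arc-length "distance" d_gamma of Eq. (8); y must lie in the normal
    # neighborhood of x (c = <x, y>_q / beta > -1).
    c = scalar_product(x, y, q) / beta
    if c > 1.0 + eps:
        return np.sqrt(abs(beta)) * np.arccosh(c)
    if abs(c - 1.0) <= eps:
        return 0.0
    return np.sqrt(abs(beta)) * np.arccos(c)

def dissimilarity(x, y, q, beta=-1.0):
    # Continuous approximation D_gamma of Eq. (9), defined on the whole manifold.
    s = scalar_product(x, y, q)
    if s <= 0.0:
        return geodesic_distance(x, y, q, beta)
    return np.sqrt(abs(beta)) * (np.pi / 2.0 + s / abs(beta))
```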
The mapping ψ and its inverse ψ^{−1} are formulated as follows (see proofs in supp. material):

ψ(x) = ( t / ‖t‖ ; s / √|β| )   and   ψ^{−1}(z) = √|β| ( √(1 + ‖v‖²) u ; v ).   (10)

With these mappings, any vector x ∈ R^{q+1}_* × R^p can be mapped to Q^{p,q}_β via φ = ψ^{−1} ∘ ψ. φ is differentiable everywhere except when x_0 = · · · = x_q = 0, which should never occur in practice. It can therefore be optimized using standard gradient methods.

4.2 Pseudo-Riemannian optimization

We now introduce a novel method to optimize any differentiable function f : Q^{p,q}_β → R defined on the pseudo-hyperboloid. As we show below, the (negative of the) pseudo-Riemannian gradient is not a descent direction. We propose a simple and efficient way to calculate a descent direction.

Pseudo-Riemannian gradient: Since x ∈ Q^{p,q}_β also lies in the Euclidean ambient space R^d, the function f has a well defined Euclidean gradient ∇f(x) = (∂f(x)/∂x_0, · · · , ∂f(x)/∂x_{p+q})^⊤ ∈ R^d. The gradient of f in the pseudo-Euclidean ambient space R^{p,q+1} is G^{−1}∇f(x) = G∇f(x) ∈ R^{p,q+1}. Since Q^{p,q}_β is a submanifold of R^{p,q+1}, the pseudo-Riemannian gradient Df(x) ∈ T_x Q^{p,q}_β of f on Q^{p,q}_β is the orthogonal projection of G∇f(x) onto T_x Q^{p,q}_β (see Chapter 4 of [21]):

Df(x) = Π_x(G∇f(x)) = G∇f(x) − (⟨G∇f(x), x⟩_q / ⟨x, x⟩_q) x = G∇f(x) − (⟨∇f(x), x⟩ / ⟨x, x⟩_q) x.   (11)

This gradient forms the foundation of our descent method optimizer, as will be shown in Eq. (13).

Iterative optimization: Our goal is to iteratively decrease the value of the function f by following some descent direction. Since Q^{p,q}_β is not a vector space, we do not “follow the descent direction” by adding the descent direction multiplied by a step size, as this would result in a new point that does not necessarily lie on Q^{p,q}_β. Instead, to remain on the manifold, we use our exponential map defined in Eq. (6). This is a standard way to optimize on Riemannian manifolds [1]. Given a step size t > 0, one step of descent along a tangent vector ζ ∈ T_x Q^{p,q}_β is given by:

y = exp_x(tζ) ∈ Q^{p,q}_β.   (12)

Descent direction: We now explain why the negative of the pseudo-Riemannian gradient is not a descent direction. Our explanation extends Chapter 3 of [20], which gives the criteria for a tangent vector ζ to be a descent direction when the domain of the optimized function is a Euclidean space. By using the properties described in Section 3, we know that for all t ∈ R and all ξ ∈ T_x Q^{p,q}_β, we have the equalities exp_x(tξ) = γ_{x→tξ}(1) = γ_{x→ξ}(t), so we can equivalently fix t to 1 and choose the scale of ξ appropriately. By exploiting Taylor’s first-order approximation, there exists some small enough tangent vector ζ ≠ 0 (i.e. with exp_x(ζ) belonging to a convex neighborhood of x [4, 8]) that satisfies the following conditions: γ_{x→ζ}(0) = x ∈ Q^{p,q}_β, γ′_{x→ζ}(0) = ζ ∈ T_x Q^{p,q}_β, γ_{x→ζ}(1) = y ∈ Q^{p,q}_β, and the function f ∘ γ_{x→ζ} : R → R can be approximated at t = 1 by:

f(y) = f ∘ γ_{x→ζ}(1) ≃ f ∘ γ_{x→ζ}(0) + (f ∘ γ_{x→ζ})′(0) = f(x) + ⟨Df(x), ζ⟩_q,   (13)

where we use the following properties: ∀t, (f ∘ γ)′(t) = df(γ′(t)) = g_{γ(t)}(Df(γ(t)), γ′(t)) (see details in pages 11, 15 and 85 of [21]), df is the differential of f and γ is a geodesic. To be a descent direction at x (i.e. so that f(y) < f(x)), the search direction ζ has to satisfy ⟨Df(x), ζ⟩_q < 0. However, choosing ζ = −ηDf(x), where η > 0 is a step size, might increase the function value if the scalar product ⟨·, ·⟩_q is not positive definite. If p + q ≥ 1, then ⟨·, ·⟩_q is positive definite only if q = 0 (see details in supp. material), and it is negative definite iff p = 0, since ⟨·, ·⟩_q = −⟨·, ·⟩ in this case.
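The diffeomorphism of Eq. (10) is easy to sketch in NumPy (continuing the conventions used above; x is split into its first q + 1 coordinates t and last p coordinates s):

```python
def psi(x, q, beta=-1.0):
    # psi : Q^{p,q}_beta -> S^q x R^p of Eq. (10), with x = (t; s).
    t, s = x[:q + 1], x[q + 1:]
    return np.concatenate([t / np.linalg.norm(t), s / np.sqrt(abs(beta))])

def psi_inv(z, q, beta=-1.0):
    # psi^{-1} : S^q x R^p -> Q^{p,q}_beta of Eq. (10), with z = (u; v).
    u, v = z[:q + 1], z[q + 1:]
    return np.sqrt(abs(beta)) * np.concatenate(
        [np.sqrt(1.0 + np.dot(v, v)) * u, v])

def phi(x, q, beta=-1.0):
    # phi = psi^{-1} o psi maps any x in R^{q+1}_* x R^p onto the manifold
    # and fixes points that already lie on Q^{p,q}_beta.
    return psi_inv(psi(x, q, beta), q, beta)
```

One can check that the output satisfies ‖φ(x)‖²_q = |β|(−(1 + ‖v‖²)‖u‖² + ‖v‖²) = −|β| = β, so φ(x) indeed lies on Q^{p,q}_β.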
A simple solution would be to choose ζ = ±ηDf(x) depending on the sign of ⟨Df(x), ζ⟩_q, but ⟨Df(x), ζ⟩_q might be equal to 0 even if Df(x) ≠ 0 when ⟨·, ·⟩_q is indefinite. The optimization algorithm might then be stuck on a level set of f, which is problematic.

Algorithm 1 Pseudo-Riemannian optimization on Q^{p,q}_β
input: differentiable function f : Q^{p,q}_β → R to be minimized, some initial value of x ∈ Q^{p,q}_β
1: while not converged do
2:   Calculate ∇f(x)               ▷ i.e. the Euclidean gradient of f at x in the Euclidean ambient space
3:   χ ← Π_x(G Π_x(G∇f(x)))        ▷ see Eq. (14)
4:   x ← exp_x(−ηχ)                ▷ where η > 0 is a step size (e.g. determined with line search)
5: end while

Proposed solution: To ensure that ζ ∈ T_x Q^{p,q}_β is a descent direction, we propose a simple expression that satisfies ⟨Df(x), ζ⟩_q < 0 if Df(x) ≠ 0 and ⟨Df(x), ζ⟩_q = 0 otherwise. We propose to formulate ζ = −ηΠ_x(G Df(x)) ∈ T_x Q^{p,q}_β, and we define the following tangent vector χ = −(1/η)ζ:

χ = Π_x(G Df(x)) = ∇f(x) − (⟨∇f(x), x⟩ / ⟨x, x⟩_q) Gx − (⟨∇f(x), x⟩_q / ⟨x, x⟩_q) x + (‖x‖² ⟨∇f(x), x⟩ / ⟨x, x⟩²_q) x.   (14)

The tangent vector ζ is a descent direction because ⟨Df(x), ζ⟩_q = −η⟨Df(x), χ⟩_q is nonpositive:

⟨Df(x), χ⟩_q = ‖∇f(x)‖² − 2 ⟨∇f(x), x⟩⟨∇f(x), x⟩_q / ⟨x, x⟩_q + ⟨∇f(x), x⟩² ‖x‖² / ⟨x, x⟩²_q   (15)
             = ‖G∇f(x) − (⟨∇f(x), x⟩ / ⟨x, x⟩_q) x‖² = ‖Df(x)‖² ≥ 0.   (16)

We also have ⟨Df(x), χ⟩_q = ‖Df(x)‖² = 0 iff Df(x) = 0 (i.e. x is a stationary point). It is worth noting that Df(x) = 0 implies χ = Π_x(G0) = 0. Moreover, χ = 0 implies that ‖Df(x)‖² = ⟨Df(x), 0⟩_q = 0. We then have χ = 0 iff Df(x) = 0.

Our proposed algorithm for the minimization problem min_{x ∈ Q^{p,q}_β} f(x) is illustrated in Algorithm 1. Following generic Riemannian optimization algorithms [1], at each iteration it first computes the descent direction −χ ∈ T_x Q^{p,q}_β, then decreases the function by applying the exponential map defined in Eq. (6). It is worth noting that our proposed descent method can be applied to any differentiable function f : Q^{p,q}_β → R, not only to those that exploit the distance introduced in Section 3.

Interestingly, our method can also be seen as a preconditioning technique [20] where the descent direction is obtained by preconditioning the pseudo-Riemannian gradient Df(x) with the matrix P_x = [ G − (1/⟨x, x⟩_q) x x^⊤ ] ∈ R^{d×d}. In other words, we have χ = P_x Df(x) = Π_x(G Df(x)). In the more general setting of pseudo-Riemannian manifolds, another preconditioning technique was proposed in [8]. The method in [8] requires performing a Gram-Schmidt process at each iteration to obtain an (ordered [28]) orthonormal basis of the tangent space at x w.r.t. the induced quadratic form of the manifold. However, the Gram-Schmidt process is unstable and has algorithmic complexity that is cubic in the dimensionality of the tangent space. On the other hand, our method is more stable and its algorithmic complexity is linear in the dimensionality of the tangent space.

5 Experiments

We now experimentally validate our proposed optimization methods and the effectiveness of our dissimilarity function. Our main experimental results can be summarized as follows:

• Both optimizers introduced in Section 4 decrease some objective function f : Q^{p,q}_β → R. While both optimizers manage to learn high-dimensional representations that satisfy the problem-dependent training constraints, only the pseudo-Riemannian optimizer satisfies all the constraints in lower-dimensional spaces. This is because it exploits the underlying metric of the manifold.

• Hyperbolic representations are popular in machine learning as they are well suited to represent hierarchical trees [10, 18, 19].
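Putting Eqs. (11), (12) and (14) together, one iteration of Algorithm 1 can be sketched as follows, reusing project_tangent and exp_map from the earlier sketches; the function names and the fixed step size are illustrative, and in practice η would be set by line search as noted in the algorithm:

```python
def minkowski_metric(d, q):
    # The diagonal matrix G = I_{q+1,p} of Eq. (1).
    g = np.ones(d)
    g[:q + 1] = -1.0
    return np.diag(g)

def pseudo_riemannian_step(x, euclidean_grad, q, beta=-1.0, eta=1e-2):
    # One iteration of Algorithm 1, given the Euclidean gradient of f at x.
    G = minkowski_metric(x.size, q)
    Df = project_tangent(x, G @ euclidean_grad, q)  # pseudo-Riemannian gradient, Eq. (11)
    chi = project_tangent(x, G @ Df, q)             # chi = Pi_x(G Df(x)), Eq. (14)
    return exp_map(x, -eta * chi, q, beta)          # retract onto the manifold, Eq. (12)
```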
On the other hand, hierarchical datasets whose graph contains cycles cannot be represented using trees. Therefore, we propose to represent such graphs using our ultrahyperbolic representations. An important example are community graphs such as Zachary’s karate club [30] that contain leaders. Because our ultrahyperbolic representations are more flexible than hyperbolic representations, we believe that our representations are better suited for these non-tree-like hierarchical structures.

Graph: Our ultrahyperbolic representations describe graph-structured datasets. Each dataset is an undirected weighted graph G = (V, E) which has node set V = {v_i}_{i=1}^{n} and edge set E = {e_k}_{k=1}^{m}. Each edge e_k is weighted by an arbitrary capacity c_k ∈ R_+ that models the strength of the relationship between nodes. The higher the capacity c_k, the stronger the relationship between the nodes connected by e_k.

Learned representations: Our problem formulation is inspired by hyperbolic representation learning approaches [18, 19] where the nodes of a tree (i.e. a graph without cycles) are represented in hyperbolic space. The hierarchical structure of the tree is then reflected by the order of distances between its nodes. More precisely, a node representation is learned so that each node is closer to its descendants and ancestors in the tree (w.r.t. the hyperbolic distance) than to any other node. For example, in a hierarchy of words, ancestors and descendants are hypernyms and hyponyms, respectively.

Our goal is to learn a set of n points x_1, · · · , x_n ∈ Q^{p,q}_β (embeddings) from a given graph G. The presence of cycles in the graph makes it difficult to determine ancestors and descendants. For this reason, we introduce for each pair of nodes (v_i, v_j) = e_k ∈ E the set of “weaker” pairs that have lower capacity: W(e_k) = {e_l : c_k > c_l} ∪ {(v_a, v_b) : (v_a, v_b) ∉ E}. Our goal is to learn representations such that pairs (v_i, v_j) with higher capacity have their representations (x_i, x_j) closer to each other than weaker pairs. Following [18], we formulate our problem as (see the sketch after this paragraph):

min_{x_1, ···, x_n ∈ Q^{p,q}_β}   ∑_{(v_i, v_j) = e_k ∈ E}  −log [ exp(−d(x_i, x_j)/τ) / ∑_{(v_a, v_b) ∈ W(e_k) ∪ {e_k}} exp(−d(x_a, x_b)/τ) ],   (17)

where d is the chosen dissimilarity function (e.g. D_γ(·, ·) defined in Eq. (9)) and τ > 0 is a fixed temperature parameter. The formulation of Eq. (17) is classic in the metric learning literature [3, 12, 27] and corresponds to optimizing some order on the learned distances via a softmax function.

Implementation details: We coded our approach in PyTorch [22], which automatically calculates the Euclidean gradient ∇f(x_i). Initially, a random set of vectors {z_i}_{i=1}^{n} is generated close to the positive pole (√|β|, 0, · · · , 0)^⊤ ∈ Q^{p,q}_β, with every coordinate perturbed uniformly by a random value in the interval [−ε, ε], where ε > 0 is chosen small enough so that ‖z_i‖²_q < 0. We set β = −1, ε = 0.1 and τ = 10^{−2}. Initial embeddings are generated as follows: ∀i, x_i = √|β| z_i / √|‖z_i‖²_q| ∈ Q^{p,q}_β.

Zachary’s karate club dataset [30] is a social network graph of a karate club comprised of n = 34 nodes, each representing a member of the karate club. The club was split due to a conflict between instructor "Mr. Hi" (node v_1) and administrator "John A" (node v_n). The remaining members now have to decide whether or not to join the new club created by v_1. In [30], Zachary defines a matrix C ∈ {0, 1, · · · , 7}^{n×n} of relative strengths of the friendships in the karate club that depends on various criteria.
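A minimal sketch of the initialization described above and of the objective of Eq. (17) is given below. It reuses scalar_product and dissimilarity from the earlier sketches; the data layout (lists of index pairs for edges and their weaker sets) is an assumption of this sketch, not the paper's implementation, and the paper optimizes the PyTorch analogue of this loss.

```python
def init_embeddings(n, p, q, beta=-1.0, eps=0.1, seed=0):
    # Perturb the positive pole and rescale so that ||x_i||_q^2 = beta.
    rng = np.random.default_rng(seed)
    d = p + q + 1
    pole = np.zeros(d)
    pole[0] = np.sqrt(abs(beta))
    Z = pole + rng.uniform(-eps, eps, size=(n, d))
    norms = np.array([np.sqrt(abs(scalar_product(z, z, q))) for z in Z])
    return np.sqrt(abs(beta)) * Z / norms[:, None]

def graph_loss(X, edges, weaker, q, beta=-1.0, tau=1e-2):
    # Objective of Eq. (17). `edges` is a list of node-index pairs (i, j);
    # `weaker[k]` lists the pairs in W(e_k) for the k-th edge.
    loss = 0.0
    for k, (i, j) in enumerate(edges):
        pairs = weaker[k] + [(i, j)]
        logits = np.array([-dissimilarity(X[a], X[b], q, beta) / tau
                           for (a, b) in pairs])
        m = logits.max()                                # stable log-sum-exp
        lse = m + np.log(np.sum(np.exp(logits - m)))
        loss += lse - logits[-1]                        # -log softmax of the edge pair
    return loss
```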
We note that the matrix is not symmetric and has 7 different pairs (v_i, v_j) for which C_{ij} ≠ C_{ji}. Since our dissimilarity function is symmetric, we consider the symmetric matrix S = C + C^⊤ instead. The value of S_{ij} is the capacity/weight assigned to the edge joining v_i and v_j, and there is no edge between v_i and v_j if S_{ij} = 0. Fig. 3 (left) illustrates the 34 nodes of the dataset; an edge joining the nodes v_i and v_j is drawn iff S_{ij} ≠ 0. The level of a node in the hierarchy corresponds approximately to its height in the figure.

Optimizers: We validate that our optimizers introduced in Section 4 decrease the cost function. First, we consider the simple unweighted case where every edge weight is 1. For each edge e_k ∈ E, W(e_k) is then the set of pairs of nodes that are not connected. In other words, Eq. (17) learns node representations with the property that every connected pair of nodes has a smaller distance than non-connected pairs. We use this condition as the stopping criterion of our algorithm. Fig. 3 (right) illustrates the loss values of Eq. (17) as a function of the number of iterations with the Euclidean gradient descent (Section 4.1) and our pseudo-Riemannian optimizer (introduced in Section 4.2). In each test, we vary the number of time dimensions q + 1 while the ambient space is of fixed dimensionality d = p + q + 1 = 10. We omit the case q = 0 since it corresponds to the (hyperbolic) Riemannian case already considered in [11, 19]. Both optimizers decrease the function and manage to satisfy all the expected distance relations. We note that when we use −Df(x) instead of −χ as a search direction, the algorithm does not converge. Moreover, our pseudo-Riemannian optimizer manages to learn representations that satisfy all the constraints for low-dimensional manifolds such as Q^{4,1}_{−1} and Q^{4,2}_{−1}, while the optimizer introduced in Section 4.1 does not. Consequently, we only use the pseudo-Riemannian optimizer in the following results.

Hierarchy extraction: To quantitatively evaluate our approach, we apply it to the problem of predicting the high-level nodes in the hierarchy from the weighted matrix S given as supervision. We consider the challenging low-dimensional setting where all the learned representations lie on a 4-dimensional manifold (i.e. p + q + 1 = 5). Hyperbolic distances are known to grow exponentially as we get further from the origin. Therefore, the sum of distances δ_i = ∑_{j=1}^{n} d(x_i, x_j) of a node v_i with all other nodes is a good indication of importance. Intuitively, high-level nodes will be closer to most nodes than low-level nodes. We then sort the scores δ_1, · · · , δ_n in ascending order and report the ranks of the two leaders v_1 and v_n (in no particular order) in the first two rows of Table 1, averaged over 5 different initializations/runs. Leaders tend to have a smaller δ_i score with ultrahyperbolic distances than with Euclidean, hyperbolic or spherical distances. Instead of using δ_i for hyperbolic representations, the importance of a node v_i can be evaluated by using the Euclidean norm of its embedding x_i as a proxy [11, 18, 19]. This is because high-level nodes of a tree in hyperbolic space are usually closer to the origin than low-level nodes. Not surprisingly, this proxy leads to worse performance (8.6 ± 2.3 and 18.6 ± 4.9), as the relationships are not those of a tree. Since hierarchy levels are hard to compare for low-level nodes, we select the 10 (or 5) most influential members based on the score s_i = ∑_{j=1}^{n} S_{ij}.
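The importance score δ_i described above is a straightforward computation over the learned embeddings; a small helper of our own naming (not from the paper) could look like this, again reusing the dissimilarity sketch:

```python
def importance_ranks(X, q, beta=-1.0):
    # delta_i = sum_j d(x_i, x_j) of the hierarchy-extraction experiment:
    # a smaller score indicates a more central (higher-level) node.
    n = X.shape[0]
    delta = np.array([sum(dissimilarity(X[i], X[j], q, beta)
                          for j in range(n)) for i in range(n)])
    return np.argsort(delta)  # node indices sorted from most to least important
```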
The corresponding nodes are 34, 1, 33, 3, 2, 32, 24, 4, 9, 14 (in that order). Spearman’s rank correlation coefficient [24] between the selected scores s_i and the corresponding δ_i is reported in Table 1 and shows the relevance of our representations. Due to lack of space, we report in the supp. material similar experiments on a larger hierarchical dataset [9] that describes co-authorship from papers published at NIPS from 1988 to 2003.

6 Conclusion

We have introduced ultrahyperbolic representations. Our representations lie on a pseudo-Riemannian manifold of constant nonzero curvature which generalizes hyperbolic and spherical geometries and includes them as submanifolds. Any relationship described in those geometries can then be described with our representations, which are more flexible. We have introduced new optimization tools and experimentally shown that our representations can extract hierarchies in graphs that contain cycles.

Broader Impact

We introduce a novel way of representing relationships between data points by considering the geometry of non-Riemannian manifolds of constant nonzero curvature. The relationships between data points are described by a dissimilarity function that we introduce and that exploits the structure of the manifold. It is more flexible than the distance metric used in hyperbolic and spherical geometries often used in machine learning and computer vision. Nonetheless, since the problems involving our representations are not straightforward to optimize, we propose novel optimization algorithms that can potentially benefit the machine learning, computer vision and natural language processing communities. Indeed, our method is application agnostic and could extend existing frameworks.

Our contribution is mainly theoretical, but we have included one practical application. Similarly to hyperbolic representations that are popular for representing tree-like data, we have shown that our representations are well adapted to the more general case of hierarchical graphs with cycles. These graphs appear in many different fields of research such as medicine, molecular biology and the social sciences. For example, an ultrahyperbolic representation of proteins might assist in understanding their complicated folding mechanisms. Moreover, these representations could assist in analyzing features of social media such as discovering new trends and leading "connectors". The impact of community detection for commercial or political advertising is already known in social networking services. We foresee that our method will have many more graph-based practical applications.

We know of very few applications outside of general relativity that use pseudo-Riemannian geometry. We hope that our research will stimulate other applications in machine learning and related fields. Finally, although we have introduced a novel descent direction for our optimization algorithm, future research could study and improve its rate of convergence.

Acknowledgments and Disclosure of Funding

We thank Jonah Philion, Guojun Zhang and the anonymous reviewers for helpful feedback on early versions of this manuscript. This article was entirely funded by NVIDIA corporation. Marc Law and Jos Stam completed this work from home during the COVID-19 pandemic.
Summary and Contributions: The authors propose a new representation for structured data: the ultrahyperbolic representation. It combines spherical and hyperbolic geometries by "stacking" them. Several expressions are given for this space, including geodesics, exponential/logarithm maps and a dissimilarity function. Optimization frameworks are then proposed to optimize a given cost function in that space. The optimization deserves specific attention, since tangent vectors can have negative squared norms, and the authors propose a novel descent-direction scheme. Experiments on the karate club dataset (plus the NIPS dataset in the supplementary) are then given.

AFTER REBUTTAL: I thank the authors for their response. I still think that a deeper investigation of the conditions/cases in which the geometry can be useful is missing, and after reading the rebuttal, my concerns on this part remain. Nevertheless, I believe that the theoretical developments of the paper are interesting and that the paper may be published at NeurIPS.

Strengths: The paper is theoretical and introduces the ultrahyperbolic geometry, which combines both spherical and hyperbolic geometries. Tools to optimize in this geometry are then given, paving the way for new applications in machine learning. The experiments on the karate dataset show that the geometry can be useful when graphs contain cycles, which is a typical scenario in which hyperbolic geometry fails (as it is supposed to represent tree-like data). As such, the work is pioneering in the field of hyperbolic geometry for machine learning; the claims are sound.

Weaknesses: The main weaknesses of the paper are the following: (i) it is unclear in which cases this geometry can be useful "in real applications". In the experiments, the authors state that it better embeds graphs that contain cycles: a deeper analysis of this statement should be provided, together with all other relevant cases. If I have a problem with some tree- or graph-like data, in which case should I consider the ultrahyperbolic geometry rather than the hyperbolic one? (ii) Regarding the experimental evaluation, several questions are also raised. The authors propose to use the capacity of the nodes to learn the representations, as it is difficult to get ancestors in the presence of cycles. How does the ultrahyperbolic geometry (with q > 1) behave in "classical" scenarios in which the hyperbolic assumption makes sense? How should the appropriate value of q be chosen? To what phenomenon does it relate?
TxM is the tangent space ofM at x and we write tangent vectors ξ ∈ TxM in boldface Greek fonts. Rd is the (flat) d-dimensional Euclidean space, it is equipped with the (positive definite) dot product denoted by 〈·, ·〉 and defined as 〈x,y〉 = x>y. The `2-norm of x is ‖x‖ = √ 〈x,x〉. Rd∗ = Rd\{0} is the Euclidean space with the origin removed. Pseudo-Riemannian manifolds: A smooth manifoldM is pseudo-Riemannian (also called semiRiemannian [21]) if it is equipped with a pseudo-Riemannian metric tensor (named “metric” for short in differential geometry). The pseudo-Riemannian metric gx : TxM× TxM→ R at some point x ∈M is a nondegenerate symmetric bilinear form. Nondegeneracy means that if for a given ξ ∈ TxM and for all ζ ∈ TxM we have gx(ξ, ζ) = 0, then ξ = 0. If the metric is also positive definite (i.e. ∀ξ ∈ TxM, gx(ξ, ξ) > 0 iff ξ 6= 0), then it is Riemannian. Riemannian geometry is a special case of pseudo-Riemannian geometry where the metric is positive definite. In general, this is not the case and non-Riemannian manifolds distinguish themselves by having some non-vanishing tangent vectors ξ 6= 0 that satisfy gx(ξ, ξ) ≤ 0. We refer the reader to [2, 21, 28] for details. Pseudo-hyperboloids generalize spherical and hyperbolic manifolds to the class of pseudoRiemannian manifolds. Let us note d = p+ q+ 1 ∈ N the dimensionality of some pseudo-Euclidean space where each vector is written x = (x0, x1, · · · , xq+p)>. That space is denoted by Rp,q+1 when it is equipped with the following scalar product (i.e. nondegenerate symmetric bilinear form [21]): ∀a = (a0, · · · , aq+p)> , b = (b0, · · · , bq+p)> , 〈a,b〉q = − q∑ i=0 aibi+ p+q∑ j=q+1 ajbj = a >Gb, (1) where G = G−1 = Iq+1,p is the d× d diagonal matrix with the first q+ 1 diagonal elements equal to −1 and the remaining p equal to 1. Since Rp,q+1 is a vector space, we can identify the tangent space to the space itself by means of the natural isomorphism TxRp,q+1 ≈ Rp,q+1. Using the terminology of special relativity, Rp,q+1 has q + 1 time dimensions and p space dimensions. A pseudo-hyperboloid is the following submanifold of codimension one (i.e. hypersurface) in Rp,q+1: Qp,qβ = { x = (x0, x1, · · · , xp+q)> ∈ Rp,q+1 : ‖x‖2q = β } , (2) where β ∈ R∗ is a nonzero real number and the function ‖ · ‖2q given by ‖x‖2q = 〈x,x〉q is the associated quadratic form of the scalar product. It is equivalent to work with eitherQp,qβ orQ q+1,p−1 −β as they are interchangeable via an anti-isometry (see supp. material). For instance, the unit q-sphere Sq = { x ∈ Rq+1 : ‖x‖ = 1 } is anti-isometric to Q0,q−1 which is then spherical. In the literature, the set Qp,qβ is called a “pseudo-sphere” when β > 0 and a “pseudo-hyperboloid” when β < 0. In the rest of the paper, we only consider the pseudo-hyperbolic case (i.e. β < 0). Moreover, for any β < 0, Qp,qβ is homothetic to Q p,q −1, the value of β can then be considered to be −1. We can obtain the spherical and hyperbolic geometries by constraining all the elements of the space dimensions of a pseudo-hyperboloid to be zero or constraining all the elements of the time dimensions except one to be zero, respectively. Pseudo-hyperboloids then generalize spheres and hyperboloids. The pseudo-hyperboloids that we consider in this paper are hard to visualize as they live in ambient spaces with dimension higher than 3. In Fig. 1, we show iso-surfaces of a projection of the 3-dimensional pseudo-hyperboloid Q2,1−1 (embedded in R2,2) into R3 along its first time dimension. 
Metric tensor and tangent space: The metric tensor at x ∈ Qp,qβ is gx(·, ·) = 〈·, ·〉q where gx : TxQp,qβ × TxQ p,q β → R. By using the isomorphism TxRp,q+1 ≈ Rp,q+1 mentioned above, the tangent space of Qp,qβ at x can be defined as TxQ p,q β = { ξ ∈ Rp,q+1 : 〈x, ξ〉q = 0 } for all β 6= 0. Finally, the orthogonal projection of an arbitrary d-dimensional vector z onto TxQp,qβ is: Πx(z) = z− 〈z,x〉q 〈x,x〉q x. (3) 3 Measuring Dissimilarity on Pseudo-Hyperboloids This section introduces the differential geometry tools necessary to quantify dissimilarities/distances between points on Qp,qβ . Measuring dissimilarity is an important task in machine learning and has many applications (e.g. in metric learning [29]). Intrinsic geometry: The intrinsic geometry of the hypersurface Qp,qβ embedded in Rp,q+1 (i.e. the geometry perceived by the inhabitants of Qp,qβ [21]) derives solely from its metric tensor applied to tangent vectors to Qp,qβ . For instance, it can be used to measure the arc length of a tangent vector joining two points along a geodesic and define their geodesic distance. Before considering geodesic distances, we consider extrinsic distances (i.e. distances in the ambient space Rp,q+1). Since Rp,q+1 is isomorphic to its tangent space, tangent vectors to Rp,q+1 are naturally identified with points. Using the quadratic form of Eq. (1), the extrinsic distance between two points a,b ∈ Qp,qβ is: dq(a,b) = √ |‖a− b‖2q| = √ |‖a‖2q + ‖b‖2q − 2〈a,b〉q| = √ |2β − 2〈a,b〉q|. (4) This distance is a good proxy for the geodesic distance dγ(·, ·), that we introduce below, if it preserves distance relations: dγ(a,b) < dγ(c,d) iff dq(a,b) < dq(c,d). This relation is satisfied for two special cases of pseudo-hyperboloids for which the geodesic distance is well known: • Spherical manifold (Q0,qβ ): If p = 0, the geodesic distance dγ(a,b) = √ |β| cos−1 ( 〈a,b〉q β ) is called spherical distance. In practice, the cosine similarity 〈·,·〉qβ is often considered instead of dγ(·, ·) since it satisfies dγ(a,b) < dγ(c,d) iff 〈a,b〉q < 〈c,d〉q iff dq(a,b) < dq(c,d). • Hyperbolic manifold (upper sheet of the two-sheet hyperboloid Qp,0β ): If q = 0, the geodesic distance dγ(a,b) = √ |β| cosh−1 ( 〈a,b〉q β ) with a0 > 0 and b0 > 0 is called Poincaré distance [19]. The (extrinsic) Lorentzian distance was shown to be a good proxy in hyperbolic geometry [11]. For the ultrahyperbolic case (i.e. q ≥ 1 and p ≥ 2), the distance relations are not preserved: dγ(a,b) < dγ(c,d) 6⇐⇒ dq(a,b) < dq(c,d). We then need to consider only geodesic distances. This section introduces closed-form expressions for geodesic distances on ultrahyperbolic manifolds. Geodesics: Informally, a geodesic is a curve joining points on a manifoldM that minimizes some “effort” depending on the metric. More precisely, let I ⊆ R be a (maximal) interval containing 0. A geodesic γ : I →M maps a real value t ∈ I to a point on the manifoldM. It is a curve onM defined by its initial point γ(0) = x ∈M and initial tangent vector γ′(0) = ξ ∈ TxM where γ′(t) is the derivative of γ at t. By analogy with physics, t is considered as a time value. Intuitively, one can think of the curve as the trajectory over time of a ball being pushed from a point x at t = 0 with initial velocity ξ and constrained to roll on the manifold. We denote this curve explicitly by γx→ξ(t) unless the dependence is obvious from the context. For this curve to be a geodesic, its acceleration has to be zero: ∀t ∈ I, γ′′(t) = 0. 
This condition is a second-order ordinary differential equation that has a unique solution for a given set of initial conditions [17]. The interval I is said to be maximal if it cannot be extended to a larger interval. In the case of Qp,qβ , we have I = R and I is then maximal. Geodesic of Qp,qβ : As we show in the supp. material, the geodesics of Q p,q β are a combination of the hyperbolic, flat and spherical cases. The nature of the geodesic γx→ξ depends on the sign of 〈ξ, ξ〉q . For all t ∈ R, the geodesic γx→ξ of Qp,qβ with β < 0 is written: γx→ξ(t) = cosh ( t √ |〈ξ,ξ〉q|√ |β| ) x + √ |β|√ |〈ξ,ξ〉q| sinh ( t √ |〈ξ,ξ〉q|√ |β| ) ξ if 〈ξ, ξ〉q > 0 x + tξ if 〈ξ, ξ〉q = 0 cos ( t √ |〈ξ,ξ〉q|√ |β| ) x + √ |β|√ |〈ξ,ξ〉q| sin ( t √ |〈ξ,ξ〉q|√ |β| ) ξ if 〈ξ, ξ〉q < 0 (5) We recall that 〈ξ, ξ〉q = 0 does not imply ξ = 0. The geodesics are an essential ingredient to define a mapping known as the exponential map. See Fig. 2 (left) for a depiction of these three types of geodesics, and Fig. 2 (right) for a depiction of the other quantities introduced in this section. Exponential map: Exponential maps are a way of collecting all of the geodesics of a pseudoRiemannian manifoldM into a unique differentiable mapping. Let Dx ⊆ TxM be the set of tangent vectors ξ such that γx→ξ is defined at least on the interval [0, 1]. This allows us to uniquely define the exponential map expx : Dx →M such that expx(ξ) = γx→ξ(1). The manifoldQp,qβ is geodesically complete, the domain of its exponential map is thenDx = TxQ p,q β . Using Eq. (5) with t = 1, we obtain an exponential map of the entire tangent space to the manifold: ∀ξ ∈ TxQp,qβ , expx(ξ) = γx→ξ(1). (6) We make the important observation that the image of the exponential map does not necessarily cover the entire manifold: not all points on a manifold are connected by a geodesic. This is the case for our pseudo-hyperboloids. Namely, for a given point x ∈ Qp,qβ there exist points y that are not in the image of the exponential map (i.e. there does not exist a tangent vector ξ such that y = expx(ξ)). Logarithm map: We provide a closed-form expression of the logarithm map for pseudo-hyperboloids. Let Ux ⊆ Qp,qβ be some neighborhood of x. The logarithm map logx : Ux → TxQ p,q β is defined as the inverse of the exponential map on Ux (i.e. logx = exp−1x ). We propose: ∀y ∈ Ux, logx(y) = cosh−1( 〈x,y〉q β )√ ( 〈x,y〉q β ) 2−1 ( y − 〈x,y〉qβ x ) if 〈x,y〉q|β| < −1 y − x if 〈x,y〉q|β| = −1 cos−1( 〈x,y〉q β )√ 1−( 〈x,y〉qβ )2 ( y − 〈x,y〉qβ x ) if 〈x,y〉q|β| ∈ (−1, 1) (7) By substituting ξ = logx(y) into Eq. (6), one can verify that our formulas are the inverse of the exponential map. The set Ux = { y ∈ Qp,qβ : 〈x,y〉q < |β| } is called a normal neighborhood of x ∈ Qp,qβ since for all y ∈ Ux, there exists a geodesic from x to y such that logx(y) = γ′x→logx(y)(0). We show in the supp. material that the logarithm map is not defined if 〈x,y〉q ≥ |β|. Proposed dissmilarity: We define our dissimilarity function based on the general notion of arc length and radius function on pseudo-Riemannian manifolds that we recall in the next paragraph (see details in Chapter 5 of [21]). This corresponds to the geodesic distance in the Riemannian case. Let Ux be a normal neighborhood of x ∈ M withM pseudo-Riemannian. The radius function rx : Ux → R is defined as rx(y) = √ |gx (logx(y), logx(y))| where gx is the metric at x. If σx→ξ is the radial geodesic from x to y ∈ Ux (i.e. ξ = logx(y)), then the arc length of σx→ξ equals rx(y). 
We then define the geodesic “distance” between x ∈ Qp,qβ and y ∈ Ux as the arc length of σx→logx(y): dγ(x,y) = √ |‖ logx(y)‖2q| = √ |β| cosh−1 ( 〈x,y〉q β ) if 〈x,y〉q|β| < −1 0 if 〈x,y〉q|β| = −1√ |β| cos−1 ( 〈x,y〉q β ) if 〈x,y〉q|β| ∈ (−1, 1) (8) It is important to note that our “distance” is not a distance metric. However, it satisfies the axioms of a symmetric premetric: (i) dγ(x,y) = dγ(y,x) ≥ 0 and (ii) dγ(x,x) = 0. These conditions are sufficient to quantify the notion of nearness via a ρ-ball centered at x: Bρx = {y : dγ(x,y) < ρ}. In general, topological spaces provide a qualitative (not necessarily quantitative) way to detect “nearness” through the concept of a neighborhood at a point [15]. Something is true “near x” if it is true in the neighborhood of x (e.g. inBρx). Our premetric is similar to metric learning methods [13, 14, 29] that learn a Mahalanobis-like distance pseudo-metric parameterized by a positive semi-definite matrix. Pairs of distinct points can have zero “distance” if the matrix is not positive definite. However, unlike classic metric learning, we can have triplets (x,y, z) that satisfy dγ(x,y) = dγ(x, z) = 0 but dγ(y, z) > 0 (e.g. x = (1, 0, 0, 0)>,y = (1, 1, 1, 0)>, z = (1, 1, 0, 1)> in Q2,1−1). Since the logarithm map is not defined if 〈x,y〉q ≥ |β|, we propose to use the following continuous approximation defined on the whole manifold instead: ∀x ∈ Qp,qβ ,y ∈ Q p,q β , Dγ(x,y) = { dγ(x,y) if 〈x,y〉q ≤ 0√ |β| ( π 2 + 〈x,y〉q |β| ) otherwise (9) To the best of our knowledge, the explicit formulation of the logarithm map for Qp,qβ in Eq. (7) and its corresponding radius function in Eq. (8) to define a dissimilarity function are novel. We have also proposed some linear approximation to evaluate dissimilarity when the logarithm map is not defined but other choices are possible. For instance, when a geodesic does not exist, a standard way in differential geometry to calculate curves is to consider broken geodesics. One might consider instead the dissimilarity dγ(x,−x) + dγ(−x,y) = π √ |β| + dγ(−x,y) if logx(y) is not defined since −x ∈ Qp,qβ and log−x(y) is defined. This interesting problem is left for future research. 4 Ultrahyperbolic Optimization In this section we present optimization frameworks to optimize any differentiable function defined on Qp,qβ . Our goal is to compute descent directions on the ultrahyperbolic manifold. We consider two approaches. In the first approach, we map our representation from Euclidean space to ultrahyperbolic space. This is similar to the approach taken by [11] in hyperbolic space. In the second approach, we optimize using gradients defined directly in pseudo-Riemannian tangent space. We propose a novel descent direction which guarantees the minimization of some cost function. 4.1 Euclidean optimization via a differentiable mapping onto Qp,qβ Our first method maps Euclidean representations that lie in Rd to the pseudo-hyperboloid Qp,qβ , and the chain rule is exploited to perform standard gradient descent. To this end, we construct a differentiable mapping ϕ : Rq+1∗ × Rp → Qp,qβ . The image of a point already on Q p,q β under the mapping ϕ is itself: ∀x ∈ Qp,qβ , ϕ(x) = x. Let Sq = { x ∈ Rq+1 : ‖x‖ = 1 } denote the unit q-sphere. We first introduce the following diffeomorphisms: Theorem 4.1 (Diffeomorphisms). For any β < 0, there is a diffeomorphism ψ : Qp,qβ → Sq × Rp. Let us note x = ( t s ) ∈ Qp,qβ with t ∈ R q+1 ∗ and s ∈ Rp, let us note z = ( u v ) ∈ Sq × Rp where u ∈ Sq and v ∈ Rp. 
The mapping ψ and its inverse ψ−1 are formulated (see proofs in supp. material): ψ(x) = ( 1 ‖t‖t 1√ |β| s ) and ψ−1(z) = √ |β| (√ 1 + ‖v‖2u v ) . (10) With these mappings, any vector x ∈ Rq+1∗ × Rp can be mapped to Qp,qβ via ϕ = ψ−1 ◦ ψ. ϕ is differentiable everywhere except when x0 = · · · = xq = 0, which should never occur in practice. It can therefore be optimized using standard gradient methods. 4.2 Pseudo-Riemannian optimization We now introduce a novel method to optimize any differentiable function f : Qp,qβ → R defined on the pseudo-hyperboloid. As we show below, the (negative of the) pseudo-Riemannian gradient is not a descent direction. We propose a simple and efficient way to calculate a descent direction. Pseudo-Riemannian gradient: Since x ∈ Qp,qβ also lies in the Euclidean ambient space Rd, the function f has a well defined Euclidean gradient∇f(x) = (∂f(x)/∂x0, · · · , ∂f(x)/∂xp+q)> ∈ Rd. The gradient of f in the pseudo-Euclidean ambient space Rp,q+1 is (G−1∇f(x)) = (G∇f(x)) ∈ Rp,q+1. Since Qp,qβ is a submanifold of Rp,q+1, the pseudo-Riemannian gradient Df(x) ∈ TxQ p,q β of f on Qp,qβ is the orthogonal projection of (G∇f(x)) onto TxQ p,q β (see Chapter 4 of [21]): Df(x) = Πx (G∇f(x)) = G∇f(x)− 〈G∇f(x),x〉q 〈x,x〉q x = G∇f(x)− 〈∇f(x),x〉 〈x,x〉q x. (11) This gradient forms the foundation of our descent method optimizer as will be shown in Eq. (13). Iterative optimization: Our goal is to iteratively decrease the value of the function f by following some descent direction. Since Qp,qβ is not a vector space, we do not “follow the descent direction” by adding the descent direction multiplied by a step size as this would result in a new point that does not necessarily lie on Qp,qβ . Instead, to remain on the manifold, we use our exponential map defined in Eq. (6). This is a standard way to optimize on Riemannian manifolds [1]. Given a step size t > 0, one step of descent along a tangent vector ζ ∈ TxQp,qβ is given by: y = expx (tζ) ∈ Q p,q β . (12) Descent direction: We now explain why the negative of the pseudo-Riemannian gradient is not a descent direction. Our explanation extends Chapter 3 of [20] that gives the criteria for a tangent vector ζ to be a descent direction when the domain of the optimized function is a Euclidean space. By using the properties described in Section 3, we know that for all t ∈ R and all ξ ∈ TxQp,qβ , we have the equalities: expx (tξ) = γx→tξ(1) = γx→ξ(t) so we can equivalently fix t to 1 and choose the scale of ξ appropriately. By exploiting Taylor’s first-order approximation, there exists some small enough tangent vector ζ 6= 0 (i.e. with expx(ζ) belonging to a convex neighborhood of x [4, 8]) that satisfies the following conditions: γx→ζ(0) = x ∈ Qp,qβ , γ′x→ζ(0) = ζ ∈ TxQ p,q β , γx→ζ(1) = y ∈ Q p,q β , and the function f ◦ γx→ζ : R→ R can be approximated at t = 1 by: f(y) = f ◦ γx→ζ(1) ' f ◦ γx→ζ(0) + (f ◦ γx→ζ)′(0) = f(x) + 〈Df(x), ζ〉q. (13) where we use the following properties: ∀t, (f ◦ γ)′(t) = df(γ′(t)) = gγ(t) (Df(γ(t)), γ′(t)) (see details in pages 11, 15 and 85 of [21]), df is the differential of f and γ is a geodesic. To be a descent direction at x (i.e. so that f(y) < f(x)), the search direction ζ has to satisfy 〈Df(x), ζ〉q < 0. However, choosing ζ = −ηDf(x), where η > 0 is a step size, might increase the function value if the scalar product 〈·, ·〉q is not positive definite. If p + q ≥ 1, then 〈·, ·〉q is positive definite only if q = 0 (see details in supp. material), and it is negative definite iff p = 0 since 〈·, ·〉q = −〈·, ·〉 in this case. 
A simple solution would be to choose ζ = ±ηDf(x) depending on the sign of 〈Df(x), ζ〉q, but 〈Df(x), ζ〉q might be equal to 0 even if Df(x) 6= 0 if 〈·, ·〉q is indefinite. The optimization algorithm might then be stuck to a level set of f , which is problematic. Algorithm 1 Pseudo-Riemannian optimization on Qp,qβ input: differentiable function f : Qp,qβ → R to be minimized, some initial value of x ∈ Q p,q β 1: while not converge do 2: Calculate∇f(x) . i.e. the Euclidean gradient of f at x in the Euclidean ambient space 3: χ← Πx(GΠx(G∇f(x))) . see Eq. (14) 4: x← expx(−ηχ) . where η > 0 is a step size (e.g. determined with line search) 5: end while Proposed solution: To ensure that ζ ∈ TxQp,qβ is a descent direction, we propose a simple expression that satisfies 〈Df(x), ζ〉q < 0 if Df(x) 6= 0 and 〈Df(x), ζ〉q = 0 otherwise. We propose to formulate ζ = −ηΠx(GDf(x)) ∈ TxQp,qβ , and we define the following tangent vector χ = − 1 ηζ: χ = Πx(GDf(x)) = ∇f(x)− 〈∇f(x),x〉 〈x,x〉q Gx− 〈∇f(x),x〉q 〈x,x〉q x + ‖x‖2〈∇f(x),x〉 〈x,x〉2q x. (14) The tangent vector ζ is a descent direction because 〈Df(x), ζ〉q = −η〈Df(x),χ〉q is nonpositive: 〈Df(x),χ〉q = ‖∇f(x)‖2 − 2 〈∇f(x),x〉〈∇f(x),x〉q 〈x,x〉q + 〈∇f(x),x〉2‖x‖2 〈x,x〉2q (15) = ‖G∇f(x)− 〈∇f(x),x〉 〈x,x〉q x‖2 = ‖Df(x)‖2 ≥ 0. (16) We also have 〈Df(x),χ〉q = ‖Df(x)‖2 = 0 iff Df(x) = 0 (i.e. x is a stationary point). It is worth noting that Df(x) = 0 implies χ = Πx(G0) = 0. Moreover, χ = 0 implies that ‖Df(x)‖2 = 〈Df(x), 0〉q = 0. We then have χ = 0 iff Df(x) = 0. Our proposed algorithm to the minimization problem minx∈Qp,qβ f(x) is illustrated in Algorithm 1. Following generic Riemannian optimization algorithms [1], at each iteration, it first computes the descent direction−χ ∈ TxQp,qβ , then decreases the function by applying the exponential map defined in Eq. (6). It is worth noting that our proposed descent method can be applied to any differentiable function f : Qp,qβ → R, not only to those that exploit the distance introduced in Section 3. Interestingly, our method can also be seen as a preconditioning technique [20] where the descent direction is obtained by preconditioning the pseudo-Riemannian gradient Df(x) with the matrix Px = [ G− 1〈x,x〉q xx > ] ∈ Rd×d. In other words, we have χ = PxDf(x) = Πx(GDf(x)). In the more general setting of pseudo-Riemannian manifolds, another preconditioning technique was proposed in [8]. The method in [8] requires performing a Gram-Schmidt process at each iteration to obtain an (ordered [28]) orthonormal basis of the tangent space at x w.r.t. the induced quadratic form of the manifold. However, the Gram-Schmidt process is unstable and has algorithmic complexity that is cubic in the dimensionality of the tangent space. On the other hand, our method is more stable and its algorithmic complexity is linear in the dimensionality of the tangent space. 5 Experiments We now experimentally validate our proposed optimization methods and the effectiveness of our dissimilarity function. Our main experimental results can be summarized as follows: • Both optimizers introduced in Section 4 decrease some objective function f : Qp,qβ → R. While both optimizers manage to learn high-dimensional representations that satisfy the problem-dependent training constraints, only the pseudo-Riemannian optimizer satisfies all the constraints in lowerdimensional spaces. This is because it exploits the underlying metric of the manifold. • Hyperbolic representations are popular in machine learning as they are well suited to represent hierarchical trees [10, 18, 19]. 
On the other hand, hierarchical datasets whose graph contains cycles cannot be represented using trees. Therefore, we propose to represent such graphs using our ultrahyperbolic representations. An important example are community graphs such as Zachary’s karate club [30] that contain leaders. Because our ultrahyperbolic representations are more flexible than hyperbolic representations, we believe that our representations are better suited for these non tree-like hierarchical structures. Graph: Our ultrahyperbolic representations describe graph-structured datasets. Each dataset is an undirected weighted graph G = (V,E) which has node-set V = {vi}ni=1 and edge-set E = {ek}mk=1. Each edge ek is weighted by an arbitrary capacity ck ∈ R+ that models the strength of the relationship between nodes. The higher the capacity ck, the stronger the relationship between the nodes connected by ek. Learned representations: Our problem formulation is inspired by hyperbolic representation learning approaches [18, 19] where the nodes of a tree (i.e. graph without cycles) are represented in hyperbolic space. The hierarchical structure of the tree is then reflected by the order of distances between its nodes. More precisely, a node representation is learned so that each node is closer to its descendants and ancestors in the tree (w.r.t. the hyperbolic distance) than to any other node. For example, in a hierarchy of words, ancestors and descendants are hypernyms and hyponyms, respectively. Our goal is to learn a set of n points x1, · · · ,xn ∈ Qp,qβ (embeddings) from a given graph G. The presence of cycles in the graph makes it difficult to determine ancestors and descendants. For this reason, we introduce for each pair of nodes (vi, vj) = ek ∈ E, the set of “weaker” pairs that have lower capacity: W(ek) = {el : ck > cl} ∪ {(va, vb) : (va, vb) /∈ E}. Our goal is to learn representations such that pairs (vi, vj) with higher capacity have their representations (xi,xj) closer to each other than weaker pairs. Following [18], we formulate our problem as: min x1,··· ,xn∈Qp,qβ ∑ (vi,vj) = ek∈E − log exp (−d(xi,xj)/τ)∑ (va,vb)∈ W(ek)∪{ek} exp (−d(xa,xb)/τ) (17) where d is the chosen dissimilarity function (e.g. Dγ(·, ·) defined in Eq. (9)) and τ > 0 is a fixed temperature parameter. The formulation of Eq. (17) is classic in the metric learning literature [3, 12, 27] and corresponds to optimizing some order on the learned distances via a softmax function. Implementation details: We coded our approach in PyTorch [22] that automatically calculates the Euclidean gradient ∇f(xi). Initially, a random set of vectors {zi}ni=1 is generated close to the positive pole ( √ |β|, 0, · · · , 0) ∈ Qp,qβ with every coordinate perturbed uniformly with a random value in the interval [−ε, ε] where ε > 0 is chosen small enough so that ‖zi‖2q < 0. We set β = −1, ε = 0.1 and τ = 10−2. Initial embeddings are generated as follows: ∀i,xi = √ |β| zi√ |‖zi‖2q| ∈ Qp,qβ . Zachary’s karate club dataset [30] is a social network graph of a karate club comprised of n = 34 nodes, each representing a member of the karate club. The club was split due to a conflict between instructor "Mr. Hi" (node v1) and administrator "John A" (node vn). The remaining members now have to decide whether to join the new club created by v1 or not. In [30], Zachary defines a matrix of relative strengths of the friendships in the karate club called C ∈ {0, 1, · · · , 7}n×n and that depends on various criteria. 
We note that the matrix is not symmetric and has 7 different pairs (vi, vj) for which Cij 6= Cji. Since our dissimilarity function is symmetric, we consider the symmetric matrix S = C + C> instead. The value of Sij is the capacity/weight assigned to the edge joining vi and vj , and there is no edge between vi and vj if Sij = 0. Fig. 3 (left) illustrates the 34 nodes of the dataset, an edge joining the nodes vi and vj is drawn iff Sij 6= 0. The level of a node in the hierarchy corresponds approximately to its height in the figure. Optimizers: We validate that our optimizers introduced in Section 4 decrease the cost function. First, we consider the simple unweighted case where every edge weight is 1. For each edge ek ∈ E, W(ek) is then the set of pairs of nodes that are not connected. In other words, Eq. (17) learns node representations that have the property that every connected pair of nodes has smaller distance than non-connected pairs. We use this condition as a stopping criterion of our algorithm. Fig. 3 (right) illustrates the loss values of Eq. (17) as a function of the number of iterations with the Euclidean gradient descent (Section 4.1) and our pseudo-Riemannian optimizer (introduced in Section 4.2). In each test, we vary the number of time dimensions q + 1 while the ambient space is of fixed dimensionality d = p+ q + 1 = 10. We omit the case q = 0 since it corresponds to the (hyperbolic) Riemannian case already considered in [11, 19]. Both optimizers decrease the function and manage to satisfy all the expected distance relations. We note that when we use −Df(x) instead of −χ as a search direction, the algorithm does not converge. Moreover, our pseudo-Riemannian optimizer manages to learn representations that satisfy all the constraints for low-dimensional manifolds such as Q4,1−1 and Q 4,2 −1, while the optimizer introduced in Section 4.1 does not. Consequently, we only use the pseudo-Riemannian optimizer in the following results. Hierarchy extraction: To quantitatively evaluate our approach, we apply it to the problem of predicting the high-level nodes in the hierarchy from the weighted matrix S given as supervision. We consider the challenging low-dimensional setting where all the learned representations lie on a 4-dimensional manifold (i.e. p+ q + 1 = 5). Hyperbolic distances are known to grow exponentially as we get further from the origin. Therefore, the sum of distances δi = ∑n j=1 d(xi,xj) of a node vi with all other nodes is a good indication of importance. Intuitively, high-level nodes will be closer to most nodes than low-level nodes. We then sort the scores δ1, · · · , δn in ascending order and report the ranks of the two leaders v1 or vn (in no particular order) in the first two rows of Table 1 averaged over 5 different initializations/runs. Leaders tend to have a smaller δi score with ultrahyperbolic distances than with Euclidean, hyperbolic or spherical distances. Instead of using δi for hyperbolic representations, the importance of a node vi can be evaluated by using the Euclidean norm of its embedding xi as proxy [11, 18, 19]. This is because high-level nodes of a tree in hyperbolic space are usually closer to the origin than low-level nodes. Not surprisingly, this proxy leads to worse performance (8.6± 2.3 and 18.6± 4.9) as the relationships are not that of a tree. Since hierarchy levels are hard to compare for low-level nodes, we select the 10 (or 5) most influential members based on the score si = ∑n j=1 Sij . 
Since hierarchy levels are hard to compare for low-level nodes, we select the 10 (or 5) most influential members based on the score $s_i = \sum_{j=1}^n S_{ij}$. The corresponding nodes are 34, 1, 33, 3, 2, 32, 24, 4, 9, 14 (in that order). Spearman's rank correlation coefficient [24] between the selected scores $s_i$ and the corresponding $\delta_i$ is reported in Table 1 and shows the relevance of our representations. Due to lack of space, we report in the supp. material similar experiments on a larger hierarchical dataset [9] that describes co-authorship of papers published at NIPS from 1988 to 2003.

6 Conclusion

We have introduced ultrahyperbolic representations. Our representations lie on a pseudo-Riemannian manifold of constant nonzero curvature which generalizes hyperbolic and spherical geometries and includes them as submanifolds. Any relationship described in those geometries can then be described with our representations, which are more flexible. We have introduced new optimization tools and experimentally shown that our representations can extract hierarchies in graphs that contain cycles.

Broader Impact

We introduce a novel way of representing relationships between data points by considering the geometry of non-Riemannian manifolds of constant nonzero curvature. The relationships between data points are described by a dissimilarity function that we introduce and that exploits the structure of the manifold. It is more flexible than the distance metrics used in the hyperbolic and spherical geometries often employed in machine learning and computer vision. Nonetheless, since problems involving our representations are not straightforward to optimize, we propose novel optimization algorithms that can potentially benefit the machine learning, computer vision and natural language processing communities. Indeed, our method is application agnostic and could extend existing frameworks.

Our contribution is mainly theoretical, but we have included one practical application. Similarly to hyperbolic representations, which are popular for representing tree-like data, we have shown that our representations are well adapted to the more general case of hierarchical graphs with cycles. Such graphs appear in many different fields of research, such as medicine, molecular biology and the social sciences. For example, an ultrahyperbolic representation of proteins might assist in understanding their complicated folding mechanisms. Moreover, these representations could assist in analyzing features of social media, such as discovering new trends and leading "connectors". The impact of community detection for commercial or political advertising is already known in social networking services. We foresee that our method will have many more graph-based practical applications.

We know of very few applications outside of general relativity that use pseudo-Riemannian geometry. We hope that our research will stimulate other applications in machine learning and related fields. Finally, although we have introduced a novel descent direction for our optimization algorithm, future research could study and improve its rate of convergence.

Acknowledgments and Disclosure of Funding

We thank Jonah Philion, Guojun Zhang and the anonymous reviewers for helpful feedback on early versions of this manuscript. This article was entirely funded by NVIDIA corporation. Marc Law and Jos Stam completed this work while working from home during the COVID-19 pandemic.
1. What is the focus and contribution of the paper regarding pseudo-Riemannian geometry? 2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its application to graph-valued data? 4. Do you have any concerns about the decoupling of theory and application in the paper? 5. Can the authors provide more explanations or grounds for their proposal of using the considered space for representing graphs?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper considers a pseudo-Riemannian geometry that extend the traditional hyperbolic and spherical space. Some theory regarding the geometry and optimization is provided. An application on graph-valued data is provided. == After the rebuttal == I have lowered my score for this paper after the rebuttal. My main concern with the paper is that the theory is too decoupled from the application. I cannot tell a reason why the developed representation should be applicable to graph-data. I had given the authors the benefit of the doubt hoping for an explanation in the rebuttal, but the rebuttal did not have a (to this reviewer) convincing argument why the representation should work well for graphs. Strengths The key contribution of the paper is theoretical. As far as I can tell, the paper provides two novel theoretical results: *) Closed-form expression for geodesics in the considered space is provided. Derivations seem to closely match corresponding results for spheres and hyperbolic space. This is to be expected; and the result remain novel. *) From my perspective the most interesting result is the derivation of a non-trivial descent direction over the considered space. Weaknesses The key contribution of the present paper is theoretical. The main issue is that the theory seems largely to have limited practical use. The paper propose to use the considered space for representing graphs, but this appears to largely be ad hoc with little to no grounding in first principles. The argument is that since trees are known to be well-represented in hyperbolic spaces, then graphs should naturally fit into a space that extend hyperbolic space. From what I can tell, this conclusion has no grounding in mathematics. If I have missed something, I would appreciate if the authors provide more detail in the rebuttal.
NIPS
Title Ultrahyperbolic Representation Learning

Abstract In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space, which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the nondegenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient-based optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations.

1 Introduction

In most machine learning applications, data representations lie on a smooth manifold [16] and the training procedure is optimized with an iterative algorithm such as line search or trust region methods [20]. In most cases, the smooth manifold is Riemannian, which means that it is equipped with a positive definite metric. Due to the positive definiteness of the metric, the negative of the (Riemannian) gradient is a descent direction that can be exploited to iteratively minimize some objective function [1]. The choice of metric on the Riemannian manifold determines how relations between points are quantified.

The most common Riemannian manifold is the flat Euclidean space, which has constant zero curvature and in which the distances between points are measured along straight lines. An intuitive example of a non-Euclidean Riemannian manifold is the spherical model (i.e., representations lie on a sphere), which has constant positive curvature and is used for instance in face recognition [25, 26]. On the sphere, geodesic distances are a function of angles. Similarly, Riemannian spaces of constant negative curvature are called hyperbolic [23]. Such spaces were shown by Gromov to be well suited to represent tree-like structures [10]. The machine learning community has adopted these spaces to learn tree-like graphs [5] and hierarchical data structures [11, 18, 19], and also to compute means in tree-like shapes [6, 7].

In this paper, we consider a class of pseudo-Riemannian manifolds of constant nonzero curvature [28] not previously considered in machine learning. These manifolds not only generalize the hyperbolic and spherical geometries mentioned above, but also contain hyperbolic and spherical submanifolds and can therefore describe relationships specific to those geometries. The difference is that we consider the larger class of pseudo-Riemannian manifolds where the considered nondegenerate metric tensor need not be positive definite. Optimizing a cost function on our non-flat ultrahyperbolic space requires a descent direction method that follows a path along the curved manifold. We achieve this by employing tools from differential geometry such as geodesics and exponential maps. The theoretical contributions in this paper are two-fold: (1) explicit methods to calculate dissimilarities and (2) general optimization tools on pseudo-Riemannian manifolds of constant nonzero curvature.

2 Pseudo-Hyperboloids

Notation: We denote points on a smooth manifold $\mathcal{M}$ [16] by boldface Roman characters $\mathbf{x} \in \mathcal{M}$.
$T_{\mathbf{x}}\mathcal{M}$ is the tangent space of $\mathcal{M}$ at $\mathbf{x}$, and we write tangent vectors $\boldsymbol{\xi} \in T_{\mathbf{x}}\mathcal{M}$ in boldface Greek fonts. $\mathbb{R}^d$ is the (flat) $d$-dimensional Euclidean space; it is equipped with the (positive definite) dot product denoted by $\langle \cdot, \cdot \rangle$ and defined as $\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^\top \mathbf{y}$. The $\ell_2$-norm of $\mathbf{x}$ is $\|\mathbf{x}\| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}$. $\mathbb{R}^d_* = \mathbb{R}^d \setminus \{0\}$ is the Euclidean space with the origin removed.

Pseudo-Riemannian manifolds: A smooth manifold $\mathcal{M}$ is pseudo-Riemannian (also called semi-Riemannian [21]) if it is equipped with a pseudo-Riemannian metric tensor (named "metric" for short in differential geometry). The pseudo-Riemannian metric $g_{\mathbf{x}} : T_{\mathbf{x}}\mathcal{M} \times T_{\mathbf{x}}\mathcal{M} \to \mathbb{R}$ at some point $\mathbf{x} \in \mathcal{M}$ is a nondegenerate symmetric bilinear form. Nondegeneracy means that if for a given $\boldsymbol{\xi} \in T_{\mathbf{x}}\mathcal{M}$ and for all $\boldsymbol{\zeta} \in T_{\mathbf{x}}\mathcal{M}$ we have $g_{\mathbf{x}}(\boldsymbol{\xi}, \boldsymbol{\zeta}) = 0$, then $\boldsymbol{\xi} = 0$. If the metric is also positive definite (i.e., $\forall \boldsymbol{\xi} \in T_{\mathbf{x}}\mathcal{M}$, $g_{\mathbf{x}}(\boldsymbol{\xi}, \boldsymbol{\xi}) > 0$ iff $\boldsymbol{\xi} \neq 0$), then it is Riemannian. Riemannian geometry is a special case of pseudo-Riemannian geometry where the metric is positive definite. In general this is not the case, and non-Riemannian manifolds distinguish themselves by having some non-vanishing tangent vectors $\boldsymbol{\xi} \neq 0$ that satisfy $g_{\mathbf{x}}(\boldsymbol{\xi}, \boldsymbol{\xi}) \le 0$. We refer the reader to [2, 21, 28] for details.

Pseudo-hyperboloids generalize spherical and hyperbolic manifolds to the class of pseudo-Riemannian manifolds. Let us denote by $d = p + q + 1 \in \mathbb{N}$ the dimensionality of some pseudo-Euclidean space where each vector is written $\mathbf{x} = (x_0, x_1, \cdots, x_{q+p})^\top$. That space is denoted by $\mathbb{R}^{p,q+1}$ when it is equipped with the following scalar product (i.e., nondegenerate symmetric bilinear form [21]):

$$\forall \mathbf{a} = (a_0, \cdots, a_{q+p})^\top,\ \mathbf{b} = (b_0, \cdots, b_{q+p})^\top, \quad \langle \mathbf{a}, \mathbf{b} \rangle_q = -\sum_{i=0}^{q} a_i b_i + \sum_{j=q+1}^{p+q} a_j b_j = \mathbf{a}^\top G \mathbf{b}, \qquad (1)$$

where $G = G^{-1} = I_{q+1,p}$ is the $d \times d$ diagonal matrix whose first $q + 1$ diagonal elements equal $-1$ and whose remaining $p$ elements equal $1$. Since $\mathbb{R}^{p,q+1}$ is a vector space, we can identify the tangent space with the space itself by means of the natural isomorphism $T_{\mathbf{x}}\mathbb{R}^{p,q+1} \approx \mathbb{R}^{p,q+1}$. Using the terminology of special relativity, $\mathbb{R}^{p,q+1}$ has $q + 1$ time dimensions and $p$ space dimensions. A pseudo-hyperboloid is the following submanifold of codimension one (i.e., hypersurface) in $\mathbb{R}^{p,q+1}$:

$$Q^{p,q}_{\beta} = \left\{ \mathbf{x} = (x_0, x_1, \cdots, x_{p+q})^\top \in \mathbb{R}^{p,q+1} : \|\mathbf{x}\|_q^2 = \beta \right\}, \qquad (2)$$

where $\beta \in \mathbb{R}_*$ is a nonzero real number and the function $\|\cdot\|_q^2$ given by $\|\mathbf{x}\|_q^2 = \langle \mathbf{x}, \mathbf{x} \rangle_q$ is the quadratic form associated with the scalar product. It is equivalent to work with either $Q^{p,q}_{\beta}$ or $Q^{q+1,p-1}_{-\beta}$, as they are interchangeable via an anti-isometry (see supp. material). For instance, the unit $q$-sphere $S^q = \{\mathbf{x} \in \mathbb{R}^{q+1} : \|\mathbf{x}\| = 1\}$ is anti-isometric to $Q^{0,q}_{-1}$, which is then spherical. In the literature, the set $Q^{p,q}_{\beta}$ is called a "pseudo-sphere" when $\beta > 0$ and a "pseudo-hyperboloid" when $\beta < 0$. In the rest of the paper, we only consider the pseudo-hyperbolic case (i.e., $\beta < 0$). Moreover, for any $\beta < 0$, $Q^{p,q}_{\beta}$ is homothetic to $Q^{p,q}_{-1}$, so the value of $\beta$ can be taken to be $-1$.

We can obtain the spherical and hyperbolic geometries by constraining all the elements of the space dimensions of a pseudo-hyperboloid to be zero, or by constraining all the elements of the time dimensions except one to be zero, respectively. Pseudo-hyperboloids therefore generalize spheres and hyperboloids. The pseudo-hyperboloids that we consider in this paper are hard to visualize as they live in ambient spaces with dimension higher than 3. In Fig. 1, we show iso-surfaces of a projection of the 3-dimensional pseudo-hyperboloid $Q^{2,1}_{-1}$ (embedded in $\mathbb{R}^{2,2}$) into $\mathbb{R}^3$ along its first time dimension.
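For concreteness, the scalar product of Eq. (1) and the membership constraint of Eq. (2) can be written in a few lines of PyTorch; this is a minimal sketch with names of our choosing, and the later sketches in this section reuse `inner_q`.

```python
import torch

def inner_q(a, b, q):
    """Scalar product <a, b>_q of Eq. (1): negative on the q+1 time dims, positive on the p space dims."""
    return -(a[..., :q + 1] * b[..., :q + 1]).sum(-1) + (a[..., q + 1:] * b[..., q + 1:]).sum(-1)

def on_pseudo_hyperboloid(x, q, beta=-1.0, atol=1e-5):
    """Eq. (2): x lies on Q^{p,q}_beta iff ||x||_q^2 = <x, x>_q equals beta."""
    norm_q_sq = inner_q(x, x, q)
    return torch.allclose(norm_q_sq, torch.full_like(norm_q_sq, beta), atol=atol)
```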
Metric tensor and tangent space: The metric tensor at $\mathbf{x} \in Q^{p,q}_{\beta}$ is $g_{\mathbf{x}}(\cdot, \cdot) = \langle \cdot, \cdot \rangle_q$, where $g_{\mathbf{x}} : T_{\mathbf{x}}Q^{p,q}_{\beta} \times T_{\mathbf{x}}Q^{p,q}_{\beta} \to \mathbb{R}$. By using the isomorphism $T_{\mathbf{x}}\mathbb{R}^{p,q+1} \approx \mathbb{R}^{p,q+1}$ mentioned above, the tangent space of $Q^{p,q}_{\beta}$ at $\mathbf{x}$ can be defined as $T_{\mathbf{x}}Q^{p,q}_{\beta} = \{\boldsymbol{\xi} \in \mathbb{R}^{p,q+1} : \langle \mathbf{x}, \boldsymbol{\xi} \rangle_q = 0\}$ for all $\beta \neq 0$. Finally, the orthogonal projection of an arbitrary $d$-dimensional vector $\mathbf{z}$ onto $T_{\mathbf{x}}Q^{p,q}_{\beta}$ is:

$$\Pi_{\mathbf{x}}(\mathbf{z}) = \mathbf{z} - \frac{\langle \mathbf{z}, \mathbf{x} \rangle_q}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x}. \qquad (3)$$

3 Measuring Dissimilarity on Pseudo-Hyperboloids

This section introduces the differential geometry tools necessary to quantify dissimilarities/distances between points on $Q^{p,q}_{\beta}$. Measuring dissimilarity is an important task in machine learning and has many applications (e.g., in metric learning [29]).

Intrinsic geometry: The intrinsic geometry of the hypersurface $Q^{p,q}_{\beta}$ embedded in $\mathbb{R}^{p,q+1}$ (i.e., the geometry perceived by the inhabitants of $Q^{p,q}_{\beta}$ [21]) derives solely from its metric tensor applied to tangent vectors of $Q^{p,q}_{\beta}$. For instance, it can be used to measure the arc length of a tangent vector joining two points along a geodesic, and thereby define their geodesic distance. Before considering geodesic distances, we consider extrinsic distances (i.e., distances in the ambient space $\mathbb{R}^{p,q+1}$). Since $\mathbb{R}^{p,q+1}$ is isomorphic to its tangent space, tangent vectors of $\mathbb{R}^{p,q+1}$ are naturally identified with points. Using the quadratic form of Eq. (1), the extrinsic distance between two points $\mathbf{a}, \mathbf{b} \in Q^{p,q}_{\beta}$ is:

$$d_q(\mathbf{a}, \mathbf{b}) = \sqrt{|\,\|\mathbf{a} - \mathbf{b}\|_q^2\,|} = \sqrt{|\,\|\mathbf{a}\|_q^2 + \|\mathbf{b}\|_q^2 - 2\langle \mathbf{a}, \mathbf{b} \rangle_q\,|} = \sqrt{|2\beta - 2\langle \mathbf{a}, \mathbf{b} \rangle_q|}. \qquad (4)$$

This distance is a good proxy for the geodesic distance $d_\gamma(\cdot, \cdot)$, which we introduce below, if it preserves distance relations: $d_\gamma(\mathbf{a}, \mathbf{b}) < d_\gamma(\mathbf{c}, \mathbf{d})$ iff $d_q(\mathbf{a}, \mathbf{b}) < d_q(\mathbf{c}, \mathbf{d})$. This relation is satisfied for two special cases of pseudo-hyperboloids for which the geodesic distance is well known:

• Spherical manifold ($Q^{0,q}_{\beta}$): If $p = 0$, the geodesic distance $d_\gamma(\mathbf{a}, \mathbf{b}) = \sqrt{|\beta|}\cos^{-1}\left(\frac{\langle \mathbf{a}, \mathbf{b} \rangle_q}{\beta}\right)$ is called the spherical distance. In practice, the cosine similarity $\frac{\langle \cdot, \cdot \rangle_q}{\beta}$ is often considered instead of $d_\gamma(\cdot, \cdot)$ since it satisfies $d_\gamma(\mathbf{a}, \mathbf{b}) < d_\gamma(\mathbf{c}, \mathbf{d})$ iff $\langle \mathbf{a}, \mathbf{b} \rangle_q < \langle \mathbf{c}, \mathbf{d} \rangle_q$ iff $d_q(\mathbf{a}, \mathbf{b}) < d_q(\mathbf{c}, \mathbf{d})$.

• Hyperbolic manifold (upper sheet of the two-sheet hyperboloid $Q^{p,0}_{\beta}$): If $q = 0$, the geodesic distance $d_\gamma(\mathbf{a}, \mathbf{b}) = \sqrt{|\beta|}\cosh^{-1}\left(\frac{\langle \mathbf{a}, \mathbf{b} \rangle_q}{\beta}\right)$ with $a_0 > 0$ and $b_0 > 0$ is called the Poincaré distance [19]. The (extrinsic) Lorentzian distance was shown to be a good proxy in hyperbolic geometry [11].

For the ultrahyperbolic case (i.e., $q \ge 1$ and $p \ge 2$), the distance relations are not preserved: $d_\gamma(\mathbf{a}, \mathbf{b}) < d_\gamma(\mathbf{c}, \mathbf{d}) \not\Leftrightarrow d_q(\mathbf{a}, \mathbf{b}) < d_q(\mathbf{c}, \mathbf{d})$. We therefore need to consider geodesic distances only. This section introduces closed-form expressions for geodesic distances on ultrahyperbolic manifolds.
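The projection of Eq. (3) and the extrinsic distance of Eq. (4) translate directly into code; here is a minimal sketch reusing `inner_q` from the sketch above (function names are ours).

```python
def project_tangent(x, z, q):
    """Eq. (3): orthogonal projection of an ambient vector z onto T_x Q^{p,q}_beta."""
    return z - (inner_q(z, x, q) / inner_q(x, x, q)) * x

def extrinsic_dist(a, b, q):
    """Eq. (4): d_q(a, b) = sqrt(| ||a - b||_q^2 |)."""
    return inner_q(a - b, a - b, q).abs().sqrt()
```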
Geodesics: Informally, a geodesic is a curve joining points on a manifold $\mathcal{M}$ that minimizes some "effort" depending on the metric. More precisely, let $I \subseteq \mathbb{R}$ be a (maximal) interval containing 0. A geodesic $\gamma : I \to \mathcal{M}$ maps a real value $t \in I$ to a point on the manifold $\mathcal{M}$. It is a curve on $\mathcal{M}$ defined by its initial point $\gamma(0) = \mathbf{x} \in \mathcal{M}$ and initial tangent vector $\gamma'(0) = \boldsymbol{\xi} \in T_{\mathbf{x}}\mathcal{M}$, where $\gamma'(t)$ is the derivative of $\gamma$ at $t$. By analogy with physics, $t$ is considered as a time value. Intuitively, one can think of the curve as the trajectory over time of a ball being pushed from a point $\mathbf{x}$ at $t = 0$ with initial velocity $\boldsymbol{\xi}$ and constrained to roll on the manifold. We denote this curve explicitly by $\gamma_{\mathbf{x} \to \boldsymbol{\xi}}(t)$ unless the dependence is obvious from the context. For this curve to be a geodesic, its acceleration has to be zero: $\forall t \in I, \gamma''(t) = 0$. This condition is a second-order ordinary differential equation that has a unique solution for a given set of initial conditions [17]. The interval $I$ is said to be maximal if it cannot be extended to a larger interval. In the case of $Q^{p,q}_{\beta}$, we have $I = \mathbb{R}$, so $I$ is maximal.

Geodesics of $Q^{p,q}_{\beta}$: As we show in the supp. material, the geodesics of $Q^{p,q}_{\beta}$ are a combination of the hyperbolic, flat and spherical cases. The nature of the geodesic $\gamma_{\mathbf{x} \to \boldsymbol{\xi}}$ depends on the sign of $\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q$. For all $t \in \mathbb{R}$, the geodesic $\gamma_{\mathbf{x} \to \boldsymbol{\xi}}$ of $Q^{p,q}_{\beta}$ with $\beta < 0$ is written:

$$\gamma_{\mathbf{x} \to \boldsymbol{\xi}}(t) = \begin{cases} \cosh\left(\frac{t\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}{\sqrt{|\beta|}}\right)\mathbf{x} + \frac{\sqrt{|\beta|}}{\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}\sinh\left(\frac{t\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}{\sqrt{|\beta|}}\right)\boldsymbol{\xi} & \text{if } \langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q > 0 \\ \mathbf{x} + t\boldsymbol{\xi} & \text{if } \langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q = 0 \\ \cos\left(\frac{t\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}{\sqrt{|\beta|}}\right)\mathbf{x} + \frac{\sqrt{|\beta|}}{\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}\sin\left(\frac{t\sqrt{|\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q|}}{\sqrt{|\beta|}}\right)\boldsymbol{\xi} & \text{if } \langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q < 0 \end{cases} \qquad (5)$$

We recall that $\langle \boldsymbol{\xi}, \boldsymbol{\xi} \rangle_q = 0$ does not imply $\boldsymbol{\xi} = 0$. The geodesics are an essential ingredient to define a mapping known as the exponential map. See Fig. 2 (left) for a depiction of these three types of geodesics, and Fig. 2 (right) for a depiction of the other quantities introduced in this section.

Exponential map: Exponential maps are a way of collecting all of the geodesics of a pseudo-Riemannian manifold $\mathcal{M}$ into a unique differentiable mapping. Let $D_{\mathbf{x}} \subseteq T_{\mathbf{x}}\mathcal{M}$ be the set of tangent vectors $\boldsymbol{\xi}$ such that $\gamma_{\mathbf{x} \to \boldsymbol{\xi}}$ is defined at least on the interval $[0, 1]$. This allows us to uniquely define the exponential map $\exp_{\mathbf{x}} : D_{\mathbf{x}} \to \mathcal{M}$ such that $\exp_{\mathbf{x}}(\boldsymbol{\xi}) = \gamma_{\mathbf{x} \to \boldsymbol{\xi}}(1)$. The manifold $Q^{p,q}_{\beta}$ is geodesically complete, so the domain of its exponential map is $D_{\mathbf{x}} = T_{\mathbf{x}}Q^{p,q}_{\beta}$. Using Eq. (5) with $t = 1$, we obtain an exponential map of the entire tangent space to the manifold:

$$\forall \boldsymbol{\xi} \in T_{\mathbf{x}}Q^{p,q}_{\beta}, \quad \exp_{\mathbf{x}}(\boldsymbol{\xi}) = \gamma_{\mathbf{x} \to \boldsymbol{\xi}}(1). \qquad (6)$$

We make the important observation that the image of the exponential map does not necessarily cover the entire manifold: not all points on a manifold are connected by a geodesic. This is the case for our pseudo-hyperboloids. Namely, for a given point $\mathbf{x} \in Q^{p,q}_{\beta}$ there exist points $\mathbf{y}$ that are not in the image of the exponential map (i.e., there does not exist a tangent vector $\boldsymbol{\xi}$ such that $\mathbf{y} = \exp_{\mathbf{x}}(\boldsymbol{\xi})$).
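The case analysis of Eqs. (5)-(6) is mechanical to implement. Below is a minimal sketch of the exponential map for a single point, again reusing `inner_q`; the tolerance for the null case is a choice of ours.

```python
import torch

def exp_map(x, xi, q, beta=-1.0, tol=1e-9):
    """Eq. (6): exp_x(xi) = gamma_{x -> xi}(1), with the three branches of Eq. (5)."""
    k = inner_q(xi, xi, q)                 # the sign of <xi, xi>_q selects the geodesic type
    sb = abs(beta) ** 0.5
    if k > tol:                            # <xi, xi>_q > 0: cosh/sinh branch
        n = k.abs().sqrt()
        return torch.cosh(n / sb) * x + (sb / n) * torch.sinh(n / sb) * xi
    if k < -tol:                           # <xi, xi>_q < 0: cos/sin branch
        n = k.abs().sqrt()
        return torch.cos(n / sb) * x + (sb / n) * torch.sin(n / sb) * xi
    return x + xi                          # null tangent vector: flat branch at t = 1
```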
Logarithm map: We provide a closed-form expression for the logarithm map of pseudo-hyperboloids. Let $U_{\mathbf{x}} \subseteq Q^{p,q}_{\beta}$ be some neighborhood of $\mathbf{x}$. The logarithm map $\log_{\mathbf{x}} : U_{\mathbf{x}} \to T_{\mathbf{x}}Q^{p,q}_{\beta}$ is defined as the inverse of the exponential map on $U_{\mathbf{x}}$ (i.e., $\log_{\mathbf{x}} = \exp_{\mathbf{x}}^{-1}$). We propose:

$$\forall \mathbf{y} \in U_{\mathbf{x}}, \quad \log_{\mathbf{x}}(\mathbf{y}) = \begin{cases} \frac{\cosh^{-1}\left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right)}{\sqrt{\left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right)^2 - 1}}\left(\mathbf{y} - \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\mathbf{x}\right) & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} < -1 \\ \mathbf{y} - \mathbf{x} & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} = -1 \\ \frac{\cos^{-1}\left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right)}{\sqrt{1 - \left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right)^2}}\left(\mathbf{y} - \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\mathbf{x}\right) & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} \in (-1, 1) \end{cases} \qquad (7)$$

By substituting $\boldsymbol{\xi} = \log_{\mathbf{x}}(\mathbf{y})$ into Eq. (6), one can verify that our formulas are the inverse of the exponential map. The set $U_{\mathbf{x}} = \{\mathbf{y} \in Q^{p,q}_{\beta} : \langle \mathbf{x}, \mathbf{y} \rangle_q < |\beta|\}$ is called a normal neighborhood of $\mathbf{x} \in Q^{p,q}_{\beta}$, since for all $\mathbf{y} \in U_{\mathbf{x}}$ there exists a geodesic from $\mathbf{x}$ to $\mathbf{y}$ such that $\log_{\mathbf{x}}(\mathbf{y}) = \gamma'_{\mathbf{x} \to \log_{\mathbf{x}}(\mathbf{y})}(0)$. We show in the supp. material that the logarithm map is not defined if $\langle \mathbf{x}, \mathbf{y} \rangle_q \ge |\beta|$.

Proposed dissimilarity: We define our dissimilarity function based on the general notions of arc length and radius function on pseudo-Riemannian manifolds, which we recall in the next paragraph (see details in Chapter 5 of [21]). This corresponds to the geodesic distance in the Riemannian case. Let $U_{\mathbf{x}}$ be a normal neighborhood of $\mathbf{x} \in \mathcal{M}$ with $\mathcal{M}$ pseudo-Riemannian. The radius function $r_{\mathbf{x}} : U_{\mathbf{x}} \to \mathbb{R}$ is defined as $r_{\mathbf{x}}(\mathbf{y}) = \sqrt{|g_{\mathbf{x}}(\log_{\mathbf{x}}(\mathbf{y}), \log_{\mathbf{x}}(\mathbf{y}))|}$, where $g_{\mathbf{x}}$ is the metric at $\mathbf{x}$. If $\sigma_{\mathbf{x} \to \boldsymbol{\xi}}$ is the radial geodesic from $\mathbf{x}$ to $\mathbf{y} \in U_{\mathbf{x}}$ (i.e., $\boldsymbol{\xi} = \log_{\mathbf{x}}(\mathbf{y})$), then the arc length of $\sigma_{\mathbf{x} \to \boldsymbol{\xi}}$ equals $r_{\mathbf{x}}(\mathbf{y})$. We then define the geodesic "distance" between $\mathbf{x} \in Q^{p,q}_{\beta}$ and $\mathbf{y} \in U_{\mathbf{x}}$ as the arc length of $\sigma_{\mathbf{x} \to \log_{\mathbf{x}}(\mathbf{y})}$:

$$d_\gamma(\mathbf{x}, \mathbf{y}) = \sqrt{|\,\|\log_{\mathbf{x}}(\mathbf{y})\|_q^2\,|} = \begin{cases} \sqrt{|\beta|}\cosh^{-1}\left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right) & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} < -1 \\ 0 & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} = -1 \\ \sqrt{|\beta|}\cos^{-1}\left(\frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{\beta}\right) & \text{if } \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|} \in (-1, 1) \end{cases} \qquad (8)$$

It is important to note that our "distance" is not a distance metric. However, it satisfies the axioms of a symmetric premetric: (i) $d_\gamma(\mathbf{x}, \mathbf{y}) = d_\gamma(\mathbf{y}, \mathbf{x}) \ge 0$ and (ii) $d_\gamma(\mathbf{x}, \mathbf{x}) = 0$. These conditions are sufficient to quantify the notion of nearness via a $\rho$-ball centered at $\mathbf{x}$: $B^\rho_{\mathbf{x}} = \{\mathbf{y} : d_\gamma(\mathbf{x}, \mathbf{y}) < \rho\}$. In general, topological spaces provide a qualitative (not necessarily quantitative) way to detect "nearness" through the concept of a neighborhood of a point [15]. Something is true "near $\mathbf{x}$" if it is true in the neighborhood of $\mathbf{x}$ (e.g., in $B^\rho_{\mathbf{x}}$). Our premetric is similar to metric learning methods [13, 14, 29] that learn a Mahalanobis-like distance pseudo-metric parameterized by a positive semi-definite matrix: pairs of distinct points can have zero "distance" if the matrix is not positive definite. However, unlike classic metric learning, we can have triplets $(\mathbf{x}, \mathbf{y}, \mathbf{z})$ that satisfy $d_\gamma(\mathbf{x}, \mathbf{y}) = d_\gamma(\mathbf{x}, \mathbf{z}) = 0$ but $d_\gamma(\mathbf{y}, \mathbf{z}) > 0$ (e.g., $\mathbf{x} = (1, 0, 0, 0)^\top$, $\mathbf{y} = (1, 1, 1, 0)^\top$, $\mathbf{z} = (1, 1, 0, 1)^\top$ in $Q^{2,1}_{-1}$).

Since the logarithm map is not defined if $\langle \mathbf{x}, \mathbf{y} \rangle_q \ge |\beta|$, we propose to use the following continuous approximation defined on the whole manifold instead:

$$\forall \mathbf{x} \in Q^{p,q}_{\beta}, \mathbf{y} \in Q^{p,q}_{\beta}, \quad D_\gamma(\mathbf{x}, \mathbf{y}) = \begin{cases} d_\gamma(\mathbf{x}, \mathbf{y}) & \text{if } \langle \mathbf{x}, \mathbf{y} \rangle_q \le 0 \\ \sqrt{|\beta|}\left(\frac{\pi}{2} + \frac{\langle \mathbf{x}, \mathbf{y} \rangle_q}{|\beta|}\right) & \text{otherwise} \end{cases} \qquad (9)$$

To the best of our knowledge, the explicit formulation of the logarithm map for $Q^{p,q}_{\beta}$ in Eq. (7), and its corresponding radius function in Eq. (8) used to define a dissimilarity function, are novel. We have also proposed a linear approximation to evaluate dissimilarity when the logarithm map is not defined, but other choices are possible. For instance, when a geodesic does not exist, a standard way in differential geometry to calculate curves is to consider broken geodesics. One might consider instead the dissimilarity $d_\gamma(\mathbf{x}, -\mathbf{x}) + d_\gamma(-\mathbf{x}, \mathbf{y}) = \pi\sqrt{|\beta|} + d_\gamma(-\mathbf{x}, \mathbf{y})$ when $\log_{\mathbf{x}}(\mathbf{y})$ is not defined, since $-\mathbf{x} \in Q^{p,q}_{\beta}$ and $\log_{-\mathbf{x}}(\mathbf{y})$ is defined. This interesting problem is left for future research.
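Eqs. (8)-(9) reduce to a small case analysis on $c = \langle \mathbf{x}, \mathbf{y}\rangle_q / \beta$. Here is a minimal sketch (reusing `inner_q`, with $\beta < 0$ assumed); `D_gamma` is the kind of `dissim` one could plug into the graph loss of Section 5 below.

```python
import torch

def d_gamma(x, y, q, beta=-1.0):
    """Eq. (8): geodesic 'distance', valid when <x, y>_q < |beta|."""
    c = inner_q(x, y, q) / beta                          # beta < 0 flips the sign
    sb = abs(beta) ** 0.5
    if c > 1:                                            # <x, y>_q / |beta| < -1: cosh branch
        return sb * torch.acosh(c)
    return sb * torch.acos(torch.clamp(c, -1.0, 1.0))   # c = 1 gives 0, matching Eq. (8)

def D_gamma(x, y, q, beta=-1.0):
    """Eq. (9): continuous approximation defined on the whole pseudo-hyperboloid."""
    s = inner_q(x, y, q)
    if s <= 0:
        return d_gamma(x, y, q, beta)
    return abs(beta) ** 0.5 * (torch.pi / 2 + s / abs(beta))
```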
4 Ultrahyperbolic Optimization

In this section we present optimization frameworks to optimize any differentiable function defined on $Q^{p,q}_{\beta}$. Our goal is to compute descent directions on the ultrahyperbolic manifold. We consider two approaches. In the first approach, we map our representation from Euclidean space to ultrahyperbolic space; this is similar to the approach taken by [11] in hyperbolic space. In the second approach, we optimize using gradients defined directly in the pseudo-Riemannian tangent space, and we propose a novel descent direction that guarantees the minimization of the cost function.

4.1 Euclidean optimization via a differentiable mapping onto $Q^{p,q}_{\beta}$

Our first method maps Euclidean representations that lie in $\mathbb{R}^d$ to the pseudo-hyperboloid $Q^{p,q}_{\beta}$, and the chain rule is exploited to perform standard gradient descent. To this end, we construct a differentiable mapping $\varphi : \mathbb{R}^{q+1}_* \times \mathbb{R}^p \to Q^{p,q}_{\beta}$. The image of a point already on $Q^{p,q}_{\beta}$ under the mapping $\varphi$ is itself: $\forall \mathbf{x} \in Q^{p,q}_{\beta}, \varphi(\mathbf{x}) = \mathbf{x}$. Let $S^q = \{\mathbf{x} \in \mathbb{R}^{q+1} : \|\mathbf{x}\| = 1\}$ denote the unit $q$-sphere. We first introduce the following diffeomorphisms:

Theorem 4.1 (Diffeomorphisms). For any $\beta < 0$, there is a diffeomorphism $\psi : Q^{p,q}_{\beta} \to S^q \times \mathbb{R}^p$. Writing $\mathbf{x} = \begin{pmatrix} \mathbf{t} \\ \mathbf{s} \end{pmatrix} \in Q^{p,q}_{\beta}$ with $\mathbf{t} \in \mathbb{R}^{q+1}_*$ and $\mathbf{s} \in \mathbb{R}^p$, and $\mathbf{z} = \begin{pmatrix} \mathbf{u} \\ \mathbf{v} \end{pmatrix} \in S^q \times \mathbb{R}^p$ with $\mathbf{u} \in S^q$ and $\mathbf{v} \in \mathbb{R}^p$, the mapping $\psi$ and its inverse $\psi^{-1}$ are formulated as (see proofs in supp. material):

$$\psi(\mathbf{x}) = \begin{pmatrix} \frac{1}{\|\mathbf{t}\|}\mathbf{t} \\ \frac{1}{\sqrt{|\beta|}}\mathbf{s} \end{pmatrix} \quad \text{and} \quad \psi^{-1}(\mathbf{z}) = \sqrt{|\beta|}\begin{pmatrix} \sqrt{1 + \|\mathbf{v}\|^2}\,\mathbf{u} \\ \mathbf{v} \end{pmatrix}. \qquad (10)$$

With these mappings, any vector $\mathbf{x} \in \mathbb{R}^{q+1}_* \times \mathbb{R}^p$ can be mapped to $Q^{p,q}_{\beta}$ via $\varphi = \psi^{-1} \circ \psi$. $\varphi$ is differentiable everywhere except when $x_0 = \cdots = x_q = 0$, which should never occur in practice. It can therefore be optimized using standard gradient methods.
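A minimal sketch of $\varphi = \psi^{-1}\circ\psi$ from Eq. (10) follows; the function name is ours. Since the map is differentiable, embeddings can be stored as unconstrained tensors and pushed through `phi` inside the loss, so off-the-shelf optimizers such as SGD or Adam remain valid.

```python
import torch

def phi(x, q, beta=-1.0):
    """phi = psi^{-1} o psi of Eq. (10): maps R^{q+1}_* x R^p onto Q^{p,q}_beta."""
    t, s = x[..., :q + 1], x[..., q + 1:]
    u = t / t.norm(dim=-1, keepdim=True)                      # psi: normalize the time part
    v = s / abs(beta) ** 0.5                                  # psi: rescale the space part
    time = (1.0 + (v ** 2).sum(-1, keepdim=True)).sqrt() * u  # psi^{-1}: lift back to Q^{p,q}_beta
    return abs(beta) ** 0.5 * torch.cat([time, v], dim=-1)
```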
4.2 Pseudo-Riemannian optimization

We now introduce a novel method to optimize any differentiable function $f : Q^{p,q}_{\beta} \to \mathbb{R}$ defined on the pseudo-hyperboloid. As we show below, the (negative of the) pseudo-Riemannian gradient is not a descent direction. We propose a simple and efficient way to calculate a descent direction.

Pseudo-Riemannian gradient: Since $\mathbf{x} \in Q^{p,q}_{\beta}$ also lies in the Euclidean ambient space $\mathbb{R}^d$, the function $f$ has a well-defined Euclidean gradient $\nabla f(\mathbf{x}) = (\partial f(\mathbf{x})/\partial x_0, \cdots, \partial f(\mathbf{x})/\partial x_{p+q})^\top \in \mathbb{R}^d$. The gradient of $f$ in the pseudo-Euclidean ambient space $\mathbb{R}^{p,q+1}$ is $(G^{-1}\nabla f(\mathbf{x})) = (G\nabla f(\mathbf{x})) \in \mathbb{R}^{p,q+1}$. Since $Q^{p,q}_{\beta}$ is a submanifold of $\mathbb{R}^{p,q+1}$, the pseudo-Riemannian gradient $Df(\mathbf{x}) \in T_{\mathbf{x}}Q^{p,q}_{\beta}$ of $f$ on $Q^{p,q}_{\beta}$ is the orthogonal projection of $(G\nabla f(\mathbf{x}))$ onto $T_{\mathbf{x}}Q^{p,q}_{\beta}$ (see Chapter 4 of [21]):

$$Df(\mathbf{x}) = \Pi_{\mathbf{x}}(G\nabla f(\mathbf{x})) = G\nabla f(\mathbf{x}) - \frac{\langle G\nabla f(\mathbf{x}), \mathbf{x} \rangle_q}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x} = G\nabla f(\mathbf{x}) - \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x}. \qquad (11)$$

This gradient forms the foundation of our descent method optimizer, as will be shown in Eq. (13).

Iterative optimization: Our goal is to iteratively decrease the value of the function $f$ by following some descent direction. Since $Q^{p,q}_{\beta}$ is not a vector space, we do not "follow the descent direction" by adding the descent direction multiplied by a step size, as this would result in a new point that does not necessarily lie on $Q^{p,q}_{\beta}$. Instead, to remain on the manifold, we use our exponential map defined in Eq. (6). This is a standard way to optimize on Riemannian manifolds [1]. Given a step size $t > 0$, one step of descent along a tangent vector $\boldsymbol{\zeta} \in T_{\mathbf{x}}Q^{p,q}_{\beta}$ is given by:

$$\mathbf{y} = \exp_{\mathbf{x}}(t\boldsymbol{\zeta}) \in Q^{p,q}_{\beta}. \qquad (12)$$

Descent direction: We now explain why the negative of the pseudo-Riemannian gradient is not a descent direction. Our explanation extends Chapter 3 of [20], which gives the criteria for a tangent vector $\boldsymbol{\zeta}$ to be a descent direction when the domain of the optimized function is a Euclidean space. By using the properties described in Section 3, we know that for all $t \in \mathbb{R}$ and all $\boldsymbol{\xi} \in T_{\mathbf{x}}Q^{p,q}_{\beta}$ we have the equalities $\exp_{\mathbf{x}}(t\boldsymbol{\xi}) = \gamma_{\mathbf{x} \to t\boldsymbol{\xi}}(1) = \gamma_{\mathbf{x} \to \boldsymbol{\xi}}(t)$, so we can equivalently fix $t$ to 1 and choose the scale of $\boldsymbol{\xi}$ appropriately. By exploiting Taylor's first-order approximation, there exists some small enough tangent vector $\boldsymbol{\zeta} \neq 0$ (i.e., with $\exp_{\mathbf{x}}(\boldsymbol{\zeta})$ belonging to a convex neighborhood of $\mathbf{x}$ [4, 8]) that satisfies the following conditions: $\gamma_{\mathbf{x} \to \boldsymbol{\zeta}}(0) = \mathbf{x} \in Q^{p,q}_{\beta}$, $\gamma'_{\mathbf{x} \to \boldsymbol{\zeta}}(0) = \boldsymbol{\zeta} \in T_{\mathbf{x}}Q^{p,q}_{\beta}$, $\gamma_{\mathbf{x} \to \boldsymbol{\zeta}}(1) = \mathbf{y} \in Q^{p,q}_{\beta}$, and the function $f \circ \gamma_{\mathbf{x} \to \boldsymbol{\zeta}} : \mathbb{R} \to \mathbb{R}$ can be approximated at $t = 1$ by:

$$f(\mathbf{y}) = f \circ \gamma_{\mathbf{x} \to \boldsymbol{\zeta}}(1) \simeq f \circ \gamma_{\mathbf{x} \to \boldsymbol{\zeta}}(0) + (f \circ \gamma_{\mathbf{x} \to \boldsymbol{\zeta}})'(0) = f(\mathbf{x}) + \langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q, \qquad (13)$$

where we use the following properties: $\forall t, (f \circ \gamma)'(t) = df(\gamma'(t)) = g_{\gamma(t)}(Df(\gamma(t)), \gamma'(t))$ (see details on pages 11, 15 and 85 of [21]), $df$ is the differential of $f$, and $\gamma$ is a geodesic. To be a descent direction at $\mathbf{x}$ (i.e., so that $f(\mathbf{y}) < f(\mathbf{x})$), the search direction $\boldsymbol{\zeta}$ has to satisfy $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q < 0$. However, choosing $\boldsymbol{\zeta} = -\eta Df(\mathbf{x})$, where $\eta > 0$ is a step size, might increase the function value if the scalar product $\langle \cdot, \cdot \rangle_q$ is not positive definite. If $p + q \ge 1$, then $\langle \cdot, \cdot \rangle_q$ is positive definite only if $q = 0$ (see details in supp. material), and it is negative definite iff $p = 0$, since $\langle \cdot, \cdot \rangle_q = -\langle \cdot, \cdot \rangle$ in this case. A simple solution would be to choose $\boldsymbol{\zeta} = \pm\eta Df(\mathbf{x})$ depending on the sign of $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q$, but $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q$ might equal 0 even when $Df(\mathbf{x}) \neq 0$ if $\langle \cdot, \cdot \rangle_q$ is indefinite. The optimization algorithm might then be stuck on a level set of $f$, which is problematic.

Algorithm 1 Pseudo-Riemannian optimization on $Q^{p,q}_{\beta}$
Input: differentiable function $f : Q^{p,q}_{\beta} \to \mathbb{R}$ to be minimized, some initial value of $\mathbf{x} \in Q^{p,q}_{\beta}$
1: while not converged do
2: Calculate $\nabla f(\mathbf{x})$ (i.e., the Euclidean gradient of $f$ at $\mathbf{x}$ in the Euclidean ambient space)
3: $\boldsymbol{\chi} \leftarrow \Pi_{\mathbf{x}}(G\Pi_{\mathbf{x}}(G\nabla f(\mathbf{x})))$ (see Eq. (14))
4: $\mathbf{x} \leftarrow \exp_{\mathbf{x}}(-\eta\boldsymbol{\chi})$ (where $\eta > 0$ is a step size, e.g., determined with line search)
5: end while

Proposed solution: To ensure that $\boldsymbol{\zeta} \in T_{\mathbf{x}}Q^{p,q}_{\beta}$ is a descent direction, we propose a simple expression that satisfies $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q < 0$ if $Df(\mathbf{x}) \neq 0$, and $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q = 0$ otherwise. We propose to formulate $\boldsymbol{\zeta} = -\eta\Pi_{\mathbf{x}}(G Df(\mathbf{x})) \in T_{\mathbf{x}}Q^{p,q}_{\beta}$, and we define the following tangent vector $\boldsymbol{\chi} = -\frac{1}{\eta}\boldsymbol{\zeta}$:

$$\boldsymbol{\chi} = \Pi_{\mathbf{x}}(G Df(\mathbf{x})) = \nabla f(\mathbf{x}) - \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle_q}G\mathbf{x} - \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle_q}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x} + \frac{\|\mathbf{x}\|^2\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle_q^2}\mathbf{x}. \qquad (14)$$

The tangent vector $\boldsymbol{\zeta}$ is a descent direction because $\langle Df(\mathbf{x}), \boldsymbol{\zeta} \rangle_q = -\eta\langle Df(\mathbf{x}), \boldsymbol{\chi} \rangle_q$ is nonpositive:

$$\langle Df(\mathbf{x}), \boldsymbol{\chi} \rangle_q = \|\nabla f(\mathbf{x})\|^2 - 2\frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle_q}{\langle \mathbf{x}, \mathbf{x} \rangle_q} + \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle^2\|\mathbf{x}\|^2}{\langle \mathbf{x}, \mathbf{x} \rangle_q^2} \qquad (15)$$

$$= \left\|G\nabla f(\mathbf{x}) - \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x}\right\|^2 = \|Df(\mathbf{x})\|^2 \ge 0. \qquad (16)$$

We also have $\langle Df(\mathbf{x}), \boldsymbol{\chi} \rangle_q = \|Df(\mathbf{x})\|^2 = 0$ iff $Df(\mathbf{x}) = 0$ (i.e., $\mathbf{x}$ is a stationary point). It is worth noting that $Df(\mathbf{x}) = 0$ implies $\boldsymbol{\chi} = \Pi_{\mathbf{x}}(G0) = 0$. Moreover, $\boldsymbol{\chi} = 0$ implies that $\|Df(\mathbf{x})\|^2 = \langle Df(\mathbf{x}), 0 \rangle_q = 0$. We then have $\boldsymbol{\chi} = 0$ iff $Df(\mathbf{x}) = 0$. Our proposed algorithm for the minimization problem $\min_{\mathbf{x} \in Q^{p,q}_{\beta}} f(\mathbf{x})$ is illustrated in Algorithm 1. Following generic Riemannian optimization algorithms [1], at each iteration it first computes the descent direction $-\boldsymbol{\chi} \in T_{\mathbf{x}}Q^{p,q}_{\beta}$, then decreases the function by applying the exponential map defined in Eq. (6). It is worth noting that our proposed descent method can be applied to any differentiable function $f : Q^{p,q}_{\beta} \to \mathbb{R}$, not only to those that exploit the distance introduced in Section 3.

Interestingly, our method can also be seen as a preconditioning technique [20] where the descent direction is obtained by preconditioning the pseudo-Riemannian gradient $Df(\mathbf{x})$ with the matrix $P_{\mathbf{x}} = \left[G - \frac{1}{\langle \mathbf{x}, \mathbf{x} \rangle_q}\mathbf{x}\mathbf{x}^\top\right] \in \mathbb{R}^{d \times d}$. In other words, we have $\boldsymbol{\chi} = P_{\mathbf{x}}Df(\mathbf{x}) = \Pi_{\mathbf{x}}(G Df(\mathbf{x}))$. In the more general setting of pseudo-Riemannian manifolds, another preconditioning technique was proposed in [8]. The method in [8] requires performing a Gram-Schmidt process at each iteration to obtain an (ordered [28]) orthonormal basis of the tangent space at $\mathbf{x}$ w.r.t. the induced quadratic form of the manifold. However, the Gram-Schmidt process is unstable and has algorithmic complexity that is cubic in the dimensionality of the tangent space. On the other hand, our method is more stable and its algorithmic complexity is linear in the dimensionality of the tangent space.
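Combining the earlier sketches (`inner_q`, `project_tangent`, `exp_map`), one iteration of Algorithm 1 can be sketched as follows; the signature is ours, and in practice the Euclidean gradient `grad_f` would come from autodiff.

```python
def pseudo_riemannian_step(x, grad_f, q, eta, beta=-1.0):
    """One iteration of Algorithm 1: chi = Pi_x(G Pi_x(G grad_f)), then an exp-map step."""
    def G(v):                                    # G = I_{q+1,p}: flip the sign of the time dims
        w = v.clone()
        w[..., :q + 1] = -w[..., :q + 1]
        return w
    Df = project_tangent(x, G(grad_f), q)        # pseudo-Riemannian gradient, Eq. (11)
    chi = project_tangent(x, G(Df), q)           # guaranteed descent direction, Eq. (14)
    return exp_map(x, -eta * chi, q, beta)       # stay on Q^{p,q}_beta, Eq. (12)
```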
5 Experiments

We now experimentally validate our proposed optimization methods and the effectiveness of our dissimilarity function. Our main experimental results can be summarized as follows:

• Both optimizers introduced in Section 4 decrease some objective function $f : Q^{p,q}_{\beta} \to \mathbb{R}$. While both optimizers manage to learn high-dimensional representations that satisfy the problem-dependent training constraints, only the pseudo-Riemannian optimizer satisfies all the constraints in lower-dimensional spaces. This is because it exploits the underlying metric of the manifold.

• Hyperbolic representations are popular in machine learning as they are well suited to represent hierarchical trees [10, 18, 19]. On the other hand, hierarchical datasets whose graph contains cycles cannot be represented using trees. We therefore propose to represent such graphs using our ultrahyperbolic representations. Important examples are community graphs that contain leaders, such as Zachary's karate club [30]. Because our ultrahyperbolic representations are more flexible than hyperbolic representations, we believe they are better suited to these non-tree-like hierarchical structures.

Graph: Our ultrahyperbolic representations describe graph-structured datasets. Each dataset is an undirected weighted graph $G = (V,E)$ with node set $V = \{v_i\}_{i=1}^n$ and edge set $E = \{e_k\}_{k=1}^m$. Each edge $e_k$ is weighted by an arbitrary capacity $c_k \in \mathbb{R}_+$ that models the strength of the relationship between nodes: the higher the capacity $c_k$, the stronger the relationship between the nodes connected by $e_k$.

Learned representations: Our problem formulation is inspired by hyperbolic representation learning approaches [18, 19] where the nodes of a tree (i.e., a graph without cycles) are represented in hyperbolic space. The hierarchical structure of the tree is then reflected by the order of distances between its nodes. More precisely, a node representation is learned so that each node is closer to its descendants and ancestors in the tree (w.r.t. the hyperbolic distance) than to any other node. For example, in a hierarchy of words, ancestors and descendants are hypernyms and hyponyms, respectively.

Our goal is to learn a set of $n$ points $x_1, \cdots, x_n \in Q^{p,q}_{\beta}$ (embeddings) from a given graph $G$. The presence of cycles in the graph makes it difficult to determine ancestors and descendants. For this reason, we introduce for each pair of nodes $(v_i, v_j) = e_k \in E$ the set of "weaker" pairs that have lower capacity: $W(e_k) = \{e_l : c_k > c_l\} \cup \{(v_a, v_b) : (v_a, v_b) \notin E\}$. Our goal is to learn representations such that pairs $(v_i, v_j)$ with higher capacity have their representations $(x_i, x_j)$ closer to each other than weaker pairs. Following [18], we formulate our problem as:

$$\min_{x_1, \cdots, x_n \in Q^{p,q}_{\beta}} \; \sum_{(v_i,v_j) = e_k \in E} -\log \frac{\exp\left(-d(x_i, x_j)/\tau\right)}{\sum_{(v_a,v_b) \in W(e_k) \cup \{e_k\}} \exp\left(-d(x_a, x_b)/\tau\right)} \qquad (17)$$

where $d$ is the chosen dissimilarity function (e.g., $D_\gamma(\cdot,\cdot)$ defined in Eq. (9)) and $\tau > 0$ is a fixed temperature parameter. The formulation of Eq. (17) is classic in the metric learning literature [3, 12, 27] and corresponds to optimizing some order on the learned distances via a softmax function.

Implementation details: We coded our approach in PyTorch [22], which automatically calculates the Euclidean gradient $\nabla f(x_i)$. Initially, a random set of vectors $\{z_i\}_{i=1}^n$ is generated close to the positive pole $(\sqrt{|\beta|}, 0, \cdots, 0) \in Q^{p,q}_{\beta}$, with every coordinate perturbed uniformly by a random value in the interval $[-\varepsilon, \varepsilon]$, where $\varepsilon > 0$ is chosen small enough so that $\|z_i\|_q^2 < 0$. We set $\beta = -1$, $\varepsilon = 0.1$ and $\tau = 10^{-2}$. Initial embeddings are generated as follows: $\forall i, x_i = \sqrt{|\beta|}\, z_i / \sqrt{|\,\|z_i\|_q^2\,|} \in Q^{p,q}_{\beta}$.

Zachary's karate club dataset [30] is a social network graph of a karate club comprising $n = 34$ nodes, each representing a member of the club. The club was split due to a conflict between the instructor "Mr. Hi" (node $v_1$) and the administrator "John A" (node $v_n$). The remaining members then had to decide whether or not to join the new club created by $v_1$. In [30], Zachary defines a matrix $C \in \{0, 1, \cdots, 7\}^{n \times n}$ of relative strengths of the friendships in the karate club, which depends on various criteria.
We note that the matrix is not symmetric and has 7 pairs $(v_i, v_j)$ for which $C_{ij} \neq C_{ji}$. Since our dissimilarity function is symmetric, we consider the symmetric matrix $S = C + C^\top$ instead. The value of $S_{ij}$ is the capacity/weight assigned to the edge joining $v_i$ and $v_j$, and there is no edge between $v_i$ and $v_j$ if $S_{ij} = 0$. Fig. 3 (left) illustrates the 34 nodes of the dataset, where an edge joining the nodes $v_i$ and $v_j$ is drawn iff $S_{ij} \neq 0$. The level of a node in the hierarchy corresponds approximately to its height in the figure.

Optimizers: We validate that the optimizers introduced in Section 4 decrease the cost function. First, we consider the simple unweighted case where every edge weight is 1. For each edge $e_k \in E$, $W(e_k)$ is then the set of pairs of nodes that are not connected. In other words, Eq. (17) learns node representations with the property that every connected pair of nodes has a smaller distance than any non-connected pair. We use this condition as the stopping criterion of our algorithm. Fig. 3 (right) illustrates the loss values of Eq. (17) as a function of the number of iterations, for the Euclidean gradient descent (Section 4.1) and our pseudo-Riemannian optimizer (Section 4.2). In each test, we vary the number of time dimensions $q + 1$ while the ambient space has fixed dimensionality $d = p + q + 1 = 10$. We omit the case $q = 0$ since it corresponds to the (hyperbolic) Riemannian case already considered in [11, 19]. Both optimizers decrease the function and manage to satisfy all the expected distance relations. We note that when we use $-Df(x)$ instead of $-\chi$ as a search direction, the algorithm does not converge. Moreover, our pseudo-Riemannian optimizer manages to learn representations that satisfy all the constraints on low-dimensional manifolds such as $Q^{4,1}_{-1}$ and $Q^{4,2}_{-1}$, while the optimizer introduced in Section 4.1 does not. Consequently, we only use the pseudo-Riemannian optimizer in the following results.

Hierarchy extraction: To quantitatively evaluate our approach, we apply it to the problem of predicting the high-level nodes in the hierarchy from the weighted matrix $S$ given as supervision. We consider the challenging low-dimensional setting where all the learned representations lie on a 4-dimensional manifold (i.e., $p + q + 1 = 5$). Hyperbolic distances are known to grow exponentially as we get further from the origin. Therefore, the sum of distances $\delta_i = \sum_{j=1}^n d(x_i, x_j)$ of a node $v_i$ to all other nodes is a good indication of importance: intuitively, high-level nodes will be closer to most nodes than low-level nodes. We sort the scores $\delta_1, \cdots, \delta_n$ in ascending order and report the ranks of the two leaders $v_1$ and $v_n$ (in no particular order) in the first two rows of Table 1, averaged over 5 different initializations/runs. Leaders tend to have a smaller $\delta_i$ score with ultrahyperbolic distances than with Euclidean, hyperbolic or spherical distances. Instead of using $\delta_i$ for hyperbolic representations, the importance of a node $v_i$ can be evaluated by using the Euclidean norm of its embedding $x_i$ as a proxy [11, 18, 19], since high-level nodes of a tree in hyperbolic space usually lie closer to the origin than low-level nodes. Not surprisingly, this proxy leads to worse performance ($8.6 \pm 2.3$ and $18.6 \pm 4.9$), as the relationships are not those of a tree. Since hierarchy levels are hard to compare for low-level nodes, we select the 10 (or 5) most influential members based on the score $s_i = \sum_{j=1}^n S_{ij}$.
The corresponding nodes are 34, 1, 33, 3, 2, 32, 24, 4, 9, 14 (in that order). Spearman's rank correlation coefficient [24] between the selected scores $s_i$ and the corresponding $\delta_i$ is reported in Table 1 and shows the relevance of our representations. Due to lack of space, we report in the supp. material similar experiments on a larger hierarchical dataset [9] that describes co-authorship of papers published at NIPS from 1988 to 2003.

6 Conclusion

We have introduced ultrahyperbolic representations. Our representations lie on a pseudo-Riemannian manifold of constant nonzero curvature which generalizes hyperbolic and spherical geometries and includes them as submanifolds. Any relationship described in those geometries can then be described with our representations, which are more flexible. We have introduced new optimization tools and experimentally shown that our representations can extract hierarchies in graphs that contain cycles.

Broader Impact

We introduce a novel way of representing relationships between data points by considering the geometry of non-Riemannian manifolds of constant nonzero curvature. The relationships between data points are described by a dissimilarity function that we introduce and that exploits the structure of the manifold. It is more flexible than the distance metrics used in the hyperbolic and spherical geometries often employed in machine learning and computer vision. Nonetheless, since problems involving our representations are not straightforward to optimize, we propose novel optimization algorithms that can potentially benefit the machine learning, computer vision and natural language processing communities. Indeed, our method is application agnostic and could extend existing frameworks.

Our contribution is mainly theoretical, but we have included one practical application. Similarly to hyperbolic representations, which are popular for representing tree-like data, we have shown that our representations are well adapted to the more general case of hierarchical graphs with cycles. Such graphs appear in many different fields of research, such as medicine, molecular biology and the social sciences. For example, an ultrahyperbolic representation of proteins might assist in understanding their complicated folding mechanisms. Moreover, these representations could assist in analyzing features of social media, such as discovering new trends and leading "connectors". The impact of community detection for commercial or political advertising is already known in social networking services. We foresee that our method will have many more graph-based practical applications.

We know of very few applications outside of general relativity that use pseudo-Riemannian geometry. We hope that our research will stimulate other applications in machine learning and related fields. Finally, although we have introduced a novel descent direction for our optimization algorithm, future research could study and improve its rate of convergence.

Acknowledgments and Disclosure of Funding

We thank Jonah Philion, Guojun Zhang and the anonymous reviewers for helpful feedback on early versions of this manuscript. This article was entirely funded by NVIDIA corporation. Marc Law and Jos Stam completed this work while working from home during the COVID-19 pandemic.
1. What is the main contribution of the paper in the field of representation learning? 2. What are the strengths of the proposed approach, particularly in terms of theoretical grounding and numerical algorithms? 3. What are the weaknesses of the paper regarding the dissimilarity metric and the limitation of the proposed method on real-world applications? 4. Do you think that the experimental results support the efficacy of the proposed algorithm? 5. Are there any suggestions for improving the optimization algorithm or conducting further experiments with diverse datasets and hyperparameter settings?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposed a novel approach in representation learning context using by studying Riemannian geometry properties of pseudo-hyperboloids. The novelty of this paper is mainly theoretical. In particular, the authors proposed for the first time the explicit formulas of the geodesic distance on ultrahyperbolic manifolds and the logarithm map for pseudo-hyperboloids. Also, an optimization scheme using the ultrahyperbolic geometry is proposed, in which the authors have shown a relationship to representation learning of non-tree graphs both theoretically and experimentally. Strengths The theoretical grounding of this paper is solid. With precise geometric analysis, the authors further develop theories on pseudo-hyperboloids, e.g. Riemannian metrics, geodesics, exponential maps, and (pseudo-)distances. Formulas are explicit and concrete, and thus laid a solid foundation for numerical computations. For the numerical algorithm part, the author proposed a novel approach of computing representation learning, with the aim of improving performance on non-tree graphs. The experimental results also the efficacy of the proposed algorithm and its performance on various datasets. Weaknesses Theoretically, the dissimilarity metric (Eq.8) is locally defined due to the (non)existence of the logarithm map in the global scope. The effectiveness of the dissimilar metric is limited by the geometry of the underlying manifold, which might in turn limit the effectiveness of the proposed method on complicated real-world applications. The authors could have justified the applicability of the algorithm by experimenting it on more real-world datasets, alongside the two reported in the paper and the supplementary materials. The proposed optimization algorithm, as the authors pointed out in the conclusion section, lacks a convergence rate analysis. However, this is not a major issue as this is not the emphasis of the paper. Finally, it would be great if more experiments were conducted with various real-world datasets and under different settings of model hyper-parameters.
NIPS
Title DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks

Abstract Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data, while remaining truthful to the underlying data-generating process (DGP), is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference-time debiasing, where biased edges can be strategically removed to satisfy user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and, in contrast to existing methods, is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator's convergence and the fairness of downstream models.

1 Introduction

Generative models are optimized to approximate the original data distribution as closely as possible. Most research focuses on three objectives [1]: fidelity, diversity, and privacy. The first and second are concerned with how closely synthetic samples resemble real data and how much of the real data's distribution is covered by the new distribution, respectively. The third objective aims to avoid simply reproducing samples from the original data, which is important if the data contains privacy-sensitive information [2, 3]. We explore a much less studied concept: synthetic data fairness.

Motivation. Deployed machine learning models have been shown to reflect the bias of the data on which they are trained [4, 5, 6, 7, 8]. This has not only unfairly damaged the discriminated individuals but also society's trust in machine learning as a whole. A large body of work has explored ways of detecting bias and creating fair predictors [9, 10, 11, 12, 13, 14, 15], while other authors propose debiasing the data itself [9, 10, 11, 16]. This work's aim is related to the work of [17]: to generate fair synthetic data based on unfair data. Being able to generate fair data is important because end-users creating models based on publicly available data might be unaware they are inadvertently including bias, or insufficiently knowledgeable to remove it from their model. Furthermore, by debiasing the data prior to public release, one can guarantee that any downstream model satisfies desired fairness requirements, by assigning the responsibility of debiasing to the data-generating entities.

Goal. From a biased dataset $X$, we are interested in learning a model $G$ that is able to generate an equivalent synthetic unbiased dataset $X'$ with minimal loss of data utility. Furthermore, a downstream model trained on the synthetic data needs to make unbiased predictions not only on the synthetic data, but also on real-life datasets (as formalized in Section 4.2).

Solution. We approach fairness from a causal standpoint because it provides an intuitive perspective on different definitions of fairness and discrimination [11, 13, 14, 15, 18].
We introduce DEbiasing CAusal Fairness (DECAF), a generative adversarial network (GAN) that leverages causal structure for synthesizing data. Specifically, DECAF comprises $d$ generators (one for each variable) that learn the causal conditionals observed in the data. At inference time, variables are synthesized sequentially following the topological ordering of the causal graph, starting from the root nodes and terminating at the leaf nodes. Because of this, DECAF can remove bias at inference time through targeted (biased) edge removal. As a result, various datasets can be created for desired (or evolving) definitions of fairness.

Contributions. We propose a framework for using causal knowledge for fair synthetic data generation. We make three main contributions: i) DECAF, a causal GAN-based model for generating synthetic data, ii) a flexible causal approach for modifying this model such that it can generate fair data, and iii) guarantees that downstream models trained on the synthetic data will also give fair predictions in other settings. Experimentally, we show how DECAF is compatible with several fairness/discrimination definitions used in the literature while still maintaining high downstream utility of the generated data.

2 Related Works

Here we focus on the related work concerned with data generation; for fairness definitions we provide a detailed overview in Section 4 and Appendix C. As an overview of how data generation methods relate to one another, we refer to Table 1, which presents all relevant related methods.

Non-parametric generative modeling. The standard models for synthetic data generation are based on either VAEs [19] or GANs [2, 3, 20, 21]. While these models are well known for their highly realistic synthetic data, they are unable to alter the synthetic data distribution to encourage fairness (except for [17, 23], discussed below). Furthermore, these methods have no causal notion, which prohibits targeted interventions for synthesizing fair data (Section 4). We explicitly leave out CausalGAN [24] and CausalVAE [25], which appear similar by incorporating causality-derived ideas but are different in both method and aim (i.e., image generation).

Fair data generation. In the bottom section of Table 1, we present methods that in some way alter the training data of classifiers to adhere to a notion of fairness [10, 11, 16, 17, 22, 23]. While these methods have proven successful, they lack some important features. For example, none of the related methods allow for post-hoc changes of the synthetic data distribution. This is an important feature, as each situation requires a different perspective on fairness and thus requires a flexible framework for selecting protected variables. Additionally, only [11, 23] allow a causal perspective on fairness, despite causal notions underlying multiple interpretations of what should be considered fair [13]. Furthermore, only [17, 22, 23] offer a flexible framework, while the others are limited to binary [10, 11] or discrete [16] settings. Xu et al. [23] also use a causal architecture for the generator; however, their method is not as flexible: for example, it does not easily extend to multiple protected attributes. Finally, in contrast to other methods, DECAF is directly concerned with the fairness of the downstream model, which depends on the setting in which the downstream model is employed (Section 4.2). In essence, from Table 1 we learn that DECAF is the only method that combines all key areas of interest.
Lastly, we mention [26], who aim to generate data that resembles a small unbiased reference dataset by leveraging a large but biased dataset. This is very different from our aim, as we are interested in the downstream model's fairness and explicit notions of fairness.

3 Preliminaries

Let $X \in \mathcal{X} \subseteq \mathbb{R}^d$ denote a random variable with distribution $P_X(X)$, with protected attributes $A \in \mathcal{A} \subset X$ and target variable $Y \in \mathcal{Y} \subset X$, and let $\hat{Y}$ denote a prediction of $Y$. Let the data be given by $\mathcal{D} = \{x^{(k)}\}_{k=1}^N$, where each $x^{(k)} \in \mathcal{D}$ is a realization of $X$. We assume the data-generating process can be represented by a directed acyclic graph (DAG), such that the generation of features can be written as a structural equation model (SEM) [27], and that this DAG is causally sufficient. Let $X_i$ denote the $i$th feature in $X$ with causal parents $\mathrm{Pa}(X_i) \subset \{X_j : j \neq i\}$; the SEM is given by:

$$X_i = f_i(\mathrm{Pa}(X_i), Z_i), \forall i \qquad (1)$$

where $\{Z_i\}_{i=1}^d$ are independent random noise variables, that is, $\mathrm{Pa}(Z_i) = \emptyset, \forall i$. Note that each $f_i$ is a deterministic function that places all randomness of the conditional $P(X_i|\mathrm{Pa}(X_i))$ in the respective noise variable $Z_i$.

4 Fairness of Synthetic Data

Algorithmic fairness is a popular topic (e.g., see [13, 28]), but fair synthetic data has been much less explored. This section highlights how the underlying graphs of the synthetic and downstream data determine whether a model trained on the synthetic data will be fair in practice. We start with the two most popular definitions of fairness, relating to the legal concepts of direct and indirect discrimination. We also explore conditional fairness [29], which is a generalization of the two. In Appendix C we discuss how the ideas in this section transfer to other independence-based definitions [30]. Throughout this section, we separate $Y$ from $X$ by defining $\bar{X} = X \setminus Y$, and we write $X \leftarrow \bar{X}$ for ease of notation.

4.1 Algorithmic fairness

The first definition is called Fairness Through Unawareness (e.g., [31]).

Definition 1. (Fairness Through Unawareness (FTU): algorithm) A predictor $f : X \mapsto \hat{Y}$ is fair iff the protected attributes $A$ are not explicitly used by $f$ to predict $\hat{Y}$.

This definition prohibits disparate treatment [28, 32] and is related to the legal concept of direct discrimination: two equally qualified people deserve the same job opportunity independent of their race, gender, beliefs, and so on. Though FTU fairness is commonly used, it might still allow indirect discrimination: covariates that influence the prediction $\hat{Y}$ might not be identically distributed across different groups $a, a'$, which means an algorithm might have disparate impact on a protected group [10]. The second definition of fairness, demographic parity [32], does not allow this:

Definition 2. (Demographic Parity (DP): algorithm) A predictor $\hat{Y}$ is fair iff $A \perp\!\!\!\perp \hat{Y}$, i.e., $\forall a, a' : P(\hat{Y}|A = a) = P(\hat{Y}|A = a')$.

Evidently, DP puts stringent constraints on the algorithm, whereas FTU might be too lenient. Both definitions admit simple empirical checks, as sketched below.
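For intuition, the following minimal sketch (names and data layout are our assumptions) computes the demographic parity gap of Definition 2 on a predictor's outputs; the FTU check of Definition 1 amounts to verifying that predictions do not change when $A$ is altered.

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Definition 2: largest difference in P(Y_hat = 1 | A = a) across protected groups."""
    rates = [y_hat[a == g].mean() for g in np.unique(a)]
    return max(rates) - min(rates)

def ftu_violation(predict_fn, X, a_column, values):
    """Definition 1: fraction of samples whose prediction changes when A (binary here) is flipped."""
    X0, X1 = X.copy(), X.copy()
    X0[:, a_column], X1[:, a_column] = values[0], values[1]
    return (predict_fn(X0) != predict_fn(X1)).mean()
```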
The third definition we include is based on the work of [29] and is related to unresolved discrimination [14]. The idea is that we do not allow indirect discrimination unless it runs through explanatory factors $R \subset X$. For example, in Simpson's paradox [33] there seems to be a bias between gender and college admissions, but this is only due to women applying to more competitive courses. In this case, one would want to regard fairness conditioned on the choice of study [14]. We define this as conditional fairness:

Definition 3. (Conditional Fairness (CF): algorithm) A predictor $\hat{Y}$ is fair iff $A \perp\!\!\!\perp \hat{Y} | R$, i.e., $\forall r, a, a' : P(\hat{Y}|R = r, A = a) = P(\hat{Y}|R = r, A = a')$.

CF generalizes FTU and DP. Note that conditional fairness is a generalization of FTU and DP, obtained by setting $R = X \setminus A$ and $R = \emptyset$, respectively. In Appendix C we elaborate on the connection between these, and more, definitions.

4.2 Synthetic data fairness

Algorithmic definitions can be extended to distributional fairness for synthetic data. Let $P(X), P'(X)$ be probability distributions with protected attributes $A \subset X$ and labels $Y \subset X$. Let $I(A, Y)$ be a definition of algorithmic fairness (e.g., FTU). Note that under CF, $I(A, Y)$ is a function of $R$ as well. We propose $(I(A, Y), P)$-fairness of a distribution $P'(X)$:

Definition 4. (Distributional fairness) A probability distribution $P'(X)$ is $(I(A, Y), P)$-fair iff the optimal predictor $\hat{Y} = f^*(X)$ of $Y$ trained on $P'(X)$ satisfies $I(A, Y)$ when evaluated on $P(X)$.

In other words, when we train a predictor on an $(I(A, Y), P)$-fair distribution $P'(X)$, we can only reach maximum performance if our model is fair. Note the explicit reference to $P(X)$, the distribution on which fairness is evaluated, which need not coincide with $P'(X)$. This is a small but relevant detail. For example, when training a model on data $\mathcal{D}' \sim P'(X)$, it could seem like the model is fair when we evaluate it on a hold-out set of the data (e.g., if we simply remove the protected attribute from the data). However, when we use the model for real-world predictions on data $\mathcal{D} \sim P(X)$, disparate impact may be observed due to the distributional shift. By extension, we define synthetic data as $(I(A, Y), P)$-fair iff it is sampled from an $(I(A, Y), P)$-fair distribution. Defining synthetic data as fair w.r.t. an optimal predictor is especially useful when we want to publish a dataset and do not trust end-users to consider anything but performance.²

Choosing $P(X)$. The setting $P(X) = P'(X)$ corresponds to data being fair with respect to itself. For synthetic data generation, this setting is uninteresting, as any dataset can be made fair by randomly sampling or removing $A$; if $A$ is random, the prediction should not directly or indirectly depend on it. This ignores, however, that a downstream user might use the trained model on a real-world dataset in which other variables $B$ are correlated with $A$, and thus their model (which is trained to use $B$ for predicting $Y$) will be biased. Of specific interest is the setting where $P(X)$ corresponds to the original data distribution $P_X(X)$ that contains unfairness. In this scenario, we construct $P'(X)$ by learning $P_X(X)$ and removing the unfair characteristics. The data from $P'(X)$ can be published online, and models trained on this data can be deployed fairly in real-life scenarios where data follows $P_X(X)$. Unless otherwise stated, we henceforth assume $P(X) = P_X(X)$.
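Definition 4 suggests a simple audit loop: train on the synthetic distribution and measure fairness on the evaluation distribution. Below is a minimal sketch of such an audit, reusing `demographic_parity_gap` from above; the classifier choice and the pandas-style data layout are our assumptions, not part of DECAF.

```python
from sklearn.ensemble import RandomForestClassifier

def distributional_fairness_audit(D_syn, D_real, features, target, protected):
    """Train a strong learner on synthetic data D' ~ P'(X), then measure the DP gap on real data P(X)."""
    clf = RandomForestClassifier(n_estimators=200).fit(D_syn[features], D_syn[target])
    y_hat = clf.predict(D_real[features])
    return demographic_parity_gap(y_hat, D_real[protected].to_numpy())
```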
4.3 Graphical perspective

As reflected in the widely accepted terms direct versus indirect discrimination, it is natural to define distributional fairness from a causal standpoint. Let G′ and G respectively denote the graphs underlying P′(X) (the synthetic data distribution, which we can control) and P(X) (the evaluation distribution, which we cannot control). Let ∂_G Y denote the Markov boundary of Y in graph G. We focus on the conditional fairness definition because it subsumes the definitions of DP and FTU (Section 4.1). Let R ⊂ X be the set of explanatory features.

Proposition 1. (CF: graphical condition) If A ⊥⊥_G B | R for all B ∈ ∂_{G′} Y (footnote 3), then distribution P′(X) is CF fair w.r.t. P(X) given explanatory factors R.

Footnote 3: ⊥⊥_G denotes d-separation in G. Here we define A ⊥⊥_G B | R to be true for all B ∈ R.

Proof. Without loss of generality, let us assume the label is binary (footnote 4). The optimal predictor is f*(X) = P(Y | X) = P(Y | ∂_{G′} Y). Thus, if ∂_{G′} Y is d-separated from A in G given R, the prediction Ŷ = f*(X) is independent of A given R and CF holds.

Footnote 4: If Y is continuous the same result holds, though the "optimal" predictor will depend on the statistic of interest, e.g., the mode, mean, median, or the entire distribution f(X, Y) ≈ P(Y | X).

Corollary 1. (CF debiasing) Any distribution P′(X) with graph G′ can be made CF fair w.r.t. P(X) and explanatory features R by removing from G′ the edges Ẽ = {(B → Y) and (Y → B) : ∀B ∈ ∂_{G′} Y for which A ⊥⊥_G B | R does not hold}.

Proof. First note that Ẽ is the necessary and sufficient set of edges to remove for (∀B ∈ ∂_{G′} Y : A ⊥⊥_G B | R) to be true; the result then follows from Proposition 1.

For FTU (i.e., R = X\A) and DP (i.e., R = ∅), this corollary simplifies to:

Corollary 2. (FTU debiasing) Any distribution P′(X) with graph G′ can be made FTU fair w.r.t. any distribution P(X) by removing, if present, i) the edge between A and Y, and ii) the edge A → C or Y → C for all shared children C.

Corollary 3. (DP debiasing) Any distribution P′(X) with graph G′ can be made DP fair w.r.t. P(X) by removing, if present, the edge between B and Y for any B ∈ ∂_{G′} Y for which A ⊥⊥_G B does not hold.

Figure 1 shows how the different fairness definitions lead to different sets of edges to be removed.

Faithfulness. Usually one assumes distributions are faithful w.r.t. their respective graphs, in which case the if-statement in Proposition 1 becomes an equivalence statement: fairness is only possible when the graphical conditions hold.

Theorem 1. If P(X) and P′(X) are faithful with respect to their respective graphs G and G′, then Proposition 1 becomes an equivalence statement, and Corollaries 1, 2 and 3 describe the necessary and sufficient sets of edges to remove for achieving CF, FTU and DP fairness, respectively.

Proof. Faithfulness implies that A ⊥⊥_{P(X)} B | R ⟹ A ⊥⊥_G B | R, e.g., [34]. Thus, if there exists B ∈ ∂_{G′} Y for which A ⊥⊥_G B | R does not hold, then A and B are dependent given R. Because B ∈ ∂_{G′} Y and P′(X) is faithful to G′, Ŷ = f*(X) depends on B, and thus Ŷ is not independent of A given R: CF does not hold.

Other definitions. Some authors define similar fairness measures in terms of directed paths (cf. d-separation) [11, 14, 18], which is a milder requirement as it allows correlation via non-causal paths. In Appendix C we highlight the graphical conditions for these definitions.
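Corollaries 1-3 translate directly into a d-separation computation. The sketch below is one possible reading (ours, not the DECAF implementation), representing G and G′ as networkx DiGraphs; it assumes a NetworkX version that provides nx.d_separated (added around 2.8 and renamed is_d_separator in 3.3):

```python
import networkx as nx

def markov_boundary(G: nx.DiGraph, y):
    # For a DAG: parents, children, and the children's other parents (spouses).
    parents = set(G.predecessors(y))
    children = set(G.successors(y))
    spouses = {p for c in children for p in G.predecessors(c)}
    return (parents | children | spouses) - {y}

def edges_to_remove(G_prime, G, a, y, R):
    """Corollary 1: cut every edge between Y and a Markov-boundary member B
    that is not d-separated from A given R in the evaluation graph G."""
    E_tilde = set()
    for b in markov_boundary(G_prime, y):
        if b in R:
            continue  # footnote 3: A is defined to be separated from members of R
        if not nx.d_separated(G, {a}, {b}, set(R)):
            E_tilde |= {e for e in ((b, y), (y, b)) if G_prime.has_edge(*e)}
    return E_tilde

# Toy graph from the earlier sketch: A -> B -> Y
G = nx.DiGraph([("A", "B"), ("B", "Y")])
print(edges_to_remove(G, G, "A", "Y", R=set()))   # DP (R = {}): {("B", "Y")}
print(edges_to_remove(G, G, "A", "Y", R={"B"}))   # CF with R = {B}: set()
```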
5 Method: DECAF

The primary design goal of DECAF is to generate fair synthetic data from unfair data. We separate DECAF into two stages. The training stage learns the causal conditionals that are observed in the data through a causally-informed GAN. At the generation (inference) stage, we intervene on the learned conditionals via Corollaries 1-3, in such a way that the generator creates fair data. We assume the underlying DGP's graph G is known; otherwise, G needs to be approximated first using any causal discovery method; see Section 6.

5.1 Training

Overview. This stage strives to learn the causal mechanisms {f_i(Pa(X_i), Z_i)}. Each structural equation f_i (Eq. 1) is modelled by a separate generator G_i : R^{|Pa(X_i)|+1} → R. We achieve this by employing a conditional GAN framework with a causal generator. This process is illustrated in Figure 2 and detailed below.

Features are generated sequentially following the topological ordering of the underlying causal DAG: first the root nodes are generated, then their children (from the generated causal parents), and so on. Variable X̂_i is modelled by the associated generator G_i:

X̂_i = G_i(P̂a(X_i), Z_i), ∀i    (2)

where P̂a(X_i) denotes the generated causal parents of X_i (for root nodes, the empty set), and each Z_i is independently sampled from P(Z) (e.g., a standard Gaussian). We denote the full sequential generator by G(Z) = [G_1(Z_1), ..., G_d(·, Z_d)]. Subsequently, the synthetic sample x̂ is passed to a discriminator D : R^d → R, which is trained to distinguish the generated samples from the original samples. A typical minimax objective is employed for creating generated samples that confuse the discriminator most:

max_{{G_i}_{i=1}^d} min_D E[log D(G(Z)) + log(1 − D(X))],    (3)

with X sampled from the original data. We optimize the discriminator and generator iteratively and add a regularization loss to both networks. Network parameters are updated using gradient descent. If we assume P_X(X) is compatible with graph G, we can show that the sequential generator has the same theoretical convergence guarantees as standard GANs [20]:

Theorem 2. (Convergence guarantee) Assume the following three conditions hold: (i) the data generating distribution P_X is Markov compatible with a known DAG G; (ii) generator G and discriminator D have enough capacity; and (iii) in every training step the discriminator is trained to optimality given fixed G, and G is subsequently updated so as to maximize the discriminator loss (Eq. 3). Then the generator distribution P_G converges to the true data distribution P_X.

Proof. See Appendix B.

Condition (i), compatibility with G, is a weaker assumption than assuming perfect causal knowledge. For example, suppose the Markov equivalence class of the true underlying DAG has been determined through causal discovery. In that case, any graph G in the equivalence class is compatible with the data and can thus be used for synthetic data generation. However, we note that debiasing can require the correct directionality for some definitions of fairness; see the Discussion.

Remark. The causal GAN we propose, DECAF, is simple and extendable to other generative methods, e.g., VAEs. Furthermore, from the post-processing theorem [35] it follows that DECAF can be directly used for generating private synthetic data by replacing the standard discriminator with a differentially private discriminator [2, 36].
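For concreteness, here is a minimal PyTorch sketch of the sequential generator (Eq. 2) and a single adversarial update (Eq. 3, written in the standard relabeled form with a non-saturating generator loss). The official implementation is the PyTorch Lightning code referenced in footnote 7; the class names, network sizes, and the toy three-node DAG below are our assumptions:

```python
import torch
import torch.nn as nn

def topological_order(parents):
    """Order the nodes of a DAG given as {node: [parent indices]}."""
    order, placed = [], set()
    while len(order) < len(parents):
        for i, pa in parents.items():
            if i not in placed and all(p in placed for p in pa):
                order.append(i)
                placed.add(i)
    return order

class CausalGenerator(nn.Module):
    """One small MLP generator G_i per feature, applied in topological order (Eq. 2)."""
    def __init__(self, parents, hidden=64):
        super().__init__()
        self.parents = parents
        self.order = topological_order(parents)
        self.gens = nn.ModuleDict({
            str(i): nn.Sequential(nn.Linear(len(pa) + 1, hidden),
                                  nn.ReLU(), nn.Linear(hidden, 1))
            for i, pa in parents.items()})

    def forward(self, z):
        cols = {}  # generated features, filled parents-first
        for i in self.order:
            pa = [cols[j] for j in self.parents[i]]
            cols[i] = self.gens[str(i)](torch.cat(pa + [z[:, i:i + 1]], dim=1))
        return torch.cat([cols[i] for i in sorted(cols)], dim=1)

# One adversarial step on a toy DAG X0 -> X1, {X0, X1} -> X2.
d, batch = 3, 128
gen = CausalGenerator({0: [], 1: [0], 2: [0, 1]})
disc = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x_real = torch.randn(batch, d)  # stand-in for a real data batch
z = torch.randn(batch, d)       # one independent noise column per feature

d_loss = (bce(disc(x_real), torch.ones(batch, 1)) +
          bce(disc(gen(z).detach()), torch.zeros(batch, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(disc(gen(z)), torch.ones(batch, 1))  # generator tries to fool D
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```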
5.2 Inference-time Debiasing

The training phase yields conditional generators {G_i}_{i=1}^d, which can be sequentially applied to generate data with the same output distribution as the original data (proof in Appendix B). The causal model allows us to go one step further: when the original data has characteristics that we do not want to propagate to the synthetic data (e.g., gender bias), individual generators can be modified to remove these characteristics. Given the generator's graph G = (X, E), fairness is achieved by removing edges such that the fairness criteria are met; see Section 4. Let Ẽ ⊆ E be the set of edges to remove for satisfying the required fairness definition. For CF, FTU and DP (footnote 5), the sets Ẽ are given by Corollaries 1, 2 and 3, respectively. Removing an edge constitutes what we call a "surrogate" do-operation [27] on the conditional distribution.

Footnote 5: Just like in Corollaries 1 and 3, we assume the downstream evaluation distribution is the same as the biased training data distribution: a predictor trained on the synthetic debiased data is required to give fair predictions in real-life settings with distribution P_X(X).

For example, suppose we only want to remove the edge (i → j). For a given sample, X_i is generated normally (Eq. 2), but X_j is generated using the modified

X̂_j^{do(X_i) = x̃_ij} = G_j(..., X_i = x̃_ij),    (4)

where X_i = x̃_ij is the surrogate parent assignment. The value X̂_j^{do(X_i)} can be interpreted as the counterfactual value of X̂_j, had X_i been equal to x̃_ij (see also [15]). Choosing the value of the surrogate variable x̃_ij requires background knowledge of the task and the bias at hand. For example, the surrogate variable x̃_ij can be sampled independently from a distribution for each synthetic sample (e.g., from the marginal P(X_i)), be set to a fixed value for all samples in the synthetic data (e.g., if X_i is gender, always set x̃_ij = male when generating the feature X_j: job opportunity), or be chosen so as to maximize/minimize some feature (e.g., x̃_ij = argmax_x X̂_j^{do(X_i)=x}). We emphasize that we do not set X_i = x̃_ij in the synthetic sample; X_i = x̃_ij is only used as a substitute for the removed dependence. We provide more details in Appendix E. More generally, we create surrogate variables for all edges we remove, {x̃_ij : (i → j) ∈ Ẽ}. Each sample is sequentially generated by Eq. 4, with a surrogate variable for each removed incoming edge.

Remark. Multiple datasets can be created based on different definitions of fairness and/or different downstream prediction targets. Because debiasing happens at inference-time, this does not require retraining the model.
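Continuing the sketch from Section 5.1, inference-time debiasing only changes how parent values are fed to each G_j: for every removed edge (i → j) the generated X̂_i is replaced by a surrogate x̃_ij when generating X̂_j, while X̂_i itself stays in the sample (Eq. 4). The surrogate below, which draws from a stand-in marginal, is a placeholder assumption, and gen is the CausalGenerator from the previous sketch:

```python
import torch

@torch.no_grad()
def sample_debiased(gen, n, removed_edges, surrogate):
    """Sequential sampling with surrogate do-operations (Eq. 4).

    removed_edges: set of (i, j) pairs whose dependence X_i -> X_j is cut.
    surrogate(i, j, n): returns an (n, 1) tensor of surrogate values x~_ij.
    """
    z = torch.randn(n, len(gen.parents))
    cols = {}
    for j in gen.order:
        pa = [surrogate(i, j, n) if (i, j) in removed_edges else cols[i]
              for i in gen.parents[j]]
        cols[j] = gen.gens[str(j)](torch.cat(pa + [z[:, j:j + 1]], dim=1))
    return torch.cat([cols[j] for j in sorted(cols)], dim=1)

# e.g., cut X0 -> X2 and substitute draws from a stand-in marginal of X0:
x_fair = sample_debiased(gen, 1000, removed_edges={(0, 2)},
                         surrogate=lambda i, j, n: torch.randn(n, 1))
```

Because only the parent substitution changes, switching between fairness definitions (different Ẽ) requires no retraining, in line with the Remark above.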
6 Experiments

In this section, we validate the performance of DECAF for synthesizing bias-free data based on two datasets: i) real data with existing bias, and ii) real data with synthetically injected bias. The aim of the former is to show that we can remove real, existing bias. The latter experiment provides a ground-truth unbiased target distribution, which means we can evaluate the quality of the synthetic dataset with respect to this ground truth. For example, when historically biased data is first debiased, a model trained on the synthetic data will likely make better predictions in contemporary, unbiased/less-biased settings than benchmarks that do not debias first. In both experiments, the ground-truth DAG is unknown. We use causal discovery to uncover the underlying DAG and show empirically that the performance is still good.

Benchmarks. We compare DECAF against the following benchmark generative methods: a GAN, a Wasserstein GAN with gradient penalty (WGAN-GP) [21], and FairGAN [17]. FairGAN is the only benchmark designed to generate synthetic fair data (footnote 6), whereas GAN and WGAN-GP only aim to match the original data's distribution, regardless of inherent underlying bias. For these benchmarks, fair data can be generated by naively removing the protected variable; we refer to these methods with the PR (protected removal) suffix and provide more experimental results and insight into PR in Appendix A. We benchmark DECAF debiasing in four ways: i) with no inference-time debiasing (DECAF-ND), ii) under FTU (DECAF-FTU), iii) under CF (DECAF-CF), and iv) under DP fairness (DECAF-DP). We provide DECAF implementation details in Appendix D.1 (footnote 7).

Footnote 6: The works of [11, 16] are not applicable here, as these methods are constrained to discrete data.
Footnote 7: PyTorch Lightning source code at https://github.com/vanderschaarlab/DECAF.

Evaluation criteria. We evaluate DECAF using the following metrics:
• Data quality is assessed using metrics of precision and recall [37, 38, 39]. Additionally, we evaluate all methods in terms of the AUROC of predicting the target variable using a downstream classifier (an MLP in these experiments) trained on the synthetic data.
• FTU is measured by calculating the difference between the predictions of a downstream classifier when setting A to 1 and to 0, respectively, i.e., |P_{A=0}(Ŷ | X) − P_{A=1}(Ŷ | X)|, while keeping all other features the same. This difference measures the direct influence of A on the prediction.
• DP is measured in terms of the Total Variation [15]: the difference between the predictions of a downstream classifier in terms of the positive-to-negative ratio between the different classes of the protected variable A, i.e., |P(Ŷ | A = 0) − P(Ŷ | A = 1)|.
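One plausible implementation of the two fairness metrics above (our reading: the paper does not spell out the aggregation, so the mean absolute difference used for FTU is an assumption; clf is any sklearn-style classifier and a_col is the column index of A):

```python
import numpy as np

def ftu_gap(clf, X, a_col):
    """Direct influence of A: set A to 0 and to 1, keep all other features fixed."""
    X0, X1 = X.copy(), X.copy()
    X0[:, a_col], X1[:, a_col] = 0, 1
    return np.abs(clf.predict_proba(X0)[:, 1]
                  - clf.predict_proba(X1)[:, 1]).mean()

def dp_gap(clf, X, a_col):
    """Total-variation-style DP gap between the groups A = 0 and A = 1."""
    y_hat, a = clf.predict(X), X[:, a_col]
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())
```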
6.1 Debiasing Census Data

In this experiment, we are given a biased dataset D ∼ P(X) and wish to create a synthetic (and debiased) dataset D′ with which a downstream classifier can be trained and subsequently rolled out in a setting with distribution P(X). We experiment on the Adult dataset [40], with a known bias between gender and income [10, 11]. The Adult dataset contains over 65,000 samples and has 11 attributes, such as age, education, gender, and income. Following [11], we treat gender as the protected variable and use income as the binary target variable representing whether a person earns over $50K or not. For the DAG G, we use the graph discovered and presented by Zhang et al. [11]. In Appendix D.2, we specify the edge removals for DECAF-DP, DECAF-CF, and DECAF-FTU.

Synthetic data is generated using each benchmark method, after which a separate MLP is trained on each dataset for computing the metrics; see Appendix D.2 for details. We repeat this experiment 10 times for each benchmark method and report the average in Table 2. As shown, DECAF-ND (no debiasing) performs amongst the best methods in terms of data utility. Because the data utility in this experiment is measured with respect to the original (biased) dataset, we see that DECAF-FTU, DECAF-CF, and DECAF-DP score lower than DECAF-ND, since these methods distort the distribution – with DECAF-DP distorting the label's conditional distribution most and thus scoring worst in terms of AUROC. Note also that a downstream user who is only focused on performance would choose the synthetic data from WGAN-GP or DECAF-ND, which are also the most biased methods. Thus, we see that there is a trade-off between fairness and data utility when the evaluation distribution P(X) is the original biased data.

6.2 Fair Credit Approval

In this experiment, direct bias that was not previously present is synthetically injected into a dataset D, resulting in a biased dataset D̃. We show how DECAF can remove the injected bias, resulting in a dataset D′ that can be used to train a downstream classifier. This is a relevant scenario if the training data D̃ does not follow the real-world distribution P(X), but instead a biased distribution P̃(X) (due to, e.g., historical bias). In this case, we want downstream models trained on synthetic data D′ to perform well on the real-world data D instead of D̃. We show that DECAF is successful at removing the bias and that this results in higher data utility than benchmark methods trained on D̃.

We use the Credit Approval dataset from [40], with graph G as discovered by the causal discovery algorithm FGES [41] using Tetrad [42] (details in Appendix D.3). We inject direct bias by decreasing the probability that a sample will have its credit approved, based on the chosen A (footnote 8). The credit_approval for this population was synthetically denied (set to 0) with some bias probability β, adding a directed edge between the label and the protected attribute.

Footnote 8: We let A equal (anonymized) ethnicity [43, 44, 45, 46], with the randomly chosen A = 4 as the disadvantaged population.

In Figure 3, we show the results of running our experiment 10 times over various bias probabilities β. We benchmark against FairGAN, as it is the only benchmark designed for synthetic debiased data. Note that in this case the causal DAG has only one indirect biased edge connected to the protected variable (see Appendix D), and thus DECAF-DP and DECAF-CF remove the same edges and coincide for this experiment. The plots show that DECAF-FTU and DECAF-DP have similar performance to FairGAN in terms of debiasing; however, all of the DECAF-* methods have significantly better data quality metrics: precision, recall, and AUROC. DECAF-DP is one of the best performers across all 5 of the evaluation metrics and has better DP performance under higher bias. As expected, DECAF-ND (no debiasing) has the same data quality performance in terms of precision and recall as DECAF-FTU and DECAF-DP, and has diminishing performance in terms of downstream AUROC, FTU, and DP as the bias strength increases. See Appendix D for other benchmarks, and the same experiment under hidden confounding in Appendix G.
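The injection procedure of Section 6.2 fits in a few lines (our sketch; the pandas representation and column names are assumptions, with A the anonymized ethnicity attribute of footnote 8):

```python
import numpy as np

def inject_direct_bias(df, beta, a_col="ethnicity", a_disadv=4,
                       label="credit_approval", seed=0):
    """Deny approval for the disadvantaged group with probability beta,
    adding a direct edge from the protected attribute to the label."""
    rng = np.random.default_rng(seed)
    biased = df.copy()
    mask = (df[a_col] == a_disadv) & (rng.random(len(df)) < beta)
    biased.loc[mask, label] = 0
    return biased
```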
7 Discussion

We have proposed DECAF, a causally-aware GAN that generates fair synthetic data by removing biased edges from the learned causal model. This sequential generation provides a natural way of removing such edges, with the advantage that the conditional generation of other features is left unaltered. We demonstrated on real datasets that the DECAF framework is both versatile and compatible with several popular definitions of fairness. Lastly, we provided theoretical guarantees on the generator's convergence and on the fairness of downstream models. We next discuss limitations as well as applications and opportunities for future work.

Definitions. DECAF achieves fairness by removing edges between features, as we have shown for the popular FTU and DP definitions. Other independence-based [30] fairness definitions can be achieved by DECAF too, as we show in Appendix C. Just like related debiasing works [10, 11, 16, 17], DECAF is not compatible with fairness definitions based on separation or sufficiency [30], as these definitions depend on the downstream model more explicitly (e.g., Equality of Opportunity [12]). More on this in Appendix C.

Incorrect DAG specification. Our method relies on the provision of causal structure in the form of a DAG for i) deciding the sequential order of feature generation, and ii) deciding which edges to remove to achieve fairness. This graph need not be known a priori and can be discovered instead. If discovered, the DAG need not equal the true DAG for many definitions of fairness, including FTU and DP; only some (in)dependence statements are required to be correct (see Proposition 1). This is shown in the Experiments, where the DAG was discovered with the PC algorithm [47] and TETRAD [42]. Furthermore, in Appendix B we prove that the causal generator converges to the right distribution for any graph that is Markov compatible with the data. We reiterate, however, that knowing (part of) the true graph is still helpful, because i) it often leads to simpler functions {f_i}_{i=1}^d to approximate (footnote 9), and ii) some causal fairness definitions do require correct directionality—see Appendix C. In Appendix F, we include an ablation study on how errors in the DAG specification affect data quality and downstream fairness.

Footnote 9: Specifically, this is the case if modeling the causal direction is simpler than modeling the anti-causal direction. For many classes of models this is true when algorithmic independence holds; see [34].

Causal sufficiency. We have focused on just one type of graph: causally-sufficient directed graphs. Extending this to undirected or mixed graphs is possible as long as the generation order reflects a valid factorization of the observed distribution. This includes settings with hidden confounders. We note that for some definitions of bias, e.g., counterfactual bias, directionality is essential and hidden confounders would need to be corrected for (which is not generally possible).

Time-series. We have focused on the tabular domain. The method can be extended to other domains with causal interaction between features, e.g., time-series. Application to image data is non-trivial, partly because, in this instance, the protected attribute (e.g., skin color) does not correspond to a single observed feature. DECAF might be extended to this setting in the future by first constructing a graph in a disentangled latent space (e.g., [24, 25]).

Social implications. Fairness is task- and context-dependent, requiring careful public debate. With that being said, DECAF empowers data issuers to take responsibility for downstream model fairness. We hope that this progresses the ubiquity of fairness in machine learning.

Acknowledgements

We would like to thank the reviewers for their time and valuable feedback. This research was funded by the Office of Naval Research and the WD Armstrong Trust.
1. What is the focus and contribution of the paper regarding fair data generation using GANs?
2. What are the strengths of the proposed approach, particularly in its theoretical motivation and practical solutions?
3. What are the weaknesses or concerns regarding the method's stability, scalability, and assumptions about causal graphs and selection bias?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions for improving the method, especially in addressing the concerns mentioned in the review?
Summary Of The Paper

This paper proposes a GAN-based, causally aware generative model that generates fair data for downstream models. The authors provide theoretical motivation and practical solutions for different causal fairness criteria.

Review

Pros:
- This paper considers using a causally aware generative model to generate fair datasets, which I believe is the right approach to this problem.
- The paper is presented clearly.
- The paper provides theory to motivate the proposed solution.
- The paper has a meaningful discussion of how the method can be used in practice (as a tool alongside causal discovery, and with Markov-compatible graphs).

Cons:
- Concerns regarding the stability of the method (point 2 below).
- Concerns that biased data leads to a wrong DAG, in which case fair data cannot be generated (point 5 below).

Point 5 is my main concern compared to the others; I look forward to your reply and am happy to change the score if it can be addressed.

Details:
1. You seem to need to assume that the underlying downstream task is known. If the task selects a different variable as the target, the fairness guarantee won't hold anymore, right? Do you have any thoughts on fair dataset generation in that case?
2. I have concerns about the scalability of the method. For tabular data with many variables and many edges, how would you implement it? Also, do you need to write a new piece of code for every new causal graph, or do you automate it in some way? If so, how?
3. Assuming the causal graph is known is a strong assumption, though this is acceptable since you discuss discovery methods. In that case, do you have results for your method with respect to errors from causal discovery?
4. Line 208: you mention Markov compatibility, and in the experiments you used Tetrad. I assume the output of Tetrad with a score-based method is a CPDAG. How did you use a CPDAG to design the generator? Did you just take one sample from the Markov equivalence class? How robust is your method to different DAGs in the Markov equivalence class? Do these make a difference in the experiments?
5. I believe causal sufficiency is an assumption here. However, bias in the dataset itself may be due to selection bias, and existing causal discovery methods have problems handling selection bias, so the root of the bias cannot be resolved (the root cause of unfairness is also the root cause of the failure of causal discovery methods). For example, in the 1920s, surveys about IQ, gender, etc. were taken at universities where there were fewer women, which led to wrong conclusions. Such selection bias makes causal discovery fail and also makes other ML methods unfair. [Check the causal discovery for MNAR papers.]
6. From the generative modeling point of view, the model seems to be just the GAN version of the CAMA model: Zhang C., Zhang K., Li Y. "A Causal View on Robustness of Neural Networks." Advances in Neural Information Processing Systems, 2020.
1. What is the focus and contribution of the paper regarding fair synthetic data generation?
2. What are the strengths and weaknesses of the proposed GAN-based framework?
3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
4. Are there any concerns or suggestions regarding the efficiency, comparisons with other methods, and handling of categorical columns?
Summary Of The Paper Review
Summary Of The Paper This work proposes a novel task of generating fair synthetic data with a GAN-based model. The generation process follows the order of a causal DAG, and fairness is achieved by debiasing during the inference phase. Experiments are carried out on the Adult and Credit Approval datasets to show the effectiveness of generating fair synthetic data. Review Originality: Generating fair synthetic data for machine learning is an interesting task. This paper designed a GAN-based framework which achieves fair data generation by simply intervening in the inference phase. The idea is novel and interesting. Quality: Some discussion on efficiency: Although generating synthetic data following a topological order is an interesting idea, it also raises a concern about the efficiency of the algorithm. The algorithm needs to run the generator d times, where d is the number of columns in the table. If there are only a few protected columns, it's possible to simplify the DAG by merging irrelevant columns. However, if the number of protected columns is also large, then there's no easy fix for the efficiency. Comparison with more baseline methods: The method is only compared with GAN-based baselines. Since statistical methods are also capable of generating synthetic data, such baselines should be included as well, for example PrivBayes [1]. Also, there are GANs (TableGAN [2], CTGAN [3]) and VAEs (TVAE [3]) designed specifically for synthetic table generation. The experiments are carried out on two small datasets, each with fewer than 20 columns. It's unclear how the model would perform on a dataset with more columns. Clarity: This paper clearly defines different notions of fairness for synthetic data. The high-level idea is also understandable. However, I would ask the authors to precisely describe the GAN model using either pseudocode or equations in the main paper, rather than burying all details deep in the supplementary material and even in the source code. For example, the weight sharing in D.1 is important and quite hard to understand. It's also unclear how the paper handles categorical (discrete) columns in tabular data. I also recommend the authors provide statistics for the fair credit dataset. Significance: This paper clearly lays out different definitions of fair synthetic data, and the proposed method is novel and interesting. These ideas are inspiring for future research. [1] PrivBayes: Private Data Release via Bayesian Networks [2] Data Synthesis based on Generative Adversarial Networks [3] Modeling Tabular data using Conditional GAN
NIPS
Title DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks Abstract Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving for this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data— while remaining truthful to the underlying data-generating process (DGP) —is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference-time debiasing, where biased edges can be strategically removed for satisfying user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and— in contrast to existing methods —is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator’s convergence and the fairness of downstream models. 1 Introduction Generative models are optimized to approximate the original data distribution as closely as possible. Most research focuses on three objectives [1]: fidelity, diversity, and privacy. The first and second are concerned with how closely synthetic samples resemble real data and how much of the real data’s distribution is covered by the new distribution, respectively. The third objective aims to avoid simply reproducing samples from the original data, which is important if the data contains privacy-sensitive information [2, 3]. We explore a much-less studied concept: synthetic data fairness. Motivation. Deployed machine learning models have been shown to reflect the bias of the data on which they are trained [4, 5, 6, 7, 8]. This has not only unfairly damaged the discriminated individuals but also society’s trust in machine learning as a whole. A large body of work has explored ways of detecting bias and creating fair predictors [9, 10, 11, 12, 13, 14, 15], while other authors propose debiasing the data itself [9, 10, 11, 16]. This work’s aim is related to the work of [17]: to generate fair synthetic data based on unfair data. Being able to generate fair data is important because end-users creating models based on publicly available data might be unaware they are inadvertently including ∗Equal contribution 35th Conference on Neural Information Processing Systems (NeurIPS 2021). bias or insufficiently knowledgeable to remove it from their model. Furthermore, by debiasing the data prior to public release, one can guarantee any downstream model satisfies desired fairness requirements by assigning the responsibility of debiasing to the data generating entities. Goal. From a biased dataset X , we are interested in learning a model G, that is able to generate an equivalent synthetic unbiased dataset X ′ with minimal loss of data utility. Furthermore, a downstream model trained on the synthetic data needs to make not only unbiased predictions on the synthetic data, but also on real-life datasets (as formalized in Section 4.2). Solution. We approach fairness from a causal standpoint because it provides an intuitive perspective on different definitions of fairness and discrimination [11, 13, 14, 15, 18]. 
We introduce DEbiasing CAusal Fairness (DECAF), a generative adversarial network (GAN) that leverages causal structure for synthesizing data. Specifically, DECAF is comprised of d generators (one for each variable) that learn the causal conditionals observed in the data. At inference-time, variables are synthesized topologically starting from the root nodes in the causal graph then synthesized sequentially, terminating at the leave nodes. Because of this, DECAF can remove bias at inference-time through targeted (biased) edge removal. As a result, various datasets can be created for desired (or evolving) definitions of fairness. Contributions. We propose a framework of using causal knowledge for fair synthetic data generation. We make three main contributions: i) DECAF, a causal GAN-based model for generating synthetic data, ii) a flexible causal approach for modifying this model such that it can generate fair data, and iii) guarantees that downstream models trained on the synthetic data will also give fair predictions in other settings. Experimentally, we show how DECAF is compatible with several fairness/discrimination definitions used in literature while still maintaining high downstream utility of generated data. 2 Related Works Here we focus on the related work concerned with data generation, in contrast to fairness definitions for which we provide a detailed overview in Section 4 and Appendix C. As an overview of how data generation methods relate to one another, we refer to Table 1 which presents all relevant related methods. Non-parametric generative modeling. The standard models for synthetic data generation are either based on VAEs [19] or GANs [2, 3, 20, 21]. While these models are well known for their highly realistic synthetic data, they are unable to alter the synthetic data distribution to encourage fairness (except for [17, 23], discussed below). Furthermore, these methods have no causal notion, which prohibits targeted interventions for synthesizing fair data (Section 4). We explicitly leave out CausalGAN [24] and CausalVAE [25], which appear similar by incorporating causality-derived ideas but are different in both method and aim (i.e., image generation). Fair data generation. In the bottom section of Table 1, we present methods that, in some way, alter the training data of classifiers to adhere to a notion of fairness [10, 11, 16, 17, 22, 23]. While these methods have proven successful, they lack some important features. For example, none of the related methods allow for post-hoc changes of the synthetic data distribution. This is an important feature, as each situation requires a different perspective on fairness and thus requires a flexible framework for selecting protected variables. Additionally, only [11, 23] allow a causal perspective on fairness, despite causal notions underlying multiple interpretations of what should be considered fair [13]. Furthermore, only [17, 22, 23] offer a flexible framework, while the others are limited to binary [10, 11] or discrete [16] settings. Xu et al. [23] also use a causal architecture for the generator, however their method is not as flexible—e.g. it does not easily extend to multiple protected attributes. Finally, in contrast to other methods DECAF is directly concerned with fairness of the downstream model—which is dependent on the setting in which the downstream model is employed (Section 4.2). In essence, from Table 1 we learn that DECAF is the only method that combines all key areas of interest. 
At last, we would like to mention [26], who aim to generate data that resembles a small unbiased reference dataset, by leveraging a large but biased dataset. This is very different to our aim, as we are interested in the downstream model’s fairness and explicit notions of fairness. 3 Preliminaries Let X ∈ X ⊆ Rd denote a random variable with distribution PX(X), with protected attributes A ∈ A ⊂ X and target variable Y ∈ Y ⊂ X , let Ŷ denote a prediction of Y . Let the data be given by D = {x(k)}Nk=1, where each x(k) ∈ D is a realization of X . We assume the data generating process can be represented by a directed acyclic graph (DAG)—such that the generation of features can be written as a structural equation model (SEM) [27]—and that this DAG is causally sufficient. Let Xi denote the ith feature in X with causal parents Pa(Xi) ⊂ {Xj : j 6= i}, the SEM is given by: Xi = fi(Pa(Xi), Zi),∀i (1) where {Zi}di=1 are independent random noise variables, that is Pa(Zi) = ∅, ∀i. Note that each fi is a deterministic function that places all randomness of the conditional P (Xi|Pa(Xi)) in the respective noise variable, Zi. 4 Fairness of Synthetic Data Algorithmic fairness is a popular topic (e.g., see [13, 28]), but fair synthetic data has been much less explored. This section highlights how the underlying graphs of the synthetic and downstream data determine whether a model trained on the synthetic data will be fair in practice. We start with the two most popular definitions of fairness, relating to the legal concepts of direct and indirect discrimination. We also explore conditional fairness [29], which is a generalization of the two. In Appendix C we discuss how the ideas in this section transfer to other independence-based definitions [30]. Throughout this section, we separate Y from X by defining X̄ = X\Y , and we will write X ← X̄ for ease of notation. 4.1 Algorithmic fairness The first definition is called Fairness Through Unawareness (e.g. [31]). Definition 1. (Fairness Through Unawareness (FTU): algorithm). A predictor f : X 7→ Ŷ is fair iff protected attributes A are not explicitly used by f to predict Ŷ . This definition prohibits disparate treatment [28, 32], and is related to the legal concept of direct discrimination, i.e., two equally qualified people deserve the same job opportunity independent of their race, gender, beliefs, among others. Though FTU fairness is commonly used, it might result in indirect discrimination: covariates that influence the prediction Ŷ might not be identically distributed across different groups a, a′, which means an algorithm might have disparate impact on a protected group [10]. The second definition of fairness, demographic parity [32], does not allow this: Definition 2. (Demographic Parity (DP): algorithm) A predictor Ŷ is fair iff A ⊥⊥ Ŷ , i.e. ∀a, a′ : P (Ŷ |A = a) = P (Ŷ |A = a′). Evidently, DP puts stringent constraints on the algorithm, whereas FTU might be too lenient. The third definition we include is based on the work of [29], related to unresolved discrimination [14]. The idea is that we do not allow indirect discrimination unless it runs through explanatory factors R ⊂ X . For example, in Simpson’s paradox [33] there seems to be a bias between gender and college admissions, but this is only due to women applying to more competitive courses. In this case, one would want to regard fairness conditioned on the choice of study [14]. Let us define this as conditional fairness: Definition 3. 
(Conditional Fairness (CF): algorithm) A predictor Ŷ is fair iff A ⊥⊥ Ŷ |R, i.e. ∀r, a, a′ : P (Ŷ |R = r,A = a) = P (Ŷ |R = r,A = a′). CF generalizes FTU and DP Note that conditional fairness is a generalization of FTU and DP, by setting R = X\A and R = ∅, respectively. In Appendix C we elaborate on the connection between these, and more, definitions. 4.2 Synthetic data fairness Algorithmic definitions can be extended to distributional fairness for synthetic data. Let P (X), P ′(X) be probability distributions with protected attributes A ⊂ X and labels Y ⊂ X . Let I(A, Y ) be a definition of algorithmic fairness (e.g., FTU). Note, that under CF, I(A, Y ) is a function of R as well. We propose (I(A, Y ), P )-fairness of distribution P ′(X): Definition 4. (Distributional fairness) A probability distribution P ′(X) is (I(A, Y ), P )-fair, iff the optimal predictor Ŷ = f∗(X) of Y trained on P ′(X) satisfies I(A, Y ) when evaluated on P (X). In other words, when we train a predictor on (I(A, Y ), P )-fair distribution P ′(X), we can only reach maximum performance if our model is fair. Note the explicit reference to P (X), the distribution on which fairness is evaluated, which does not need to coincide with P ′(X). This is a small but relevant detail. For example, when training a model on data D′ ∼ P ′(X) it could seem like the model is fair when we evaluate it on a hold-out set of the data (e.g., if we simply remove the protected attribute from the data). However, when we use the model for real-world predictions of data D ∼ P (X), disparate impact is possibly observed due to a distributional shift. By extension, we define synthetic data as (I(A, Y ), P )-fair, iff it is sampled from an (I(A, Y ), P )- fair distribution. Defining synthetic data as fair w.r.t. an optimal predictor is especially useful when we want to publish a dataset and do not trust end-users to consider anything but performance.2 Choosing P(X). The setting P (X) = P ′(X) corresponds to data being fair with respect to itself. For synthetic data generation, this setting is uninteresting as any dataset can be made fair by randomly sampling or removing A; if A is random, the prediction should not directly or indirectly depend on it. This ignores, however, that a downstream user might use the trained model on a real-world dataset in which other variables B are correlated with A, and thus their model (which is trained to use B for predicting Y ) will be biased. Of specific interest is the setting where P (X) corresponds to the original data distribution PX(X) that contains unfairness. In this scenario, we construct P ′(X) by learning PX(X) and removing the unfair characteristics. The data from P ′(X) can be published online, and models trained on this data can be deployed fairly in real-life scenarios where data follows PX(X). Unless otherwise stated, henceforth, we assume P (X) = PX(X). 4.3 Graphical perspective As reflected in the widely accepted terms direct versus indirect discrimination, it is natural to define distributional fairness from a causal standpoint. Let G′ and G respectively denote the graphs underlying P ′(X) (the synthetic data distribution which we can control) and P (X) (the evaluation distribution that we cannot control). Let ∂GY denote the Markov boundary of Y in graph G. We focus on the conditional fairness definition because it subsumes the definition of DP and FTU (Section 4.1). Let R ⊂ X be the set of explanatory features. Proposition 1. 
(CF: graphical condition) If for allB ∈ ∂G′Y ,A ⊥⊥G B|R,3 then distribution P ′(X) is CF fair w.r.t P (X) given explanatory factors R. 2Finding the optimal predictor is possible if we assume the downstream user employs any universal function approximator (e.g., MLP) and the amount of synthetic data is sufficiently large. 3Where ⊥⊥G denotes d-separation in G. Here we define A ⊥⊥G B|R to be true for all B ∈ R. Proof. Without loss of generality, let us assume the label is binary.4 The optimal predictor f∗(X) = P (Y |X) = P (Y |∂G′Y ). Thus, if ∂G′Y is d-separated from A in G given R, prediction Ŷ = f∗(X) is independent of A given R and CF holds. Corollary 1. (CF debiasing) Any distribution P ′(X) with graph G′ can be made CF fair w.r.t. P (X) and explanatory features R by removing from G′ edges Ẽ = {(B → Y ) and (Y → B) : ∀B ∈ ∂G′Y with B 6⊥⊥G A|R}. Proof. First note Ẽ is the necessary and sufficient set of edges to remove for (∀B ∈ ∂G′Y , A ⊥⊥G B|R) to be true, subsequently the result follows from Proposition 1. For FTU (i.e. R = X\A) and DP (i.e. R = ∅), this corollary simplifies to: Corollary 2. (FTU debiasing) Any distribution P ′(X) with graph G′ can be made FTU fair w.r.t. any distribution P (X) by removing, if present, i) the edge between A and Y and ii) the edge A → C or Y → C for all shared children C. Corollary 3. (DP debiasing) Any distribution P ′(X) with graph G′ can be made DP fair w.r.t. P (X) by removing, if present, the edge between B and Y for any B ∈ ∂G′Y with B 6⊥⊥G A. Figure 1 shows how the different fairness definitions lead to different sets of edges to be removed. Faithfulness. Usually one assumes distributions are faithful w.r.t. their respective graphs, in which case the if-statement in Proposition 1 become equivalence statements: fairness is only possible when the graphical conditions hold. Theorem 1. If P (X) and P ′(X) are faithful with respect to their respective graphs G and G′, then Proposition 1 becomes an equivalence statement and Corollaries 1, 2 and 3 describe the necessary and sufficient sets of edges to remove for achieving CF, FTU and DP fairness, respectively. Proof. Faithfulness implies A ⊥⊥P (X) B|R =⇒ A ⊥⊥G B|R, e.g. [34]. Thus, if ∃B ∈ ∂G′Y for which A 6⊥⊥G B|R, then A 6⊥⊥ B|R. Because B ∈ ∂G′Y and P ′(X) is faithful to G′, Ŷ = f∗(X) depends on B, and thus Ŷ 6⊥⊥ A|R: CF does not hold. Other definitions. Some authors define similar fairness measures in terms of directed paths (cf. d-separation) [11, 14, 18], which is a milder requirement as it allows correlation via non-causal paths. In Appendix C we highlight the graphical conditions for these definitions. 5 Method: DECAF The primary design goal of DECAF is to generate fair synthetic data from unfair data. We separate DECAF into two stages. The training stage learns the causal conditionals that are observed in the data through a causally-informed GAN. At the generation (inference) stage, we intervene on the learned conditionals via Corollaries 1-3, in such a way that the generator creates fair data. We assume the underlying DGP’s graph G is known; otherwise, G needs to be approximated first using any causal discovery method, see Section 6. 5.1 Training Overview. This stage strives to learn the causal mechanisms {fi(Pa(Xi), Zi)}. Each structural equation fi (Eq. 1) is modelled by a separate generator Gi : R|Pa(Xi)|+1 → R. We achieve this by employing a conditional GAN framework with a causal generator. This process is illustrated in Figure 2 and detailed below. 
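To make the per-variable generator of Section 5.1 concrete, below is a minimal PyTorch sketch of one such G_i, mapping (generated parent values, noise Z_i) to a value for feature X_i. The class name, layer sizes, and Gaussian noise are our own illustrative assumptions, not the released DECAF implementation.

import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Sketch of one structural-equation generator G_i: (Pa(X_i), Z_i) -> X_i."""
    def __init__(self, n_parents: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_parents + 1, hidden),  # +1 input column for the noise Z_i
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, parents: torch.Tensor) -> torch.Tensor:
        z = torch.randn(parents.shape[0], 1)            # Z_i ~ P(Z), here N(0, 1)
        return self.net(torch.cat([parents, z], dim=1))

A full model would hold one such module per feature and call them in topological order, as detailed next.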
⁴If Y is continuous the same result holds, though the "optimal" predictor will depend on the statistic of interest, e.g. mode, mean, median, or the entire distribution f(X, Y) ≈ P(Y|X). Features are generated sequentially following the topological ordering of the underlying causal DAG: first root nodes are generated, then their children (from generated causal parents), etc. Variable X̂_i is modelled by the associated generator G_i: X̂_i = G_i(P̂a(X_i), Z_i) ∀i, (2) where P̂a(X_i) denotes the generated causal parents of X_i (for root nodes the empty set), and each Z_i is independently sampled from P(Z) (e.g. standard Gaussian). We denote the full sequential generator by G(Z) = [G_1(Z_1), ..., G_d(·, Z_d)]. Subsequently, the synthetic sample x̂ is passed to a discriminator D : R^d → R, which is trained to distinguish the generated samples from original samples. A typical minimax objective is employed for creating generated samples that confuse the discriminator most: max_{{G_i}_{i=1}^d} min_D E[log D(G(Z)) + log(1 − D(X))], (3) with X sampled from the original data. We optimize the discriminator and generator iteratively and add a regularization loss to both networks. Network parameters are updated using gradient descent. If we assume P_X(X) is compatible with graph G, we can show that the sequential generator has the same theoretical convergence guarantees as standard GANs [20]: Theorem 2. (Convergence guarantee) Assuming the following three conditions hold: (i) the data generating distribution P_X is Markov compatible with a known DAG G; (ii) generator G and discriminator D have enough capacity; and (iii) in every training step the discriminator is trained to optimality given fixed G, and G is subsequently updated so as to maximize the discriminator loss (Eq. 3); then the generator distribution P_G converges to the true data distribution P_X. Proof. See Appendix B. Condition (i), compatibility with G, is a weaker assumption than assuming perfect causal knowledge. For example, suppose the Markov equivalence class of the true underlying DAG has been determined through causal discovery. In that case, any graph G in the equivalence class is compatible with the data and can thus be used for synthetic data generation. However, we note that debiasing can require the correct directionality for some definitions of fairness, see Discussion. Remark. The causal GAN we propose, DECAF, is simple and extendable to other generative methods, e.g., VAEs. Furthermore, from the post-processing theorem [35] it follows that DECAF can be directly used for generating private synthetic data by replacing the standard discriminator with a differentially private discriminator [2, 36]. 5.2 Inference-time Debiasing The training phase yields conditional generators {G_i}_{i=1}^d, which can be sequentially applied to generate data with the same output distribution as the original data (proof in Appendix B). The causal model allows us to go one step further: when the original data has characteristics that we do not want to propagate to the synthetic data (e.g., gender bias), individual generators can be modified to remove these characteristics. Given the generator's graph G = (X, E), fairness is achieved by removing edges such that the fairness criteria are met, see Section 4. Let Ẽ ⊆ E be the set of edges to remove for satisfying the required fairness definition. For CF, FTU and DP,⁵ the sets Ẽ are given by Corollaries 1, 2 and 3, respectively. Removing an edge constitutes what we call a "surrogate" do-operation [27] on the conditional distribution.
For example, suppose we only want to remove (i → j). For a given sample, X_i is generated normally (Eq. 2), but X_j is generated using the modified: X̂_j^{do(X_i)=x̃_ij} = G_j(..., X_i = x̃_ij), (4) where X_i = x̃_ij is the surrogate parent assignment. The value X̂_j^{do(X_i)} can be interpreted as the counterfactual value of X̂_j, had X_i been equal to x̃_ij (see also [15]). Choosing the value of the surrogate variable x̃_ij requires background knowledge of the task and bias at hand. For example, the surrogate variable x̃_ij can be sampled independently from a distribution for each synthetic sample (e.g., the marginal P(X_i)), be set to a fixed value for all samples in the synthetic data (e.g., if X_i: gender, always set x̃_ij = male when generating feature X_j: job opportunity), or be chosen so as to maximize/minimize some feature (e.g. x̃_ij = argmax_x X̂_j^{do(X_i)=x}). We emphasize that we do not set X_i = x̃_ij in the synthetic sample; X_i = x̃_ij is only used as a substitute for the removed dependence. We provide more details in Appendix E. More generally, we create surrogate variables for all edges we remove, {x̃_ij : (i → j) ∈ Ẽ}. Each sample is sequentially generated by Eq. 4, with a surrogate variable for each removed incoming edge. Remark. Multiple datasets can be created based on different definitions of fairness and/or different downstream prediction targets. Because debiasing happens at inference time, this does not require retraining the model. 6 Experiments In this section, we validate the performance of DECAF for synthesizing bias-free data based on two datasets: i) real data with existing bias and ii) real data with synthetically injected bias. The aim of the former is to show that we can remove real, existing bias. The latter experiment provides a ground-truth unbiased target distribution, which means we can evaluate the quality of the synthetic dataset with respect to this ground truth. For example, when historically biased data is first debiased, a model trained on the synthetic data will likely make better predictions in contemporary, unbiased/less-biased settings than benchmarks that do not debias first. In both experiments, the ground-truth DAG is unknown. We use causal discovery to uncover the underlying DAG and show empirically that the performance is still good. Benchmarks. We compare DECAF against the following benchmark generative methods: a GAN, a Wasserstein GAN with gradient penalty (WGAN-GP) [21] and FairGAN [17]. FairGAN is the only benchmark designed to generate synthetic fair data,⁶ whereas GAN and WGAN-GP only aim to match the original data's distribution, regardless of inherent underlying bias. For these benchmarks, fair data can be generated by naively removing the protected variable – we refer to these methods with the PR (protected removal) suffix and provide more experimental results and insight into PR in Appendix A. We benchmark DECAF debiasing in four ways: i) with no inference-time debiasing (DECAF-ND), ii) under FTU (DECAF-FTU), iii) under CF (DECAF-CF), and iv) under DP fairness (DECAF-DP). We provide DECAF⁷ implementation details in Appendix D.1. ⁵Just like in Corollaries 1 and 3, we assume the downstream evaluation distribution is the same as the biased training data distribution: a predictor trained on the synthetic debiased data is required to give fair predictions in real-life settings with distribution P_X(X). ⁶The works of [11, 16] are not applicable here, as these methods are constrained to discrete data. Evaluation criteria.
We evaluate DECAF using the following metrics: • Data quality is assessed using metrics of precision and recall [37, 38, 39]. Additionally, we evaluate all methods in terms of AUROC of predicting the target variable using a downstream classifier (MLP in these experiments) trained on synthetic data. • FTU is measured by calculating the difference between the predictions of a downstream classifier for setting A to 1 and 0, respectively, such that |PA=0(Ŷ |X) − PA=1(Ŷ |X)|, while keeping all other features the same. This difference measures the direct influence of A on the prediction. • DP is measured in terms of the Total Variation [15]: the difference between the predictions of a downstream classifier in terms of positive to negative ratio between the different classes of protected variable A, i.e., |P (Ŷ |A = 0)− P (Ŷ |A = 1)|. 6.1 Debiasing Census Data In this experiment, we are given a biased dataset D ∼ P (X) and wish to create a synthetic (and debiased) dataset D′, with which a downstream classifier can be trained and subsequently be rolled out in a setting with distribution P (X). We experiment on the Adult dataset [40], with known bias between gender and income [10, 11]. The Adult dataset contains over 65,000 samples and has 11 attributes, such as age, education, gender, income, among others. Following [11], we treat gender as the protected variable and use income as the binary target variable representing whether a person earns over $50K or not. For DAG G, we use the graph discovered and presented by Zhang et al. [11]. In Appendix D.2, we specify edge removals for DECAF-DP, DECAF-CF, and DECAF-FTU. Synthetic data is generated using each benchmark method, after which a separate MLP is trained on each dataset for computing the metrics; see Appendix D.2 for details. We repeat this experiment 10 times for each benchmark method and report the average in Table 2. As shown, DECAF-ND (no debiasing) performs amongst the best methods in terms of data utility. Because the data utility in this experiment is measured with respect to the original (biased) dataset, we see that the methods DECAF-FTU, DECAF-CF, and DECAF-DP score lower than DECAF-ND because these methods distort the distribution – with DECAF-DP distorting the label’s conditional distribution most and thus scoring worst in terms of AUROC. Note also that a downstream user who is only focused on performance would choose the synthetic data from WGAN-GP or DECAF-ND, which are also the most biased methods. Thus, we see that there is a trade-off between fairness and data utility when the evaluation distribution P (X) is the original biased data. 6.2 Fair Credit Approval In this experiment, direct bias, which was not previously present, is synthetically injected into a dataset D resulting in a biased dataset D̃. We show how DECAF can remove the injected bias, resulting in dataset D′ that can be used to train a downstream classifier. This is a relevant scenario if the training data D̃ does not follow real-world distribution P (X), but instead a biased distribution P̃ (X) (due to, e.g., historical bias). In this case, we want downstream models trained on synthetic data D′ to perform well on the real-world data D instead of D̃. We show that DECAF is successful at removing the bias and how this results in higher data utility than benchmarks methods trained on D̃. 7PyTorch Lightning source code at https://github.com/vanderschaarlab/DECAF. 
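As an illustration of the evaluation criteria described above, the FTU and DP gaps could be computed for a fitted classifier roughly as follows. This is our sketch, not the paper's evaluation code: `clf` with an sklearn-style predict_proba/predict interface and the protected attribute in column 0 are assumptions.

import numpy as np

def ftu_gap(clf, X, a_col=0):
    """|P_{A=0}(Ŷ|X) − P_{A=1}(Ŷ|X)|: flip only A, keep all other features."""
    x0, x1 = X.copy(), X.copy()
    x0[:, a_col], x1[:, a_col] = 0, 1
    p0 = clf.predict_proba(x0)[:, 1]
    p1 = clf.predict_proba(x1)[:, 1]
    return float(np.abs(p0 - p1).mean())   # direct influence of A

def dp_gap(clf, X, a_col=0):
    """Total variation for binary Ŷ: |P(Ŷ=1|A=0) − P(Ŷ=1|A=1)|."""
    yhat = clf.predict(X)
    a = X[:, a_col]
    return float(abs(yhat[a == 0].mean() - yhat[a == 1].mean()))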
We use the Credit Approval dataset from [40], with graph G as discovered by the causal discovery algorithm FGES [41] using Tetrad [42] (details in Appendix D.3). We inject direct bias by decreasing the probability that a sample will have their credit approved based on the chosen A.8 The credit_approval for this population was synthetically denied (set to 0) with some bias probability β, adding a directed edge between label and protected attribute. In Figure 3, we show the results of running our experiment 10 times over various bias probabilities β. We benchmark against FairGAN, as it is the only benchmark designed for synthetic debiased data. Note that in this case, the causal DAG has only one indirect biased edge between the protected variable (see Appendix D), and thus DECAF-DP and DECAF-CF remove the same edges and are the same for this experiment. The plots show that DECAF-FTU and DECAF-DP have similar performance to FairGAN in terms of debiasing; however, all of the DECAF-* methods have significantly better data quality metrics: precision, recall, and AUROC. DECAF-DP is one of the best performers across all 5 of the evaluation metrics and has better DP performance under higher bias. As expected, DECAF-ND (no debiasing) has the same data quality performance in terms of precision and recall as DECAF-FTU and DECAF-DP and has diminishing performance in terms of downstream AUROC, FTU, and DP as bias strength increases. See Appendix D for other benchmarks, and the same experiment under hidden confounding in Appendix G. 7 Discussion We have proposed DECAF, a causally-aware GAN that generates fair synthetic data. DECAF’s sequential generation provides a natural way of removing these edges, with the advantage that the conditional generation of other features is left unaltered. We demonstrated on real datasets that the DECAF framework is both versatile and compatible with several popular definitions of fairness. Lastly, we provided theoretical guarantees on the generator’s convergence and fairness of downstream models. We next discuss limitations as well as applications and opportunities for future work. Definitions. DECAF achieves fairness by removing edges between features, as we have shown for the popular FTU and DP definitions. Other independence-based [30] fairness definitions can be achieved by DECAF too, as we show in Appendix C. Just like related debiasing works [10, 11, 16, 17], DECAF is not compatible with fairness definitions based on separation or sufficiency [30], as these definitions depend on the downstream model more explicitly (e.g. Equality of Opportunity [12]). More on this in Appendix C. Incorrect DAG specification. Our method relies on the provision of causal structure in the form of a DAG for i) deciding the sequential order of feature generation and ii) deciding which edges to remove to achieve fairness. This graph need not be known a priori and can be discovered instead. If discovered, the DAG needs not equal the true DAG for many definitions of fairness, including FTU and DP, but only some (in)dependence statements are required to be correct (see Proposition 1). This is shown in the Experiments, where the DAG was discovered with the PC algorithm [47] and TETRAD [42]. Furthermore, in Appendix B we prove that the causal generator converges to the right distribution for any graph that is Markov compatible with the data. 
We reiterate, however, that knowing (part of) the true graph is still helpful because i) it often leads to simpler functions {f_i}_{i=1}^d to approximate,⁹ and ii) some causal fairness definitions do require correct directionality—see Appendix C. In Appendix F, we include an ablation study on how errors in the DAG specification affect data quality and downstream fairness. Causal sufficiency. We have focused on just one type of graph: causally-sufficient directed graphs. Extending this to undirected or mixed graphs is possible as long as the generation order reflects a valid factorization of the observed distribution. This includes settings with hidden confounders. We note that for some definitions of bias, e.g., counterfactual bias, directionality is essential and hidden confounders would need to be corrected for (which is not generally possible). Time-series. We have focused on the tabular domain. The method can be extended to other domains with causal interaction between features, e.g., time-series. Application to image data is non-trivial, partly because, in this instance, the protected attribute (e.g., skin color) does not correspond to a single observed feature. DECAF might be extended to this setting in the future by first constructing a graph in a disentangled latent space (e.g., [24, 25]). Social implications. Fairness is task- and context-dependent, requiring careful public debate. With that being said, DECAF empowers data issuers to take responsibility for downstream model fairness. We hope that this progresses the ubiquity of fairness in machine learning. Acknowledgements We would like to thank the reviewers for their time and valuable feedback. This research was funded by the Office of Naval Research and the WD Armstrong Trust. ⁸We let A equal (anonymized) ethnicity [43, 44, 45, 46], with randomly chosen A = 4 as the disadvantaged population. ⁹Specifically, this is the case if modeling the causal direction is simpler than modeling the anti-causal direction. For many classes of models this is true when algorithmic independence holds, see [34].
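To tie the paper's Sections 5.1 and 5.2 together, here is a minimal sketch of sequential generation with surrogate do-operations, assuming a networkx DAG and per-feature generator callables. All names here are illustrative assumptions; this is not the released implementation.

import networkx as nx
import numpy as np

def generate_one(g, generators, removed_edges, surrogates):
    """g: nx.DiGraph over feature names; generators[j](parents, z) -> value of
    X_j; surrogates[(i, j)]() -> surrogate value x̃_ij for a removed edge."""
    sample = {}
    for j in nx.topological_sort(g):       # roots first, leaves last
        parents = [
            surrogates[(i, j)]() if (i, j) in removed_edges else sample[i]
            for i in g.predecessors(j)
        ]
        sample[j] = generators[j](parents, np.random.randn())  # noise Z_j
    return sample

# e.g. data = [generate_one(g, gens, removed, surr) for _ in range(10_000)]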
1. What is the main contribution of the paper regarding fairness in machine learning systems? 2. What are the strengths and weaknesses of the proposed pre-processing approach using a GAN? 3. Do you have any questions regarding the theoretical guarantees provided in the paper? 4. How does the reviewer assess the originality and relevance of the paper's content? 5. What is the significance of the paper's approach in practical applications?
Summary Of The Paper Review
Summary Of The Paper This work tackles the problem of fairness from a pre-processing perspective by first transforming the data with some form of generative model. In particular, the authors use a GAN for this generation and embed a causal structure within its layers. Theoretical guarantees are then presented on convergence and on the fairness of models that use the processed data on other tasks. Review The goal of the submission is to make progress towards handling bias in machine learning systems, in particular by taking a pre-processing approach. Overall, such an approach has modularity, since any high-performing or even biased algorithm can then use this method to produce fair ML systems, attesting to the significance of this approach. The major strength of this submission is precisely this modularity, which as illustrated seems to make it a practical method for practitioners. The weakness I see in this work is the theoretical guarantees, which seem to be quite implicit and lacking - for example, can the authors provide some kind of convergence rates as opposed to just guarantees? On first reading of 'convergence guarantees of the generator', I expected some form of distributional or functional convergence (since they mention the generator will converge), so it would be better if the authors could be more candid about this contribution in the abstract. However, I do not think this is such a major concern. I am not an expert in this area; however, the proposal of a GAN-based method that embeds causal structure seems quite original, and the relevant work seems to be discussed. The paper is well written, with the relevant notation self-contained and the motivation spelled out clearly. I briefly checked the proofs and they appear correct. Overall, I rate this paper above the threshold due to the modularity of the approach; however, since this is not my area of expertise, I cannot comment entirely on the novelty. I think this work is highly relevant to the NeurIPS community and can certainly benefit practitioners. One question I had for the authors to better understand this work: How would this method perform if you trained a GAN on the pre-processed data to create larger datasets?
NIPS
Title DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks Abstract Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving for this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data— while remaining truthful to the underlying data-generating process (DGP) —is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference-time debiasing, where biased edges can be strategically removed for satisfying user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and— in contrast to existing methods —is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator’s convergence and the fairness of downstream models. 1 Introduction Generative models are optimized to approximate the original data distribution as closely as possible. Most research focuses on three objectives [1]: fidelity, diversity, and privacy. The first and second are concerned with how closely synthetic samples resemble real data and how much of the real data’s distribution is covered by the new distribution, respectively. The third objective aims to avoid simply reproducing samples from the original data, which is important if the data contains privacy-sensitive information [2, 3]. We explore a much-less studied concept: synthetic data fairness. Motivation. Deployed machine learning models have been shown to reflect the bias of the data on which they are trained [4, 5, 6, 7, 8]. This has not only unfairly damaged the discriminated individuals but also society’s trust in machine learning as a whole. A large body of work has explored ways of detecting bias and creating fair predictors [9, 10, 11, 12, 13, 14, 15], while other authors propose debiasing the data itself [9, 10, 11, 16]. This work’s aim is related to the work of [17]: to generate fair synthetic data based on unfair data. Being able to generate fair data is important because end-users creating models based on publicly available data might be unaware they are inadvertently including ∗Equal contribution 35th Conference on Neural Information Processing Systems (NeurIPS 2021). bias or insufficiently knowledgeable to remove it from their model. Furthermore, by debiasing the data prior to public release, one can guarantee any downstream model satisfies desired fairness requirements by assigning the responsibility of debiasing to the data generating entities. Goal. From a biased dataset X , we are interested in learning a model G, that is able to generate an equivalent synthetic unbiased dataset X ′ with minimal loss of data utility. Furthermore, a downstream model trained on the synthetic data needs to make not only unbiased predictions on the synthetic data, but also on real-life datasets (as formalized in Section 4.2). Solution. We approach fairness from a causal standpoint because it provides an intuitive perspective on different definitions of fairness and discrimination [11, 13, 14, 15, 18]. 
We introduce DEbiasing CAusal Fairness (DECAF), a generative adversarial network (GAN) that leverages causal structure for synthesizing data. Specifically, DECAF is comprised of d generators (one for each variable) that learn the causal conditionals observed in the data. At inference-time, variables are synthesized topologically starting from the root nodes in the causal graph then synthesized sequentially, terminating at the leave nodes. Because of this, DECAF can remove bias at inference-time through targeted (biased) edge removal. As a result, various datasets can be created for desired (or evolving) definitions of fairness. Contributions. We propose a framework of using causal knowledge for fair synthetic data generation. We make three main contributions: i) DECAF, a causal GAN-based model for generating synthetic data, ii) a flexible causal approach for modifying this model such that it can generate fair data, and iii) guarantees that downstream models trained on the synthetic data will also give fair predictions in other settings. Experimentally, we show how DECAF is compatible with several fairness/discrimination definitions used in literature while still maintaining high downstream utility of generated data. 2 Related Works Here we focus on the related work concerned with data generation, in contrast to fairness definitions for which we provide a detailed overview in Section 4 and Appendix C. As an overview of how data generation methods relate to one another, we refer to Table 1 which presents all relevant related methods. Non-parametric generative modeling. The standard models for synthetic data generation are either based on VAEs [19] or GANs [2, 3, 20, 21]. While these models are well known for their highly realistic synthetic data, they are unable to alter the synthetic data distribution to encourage fairness (except for [17, 23], discussed below). Furthermore, these methods have no causal notion, which prohibits targeted interventions for synthesizing fair data (Section 4). We explicitly leave out CausalGAN [24] and CausalVAE [25], which appear similar by incorporating causality-derived ideas but are different in both method and aim (i.e., image generation). Fair data generation. In the bottom section of Table 1, we present methods that, in some way, alter the training data of classifiers to adhere to a notion of fairness [10, 11, 16, 17, 22, 23]. While these methods have proven successful, they lack some important features. For example, none of the related methods allow for post-hoc changes of the synthetic data distribution. This is an important feature, as each situation requires a different perspective on fairness and thus requires a flexible framework for selecting protected variables. Additionally, only [11, 23] allow a causal perspective on fairness, despite causal notions underlying multiple interpretations of what should be considered fair [13]. Furthermore, only [17, 22, 23] offer a flexible framework, while the others are limited to binary [10, 11] or discrete [16] settings. Xu et al. [23] also use a causal architecture for the generator, however their method is not as flexible—e.g. it does not easily extend to multiple protected attributes. Finally, in contrast to other methods DECAF is directly concerned with fairness of the downstream model—which is dependent on the setting in which the downstream model is employed (Section 4.2). In essence, from Table 1 we learn that DECAF is the only method that combines all key areas of interest. 
At last, we would like to mention [26], who aim to generate data that resembles a small unbiased reference dataset, by leveraging a large but biased dataset. This is very different to our aim, as we are interested in the downstream model’s fairness and explicit notions of fairness. 3 Preliminaries Let X ∈ X ⊆ Rd denote a random variable with distribution PX(X), with protected attributes A ∈ A ⊂ X and target variable Y ∈ Y ⊂ X , let Ŷ denote a prediction of Y . Let the data be given by D = {x(k)}Nk=1, where each x(k) ∈ D is a realization of X . We assume the data generating process can be represented by a directed acyclic graph (DAG)—such that the generation of features can be written as a structural equation model (SEM) [27]—and that this DAG is causally sufficient. Let Xi denote the ith feature in X with causal parents Pa(Xi) ⊂ {Xj : j 6= i}, the SEM is given by: Xi = fi(Pa(Xi), Zi),∀i (1) where {Zi}di=1 are independent random noise variables, that is Pa(Zi) = ∅, ∀i. Note that each fi is a deterministic function that places all randomness of the conditional P (Xi|Pa(Xi)) in the respective noise variable, Zi. 4 Fairness of Synthetic Data Algorithmic fairness is a popular topic (e.g., see [13, 28]), but fair synthetic data has been much less explored. This section highlights how the underlying graphs of the synthetic and downstream data determine whether a model trained on the synthetic data will be fair in practice. We start with the two most popular definitions of fairness, relating to the legal concepts of direct and indirect discrimination. We also explore conditional fairness [29], which is a generalization of the two. In Appendix C we discuss how the ideas in this section transfer to other independence-based definitions [30]. Throughout this section, we separate Y from X by defining X̄ = X\Y , and we will write X ← X̄ for ease of notation. 4.1 Algorithmic fairness The first definition is called Fairness Through Unawareness (e.g. [31]). Definition 1. (Fairness Through Unawareness (FTU): algorithm). A predictor f : X 7→ Ŷ is fair iff protected attributes A are not explicitly used by f to predict Ŷ . This definition prohibits disparate treatment [28, 32], and is related to the legal concept of direct discrimination, i.e., two equally qualified people deserve the same job opportunity independent of their race, gender, beliefs, among others. Though FTU fairness is commonly used, it might result in indirect discrimination: covariates that influence the prediction Ŷ might not be identically distributed across different groups a, a′, which means an algorithm might have disparate impact on a protected group [10]. The second definition of fairness, demographic parity [32], does not allow this: Definition 2. (Demographic Parity (DP): algorithm) A predictor Ŷ is fair iff A ⊥⊥ Ŷ , i.e. ∀a, a′ : P (Ŷ |A = a) = P (Ŷ |A = a′). Evidently, DP puts stringent constraints on the algorithm, whereas FTU might be too lenient. The third definition we include is based on the work of [29], related to unresolved discrimination [14]. The idea is that we do not allow indirect discrimination unless it runs through explanatory factors R ⊂ X . For example, in Simpson’s paradox [33] there seems to be a bias between gender and college admissions, but this is only due to women applying to more competitive courses. In this case, one would want to regard fairness conditioned on the choice of study [14]. Let us define this as conditional fairness: Definition 3. 
(Conditional Fairness (CF): algorithm) A predictor Ŷ is fair iff A ⊥⊥ Ŷ |R, i.e. ∀r, a, a′ : P (Ŷ |R = r,A = a) = P (Ŷ |R = r,A = a′). CF generalizes FTU and DP Note that conditional fairness is a generalization of FTU and DP, by setting R = X\A and R = ∅, respectively. In Appendix C we elaborate on the connection between these, and more, definitions. 4.2 Synthetic data fairness Algorithmic definitions can be extended to distributional fairness for synthetic data. Let P (X), P ′(X) be probability distributions with protected attributes A ⊂ X and labels Y ⊂ X . Let I(A, Y ) be a definition of algorithmic fairness (e.g., FTU). Note, that under CF, I(A, Y ) is a function of R as well. We propose (I(A, Y ), P )-fairness of distribution P ′(X): Definition 4. (Distributional fairness) A probability distribution P ′(X) is (I(A, Y ), P )-fair, iff the optimal predictor Ŷ = f∗(X) of Y trained on P ′(X) satisfies I(A, Y ) when evaluated on P (X). In other words, when we train a predictor on (I(A, Y ), P )-fair distribution P ′(X), we can only reach maximum performance if our model is fair. Note the explicit reference to P (X), the distribution on which fairness is evaluated, which does not need to coincide with P ′(X). This is a small but relevant detail. For example, when training a model on data D′ ∼ P ′(X) it could seem like the model is fair when we evaluate it on a hold-out set of the data (e.g., if we simply remove the protected attribute from the data). However, when we use the model for real-world predictions of data D ∼ P (X), disparate impact is possibly observed due to a distributional shift. By extension, we define synthetic data as (I(A, Y ), P )-fair, iff it is sampled from an (I(A, Y ), P )- fair distribution. Defining synthetic data as fair w.r.t. an optimal predictor is especially useful when we want to publish a dataset and do not trust end-users to consider anything but performance.2 Choosing P(X). The setting P (X) = P ′(X) corresponds to data being fair with respect to itself. For synthetic data generation, this setting is uninteresting as any dataset can be made fair by randomly sampling or removing A; if A is random, the prediction should not directly or indirectly depend on it. This ignores, however, that a downstream user might use the trained model on a real-world dataset in which other variables B are correlated with A, and thus their model (which is trained to use B for predicting Y ) will be biased. Of specific interest is the setting where P (X) corresponds to the original data distribution PX(X) that contains unfairness. In this scenario, we construct P ′(X) by learning PX(X) and removing the unfair characteristics. The data from P ′(X) can be published online, and models trained on this data can be deployed fairly in real-life scenarios where data follows PX(X). Unless otherwise stated, henceforth, we assume P (X) = PX(X). 4.3 Graphical perspective As reflected in the widely accepted terms direct versus indirect discrimination, it is natural to define distributional fairness from a causal standpoint. Let G′ and G respectively denote the graphs underlying P ′(X) (the synthetic data distribution which we can control) and P (X) (the evaluation distribution that we cannot control). Let ∂GY denote the Markov boundary of Y in graph G. We focus on the conditional fairness definition because it subsumes the definition of DP and FTU (Section 4.1). Let R ⊂ X be the set of explanatory features. Proposition 1. 
(CF: graphical condition) If for allB ∈ ∂G′Y ,A ⊥⊥G B|R,3 then distribution P ′(X) is CF fair w.r.t P (X) given explanatory factors R. 2Finding the optimal predictor is possible if we assume the downstream user employs any universal function approximator (e.g., MLP) and the amount of synthetic data is sufficiently large. 3Where ⊥⊥G denotes d-separation in G. Here we define A ⊥⊥G B|R to be true for all B ∈ R. Proof. Without loss of generality, let us assume the label is binary.4 The optimal predictor f∗(X) = P (Y |X) = P (Y |∂G′Y ). Thus, if ∂G′Y is d-separated from A in G given R, prediction Ŷ = f∗(X) is independent of A given R and CF holds. Corollary 1. (CF debiasing) Any distribution P ′(X) with graph G′ can be made CF fair w.r.t. P (X) and explanatory features R by removing from G′ edges Ẽ = {(B → Y ) and (Y → B) : ∀B ∈ ∂G′Y with B 6⊥⊥G A|R}. Proof. First note Ẽ is the necessary and sufficient set of edges to remove for (∀B ∈ ∂G′Y , A ⊥⊥G B|R) to be true, subsequently the result follows from Proposition 1. For FTU (i.e. R = X\A) and DP (i.e. R = ∅), this corollary simplifies to: Corollary 2. (FTU debiasing) Any distribution P ′(X) with graph G′ can be made FTU fair w.r.t. any distribution P (X) by removing, if present, i) the edge between A and Y and ii) the edge A → C or Y → C for all shared children C. Corollary 3. (DP debiasing) Any distribution P ′(X) with graph G′ can be made DP fair w.r.t. P (X) by removing, if present, the edge between B and Y for any B ∈ ∂G′Y with B 6⊥⊥G A. Figure 1 shows how the different fairness definitions lead to different sets of edges to be removed. Faithfulness. Usually one assumes distributions are faithful w.r.t. their respective graphs, in which case the if-statement in Proposition 1 become equivalence statements: fairness is only possible when the graphical conditions hold. Theorem 1. If P (X) and P ′(X) are faithful with respect to their respective graphs G and G′, then Proposition 1 becomes an equivalence statement and Corollaries 1, 2 and 3 describe the necessary and sufficient sets of edges to remove for achieving CF, FTU and DP fairness, respectively. Proof. Faithfulness implies A ⊥⊥P (X) B|R =⇒ A ⊥⊥G B|R, e.g. [34]. Thus, if ∃B ∈ ∂G′Y for which A 6⊥⊥G B|R, then A 6⊥⊥ B|R. Because B ∈ ∂G′Y and P ′(X) is faithful to G′, Ŷ = f∗(X) depends on B, and thus Ŷ 6⊥⊥ A|R: CF does not hold. Other definitions. Some authors define similar fairness measures in terms of directed paths (cf. d-separation) [11, 14, 18], which is a milder requirement as it allows correlation via non-causal paths. In Appendix C we highlight the graphical conditions for these definitions. 5 Method: DECAF The primary design goal of DECAF is to generate fair synthetic data from unfair data. We separate DECAF into two stages. The training stage learns the causal conditionals that are observed in the data through a causally-informed GAN. At the generation (inference) stage, we intervene on the learned conditionals via Corollaries 1-3, in such a way that the generator creates fair data. We assume the underlying DGP’s graph G is known; otherwise, G needs to be approximated first using any causal discovery method, see Section 6. 5.1 Training Overview. This stage strives to learn the causal mechanisms {fi(Pa(Xi), Zi)}. Each structural equation fi (Eq. 1) is modelled by a separate generator Gi : R|Pa(Xi)|+1 → R. We achieve this by employing a conditional GAN framework with a causal generator. This process is illustrated in Figure 2 and detailed below. 
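Since generation proceeds in the topological ordering of the causal DAG, the order itself can be obtained with a standard topological sort; a minimal sketch follows (the three-node graph and its node names are illustrative assumptions of ours).

import networkx as nx

dag = nx.DiGraph([("age", "education"), ("education", "income"),
                  ("age", "income")])
print(list(nx.topological_sort(dag)))  # ['age', 'education', 'income']

Roots come first and leaves last, so every generator sees only already-generated parent values.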
⁴If Y is continuous the same result holds, though the "optimal" predictor will depend on the statistic of interest, e.g. mode, mean, median, or the entire distribution f(X, Y) ≈ P(Y|X). Features are generated sequentially following the topological ordering of the underlying causal DAG: first root nodes are generated, then their children (from generated causal parents), etc. Variable X̂_i is modelled by the associated generator G_i: X̂_i = G_i(P̂a(X_i), Z_i) ∀i, (2) where P̂a(X_i) denotes the generated causal parents of X_i (for root nodes the empty set), and each Z_i is independently sampled from P(Z) (e.g. standard Gaussian). We denote the full sequential generator by G(Z) = [G_1(Z_1), ..., G_d(·, Z_d)]. Subsequently, the synthetic sample x̂ is passed to a discriminator D : R^d → R, which is trained to distinguish the generated samples from original samples. A typical minimax objective is employed for creating generated samples that confuse the discriminator most: max_{{G_i}_{i=1}^d} min_D E[log D(G(Z)) + log(1 − D(X))], (3) with X sampled from the original data. We optimize the discriminator and generator iteratively and add a regularization loss to both networks. Network parameters are updated using gradient descent. If we assume P_X(X) is compatible with graph G, we can show that the sequential generator has the same theoretical convergence guarantees as standard GANs [20]: Theorem 2. (Convergence guarantee) Assuming the following three conditions hold: (i) the data generating distribution P_X is Markov compatible with a known DAG G; (ii) generator G and discriminator D have enough capacity; and (iii) in every training step the discriminator is trained to optimality given fixed G, and G is subsequently updated so as to maximize the discriminator loss (Eq. 3); then the generator distribution P_G converges to the true data distribution P_X. Proof. See Appendix B. Condition (i), compatibility with G, is a weaker assumption than assuming perfect causal knowledge. For example, suppose the Markov equivalence class of the true underlying DAG has been determined through causal discovery. In that case, any graph G in the equivalence class is compatible with the data and can thus be used for synthetic data generation. However, we note that debiasing can require the correct directionality for some definitions of fairness, see Discussion. Remark. The causal GAN we propose, DECAF, is simple and extendable to other generative methods, e.g., VAEs. Furthermore, from the post-processing theorem [35] it follows that DECAF can be directly used for generating private synthetic data by replacing the standard discriminator with a differentially private discriminator [2, 36]. 5.2 Inference-time Debiasing The training phase yields conditional generators {G_i}_{i=1}^d, which can be sequentially applied to generate data with the same output distribution as the original data (proof in Appendix B). The causal model allows us to go one step further: when the original data has characteristics that we do not want to propagate to the synthetic data (e.g., gender bias), individual generators can be modified to remove these characteristics. Given the generator's graph G = (X, E), fairness is achieved by removing edges such that the fairness criteria are met, see Section 4. Let Ẽ ⊆ E be the set of edges to remove for satisfying the required fairness definition. For CF, FTU and DP,⁵ the sets Ẽ are given by Corollaries 1, 2 and 3, respectively. Removing an edge constitutes what we call a "surrogate" do-operation [27] on the conditional distribution.
For example, suppose we only want to remove (i → j). For a given sample, Xi is generated normally (Eq. 2), but Xj is generated using the modified: X̂ do(Xi)=x̃ij j = Gj(..., Xi = x̃ij), (4) where Xi = x̃ij is the surrogate parent assignment. Value X̂ do(Xi) j can be interpreted as the counterfactual value of X̂j , had Xi been equal to x̃ij (see also [15]). Choosing the value of surrogate variable x̃ij requires background knowledge of the task and bias at hand. For example, surrogate variable x̃ij can be sampled independently from a distribution for each synthetic sample (e.g., the marginal P (Xi)), be set to a fixed value for all samples in the synthetic data (e.g., if Xi: gender, always set x̃ij = male when generating feature Xj : job opportunity) or be chosen as to maximize/minimize some feature (e.g. x̃ij = arg maxx X̂ do(Xi)=x j ). We emphasize that we do not set Xi = x̃ij in the synthetic sample; Xi = x̃ij is only used for substitution of the removed dependence. We provide more details in Appendix E. More generally, we create surrogate variables for all edges we remove, {x̃ij : (i→ j) ∈ Ẽ}. Each sample is sequentially generated by Eq. 4, with a surrogate variable for each removed incoming edge. Remark. Multiple datasets can be created based on different definitions of fairness and/or different downstream prediction targets. Because debiasing happens at inference-time, this does not require retraining the model. 6 Experiments In this section, we validate the performance of DECAF for synthesizing bias-free data based on two datasets: i) real data with existing bias and ii) real data with synthetically injected bias. The aim of the former is to show that we can remove real, existing bias. The latter experiment provides a ground-truth unbiased target distribution, which means we can evaluate the quality of the synthetic dataset with respect to this ground truth. For example, when historically biased data is first debiased, a model trained on the synthetic data will likely create better predictions in contemporary, unbiased/less-biased settings than benchmarks that do not debias first. In both experiments, the ground-truth DAG is unknown. We use causal discovery to uncover the underlying DAG and show empirically that the performance is still good. Benchmarks. We compare DECAF against the following benchmark generative methods: a GAN, a Wasserstein GAN with gradient penalty (WGAN-GP) [21] and FairGAN [17]. FairGAN is the only benchmark designed to generate synthetic fair data,6 whereas GAN and WGAN-GP only aim to match the original data’s distribution, regardless of inherent underlying bias. For these benchmarks, fair data can be generated by naively removing the protected variable – we refer to these methods with the PR (protected removal) suffix and provide more experimental results and insight into PR in Appendix A. We benchmark DECAF debiasing in four ways: i) with no inference-time debiasing 5Just like in Corollaries 1 and 3, we assume the downstream evaluation distribution is the same as the biased training data distribution: a predictor trained on the synthetic debiased data, is required to give fair predictions in real-life settings with distribution PX(X). 6The works of [11, 16] are not applicable here, as these methods are constrained to discrete data. (DECAF-ND), ii) under FTU (DECAF-FTU), iii) under CF (DECAF-CF) and iv) under DP fairness (DECAF-DP). We provide DECAF7 implementation details in Appendix D.1. Evaluation criteria. 
We evaluate DECAF using the following metrics: • Data quality is assessed using metrics of precision and recall [37, 38, 39]. Additionally, we evaluate all methods in terms of AUROC of predicting the target variable using a downstream classifier (MLP in these experiments) trained on synthetic data. • FTU is measured by calculating the difference between the predictions of a downstream classifier for setting A to 1 and 0, respectively, such that |PA=0(Ŷ |X) − PA=1(Ŷ |X)|, while keeping all other features the same. This difference measures the direct influence of A on the prediction. • DP is measured in terms of the Total Variation [15]: the difference between the predictions of a downstream classifier in terms of positive to negative ratio between the different classes of protected variable A, i.e., |P (Ŷ |A = 0)− P (Ŷ |A = 1)|. 6.1 Debiasing Census Data In this experiment, we are given a biased dataset D ∼ P (X) and wish to create a synthetic (and debiased) dataset D′, with which a downstream classifier can be trained and subsequently be rolled out in a setting with distribution P (X). We experiment on the Adult dataset [40], with known bias between gender and income [10, 11]. The Adult dataset contains over 65,000 samples and has 11 attributes, such as age, education, gender, income, among others. Following [11], we treat gender as the protected variable and use income as the binary target variable representing whether a person earns over $50K or not. For DAG G, we use the graph discovered and presented by Zhang et al. [11]. In Appendix D.2, we specify edge removals for DECAF-DP, DECAF-CF, and DECAF-FTU. Synthetic data is generated using each benchmark method, after which a separate MLP is trained on each dataset for computing the metrics; see Appendix D.2 for details. We repeat this experiment 10 times for each benchmark method and report the average in Table 2. As shown, DECAF-ND (no debiasing) performs amongst the best methods in terms of data utility. Because the data utility in this experiment is measured with respect to the original (biased) dataset, we see that the methods DECAF-FTU, DECAF-CF, and DECAF-DP score lower than DECAF-ND because these methods distort the distribution – with DECAF-DP distorting the label’s conditional distribution most and thus scoring worst in terms of AUROC. Note also that a downstream user who is only focused on performance would choose the synthetic data from WGAN-GP or DECAF-ND, which are also the most biased methods. Thus, we see that there is a trade-off between fairness and data utility when the evaluation distribution P (X) is the original biased data. 6.2 Fair Credit Approval In this experiment, direct bias, which was not previously present, is synthetically injected into a dataset D resulting in a biased dataset D̃. We show how DECAF can remove the injected bias, resulting in dataset D′ that can be used to train a downstream classifier. This is a relevant scenario if the training data D̃ does not follow real-world distribution P (X), but instead a biased distribution P̃ (X) (due to, e.g., historical bias). In this case, we want downstream models trained on synthetic data D′ to perform well on the real-world data D instead of D̃. We show that DECAF is successful at removing the bias and how this results in higher data utility than benchmarks methods trained on D̃. 7PyTorch Lightning source code at https://github.com/vanderschaarlab/DECAF. 
We use the Credit Approval dataset from [40], with graph G as discovered by the causal discovery algorithm FGES [41] using Tetrad [42] (details in Appendix D.3). We inject direct bias by decreasing the probability that a sample will have their credit approved based on the chosen A.8 The credit_approval for this population was synthetically denied (set to 0) with some bias probability β, adding a directed edge between label and protected attribute. In Figure 3, we show the results of running our experiment 10 times over various bias probabilities β. We benchmark against FairGAN, as it is the only benchmark designed for synthetic debiased data. Note that in this case, the causal DAG has only one indirect biased edge between the protected variable (see Appendix D), and thus DECAF-DP and DECAF-CF remove the same edges and are the same for this experiment. The plots show that DECAF-FTU and DECAF-DP have similar performance to FairGAN in terms of debiasing; however, all of the DECAF-* methods have significantly better data quality metrics: precision, recall, and AUROC. DECAF-DP is one of the best performers across all 5 of the evaluation metrics and has better DP performance under higher bias. As expected, DECAF-ND (no debiasing) has the same data quality performance in terms of precision and recall as DECAF-FTU and DECAF-DP and has diminishing performance in terms of downstream AUROC, FTU, and DP as bias strength increases. See Appendix D for other benchmarks, and the same experiment under hidden confounding in Appendix G. 7 Discussion We have proposed DECAF, a causally-aware GAN that generates fair synthetic data. DECAF’s sequential generation provides a natural way of removing these edges, with the advantage that the conditional generation of other features is left unaltered. We demonstrated on real datasets that the DECAF framework is both versatile and compatible with several popular definitions of fairness. Lastly, we provided theoretical guarantees on the generator’s convergence and fairness of downstream models. We next discuss limitations as well as applications and opportunities for future work. Definitions. DECAF achieves fairness by removing edges between features, as we have shown for the popular FTU and DP definitions. Other independence-based [30] fairness definitions can be achieved by DECAF too, as we show in Appendix C. Just like related debiasing works [10, 11, 16, 17], DECAF is not compatible with fairness definitions based on separation or sufficiency [30], as these definitions depend on the downstream model more explicitly (e.g. Equality of Opportunity [12]). More on this in Appendix C. Incorrect DAG specification. Our method relies on the provision of causal structure in the form of a DAG for i) deciding the sequential order of feature generation and ii) deciding which edges to remove to achieve fairness. This graph need not be known a priori and can be discovered instead. If discovered, the DAG needs not equal the true DAG for many definitions of fairness, including FTU and DP, but only some (in)dependence statements are required to be correct (see Proposition 1). This is shown in the Experiments, where the DAG was discovered with the PC algorithm [47] and TETRAD [42]. Furthermore, in Appendix B we prove that the causal generator converges to the right distribution for any graph that is Markov compatible with the data. 
We reiterate, however, that knowing (part of) the true graph is still helpful because i) it often leads to simpler functions {fi}di=1 to approximate,9 and ii) some causal fairness definitions do require correct directionality—see Appendix 8We let A equal (anonymized) ethnicity [43, 44, 45, 46], with randomly chosen A = 4 as the disadvantaged population. 9Specifically, this is the case if modeling the causal direction is simpler than modeling the anti-causal direction. For many classes of models this is true when algorithmic independence holds, see [34]. C. In Appendix F, we include an ablation study on how errors in the DAG specification affect data quality and downstream fairness. Causal sufficiency. We have focused on just one type of graph: causally-sufficient directed graphs. Extending this to undirected or mixed graphs is possible as long as the generation order reflects a valid factorization of the observed distribution. This includes settings with hidden confounders. We note that for some definitions of bias, e.g., counterfactual bias, directionality is essential and hidden confounders would need to be corrected for (which is not generally possible). Time-series. We have focused on the tabular domain. The method can be extended to other domains with causal interaction between features, e.g., time-series. Application to image data is non-trivial, partly because, in this instance, the protected attribute (e.g., skin color) does not correspond to a single observed feature. DECAF might be extended to this setting in the future by first constructing a graph in a disentangled latent space (e.g., [24, 25]). Social implications. Fairness is task and context-dependent, requiring careful public debate. With that being said, DECAF empowers data issuers to take responsibility for downstream model fairness. We hope that this progresses the ubiquity of fairness in machine learning. Acknowledgements We would like to thank the reviewers for their time and valuable feedback. This research was funded by the Office of Naval Research and the WD Armstrong Trust.
1. What is the focus of the paper regarding unfairness mitigation in data generation? 2. What are the strengths of the proposed approach, particularly in terms of theoretical guarantees and generalizability? 3. What are the weaknesses of the method, such as potential loss of information or lack of realism in generated data? 4. How does the reviewer assess the effectiveness of DECAF compared to other methods like FairGAN and weak supervision approaches? 5. Are there any concerns regarding the use of certain technical concepts without proper definitions? 6. What are the limitations of the experimental section, and what additional experiments or analyses could be helpful? 7. How do the authors address the reviewer's concerns in their response?
Summary Of The Paper Review
Summary Of The Paper This paper proposes DECAF, a fair data generation method using causal DAG structures and GANs. The DAG is used to remove any causal dependencies that may have a negative effect on the fairness and is general enough to model various fairness measures: FTU, DP, and CF. Then synthetic data is generated according to the DAG, but made realistic using a discriminator that distinguishes the generated samples from the real ones. Experiments show that DECAF has comparable fairness to the state-of-the-art method FairGAN, but significantly outperforms it in terms of precision, recall, and AUROC. Review This paper proposes a novel approach for removing data biases that negatively affect fairness, yet generating realistic data using GAN techniques. There are some concerns on whether the generated data is realistic and whether DECAF should also be compared with weak supervision approaches. Strong points: The unfairness mitigation method is principled and has theoretical guarantees. By removing edges from the DAG, one has a clear idea exactly what bias is being removed. The DAG representation is general and can be used to express prominent group fairness measures. The experiments show that DECAF outperforms the state-of-the-art method FairGAN among others. Weak points: One concern is whether removing edges is too drastic and may result in unrealistic data. For example, Sec 5.2 mentions that surrogate variables can be sampled independently from a distribution or set to a fixed value. Let us say that we are using the Adult dataset and set all its gender values to male. From a practical point of view, wouldn't the dataset lose too much information? To answer this question, the authors should also show DECAF's discriminator performance because it may actually be able to distinguish generated versus real data better than say FairGAN's real/fake discriminator. An alternative approach for fair data generation is to use weak supervision. In particular, the authors should cite and compare DECAF with Choi et al., "Fair Generative Modeling via Weak Supervision", ICML 2020. The idea is to exploit an additional small, unlabeled reference dataset as the supervision signal to generate unbiased data. Some notions are used without definitions. In Sec. 3, please define "independent causal mechanism". In Sec. 4.3, define "Markov boundary", "d-separation", and "explanatory feature". The experiments section looks a bit thin (accuracy and fairness experiments for two datasets), probably because many experiments are moved to the supplementary. In particular, the Adult dataset DAG is quite interesting and should be more visible. Also, it would be useful to see an ablation study that evaluates DECAF without some of its components (e.g., the discriminator). In Sec. 6, please specify how the values of surrogate variables were chosen. There are some details in the supplementary, but it is not clear if the same settings were used in Sec. 6. ========= The author feedback addresses my concerns. I will keep my rating of 6.
NIPS
Title Efficient coding, channel capacity, and the emergence of retinal mosaics Abstract Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity—the number of simulated RGCs available for encoding—is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed with increasing output noise and decreasing input noise. Together, these results offer a unified perspective on the relationship between retinal mosaics, efficient coding, and channel capacity that can help to explain the stunning functional diversity of retinal cell types. 1 Introduction The retina is one of the most intensely studied neural circuits, yet we still lack a computational understanding of its organization in relation to its function. At a structural level, the retina forms a three-layer circuit, with its primary feedforward pathway consisting of photoreceptors to bipolar cells to retinal ganglion cells (RGCs), the axons of which form the optic nerve [1]. RGCs can be divided into 30-50 functionally distinct cell types (depending on species) with each cell responsive to a localized area of visual space (its receptive field (RF)), and the collection of RFs for each type tiling space to form a “mosaic” [2, 3, 4, 5]. Each mosaic represents the extraction of a specific type 36th Conference on Neural Information Processing Systems (NeurIPS 2022). of information across the visual scene by a particular cell type, with different mosaics responding to light increments or decrements (ON and OFF cells), high or low spatial and temporal frequencies, color, motion, and a host of other features. While much is known about the response properties of each RGC type, the computational principles that drive RGC diversity remain unclear. Efficient coding theory has proven one of the most powerful ideas for understanding retinal organization and sensory processing. Efficient coding posits that the nervous system attempts to encode sensory input by minimizing redundancy subject to biological costs and constraints [6, 7]. As more commonly formulated, it seeks to maximize the mutual information between sensory data and neural representations, with the most common cost in the retinal case being the energetic cost of action potentials transmitted by the RGCs. 
Despite its simplicity, this principle has proven useful, predicting the center-surround structure of RFs [8], the frequency response profile of contrast sensitivity [9], the structure of retinal mosaics [10, 11], the role of nonlinear rectification [12], different spatiotemporal kernels [13], and inter-mosaic arrangements [14, 15]. While previous studies have largely focused on either spatial or temporal aspects of efficient coding, we optimize an efficient coding model of retinal processing in both space and time to natural videos [16]. We systematically varied the number of cells available to the system and found that larger numbers of available cells led to more cell types. Each of these functionally distinct types formed its own mosaic of RFs that tiled space. We show that when and how new cell types emerge and form mosaics is the result of tradeoffs between power constraints and the benefits of specialized encoding that shift as more cells are available to the system. We show that cell types begin by capturing low-frequency temporal information and capture increasingly higher-frequency temporal information over larger spatial RFs as new cell types form. Finally, we investigated the relative arrangement of these mosaics and their dependence on noise. We show that mosaic pairs can be aligned or anti-aligned depending on input and output noise in the system [14]. Together, these results demonstrate for the first time how efficient coding principles can explain, even predict, the formation of cell types and which types are most informative when channel capacity is limited. 2 Model The model we develop is an extension of [14], a retinal model for efficient coding of natural images, which is based on a mutual information maximization objective proposed in [10]. The retinal model takes D-pixel patches of natural images x 2 RD corrupted by input noise nin ⇠ N (0,Cnin), filters these with unit-norm linear kernels {wj 2 RD | kwjk = 1}j=1,··· ,J representing J RGCs, and then feeds the resulting signals yj = w>j (x+nin) through softplus nonlinearities ⌘(y) = log 1 + e y / (we used = 0.25) with gain j and threshold ✓j . Finally, these signals are further corrupted by additive output noise nout ⇠ N (0,Cnout), to produce firing rates rj : rj = j · ⌘(yj ✓j) + nout,j . (1) The model learns parameters wj , j , and ✓j to maximize the mutual information between the inputs x and the outputs r, under a mean firing rate constraint [10, 14]: maximize log det GW> (Cx +Cnin)WG+Cnout det (GW>CninWG+Cnout) (2) subject to E[rj ] = 1. (3) Here Cx is the covariance matrix of the input distribution, W 2 RD⇥J contains the filters wj as its columns, the gain matrix G = diag ⇣ j d⌘ dy |yj ✓j ⌘ , and the noise covariances are Cnin = 2in D⇥D and Cnout = 2out J⇥J . This objective is equivalent to the formulation in [10], which assumes normally distributed inputs and locally linear responses in order to approximate the mutual information in a closed form. Here, we extend this model to time-varying inputs x(t) 2 RD representing natural videos (Figure 1A-B), which are convolved with linear spatiotemporal kernels {wj(t)}j=1,··· ,J : yj(t) = w > j (t) ⇤ x(t) = Z 1 1 wj(⌧) >x(t ⌧)d⌧. 
(4) We additionally assume that the convolutional kernels are separable in time and space: wj(t) = j(t)wj , kwjk = 1, j(t) 2 R, Z 1 1 (t)2dt = 1, (5) and the temporal kernels are unit-norm impulse responses taking the following parametric form: j(t) / ( ↵jtne t/⌧j ↵0jt ne t/⌧ 0 j if delay t 0 0 otherwise , (6) where ↵j , ↵0j , ⌧j > 0, ⌧ 0j > 0 are learnable parameters, and n 2 N is fixed. Previous work assumed an unconstrained form for these filters, adding zero-padding before and after the model’s image inputs to produce the characteristic shape of the temporal filters in primate midget and parasol cells [13], but this zero-padding represents a biologically implausible constraint, and the results fail to correctly reproduce the observed delay in retinal responses [17, 18, 19]. Rather, optimizing (2) with unconstrained temporal filters produces a filter bank uniformly tiling time (Supplementary Figure 4). By contrast, (6) is motivated by the arguments of [20], which showed that the optimal minimum-phase temporal filters of retinal bipolar cells, the inputs to the RGCs, take the form (t > 0) / e t/⌧ [sin!t !t cos!t] ⇡ e t/⌧ (!t)3 3 (7) when !⌧ ⌧ 1. Thus, we model RGC temporal filters as a linear combination of these forms. In practice, we take only two filters and use n = 6 rather than n = 3, since these have been shown to perform well in capturing observed retinal responses [19]. The results produced by more filters or different exponents are qualitatively unchanged (Supplementary Figure 7). For training on video data, we use discrete temporal filters and convolutions with PT 1 t=0 j [t] 2 = 1. Finally, while unconstrained spatial kernels wj converge to characteristic center-surround shapes under optimization of (2) (Figure 1C), for computational efficiency and stability, we parameterized these filters using a radially-symmetric difference of Gaussians wj(r) / e ajr2 cje bjr2 , bj > aj > 0, 0 < cj < 1, (8) where r measures the spatial distance to the center of the RF, and the parameters aj , bj , cj that determine the center location and spatial kernel shape are potentially different for each RGC j. The result of optimizing (2) using these forms is a set of spatial and temporal kernels (Figure 1D-E) that replicate experimentally-observed shapes and spatial RF tiling. 3 Efficient coding as a function of channel capacity: linear theory Before presenting results from our numerical experiments optimizing the model (2, 3), we begin by deriving intuitions about its behavior by studing the case of linear filters analytically. That is, we assume a single gain for all cells, no bias (✓ = 0), and a linear transfer function ⌘(y) = y. As we will see, this linear analysis correctly predicts the same types of mosaic formation and filling observed in the full nonlinear model. Here, we sketch the main results, deferring full details to Appendix A. 3.1 Linear model in the infinite retina limit For analytical simplicity, we begin by assuming an infinite retina on which RFs form mosaics described by a regular lattice. 
Under these conditions, we can write the log determinants in (2) as integrals and optimize over the unnormalized filter v ⌘ w subject to a power constraint: max v Z G0 d2k (2⇡)2 " log P g2G |v(k + g)| 2(Cx(k + g) + 2 in) + 2 outP g2G |v(k + g)|2 2in + 2out ⌫ X g2G |v(k + g)|2(Cx(k + g) + 2in) 3 5 , (9) where Cx(k) is the Fourier transform of the stationary image covariance Cx(z z0), the integral is over all frequencies k 2 G0 unique up to aliasing caused by the spatial regularity of the mosaic, and the sums over g account for aliased frequencies (Appendix A.1). In [8], the range [ ⇡,⇡] is used for the integral, corresponding to a one-dimensional lattice and units of mosaic spacing z = 1. Now, solving the optimization in (9) results in a spatial kernel with the spectral form (Appendix A.2) |v(k)|2 = 2out 2in " 1 2 Cx(k) Cx(k) + 2in s 1 + 2in 2out 4 ⌫ Cx(k) + 1 ! 1 # + , k 2 G0, (10) where k = kkk and ⌫ is chosen to enforce the constraint on total power. This is exactly the solution found in [8], linking it (in the linear case) to the model of [10, 11]. Note, however, that (10) is only nonzero within G0, since RF spacing sets an upper limit on the passband of the resulting filters. The generalization of this formulation to the spacetime case is straightforward. Given a spacetime stationary image spectrum Cx(z z0, t t0) and radially-symmetric, causal filter w(z, t), the same infinite retina limit as above requires calculating determinants across both neurons i, j and time points t, t0 of matrices with entries of the form Fijtt0 = Z dzdz0d⌧d⌧ 0 2w(zi z, t ⌧)Cx(z z 0, ⌧ ⌧ 0)w(zj z 0, t0 ⌧ 0) = Z d2k (2⇡)2 d! 2⇡ eik·(zi zj)+i!(t t 0) |v(k,!)|2Cx(k,!). (11) Again, such matrices can be diagonalized in the Fourier basis, with the result that the optimal spacetime filter once again takes the form (10) with the substitutions v(k) ! v(k,!), Cx(k) ! Cx(k,!) (Appendix A.3). Figure 2A depicts the frequency response of this filter in d = 1 spatial dimensions, with corresponding spatial and temporal sections plotted in Figures 2B-C. 3.2 Multiple cell types and the effects of channel capacity Up to this point, we have only considered a single type of filter v(k,!), corresponding to a single cell type. However, multiple cell types might increase the coding efficiency of the entire retina if they specialize, devoting their limited energy budget to non-overlapping regions of frequency space. Indeed, optimal encoding in the multi-cell-type case selects filters v and v0 that satisfy v⇤(k,!)v0(k,!) = 0, corresponding encoding independent visual information (Appendix A.4). This result naturally raises two questions: First, how many filter types are optimal? And second, how should a given budget of J RGCs be allocated across multiple filter types? As detailed in Appendix A.5, we can proceed by analyzing the case of a finite retina in the Fourier domain, approximating the information encoded by a mosaic of J RGCs with spatial filters given by (10) and nonoverlapping bandpass temporal filters that divide the available spectrum (e.g., Figure 2B, C). Following [21], we approximate the correlation spectrum of images by the factorized power law Cx(k,!) ' Ak↵!2 with ↵ ⇡ 1.3 and find that in this case, the optimal filter response exhibits two regimes as a function of spatial frequency (Supplementary Figure 1A): First, below kf = A/ 2in! 2 1/↵, the optimal filter is separable and log-linear, and the filtered image spectrum is white: |v(k,!)|2 ⇡ k↵!2 A⌫ , |v(k,!)|2Cx(k,!) 
⇡ ⌫ 1, where ⌫, the Lagrange multiplier in (9) that enforces the power constraint, scales as 1/P for small values of maximal power P and 1/P 2 for larger values (Supplementary Figure 1D). Second, for k & kf , the filter response decreases as k ↵/2 until reaching its upper cutoff at kc = kf/(⌫ 2out)1/↵, with the filtered image spectrum falling off at the same rate (Supplementary Figure 1B). But what do these regimes have to do with mosaic formation? The link between the two is given by the fact that, for a finite retina with regularly spaced RFs, adding RGCs decreases the distance between RF centers and so increases the resolving power of the mosaic. That is, the maximal value of k grows roughly as k ⇠ p J in d = 2, such that larger numbers of RGCs capture more information at increasingly higher spatial frequencies (Supplementary Figure 1A). However, while information gain is roughly uniform in the whitening regime, it falls off sharply for k & kf (Supplementary Figure 1C), suggesting the interpretation that the k . kf regime is a “mosaic filling” phase in which information accumulates almost linearly as RFs capture new locations in visual space, while the k & kf regime constitutes a “compression phase” in which information gains are slower as RFs shrink to accommodate higher numbers (Figure 2D). Indeed, one can derive the scaling of total Temporal RFs of all ON cells (1) (2) (3) (4) A B C (1) All videos (2) (3) (4) Temporal RFs of all OFF cells All videos Slow videos only Past Figure 3: Statistics of natural videos affect learned RFs. (A) Histogram of spectral attenuation (fraction of power < 3 Hz) for each video clip from the Chicago Motion Database. A significant portion of the dataset exhibits predominaly low-frequency spectral content in time. Videos with spectral attenuation above 0.9, 0.8, and 0.7, are denoted (4), (3), and (2), respectively, while (1) refers to all videos in the dataset. (B) Spatial (top) and temporal (bottom) spectral density of the four subsets. (C) Temporal filters learned by training on each of the four subsets. Training on slow videos produced only smoothing kernels, while training on all videos produced a variety of temporal filters. information as a function of J : I ' 8 >< >: J log ⇣ 1 + P0 2out ⌘ 2P0 (↵+2) 2out ⇣ J Jf ⌘↵ 2 k . kf (J Jf ) ⇣ J Jf ⌘ ↵2 2 2 ↵ k & kf , (12) where P0 is the power budget per RGC and Jf is the RGC number corresponding to k = kf . Thus, mosaic filling exhibits diminishing marginal returns (Figure 2E), such that new cell types are favored when the marginal gain for growing mosaics with lower temporal frequency drops below the gain from initiating a new cell type specialized for higher temporal frequencies. Moreover, the difference between these gain curves implies that new RFs are not added to all mosaics at equal rates, but in proportion to their marginal information (Figure 2F). As we demonstrate in the next section, these features of cell type and mosaic formation continue to hold in the full nonlinear model in simulation. 4 Experiments We analyzed the characteristics of the optimal spatiotemporal RFs obtained from the model (2, 3) trained on videos from the Chicago Motion Database [22]. Model parameters for spatial kernels, temporal kernels, and the nonlinearities were jointly optimized using Adam [23] to maximize (2) subject to the mean firing rate constraint (3) using the augmented Lagrangian method with the quadratic penalty ⇢ = 1 [24]. Further technical details of model training are in Appendix E. 
All model code and reproducible examples are available at https://github.com/pearsonlab/ efficientcoding. As previously noted, the power spectral density of natural videos can be well approximated by a product of spatial and temporal power-law densities, implying an anticorrelation between high spatial and temporal frequency content [21]. Supplementary Figure 5 shows the data spectrum of the videos in our experiments is also well-approximated by separable power-law fits. To examine the effect of these statistics on the learned RFs, we divided the dataset into four progressively smaller subsets by the proportion of their temporal spectral content below 3 Hz, their spectral attenuation. Using values of 70%, 80%, and 90% then yielded a progression of datasets ranging from most videos to only the slowest videos (Figure 3A, B). Indeed, when the model was trained on these progressively slower data subsets, it produced only temporal smoothing filters, whereas the same model trained on all videos produced a variety of “fast” temporal filter types (Figure 3C). We also note that these experiments used unconstrained spatial kernels in place of (8), yet still converged on spatial RFs with typical center-surround structure as in [10, 15, 14]. Thus, these preliminary experiments suggest that the optimal encoding strategy—in particular, the number of distinct cell types found—depends critically on the statistics of the video distribution to be encoded. 4.1 Mosaics fill in order of temporal frequency As the number of RGCs available to the model increased, we observed the formation of new cell types with new spectral properties (Figure 4). We characterized the learned filters for each RGC in terms of their spectral centroid, defined as the center of mass of the Fourier (spatial) or Discrete Cosine (temporal) transform. Despite the fact that each model RGC was given its own spatial and temporal filter parameters (8, 6), the learned filter shapes strongly clustered, forming mosaics with nearly uniform response properties (Figure 4A–C). Critically, the emergence of new cell types shifted the spectral responses of previously established ones, with new cell types compressing the spectral windows of one another as they further specialized. Moreover, mosaic density increased with increasing RGC number, shifting the centroids of early mosaics toward increasingly higher spatial frequencies. This is also apparent in the forms of the typical learned filters and their power spectra: new filters selected for increasingly high-frequency content in the temporal domain (Figure 4D). We likewise analyzed the coverage factors of both individual mosaics and the entire collection, defined as the proportion of visual space covered by the learned RFs. More specifically, we defined the spatial radius of an RF as the distance from its center at which intensity dropped to 20% of its peak and used this area to compute a coverage factor, the ratio of total RF area to total visual space (⇡/4 of the square’s area due to circular masking). Since coverage factors depend not simply on RGC number but on RF density, they provide an alternative measure of the effective number of distinct cell types learned by the model. As Figure 4E shows, coverage increases nearly linearly with RGC number, while coverage for newly formed mosaics increases linearly before leveling off. In other words, new cell types initially increase coverage of visual space by adding new RFs, but marginal gains in coverage diminish as density increases. 
In all cases, the model dynamically adjusts the number of learned cell types and the proportion of RGCs assigned to them as channel capacity increases. 4.2 Phase changes in mosaic arrangement In addition to retinal organization at the level of mosaics, a pair of recent papers reported both experimental [15] and theoretical [14] evidence for an additional degree of freedom in optimizing information encoding: the relative arrangement of ON and OFF mosaics. Jun et al. studied this for the case of natural images in [14], demonstrating that the optimal configuration of ON and OFF mosaics is alignment (RFs co-located) at low output noise levels and anti-alignment (OFF RFs between ON RFs and vice-versa) under higher levels of retinal output noise. Moreover, this transition is abrupt, constituting a phase change in optimal mosaic arrangement. We thus asked whether learned mosaics exhibited a similar phase transition for natural video encoding. To do so, following [14], we repeatedly optimized a small model (J = 14, 7 ON, 7 OFF) for multiple learned filter types while systematically varying levels of input and output noise. In each case, one ON-OFF pair was fixed at the center of the space, while the locations of the others were allowed to vary. We used RF size D = 82 pixels for Slow and D = 122 for FastA and FastB cell types to allow the size of spatial kernels to be similar to those of the previous experiments, and we imposed the additional constraint that the shape parameters aj , bj , and cj in (8) be shared across RGCs. Under these conditions, the six free pairs of RFs converged to either aligned (overlapping) or anti-aligned (alternating) positions along the edges of the circular visual space, allowing for a straightforward examination of the effect of input and output noises on mosaic arrangement. Figure 5A-C shows that the phase transition boundaries closely follow the pattern observed in [14]: increasing output noise shifts the optimal configuration from alignment to anti-alignment. Moreover, for each of the tested filters, increasing input noise discourages this transition. This effect also follows from the analysis presented in [14], since higher input noise increases coactivation of nearby pairs of RFs, requiring larger thresholds to render ON-OFF pairs approximately indpendent (Appendix B). 5 Discussion Related work: As reviewed in the introduction, this study builds on a long line of work using efficient coding principles to understand retinal processing. In addition, it is related to work examining encoding of natural videos [25, 22, 16] and prediction in space-time. The most closely related work to this one is that of [13], which also considered efficient coding of natural videos and considered the tradeoffs involved in multiple cell types. Our treatment here differs from that work in several key ways: First, while [13] was concerned with demonstrating that multiple cell types could prove beneficial for encoding (in a framework focused on reconstruction error), that study predetermined the number of cell types and mosaic structure, only optimizing their relative spacing. By contrast, this work is focused on how the number of cell types is dynamically determined, and how the resulting mosaics arrange themselves, as a function of the number of units available for encoding (i.e., the channel capacity). 
Specifically, we follow previous efficient coding models [8, 9, 10, 11] in maximizing mutual information and do not assume an a priori mosaic arrangement, a particular cell spacing, or a particular number of cell types— all of these emerge via optimization in our formulation. Second, while the computational model of [13] optimized strides for a pair of rectangular arrays of RGCs, we individually optimize RF locations and shapes, allowing us to study changes in optimal RF size and density as new, partial mosaics begin to form. Third, while [13] used zero-padding of natural videos to bias learned temporal filters toward those of observed RGCs, we link the form of temporal RFs to biophysical limits on the filtering properties of bipolar cells, producing temporal filters with the delay properties observed in real data. Finally, while [13] only considered a single noise source in their model, we consider noise in both photoreceptor responses (input noise) and RGC responses (output noise), allowing us to investigate transitions in the optimal relative arrangement of mosaics [14, 15]. We have shown that efficient coding of natural videos produces multiple cell types with complementary RF properties. In addition, we have shown for the first time that the number and characteristics of these cell types depend crucially on the channel capacity: the number of available RGCs. As new simulated RGCs become available, they are initially concentrated into mosaics with more densely packed RFs, improving the spatial frequency bandwidth over which information is encoded. However, as this strategy produces diminishing returns, new cell types encoding higher-frequency temporal features emerge in the optimization process. These new cell types capture information over distinct spatiotemporal frequency bands, and their formation leads to upward shifts in the spatial frequency responses of previously formed cell types. Moreover, pairs of ON and OFF mosaics continue to exhibit the phase transition between alignment and anti-alignment revealed in a purely spatial optimization of efficient coding [14], suggesting that mosaic coordination is a general strategy for increasing coding efficiency. Furthermore, despite the assumptions of this model—linear filtering, separable filters, firing rates instead of spikes—our results are consistent with observed retinal data. For example, RGCs with small spatial RFs exhibit more prolonged temporal integration: they are also more low-pass in their temporal frequency tuning. Second, there is greater variability in the size and shape of spatial RFs at a given retinal location, but temporal RFs exhibit remarkably little variability in our simulations and in data [19]. Thus, these results further testify to the power of efficient coding principles in providing a conceptual framework for understanding the nervous system. Acknowledgments and Disclosure of Funding This work was supported by NIH/National Eye Institute Grant R01 EY031396.
1. What is the focus and contribution of the paper regarding retinal function analysis? 2. What are the strengths and weaknesses of the proposed approach, particularly in its technical work and connection to neurobiological mechanisms? 3. Are there any concerns or questions regarding specific parts of the paper, such as the difference of Gaussians model, linear model in the continuum limit, power spectral density approximation, and figure 4? 4. What is the reviewer's overall assessment of the paper's clarity, message, and impact?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Presents mathematical analysis and computer simulation of model that maximizes mutual information between photoreceptors and the rgc outputs of the retina. Shows that several different spatiotemporal filters are derived that mosaic the retina in different ways. Strengths And Weaknesses Strengths A nice formulation and modeling approach to understand retinal function. Impressive technical work. Weakness Not sure what the main take away is. The goal appears to be to understand the neural encoding in the retina, but after that the analysis and results, there is no attempt to tie these back to neurobiological mechanisms. It seems one could, but the paper just ends with the statement, "our results are in strong agreement with observed retinal data," which leaves you hanging. Specific issues: The difference of Gaussians model in eq. 8: it mentions that the center position of each kernel is different for each neuron, but is this also learned? not mentioned. Section 3: linear model in the continuum limit - this is very unclear. what is being continuized? space? The integral is over frequency space - not following what's going on. principal vectors a_1, a_2 and reciprocal vectors b_1, b_2 - what are these? Section 4.1: " power spectral density can be well approximated by a product of spatial and temporal power-law densities" - Dong & Atick is cited, but curiously the claim the exact opposite, it is not separable. Figure 4, panel A shows striking clustering in temporal spectral centroids - they are all stacked neatly in tight columns, no scatter. is this what emerges from the learned filters, or is somehow the quantization imposed? The mosaics are interesting to look at, but not clear what to take away from this. Overall this seems like a very promising direction, I want to like this paper, but I find it a bit confusing and lacking a clear message. Questions see above Limitations see above; no societal impact issues.
NIPS
Title Efficient coding, channel capacity, and the emergence of retinal mosaics Abstract Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity—the number of simulated RGCs available for encoding—is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed with increasing output noise and decreasing input noise. Together, these results offer a unified perspective on the relationship between retinal mosaics, efficient coding, and channel capacity that can help to explain the stunning functional diversity of retinal cell types. 1 Introduction The retina is one of the most intensely studied neural circuits, yet we still lack a computational understanding of its organization in relation to its function. At a structural level, the retina forms a three-layer circuit, with its primary feedforward pathway consisting of photoreceptors to bipolar cells to retinal ganglion cells (RGCs), the axons of which form the optic nerve [1]. RGCs can be divided into 30-50 functionally distinct cell types (depending on species) with each cell responsive to a localized area of visual space (its receptive field (RF)), and the collection of RFs for each type tiling space to form a “mosaic” [2, 3, 4, 5]. Each mosaic represents the extraction of a specific type 36th Conference on Neural Information Processing Systems (NeurIPS 2022). of information across the visual scene by a particular cell type, with different mosaics responding to light increments or decrements (ON and OFF cells), high or low spatial and temporal frequencies, color, motion, and a host of other features. While much is known about the response properties of each RGC type, the computational principles that drive RGC diversity remain unclear. Efficient coding theory has proven one of the most powerful ideas for understanding retinal organization and sensory processing. Efficient coding posits that the nervous system attempts to encode sensory input by minimizing redundancy subject to biological costs and constraints [6, 7]. As more commonly formulated, it seeks to maximize the mutual information between sensory data and neural representations, with the most common cost in the retinal case being the energetic cost of action potentials transmitted by the RGCs. 
Despite its simplicity, this principle has proven useful, predicting the center-surround structure of RFs [8], the frequency response profile of contrast sensitivity [9], the structure of retinal mosaics [10, 11], the role of nonlinear rectification [12], different spatiotemporal kernels [13], and inter-mosaic arrangements [14, 15]. While previous studies have largely focused on either spatial or temporal aspects of efficient coding, we optimize an efficient coding model of retinal processing in both space and time to natural videos [16]. We systematically varied the number of cells available to the system and found that larger numbers of available cells led to more cell types. Each of these functionally distinct types formed its own mosaic of RFs that tiled space. We show that when and how new cell types emerge and form mosaics is the result of tradeoffs between power constraints and the benefits of specialized encoding that shift as more cells are available to the system. We show that cell types begin by capturing low-frequency temporal information and capture increasingly higher-frequency temporal information over larger spatial RFs as new cell types form. Finally, we investigated the relative arrangement of these mosaics and their dependence on noise. We show that mosaic pairs can be aligned or anti-aligned depending on input and output noise in the system [14]. Together, these results demonstrate for the first time how efficient coding principles can explain, even predict, the formation of cell types and which types are most informative when channel capacity is limited. 2 Model The model we develop is an extension of [14], a retinal model for efficient coding of natural images, which is based on a mutual information maximization objective proposed in [10]. The retinal model takes D-pixel patches of natural images x 2 RD corrupted by input noise nin ⇠ N (0,Cnin), filters these with unit-norm linear kernels {wj 2 RD | kwjk = 1}j=1,··· ,J representing J RGCs, and then feeds the resulting signals yj = w>j (x+nin) through softplus nonlinearities ⌘(y) = log 1 + e y / (we used = 0.25) with gain j and threshold ✓j . Finally, these signals are further corrupted by additive output noise nout ⇠ N (0,Cnout), to produce firing rates rj : rj = j · ⌘(yj ✓j) + nout,j . (1) The model learns parameters wj , j , and ✓j to maximize the mutual information between the inputs x and the outputs r, under a mean firing rate constraint [10, 14]: maximize log det GW> (Cx +Cnin)WG+Cnout det (GW>CninWG+Cnout) (2) subject to E[rj ] = 1. (3) Here Cx is the covariance matrix of the input distribution, W 2 RD⇥J contains the filters wj as its columns, the gain matrix G = diag ⇣ j d⌘ dy |yj ✓j ⌘ , and the noise covariances are Cnin = 2in D⇥D and Cnout = 2out J⇥J . This objective is equivalent to the formulation in [10], which assumes normally distributed inputs and locally linear responses in order to approximate the mutual information in a closed form. Here, we extend this model to time-varying inputs x(t) 2 RD representing natural videos (Figure 1A-B), which are convolved with linear spatiotemporal kernels {wj(t)}j=1,··· ,J : yj(t) = w > j (t) ⇤ x(t) = Z 1 1 wj(⌧) >x(t ⌧)d⌧. 
(4) We additionally assume that the convolutional kernels are separable in time and space: wj(t) = j(t)wj , kwjk = 1, j(t) 2 R, Z 1 1 (t)2dt = 1, (5) and the temporal kernels are unit-norm impulse responses taking the following parametric form: j(t) / ( ↵jtne t/⌧j ↵0jt ne t/⌧ 0 j if delay t 0 0 otherwise , (6) where ↵j , ↵0j , ⌧j > 0, ⌧ 0j > 0 are learnable parameters, and n 2 N is fixed. Previous work assumed an unconstrained form for these filters, adding zero-padding before and after the model’s image inputs to produce the characteristic shape of the temporal filters in primate midget and parasol cells [13], but this zero-padding represents a biologically implausible constraint, and the results fail to correctly reproduce the observed delay in retinal responses [17, 18, 19]. Rather, optimizing (2) with unconstrained temporal filters produces a filter bank uniformly tiling time (Supplementary Figure 4). By contrast, (6) is motivated by the arguments of [20], which showed that the optimal minimum-phase temporal filters of retinal bipolar cells, the inputs to the RGCs, take the form (t > 0) / e t/⌧ [sin!t !t cos!t] ⇡ e t/⌧ (!t)3 3 (7) when !⌧ ⌧ 1. Thus, we model RGC temporal filters as a linear combination of these forms. In practice, we take only two filters and use n = 6 rather than n = 3, since these have been shown to perform well in capturing observed retinal responses [19]. The results produced by more filters or different exponents are qualitatively unchanged (Supplementary Figure 7). For training on video data, we use discrete temporal filters and convolutions with PT 1 t=0 j [t] 2 = 1. Finally, while unconstrained spatial kernels wj converge to characteristic center-surround shapes under optimization of (2) (Figure 1C), for computational efficiency and stability, we parameterized these filters using a radially-symmetric difference of Gaussians wj(r) / e ajr2 cje bjr2 , bj > aj > 0, 0 < cj < 1, (8) where r measures the spatial distance to the center of the RF, and the parameters aj , bj , cj that determine the center location and spatial kernel shape are potentially different for each RGC j. The result of optimizing (2) using these forms is a set of spatial and temporal kernels (Figure 1D-E) that replicate experimentally-observed shapes and spatial RF tiling. 3 Efficient coding as a function of channel capacity: linear theory Before presenting results from our numerical experiments optimizing the model (2, 3), we begin by deriving intuitions about its behavior by studing the case of linear filters analytically. That is, we assume a single gain for all cells, no bias (✓ = 0), and a linear transfer function ⌘(y) = y. As we will see, this linear analysis correctly predicts the same types of mosaic formation and filling observed in the full nonlinear model. Here, we sketch the main results, deferring full details to Appendix A. 3.1 Linear model in the infinite retina limit For analytical simplicity, we begin by assuming an infinite retina on which RFs form mosaics described by a regular lattice. 
Under these conditions, we can write the log determinants in (2) as integrals and optimize over the unnormalized filter v ⌘ w subject to a power constraint: max v Z G0 d2k (2⇡)2 " log P g2G |v(k + g)| 2(Cx(k + g) + 2 in) + 2 outP g2G |v(k + g)|2 2in + 2out ⌫ X g2G |v(k + g)|2(Cx(k + g) + 2in) 3 5 , (9) where Cx(k) is the Fourier transform of the stationary image covariance Cx(z z0), the integral is over all frequencies k 2 G0 unique up to aliasing caused by the spatial regularity of the mosaic, and the sums over g account for aliased frequencies (Appendix A.1). In [8], the range [ ⇡,⇡] is used for the integral, corresponding to a one-dimensional lattice and units of mosaic spacing z = 1. Now, solving the optimization in (9) results in a spatial kernel with the spectral form (Appendix A.2) |v(k)|2 = 2out 2in " 1 2 Cx(k) Cx(k) + 2in s 1 + 2in 2out 4 ⌫ Cx(k) + 1 ! 1 # + , k 2 G0, (10) where k = kkk and ⌫ is chosen to enforce the constraint on total power. This is exactly the solution found in [8], linking it (in the linear case) to the model of [10, 11]. Note, however, that (10) is only nonzero within G0, since RF spacing sets an upper limit on the passband of the resulting filters. The generalization of this formulation to the spacetime case is straightforward. Given a spacetime stationary image spectrum Cx(z z0, t t0) and radially-symmetric, causal filter w(z, t), the same infinite retina limit as above requires calculating determinants across both neurons i, j and time points t, t0 of matrices with entries of the form Fijtt0 = Z dzdz0d⌧d⌧ 0 2w(zi z, t ⌧)Cx(z z 0, ⌧ ⌧ 0)w(zj z 0, t0 ⌧ 0) = Z d2k (2⇡)2 d! 2⇡ eik·(zi zj)+i!(t t 0) |v(k,!)|2Cx(k,!). (11) Again, such matrices can be diagonalized in the Fourier basis, with the result that the optimal spacetime filter once again takes the form (10) with the substitutions v(k) ! v(k,!), Cx(k) ! Cx(k,!) (Appendix A.3). Figure 2A depicts the frequency response of this filter in d = 1 spatial dimensions, with corresponding spatial and temporal sections plotted in Figures 2B-C. 3.2 Multiple cell types and the effects of channel capacity Up to this point, we have only considered a single type of filter v(k,!), corresponding to a single cell type. However, multiple cell types might increase the coding efficiency of the entire retina if they specialize, devoting their limited energy budget to non-overlapping regions of frequency space. Indeed, optimal encoding in the multi-cell-type case selects filters v and v0 that satisfy v⇤(k,!)v0(k,!) = 0, corresponding encoding independent visual information (Appendix A.4). This result naturally raises two questions: First, how many filter types are optimal? And second, how should a given budget of J RGCs be allocated across multiple filter types? As detailed in Appendix A.5, we can proceed by analyzing the case of a finite retina in the Fourier domain, approximating the information encoded by a mosaic of J RGCs with spatial filters given by (10) and nonoverlapping bandpass temporal filters that divide the available spectrum (e.g., Figure 2B, C). Following [21], we approximate the correlation spectrum of images by the factorized power law Cx(k,!) ' Ak↵!2 with ↵ ⇡ 1.3 and find that in this case, the optimal filter response exhibits two regimes as a function of spatial frequency (Supplementary Figure 1A): First, below kf = A/ 2in! 2 1/↵, the optimal filter is separable and log-linear, and the filtered image spectrum is white: |v(k,!)|2 ⇡ k↵!2 A⌫ , |v(k,!)|2Cx(k,!) 
where $\nu$, the Lagrange multiplier in (9) that enforces the power constraint, scales as $1/P$ for small values of maximal power $P$ and as $1/P^2$ for larger values (Supplementary Figure 1D). Second, for $k \gtrsim k_f$, the filter response decreases as $k^{-\alpha/2}$ until reaching its upper cutoff at $k_c = k_f / (\nu \sigma_{\mathrm{out}}^2)^{1/\alpha}$, with the filtered image spectrum falling off at the same rate (Supplementary Figure 1B).

But what do these regimes have to do with mosaic formation? The link between the two is given by the fact that, for a finite retina with regularly spaced RFs, adding RGCs decreases the distance between RF centers and so increases the resolving power of the mosaic. That is, the maximal value of $k$ grows roughly as $k \sim \sqrt{J}$ in $d = 2$, such that larger numbers of RGCs capture more information at increasingly higher spatial frequencies (Supplementary Figure 1A). However, while information gain is roughly uniform in the whitening regime, it falls off sharply for $k \gtrsim k_f$ (Supplementary Figure 1C), suggesting the interpretation that the $k \lesssim k_f$ regime is a "mosaic filling" phase in which information accumulates almost linearly as RFs capture new locations in visual space, while the $k \gtrsim k_f$ regime constitutes a "compression phase" in which information gains are slower as RFs shrink to accommodate higher numbers (Figure 2D).

[Figure 3: Statistics of natural videos affect learned RFs. (A) Histogram of spectral attenuation (fraction of power below 3 Hz) for each video clip from the Chicago Motion Database. A significant portion of the dataset exhibits predominantly low-frequency spectral content in time. Videos with spectral attenuation above 0.9, 0.8, and 0.7 are denoted (4), (3), and (2), respectively, while (1) refers to all videos in the dataset. (B) Spatial (top) and temporal (bottom) spectral density of the four subsets. (C) Temporal filters learned by training on each of the four subsets. Training on slow videos produced only smoothing kernels, while training on all videos produced a variety of temporal filters.]

Indeed, one can derive the scaling of total information as a function of $J$:

$$I \simeq \begin{cases} J \log\left(1 + \frac{P_0}{\sigma_{\mathrm{out}}^2}\right) - \frac{2 P_0}{(\alpha + 2)\, \sigma_{\mathrm{out}}^2} \left(\frac{J}{J_f}\right)^{\alpha/2} & k \lesssim k_f \\[4pt] (J - J_f) \left(\frac{J}{J_f}\right)^{-\alpha/2} \frac{2}{2 - \alpha} & k \gtrsim k_f, \end{cases} \tag{12}$$

where $P_0$ is the power budget per RGC and $J_f$ is the RGC number corresponding to $k = k_f$. Thus, mosaic filling exhibits diminishing marginal returns (Figure 2E), such that new cell types are favored when the marginal gain for growing mosaics with lower temporal frequency drops below the gain from initiating a new cell type specialized for higher temporal frequencies. Moreover, the difference between these gain curves implies that new RFs are not added to all mosaics at equal rates, but in proportion to their marginal information (Figure 2F). As we demonstrate in the next section, these features of cell type and mosaic formation continue to hold in the full nonlinear model in simulation.

4 Experiments

We analyzed the characteristics of the optimal spatiotemporal RFs obtained from the model (2, 3) trained on videos from the Chicago Motion Database [22]. Model parameters for spatial kernels, temporal kernels, and the nonlinearities were jointly optimized using Adam [23] to maximize (2) subject to the mean firing rate constraint (3) using the augmented Lagrangian method with the quadratic penalty $\rho = 1$ [24]. Further technical details of model training are in Appendix E.
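As an illustration of how this constrained optimization can be set up, the following sketch combines objective (2) with constraint (3) via a standard augmented Lagrangian for the static (image) case; array shapes, names, and the multiplier update are illustrative assumptions rather than the authors' spatiotemporal code.

```python
import numpy as np

def mutual_info_objective(W, G, Cx, sig_in, sig_out):
    """Log-determinant objective (2) for the locally linearized static model.

    W: (D, J) filters; G: (J,) gains at the linearization point; Cx: (D, D)
    input covariance. A sketch, not the authors' implementation.
    """
    D, J = W.shape
    Gm = np.diag(G)
    Cin = sig_in**2 * np.eye(D)
    Cout = sig_out**2 * np.eye(J)
    signal = Gm @ W.T @ (Cx + Cin) @ W @ Gm + Cout
    noise = Gm @ W.T @ Cin @ W @ Gm + Cout
    return np.linalg.slogdet(signal)[1] - np.linalg.slogdet(noise)[1]

def augmented_lagrangian(obj, mean_rates, lam, rho=1.0):
    """Quantity to minimize: -I plus multiplier and quadratic penalty terms
    enforcing the mean-rate constraint E[r_j] = 1 (rho = 1 in the experiments)."""
    c = np.asarray(mean_rates) - 1.0
    return -obj + np.dot(lam, c) + 0.5 * rho * np.dot(c, c)
```

In the usual scheme, the multipliers are updated as $\lambda_j \leftarrow \lambda_j + \rho\,(\mathbb{E}[r_j] - 1)$ between optimization rounds, while Adam [23] (via automatic differentiation in practice) minimizes the penalized objective.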
All model code and reproducible examples are available at https://github.com/pearsonlab/efficientcoding.

As previously noted, the power spectral density of natural videos can be well approximated by a product of spatial and temporal power-law densities, implying an anticorrelation between high spatial and temporal frequency content [21]. Supplementary Figure 5 shows the data spectrum of the videos in our experiments is also well-approximated by separable power-law fits. To examine the effect of these statistics on the learned RFs, we divided the dataset into four progressively smaller subsets by the proportion of their temporal spectral content below 3 Hz, their spectral attenuation. Using values of 70%, 80%, and 90% then yielded a progression of datasets ranging from most videos to only the slowest videos (Figure 3A, B). Indeed, when the model was trained on these progressively slower data subsets, it produced only temporal smoothing filters, whereas the same model trained on all videos produced a variety of "fast" temporal filter types (Figure 3C). We also note that these experiments used unconstrained spatial kernels in place of (8), yet still converged on spatial RFs with typical center-surround structure as in [10, 15, 14]. Thus, these preliminary experiments suggest that the optimal encoding strategy (in particular, the number of distinct cell types found) depends critically on the statistics of the video distribution to be encoded.

4.1 Mosaics fill in order of temporal frequency

As the number of RGCs available to the model increased, we observed the formation of new cell types with new spectral properties (Figure 4). We characterized the learned filters for each RGC in terms of their spectral centroid, defined as the center of mass of the Fourier (spatial) or Discrete Cosine (temporal) transform. Despite the fact that each model RGC was given its own spatial and temporal filter parameters (6), (8), the learned filter shapes strongly clustered, forming mosaics with nearly uniform response properties (Figure 4A-C). Critically, the emergence of new cell types shifted the spectral responses of previously established ones, with new cell types compressing the spectral windows of one another as they further specialized. Moreover, mosaic density increased with increasing RGC number, shifting the centroids of early mosaics toward increasingly higher spatial frequencies. This is also apparent in the forms of the typical learned filters and their power spectra: new filters selected for increasingly high-frequency content in the temporal domain (Figure 4D).

We likewise analyzed the coverage factors of both individual mosaics and the entire collection, defined as the proportion of visual space covered by the learned RFs. More specifically, we defined the spatial radius of an RF as the distance from its center at which intensity dropped to 20% of its peak and used this area to compute a coverage factor, the ratio of total RF area to total visual space ($\pi/4$ of the square's area due to circular masking). Since coverage factors depend not simply on RGC number but on RF density, they provide an alternative measure of the effective number of distinct cell types learned by the model. As Figure 4E shows, coverage increases nearly linearly with RGC number, while coverage for newly formed mosaics increases linearly before leveling off. In other words, new cell types initially increase coverage of visual space by adding new RFs, but marginal gains in coverage diminish as density increases.
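A minimal sketch of the coverage computation just described, assuming square RF patches and the 20% peak-intensity criterion; the radius estimate and all names are illustrative.

```python
import numpy as np

def rf_radius(w, center, thresh=0.2):
    """Radius at which |RF| intensity has dropped to `thresh` of its peak.

    Approximated as the largest center distance among pixels still above
    threshold; `w` is a 2-D spatial kernel.
    """
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    r = np.sqrt((xs - center[0])**2 + (ys - center[1])**2)
    above = np.abs(w) >= thresh * np.abs(w).max()
    return r[above].max()

def coverage_factor(radii, side):
    """Total RF area over visual space: pi/4 of the square, due to circular masking."""
    rf_area = np.pi * np.sum(np.asarray(radii)**2)
    return rf_area / ((np.pi / 4.0) * side**2)
```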
In all cases, the model dynamically adjusts the number of learned cell types and the proportion of RGCs assigned to them as channel capacity increases.

4.2 Phase changes in mosaic arrangement

In addition to retinal organization at the level of mosaics, a pair of recent papers reported both experimental [15] and theoretical [14] evidence for an additional degree of freedom in optimizing information encoding: the relative arrangement of ON and OFF mosaics. Jun et al. studied this for the case of natural images in [14], demonstrating that the optimal configuration of ON and OFF mosaics is alignment (RFs co-located) at low output noise levels and anti-alignment (OFF RFs between ON RFs and vice-versa) under higher levels of retinal output noise. Moreover, this transition is abrupt, constituting a phase change in optimal mosaic arrangement.

We thus asked whether learned mosaics exhibited a similar phase transition for natural video encoding. To do so, following [14], we repeatedly optimized a small model ($J = 14$; 7 ON, 7 OFF) for multiple learned filter types while systematically varying levels of input and output noise. In each case, one ON-OFF pair was fixed at the center of the space, while the locations of the others were allowed to vary. We used RF size $D = 8^2$ pixels for Slow and $D = 12^2$ for FastA and FastB cell types to allow the size of spatial kernels to be similar to those of the previous experiments, and we imposed the additional constraint that the shape parameters $a_j$, $b_j$, and $c_j$ in (8) be shared across RGCs. Under these conditions, the six free pairs of RFs converged to either aligned (overlapping) or anti-aligned (alternating) positions along the edges of the circular visual space, allowing for a straightforward examination of the effect of input and output noise on mosaic arrangement. Figure 5A-C shows that the phase transition boundaries closely follow the pattern observed in [14]: increasing output noise shifts the optimal configuration from alignment to anti-alignment. Moreover, for each of the tested filters, increasing input noise discourages this transition. This effect also follows from the analysis presented in [14], since higher input noise increases coactivation of nearby pairs of RFs, requiring larger thresholds to render ON-OFF pairs approximately independent (Appendix B).

5 Discussion

Related work: As reviewed in the introduction, this study builds on a long line of work using efficient coding principles to understand retinal processing. In addition, it is related to work examining encoding of natural videos [25, 22, 16] and prediction in space-time. The most closely related work to this one is that of [13], which also considered efficient coding of natural videos and the tradeoffs involved in multiple cell types. Our treatment here differs from that work in several key ways: First, while [13] was concerned with demonstrating that multiple cell types could prove beneficial for encoding (in a framework focused on reconstruction error), that study predetermined the number of cell types and mosaic structure, only optimizing their relative spacing. By contrast, this work is focused on how the number of cell types is dynamically determined, and how the resulting mosaics arrange themselves, as a function of the number of units available for encoding (i.e., the channel capacity).
Specifically, we follow previous efficient coding models [8, 9, 10, 11] in maximizing mutual information and do not assume an a priori mosaic arrangement, a particular cell spacing, or a particular number of cell types: all of these emerge via optimization in our formulation. Second, while the computational model of [13] optimized strides for a pair of rectangular arrays of RGCs, we individually optimize RF locations and shapes, allowing us to study changes in optimal RF size and density as new, partial mosaics begin to form. Third, while [13] used zero-padding of natural videos to bias learned temporal filters toward those of observed RGCs, we link the form of temporal RFs to biophysical limits on the filtering properties of bipolar cells, producing temporal filters with the delay properties observed in real data. Finally, while [13] only considered a single noise source in their model, we consider noise in both photoreceptor responses (input noise) and RGC responses (output noise), allowing us to investigate transitions in the optimal relative arrangement of mosaics [14, 15].

We have shown that efficient coding of natural videos produces multiple cell types with complementary RF properties. In addition, we have shown for the first time that the number and characteristics of these cell types depend crucially on the channel capacity: the number of available RGCs. As new simulated RGCs become available, they are initially concentrated into mosaics with more densely packed RFs, improving the spatial frequency bandwidth over which information is encoded. However, as this strategy produces diminishing returns, new cell types encoding higher-frequency temporal features emerge in the optimization process. These new cell types capture information over distinct spatiotemporal frequency bands, and their formation leads to upward shifts in the spatial frequency responses of previously formed cell types. Moreover, pairs of ON and OFF mosaics continue to exhibit the phase transition between alignment and anti-alignment revealed in a purely spatial optimization of efficient coding [14], suggesting that mosaic coordination is a general strategy for increasing coding efficiency.

Furthermore, despite the assumptions of this model (linear filtering, separable filters, firing rates instead of spikes), our results are consistent with observed retinal data. For example, RGCs with small spatial RFs exhibit more prolonged temporal integration: they are also more low-pass in their temporal frequency tuning. Second, there is greater variability in the size and shape of spatial RFs at a given retinal location, but temporal RFs exhibit remarkably little variability, both in our simulations and in data [19]. Thus, these results further testify to the power of efficient coding principles in providing a conceptual framework for understanding the nervous system.

Acknowledgments and Disclosure of Funding

This work was supported by NIH/National Eye Institute Grant R01 EY031396.
1. What is the primary contribution of the paper regarding retinal mosaic organization?
2. What are the strengths and weaknesses of the proposed model?
3. Are there any concerns or questions regarding the tight coupling with reference [10] and the relegation of some parts of the paper to the appendix?
4. Can the differences between this work and [10] affecting the main conclusions of the paper be clarified better?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Any suggestions for improving the paper, such as making the results backing up certain statements easier to find in the text and providing more explicit connections between different parts of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper presents a model for retinal mosaic organization, derived by application of the efficient coding principle to natural movies. In particular, the model shows that the total number of retinal ganglion cells is a key parameter that controls the emergence of distinct cell types. The model is also used to extract predictions about the relative phase of distinct retinal mosaics as a function of input noise.

Strengths And Weaknesses

Strengths

The paper tackles a relevant problem, that is, modeling the emergence of distinct retinal mosaics from an efficient coding perspective, by building on a solid foundation (reference [10]).

Weaknesses

The paper is very hard to read, mostly because of (1) its tight coupling with reference [10], and (2) how much of it is relegated to the Appendix. With respect to (1), I am not sure I understand fully which parts of the model, and which results, are fully novel here and which are minor refinements or variants of the results in [10]. Regarding (2), the Appendix includes not only derivations for most of the equations shown in the main text, or technical details for the realization of some of the plots, but also - to my understanding - the basic logic of some of the results (I'll give more details on this below, under "Questions"). Additionally, the paper claims "strong agreement with observed data" but contains no data and no quantitative assessment of such a claim.

Questions

Is it possible to clarify better how the differences between this work and [10] (listed in the "related work" paragraph) affect the main conclusions of the paper? Or is the main point that the results here are qualitatively similar to those in [10], despite the differences listed under "related work"? (I am referring mostly to the material in Section 3.) For instance, the first sentence of the Discussion reads: "Here, we have shown that efficient coding of natural movies results in the formation of multiple receptive field mosaics with complementary filtering properties [10]." Is this suggesting that the main result of the paper is something for which ref [10] should be credited?

Lines 50-53 read: "In the case of linear encoding, we show analytically how a tradeoff between information gains from increasing mosaic density and information gains from adding new, specialized cell types leads to the addition of new mosaics capturing higher temporal frequency information." Can you make sure that the results backing up this statement are easier to find in the text? If I am not mistaken, the passage should be lines 182-189. That part of the text hinges on the fact that "as detailed above, adding filters covering non-overlapping spectral windows allows mosaics to specialize to different regions of spatial and temporal frequency". The "above" here probably refers to the previous paragraph, which ends with the following: "In [10], the authors considered a form of this with spatial bandwidth partitioning (v(k)*v'(k) = 0), but as we show below, this special case encounters a limit beyond the first two mosaics, when the spatial passband spectra of new filters substantially overlap. Rather, the more general solution is that mosaic filters are band-limited in both space and time, arranging themselves to tile minimally overlapping regions with highest power in the (k,ω) plane (Appendix A)." This passage, in turn, contains a reference to ref [10], one to "below", and one to the Appendix.
Forgetting about [10] and assuming that "below" here refers to Section 4 (which is not valid for the original point we were trying to find support for, as we're looking for an analytical solution, as claimed in the Introduction), we therefore follow this chain of references to the Appendix. The relevant section seems to be section A.4, "Effects of channel capacity and new mosaic formation", but it would be great to have this spelled out more clearly, and perhaps the result in the Appendix also connected back to the claim in the main text in a more explicit way.

The Discussion states (lines 278-279) that the results are "in strong agreement with observed retinal data". I found this statement surprising, given the absence of quantitative comparisons with empirical data in the paper. Is it possible to clarify what is meant by this, and to substantiate the statement with direct comparisons?

Limitations

The limitations of the work are acknowledged very briefly in the discussion ("despite the strong assumptions of our model - linear filtering, separable filters, firing rates instead of spikes -"), but not unsuitably so for a modelling work of this type.
NIPS
Title
Efficient coding, channel capacity, and the emergence of retinal mosaics

Abstract
Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity (the number of simulated RGCs available for encoding) is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed with increasing output noise and decreasing input noise. Together, these results offer a unified perspective on the relationship between retinal mosaics, efficient coding, and channel capacity that can help to explain the stunning functional diversity of retinal cell types.

1 Introduction

The retina is one of the most intensely studied neural circuits, yet we still lack a computational understanding of its organization in relation to its function. At a structural level, the retina forms a three-layer circuit, with its primary feedforward pathway consisting of photoreceptors to bipolar cells to retinal ganglion cells (RGCs), the axons of which form the optic nerve [1]. RGCs can be divided into 30-50 functionally distinct cell types (depending on species) with each cell responsive to a localized area of visual space (its receptive field (RF)), and the collection of RFs for each type tiling space to form a "mosaic" [2, 3, 4, 5]. Each mosaic represents the extraction of a specific type of information across the visual scene by a particular cell type, with different mosaics responding to light increments or decrements (ON and OFF cells), high or low spatial and temporal frequencies, color, motion, and a host of other features. While much is known about the response properties of each RGC type, the computational principles that drive RGC diversity remain unclear.

Efficient coding theory has proven one of the most powerful ideas for understanding retinal organization and sensory processing. Efficient coding posits that the nervous system attempts to encode sensory input by minimizing redundancy subject to biological costs and constraints [6, 7]. As more commonly formulated, it seeks to maximize the mutual information between sensory data and neural representations, with the most common cost in the retinal case being the energetic cost of action potentials transmitted by the RGCs.
Despite its simplicity, this principle has proven useful, predicting the center-surround structure of RFs [8], the frequency response profile of contrast sensitivity [9], the structure of retinal mosaics [10, 11], the role of nonlinear rectification [12], different spatiotemporal kernels [13], and inter-mosaic arrangements [14, 15].

While previous studies have largely focused on either spatial or temporal aspects of efficient coding, we optimize an efficient coding model of retinal processing in both space and time to natural videos [16]. We systematically varied the number of cells available to the system and found that larger numbers of available cells led to more cell types. Each of these functionally distinct types formed its own mosaic of RFs that tiled space. We show that when and how new cell types emerge and form mosaics is the result of tradeoffs between power constraints and the benefits of specialized encoding that shift as more cells are available to the system. We show that cell types begin by capturing low-frequency temporal information and capture increasingly higher-frequency temporal information over larger spatial RFs as new cell types form. Finally, we investigated the relative arrangement of these mosaics and their dependence on noise. We show that mosaic pairs can be aligned or anti-aligned depending on input and output noise in the system [14]. Together, these results demonstrate for the first time how efficient coding principles can explain, even predict, the formation of cell types and which types are most informative when channel capacity is limited.

2 Model

The model we develop is an extension of [14], a retinal model for efficient coding of natural images, which is based on a mutual information maximization objective proposed in [10]. The retinal model takes $D$-pixel patches of natural images $x \in \mathbb{R}^D$ corrupted by input noise $n_{\mathrm{in}} \sim \mathcal{N}(0, C_{n_{\mathrm{in}}})$, filters these with unit-norm linear kernels $\{w_j \in \mathbb{R}^D \mid \|w_j\| = 1\}_{j=1,\dots,J}$ representing $J$ RGCs, and then feeds the resulting signals $y_j = w_j^\top (x + n_{\mathrm{in}})$ through softplus nonlinearities $\eta(y) = \beta \log\left(1 + e^{y/\beta}\right)$ (we used $\beta = 0.25$) with gain $\gamma_j$ and threshold $\theta_j$. Finally, these signals are further corrupted by additive output noise $n_{\mathrm{out}} \sim \mathcal{N}(0, C_{n_{\mathrm{out}}})$ to produce firing rates $r_j$:

$$r_j = \gamma_j \cdot \eta(y_j - \theta_j) + n_{\mathrm{out},j}. \tag{1}$$

The model learns parameters $w_j$, $\gamma_j$, and $\theta_j$ to maximize the mutual information between the inputs $x$ and the outputs $r$, under a mean firing rate constraint [10, 14]:

$$\text{maximize} \quad \log \frac{\det\left(G W^\top (C_x + C_{n_{\mathrm{in}}}) W G + C_{n_{\mathrm{out}}}\right)}{\det\left(G W^\top C_{n_{\mathrm{in}}} W G + C_{n_{\mathrm{out}}}\right)} \tag{2}$$

$$\text{subject to} \quad \mathbb{E}[r_j] = 1. \tag{3}$$

Here $C_x$ is the covariance matrix of the input distribution, $W \in \mathbb{R}^{D \times J}$ contains the filters $w_j$ as its columns, the gain matrix $G = \mathrm{diag}\left(\gamma_j \left.\frac{d\eta}{dy}\right|_{y_j - \theta_j}\right)$, and the noise covariances are $C_{n_{\mathrm{in}}} = \sigma_{\mathrm{in}}^2 I_{D \times D}$ and $C_{n_{\mathrm{out}}} = \sigma_{\mathrm{out}}^2 I_{J \times J}$. This objective is equivalent to the formulation in [10], which assumes normally distributed inputs and locally linear responses in order to approximate the mutual information in a closed form.

Here, we extend this model to time-varying inputs $x(t) \in \mathbb{R}^D$ representing natural videos (Figure 1A-B), which are convolved with linear spatiotemporal kernels $\{w_j(t)\}_{j=1,\dots,J}$:

$$y_j(t) = w_j^\top(t) * x(t) = \int_{-\infty}^{\infty} w_j(\tau)^\top x(t - \tau)\, d\tau. \tag{4}$$
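The following NumPy sketch traces the forward pass of eq. (1) for the static model; array shapes and names are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(y, beta=0.25):
    # eta(y) = beta * log(1 + exp(y / beta)); logaddexp keeps it numerically stable
    return beta * np.logaddexp(0.0, y / beta)

def forward(x, W, gain, theta, sig_in, sig_out):
    """Firing rates r_j of eq. (1) for a batch of patches x with shape (N, D).

    W: (D, J) unit-norm filters; gain, theta: (J,). A sketch of the
    generative model only.
    """
    n_in = sig_in * rng.standard_normal(x.shape)       # photoreceptor (input) noise
    y = (x + n_in) @ W                                 # y_j = w_j^T (x + n_in)
    r = gain * softplus(y - theta)                     # gained, thresholded response
    return r + sig_out * rng.standard_normal(r.shape)  # RGC (output) noise
```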
We additionally assume that the convolutional kernels are separable in time and space:

$$w_j(t) = \phi_j(t)\, w_j, \qquad \|w_j\| = 1, \qquad \phi_j(t) \in \mathbb{R}, \qquad \int_{-\infty}^{\infty} \phi_j(t)^2\, dt = 1, \tag{5}$$

and the temporal kernels are unit-norm impulse responses taking the following parametric form:

$$\phi_j(t) \propto \begin{cases} \alpha_j t^n e^{-t/\tau_j} - \alpha'_j t^n e^{-t/\tau'_j} & \text{if } t \ge \text{delay} \\ 0 & \text{otherwise,} \end{cases} \tag{6}$$

where $\alpha_j$, $\alpha'_j$, $\tau_j > 0$, $\tau'_j > 0$ are learnable parameters, and $n \in \mathbb{N}$ is fixed. Previous work assumed an unconstrained form for these filters, adding zero-padding before and after the model's image inputs to produce the characteristic shape of the temporal filters in primate midget and parasol cells [13], but this zero-padding represents a biologically implausible constraint, and the results fail to correctly reproduce the observed delay in retinal responses [17, 18, 19]. Rather, optimizing (2) with unconstrained temporal filters produces a filter bank uniformly tiling time (Supplementary Figure 4). By contrast, (6) is motivated by the arguments of [20], which showed that the optimal minimum-phase temporal filters of retinal bipolar cells, the inputs to the RGCs, take the form

$$\phi(t > 0) \propto e^{-t/\tau}\left[\sin \omega t - \omega t \cos \omega t\right] \approx e^{-t/\tau}\, \frac{(\omega t)^3}{3} \tag{7}$$

when $\omega\tau \ll 1$. Thus, we model RGC temporal filters as a linear combination of these forms. In practice, we take only two filters and use $n = 6$ rather than $n = 3$, since these have been shown to perform well in capturing observed retinal responses [19]. The results produced by more filters or different exponents are qualitatively unchanged (Supplementary Figure 7). For training on video data, we use discrete temporal filters and convolutions with $\sum_{t=0}^{T-1} \phi_j[t]^2 = 1$. Finally, while unconstrained spatial kernels $w_j$ converge to characteristic center-surround shapes under optimization of (2) (Figure 1C), for computational efficiency and stability, we parameterized these filters using a radially-symmetric difference of Gaussians

$$w_j(r) \propto e^{-a_j r^2} - c_j e^{-b_j r^2}, \qquad b_j > a_j > 0, \quad 0 < c_j < 1, \tag{8}$$

where $r$ measures the spatial distance to the center of the RF, and the center location and the shape parameters $a_j$, $b_j$, $c_j$ are potentially different for each RGC $j$. The result of optimizing (2) using these forms is a set of spatial and temporal kernels (Figure 1D-E) that replicate experimentally observed shapes and spatial RF tiling.

3 Efficient coding as a function of channel capacity: linear theory

Before presenting results from our numerical experiments optimizing the model (2, 3), we begin by deriving intuitions about its behavior by studying the case of linear filters analytically. That is, we assume a single gain $\gamma$ for all cells, no bias ($\theta = 0$), and a linear transfer function $\eta(y) = y$. As we will see, this linear analysis correctly predicts the same types of mosaic formation and filling observed in the full nonlinear model. Here, we sketch the main results, deferring full details to Appendix A.

3.1 Linear model in the infinite retina limit

For analytical simplicity, we begin by assuming an infinite retina on which RFs form mosaics described by a regular lattice.
Under these conditions, we can write the log determinants in (2) as integrals and optimize over the unnormalized filter $v \equiv \gamma w$ subject to a power constraint:

$$\max_v \int_{G_0} \frac{d^2k}{(2\pi)^2} \left[ \log \frac{\sum_{g \in G} |v(k+g)|^2 \left(C_x(k+g) + \sigma_{\mathrm{in}}^2\right) + \sigma_{\mathrm{out}}^2}{\sum_{g \in G} |v(k+g)|^2\, \sigma_{\mathrm{in}}^2 + \sigma_{\mathrm{out}}^2} - \nu \sum_{g \in G} |v(k+g)|^2 \left(C_x(k+g) + \sigma_{\mathrm{in}}^2\right) \right], \tag{9}$$

where $C_x(k)$ is the Fourier transform of the stationary image covariance $C_x(z - z')$, the integral is over all frequencies $k \in G_0$ unique up to aliasing caused by the spatial regularity of the mosaic, and the sums over $g$ account for aliased frequencies (Appendix A.1). In [8], the range $[-\pi, \pi]$ is used for the integral, corresponding to a one-dimensional lattice and units of mosaic spacing $\Delta z = 1$. Now, solving the optimization in (9) results in a spatial kernel with the spectral form (Appendix A.2)

$$|v(k)|^2 = \frac{\sigma_{\mathrm{out}}^2}{\sigma_{\mathrm{in}}^2} \left[ \frac{1}{2}\, \frac{C_x(k)}{C_x(k) + \sigma_{\mathrm{in}}^2} \left( \sqrt{1 + \frac{\sigma_{\mathrm{in}}^2}{\sigma_{\mathrm{out}}^2} \left( \frac{4}{\nu\, C_x(k)} + 1 \right)} - 1 \right) \right]_+, \qquad k \in G_0, \tag{10}$$

where $k = \|\mathbf{k}\|$ and $\nu$ is chosen to enforce the constraint on total power. This is exactly the solution found in [8], linking it (in the linear case) to the model of [10, 11]. Note, however, that (10) is only nonzero within $G_0$, since RF spacing sets an upper limit on the passband of the resulting filters.

The generalization of this formulation to the spacetime case is straightforward. Given a spacetime stationary image spectrum $C_x(z - z', t - t')$ and a radially-symmetric, causal filter $w(z, t)$, the same infinite retina limit as above requires calculating determinants across both neurons $i, j$ and time points $t, t'$ of matrices with entries of the form

$$F_{ijtt'} = \int dz\, dz'\, d\tau\, d\tau'\; \gamma^2\, w(z_i - z, t - \tau)\, C_x(z - z', \tau - \tau')\, w(z_j - z', t' - \tau') = \int \frac{d^2k}{(2\pi)^2} \frac{d\omega}{2\pi}\, e^{i k \cdot (z_i - z_j) + i\omega(t - t')}\, |v(k, \omega)|^2\, C_x(k, \omega). \tag{11}$$

Again, such matrices can be diagonalized in the Fourier basis, with the result that the optimal spacetime filter once again takes the form (10) with the substitutions $v(k) \to v(k, \omega)$, $C_x(k) \to C_x(k, \omega)$ (Appendix A.3). Figure 2A depicts the frequency response of this filter in $d = 1$ spatial dimensions, with corresponding spatial and temporal sections plotted in Figures 2B-C.

3.2 Multiple cell types and the effects of channel capacity

Up to this point, we have only considered a single type of filter $v(k, \omega)$, corresponding to a single cell type. However, multiple cell types might increase the coding efficiency of the entire retina if they specialize, devoting their limited energy budget to non-overlapping regions of frequency space. Indeed, optimal encoding in the multi-cell-type case selects filters $v$ and $v'$ that satisfy $v^*(k, \omega)\, v'(k, \omega) = 0$, corresponding to encoding independent visual information (Appendix A.4). This result naturally raises two questions: First, how many filter types are optimal? And second, how should a given budget of $J$ RGCs be allocated across multiple filter types?

As detailed in Appendix A.5, we can proceed by analyzing the case of a finite retina in the Fourier domain, approximating the information encoded by a mosaic of $J$ RGCs with spatial filters given by (10) and nonoverlapping bandpass temporal filters that divide the available spectrum (e.g., Figure 2B, C). Following [21], we approximate the correlation spectrum of images by the factorized power law $C_x(k, \omega) \simeq A k^{-\alpha} \omega^{-2}$ with $\alpha \approx 1.3$ and find that in this case, the optimal filter response exhibits two regimes as a function of spatial frequency (Supplementary Figure 1A): First, below $k_f = \left(A / \sigma_{\mathrm{in}}^2 \omega^2\right)^{1/\alpha}$, the optimal filter is separable and log-linear, and the filtered image spectrum is white:

$$|v(k, \omega)|^2 \approx \frac{k^{\alpha} \omega^2}{A \nu}, \qquad |v(k, \omega)|^2\, C_x(k, \omega) \approx \nu^{-1},$$
where $\nu$, the Lagrange multiplier in (9) that enforces the power constraint, scales as $1/P$ for small values of maximal power $P$ and as $1/P^2$ for larger values (Supplementary Figure 1D). Second, for $k \gtrsim k_f$, the filter response decreases as $k^{-\alpha/2}$ until reaching its upper cutoff at $k_c = k_f / (\nu \sigma_{\mathrm{out}}^2)^{1/\alpha}$, with the filtered image spectrum falling off at the same rate (Supplementary Figure 1B).

But what do these regimes have to do with mosaic formation? The link between the two is given by the fact that, for a finite retina with regularly spaced RFs, adding RGCs decreases the distance between RF centers and so increases the resolving power of the mosaic. That is, the maximal value of $k$ grows roughly as $k \sim \sqrt{J}$ in $d = 2$, such that larger numbers of RGCs capture more information at increasingly higher spatial frequencies (Supplementary Figure 1A). However, while information gain is roughly uniform in the whitening regime, it falls off sharply for $k \gtrsim k_f$ (Supplementary Figure 1C), suggesting the interpretation that the $k \lesssim k_f$ regime is a "mosaic filling" phase in which information accumulates almost linearly as RFs capture new locations in visual space, while the $k \gtrsim k_f$ regime constitutes a "compression phase" in which information gains are slower as RFs shrink to accommodate higher numbers (Figure 2D).

[Figure 3: Statistics of natural videos affect learned RFs. (A) Histogram of spectral attenuation (fraction of power below 3 Hz) for each video clip from the Chicago Motion Database. A significant portion of the dataset exhibits predominantly low-frequency spectral content in time. Videos with spectral attenuation above 0.9, 0.8, and 0.7 are denoted (4), (3), and (2), respectively, while (1) refers to all videos in the dataset. (B) Spatial (top) and temporal (bottom) spectral density of the four subsets. (C) Temporal filters learned by training on each of the four subsets. Training on slow videos produced only smoothing kernels, while training on all videos produced a variety of temporal filters.]

Indeed, one can derive the scaling of total information as a function of $J$:

$$I \simeq \begin{cases} J \log\left(1 + \frac{P_0}{\sigma_{\mathrm{out}}^2}\right) - \frac{2 P_0}{(\alpha + 2)\, \sigma_{\mathrm{out}}^2} \left(\frac{J}{J_f}\right)^{\alpha/2} & k \lesssim k_f \\[4pt] (J - J_f) \left(\frac{J}{J_f}\right)^{-\alpha/2} \frac{2}{2 - \alpha} & k \gtrsim k_f, \end{cases} \tag{12}$$

where $P_0$ is the power budget per RGC and $J_f$ is the RGC number corresponding to $k = k_f$. Thus, mosaic filling exhibits diminishing marginal returns (Figure 2E), such that new cell types are favored when the marginal gain for growing mosaics with lower temporal frequency drops below the gain from initiating a new cell type specialized for higher temporal frequencies. Moreover, the difference between these gain curves implies that new RFs are not added to all mosaics at equal rates, but in proportion to their marginal information (Figure 2F). As we demonstrate in the next section, these features of cell type and mosaic formation continue to hold in the full nonlinear model in simulation.

4 Experiments

We analyzed the characteristics of the optimal spatiotemporal RFs obtained from the model (2, 3) trained on videos from the Chicago Motion Database [22]. Model parameters for spatial kernels, temporal kernels, and the nonlinearities were jointly optimized using Adam [23] to maximize (2) subject to the mean firing rate constraint (3) using the augmented Lagrangian method with the quadratic penalty $\rho = 1$ [24]. Further technical details of model training are in Appendix E.
All model code and reproducible examples are available at https://github.com/pearsonlab/efficientcoding.

As previously noted, the power spectral density of natural videos can be well approximated by a product of spatial and temporal power-law densities, implying an anticorrelation between high spatial and temporal frequency content [21]. Supplementary Figure 5 shows the data spectrum of the videos in our experiments is also well-approximated by separable power-law fits. To examine the effect of these statistics on the learned RFs, we divided the dataset into four progressively smaller subsets by the proportion of their temporal spectral content below 3 Hz, their spectral attenuation. Using values of 70%, 80%, and 90% then yielded a progression of datasets ranging from most videos to only the slowest videos (Figure 3A, B). Indeed, when the model was trained on these progressively slower data subsets, it produced only temporal smoothing filters, whereas the same model trained on all videos produced a variety of "fast" temporal filter types (Figure 3C). We also note that these experiments used unconstrained spatial kernels in place of (8), yet still converged on spatial RFs with typical center-surround structure as in [10, 15, 14]. Thus, these preliminary experiments suggest that the optimal encoding strategy (in particular, the number of distinct cell types found) depends critically on the statistics of the video distribution to be encoded.

4.1 Mosaics fill in order of temporal frequency

As the number of RGCs available to the model increased, we observed the formation of new cell types with new spectral properties (Figure 4). We characterized the learned filters for each RGC in terms of their spectral centroid, defined as the center of mass of the Fourier (spatial) or Discrete Cosine (temporal) transform. Despite the fact that each model RGC was given its own spatial and temporal filter parameters (6), (8), the learned filter shapes strongly clustered, forming mosaics with nearly uniform response properties (Figure 4A-C). Critically, the emergence of new cell types shifted the spectral responses of previously established ones, with new cell types compressing the spectral windows of one another as they further specialized. Moreover, mosaic density increased with increasing RGC number, shifting the centroids of early mosaics toward increasingly higher spatial frequencies. This is also apparent in the forms of the typical learned filters and their power spectra: new filters selected for increasingly high-frequency content in the temporal domain (Figure 4D).

We likewise analyzed the coverage factors of both individual mosaics and the entire collection, defined as the proportion of visual space covered by the learned RFs. More specifically, we defined the spatial radius of an RF as the distance from its center at which intensity dropped to 20% of its peak and used this area to compute a coverage factor, the ratio of total RF area to total visual space ($\pi/4$ of the square's area due to circular masking). Since coverage factors depend not simply on RGC number but on RF density, they provide an alternative measure of the effective number of distinct cell types learned by the model. As Figure 4E shows, coverage increases nearly linearly with RGC number, while coverage for newly formed mosaics increases linearly before leveling off. In other words, new cell types initially increase coverage of visual space by adding new RFs, but marginal gains in coverage diminish as density increases.
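For concreteness, a possible reading of the spectral-centroid characterization used above is sketched below; whether "mass" means spectral magnitude or power is an assumption here, as are all names.

```python
import numpy as np
from scipy.fft import dct

def spectral_centroid_temporal(phi):
    """Center of mass of the Discrete Cosine Transform of a 1-D temporal filter."""
    mag = np.abs(dct(phi, norm='ortho'))
    freqs = np.arange(len(mag))
    return (freqs * mag).sum() / mag.sum()

def spectral_centroid_spatial(w):
    """Center of mass of the radial Fourier magnitude of a 2-D spatial filter."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(w)))
    ys, xs = np.indices(F.shape)
    cy, cx = (F.shape[0] - 1) / 2.0, (F.shape[1] - 1) / 2.0
    k = np.sqrt((xs - cx)**2 + (ys - cy)**2)  # radial spatial frequency index
    return (k * F).sum() / F.sum()
```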
In all cases, the model dynamically adjusts the number of learned cell types and the proportion of RGCs assigned to them as channel capacity increases.

4.2 Phase changes in mosaic arrangement

In addition to retinal organization at the level of mosaics, a pair of recent papers reported both experimental [15] and theoretical [14] evidence for an additional degree of freedom in optimizing information encoding: the relative arrangement of ON and OFF mosaics. Jun et al. studied this for the case of natural images in [14], demonstrating that the optimal configuration of ON and OFF mosaics is alignment (RFs co-located) at low output noise levels and anti-alignment (OFF RFs between ON RFs and vice-versa) under higher levels of retinal output noise. Moreover, this transition is abrupt, constituting a phase change in optimal mosaic arrangement.

We thus asked whether learned mosaics exhibited a similar phase transition for natural video encoding. To do so, following [14], we repeatedly optimized a small model ($J = 14$; 7 ON, 7 OFF) for multiple learned filter types while systematically varying levels of input and output noise. In each case, one ON-OFF pair was fixed at the center of the space, while the locations of the others were allowed to vary. We used RF size $D = 8^2$ pixels for Slow and $D = 12^2$ for FastA and FastB cell types to allow the size of spatial kernels to be similar to those of the previous experiments, and we imposed the additional constraint that the shape parameters $a_j$, $b_j$, and $c_j$ in (8) be shared across RGCs. Under these conditions, the six free pairs of RFs converged to either aligned (overlapping) or anti-aligned (alternating) positions along the edges of the circular visual space, allowing for a straightforward examination of the effect of input and output noise on mosaic arrangement. Figure 5A-C shows that the phase transition boundaries closely follow the pattern observed in [14]: increasing output noise shifts the optimal configuration from alignment to anti-alignment. Moreover, for each of the tested filters, increasing input noise discourages this transition. This effect also follows from the analysis presented in [14], since higher input noise increases coactivation of nearby pairs of RFs, requiring larger thresholds to render ON-OFF pairs approximately independent (Appendix B).

5 Discussion

Related work: As reviewed in the introduction, this study builds on a long line of work using efficient coding principles to understand retinal processing. In addition, it is related to work examining encoding of natural videos [25, 22, 16] and prediction in space-time. The most closely related work to this one is that of [13], which also considered efficient coding of natural videos and the tradeoffs involved in multiple cell types. Our treatment here differs from that work in several key ways: First, while [13] was concerned with demonstrating that multiple cell types could prove beneficial for encoding (in a framework focused on reconstruction error), that study predetermined the number of cell types and mosaic structure, only optimizing their relative spacing. By contrast, this work is focused on how the number of cell types is dynamically determined, and how the resulting mosaics arrange themselves, as a function of the number of units available for encoding (i.e., the channel capacity).
Specifically, we follow previous efficient coding models [8, 9, 10, 11] in maximizing mutual information and do not assume an a priori mosaic arrangement, a particular cell spacing, or a particular number of cell types: all of these emerge via optimization in our formulation. Second, while the computational model of [13] optimized strides for a pair of rectangular arrays of RGCs, we individually optimize RF locations and shapes, allowing us to study changes in optimal RF size and density as new, partial mosaics begin to form. Third, while [13] used zero-padding of natural videos to bias learned temporal filters toward those of observed RGCs, we link the form of temporal RFs to biophysical limits on the filtering properties of bipolar cells, producing temporal filters with the delay properties observed in real data. Finally, while [13] only considered a single noise source in their model, we consider noise in both photoreceptor responses (input noise) and RGC responses (output noise), allowing us to investigate transitions in the optimal relative arrangement of mosaics [14, 15].

We have shown that efficient coding of natural videos produces multiple cell types with complementary RF properties. In addition, we have shown for the first time that the number and characteristics of these cell types depend crucially on the channel capacity: the number of available RGCs. As new simulated RGCs become available, they are initially concentrated into mosaics with more densely packed RFs, improving the spatial frequency bandwidth over which information is encoded. However, as this strategy produces diminishing returns, new cell types encoding higher-frequency temporal features emerge in the optimization process. These new cell types capture information over distinct spatiotemporal frequency bands, and their formation leads to upward shifts in the spatial frequency responses of previously formed cell types. Moreover, pairs of ON and OFF mosaics continue to exhibit the phase transition between alignment and anti-alignment revealed in a purely spatial optimization of efficient coding [14], suggesting that mosaic coordination is a general strategy for increasing coding efficiency.

Furthermore, despite the assumptions of this model (linear filtering, separable filters, firing rates instead of spikes), our results are consistent with observed retinal data. For example, RGCs with small spatial RFs exhibit more prolonged temporal integration: they are also more low-pass in their temporal frequency tuning. Second, there is greater variability in the size and shape of spatial RFs at a given retinal location, but temporal RFs exhibit remarkably little variability, both in our simulations and in data [19]. Thus, these results further testify to the power of efficient coding principles in providing a conceptual framework for understanding the nervous system.

Acknowledgments and Disclosure of Funding

This work was supported by NIH/National Eye Institute Grant R01 EY031396.
1. What is the focus and contribution of the paper regarding ganglion cell types?
2. What are the strengths of the proposed approach, particularly in its theoretical and experimental aspects?
3. What are the weaknesses of the paper, especially regarding some open questions and technical parts?
4. Do you have any concerns about the conclusion of the paper regarding the change in temporal frequency?
5. What are the minor comments and suggestions for improving certain parts of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The authors present a unified perspective on the appearance of different ganglion cell types (in terms of receptive field size, temporal properties, and polarity) in the framework of efficient coding. They deploy a previously developed model and efficient coding framework on spatio-temporal movies. They show, first analytically in the case of a simplified linear model, later experimentally in the non-linear case, how optimal spatial and temporal filters change as the number of neurons ('channel capacity') increases. Additionally, they investigate the resulting mosaics of similar types for (anti-)alignment and show that input as well as output noise has an important but opposing influence on the alignment of ON/OFF receptive fields.

Strengths And Weaknesses

Strengths

The authors embed their work excellently into the previous literature and clearly work out the parallels and their novel contributions. The theoretical as well as the experimental approaches are well described and cover different interesting arrangements and detailed analyses. All experiments are carried out carefully and are of very high quality, as are the figures. The authors push the understanding of retinal layout further and try to unify previous approaches. However, there are some minor weaknesses:

Weaknesses

There are some open questions (see below). A simple schema for the model would help to understand the setup. From equation (2) it does not become clear what the differences to [7] are, and how the formula is derived (as I could not find the exact formulation in [7]). Some parts of Section 3 (and especially Suppl. A) are quite technical and it is sometimes difficult to follow. The authors could try to strengthen a 'red thread' and also give an intuition where possible. Sections 3 and 4 seem to be rather detached from each other. While formulas are derived thoroughly, their interpretation and link to Section 4 could be improved. The final conclusion that previous cell types shift down in temporal frequency (l. 274) is not well supported by the presented experiments. While it is more obvious for the spatial frequency, an additional analysis for the temporal domain could help to support this argument.

Minor comments: l. 256: shouldn't it reference eq. (8)?

Questions

Fig. 2a: How do the 'experimental dots' relate to the presented isoclines? Could the authors comment on how their argumentation in l. 186 (and also later) depends on the assumption of time-space separability? While a detailed analysis is beyond the scope of the manuscript, the specialization to different regions of spatial and temporal frequencies is certainly influenced by this assumption. l. 196: The data distribution was approximated by a multivariate Gaussian. Is this an appropriate approximation given the known power laws for natural scene statistics? How would a different distribution change the results?

Limitations

The authors did not comment on limitations or negative societal impact. While the latter remains quite abstract, the limitations could indeed be discussed in more detail.
NIPS
Title
Efficient coding, channel capacity, and the emergence of retinal mosaics

Abstract
Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity (the number of simulated RGCs available for encoding) is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed with increasing output noise and decreasing input noise. Together, these results offer a unified perspective on the relationship between retinal mosaics, efficient coding, and channel capacity that can help to explain the stunning functional diversity of retinal cell types.

1 Introduction

The retina is one of the most intensely studied neural circuits, yet we still lack a computational understanding of its organization in relation to its function. At a structural level, the retina forms a three-layer circuit, with its primary feedforward pathway consisting of photoreceptors to bipolar cells to retinal ganglion cells (RGCs), the axons of which form the optic nerve [1]. RGCs can be divided into 30-50 functionally distinct cell types (depending on species) with each cell responsive to a localized area of visual space (its receptive field (RF)), and the collection of RFs for each type tiling space to form a "mosaic" [2, 3, 4, 5]. Each mosaic represents the extraction of a specific type of information across the visual scene by a particular cell type, with different mosaics responding to light increments or decrements (ON and OFF cells), high or low spatial and temporal frequencies, color, motion, and a host of other features. While much is known about the response properties of each RGC type, the computational principles that drive RGC diversity remain unclear.

Efficient coding theory has proven one of the most powerful ideas for understanding retinal organization and sensory processing. Efficient coding posits that the nervous system attempts to encode sensory input by minimizing redundancy subject to biological costs and constraints [6, 7]. As more commonly formulated, it seeks to maximize the mutual information between sensory data and neural representations, with the most common cost in the retinal case being the energetic cost of action potentials transmitted by the RGCs.
Despite its simplicity, this principle has proven useful, predicting the center-surround structure of RFs [8], the frequency response profile of contrast sensitivity [9], the structure of retinal mosaics [10, 11], the role of nonlinear rectification [12], different spatiotemporal kernels [13], and inter-mosaic arrangements [14, 15].

While previous studies have largely focused on either spatial or temporal aspects of efficient coding, we optimize an efficient coding model of retinal processing in both space and time to natural videos [16]. We systematically varied the number of cells available to the system and found that larger numbers of available cells led to more cell types. Each of these functionally distinct types formed its own mosaic of RFs that tiled space. We show that when and how new cell types emerge and form mosaics is the result of tradeoffs between power constraints and the benefits of specialized encoding that shift as more cells are available to the system. We show that cell types begin by capturing low-frequency temporal information and capture increasingly higher-frequency temporal information over larger spatial RFs as new cell types form. Finally, we investigated the relative arrangement of these mosaics and their dependence on noise. We show that mosaic pairs can be aligned or anti-aligned depending on input and output noise in the system [14]. Together, these results demonstrate for the first time how efficient coding principles can explain, even predict, the formation of cell types and which types are most informative when channel capacity is limited.

2 Model

The model we develop is an extension of [14], a retinal model for efficient coding of natural images, which is based on a mutual information maximization objective proposed in [10]. The retinal model takes $D$-pixel patches of natural images $x \in \mathbb{R}^D$ corrupted by input noise $n_{\mathrm{in}} \sim \mathcal{N}(0, C_{n_{\mathrm{in}}})$, filters these with unit-norm linear kernels $\{w_j \in \mathbb{R}^D \mid \|w_j\| = 1\}_{j=1,\dots,J}$ representing $J$ RGCs, and then feeds the resulting signals $y_j = w_j^\top (x + n_{\mathrm{in}})$ through softplus nonlinearities $\eta(y) = \beta \log\left(1 + e^{y/\beta}\right)$ (we used $\beta = 0.25$) with gain $\gamma_j$ and threshold $\theta_j$. Finally, these signals are further corrupted by additive output noise $n_{\mathrm{out}} \sim \mathcal{N}(0, C_{n_{\mathrm{out}}})$ to produce firing rates $r_j$:

$$r_j = \gamma_j \cdot \eta(y_j - \theta_j) + n_{\mathrm{out},j}. \tag{1}$$

The model learns parameters $w_j$, $\gamma_j$, and $\theta_j$ to maximize the mutual information between the inputs $x$ and the outputs $r$, under a mean firing rate constraint [10, 14]:

$$\text{maximize} \quad \log \frac{\det\left(G W^\top (C_x + C_{n_{\mathrm{in}}}) W G + C_{n_{\mathrm{out}}}\right)}{\det\left(G W^\top C_{n_{\mathrm{in}}} W G + C_{n_{\mathrm{out}}}\right)} \tag{2}$$

$$\text{subject to} \quad \mathbb{E}[r_j] = 1. \tag{3}$$

Here $C_x$ is the covariance matrix of the input distribution, $W \in \mathbb{R}^{D \times J}$ contains the filters $w_j$ as its columns, the gain matrix $G = \mathrm{diag}\left(\gamma_j \left.\frac{d\eta}{dy}\right|_{y_j - \theta_j}\right)$, and the noise covariances are $C_{n_{\mathrm{in}}} = \sigma_{\mathrm{in}}^2 I_{D \times D}$ and $C_{n_{\mathrm{out}}} = \sigma_{\mathrm{out}}^2 I_{J \times J}$. This objective is equivalent to the formulation in [10], which assumes normally distributed inputs and locally linear responses in order to approximate the mutual information in a closed form.

Here, we extend this model to time-varying inputs $x(t) \in \mathbb{R}^D$ representing natural videos (Figure 1A-B), which are convolved with linear spatiotemporal kernels $\{w_j(t)\}_{j=1,\dots,J}$:

$$y_j(t) = w_j^\top(t) * x(t) = \int_{-\infty}^{\infty} w_j(\tau)^\top x(t - \tau)\, d\tau. \tag{4}$$
We additionally assume that the convolutional kernels are separable in time and space:

$$w_j(t) = \phi_j(t)\, w_j, \qquad \|w_j\| = 1, \qquad \phi_j(t) \in \mathbb{R}, \qquad \int_{-\infty}^{\infty} \phi_j(t)^2\, dt = 1, \tag{5}$$

and the temporal kernels are unit-norm impulse responses taking the following parametric form:

$$\phi_j(t) \propto \begin{cases} \alpha_j t^n e^{-t/\tau_j} - \alpha'_j t^n e^{-t/\tau'_j} & \text{if } t \ge \text{delay} \\ 0 & \text{otherwise,} \end{cases} \tag{6}$$

where $\alpha_j$, $\alpha'_j$, $\tau_j > 0$, $\tau'_j > 0$ are learnable parameters, and $n \in \mathbb{N}$ is fixed. Previous work assumed an unconstrained form for these filters, adding zero-padding before and after the model's image inputs to produce the characteristic shape of the temporal filters in primate midget and parasol cells [13], but this zero-padding represents a biologically implausible constraint, and the results fail to correctly reproduce the observed delay in retinal responses [17, 18, 19]. Rather, optimizing (2) with unconstrained temporal filters produces a filter bank uniformly tiling time (Supplementary Figure 4). By contrast, (6) is motivated by the arguments of [20], which showed that the optimal minimum-phase temporal filters of retinal bipolar cells, the inputs to the RGCs, take the form

$$\phi(t > 0) \propto e^{-t/\tau}\left[\sin \omega t - \omega t \cos \omega t\right] \approx e^{-t/\tau}\, \frac{(\omega t)^3}{3} \tag{7}$$

when $\omega\tau \ll 1$. Thus, we model RGC temporal filters as a linear combination of these forms. In practice, we take only two filters and use $n = 6$ rather than $n = 3$, since these have been shown to perform well in capturing observed retinal responses [19]. The results produced by more filters or different exponents are qualitatively unchanged (Supplementary Figure 7). For training on video data, we use discrete temporal filters and convolutions with $\sum_{t=0}^{T-1} \phi_j[t]^2 = 1$. Finally, while unconstrained spatial kernels $w_j$ converge to characteristic center-surround shapes under optimization of (2) (Figure 1C), for computational efficiency and stability, we parameterized these filters using a radially-symmetric difference of Gaussians

$$w_j(r) \propto e^{-a_j r^2} - c_j e^{-b_j r^2}, \qquad b_j > a_j > 0, \quad 0 < c_j < 1, \tag{8}$$

where $r$ measures the spatial distance to the center of the RF, and the center location and the shape parameters $a_j$, $b_j$, $c_j$ are potentially different for each RGC $j$. The result of optimizing (2) using these forms is a set of spatial and temporal kernels (Figure 1D-E) that replicate experimentally observed shapes and spatial RF tiling.

3 Efficient coding as a function of channel capacity: linear theory

Before presenting results from our numerical experiments optimizing the model (2, 3), we begin by deriving intuitions about its behavior by studying the case of linear filters analytically. That is, we assume a single gain $\gamma$ for all cells, no bias ($\theta = 0$), and a linear transfer function $\eta(y) = y$. As we will see, this linear analysis correctly predicts the same types of mosaic formation and filling observed in the full nonlinear model. Here, we sketch the main results, deferring full details to Appendix A.

3.1 Linear model in the infinite retina limit

For analytical simplicity, we begin by assuming an infinite retina on which RFs form mosaics described by a regular lattice.
Under these conditions, we can write the log determinants in (2) as integrals and optimize over the unnormalized filter $v \equiv \gamma w$ subject to a power constraint:

$$\max_v \int_{G_0} \frac{d^2 k}{(2\pi)^2} \left[ \log \frac{\sum_{g \in G} |v(k+g)|^2 \left(C_x(k+g) + \sigma_{\mathrm{in}}^2\right) + \sigma_{\mathrm{out}}^2}{\sum_{g \in G} |v(k+g)|^2 \sigma_{\mathrm{in}}^2 + \sigma_{\mathrm{out}}^2} - \nu \sum_{g \in G} |v(k+g)|^2 \left(C_x(k+g) + \sigma_{\mathrm{in}}^2\right) \right], \quad (9)$$

where $C_x(k)$ is the Fourier transform of the stationary image covariance $C_x(z - z')$, the integral is over all frequencies $k \in G_0$ unique up to aliasing caused by the spatial regularity of the mosaic, and the sums over $g$ account for aliased frequencies (Appendix A.1). In [8], the range $[-\pi, \pi]$ is used for the integral, corresponding to a one-dimensional lattice and units of mosaic spacing $\Delta z = 1$. Now, solving the optimization in (9) results in a spatial kernel with the spectral form (Appendix A.2)

$$|v(k)|^2 = \frac{\sigma_{\mathrm{out}}^2}{2\sigma_{\mathrm{in}}^2} \left[ \frac{C_x(k)}{C_x(k) + \sigma_{\mathrm{in}}^2} \left( \sqrt{1 + \frac{\sigma_{\mathrm{in}}^2}{\sigma_{\mathrm{out}}^2}\, \frac{4}{\nu} \left(\frac{1}{C_x(k)} + 1\right)} - 1 \right) \right]_+, \quad k \in G_0, \quad (10)$$

where $k = \|k\|$ and $\nu$ is chosen to enforce the constraint on total power. This is exactly the solution found in [8], linking it (in the linear case) to the model of [10, 11]. Note, however, that (10) is only nonzero within $G_0$, since RF spacing sets an upper limit on the passband of the resulting filters. The generalization of this formulation to the spacetime case is straightforward. Given a spacetime stationary image spectrum $C_x(z - z', t - t')$ and radially-symmetric, causal filter $w(z, t)$, the same infinite retina limit as above requires calculating determinants across both neurons $i, j$ and time points $t, t'$ of matrices with entries of the form

$$F_{ijtt'} = \int dz\, dz'\, d\tau\, d\tau'\, \gamma^2 w(z_i - z, t - \tau)\, C_x(z - z', \tau - \tau')\, w(z_j - z', t' - \tau') = \int \frac{d^2 k}{(2\pi)^2} \frac{d\omega}{2\pi}\, e^{i k \cdot (z_i - z_j) + i \omega (t - t')}\, |v(k, \omega)|^2 C_x(k, \omega). \quad (11)$$

Again, such matrices can be diagonalized in the Fourier basis, with the result that the optimal spacetime filter once again takes the form (10) with the substitutions $v(k) \to v(k, \omega)$, $C_x(k) \to C_x(k, \omega)$ (Appendix A.3). Figure 2A depicts the frequency response of this filter in $d = 1$ spatial dimensions, with corresponding spatial and temporal sections plotted in Figures 2B-C.

3.2 Multiple cell types and the effects of channel capacity

Up to this point, we have only considered a single type of filter $v(k, \omega)$, corresponding to a single cell type. However, multiple cell types might increase the coding efficiency of the entire retina if they specialize, devoting their limited energy budget to non-overlapping regions of frequency space. Indeed, optimal encoding in the multi-cell-type case selects filters $v$ and $v'$ that satisfy $v^*(k, \omega)\, v'(k, \omega) = 0$, corresponding to encoding independent visual information (Appendix A.4). This result naturally raises two questions: First, how many filter types are optimal? And second, how should a given budget of $J$ RGCs be allocated across multiple filter types? As detailed in Appendix A.5, we can proceed by analyzing the case of a finite retina in the Fourier domain, approximating the information encoded by a mosaic of $J$ RGCs with spatial filters given by (10) and nonoverlapping bandpass temporal filters that divide the available spectrum (e.g., Figure 2B, C). Following [21], we approximate the correlation spectrum of images by the factorized power law $C_x(k, \omega) \simeq A k^{-\alpha} \omega^{-2}$ with $\alpha \approx 1.3$ and find that in this case, the optimal filter response exhibits two regimes as a function of spatial frequency (Supplementary Figure 1A): First, below $k_f = \left(A / \sigma_{\mathrm{in}}^2 \omega^2\right)^{1/\alpha}$, the optimal filter is separable and log-linear, and the filtered image spectrum is white:

$$|v(k, \omega)|^2 \approx \frac{k^\alpha \omega^2}{A \nu}, \quad |v(k, \omega)|^2 C_x(k, \omega) \approx \nu^{-1},$$
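Although (10) gives a closed form, the bandpass shape of the optimal filter can also be recovered numerically by maximizing the integrand of (9) pointwise on a frequency grid. The following sketch keeps only the unaliased $g = 0$ term and uses illustrative noise and power values of our own choosing.

```python
import numpy as np
from scipy.optimize import minimize

def neg_objective(v2, C_x, sig_in2, sig_out2, nu):
    """Negative of the integrand of Eq. (9), keeping only the unaliased g = 0 term."""
    info = np.log((v2 * (C_x + sig_in2) + sig_out2) / (v2 * sig_in2 + sig_out2))
    return -(info - nu * v2 * (C_x + sig_in2)).sum()

k = np.linspace(0.1, 10.0, 200)
C_x = k ** -1.3                              # power-law image spectrum, alpha = 1.3
res = minimize(neg_objective, x0=np.ones_like(k), args=(C_x, 0.1, 0.1, 0.05),
               bounds=[(0.0, None)] * k.size)
v2_opt = res.x                               # bandpass profile matching the form of Eq. (10)
```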
where $\nu$, the Lagrange multiplier in (9) that enforces the power constraint, scales as $1/P$ for small values of maximal power $P$ and $1/P^2$ for larger values (Supplementary Figure 1D). Second, for $k \gtrsim k_f$, the filter response decreases as $k^{-\alpha/2}$ until reaching its upper cutoff at $k_c = k_f / (\nu \sigma_{\mathrm{out}}^2)^{1/\alpha}$, with the filtered image spectrum falling off at the same rate (Supplementary Figure 1B). But what do these regimes have to do with mosaic formation? The link between the two is given by the fact that, for a finite retina with regularly spaced RFs, adding RGCs decreases the distance between RF centers and so increases the resolving power of the mosaic. That is, the maximal value of $k$ grows roughly as $k \sim \sqrt{J}$ in $d = 2$, such that larger numbers of RGCs capture more information at increasingly higher spatial frequencies (Supplementary Figure 1A). However, while information gain is roughly uniform in the whitening regime, it falls off sharply for $k \gtrsim k_f$ (Supplementary Figure 1C), suggesting the interpretation that the $k \lesssim k_f$ regime is a "mosaic filling" phase in which information accumulates almost linearly as RFs capture new locations in visual space, while the $k \gtrsim k_f$ regime constitutes a "compression phase" in which information gains are slower as RFs shrink to accommodate higher numbers (Figure 2D). Indeed, one can derive the scaling of total information as a function of $J$:

$$I \simeq \begin{cases} J \log\left(1 + \frac{P_0}{\sigma_{\mathrm{out}}^2}\right) - \frac{2 P_0}{(\alpha + 2)\, \sigma_{\mathrm{out}}^2} \left(\frac{J}{J_f}\right)^{\alpha/2} & k \lesssim k_f \\ (J - J_f) \left(\frac{J}{J_f}\right)^{-\alpha/2} \frac{2}{2 - \alpha} & k \gtrsim k_f, \end{cases} \quad (12)$$

where $P_0$ is the power budget per RGC and $J_f$ is the RGC number corresponding to $k = k_f$. Thus, mosaic filling exhibits diminishing marginal returns (Figure 2E), such that new cell types are favored when the marginal gain for growing mosaics with lower temporal frequency drops below the gain from initiating a new cell type specialized for higher temporal frequencies. Moreover, the difference between these gain curves implies that new RFs are not added to all mosaics at equal rates, but in proportion to their marginal information (Figure 2F). As we demonstrate in the next section, these features of cell type and mosaic formation continue to hold in the full nonlinear model in simulation.

[Figure 3: Statistics of natural videos affect learned RFs. (A) Histogram of spectral attenuation (fraction of power < 3 Hz) for each video clip from the Chicago Motion Database. A significant portion of the dataset exhibits predominantly low-frequency spectral content in time. Videos with spectral attenuation above 0.9, 0.8, and 0.7 are denoted (4), (3), and (2), respectively, while (1) refers to all videos in the dataset. (B) Spatial (top) and temporal (bottom) spectral density of the four subsets. (C) Temporal filters learned by training on each of the four subsets. Training on slow videos produced only smoothing kernels, while training on all videos produced a variety of temporal filters.]

4 Experiments

We analyzed the characteristics of the optimal spatiotemporal RFs obtained from the model (2, 3) trained on videos from the Chicago Motion Database [22]. Model parameters for spatial kernels, temporal kernels, and the nonlinearities were jointly optimized using Adam [23] to maximize (2) subject to the mean firing rate constraint (3) using the augmented Lagrangian method with the quadratic penalty $\rho = 1$ [24]. Further technical details of model training are in Appendix E.
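The spectral attenuation statistic used in Figure 3A (the fraction of temporal power below 3 Hz) can be computed per clip as in the following sketch. The naming is ours; a grayscale clip stored as a (T, H, W) array with frame rate fps is assumed.

```python
import numpy as np

def spectral_attenuation(clip, fps, f_cut=3.0):
    """Fraction of temporal spectral power below f_cut Hz for a (T, H, W) grayscale clip."""
    clip = clip - clip.mean(axis=0)                    # remove the static (DC) frame
    power = np.abs(np.fft.rfft(clip, axis=0)) ** 2     # temporal spectrum at every pixel
    freqs = np.fft.rfftfreq(clip.shape[0], d=1.0 / fps)
    return power[freqs < f_cut].sum() / power.sum()

# clips with spectral_attenuation(...) > 0.7, 0.8, 0.9 form subsets (2), (3), (4) of Figure 3
```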
All model code and reproducible examples are available at https://github.com/pearsonlab/efficientcoding. As previously noted, the power spectral density of natural videos can be well approximated by a product of spatial and temporal power-law densities, implying an anticorrelation between high spatial and temporal frequency content [21]. Supplementary Figure 5 shows the data spectrum of the videos in our experiments is also well-approximated by separable power-law fits. To examine the effect of these statistics on the learned RFs, we divided the dataset into four progressively smaller subsets by the proportion of their temporal spectral content below 3 Hz, their spectral attenuation. Using values of 70%, 80%, and 90% then yielded a progression of datasets ranging from most videos to only the slowest videos (Figure 3A, B). Indeed, when the model was trained on these progressively slower data subsets, it produced only temporal smoothing filters, whereas the same model trained on all videos produced a variety of "fast" temporal filter types (Figure 3C). We also note that these experiments used unconstrained spatial kernels in place of (8), yet still converged on spatial RFs with typical center-surround structure as in [10, 15, 14]. Thus, these preliminary experiments suggest that the optimal encoding strategy, in particular the number of distinct cell types found, depends critically on the statistics of the video distribution to be encoded.

4.1 Mosaics fill in order of temporal frequency

As the number of RGCs available to the model increased, we observed the formation of new cell types with new spectral properties (Figure 4). We characterized the learned filters for each RGC in terms of their spectral centroid, defined as the center of mass of the Fourier (spatial) or Discrete Cosine (temporal) transform. Despite the fact that each model RGC was given its own spatial and temporal filter parameters (8, 6), the learned filter shapes strongly clustered, forming mosaics with nearly uniform response properties (Figure 4A-C). Critically, the emergence of new cell types shifted the spectral responses of previously established ones, with new cell types compressing the spectral windows of one another as they further specialized. Moreover, mosaic density increased with increasing RGC number, shifting the centroids of early mosaics toward increasingly higher spatial frequencies. This is also apparent in the forms of the typical learned filters and their power spectra: new filters selected for increasingly high-frequency content in the temporal domain (Figure 4D). We likewise analyzed the coverage factors of both individual mosaics and the entire collection, defined as the proportion of visual space covered by the learned RFs. More specifically, we defined the spatial radius of an RF as the distance from its center at which intensity dropped to 20% of its peak and used this area to compute a coverage factor, the ratio of total RF area to total visual space ($\pi/4$ of the square's area due to circular masking). Since coverage factors depend not simply on RGC number but on RF density, they provide an alternative measure of the effective number of distinct cell types learned by the model. As Figure 4E shows, coverage increases nearly linearly with RGC number, while coverage for newly formed mosaics increases linearly before leveling off. In other words, new cell types initially increase coverage of visual space by adding new RFs, but marginal gains in coverage diminish as density increases.
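Both summary statistics used in this section can be computed directly from the learned kernels. Here is a sketch under our own conventions: the spectral centroid as the power-weighted mean spatial frequency, and the RF radius taken at 20% of peak intensity.

```python
import numpy as np

def spectral_centroid(w_spatial):
    """Center of mass of the 2D spatial power spectrum of a learned RF."""
    P = np.abs(np.fft.fftshift(np.fft.fft2(w_spatial))) ** 2
    ny, nx = P.shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing="ij")
    return (np.hypot(kx, ky) * P).sum() / P.sum()

def coverage_factor(radial_profiles, r_grid, visual_area):
    """Total RF area over visual area, with RF radius at 20% of peak intensity."""
    total = 0.0
    for I in radial_profiles:                     # I: RF intensity sampled on r_grid
        below = np.flatnonzero(I < 0.2 * I.max())
        radius = r_grid[below[0]] if below.size else r_grid[-1]
        total += np.pi * radius**2
    return total / visual_area
```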
In all cases, the model dynamically adjusts the number of learned cell types and the proportion of RGCs assigned to them as channel capacity increases.

4.2 Phase changes in mosaic arrangement

In addition to retinal organization at the level of mosaics, a pair of recent papers reported both experimental [15] and theoretical [14] evidence for an additional degree of freedom in optimizing information encoding: the relative arrangement of ON and OFF mosaics. Jun et al. studied this for the case of natural images in [14], demonstrating that the optimal configuration of ON and OFF mosaics is alignment (RFs co-located) at low output noise levels and anti-alignment (OFF RFs between ON RFs and vice-versa) under higher levels of retinal output noise. Moreover, this transition is abrupt, constituting a phase change in optimal mosaic arrangement. We thus asked whether learned mosaics exhibited a similar phase transition for natural video encoding. To do so, following [14], we repeatedly optimized a small model ($J = 14$: 7 ON, 7 OFF) for multiple learned filter types while systematically varying levels of input and output noise. In each case, one ON-OFF pair was fixed at the center of the space, while the locations of the others were allowed to vary. We used RF size $D = 8^2$ pixels for Slow and $D = 12^2$ for FastA and FastB cell types to allow the size of spatial kernels to be similar to those of the previous experiments, and we imposed the additional constraint that the shape parameters $a_j$, $b_j$, and $c_j$ in (8) be shared across RGCs. Under these conditions, the six free pairs of RFs converged to either aligned (overlapping) or anti-aligned (alternating) positions along the edges of the circular visual space, allowing for a straightforward examination of the effect of input and output noises on mosaic arrangement. Figure 5A-C shows that the phase transition boundaries closely follow the pattern observed in [14]: increasing output noise shifts the optimal configuration from alignment to anti-alignment. Moreover, for each of the tested filters, increasing input noise discourages this transition. This effect also follows from the analysis presented in [14], since higher input noise increases coactivation of nearby pairs of RFs, requiring larger thresholds to render ON-OFF pairs approximately independent (Appendix B).

5 Discussion

Related work: As reviewed in the introduction, this study builds on a long line of work using efficient coding principles to understand retinal processing. In addition, it is related to work examining encoding of natural videos [25, 22, 16] and prediction in space-time. The most closely related work to this one is that of [13], which also considered efficient coding of natural videos and considered the tradeoffs involved in multiple cell types. Our treatment here differs from that work in several key ways: First, while [13] was concerned with demonstrating that multiple cell types could prove beneficial for encoding (in a framework focused on reconstruction error), that study predetermined the number of cell types and mosaic structure, only optimizing their relative spacing. By contrast, this work is focused on how the number of cell types is dynamically determined, and how the resulting mosaics arrange themselves, as a function of the number of units available for encoding (i.e., the channel capacity).
Specifically, we follow previous efficient coding models [8, 9, 10, 11] in maximizing mutual information and do not assume an a priori mosaic arrangement, a particular cell spacing, or a particular number of cell types; all of these emerge via optimization in our formulation. Second, while the computational model of [13] optimized strides for a pair of rectangular arrays of RGCs, we individually optimize RF locations and shapes, allowing us to study changes in optimal RF size and density as new, partial mosaics begin to form. Third, while [13] used zero-padding of natural videos to bias learned temporal filters toward those of observed RGCs, we link the form of temporal RFs to biophysical limits on the filtering properties of bipolar cells, producing temporal filters with the delay properties observed in real data. Finally, while [13] only considered a single noise source in their model, we consider noise in both photoreceptor responses (input noise) and RGC responses (output noise), allowing us to investigate transitions in the optimal relative arrangement of mosaics [14, 15]. We have shown that efficient coding of natural videos produces multiple cell types with complementary RF properties. In addition, we have shown for the first time that the number and characteristics of these cell types depend crucially on the channel capacity: the number of available RGCs. As new simulated RGCs become available, they are initially concentrated into mosaics with more densely packed RFs, improving the spatial frequency bandwidth over which information is encoded. However, as this strategy produces diminishing returns, new cell types encoding higher-frequency temporal features emerge in the optimization process. These new cell types capture information over distinct spatiotemporal frequency bands, and their formation leads to upward shifts in the spatial frequency responses of previously formed cell types. Moreover, pairs of ON and OFF mosaics continue to exhibit the phase transition between alignment and anti-alignment revealed in a purely spatial optimization of efficient coding [14], suggesting that mosaic coordination is a general strategy for increasing coding efficiency. Furthermore, despite the assumptions of this model (linear filtering, separable filters, firing rates instead of spikes), our results are consistent with observed retinal data. First, RGCs with small spatial RFs exhibit more prolonged temporal integration: they are also more low-pass in their temporal frequency tuning. Second, there is greater variability in the size and shape of spatial RFs at a given retinal location, but temporal RFs exhibit remarkably little variability in our simulations and in data [19]. Thus, these results further testify to the power of efficient coding principles in providing a conceptual framework for understanding the nervous system.

Acknowledgments and Disclosure of Funding

This work was supported by NIH/National Eye Institute Grant R01 EY031396.
1. What is the focus and contribution of the paper regarding retinal mosaics and efficient coding frameworks? 2. What are the strengths of the proposed approach, particularly in extending previous models to spatiotemporal signals and studying the relationship between natural video statistics and optimal neural codes? 3. Do you have any concerns or suggestions regarding the clarity and understandability of certain sections of the paper, especially for non-theoreticians? 4. What are the limitations of the paper's focus on the efficient coding objective, and could the authors discuss other potential goals of the visual system? 5. Are there any specific questions or suggestions you have regarding the figures and their legends, such as Fig. 2A and Fig. 5?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors study how retinal mosaics emerge from an efficient coding framework. They extend the framework to spatial-temporal kernels and video inputs, which is a crucial extension for understanding how natural environmental statistics influence coding in neural systems. Also, the approach is highly innovative as it allows deriving multiple optimized mosaics. Strengths And Weaknesses Strengths: Extends previous models to spatiotemporal signals Studies the relationship between natural video statistics and optimal neural codes Interesting approach to derive multiple mosaics with interesting results Weaknesses: Sometimes a bit jargony/dense - the authors should check how understandable each section is and make sure they can be followed by non-theoreticians Limited by focus on the efficient coding objective - the authors could discuss whether reconstruction of the visual environment is really the ultimate goal of the visual system Questions Lines 124: The Bravais lattice is not sufficiently explained for non-physicists. The section up to Line 129 could be better explained. What are principal and reciprocal vectors? Fig 2A: The colored points should be defined in the legend. Fig. 5 is very dense and hard to understand - maybe more intuitive labels would help? Limitations The authors should discuss the limitation of the efficient coding objective - is that really all that the organism cares about?
NIPS
Title DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning

Abstract Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in biomedical and social sciences, and also for evaluating machine learning fairness. To reduce bias for estimating Natural Direct and Indirect Effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinite-dimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results that our DeepMed method (1) can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low-dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings.

*Co-corresponding authors, alphabetical order.

1 Introduction

Tremendous progress has been made in this decade on deploying deep neural networks (DNNs) in real-world problems (Krizhevsky et al., 2012; Wolf et al., 2019; Jumper et al., 2021; Brown et al., 2022). Causal inference is no exception. In semiparametric causal inference, a series of seminal works (Chen et al., 2020; Chernozhukov et al., 2020; Farrell et al., 2021) initiated the investigation of statistical properties of causal effect estimators when the nuisance functions (the outcome regressions and propensity scores) are estimated by DNNs. However, there are a few limitations in the current literature that need to be addressed before the theoretical results can be used to guide practice:

(1) Most recent works mainly focus on the total effect (Chen et al., 2020; Farrell et al., 2021). In many settings, however, more intricate causal parameters are often of greater interest. In biomedical and social sciences, one is often interested in "mediation analysis" to decompose the total effect into direct and indirect effects to unpack the underlying black-box causal mechanism (Baron and Kenny, 1986). More recently, mediation analysis has also percolated into machine learning fairness. For instance, in the context of predicting the recidivism risk, Nabi and Shpitser (2018) argued that, for a "fair" algorithm, sensitive features such as race should have no direct effect on the predicted recidivism risk. If such direct effects can be accurately estimated, one can detect the potential unfairness of a machine learning algorithm. We will revisit such applications in Section 5 and Appendix G.

(2) Statistical properties of DNN-based causal estimators in recent works mostly follow from several (recent) results on the convergence rates of DNN-based nonparametric regression estimators (Suzuki, 2019; Schmidt-Hieber, 2020; Tsuji and Suzuki, 2021), with the limitation of relying on sparse DNN architectures. The theoretical properties are in turn evaluated by relatively simple synthetic experiments not designed to generate nearly infinite-dimensional nuisance functions, a setting considered by almost all the above related works.
The above limitations raise the tantalizing question of whether the available statistical guarantees for DNN-based causal inference have practical relevance. In this work, we plan to partially fill these gaps by developing a new method called DeepMed for semiparametric mediation analysis with DNNs. We focus on the Natural Direct/Indirect Effects (NDE/NIE) (Robins and Greenland, 1992; Pearl, 2001) (defined in Section 2.1), but our results can also be applied to more general settings; see Remark 2. The DeepMed estimators leverage the "multiply-robust" property of the efficient influence function (EIF) of NDE/NIE (Tchetgen Tchetgen and Shpitser, 2012; Farbmacher et al., 2022) (see Proposition 1 in Section 2.2), together with the flexibility and superior predictive power of DNNs (see Section 3.1 and Algorithm 1). In particular, we also make the following novel contributions to deepen our understanding of DNN-based semiparametric causal inference:

• On the theoretical side, we obtain new results that our DeepMed method can achieve the semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and can adapt to certain low-dimensional structures of the nuisance functions (see Section 3.2), thus significantly advancing the existing literature on DNN-based semiparametric causal inference. Non-sparse DNN architectures are more commonly employed in practice (Farrell et al., 2021), and the low-dimensional structures of nuisance functions can help avoid the curse-of-dimensionality. These two points, taken together, significantly advance our understanding of the statistical guarantee of DNN-based causal inference.

• More importantly, on the empirical side, in Section 4, we designed sophisticated synthetic experiments to simulate nearly infinite-dimensional functions, which are much more complex than those in previous related works (Chen et al., 2020; Farrell et al., 2021; Adcock and Dexter, 2021). We emphasize that these nontrivial experiments could be of independent interest to the theory of deep learning beyond causal inference, to further expose the gap between deep learning theory and practice (Adcock and Dexter, 2021; Gottschling et al., 2020); see Remark 9 for an extended discussion. As a proof of concept, in Section 5 and Appendix G, we also apply DeepMed to re-analyze two real-world datasets on algorithmic fairness and reach similar conclusions to related works.

• Finally, a user-friendly R package can be found at https://github.com/siqixu/DeepMed. Making such resources available helps enhance reproducibility, a highly recognized problem in all scientific disciplines, including (causal) machine learning (Pineau et al., 2021; Kaddour et al., 2022).

2 Definition, identification, and estimation of NDE and NIE

2.1 Definition of NDE and NIE

Throughout this paper, we denote $Y$ as the primary outcome of interest, $D$ as a binary treatment variable, $M$ as the mediator on the causal pathway from $D$ to $Y$, and $X \in [0, 1]^p$ (or more generally, compactly supported in $\mathbb{R}^p$) as baseline covariates including all potential confounders. We denote the observed data vector as $O \equiv (X, D, M, Y)$. Let $M(d)$ denote the potential outcome for the mediator when setting $D = d$ and $Y(d, m)$ be the potential outcome of $Y$ under $D = d$ and $M = m$, where $d \in \{0, 1\}$ and $m$ is in the support $\mathcal{M}$ of $M$.
We define the average total (treatment) effect as $\tau_{\mathrm{tot}} := \mathbb{E}[Y(1, M(1)) - Y(0, M(0))]$, the average NDE of the treatment $D$ on the outcome $Y$ when the mediator takes the natural potential outcome when $D = d$ as $\tau_{\mathrm{NDE}}(d) := \mathbb{E}[Y(1, M(d)) - Y(0, M(d))]$, and the average NIE of the treatment $D$ on the outcome $Y$ via the mediator $M$ as $\tau_{\mathrm{NIE}}(d) := \mathbb{E}[Y(d, M(1)) - Y(d, M(0))]$. We have the trivial decomposition $\tau_{\mathrm{tot}} \equiv \tau_{\mathrm{NDE}}(d) + \tau_{\mathrm{NIE}}(d')$ for $d \neq d'$. In causal mediation analysis, the parameters of interest are $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$.

2.2 Semiparametric multiply-robust estimators of NDE/NIE

Estimating $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$ can be reduced to estimating $\psi(d, d') := \mathbb{E}[Y(d, M(d'))]$ for $d, d' \in \{0, 1\}$. We make the following standard identification assumptions:

i. Consistency: if $D = d$, then $M = M(d)$ for all $d \in \{0, 1\}$; while if $D = d$ and $M = m$, then $Y = Y(d, m)$ for all $d \in \{0, 1\}$ and all $m$ in the support of $M$.

ii. Ignorability: $Y(d, m) \perp D \mid X$, $Y(d, m) \perp M \mid X, D$, $M(d) \perp D \mid X$, and $Y(d, m) \perp M(d') \mid X$, almost surely for all $d, d' \in \{0, 1\}$ and all $m \in \mathcal{M}$. The first three conditions are, respectively, no unmeasured treatment-outcome, mediator-outcome and treatment-mediator confounding, whereas the fourth condition is often referred to as the "cross-world" condition. We provide more detailed comments on these four conditions in Appendix A.

iii. Positivity: The propensity score $a(d|X) \equiv \Pr(D = d \mid X) \in (c, C)$ for some constants $0 < c \le C < 1$, almost surely for all $d \in \{0, 1\}$; $f(m|X, d)$, the conditional density (mass) function of $M = m$ (when $M$ is discrete) given $X$ and $D = d$, is strictly bounded between $[\underline{\rho}, \bar{\rho}]$ for some constants $0 < \underline{\rho} \le \bar{\rho} < \infty$ almost surely for all $m$ in $\mathcal{M}$ and all $d \in \{0, 1\}$.

Under the above assumptions, the causal parameter $\psi(d, d')$ for $d, d' \in \{0, 1\}$ can be identified as any of the following three observed-data functionals:

$$\psi(d, d') \equiv \mathbb{E}\left[\frac{\mathbb{1}\{D = d\}\, f(M|X, d')\, Y}{a(d|X)\, f(M|X, d)}\right] \equiv \mathbb{E}\left[\frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X, d, M)\right] \equiv \int \mu(x, d, m)\, f(m|x, d')\, p(x)\, dm\, dx, \quad (1)$$

where $\mathbb{1}\{\cdot\}$ denotes the indicator function, $p(x)$ denotes the marginal density of $X$, and $\mu(x, d, m) := \mathbb{E}[Y \mid X = x, D = d, M = m]$ is the outcome regression model, for which we also make the following standard boundedness assumption:

iv. $\mu(x, d, m)$ is also strictly bounded between $[-R, R]$ for some constant $R > 0$.

Following the convention in the semiparametric causal inference literature, we call $a, f, \mu$ "nuisance functions". Tchetgen Tchetgen and Shpitser (2012) derived the EIF of $\psi(d, d')$: $\mathrm{EIF}_{d,d'} \equiv \phi_{d,d'}(O) - \psi(d, d')$, where

$$\phi_{d,d'}(O) = \frac{\mathbb{1}\{D = d\} \cdot f(M|X, d')}{a(d|X) \cdot f(M|X, d)}\, (Y - \mu(X, d, M)) + \left(1 - \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\right) \int_{m \in \mathcal{M}} \mu(X, d, m)\, f(m|X, d')\, dm + \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X, d, M). \quad (2)$$

The nuisance functions $\mu(x, d, m)$, $a(d|x)$ and $f(m|x, d)$ appearing in $\phi_{d,d'}(o)$ are unknown and generally high-dimensional. But with a sample $\mathcal{D} \equiv \{O_j\}_{j=1}^N$ of the observed data, based on $\phi_{d,d'}(o)$, one can construct the following generic sample-splitting multiply-robust estimator of $\psi(d, d')$:

$$\tilde{\psi}(d, d') = \frac{1}{n} \sum_{i \in \mathcal{D}_n} \tilde{\phi}_{d,d'}(O_i), \quad (3)$$

where $\mathcal{D}_n \equiv \{O_i\}_{i=1}^n$ is a subset of all $N$ data, and $\tilde{\phi}_{d,d'}(o)$ replaces the unknown nuisance functions $a, f, \mu$ in $\phi_{d,d'}(o)$ by some generic estimators $\tilde{a}, \tilde{f}, \tilde{\mu}$ computed using the remaining $N - n$ nuisance sample data, denoted as $\mathcal{D}_\nu$. Cross-fitting is then needed to recover the information lost due to sample splitting; see Algorithm 1. It is clear from (2) that $\tilde{\psi}(d, d')$ is a consistent estimator of $\psi(d, d')$ as long as any two of $\tilde{a}, \tilde{f}, \tilde{\mu}$ are consistent estimators of the corresponding true nuisance functions, hence the name "multiply-robust". Throughout this paper, we take $n \asymp N - n$ and assume:
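For concreteness, the uncentered EIF $\phi_{d,d'}$ in (2) maps directly to code once the nuisance estimates have been evaluated at the observations; below is a minimal NumPy sketch in which the array names are ours.

```python
import numpy as np

def phi_dd(Y, ind_d, ind_dp, a_d, a_dp, f_d, f_dp, mu_dM, mu_int):
    """Uncentered EIF phi_{d,d'}(O) of Eq. (2), evaluated on arrays of n observations.

    ind_d, ind_dp: indicators 1{D=d}, 1{D=d'}; a_d, a_dp: a(d|X), a(d'|X);
    f_d, f_dp: f(M|X,d), f(M|X,d'); mu_dM: mu(X,d,M);
    mu_int: integral of mu(X,d,m) f(m|X,d') dm, one value per observation.
    """
    term1 = ind_d * f_dp / (a_d * f_d) * (Y - mu_dM)   # weighted outcome residual
    term2 = (1.0 - ind_dp / a_dp) * mu_int             # augmentation term
    term3 = ind_dp / a_dp * mu_dM                      # inverse-probability term
    return term1 + term2 + term3

# phi_dd(...).mean() over the estimation sample gives the estimator of Eq. (3)
```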
v. Any nuisance function estimators are strictly bounded within the respective lower and upper bounds of $a, f, \mu$.

To further ease notation, we define: for any $d \in \{0, 1\}$, $r_{a,d} := \left(\int \Delta_{a,d}(x)^2\, dF(x)\right)^{1/2}$, $r_{f,d} := \left(\int \Delta_{f,d}(x, m)^2\, dF(x, m \mid d = 0)\right)^{1/2}$, and $r_{\mu,d} := \left(\int \Delta_{\mu,d}(x, m)^2\, dF(x, m \mid d = 0)\right)^{1/2}$, where $\Delta_{a,d}(x) := \tilde{a}(d|x) - a(d|x)$, $\Delta_{f,d}(x, m) := \tilde{f}(m|x, d) - f(m|x, d)$ and $\Delta_{\mu,d}(x, m) := \tilde{\mu}(x, d, m) - \mu(x, d, m)$ are point-wise estimation errors of the estimated nuisance functions. In defining the above $L_2$-estimation errors, we choose to take expectation with respect to (w.r.t.) the law $F(m, x \mid d = 0)$ only for convenience, with no loss of generality by Assumptions iii and v. To show the cross-fit version of $\tilde{\psi}(d, d')$ is semiparametric efficient for $\psi(d, d')$, we shall demonstrate under what conditions $\sqrt{n}(\tilde{\psi}(d, d') - \psi(d, d')) \stackrel{L}{\to} \mathcal{N}(0, \mathbb{E}[\mathrm{EIF}_{d,d'}^2])$ (Newey, 1990). The following proposition on the statistical properties of $\tilde{\psi}(d, d')$ is a key step towards this objective.

Proposition 1. Denote $\mathrm{Bias}(\tilde{\psi}(d, d')) := \mathbb{E}[\tilde{\psi}(d, d') - \psi(d, d') \mid \mathcal{D}_\nu]$ as the bias of $\tilde{\psi}(d, d')$ conditional on the nuisance sample $\mathcal{D}_\nu$. Under Assumptions i – v, $\mathrm{Bias}(\tilde{\psi}(d, d'))$ is of second-order:

$$|\mathrm{Bias}(\tilde{\psi}(d, d'))| \lesssim \max\left\{ r_{a,d} \cdot r_{f,d},\ \max_{d'' \in \{0,1\}} r_{f,d''} \cdot r_{\mu,d},\ r_{a,d} \cdot r_{\mu,d} \right\}. \quad (4)$$

Furthermore, if the RHS of (4) is $o(n^{-1/2})$, then

$$\sqrt{n}\left(\tilde{\psi}(d, d') - \psi(d, d')\right) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left(\phi_{d,d'}(O_i) - \psi(d, d')\right) + o_p(1) \stackrel{d}{\to} \mathcal{N}\left(0, \mathbb{E}\left[\mathrm{EIF}_{d,d'}^2\right]\right). \quad (5)$$

Although the above result is a direct consequence of the EIF $\phi_{d,d'}(O)$, we prove Proposition 1 in Appendix B for completeness.

Remark 2. The total effect $\tau_{\mathrm{tot}} = \psi(1, 1) - \psi(0, 0)$ can be viewed as a special case, for which $d = d'$ in $\psi(d, d')$. Then $\mathrm{EIF}_{d,d} \equiv \mathrm{EIF}_d$ corresponds to the nonparametric EIF of $\psi(d, d) \equiv \psi(d) \equiv \mathbb{E}[Y(d, M(d))]$: $\mathrm{EIF}_d = \phi_d(O) - \psi(d)$ with

$$\phi_d(O) = \frac{\mathbb{1}\{D = d\}}{a(d|X)}\, Y + \left(1 - \frac{\mathbb{1}\{D = d\}}{a(d|X)}\right) \mu(X, d),$$

where $\mu(x, d) := \mathbb{E}[Y \mid X = x, D = d]$. Hence all the theoretical results in this paper are applicable to total effect estimation. Our framework can also be applied to all the statistical functionals that satisfy a so-called "mixed-bias" property, characterized recently in Rotnitzky et al. (2021). This class includes the quadratic functional, which is important for uncertainty quantification in machine learning.

3 Estimation and inference of NDE/NIE using DeepMed

We now introduce DeepMed, a method for mediation analysis with nuisance functions estimated by DNNs. By leveraging the second-order bias property of the multiply-robust estimators of NDE/NIE (Proposition 1), we will derive statistical properties of DeepMed in this section. The nuisance function estimators by DNNs are denoted as $\hat{a}, \hat{f}, \hat{\mu}$.

3.1 Details on DeepMed

First, we introduce the fully-connected feed-forward neural network with the rectified linear units (ReLU) as the activation function for the hidden layer neurons (FNN-ReLU), which will be used to estimate the nuisance functions. Then, we will introduce an estimation procedure using $V$-fold cross-fitting with sample-splitting to avoid the Donsker-type empirical-process assumption on the nuisance functions, which, in general, is violated in high-dimensional setups. Finally, we provide the asymptotic statistical properties of the DNN-based estimators of $\tau_{\mathrm{tot}}$, $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$. We denote the ReLU activation function as $\sigma(u) := \max(u, 0)$ for any $u \in \mathbb{R}$. Given vectors $x, b$, we denote $\sigma_b(x) := \sigma(x - b)$, with $\sigma$ acting on the vector $x - b$ component-wise. Let $\mathcal{F}_{nn}$ denote the class of the FNN-ReLU functions

$$\mathcal{F}_{nn} := \left\{ f : \mathbb{R}^p \to \mathbb{R};\ f(x) = W^{(L)} \sigma_{b^{(L)}} \circ \cdots \circ W^{(1)} \sigma_{b^{(1)}}(x) \right\},$$

where $\circ$ is the composition operator,
$L$ is the number of layers (i.e. depth) of the network, and for $l = 1, \cdots, L$, $W^{(l)}$ is a $K_{l+1} \times K_l$-dimensional weight matrix with $K_l$ being the number of neurons in the $l$-th layer (i.e. width) of the network, with $K_1 = p$ and $K_{L+1} = 1$, and $b^{(l)}$ is a $K_l$-dimensional vector. To avoid notation clutter, we concatenate all the network parameters as $\Theta = (W^{(l)}, b^{(l)}, l = 1, \cdots, L)$ and simply take $K_2 = \cdots = K_L = K$. We also assume $\Theta$ to be bounded: $\|\Theta\|_\infty \le B$ for some universal constant $B > 0$. We may make the dependence on $L, K, B$ explicit by writing $\mathcal{F}_{nn}$ as $\mathcal{F}_{nn}(L, K, B)$. DeepMed estimates $\tau_{\mathrm{tot}}$, $\tau_{\mathrm{NDE}}(d)$, $\tau_{\mathrm{NIE}}(d)$ by (3), with the nuisance functions $a, f, \mu$ estimated using $\mathcal{F}_{nn}$ with the $V$-fold cross-fitting strategy, summarized in Algorithm 1 below; also see Farbmacher et al. (2022). DeepMed inputs the observed data $\mathcal{D} \equiv \{O_i\}_{i=1}^N$ and outputs the estimated total effect $\hat{\tau}_{\mathrm{tot}}$, NDE $\hat{\tau}_{\mathrm{NDE}}(d)$ and NIE $\hat{\tau}_{\mathrm{NIE}}(d)$, together with their variance estimators $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}(d)$ and $\hat{\sigma}^2_{\mathrm{NIE}}(d)$.

Algorithm 1 DeepMed with V-fold cross-fitting
1: Choose some integer $V$ (usually $V \in \{2, 3, \cdots, 10\}$)
2: Split the $N$ observations into $V$ subsamples $I_v \subset \{1, \cdots, N\} \equiv [N]$ with equal size $n = N/V$;
3: for $v = 1, \cdots, V$: do
4: Fit the nuisance functions by DNNs using observations in $[N] \setminus I_v$
5: Compute the nuisance functions in the subsample $I_v$ using the estimated DNNs in step 4
6: Obtain $\{\hat{\phi}_d(O_i), \hat{\phi}_{d,d'}(O_i)\}_{i \in I_v}$ for the subsample $I_v$ based on (2), respectively, with the nuisance functions replaced by their estimates in step 5
7: end for
8: Estimate average potential outcomes by $\hat{\psi}(d) := \frac{1}{N} \sum_{i=1}^N \hat{\phi}_d(O_i)$, $\hat{\psi}(d, d') := \frac{1}{N} \sum_{i=1}^N \hat{\phi}_{d,d'}(O_i)$
9: Estimate causal effects by $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ with $\hat{\psi}(d)$ and $\hat{\psi}(d, d')$
10: Estimate the variances of $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ by: $\hat{\sigma}^2_{\mathrm{tot}} := \frac{1}{N^2} \sum_{i=1}^N (\hat{\phi}_1(O_i) - \hat{\phi}_0(O_i))^2 - \frac{1}{N} \hat{\tau}^2_{\mathrm{tot}}$; $\hat{\sigma}^2_{\mathrm{NDE}}(d) := \frac{1}{N^2} \sum_{i=1}^N (\hat{\phi}_{1,d}(O_i) - \hat{\phi}_{0,d}(O_i))^2 - \frac{1}{N} \hat{\tau}^2_{\mathrm{NDE}}(d)$; $\hat{\sigma}^2_{\mathrm{NIE}}(d) := \frac{1}{N^2} \sum_{i=1}^N (\hat{\phi}_{d,1}(O_i) - \hat{\phi}_{d,0}(O_i))^2 - \frac{1}{N} \hat{\tau}^2_{\mathrm{NIE}}(d)$
Output: $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$, $\hat{\tau}_{\mathrm{NIE}}(d)$, $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}(d)$ and $\hat{\sigma}^2_{\mathrm{NIE}}(d)$

Remark 3 (Continuous or multi-dimensional mediators). For binary treatment $D$ and continuous or multi-dimensional $M$, to avoid nonparametric/high-dimensional conditional density estimation, we can rewrite $\frac{f(m|x, d')}{a(d|x) f(m|x, d)}$ as $\frac{1 - a(d|x, m)}{a(d|x, m)(1 - a(d|x))}$ by the Bayes' rule, and the integral w.r.t. $f(m|x, d')$ in (2) as $\mathbb{E}[\mu(X, d, M) \mid X = x, D = d']$. Then we can first estimate $\mu(x, d, m)$ by $\hat{\mu}(x, d, m)$ and in turn estimate $\mathbb{E}[\mu(X, d, M) \mid X = x, D = d']$ by regressing $\hat{\mu}(X, d, M)$ against $(X, D)$ using the FNN-ReLU class. We mainly consider binary $M$ to avoid unnecessary complications; but see Appendix G for an example in which this strategy is used. Finally, the potential incompatibility between models posited for $a(d|x)$ and $a(d|x, m)$ and the joint distribution of $(X, D, M, Y)$ is not of great concern under the semiparametric framework because all nuisance functions are estimated nonparametrically; again, see Appendix G for an extended discussion.

3.2 Statistical properties of DeepMed: Non-sparse DNN architecture and low-dimensional structures of the nuisance functions

According to Proposition 1, to analyze the statistical properties of DeepMed, it is sufficient to control the $L_2$-estimation errors of the nuisance function estimates $\hat{a}, \hat{f}, \hat{\mu}$ fit by DNNs. To ease presentation, we first study the theoretical guarantees on the $L_2$-estimation error for a generic nuisance function $g : W \in [0, 1]^p \mapsto Z \in \mathbb{R}$, for which we assume:

vi. $Z = g(W) + \xi$, with $\xi$ sub-Gaussian with mean zero and independent of $W$.
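Algorithm 1 can be summarized in a few lines of Python; the following schematic sketch uses our own function names, with fit_nuisances standing in for the DNN fits of $(a, f, \mu)$ and eif_fn for the plug-in evaluation of (2).

```python
import numpy as np

def cross_fit(O, fit_nuisances, eif_fn, V=5, seed=0):
    """Schematic V-fold cross-fitting (steps 2-10 of Algorithm 1).

    O: dict of equal-length arrays (e.g. "X", "D", "M", "Y"); fit_nuisances(train)
    returns estimated nuisances; eif_fn(test, nuis) evaluates the plug-in EIF of Eq. (2).
    """
    rng = np.random.default_rng(seed)
    N = len(O["Y"])
    phi = np.empty(N)
    for idx in np.array_split(rng.permutation(N), V):
        train = np.setdiff1d(np.arange(N), idx)
        nuis = fit_nuisances({k: v[train] for k, v in O.items()})   # DNN fits off-fold
        phi[idx] = eif_fn({k: v[idx] for k, v in O.items()}, nuis)  # evaluate on-fold
    return phi.mean(), phi.std(ddof=1) / np.sqrt(N)   # estimate and its standard error
```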
Note that when $g$ corresponds to $a, f, \mu$, $(W, Z)$ corresponds to $(X, \mathbb{1}(D = 1))$, $((X, D), \mathbb{1}(M = 1))$ and $((X, D, M), Y)$, respectively. We denote the DNN output from the nuisance sample $\mathcal{D}_\nu$ as $\hat{g}$. For theoretical results, we consider $\hat{g}$ as the following empirical risk minimizer (ERM):

$$\hat{g} := \arg\min_{\bar{g} \in \mathcal{F}_{nn}(L, K, B)} \sum_{i \in \mathcal{D}_\nu} \left(Z_i - \bar{g}(W_i)\right)^2. \quad (6)$$

To avoid model misspecification, one often assumes $g \in \mathcal{G}$, where $\mathcal{G}$ is some infinite-dimensional function space. A common choice is $\mathcal{G} = \mathcal{H}_p(\alpha; C)$, the Hölder ball on the input domain $[0, 1]^p$, with smoothness exponent $\alpha$ and radius $C$. Hölder space is one of the most well-studied function spaces in statistics and it is convenient to quantify its complexity by a single smoothness parameter $\alpha$; see Appendix C for a review. It is well-known that estimating Hölder functions suffers from the curse-of-dimensionality (Stone, 1982). One remedy is to consider the following generalized Hölder space, by imposing certain low-dimensional structures on $g$:

$$\mathcal{H}^\dagger_k(\alpha; C) := \left\{ g(w) = h(\Lambda w) : h \in \mathcal{H}_k(\alpha; C),\ \Lambda \in \mathbb{R}^{k \times p} \text{ unknown},\ k \le p \right\}.$$

Remark 4. The above definition contains $g(w) = h(w_I)$, where $I \subset \{1, \cdots, p\}$, as a special case, in which $g$ is assumed to only depend on a subset of the feature vector $w$. One can easily generalize the above definition to additive models $g(w) = \sum_{j=1}^p h_j(w_j)$ where $h_j \in \mathcal{H}_{k_j}(\alpha_j; C_j)$, allowing even more modeling flexibility. To avoid complications, we only consider the above simpler model.

We can show that the ERM estimator $\hat{g}$ (6) from the FNN-ReLU class $\mathcal{F}_{nn}(L, K, B)$ attains the optimal estimation rate over $\mathcal{H}^\dagger_k(\alpha; C)$ up to log factors, by choosing the depth and width appropriately without assuming sparse neural nets.

Lemma 5. Under Assumptions iii – vi, if $g \in \mathcal{H}^\dagger_k(\alpha; C)$ for $k \le p$, with $LK \asymp n^{\frac{k}{2(k + 2\alpha)}}$, we have $\sup_{g \in \mathcal{H}^\dagger_k(\alpha; C)} \mathbb{E}\left[(g(W) - \hat{g}(W))^2\right]^{1/2} \lesssim n^{-\frac{\alpha}{2\alpha + k}} (\log n)^3$.

Lemma 5, together with Proposition 1, implies the main theoretical result of the paper.

Theorem 6. Under Assumptions i – vi and the following condition on $a, f, \mu$: $a \in \mathcal{H}^\dagger_k(\alpha_a; C)$, $f \in \mathcal{H}^\dagger_k(\alpha_f; C)$, $\mu \in \mathcal{H}^\dagger_k(\alpha_\mu; C)$, with

$$\min\left\{ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_f}{2\alpha_f + k},\ \frac{\alpha_f}{2\alpha_f + k} + \frac{\alpha_\mu}{2\alpha_\mu + k},\ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_\mu}{2\alpha_\mu + k} \right\} > \frac{1}{2} + \epsilon, \quad (7)$$

for $k \le p$ and some arbitrarily small $\epsilon > 0$, if $\hat{a}, \hat{f}, \hat{\mu}$ are respectively the ERM (6) from the FNN-ReLU classes $\mathcal{F}_{nn}(L_a, K_a, B)$, $\mathcal{F}_{nn}(L_f, K_f, B)$, $\mathcal{F}_{nn}(L_\mu, K_\mu, B)$, of which the product of the depth and width satisfies $L_g K_g \asymp n^{\frac{k}{2(k + 2\alpha_g)}}$ for $g \in \{a, f, \mu\}$, then the DeepMed estimators $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ computed by Algorithm 1 are semiparametric efficient:

$$\hat{\sigma}^{-1}_{\mathrm{tot}}(\hat{\tau}_{\mathrm{tot}} - \tau_{\mathrm{tot}}),\ \ \hat{\sigma}^{-1}_{\mathrm{NDE}}(d)(\hat{\tau}_{\mathrm{NDE}}(d) - \tau_{\mathrm{NDE}}(d)),\ \ \hat{\sigma}^{-1}_{\mathrm{NIE}}(d)(\hat{\tau}_{\mathrm{NIE}}(d) - \tau_{\mathrm{NIE}}(d)) \stackrel{L}{\to} \mathcal{N}(0, 1),$$

with $N \hat{\sigma}^2_{\mathrm{tot}} \stackrel{p}{\to} \mathbb{E}[(\mathrm{EIF}_1 - \mathrm{EIF}_0)^2]$, $N \hat{\sigma}^2_{\mathrm{NDE}}(d) \stackrel{p}{\to} \mathbb{E}[(\mathrm{EIF}_{1,d} - \mathrm{EIF}_{0,d})^2]$, and $N \hat{\sigma}^2_{\mathrm{NIE}}(d) \stackrel{p}{\to} \mathbb{E}[(\mathrm{EIF}_{d,1} - \mathrm{EIF}_{d,0})^2]$, i.e. $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}$ and $\hat{\sigma}^2_{\mathrm{NIE}}$ are consistent variance estimators.

Remark 7. To unload notation in the above theorem, consider the special case where the smoothness of all the nuisance functions coincides, i.e. $\alpha_a = \alpha_f = \alpha_\mu = \alpha$. Then Condition (7) reduces to $\alpha > k/2 + \epsilon$ for some arbitrarily small $\epsilon > 0$. For example, if the covariates $X$ have dimension $p = 2$ and no low-dimensional structures are imposed on the nuisance functions (i.e. $k \equiv p$), one needs $\alpha > 1$ to ensure semiparametric efficiency of the DeepMed estimators. We emphasize that Lemma 5 and Theorem 6 do not constrain the network sparsity $S$, better reflecting how DNNs are usually used in practice.
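A direct PyTorch rendering of the class $\mathcal{F}_{nn}(L, K, B)$ used in the ERM (6) is given below as a sketch (assuming torch is available; note the sup-norm bound $B$ on the parameters is not enforced here and would require clipping the weights during training).

```python
import torch.nn as nn

def fnn_relu(p, L, K):
    """FNN-ReLU network with depth L, width K, input dim p, scalar output.

    Mirrors F_nn(L, K, B) up to the parameter bound B, which is not enforced.
    """
    widths = [p] + [K] * (L - 1) + [1]
    layers = []
    for l in range(L):
        layers.append(nn.Linear(widths[l], widths[l + 1]))
        if l < L - 1:
            layers.append(nn.ReLU())   # ReLU on all hidden layers, linear output
    return nn.Sequential(*layers)

# e.g. mu_net = fnn_relu(p=7, L=4, K=64) to regress Y on (X, D, M) via squared loss, as in (6)
```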
Theorem 6 advances results on total and decomposition effect estimation with non-sparse DNNs (Farrell et al., 2021, Theorem 1) in terms of (1) weaker smoothness conditions and (2) adapting to certain low-dimensional structures of the nuisance functions. The proof of Lemma 5 follows from a combination of the improved DNN approximation rate obtained in Lu et al. (2021); Jiao et al. (2021) and the standard DNN metric entropy bound (Suzuki, 2019). We prove Lemma 5 and Theorem 6 in Appendix C for completeness. One weakness of Lemma 5 and Theorem 6, as well as of other contemporary works (Chen et al., 2020; Farrell et al., 2021), is the lack of algorithmic/training process considerations (Chen et al., 2022); see Remark 10 and Appendix E for extended discussions.

Remark 8 (Explicit input-layer regularization). Training DNNs in practice involves hyperparameter tuning, including the depth $L$ and width $K$ in Theorem 6 and others like epochs. In the synthetic experiments, we consider nuisance functions only depending on a $k$-subset of the $p$-dimensional input. A reasonable heuristic is to add $L_1$-regularization in the input layer of the DNN. Then the regularization weight is also a hyperparameter. In practice, we simply use cross-validation to select the hyperparameters that minimize the validation loss. We leave its theoretical justification and the performance of other alternative approaches such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019) to future works.

4 Synthetic experiments

In this section and Appendix E, we showcase five synthetic experiments. Since ground truth is rarely known in real data, we believe synthetic experiments play an equally, if not more, important role as real data. Before describing the experimental setups, we garner the following key take-home messages:

(a) Compared with the other competing methods, DeepMed exhibits better finite-sample performance in most of our experiments;
(b) Cross-validation for DNN hyperparameter tuning works reasonably well in our experiments;
(c) We find that DeepMed with explicit regularization in the input layer improves performance (see Table A2) when the true nuisance functions have certain low-dimensional structures in their dependence on the covariates. Farrell et al. (2021) warned against blind explicit regularization in DNNs for total effect estimation. Our observation does not contradict Farrell et al. (2021) as (1) the purpose of the input-layer regularization is not to control the sparsity of the DNN architecture and (2) we do not further regularize hidden layers;
(d) Experimental setups for Cases 3 to 5 generate nuisance functions that are nearly infinite-dimensional and close to the boundary of a Hölder ball with a given smoothness exponent (Liu et al., 2020; Li et al., 2005). Thus these synthetic experiments should be better benchmarks than Cases 1 and 2 or settings in other related works such as Farrell et al. (2021). We hope that these highly nontrivial synthetic experiments are helpful to researchers beyond mediation analysis or causal inference. We share the code for generating these functions as a part of the DeepMed package.

We consider a sample with 10,000 i.i.d. observations. The covariates $X = (X_1, ..., X_p)^\top$ are independently drawn from the uniform distribution $\mathrm{Uniform}([-1, 1])$.
The outcome $Y$, treatment $D$ and mediator $M$ are generated as follows:

$$D \sim \mathrm{Bernoulli}(s(d(X))), \quad M \sim 0.2 D + m(X) + \mathcal{N}(0, 1), \quad Y \sim 0.2 D + M + y(X) + \mathcal{N}(0, 1),$$

where $s(x) := (1 + e^{-x})^{-1}$, and we consider the following three cases to generate the nonlinear functions $d(x)$, $m(x)$ and $y(x)$ in the main text:

• Case 1 (simple functions):

$$d(x) = x_1 x_2 + x_3 x_4 x_5 + \sin x_1, \quad m(x) = 4 \sum_{i=1}^5 \sin 3 x_i, \quad y(x) = (x_1 + x_2)^2 + 5 \sin \sum_{i=1}^5 x_i.$$

• Case 2 (composition of simple functions): we simulate more complex interactions among covariates by composing simple functions as follows:

$$d(x) = d_2 \circ d_1 \circ d_0(x_1, \cdots, x_5), \ \text{with} \ d_0(x_1, \cdots, x_5) = \left( \prod_{i=1}^2 x_i,\ \prod_{i=3}^5 x_i,\ \prod_{i=1}^2 \sin x_i,\ \prod_{i=3}^5 \sin x_i \right),$$
$$d_1(a_1, \cdots, a_4) = \left(\sin(a_1 + a_2), \sin a_2, a_3, a_4\right), \ \text{and} \ d_2(b_1, \cdots, b_4) = 0.5 \sin(b_1 + b_2) + 0.5 (b_3 + b_4),$$
$$m(x) = m_1 \circ m_0(x_1, \ldots, x_5), \ \text{with} \ m_0(x_1, \cdots, x_5) = (\sin x_1, \cdots, \sin x_5), \ m_1(a_1, \cdots, a_5) = 5 \sin \sum_{i=1}^5 a_i,$$

and

$$y(x) = y_2 \circ y_1 \circ y_0(x_1, \cdots, x_5), \ \text{with} \ y_0(x_1, \cdots, x_5) = \left( \sin \sum_{i=1}^2 x_i,\ \sin \sum_{i=3}^5 x_i,\ \sin \sum_{i=1}^5 x_i \right),$$
$$y_1(a_1, a_2, a_3) = \left(\sin(a_1 + a_2), a_3\right), \ \text{and} \ y_2(b_1, b_2) = 10 \sin(b_1 + b_2).$$

• Case 3 (Hölder functions): we consider more complex nonlinear functions as follows:

$$d(x) = x_1 x_2 + x_3 x_4 x_5 + 0.5\, \eta(0.2 x_1; \alpha), \quad m(x) = \sum_{i=1}^5 \eta(0.5 x_i; \alpha), \quad y(x) = x_1 x_2 + 3\, \eta\!\left(0.2 \sum_{i=1}^5 x_i; \alpha\right),$$

where $\eta(x; \alpha) = \sum_{j \in J, l \in \mathbb{Z}} 2^{-j(\alpha + 0.25)} w_{j,l}(x)$ with $J = \{0, 3, 6, 9, 10, 16\}$ and $w_{j,l}(\cdot)$ is the D6 father wavelet function dilated at resolution $j$ and shifted by $l$. By construction, $\eta(x; \alpha) \in \mathcal{H}_1(\alpha; B)$ for some known constant $B > 0$ following Härdle et al. (1998, Theorem 9.6). Here we set $\alpha = 1.2$ and the intrinsic dimension $k = 1$. Thus we expect the DeepMed estimators to be semiparametric efficient. It is indeed the case based on the columns corresponding to Case 3 in Table 1, suggesting that DNNs can be adaptive to certain low-dimensional structures.

Remark 9. The nuisance functions in Cases 3 – 5 (see Appendix E) are less smooth than what has been considered elsewhere, including Farrell et al. (2021), Chen et al. (2020), and even Adcock and Dexter (2021), a paper dedicated to exposing the gap between theoretical approximation rates and DNN practice. These nuisance functions are designed to be near the boundary of a Hölder ball with a given smoothness exponent, as we add wavelets at very high resolution in $\eta(x; \alpha)$. This is the assumption under which most of the known statistical properties of DNNs are developed.

In all the above cases, $\tau_{\mathrm{tot}} = 0.4$ and $\tau_{\mathrm{NDE}}(d) = \tau_{\mathrm{NIE}}(d) = 0.2$ for $d \in \{0, 1\}$. We also consider the cases where the total number of covariates is $p = 20$ or $100$ but only the first five covariates are relevant to $Y$, $M$ and $D$. All simulation results are based on 200 replicates. The sigmoid function is used in the final layer when the response variable is binary. For comparison, we also use the Lasso, random forest (RF) and gradient boosted machine (GBM) to estimate the nuisance functions, and use the true nuisance functions (Oracle) as the benchmark. The Lasso is implemented using the R package "hdm" with a data-driven penalty. The DNN, RF and GBM are implemented using the R packages "keras", "randomForest" and "gbm", respectively. We adopt 3-fold cross-validation to choose the hyperparameters for DNNs (depth $L$, width $K$, $L_1$-regularization parameter and epochs), RF (number of trees and maximum number of nodes) and GBM (number of trees and depth). We use a completely independent sample for the hyperparameter selection.
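For reference, here is a NumPy sketch of the Case 1 data-generating process (the coefficients are as read from the extracted text above, so treat them as an assumption; the seed is arbitrary).

```python
import numpy as np

def simulate_case1(n=10_000, p=5, seed=0):
    """Case 1 data-generating process; true tau_tot = 0.4, tau_NDE = tau_NIE = 0.2."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, p))
    d_x = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3] * X[:, 4] + np.sin(X[:, 0])
    m_x = 4.0 * np.sin(3.0 * X[:, :5]).sum(axis=1)
    y_x = (X[:, 0] + X[:, 1]) ** 2 + 5.0 * np.sin(X[:, :5].sum(axis=1))
    D = rng.binomial(1, 1.0 / (1.0 + np.exp(-d_x)))          # D ~ Bernoulli(s(d(X)))
    M = 0.2 * D + m_x + rng.standard_normal(n)
    Y = 0.2 * D + M + y_x + rng.standard_normal(n)
    return X, D, M, Y
```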
In this paper, we only use one extra dataset to conduct the cross-validation for hyperparameter selection, so our simulation results are conditional on this extra dataset. We use the cross-entropy loss for the binary response and the mean-squared loss for the continuous response. We fix the batch size as 100, and the other hyperparameters for the other methods are set to the default values in their R packages. See Appendix E for more details. We compare the performances of different methods in terms of the biases, empirical standard errors (SE) and root mean squared errors (RMSE) of the estimates, as well as the coverage probabilities (CP) of their 95% confidence intervals. When $p = k = 5$ (all covariates are relevant, or no low-dimensional structures), DeepMed has smaller bias and RMSE than the other competing methods, and is only slightly worse than Oracle. Lasso has the largest bias and poor CP, as expected, since it does not capture the nonlinearity of the nuisance functions. RF and GBM also have substantial biases, especially in Case 2 with compositions of simple functions. Overall, DeepMed performs better than the competing methods (Table 1). From the empirical distributions, we can also see that the estimates are nearly unbiased and normally distributed in Cases 1-3 (Figures A1-A3). When $p = 20$ or $100$ but only the first five covariates are relevant ($k = 5$), $L_1$-regularization in the input layer drastically improves the performance of DeepMed (Table A2). DeepMed with $L_1$-regularization in the input layer also has smaller bias and RMSE than the other competing methods (Tables A3 and A4). As expected, more precise nuisance function estimates (i.e., smaller validation loss) generally lead to more precise causal effect estimates. The validation losses of nuisance function estimates from DeepMed are generally much smaller than those using Lasso, RF and GBM (Tables A5-A7).

Remark 10. Due to space limitations, we defer Cases 4 and 5 to Appendix E, in which DeepMed fails to be semiparametric efficient, compared to the Oracle; see an extended discussion in Appendix E. We conjecture this may be due to the implicit regularization of gradient-based training algorithms such as SGD (Table A11) or Adam (Kingma and Ba, 2015) (all simulation results except Table A11), which are used to train the DNNs to estimate the nuisance parameters, instead of actually solving the ERM (6). Most previous works focus on the benefit of implicit regularization (Neyshabur, 2017; Bartlett et al., 2020) on generalization. Yet, implicit regularization might inject implicit bias into causal effect estimates, which could make statistical inference invalid. Such a potential curse of implicit regularization has not been documented in the DNN-based causal inference literature before and exemplifies the value of our synthetic experiments. We believe this is an important open research direction for theoretical results to better capture the empirical performance of DNN-based causal inference methods such as DeepMed.

5 Real data analysis on fairness

As a proof of concept, we use DeepMed and other competing methods to re-analyze the COMPAS algorithm (Dressel and Farid, 2018). In particular, we are interested in the NDE of race $D$ on the recidivism risk (or the COMPAS score) $Y$ with the number of prior convictions as the mediator $M$. For race, we mainly focus on the Caucasian population ($D = 0$) and the African-American population ($D = 1$), and exclude the individuals of other ethnicity groups.
The COMPAS score ($Y$) is ordinal, ranging from 1 to 10 (1: lowest risk; 10: highest risk). We also include the demographic information (age and gender) as covariates $X$. All the methods find a significant positive NDE of race on the COMPAS score at $\alpha$-level 0.005 (Table 2; all p-values $< 10^{-7}$), consistent with previous findings (Nabi and Shpitser, 2018). Thus the COMPAS algorithm tends to assign higher recidivism risks to African-Americans than to Caucasians, even when they have the same number of prior convictions. The validation losses of nuisance function estimates by DeepMed are smaller than those of the other competing methods (Table A8), possibly suggesting smaller biases of the corresponding NDE/NIE estimators. We emphasize that research in machine learning fairness should be held accountable (Bao et al., 2021). Our data analysis is merely a proof-of-concept that DeepMed works in practice, and the conclusion from our data analysis should not be treated as definitive. We defer the comments on potential issues of unmeasured confounding to Appendix F and another real data analysis to Appendix G.

6 Conclusion and Discussion

In this paper, we proposed DeepMed for semiparametric mediation analysis with DNNs. We established novel statistical properties for DNN-based causal effect estimation that can (1) circumvent sparse DNN architectures and (2) leverage certain low-dimensional structures of the nuisance functions. These results significantly advance our current understanding of DNN-based causal inference, including mediation analysis. Evaluated by our extensive synthetic experiments, DeepMed mostly exhibits improved finite-sample performance over the other competing machine learning methods. But as mentioned in Remark 10, there is still a large gap between statistical guarantees and empirical observations. Therefore an important future direction is to incorporate the training process while investigating the statistical properties, to gain a deeper theoretical understanding of DNN-based causal inference. It is also of future research interest to enable DeepMed to handle unmeasured confounding and more complex path-specific effects (Malinsky et al., 2019; Miles et al., 2020), and to incorporate other hyperparameter tuning strategies that leverage the multiply-robustness property, such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019). Finally, we warn readers that all causal inference methods, including DeepMed, may have negative societal impact if they are used without carefully checking their working assumptions.

Acknowledgement and Disclosure of Funding

The authors thank four anonymous reviewers and one anonymous area chair for helpful comments, Fengnan Gao for some initial discussion on how to incorporate low-dimensional manifold assumptions using DNNs, and Ling Guo for discussion on DNN training. The authors would also like to thank the Department of Statistics and Actuarial Sciences at The University of Hong Kong for providing high-performance computing servers that supported the numerical experiments in this paper. L. Liu gratefully acknowledges funding support by Natural Science Foundation of China Grants No. 12101397 and No. 12090024, Pujiang National Lab Grant No. P22KN00524, Natural Science Foundation of Shanghai Grant No. 21ZR1431000, Shanghai Science and Technology Commission Grant No. 21JC1402900, Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, and Shanghai Pujiang Program Research Grant No. 20PJ140890.
1. What is the focus and contribution of the paper on semiparametric mediation analysis with deep neural networks (DNNs)? 2. What are the strengths of the proposed approach, particularly in relaxing sparsity constraints and using multiply-robust behavior? 3. Are there any concerns regarding the novelty of the theoretical claims specific to using DNNs to fit the nuisances? 4. How does the reviewer assess the quality and effectiveness of the synthetic experiments provided by the authors? 5. Do you have any questions or suggestions regarding the alternative solution suggested by the authors to avoid the curse of dimensionality? 6. Is there anything unclear or confusing about the "cross-world" ignorability assumption in section 2.2 that needs further explanation? 7. Should the identifying functional include an integration over dPx as well? 8. Why is X defined as {0, 1}^p in Section 2.1, and are there any limitations to this definition?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors propose a new method called DeepMed for performing semiparametric mediation analysis with DNNs. Strengths And Weaknesses Their main theoretical contribution is using the multiply-robust behavior of IF-based estimators for (in)direct effects in order to relax the sparsity constraints in training nuisance functions with DNN architectures. However, this work https://academic.oup.com/ectj/article/25/2/277/6517682 (and possibly others) discusses most of what has been proposed in this draft regarding the second-order bias and properties of the estimator within the sample-splitting scheme (Theorem 1). I'm less familiar with the DNN literature, and can't comment on the novelty of the theoretical claims specific to using DNNs to fit the nuisances. But I find a full-fledged procedure to fit nuisances via DNNs to compute mediated effects worthwhile and not immediately straightforward. I also think the authors have done a good job with the synthetic experiments. Questions In Remark 3, the authors have suggested to use a(d | x, m) and a(d | x) to avoid the curse of dimensionality in fitting f(m | x, d) if m is multidimensional and/or continuous. However, this alternative solution can lead to new complications such as incompatibility in posing models for a(d | x, m) and a(d | x) while having a coherent joint over X, D, M, Y. It would be great if the authors could comment on this and possibly provide remedies to resolve the issue. In Section 2.2, it would be more clear if the "cross-world" ignorability assumption were written down more explicitly, i.e., Y(d, m) \indep M(d') \mid X. The identifying functional on top of page 3 should have an integration over dPx as well. Why is X defined as {0, 1}^p in Section 2.1? The arguments don't seem to rely on a restricted state space on X. Limitations NA
NIPS
Title DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning Abstract Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in biomedical and social sciences, and also for evaluating machine learning fairness. To reduce bias for estimating Natural Direct and Indirect Effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinitedimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results that our DeepMed method (1) can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low-dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings. 1 Introduction Tremendous progress has been made in this decade on deploying deep neural networks (DNNs) in real-world problems (Krizhevsky et al., 2012; Wolf et al., 2019; Jumper et al., 2021; Brown et al., 2022). Causal inference is no exception. In semiparametric causal inference, a series of seminal works (Chen et al., 2020; Chernozhukov et al., 2020; Farrell et al., 2021) initiated the investigation of ⇤Co-corresponding authors, alphabetical order 36th Conference on Neural Information Processing Systems (NeurIPS 2022). statistical properties of causal effect estimators when the nuisance functions (the outcome regressions and propensity scores) are estimated by DNNs. However, there are a few limitations in the current literature that need to be addressed before the theoretical results can be used to guide practice: (1) Most recent works mainly focus on total effect (Chen et al., 2020; Farrell et al., 2021). In many settings, however, more intricate causal parameters are often of greater interests. In biomedical and social sciences, one is often interested in “mediation analysis” to decompose the total effect into direct and indirect effect to unpack the underlying black-box causal mechanism (Baron and Kenny, 1986). More recently, mediation analysis also percolated into machine learning fairness. For instance, in the context of predicting the recidivism risk, Nabi and Shpitser (2018) argued that, for a “fair” algorithm, sensitive features such as race should have no direct effect on the predicted recidivism risk. If such direct effects can be accurately estimated, one can detect the potential unfairness of a machine learning algorithm. We will revisit such applications in Section 5 and Appendix G. (2) Statistical properties of DNN-based causal estimators in recent works mostly follow from several (recent) results on the convergence rates of DNN-based nonparametric regression estimators (Suzuki, 2019; Schmidt-Hieber, 2020; Tsuji and Suzuki, 2021), with the limitation of relying on sparse DNN architectures. The theoretical properties are in turn evaluated by relatively simple synthetic experiments not designed to generate nearly infinite-dimensional nuisance functions, a setting considered by almost all the above related works. 
The above limitations raise the tantalizing question whether the available statistical guarantees for DNN-based causal inference have practical relevance. In this work, we plan to partially fill these gaps by developing a new method called DeepMed for semiparametric mediation analysis with DNNs. We focus on the Natural Direct/Indirect Effects (NDE/NIE) (Robins and Greenland, 1992; Pearl, 2001) (defined in Section 2.1), but our results can also be applied to more general settings; see Remark 2. The DeepMed estimators leverage the "multiply-robust" property of the efficient influence function (EIF) of NDE/NIE (Tchetgen Tchetgen and Shpitser, 2012; Farbmacher et al., 2022) (see Proposition 1 in Section 2.2), together with the flexibility and superior predictive power of DNNs (see Section 3.1 and Algorithm 1). In particular, we also make the following novel contributions to deepen our understanding of DNN-based semiparametric causal inference:
• On the theoretical side, we obtain new results that our DeepMed method can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and can adapt to certain low-dimensional structures of the nuisance functions (see Section 3.2), thus significantly advancing the existing literature on DNN-based semiparametric causal inference. Non-sparse DNN architecture is more commonly employed in practice (Farrell et al., 2021), and the low-dimensional structures of nuisance functions can help avoid curse-of-dimensionality. These two points, taken together, significantly advance our understanding of the statistical guarantee of DNN-based causal inference.
• More importantly, on the empirical side, in Section 4, we designed sophisticated synthetic experiments to simulate nearly infinite-dimensional functions, which are much more complex than those in previous related works (Chen et al., 2020; Farrell et al., 2021; Adcock and Dexter, 2021). We emphasize that these nontrivial experiments could be of independent interest to the theory of deep learning beyond causal inference, to further expose the gap between deep learning theory and practice (Adcock and Dexter, 2021; Gottschling et al., 2020); see Remark 9 for an extended discussion. As a proof of concept, in Section 5 and Appendix G, we also apply DeepMed to re-analyze two real-world datasets on algorithmic fairness and reach similar conclusions to related works.
• Finally, a user-friendly R package can be found at https://github.com/siqixu/DeepMed. Making such resources available helps enhance reproducibility, a highly recognized problem in all scientific disciplines, including (causal) machine learning (Pineau et al., 2021; Kaddour et al., 2022).

2 Definition, identification, and estimation of NDE and NIE
2.1 Definition of NDE and NIE
Throughout this paper, we denote $Y$ as the primary outcome of interest, $D$ as a binary treatment variable, $M$ as the mediator on the causal pathway from $D$ to $Y$, and $X \in [0,1]^p$ (or more generally, compactly supported in $\mathbb{R}^p$) as baseline covariates including all potential confounders. We denote the observed data vector as $O \equiv (X, D, M, Y)$. Let $M(d)$ denote the potential outcome for the mediator when setting $D = d$, and $Y(d,m)$ the potential outcome of $Y$ under $D = d$ and $M = m$, where $d \in \{0,1\}$ and $m$ is in the support $\mathcal{M}$ of $M$.
We define the average total (treatment) effect as $\tau_{\mathrm{tot}} := \mathbb{E}[Y(1,M(1)) - Y(0,M(0))]$, the average NDE of the treatment $D$ on the outcome $Y$ when the mediator takes the natural potential outcome under $D = d$ as $\tau_{\mathrm{NDE}}(d) := \mathbb{E}[Y(1,M(d)) - Y(0,M(d))]$, and the average NIE of the treatment $D$ on the outcome $Y$ via the mediator $M$ as $\tau_{\mathrm{NIE}}(d) := \mathbb{E}[Y(d,M(1)) - Y(d,M(0))]$. We have the trivial decomposition $\tau_{\mathrm{tot}} \equiv \tau_{\mathrm{NDE}}(d) + \tau_{\mathrm{NIE}}(d')$ for $d \neq d'$. In causal mediation analysis, the parameters of interest are $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$.

2.2 Semiparametric multiply-robust estimators of NDE/NIE
Estimating $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$ can be reduced to estimating $\psi(d,d') := \mathbb{E}[Y(d,M(d'))]$ for $d, d' \in \{0,1\}$. We make the following standard identification assumptions:
i. Consistency: if $D = d$, then $M = M(d)$ for all $d \in \{0,1\}$; while if $D = d$ and $M = m$, then $Y = Y(d,m)$ for all $d \in \{0,1\}$ and all $m$ in the support of $M$.
ii. Ignorability: $Y(d,m) \perp\!\!\!\perp D \mid X$, $Y(d,m) \perp\!\!\!\perp M \mid X, D$, $M(d) \perp\!\!\!\perp D \mid X$, and $Y(d,m) \perp\!\!\!\perp M(d') \mid X$, almost surely for all $d, d' \in \{0,1\}$ and all $m \in \mathcal{M}$. The first three conditions are, respectively, no unmeasured treatment-outcome, mediator-outcome and treatment-mediator confounding, whereas the fourth condition is often referred to as the "cross-world" condition. We provide more detailed comments on these four conditions in Appendix A.
iii. Positivity: The propensity score $a(d|X) \equiv \Pr(D = d \mid X) \in (c, C)$ for some constants $0 < c \le C < 1$, almost surely for all $d \in \{0,1\}$; $f(m|X,d)$, the conditional density (mass) function of $M = m$ (when $M$ is discrete) given $X$ and $D = d$, is strictly bounded between $[\underline{\rho}, \bar{\rho}]$ for some constants $0 < \underline{\rho} \le \bar{\rho} < \infty$, almost surely for all $m \in \mathcal{M}$ and all $d \in \{0,1\}$.

Under the above assumptions, the causal parameter $\psi(d,d')$ for $d, d' \in \{0,1\}$ can be identified as any of the following three observed-data functionals:
$$\psi(d,d') \equiv \mathbb{E}\!\left[\frac{\mathbb{1}\{D = d\}\, f(M|X,d')\, Y}{a(d|X)\, f(M|X,d)}\right] \equiv \mathbb{E}\!\left[\frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X,d,M)\right] \equiv \int \mu(x,d,m)\, f(m|x,d')\, p(x)\, dm\, dx, \quad (1)$$
where $\mathbb{1}\{\cdot\}$ denotes the indicator function, $p(x)$ denotes the marginal density of $X$, and $\mu(x,d,m) := \mathbb{E}[Y \mid X = x, D = d, M = m]$ is the outcome regression model, for which we also make the following standard boundedness assumption:
iv. $\mu(x,d,m)$ is also strictly bounded between $[-R, R]$ for some constant $R > 0$.

Following the convention in the semiparametric causal inference literature, we call $a, f, \mu$ "nuisance functions". Tchetgen Tchetgen and Shpitser (2012) derived the EIF of $\psi(d,d')$: $\mathrm{EIF}_{d,d'} \equiv \psi_{d,d'}(O) - \psi(d,d')$, where
$$\psi_{d,d'}(O) = \frac{\mathbb{1}\{D = d\} \cdot f(M|X,d')}{a(d|X) \cdot f(M|X,d)}\, (Y - \mu(X,d,M)) + \left(1 - \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\right) \int_{m \in \mathcal{M}} \mu(X,d,m)\, f(m|X,d')\, dm + \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X,d,M). \quad (2)$$
The nuisance functions $\mu(x,d,m)$, $a(d|x)$ and $f(m|x,d)$ appearing in $\psi_{d,d'}(o)$ are unknown and generally high-dimensional. But with a sample $\mathcal{D} \equiv \{O_j\}_{j=1}^{N}$ of the observed data, based on $\psi_{d,d'}(o)$, one can construct the following generic sample-splitting multiply-robust estimator of $\psi(d,d')$:
$$\tilde{\psi}(d,d') = \frac{1}{n} \sum_{i \in \mathcal{D}_n} \tilde{\psi}_{d,d'}(O_i), \quad (3)$$
where $\mathcal{D}_n \equiv \{O_i\}_{i=1}^{n}$ is a subset of all $N$ data, and $\tilde{\psi}_{d,d'}(o)$ replaces the unknown nuisance functions $a, f, \mu$ in $\psi_{d,d'}(o)$ by some generic estimators $\tilde{a}, \tilde{f}, \tilde{\mu}$ computed using the remaining $N - n$ nuisance sample data, denoted as $\mathcal{D}_\nu$. Cross-fitting is then needed to recover the information lost due to sample splitting; see Algorithm 1. It is clear from (2) that $\tilde{\psi}(d,d')$ is a consistent estimator of $\psi(d,d')$ as long as any two of $\tilde{a}, \tilde{f}, \tilde{\mu}$ are consistent estimators of the corresponding true nuisance functions, hence the name "multiply-robust". Throughout this paper, we take $n \asymp N - n$ and assume:
v. Any nuisance function estimators are strictly bounded within the respective lower and upper bounds of $a, f, \mu$.

To further ease notation, we define: for any $d \in \{0,1\}$, $r_{a,d} := \left(\int \delta_{a,d}(x)^2\, dF(x)\right)^{1/2}$, $r_{f,d} := \left(\int \delta_{f,d}(x,m)^2\, dF(x,m|d=0)\right)^{1/2}$, and $r_{\mu,d} := \left(\int \delta_{\mu,d}(x,m)^2\, dF(x,m|d=0)\right)^{1/2}$, where $\delta_{a,d}(x) := \tilde{a}(d|x) - a(d|x)$, $\delta_{f,d}(x,m) := \tilde{f}(m|x,d) - f(m|x,d)$ and $\delta_{\mu,d}(x,m) := \tilde{\mu}(x,d,m) - \mu(x,d,m)$ are point-wise estimation errors of the estimated nuisance functions. In defining the above $L_2$-estimation errors, we choose to take expectation with respect to (w.r.t.) the law $F(m,x|d=0)$ only for convenience, with no loss of generality by Assumptions iii and v. To show that the cross-fit version of $\tilde{\psi}(d,d')$ is semiparametric efficient for $\psi(d,d')$, we shall demonstrate under what conditions $\sqrt{n}\,(\tilde{\psi}(d,d') - \psi(d,d')) \xrightarrow{L} \mathcal{N}(0, \mathbb{E}[\mathrm{EIF}_{d,d'}^2])$ (Newey, 1990). The following proposition on the statistical properties of $\tilde{\psi}(d,d')$ is a key step towards this objective.

Proposition 1. Denote $\mathrm{Bias}(\tilde{\psi}(d,d')) := \mathbb{E}[\tilde{\psi}(d,d') - \psi(d,d') \mid \mathcal{D}_\nu]$ as the bias of $\tilde{\psi}(d,d')$ conditional on the nuisance sample $\mathcal{D}_\nu$. Under Assumptions i – v, $\mathrm{Bias}(\tilde{\psi}(d,d'))$ is of second order:
$$\left|\mathrm{Bias}(\tilde{\psi}(d,d'))\right| \lesssim \max\left\{ r_{a,d} \cdot r_{f,d},\ \max_{d'' \in \{0,1\}} r_{f,d''} \cdot r_{\mu,d},\ r_{a,d} \cdot r_{\mu,d} \right\}. \quad (4)$$
Furthermore, if the RHS of (4) is $o(n^{-1/2})$, then
$$\sqrt{n}\left(\tilde{\psi}(d,d') - \psi(d,d')\right) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left(\psi_{d,d'}(O_i) - \psi(d,d')\right) + o_P(1) \xrightarrow{d} \mathcal{N}\left(0, \mathbb{E}[\mathrm{EIF}_{d,d'}^2]\right). \quad (5)$$

Although the above result is a direct consequence of the EIF $\psi_{d,d'}(O)$, we prove Proposition 1 in Appendix B for completeness.

Remark 2. The total effect $\tau_{\mathrm{tot}} = \psi(1,1) - \psi(0,0)$ can be viewed as a special case, for which $d = d'$ in $\psi(d,d')$. Then $\mathrm{EIF}_{d,d} \equiv \mathrm{EIF}_d$ corresponds to the nonparametric EIF of $\psi(d,d) \equiv \psi(d) \equiv \mathbb{E}[Y(d,M(d))]$: $\mathrm{EIF}_d = \psi_d(O) - \psi(d)$ with
$$\psi_d(O) = \frac{\mathbb{1}\{D = d\}}{a(d|X)}\, Y + \left(1 - \frac{\mathbb{1}\{D = d\}}{a(d|X)}\right) \mu(X,d),$$
where $\mu(x,d) := \mathbb{E}[Y \mid X = x, D = d]$. Hence all the theoretical results in this paper are applicable to total effect estimation. Our framework can also be applied to all the statistical functionals that satisfy a so-called "mixed-bias" property, characterized recently in Rotnitzky et al. (2021). This class includes the quadratic functional, which is important for uncertainty quantification in machine learning.
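To make the plug-in structure of (2)–(3) concrete, here is a minimal NumPy sketch of evaluating the uncentered influence function $\psi_{d,d'}(O)$ on a sample, assuming a binary mediator (so the integral over $m$ reduces to a two-term sum) and fitted nuisance functions exposed as vectorized callables. All names (`uncentered_eif`, `a_hat`, `f_hat`, `mu_hat`) are our own illustrative choices, not the API of the authors' R package.

```python
import numpy as np

def uncentered_eif(d, d_prime, X, D, M, Y, a_hat, f_hat, mu_hat):
    """Evaluate psi_{d,d'}(O_i) of Eq. (2) at each observation.

    Assumed (hypothetical) nuisance interfaces, vectorized over rows of X:
      a_hat(d, X)     -> P(D = d | X)             (propensity score)
      f_hat(m, X, d)  -> P(M = m | X, D = d)      (mediator law; M binary here)
      mu_hat(X, d, m) -> E[Y | X, D = d, M = m]   (outcome regression)
    """
    ind_d = (D == d).astype(float)
    ind_dp = (D == d_prime).astype(float)
    # Integral of mu(X, d, m) f(m | X, d') over m: a two-term sum for binary M.
    integral = sum(mu_hat(X, d, m) * f_hat(m, X, d_prime) for m in (0, 1))
    term1 = (ind_d * f_hat(M, X, d_prime)
             / (a_hat(d, X) * f_hat(M, X, d)) * (Y - mu_hat(X, d, M)))
    term2 = (1.0 - ind_dp / a_hat(d_prime, X)) * integral
    term3 = ind_dp / a_hat(d_prime, X) * mu_hat(X, d, M)
    return term1 + term2 + term3

# The sample mean of uncentered_eif(...) over the estimation fold gives the
# multiply-robust estimator of psi(d, d') in Eq. (3).
```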
3 Estimation and inference of NDE/NIE using DeepMed
We now introduce DeepMed, a method for mediation analysis with nuisance functions estimated by DNNs. By leveraging the second-order bias property of the multiply-robust estimators of NDE/NIE (Proposition 1), we will derive statistical properties of DeepMed in this section. The nuisance function estimators by DNNs are denoted as $\hat{a}, \hat{f}, \hat{\mu}$.

3.1 Details on DeepMed
First, we introduce the fully-connected feed-forward neural network with the rectified linear units (ReLU) as the activation function for the hidden-layer neurons (FNN-ReLU), which will be used to estimate the nuisance functions. Then, we will introduce an estimation procedure using $V$-fold cross-fitting with sample splitting to avoid the Donsker-type empirical-process assumption on the nuisance functions, which, in general, is violated in the high-dimensional setup. Finally, we provide the asymptotic statistical properties of the DNN-based estimators of $\tau_{\mathrm{tot}}$, $\tau_{\mathrm{NDE}}(d)$ and $\tau_{\mathrm{NIE}}(d)$.

We denote the ReLU activation function as $\sigma(u) := \max(u, 0)$ for any $u \in \mathbb{R}$. Given vectors $x, b$, we denote $\sigma_b(x) := \sigma(x - b)$, with $\sigma$ acting on the vector $x - b$ component-wise. Let $\mathcal{F}_{nn}$ denote the class of the FNN-ReLU functions
$$\mathcal{F}_{nn} := \left\{ f: \mathbb{R}^p \to \mathbb{R};\ f(x) = W^{(L)} \sigma_{b^{(L)}} \circ \cdots \circ W^{(1)} \sigma_{b^{(1)}}(x) \right\},$$
where $\circ$ is the composition operator, $L$ is the number of layers (i.e. depth) of the network, and for $l = 1, \cdots, L$, $W^{(l)}$ is a $K_{l+1} \times K_l$-dimensional weight matrix with $K_l$ being the number of neurons in the $l$-th layer (i.e. width) of the network, with $K_1 = p$ and $K_{L+1} = 1$, and $b^{(l)}$ is a $K_l$-dimensional vector. To avoid notation clutter, we concatenate all the network parameters as $\Theta = (W^{(l)}, b^{(l)}, l = 1, \cdots, L)$ and simply take $K_2 = \cdots = K_L = K$. We also assume $\Theta$ to be bounded: $\|\Theta\|_\infty \le B$ for some universal constant $B > 0$. We may make the dependence on $L, K, B$ explicit by writing $\mathcal{F}_{nn}$ as $\mathcal{F}_{nn}(L, K, B)$.

DeepMed estimates $\tau_{\mathrm{tot}}, \tau_{\mathrm{NDE}}(d), \tau_{\mathrm{NIE}}(d)$ by (3), with the nuisance functions $a, f, \mu$ estimated using $\mathcal{F}_{nn}$ with the $V$-fold cross-fitting strategy, summarized in Algorithm 1 below; also see Farbmacher et al. (2022). DeepMed inputs the observed data $\mathcal{D} \equiv \{O_i\}_{i=1}^{N}$ and outputs the estimated total effect $\hat{\tau}_{\mathrm{tot}}$, NDE $\hat{\tau}_{\mathrm{NDE}}(d)$ and NIE $\hat{\tau}_{\mathrm{NIE}}(d)$, together with their variance estimators $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}(d)$ and $\hat{\sigma}^2_{\mathrm{NIE}}(d)$.

Algorithm 1 DeepMed with V-fold cross-fitting
1: Choose some integer $V$ (usually $V \in \{2, 3, \cdots, 10\}$)
2: Split the $N$ observations into $V$ subsamples $I_v \subset \{1, \cdots, N\} \equiv [N]$ with equal size $n = N/V$
3: for $v = 1, \cdots, V$: do
4: Fit the nuisance functions by DNNs using observations in $[N] \setminus I_v$
5: Compute the nuisance functions on the subsample $I_v$ using the estimated DNNs in step 4
6: Obtain $\{\hat{\psi}_d(O_i), \hat{\psi}_{d,d'}(O_i)\}_{i \in I_v}$ for the subsample $I_v$ based on (2), respectively, with the nuisance functions replaced by their estimates in step 5
7: end for
8: Estimate average potential outcomes by $\hat{\psi}(d) := \frac{1}{N}\sum_{i=1}^{N} \hat{\psi}_d(O_i)$, $\hat{\psi}(d,d') := \frac{1}{N}\sum_{i=1}^{N} \hat{\psi}_{d,d'}(O_i)$
9: Estimate causal effects by $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ with $\hat{\psi}(d)$ and $\hat{\psi}(d,d')$
10: Estimate the variances of $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ by: $\hat{\sigma}^2_{\mathrm{tot}} := \frac{1}{N^2}\sum_{i=1}^{N} (\hat{\psi}_1(O_i) - \hat{\psi}_0(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{\mathrm{tot}}$; $\hat{\sigma}^2_{\mathrm{NDE}}(d) := \frac{1}{N^2}\sum_{i=1}^{N} (\hat{\psi}_{1,d}(O_i) - \hat{\psi}_{0,d}(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{\mathrm{NDE}}(d)$; $\hat{\sigma}^2_{\mathrm{NIE}}(d) := \frac{1}{N^2}\sum_{i=1}^{N} (\hat{\psi}_{d,1}(O_i) - \hat{\psi}_{d,0}(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{\mathrm{NIE}}(d)$
Output: $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$, $\hat{\tau}_{\mathrm{NIE}}(d)$, $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}(d)$ and $\hat{\sigma}^2_{\mathrm{NIE}}(d)$

Remark 3 (Continuous or multi-dimensional mediators). For binary treatment $D$ and continuous or multi-dimensional $M$, to avoid nonparametric/high-dimensional conditional density estimation, we can rewrite $\frac{f(m|x,d')}{a(d|x) f(m|x,d)}$ as $\frac{1 - a(d|x,m)}{a(d|x,m)(1 - a(d|x))}$ by Bayes' rule (since $f(m|x,d) = a(d|x,m) f(m|x)/a(d|x)$ and $d' = 1 - d$), and the integral w.r.t. $f(m|x,d')$ in (2) as $\mathbb{E}[\mu(X,d,M) \mid X = x, D = d']$. Then we can first estimate $\mu(x,d,m)$ by $\hat{\mu}(x,d,m)$ and in turn estimate $\mathbb{E}[\mu(X,d,M) \mid X = x, D = d']$ by regressing $\hat{\mu}(X,d,M)$ against $(X,D)$ using the FNN-ReLU class. We mainly consider binary $M$ to avoid unnecessary complications; but see Appendix G for an example in which this strategy is used. Finally, the potential incompatibility between models posited for $a(d|x)$ and $a(d|x,m)$ and the joint distribution of $(X, D, M, Y)$ is not of great concern under the semiparametric framework because all nuisance functions are estimated nonparametrically; again, see Appendix G for an extended discussion.
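The cross-fitting in Algorithm 1 is equally mechanical. Below is a schematic of steps 2–10 for $\tau_{\mathrm{NDE}}(d) = \psi(1,d) - \psi(0,d)$, reusing the `uncentered_eif` helper sketched above and assuming a user-supplied `fit_nuisances` training routine (which, in DeepMed, would fit the three FNN-ReLU networks); this is a sketch of the procedure, not the authors' implementation.

```python
import numpy as np

def cross_fit_tau_nde(d, X, D, M, Y, fit_nuisances, V=3, seed=0):
    """Cross-fitted tau_NDE(d) and its standard error (Algorithm 1, steps 2-10).

    fit_nuisances(X, D, M, Y) is assumed to return fitted callables
    (a_hat, f_hat, mu_hat) trained only on the data it receives.
    """
    N = len(Y)
    rng = np.random.default_rng(seed)
    folds = rng.permutation(N) % V                # V subsamples of (near-)equal size
    phi1, phi0 = np.empty(N), np.empty(N)         # psi_hat_{1,d}(O_i), psi_hat_{0,d}(O_i)
    for v in range(V):
        test, train = folds == v, folds != v
        a_hat, f_hat, mu_hat = fit_nuisances(X[train], D[train], M[train], Y[train])
        args = (X[test], D[test], M[test], Y[test], a_hat, f_hat, mu_hat)
        phi1[test] = uncentered_eif(1, d, *args)  # step 6
        phi0[test] = uncentered_eif(0, d, *args)
    diff = phi1 - phi0
    tau_hat = diff.mean()                         # steps 8-9
    var_hat = diff.var() / N                      # step 10: (1/N^2) sum diff^2 - tau^2/N
    return tau_hat, np.sqrt(var_hat)
```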
3.2 Statistical properties of DeepMed: Non-sparse DNN architecture and low-dimensional structures of the nuisance functions
According to Proposition 1, to analyze the statistical properties of DeepMed, it is sufficient to control the $L_2$-estimation errors of the nuisance function estimates $\hat{a}, \hat{f}, \hat{\mu}$ fit by DNNs. To ease presentation, we first study the theoretical guarantees on the $L_2$-estimation error for a generic nuisance function $g: W \in [0,1]^p \to Z \in \mathbb{R}$, for which we assume:
vi. $Z = g(W) + \xi$, with $\xi$ sub-Gaussian with mean zero and independent of $W$.

Note that when $g$ corresponds to $a, f, \mu$, $(W, Z)$ corresponds to $(X, \mathbb{1}(D = 1))$, $((X,D), \mathbb{1}(M = 1))$ and $((X,D,M), Y)$, respectively. We denote the DNN output from the nuisance sample $\mathcal{D}_\nu$ as $\hat{g}$. For theoretical results, we consider $\hat{g}$ as the following empirical risk minimizer (ERM):
$$\hat{g} := \arg\min_{\bar{g} \in \mathcal{F}_{nn}(L,K,B)} \sum_{i \in \mathcal{D}_\nu} \left(Z_i - \bar{g}(W_i)\right)^2. \quad (6)$$

To avoid model misspecification, one often assumes $g \in \mathcal{G}$, where $\mathcal{G}$ is some infinite-dimensional function space. A common choice is $\mathcal{G} = \mathcal{H}_p(\alpha; C)$, the Hölder ball on the input domain $[0,1]^p$, with smoothness exponent $\alpha$ and radius $C$. The Hölder space is one of the most well-studied function spaces in statistics, and it is convenient to quantify its complexity by a single smoothness parameter $\alpha$; see Appendix C for a review. It is well known that estimating Hölder functions suffers from the curse of dimensionality (Stone, 1982). One remedy is to consider the following generalized Hölder space, by imposing certain low-dimensional structures on $g$:
$$\mathcal{H}^{\dagger}_k(\alpha; C) := \left\{ g(w) = h(\Gamma w) : h \in \mathcal{H}_k(\alpha; C),\ \Gamma \in \mathbb{R}^{k \times p} \text{ unknown},\ k \le p \right\}.$$
Remark 4. The above definition contains $g(w) = h(w_I)$, where $I \subset \{1, \cdots, p\}$, as a special case, in which $g$ is assumed to only depend on a subset of the feature vector $w$. One can easily generalize the above definition to additive models $g(w) = \sum_{j=1}^{p} h_j(w_j)$ where $h_j \in \mathcal{H}_{k_j}(\alpha_j; C_j)$, allowing even more modeling flexibility. To avoid complications, we only consider the above simpler model.

We can show that the ERM estimator $\hat{g}$ in (6) from the FNN-ReLU class $\mathcal{F}_{nn}(L,K,B)$ attains the optimal estimation rate over $\mathcal{H}^{\dagger}_k(\alpha;C)$ up to log factors, by choosing the depth and width appropriately without assuming sparse neural nets.

Lemma 5. Under Assumptions iii – vi, if $g \in \mathcal{H}^{\dagger}_k(\alpha;C)$ for $k \le p$, with $LK \asymp n^{\frac{k}{2(k+2\alpha)}}$, we have
$$\sup_{g \in \mathcal{H}^{\dagger}_k(\alpha;C)} \mathbb{E}\left[(g(W) - \hat{g}(W))^2\right]^{1/2} \lesssim n^{-\frac{\alpha}{2\alpha+k}} (\log n)^3.$$

Lemma 5, together with Proposition 1, implies the main theoretical result of the paper.

Theorem 6. Under Assumptions i – vi and the following condition on $a, f, \mu$: $a \in \mathcal{H}^{\dagger}_k(\alpha_a; C)$, $f \in \mathcal{H}^{\dagger}_k(\alpha_f; C)$, $\mu \in \mathcal{H}^{\dagger}_k(\alpha_\mu; C)$, with
$$\min\left\{ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_f}{2\alpha_f + k},\ \frac{\alpha_f}{2\alpha_f + k} + \frac{\alpha_\mu}{2\alpha_\mu + k},\ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_\mu}{2\alpha_\mu + k} \right\} > \frac{1}{2} + \epsilon, \quad (7)$$
for $k \le p$ and some arbitrarily small $\epsilon > 0$, if $\hat{a}, \hat{f}, \hat{\mu}$ are respectively the ERMs (6) from FNN-ReLU classes $\mathcal{F}_{nn}(L_a, K_a, B)$, $\mathcal{F}_{nn}(L_f, K_f, B)$, $\mathcal{F}_{nn}(L_\mu, K_\mu, B)$, of which the product of the depth and width satisfies $L_g K_g \asymp n^{\frac{k}{2(k+2\alpha_g)}}$ for $g \in \{a, f, \mu\}$, then the DeepMed estimators $\hat{\tau}_{\mathrm{tot}}$, $\hat{\tau}_{\mathrm{NDE}}(d)$ and $\hat{\tau}_{\mathrm{NIE}}(d)$ computed by Algorithm 1 are semiparametric efficient:
$$\hat{\sigma}^{-1}_{\mathrm{tot}}(\hat{\tau}_{\mathrm{tot}} - \tau_{\mathrm{tot}}),\ \hat{\sigma}^{-1}_{\mathrm{NDE}}(d)(\hat{\tau}_{\mathrm{NDE}}(d) - \tau_{\mathrm{NDE}}(d)),\ \hat{\sigma}^{-1}_{\mathrm{NIE}}(d)(\hat{\tau}_{\mathrm{NIE}}(d) - \tau_{\mathrm{NIE}}(d)) \xrightarrow{L} \mathcal{N}(0, 1),$$
with $N\hat{\sigma}^2_{\mathrm{tot}} \xrightarrow{p} \mathbb{E}[(\mathrm{EIF}_1 - \mathrm{EIF}_0)^2]$, $N\hat{\sigma}^2_{\mathrm{NDE}}(d) \xrightarrow{p} \mathbb{E}[(\mathrm{EIF}_{1,d} - \mathrm{EIF}_{0,d})^2]$, and $N\hat{\sigma}^2_{\mathrm{NIE}}(d) \xrightarrow{p} \mathbb{E}[(\mathrm{EIF}_{d,1} - \mathrm{EIF}_{d,0})^2]$, i.e. $\hat{\sigma}^2_{\mathrm{tot}}$, $\hat{\sigma}^2_{\mathrm{NDE}}$ and $\hat{\sigma}^2_{\mathrm{NIE}}$ are consistent variance estimators.

Remark 7. To unload notation in the above theorem, consider the special case where the smoothness of all the nuisance functions coincides, i.e. $\alpha_a = \alpha_f = \alpha_\mu = \alpha$. Then Condition (7) reduces to $\alpha > k/2 + \epsilon$ for some arbitrarily small $\epsilon > 0$. For example, if the covariates $X$ have dimension $p = 2$ and no low-dimensional structures are imposed on the nuisance functions (i.e. $k \equiv p$), one needs $\alpha > 1$ to ensure semiparametric efficiency of the DeepMed estimators. We emphasize that Lemma 5 and Theorem 6 do not constrain the network sparsity $S$, better reflecting how DNNs are usually used in practice.
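As a small numerical companion to Lemma 5 and Theorem 6 (with hypothetical helper names; the theory only pins down orders of magnitude, not constants), one can check condition (7) and the prescribed depth-width product as follows.

```python
from itertools import combinations

def depth_times_width_order(n, k, alpha):
    """Lemma 5 scaling: choose L*K of order n^(k / (2*(k + 2*alpha)))."""
    return n ** (k / (2 * (k + 2 * alpha)))

def condition_7_holds(alpha_a, alpha_f, alpha_mu, k):
    """Condition (7): every pairwise sum of alpha_g/(2*alpha_g + k),
    g in {a, f, mu}, must exceed 1/2 (up to an arbitrarily small eps)."""
    rates = [a / (2 * a + k) for a in (alpha_a, alpha_f, alpha_mu)]
    return min(r1 + r2 for r1, r2 in combinations(rates, 2)) > 0.5

# Case 3 below has alpha = 1.2 and intrinsic dimension k = 1, so the
# condition holds, consistent with Remark 7's alpha > k/2 criterion:
print(condition_7_holds(1.2, 1.2, 1.2, k=1))              # True
print(depth_times_width_order(n=10_000, k=1, alpha=1.2))  # n^(1/6.8), about 3.9
```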
Theorem 6 advances results on total and decomposition effect estimation with non-sparse DNNs (Farrell et al., 2021, Theorem 1) in terms of (1) weaker smoothness conditions and (2) adapting to certain low-dimensional structures of the nuisance functions. The proof of Lemma 5 follows from a combination of the improved DNN approximation rates obtained in Lu et al. (2021) and Jiao et al. (2021) and a standard DNN metric entropy bound (Suzuki, 2019). We prove Lemma 5 and Theorem 6 in Appendix C for completeness. One weakness of Lemma 5 and Theorem 6, shared by other contemporary works (Chen et al., 2020; Farrell et al., 2021), is the lack of algorithmic/training-process considerations (Chen et al., 2022); see Remark 10 and Appendix E for extended discussions.

Remark 8 (Explicit input-layer regularization). Training DNNs in practice involves hyperparameter tuning, including the depth $L$ and width $K$ in Theorem 6 and others such as the number of epochs. In the synthetic experiments, we consider nuisance functions depending only on a $k$-subset of the $p$-dimensional input. A reasonable heuristic is to add $L_1$-regularization in the input layer of the DNN; the regularization weight is then also a hyperparameter. In practice, we simply use cross-validation to select the hyperparameters that minimize the validation loss. We leave its theoretical justification and the performance of alternative approaches such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019) to future work.

4 Synthetic experiments
In this section and Appendix E, we showcase five synthetic experiments. Since the ground truth is rarely known in real data, we believe synthetic experiments play an equally, if not more, important role than real data. Before describing the experimental setups, we summarize the following key take-home messages:
(a) Compared with the other competing methods, DeepMed exhibits better finite-sample performance in most of our experiments;
(b) Cross-validation for DNN hyperparameter tuning works reasonably well in our experiments;
(c) We find that DeepMed with explicit regularization in the input layer improves performance (see Table A2) when the true nuisance functions have certain low-dimensional structures in their dependence on the covariates. Farrell et al. (2021) warned against blind explicit regularization in DNNs for total effect estimation. Our observation does not contradict Farrell et al. (2021) as (1) the purpose of the input-layer regularization is not to control the sparsity of the DNN architecture and (2) we do not further regularize hidden layers;
(d) The experimental setups for Cases 3 to 5 generate nuisance functions that are nearly infinite-dimensional and close to the boundary of a Hölder ball with a given smoothness exponent (Liu et al., 2020; Li et al., 2005). Thus these synthetic experiments should be better benchmarks than Cases 1 and 2 or the settings in other related works such as Farrell et al. (2021). We hope that these highly nontrivial synthetic experiments are helpful to researchers beyond mediation analysis or causal inference. We share the code for generating these functions as part of the DeepMed package.

We consider a sample with 10,000 i.i.d. observations. The covariates $X = (X_1, \ldots, X_p)^\top$ are independently drawn from the uniform distribution $\mathrm{Uniform}([-1, 1])$.
The outcome $Y$, treatment $D$ and mediator $M$ are generated as follows:
$$D \sim \mathrm{Bernoulli}(s(d(X))), \quad M \sim 0.2D + m(X) + \mathcal{N}(0,1), \quad Y \sim 0.2D + M + y(X) + \mathcal{N}(0,1),$$
where $s(x) := (1 + e^{-x})^{-1}$, and we consider the following three cases to generate the nonlinear functions $d(x), m(x)$ and $y(x)$ in the main text:
• Case 1 (simple functions):
$$d(x) = x_1 x_2 + x_3 x_4 x_5 + \sin x_1, \quad m(x) = 4 \sum_{i=1}^{5} \sin 3x_i, \quad y(x) = (x_1 + x_2)^2 + 5 \sin \sum_{i=1}^{5} x_i.$$
(A NumPy sketch of this generator is given at the end of this setup.)
• Case 2 (composition of simple functions): we simulate more complex interactions among covariates by composing simple functions as follows: $d(x) = d_2 \circ d_1 \circ d_0(x_1, \cdots, x_5)$, with
$$d_0(x_1, \cdots, x_5) = \left( \prod_{i=1}^{2} x_i,\ \prod_{i=3}^{5} x_i,\ \prod_{i=1}^{2} \sin x_i,\ \prod_{i=3}^{5} \sin x_i \right), \quad d_1(a_1, \cdots, a_4) = (\sin(a_1 + a_2), \sin a_2, a_3, a_4),$$
and $d_2(b_1, \cdots, b_4) = 0.5\sin(b_1 + b_2) + 0.5(b_3 + b_4)$; $m(x) = m_1 \circ m_0(x_1, \ldots, x_5)$, with $m_0(x_1, \cdots, x_5) = (\sin x_1, \cdots, \sin x_5)$ and $m_1(a_1, \cdots, a_5) = 5 \sin \sum_{i=1}^{5} a_i$; and $y(x) = y_2 \circ y_1 \circ y_0(x_1, \cdots, x_5)$, with
$$y_0(x_1, \cdots, x_5) = \left( \sin \sum_{i=1}^{2} x_i,\ \sin \sum_{i=3}^{5} x_i,\ \sin \sum_{i=1}^{5} x_i \right), \quad y_1(a_1, a_2, a_3) = (\sin(a_1 + a_2), a_3), \quad y_2(b_1, b_2) = 10 \sin(b_1 + b_2).$$
• Case 3 (Hölder functions): we consider more complex nonlinear functions as follows:
$$d(x) = x_1 x_2 + x_3 x_4 x_5 + 0.5\,\eta(0.2 x_1; \alpha), \quad m(x) = \sum_{i=1}^{5} \eta(0.5 x_i; \alpha), \quad y(x) = x_1 x_2 + 3\,\eta\!\left(0.2 \sum_{i=1}^{5} x_i; \alpha\right),$$
where $\eta(x; \alpha) = \sum_{j \in J,\, l \in \mathbb{Z}} 2^{-j(\alpha + 0.25)} w_{j,l}(x)$ with $J = \{0, 3, 6, 9, 10, 16\}$ and $w_{j,l}(\cdot)$ the D6 father wavelet functions dilated at resolution $j$ and shifted by $l$. By construction, $\eta(x; \alpha) \in \mathcal{H}_1(\alpha; B)$ for some known constant $B > 0$, following Härdle et al. (1998, Theorem 9.6). Here we set $\alpha = 1.2$ and the intrinsic dimension $k = 1$. Thus we expect the DeepMed estimators to be semiparametric efficient. This is indeed the case based on the columns corresponding to Case 3 in Table 1, suggesting that DNNs can be adaptive to certain low-dimensional structures.

Remark 9. The nuisance functions in Cases 3 – 5 (see Appendix E) are less smooth than what has been considered elsewhere, including Farrell et al. (2021), Chen et al. (2020), and even Adcock and Dexter (2021), a paper dedicated to exposing the gap between theoretical approximation rates and DNN practice. These nuisance functions are designed to be near the boundary of a Hölder ball with a given smoothness exponent, as we add wavelets at very high resolution in $\eta(x; \alpha)$. This is the assumption under which most of the known statistical properties of DNNs are developed.

In all the above cases, $\tau_{\mathrm{tot}} = 0.4$ and $\tau_{\mathrm{NDE}}(d) = \tau_{\mathrm{NIE}}(d) = 0.2$ for $d \in \{0, 1\}$. We also consider the cases where the total number of covariates is $p = 20$ or $100$ but only the first five covariates are relevant to $Y$, $M$ and $D$. All simulation results are based on 200 replicates. The sigmoid function is used in the final layer when the response variable is binary. For comparison, we also use the Lasso, random forest (RF) and gradient boosted machine (GBM) to estimate the nuisance functions, and use the true nuisance functions (Oracle) as the benchmark. The Lasso is implemented using the R package "hdm" with a data-driven penalty. The DNN, RF and GBM are implemented using the R packages "keras", "randomForest" and "gbm", respectively. We adopt 3-fold cross-validation to choose the hyperparameters for the DNNs (depth $L$, width $K$, $L_1$-regularization parameter and epochs), RF (number of trees and maximum number of nodes) and GBM (number of trees and depth). We use a completely independent sample for the hyperparameter selection.
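For concreteness, here is a minimal NumPy sketch of the Case 1 generator under the DGP above (the function name and seeding are our own choices, not part of the released package):

```python
import numpy as np

def generate_case1(n=10_000, p=5, seed=0):
    """Simulate (X, D, M, Y) under Case 1; requires p >= 5."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, p))
    d_x = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3] * X[:, 4] + np.sin(X[:, 0])
    m_x = 4.0 * np.sin(3.0 * X[:, :5]).sum(axis=1)
    y_x = (X[:, 0] + X[:, 1]) ** 2 + 5.0 * np.sin(X[:, :5].sum(axis=1))
    D = rng.binomial(1, 1.0 / (1.0 + np.exp(-d_x)))    # D ~ Bernoulli(s(d(X)))
    M = 0.2 * D + m_x + rng.standard_normal(n)         # M = 0.2*D + m(X) + N(0,1)
    Y = 0.2 * D + M + y_x + rng.standard_normal(n)     # Y = 0.2*D + M + y(X) + N(0,1)
    return X, D, M, Y

# The direct path contributes 0.2 and the mediated path 0.2 * 1 = 0.2, so
# tau_tot = 0.4 and tau_NDE(d) = tau_NIE(d) = 0.2, matching the text.
```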
In this paper, we only use one extra dataset to conduct the cross-validation for hyperparameter selection, so our simulation results are conditional on this extra dataset. We use the cross-entropy loss for the binary response and the mean-squared loss for the continuous response. We fix the batch size at 100, and the other hyperparameters for the other methods are set to the default values in their R packages. See Appendix E for more details. We compare the performance of the different methods in terms of the biases, empirical standard errors (SE) and root mean squared errors (RMSE) of the estimates, as well as the coverage probabilities (CP) of their 95% confidence intervals.

When $p = k = 5$ (all covariates are relevant, i.e. no low-dimensional structures), DeepMed has smaller bias and RMSE than the other competing methods, and is only slightly worse than the Oracle. The Lasso has the largest bias and poor CP, as expected, since it does not capture the nonlinearity of the nuisance functions. RF and GBM also have substantial biases, especially in Case 2 with compositions of simple functions. Overall, DeepMed performs better than the competing methods (Table 1). From the empirical distributions, we can also see that the estimates are nearly unbiased and normally distributed in Cases 1-3 (Figures A1-A3). When $p = 20$ or $100$ but only the first five covariates are relevant ($k = 5$), $L_1$-regularization in the input layer drastically improves the performance of DeepMed (Table A2). DeepMed with $L_1$-regularization in the input layer also has smaller bias and RMSE than the other competing methods (Tables A3 and A4). As expected, more precise nuisance function estimates (i.e., smaller validation loss) generally lead to more precise causal effect estimates. The validation losses of the nuisance function estimates from DeepMed are generally much smaller than those of the Lasso, RF and GBM (Tables A5-A7).

Remark 10. Due to space limitations, we defer Cases 4 and 5 to Appendix E, in which DeepMed fails to be semiparametric efficient compared to the Oracle; see the extended discussion in Appendix E. We conjecture this may be due to the implicit regularization of gradient-based training algorithms such as SGD (Table A11) or Adam (Kingma and Ba, 2015) (all simulation results except Table A11), which are used to train the DNNs that estimate the nuisance parameters, instead of exactly solving the ERM (6). Most previous works focus on the benefit of implicit regularization for generalization (Neyshabur, 2017; Bartlett et al., 2020). Yet, implicit regularization might inject implicit bias into causal effect estimates, which could make statistical inference invalid. Such a potential curse of implicit regularization has not been documented in the DNN-based causal inference literature before and exemplifies the value of our synthetic experiments. We believe this is an important open research direction for theoretical results to better capture the empirical performance of DNN-based causal inference methods such as DeepMed.

5 Real data analysis on fairness
As a proof of concept, we use DeepMed and other competing methods to re-analyze the COMPAS algorithm (Dressel and Farid, 2018). In particular, we are interested in the NDE of race $D$ on the recidivism risk (the COMPAS score) $Y$, with the number of prior convictions as the mediator $M$. For race, we mainly focus on the Caucasian population ($D = 0$) and the African-American population ($D = 1$), and exclude the individuals of other ethnicity groups.
The COMPAS score ($Y$) is ordinal, ranging from 1 to 10 (1: lowest risk; 10: highest risk). We also include demographic information (age and gender) as covariates $X$. All the methods find a significantly positive NDE of race on the COMPAS score at $\alpha$-level 0.005 (Table 2; all p-values $< 10^{-7}$), consistent with previous findings (Nabi and Shpitser, 2018). Thus the COMPAS algorithm tends to assign higher recidivism risks to African-Americans than to Caucasians, even when they have the same number of prior convictions. The validation losses of the nuisance function estimates by DeepMed are smaller than those of the other competing methods (Table A8), possibly suggesting smaller biases of the corresponding NDE/NIE estimators. We emphasize that research in machine learning fairness should be held accountable (Bao et al., 2021). Our data analysis is merely a proof of concept that DeepMed works in practice, and the conclusion from our data analysis should not be treated as definitive. We defer the comments on potential issues of unmeasured confounding to Appendix F and another real data analysis to Appendix G.

6 Conclusion and Discussion
In this paper, we proposed DeepMed for semiparametric mediation analysis with DNNs. We established novel statistical properties for DNN-based causal effect estimation that can (1) circumvent sparse DNN architectures and (2) leverage certain low-dimensional structures of the nuisance functions. These results significantly advance our current understanding of DNN-based causal inference, including mediation analysis. Evaluated by our extensive synthetic experiments, DeepMed mostly exhibits improved finite-sample performance over the other competing machine learning methods. But as mentioned in Remark 10, there is still a large gap between statistical guarantees and empirical observations. Therefore an important future direction is to incorporate the training process while investigating the statistical properties, to gain a deeper theoretical understanding of DNN-based causal inference. It is also of future research interest to enable DeepMed to handle unmeasured confounding and more complex path-specific effects (Malinsky et al., 2019; Miles et al., 2020), and to incorporate other hyperparameter tuning strategies that leverage the multiply-robustness property, such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019). Finally, we warn readers that all causal inference methods, including DeepMed, may have negative societal impact if they are used without carefully checking their working assumptions.

Acknowledgement and Disclosure of Funding
The authors thank four anonymous reviewers and one anonymous area chair for helpful comments, Fengnan Gao for some initial discussion on how to incorporate low-dimensional manifold assumptions using DNNs, and Ling Guo for discussion on DNN training. The authors would also like to thank the Department of Statistics and Actuarial Sciences at The University of Hong Kong for providing the high-performance computing servers that supported the numerical experiments in this paper. L. Liu gratefully acknowledges funding support by Natural Science Foundation of China Grants No. 12101397 and No. 12090024, Pujiang National Lab Grant No. P22KN00524, Natural Science Foundation of Shanghai Grant No. 21ZR1431000, Shanghai Science and Technology Commission Grant No. 21JC1402900, Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, and Shanghai Pujiang Program Research Grant No. 20PJ140890.
1. What is the focus and contribution of the paper on causal mediation analysis? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its reliance on synthetic experiments? 4. Do you have any concerns about the method's ability to outperform baseline methods in real-world scenarios? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work proposes DeepMed, a semiparametric causal mediation analysis framework, which focuses on natural direct/indirect effects (NDE/NIE) in the causal analysis domain. By leveraging the second-order bias property of the multiply-robust estimators of NDE/NIE, DeepMed is able to outperform baseline methods on both simulated and real datasets. Strengths And Weaknesses Strengths: The authors provide extensive theoretical analysis of the semiparametric multiply-robust estimators of NDE/NIE, as well as of the statistical properties of DeepMed. Weaknesses: The authors rely on synthetic experiments to provide an in-depth analysis of how DeepMed is able to outperform baselines and perform better causal mediation analysis. Questions N/A Limitations N/A
NIPS
Title DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning Abstract Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in biomedical and social sciences, and also for evaluating machine learning fairness. To reduce bias for estimating Natural Direct and Indirect Effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinitedimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results that our DeepMed method (1) can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low-dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings. 1 Introduction Tremendous progress has been made in this decade on deploying deep neural networks (DNNs) in real-world problems (Krizhevsky et al., 2012; Wolf et al., 2019; Jumper et al., 2021; Brown et al., 2022). Causal inference is no exception. In semiparametric causal inference, a series of seminal works (Chen et al., 2020; Chernozhukov et al., 2020; Farrell et al., 2021) initiated the investigation of ⇤Co-corresponding authors, alphabetical order 36th Conference on Neural Information Processing Systems (NeurIPS 2022). statistical properties of causal effect estimators when the nuisance functions (the outcome regressions and propensity scores) are estimated by DNNs. However, there are a few limitations in the current literature that need to be addressed before the theoretical results can be used to guide practice: (1) Most recent works mainly focus on total effect (Chen et al., 2020; Farrell et al., 2021). In many settings, however, more intricate causal parameters are often of greater interests. In biomedical and social sciences, one is often interested in “mediation analysis” to decompose the total effect into direct and indirect effect to unpack the underlying black-box causal mechanism (Baron and Kenny, 1986). More recently, mediation analysis also percolated into machine learning fairness. For instance, in the context of predicting the recidivism risk, Nabi and Shpitser (2018) argued that, for a “fair” algorithm, sensitive features such as race should have no direct effect on the predicted recidivism risk. If such direct effects can be accurately estimated, one can detect the potential unfairness of a machine learning algorithm. We will revisit such applications in Section 5 and Appendix G. (2) Statistical properties of DNN-based causal estimators in recent works mostly follow from several (recent) results on the convergence rates of DNN-based nonparametric regression estimators (Suzuki, 2019; Schmidt-Hieber, 2020; Tsuji and Suzuki, 2021), with the limitation of relying on sparse DNN architectures. The theoretical properties are in turn evaluated by relatively simple synthetic experiments not designed to generate nearly infinite-dimensional nuisance functions, a setting considered by almost all the above related works. 
The above limitations raise the tantalizing question whether the available statistical guarantees for DNN-based causal inference have practical relevance. In this work, we plan to partially fill these gaps by developing a new method called DeepMed for semiparametric mediation analysis with DNNs. We focus on the Natural Direct/Indirect Effects (NDE/NIE) (Robins and Greenland, 1992; Pearl, 2001) (defined in Section 2.1), but our results can also be applied to more general settings; see Remark 2. The DeepMed estimators leverage the “multiply-robust” property of the efficient influence function (EIF) of NDE/NIE (Tchetgen Tchetgen and Shpitser, 2012; Farbmacher et al., 2022) (see Proposition 1 in Section 2.2), together with the flexibility and superior predictive power of DNNs (see Section 3.1 and Algorithm 1). In particular, we also make the following novel contributions to deepen our understanding of DNN-based semiparametric causal inference: • On the theoretical side, we obtain new results that our DeepMed method can achieve semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and can adapt to certain low-dimensional structures of the nuisance functions (see Section 3.2), thus significantly advancing the existing literature on DNN-based semiparametric causal inference. Non-sparse DNN architecture is more commonly employed in practice (Farrell et al., 2021), and the low-dimensional structures of nuisance functions can help avoid curse-of-dimensionality. These two points, taken together, significantly advance our understanding of the statistical guarantee of DNN-based causal inference. • More importantly, on the empirical side, in Section 4, we designed sophisticated synthetic experiments to simulate nearly infinite-dimensional functions, which are much more complex than those in previous related works (Chen et al., 2020; Farrell et al., 2021; Adcock and Dexter, 2021). We emphasize that these nontrivial experiments could be of independent interest to the theory of deep learning beyond causal inference, to further expose the gap between deep learning theory and practice (Adcock and Dexter, 2021; Gottschling et al., 2020); see Remark 9 for an extended discussion. As a proof of concept, in Section 5 and Appendix G, we also apply DeepMed to re-analyze two real-world datasets on algorithmic fairness and reach similar conclusions to related works. • Finally, a user-friendly R package can be found at https://github.com/siqixu/DeepMed. Making such resources available helps enhance reproducibility, a highly recognized problem in all scientific disciplines, including (causal) machine learning (Pineau et al., 2021; Kaddour et al., 2022). 2 Definition, identification, and estimation of NDE and NIE 2.1 Definition of NDE and NIE Throughout this paper, we denote Y as the primary outcome of interest, D as a binary treatment variable, M as the mediator on the causal pathway from D to Y , and X 2 [0, 1]p (or more generally, compactly supported in Rp) as baseline covariates including all potential confounders. We denote the observed data vector as O ⌘ (X,D,M, Y ). Let M(d) denote the potential outcome for the mediator when setting D = d and Y (d,m) be the potential outcome of Y under D = d and M = m, where d 2 {0, 1} and m is in the support M of M . 
We define the average total (treatment) effect as ⌧tot := E[Y (1,M(1)) Y (0,M(0))], the average NDE of the treatment D on the outcome Y when the mediator takes the natural potential outcome when D = d as ⌧NDE(d) := E[Y (1,M(d)) Y (0,M(d))], and the average NIE of the treatment D on the outcome Y via the mediator M as ⌧NIE(d) := E[Y (d,M(1)) Y (d,M(0))]. We have the trivial decomposition ⌧tot ⌘ ⌧NDE(d) + ⌧NIE(d0) for d 6= d0. In causal mediation analysis, the parameters of interest are ⌧NDE(d) and ⌧NIE(d). 2.2 Semiparametric multiply-robust estimators of NDE/NIE Estimating ⌧NDE(d) and ⌧NIE(d) can be reduced to estimating (d, d0) := E[Y (d,M(d0))] for d, d0 2 {0, 1}. We make the following standard identification assumptions: i. Consistency: if D = d, then M = M(d) for all d 2 {0, 1}; while if D = d and M = m, then Y = Y (d,m) for all d 2 {0, 1} and all m in the support of M . ii. Ignorability: Y (d,m) ? D|X , Y (d,m) ? M |X,D, M(d) ? D|X , and Y (d,m) ? M(d0)|X , almost surely for all d,2 {0, 1} and all m 2 M. The first three conditions are, respectively, no unmeasured treatment-outcome, mediator-outcome and treatment-mediator confounding, whereas the fourth condition is often referred to as the “cross-world” condition. We provide more detailed comments on these four conditions in Appendix A. iii. Positivity: The propensity score a(d|X) ⌘ Pr(D = d|X) 2 (c, C) for some constants 0 < c C < 1, almost surely for all d 2 {0, 1}; f(m|X, d), the conditional density (mass) function of M = m (when M is discrete) given X and D = d, is strictly bounded between [ ¯ ⇢, ⇢̄] for some constants 0 < ¯ ⇢ ⇢̄ < 1 almost surely for all m in M and all d 2 {0, 1}. Under the above assumptions, the causal parameter (d, d0) for d, d0 2 {0, 1} can be identified as either of the following three observed-data functionals: (d, d0) ⌘ E {D = d}f(M |X, d0)Y a(d|X)f(M |X, d) ⌘ E {D = d0} a(d0|X) µ(X, d,M) ⌘ Z µ(x, d,m)f(m|x, d0)p(x) dmdx, (1) where {·} denotes the indicator function, p(x) denotes the marginal density of X , and µ(x, d,m) := E[Y |X = x,D = d,M = m] is the outcome regression model, for which we also make the following standard boundedness assumption: iv. µ(x, d,m) is also strictly bounded between [ R,R] for some constant R > 0. Following the convention in the semiparametric causal inference literature, we call a, f, µ “nuisance functions”. Tchetgen Tchetgen and Shpitser (2012) derived the EIF of (d, d0): EIFd,d0 ⌘ d,d0(O) (d, d0), where d,d0(O) = {D = d} · f(M |X, d0) a(d|X) · f(M |X, d) (Y µ(X, d,M)) + ✓ 1 {D = d 0} a(d0|X) ◆Z m2M µ(X, d,m)f(m|X, d0)dm+ {D = d 0} a(d0|X) µ(X, d,M). (2) The nuisance functions µ(x, d,m), a(d|x) and f(m|x, d) appeared in d,d0(o) are unknown and generally high-dimensional. But with a sample D ⌘ {Oj}Nj=1 of the observed data, based on d,d0(o), one can construct the following generic sample-splitting multiply-robust estimator of (d, d0): e (d, d0) = 1 n X i2Dn e d,d0(Oi), (3) where Dn ⌘ {Oi}ni=1 is a subset of all N data, and e d,d0(o) replaces the unknown nuisance functions a, f, µ in d,d0(o) by some generic estimators ea, ef, eµ computed using the remaining N n nuisance sample data, denoted as D⌫ . Cross-fit is then needed to recover the information lost due to sample splitting; see Algorithm 1. It is clear from (2) that e (d, d0) is a consistent estimator of (d, d0) as long as any two of ea, ef, eµ are consistent estimators of the corresponding true nuisance functions, hence the name “multiply-robust”. Throughout this paper, we take n ⇣ N n and assume: v. 
Any nuisance function estimators are strictly bounded within the respective lower and upper bounds of a, f, µ. To further ease notation, we define: for any d 2 {0, 1}, ra,d := R a,d(x)2dF (x) 1/2 , rf,d := R f,d(x,m)2dF (x,m|d = 0) 1/2 , and rµ,d := R µ,d(x,m)2dF (x,m|d = 0) 1/2 , where a,d(x) := ea(d|x) a(d|x), f,d(x,m) := ef(m|x, d) f(m|x, d) and µ,d(x,m) := eµ(x, d,m) µ(x, d,m) are point-wise estimation errors of the estimated nuisance functions. In defining the above L2-estimation errors, we choose to take expectation with respect to (w.r.t.) the law F (m,x|d = 0) only for convenience, with no loss of generality by Assumptions iii and v. To show the cross-fit version of e (d, d0) is semiparametric efficient for (d, d0), we shall demonstrate under what conditions p n(e (d, d0) (d, d0)) L! N (0,E[EIF2d,d0 ]) (Newey, 1990). The following proposition on the statistical properties of e (d, d0) is a key step towards this objective. Proposition 1. Denote Bias(e (d, d0)) := E[e (d, d0) (d, d0)|D⌫ ] as the bias of e (d, d0) conditional on the nuisance sample D⌫ . Under Assumptions i – v, Bias(e (d, d0)) is of second-order: |Bias(e (d, d0))| . max ⇢ ra,d · rf,d, max d002{0,1} rf,d00 · rµ,d, ra,d · rµ,d . (4) Furthermore, if the RHS of (4) is o(n 1/2), then p n ⇣ e (d, d0) (d, d0) ⌘ = 1 p n nX i=1 ( d,d0(Oi) (d, d 0)) + o(1) d ! N 0,E ⇥ EIF2d,d0 ⇤ . (5) Although the above result is a direct consequence of the EIF d,d0(O), we prove Proposition 1 in Appendix B for completeness. Remark 2. The total effect ⌧tot = (1, 1) (0, 0) can be viewed as a special case, for which d = d0 for (d, d0). Then EIFd,d ⌘ EIFd corresponds to the nonparametric EIF of (d, d) ⌘ (d) ⌘ E[Y (d,M(d))]: EIFd = d(O) (d) with d(O) = {D = d} a(d|X) Y + ✓ 1 {D = d} a(d|X) ◆ µ(X, d), where µ(x, d) := E[Y |X = x,D = d]. Hence all the theoretical results in this paper are applicable to total effect estimation. Our framework can also be applied to all the statistical functionals that satisfy a so-called “mixed-bias” property, characterized recently in Rotnitzky et al. (2021). This class includes the quadratic functional, which is important for uncertainty quantification in machine learning. 3 Estimation and inference of NDE/NIE using DeepMed We now introduce DeepMed, a method for mediation analysis with nuisance functions estimated by DNNs. By leveraging the second-order bias property of the multiply-robust estimators of NDE/NIE (Proposition 1), we will derive statistical properties of DeepMed in this section. The nuisance function estimators by DNNs are denoted as ba, bf, bµ. 3.1 Details on DeepMed First, we introduce the fully-connected feed-forward neural network with the rectified linear units (ReLU) as the activation function for the hidden layer neurons (FNN-ReLU), which will be used to estimate the nuisance functions. Then, we will introduce an estimation procedure using a V -fold cross-fitting with sample-splitting to avoid the Donsker-type empirical-process assumption on the nuisance functions, which, in general, is violated in high-dimensional setup. Finally, we provide the asymptotic statistical properties of the DNN-based estimators of ⌧tot, ⌧NDE(d) and ⌧NIE(d). We denote the ReLU activation function as (u) := max(u, 0) for any u 2 R. Given vectors x, b, we denote b(x) := (x b), with acting on the vector x b component-wise. Let Fnn denote the class of the FNN-ReLU functions Fnn := n f : Rp ! R; f(x) = W (L) b(L) · · · W (1) b(1)(x) o , where is the composition operator, L is the number of layers (i.e. 
depth) of the network, and for l = 1, · · · , L, W (l) is a Kl+1 ⇥ Kl-dimensional weight matrix with Kl being the number of neurons in the l-th layer (i.e. width) of the network, with K1 = p and KL+1 = 1, and b(l) is a Kl-dimensional vector. To avoid notation clutter, we concatenate all the network parameters as ⇥ = (W (l), b(l), l = 1, · · · , L) and simply take K2 = · · · = KL = K. We also assume ⇥ to be bounded: k⇥k1 B for some universal constant B > 0. We may let the dependence on L, K, B explicit by writing Fnn as Fnn(L,K,B). DeepMed estimates ⌧tot, ⌧NDE(d), ⌧NIE(d) by (3), with the nuisance functions a, f, µ estimated using Fnn with the V -fold cross-fitting strategy, summarized in Algorithm 1 below; also see Farbmacher et al. (2022). DeepMed inputs the observed data D ⌘ {Oi}Ni=1 and outputs the estimated total effect b⌧tot, NDE b⌧NDE(d) and NIE b⌧NIE(d), together with their variance estimators b 2tot, b 2NDE(d) and b 2NIE(d). Algorithm 1 DeepMed with V -fold cross-fitting 1: Choose some integer V (usually V 2 {2, 3, · · · , 10}) 2: Split the N observations into V subsamples Iv ⇢ {1, · · · , N} ⌘ [N ] with equal size n = N/V ; 3: for v = 1, · · · , V : do 4: Fit the nuisance functions by DNNs using observations in [N ] \ Iv 5: Compute the nuisance functions in the subsample Iv using the estimated DNNs in step 4 6: Obtain { b d(Oi), b d,d0(Oi)}i2Iv for the subsample Iv based on (2), respectively, with the nuisance functions replaced by their estimates in step 5 7: end for 8: Estimate average potential outcomes by b (d) := 1N NP i=1 b d(Oi), b (d, d0) := 1N NP i=1 b d,d0(Oi) 9: Estimate causal effects by b⌧tot, b⌧NDE(d) and b⌧NIE(d) with b (d) and b (d, d0) 10: Estimate the variances of b⌧tot, b⌧NDE(d) and b⌧NIE(d) by: b 2tot := 1N2 NP i=1 ( b 1(Oi) b 0(Oi))2 1N b⌧ 2 tot; b 2NDE(d) := 1N2 NP i=1 ( b 1,d(Oi) b 0,d(Oi))2 1N b⌧ 2 NDE(d); b 2NIE(d) := 1N2 NP i=1 ( b d,1(Oi) b d,0(Oi))2 1N b⌧ 2 NIE(d) Output: b⌧tot, b⌧NDE(d), b⌧NIE(d), b 2tot, b 2NDE(d) and b 2NIE(d) Remark 3 (Continuous or multi-dimensional mediators). For binary treatment D and continuous or multi-dimensional M , to avoid nonparametric/high-dimensional conditional density estimation, we can rewrite f(m|x,d 0) a(d|x)f(m|x,d) as 1 a(d|x,m) a(d|x,m)(1 a(d|x)) by the Bayes’ rule and the integral w.r.t. f(m|x, d 0) in (2) as E[µ(X, d,M)|X = x,D = d0]. Then we can first estimate µ(x, d,m) by bµ(x, d,m) and in turn estimate E[µ(X, d,M)|X = x,D = d0] by regressing bµ(X, d,M) against (X,D) using the FNN-ReLU class. We mainly consider binary M to avoid unnecessary complications; but see Appendix G for an example in which this strategy is used. Finally, the potential incompatibility between models posited for a(d|x) and a(d|x,m) and the joint distribution of (X,A,M, Y ) is not of great concern under the semiparametric framework because all nuisance functions are estimated nonparametrically; again, see Appendix G for an extended discussion. 3.2 Statistical properties of DeepMed: Non-sparse DNN architecture and low-dimensional structures of the nuisance functions According to Proposition 1, to analyze the statistical properties DeepMed, it is sufficient to control the L2-estimation errors of nuisance function estimates ba, bf, bµ fit by DNNs. To ease presentation, we first study the theoretical guarantees on the L2-estimation error for a generic nuisance function g : W 2 [0, 1]p ! Z 2 R, for which we assume: vi. Z = g(W ) + ⇠, with ⇠ sub-Gaussian with mean zero and independent of W . 
Note that when g corresponds to a, f, µ, (W,Z) corresponds to (X, (D = 1)), ((X,D), (M = 1)) and ((X,D,M), Y ), respectively. We denote the DNN output from the nuisance sample D⌫ as bg. For theoretical results, we consider bg as the following empirical risk minimizer (ERM): bg := arg min ḡ2Fnn(L,K,B) X i2D⌫ (Zi ḡ(Wi)) 2 . (6) To avoid model misspecification, one often assumes g 2 G, where G is some infinite-dimensional function space. A common choice is G = Hp(↵;C), the Hölder ball on the input domain [0, 1]p, with smoothness exponent ↵ and radius C. Hölder space is one of the most well-studied function spaces in statistics and it is convenient to quantify its complexity by a single smoothness parameter ↵; see Appendix C for a review. It is well-known that estimating Hölder functions suffers from curse-of-dimensionality (Stone, 1982). One remedy is to consider the following generalized Hölder space, by imposing certain low-dimensional structures on g: H † k(↵;C) := g(w) = h( w) : h 2 Hk(↵;C), 2 Rk⇥p unknown, k p . Remark 4. The above definition contains g(w) = h(wI), where I ⇢ {1, · · · , p}, as a special case, in which g is assumed to only depend on a subset of the feature vector w. One can easily generalize the above definition to additive models g(w) = Pp j=1 hj(wj) where hj 2 Hkj (↵j ;Cj), allowing even more modeling flexibility. To avoid complications, we only consider the above simpler model. We can show that the ERM estimator bg (6) from the FNN-ReLU class Fnn(L,K,B) attains the optimal estimation rate over H†k(↵;C) up to log factors, by choosing the depth and width appropriately without assuming sparse neural nets. Lemma 5. Under Assumptions iii – vi, if g 2 H†k(↵;C) for k p, with LK ⇣ n k 2(k+2↵) , we have supg2H†k(↵;C) E ⇥ (g(W ) bg(W ))2 ⇤ 1/2 . n ↵2↵+k (log n)3. Lemma 5, together with Proposition 1, implies the main theoretical result of the paper. Theorem 6. Under Assumptions i – vi and the following condition on a, f, µ: a 2 H†k(↵a;C), f 2 H † k(↵f ;C), µ 2 H † k(↵µ;C), with min ⇢ ↵a 2↵a + k + ↵f 2↵f + k , ↵f 2↵f + k + ↵µ 2↵µ + k , ↵a 2↵a + k + ↵µ 2↵µ + k > 1 2 + ✏, (7) for k p and some arbitrarily small ✏ > 0, if ba, bf , bµ are respectively the ERM (6) from FNN-ReLU classes Fnn(La,Ka, B), Fnn(Lf ,Kf , B), Fnn(Lµ,Kµ, B), of which the product of the depth and width satisfies LgKg ⇣ n k 2(k+2↵g) for g 2 {a, f, µ}, then the DeepMed estimators b⌧tot, b⌧NDE(d) and b⌧NIE(d) computed by Algorithm 1 are semiparametric efficient: b 1tot(b⌧tot ⌧tot), b 1NDE(d)(b⌧NDE(d) ⌧NDE(d)), b 1 NIE(d)(b⌧NIE(d) ⌧NIE(d)) L ! N (0, 1), with Nb 2tot p ! E[(EIF1 EIF0)2], Nb 2NDE(d) p ! E[(EIF1,d EIF0,d)2], and Nb 2NIE(d) p ! E[(EIFd,1 EIFd,0)2], i.e. b 2tot, b 2NDE and b 2NIE are consistent variance estimators. Remark 7. To unload notation in the above theorem, consider the special case where the smoothness of all the nuisance functions coincides, i.e. ↵a = ↵f = ↵µ = ↵. Then Condition (7) reduces to ↵ > k/2 + ✏ for some arbitrarily small ✏ > 0. For example, if the covariates X have dimension p = 2 and no low-dimensional structures are imposed on the nuisance functions (i.e. k ⌘ p), one needs ↵ > 1 to ensure semiparametric efficiency of the DeepMed estimators. We emphasize that Lemma 5 and Theorem 6 do not constrain the network sparsity S, better reflecting how DNNs are usually used in practice. 
Theorem 6 advances results on total and decomposition effect estimation with non-sparse DNNs (Farrell et al., 2021, Theorem 1) in terms of (1) weaker smoothness conditions and (2) adapting to certain low-dimensional structures of the nuisance functions. The proof of Lemma 5 follows from a combination of the improved DNN approximation rate obtained in Lu et al. (2021); Jiao et al. (2021) and standard DNN metric entropy bound (Suzuki, 2019). We prove Lemma 5 and Theorem 6 in Appendix C for completeness. One weakness of Lemma 5 and Theorem 6, as well as in other contemporary works (Chen et al., 2020; Farrell et al., 2021), is the lack of algorithmic/training process considerations (Chen et al., 2022); see Remark 10 and Appendix E for extended discussions. Remark 8 (Explicit input-layer regularization). Training DNNs in practice involves hyperparameter tuning, including the depth L and width K in Theorem 6 and others like epochs. In the synthetic experiments, we consider the nuisance functions only depending on a k-subset of p-dimensional input. A reasonable heuristic is to add L1-regularization in the input-layer of the DNN. Then the regularization weight is also a hyperparameter. In practice, we simply use cross-validation to select the hyperparameters that minimize the validation loss. We leave its theoretical justification and the performance of other alternative approaches such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019) to future works. 4 Synthetic experiments In this section and Appendix E, we showcase five synthetic experiments. Since ground truth is rarely known in real data, we believe synthetic experiments play an equally, if not more, important role as real data. Before describing the experimental setups, we garner the following key take-home message: (a) Compared with the other competing methods, DeepMed exhibits better finite-sample performance in most of our experiments; (b) Cross-validation for DNN hyperparameter tuning works reasonably well in our experiments; (c) We find DeepMed with explicit regularization in the input layer improves performance (see Table A2) when the true nuisance functions have certain low-dimensional structures in their dependence on the covariates. Farrell et al. (2021) warned against blind explicit regularization in DNNs for total effect estimation. Our observation does not contradict Farrell et al. (2021) as (1) the purpose of the input-layer regularization is not to control the sparsity of the DNN architecture and (2) we do not further regularize hidden layers; (d) Experimental setups for Cases 3 to 5 generate nuisance functions that are nearly infinitedimensional and close to the boundary of a Hölder ball with a given smoothness exponent (Liu et al., 2020; Li et al., 2005). Thus these synthetic experiments should be better benchmarks than Cases 1 and 2 or settings in other related works such as Farrell et al. (2021). We hope that these highly nontrivial synthetic experiments are helpful to researchers beyond mediation analysis or causal inference. We share the code for generating these functions as a part of the DeepMed package. We consider a sample with 10,000 i.i.d. observations. The covariates X = (X1, ..., Xp)> are independently drawn from uniform distribution Uniform([ 1, 1]). 
The outcome Y , treatment D and mediator M are generated as follows: D ⇠ Bernoulli(s(d(X))),M ⇠ 0.2D +m(X) +N (0, 1), Y ⇠ 0.2D +M + y(X) +N (0, 1), where s(x) := (1 + e x) 1, and we consider the following three cases to generate the nonlinear functions d(x),m(x) and y(x) in the main text: • Case 1 (simple functions): d(x) = x1x2 + x3x4x5 + sinx1,m(x) = 4 5X i=1 sin 3xi, y(x) = (x1 + x2) 2 + 5 sin 5X i=1 xi. • Case 2 (composition of simple functions): we simulate more complex interactions among covariates by composing simple functions as follow: d(x) = d2 d1 d0(x1, · · · , x5), with d0(x1, · · · , x5) = 2Y i=1 xi, 5Y i=3 xi, 2Y i=1 sinxi, 5Y i=3 sinxi ! , d1(a1, · · · , a4) = (sin(a1 + a2), sin a2, a3, a4) , and d2(b1, · · · , b4) = 0.5 sin(b1 + b2) + 0.5(b3 + b4), m(x) = m1 m0(x1, . . . , x5),with m0(x1, · · · , x5) = (sinx1, · · · , sinx5) ,m1(a1, · · · , a5) = 5 sin 5X i=1 ai and y(x) = y2 y1 y0(x1, · · · , x5), with y0(x1, · · · , x5) = sin 2X i=1 xi, sin 5X i=3 xi, sin 5X i=1 xi ! , y1(a1, a2, a3) = (sin(a1 + a2), a3) , and y2(b1, b2) = 10 sin(b1 + b2). • Case 3 (Hölder functions): we consider more complex nonlinear functions as follows: d(x) = x1x2 + x3x4x5 + 0.5⌘(0.2x1;↵),m(x) = 5X i=1 ⌘ (0.5xi;↵) , y(x) = x1x2 + 3⌘ 0.2 5X i=1 xi;↵ ! where ⌘(x;↵) = P j2J,l2Z 2 j(↵+0.25) wj,l(x) with J = {0, 3, 6, 9, 10, 16} and wj,l(·) is the D6 father wavelet functions dilated at resolution j shifted by l. By construction, ⌘(x;↵) 2 H1(↵;B) for some known constant B > 0 following Härdle et al. (1998, Theorem 9.6). Here we set ↵ = 1.2 and the intrinsic dimension k = 1. Thus we expect the DeepMed estimators are semiparametric efficient. It is indeed the case based on the columns corresponding to Case 3 in Table 1, suggesting that DNNs can be adaptive to certain low-dimensional structures. Remark 9. The nuisance functions in Cases 3 – 5 (see Appendix E) are less smooth than what have been considered elsewhere, including Farrell et al. (2021), Chen et al. (2020), and even Adcock and Dexter (2021), a paper dedicated to exposing the gap between theoretical approximation rates and DNN practice. These nuisance functions are designed to be near the boundary of a Hölder ball with a given smoothness exponent as we add wavelets at very high resolution in ⌘(x;↵). This is the assumption under which most of the known statistical properties of DNNs are developed. In all the above cases, ⌧tot = 0.4 and ⌧NDE(d) = ⌧NIE(d) = 0.2 for d 2 {0, 1}. We also consider the cases where the total number of covariates p = 20 and 100 but only the first five covariates are relevant to Y , M and D. All simulation results are based on 200 replicates. The sigmoid function is used in the final layer when the response variable is binary. For comparison, we also use the Lasso, random forest (RF) and gradient boosted machine (GBM) to estimate the nuisance functions, and use the true nuisance functions (Oracle) as the benchmark. The Lasso is implemented using the R package “hdm” with a data-driven penalty. The DNN, RF and GBM are implemented using the R packages “keras”, “randomForest” and “gbm”, respectively. We adopt a 3-fold cross-validation to choose the hyperparameters for DNNs (depth L, width K, L1-regularization parameter and epochs), RF (number of trees and maximum number of nodes) and GBM (numbers of trees and depth). We use a completely independent sample for the hyperparameter selection. 
In this paper, we only use one extra dataset to conduct the cross-validation for hyperparameter selection, so our simulation results are conditional on this extra dataset. We use the cross-entropy loss for the binary response and the mean-squared loss for the continuous response. We fix the batch size at 100, and the remaining hyperparameters of the other methods are set to the default values in their R packages. See Appendix E for more details. We compare the performance of the different methods in terms of the biases, empirical standard errors (SE) and root mean squared errors (RMSE) of the estimates, as well as the coverage probabilities (CP) of their 95% confidence intervals.
When $p = k = 5$ (all covariates are relevant, i.e. no low-dimensional structures), DeepMed has smaller bias and RMSE than the other competing methods, and is only slightly worse than the Oracle. The Lasso has the largest bias and poor CP, as expected, since it does not capture the nonlinearity of the nuisance functions. RF and GBM also have substantial biases, especially in Case 2 with compositions of simple functions. Overall, DeepMed performs better than the competing methods (Table 1). From the empirical distributions, we can also see that the estimates are nearly unbiased and normally distributed in Cases 1-3 (Figures A1-A3). When $p = 20$ or $100$ but only the first five covariates are relevant ($k = 5$), $L_1$-regularization in the input layer drastically improves the performance of DeepMed (Table A2). DeepMed with $L_1$-regularization in the input layer also has smaller bias and RMSE than the other competing methods (Tables A3 and A4). As expected, more precise nuisance function estimates (i.e., smaller validation loss) generally lead to more precise causal effect estimates. The validation losses of the nuisance function estimates from DeepMed are generally much smaller than those from the Lasso, RF and GBM (Tables A5-A7).
Remark 10. Due to space limitations, we defer Cases 4 and 5 to Appendix E, in which DeepMed fails to be semiparametric efficient compared to the Oracle; see the extended discussion in Appendix E. We conjecture this may be due to the implicit regularization of gradient-based training algorithms such as SGD (Table A11) or Adam (Kingma and Ba, 2015) (all simulation results except Table A11), which are used to train the DNNs that estimate the nuisance parameters, instead of actually solving the ERM (6). Most previous works focus on the benefit of implicit regularization for generalization (Neyshabur, 2017; Bartlett et al., 2020). Yet implicit regularization might inject implicit bias into causal effect estimates, which could make statistical inference invalid. Such a potential curse of implicit regularization has not been documented in the DNN-based causal inference literature before and exemplifies the value of our synthetic experiments. We believe this is an important open research direction for theoretical results to better capture the empirical performance of DNN-based causal inference methods such as DeepMed.
5 Real data analysis on fairness
As a proof of concept, we use DeepMed and the other competing methods to re-analyze the COMPAS algorithm (Dressel and Farid, 2018). In particular, we are interested in the NDE of race $D$ on the recidivism risk (or the COMPAS score) $Y$, with the number of prior convictions as the mediator $M$. For race, we mainly focus on the Caucasian population ($D = 0$) and the African-American population ($D = 1$), and exclude individuals of other ethnicity groups.
The COMPAS score ($Y$) is ordinal, ranging from 1 to 10 (1: lowest risk; 10: highest risk). We also include the demographic information (age and gender) as covariates $X$. All the methods find a significant positive NDE of race on the COMPAS score at $\alpha$-level 0.005 (Table 2; all p-values $< 10^{-7}$), consistent with previous findings (Nabi and Shpitser, 2018). Thus the COMPAS algorithm tends to assign higher recidivism risks to African-Americans than to Caucasians, even when they have the same number of prior convictions. The validation losses of the nuisance function estimates by DeepMed are smaller than those of the other competing methods (Table A8), possibly suggesting smaller biases of the corresponding NDE/NIE estimators. We emphasize that research in machine learning fairness should be held accountable (Bao et al., 2021). Our data analysis is merely a proof of concept that DeepMed works in practice, and the conclusions from our data analysis should not be treated as definitive. We defer comments on potential issues of unmeasured confounding to Appendix F and another real data analysis to Appendix G.
6 Conclusion and Discussion
In this paper, we proposed DeepMed for semiparametric mediation analysis with DNNs. We established novel statistical properties for DNN-based causal effect estimation that can (1) circumvent sparse DNN architectures and (2) leverage certain low-dimensional structures of the nuisance functions. These results significantly advance our current understanding of DNN-based causal inference, including mediation analysis. Evaluated by our extensive synthetic experiments, DeepMed mostly exhibits improved finite-sample performance over the other competing machine learning methods. But as mentioned in Remark 10, there is still a large gap between statistical guarantees and empirical observations. Therefore, an important future direction is to incorporate the training process while investigating the statistical properties, to reach a deeper theoretical understanding of DNN-based causal inference. It is also of future research interest to enable DeepMed to handle unmeasured confounding and more complex path-specific effects (Malinsky et al., 2019; Miles et al., 2020), and to incorporate other hyperparameter tuning strategies that leverage the multiply-robustness property, such as the minimax criterion (Robins et al., 2020; Cui and Tchetgen Tchetgen, 2019). Finally, we warn readers that all causal inference methods, including DeepMed, may have negative societal impact if they are used without carefully checking their working assumptions.
Acknowledgement and Disclosure of Funding
The authors thank four anonymous reviewers and one anonymous area chair for helpful comments, Fengnan Gao for initial discussion on how to incorporate low-dimensional manifold assumptions using DNNs, and Ling Guo for discussion on DNN training. The authors would also like to thank the Department of Statistics and Actuarial Sciences at The University of Hong Kong for providing high-performance computing servers that supported the numerical experiments in this paper. L. Liu gratefully acknowledges funding support by Natural Science Foundation of China Grants No. 12101397 and No. 12090024, Pujiang National Lab Grant No. P22KN00524, Natural Science Foundation of Shanghai Grant No. 21ZR1431000, Shanghai Science and Technology Commission Grant No. 21JC1402900, Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, and Shanghai Pujiang Program Research Grant No. 20PJ140890.
1. What is the focus and contribution of the paper on reducing bias in semiparametric mediation analysis? 2. What are the strengths of the proposed approach, particularly in terms of neural representation? 3. Do you have any concerns or questions regarding the paper's conjecture about the impact of the Adam optimizer on efficiency? 4. What are the limitations of the proposed method, and how do they compare to other approaches in the field?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work proposes DeepMed, which reduces bias in semiparametric mediation analysis using the power of neural networks. It relaxes the sparsity assumptions of prior work on the theoretical side, which gives more flexibility and expressivity to the neural networks, and validates this with synthetic and real-world data experiments. Strengths And Weaknesses The paper is well written. Questions In Remark 10, the authors conjecture that the reason DeepMed is not semiparametric efficient in Cases 4 and 5 is the implicit regularization of Adam. It would be interesting to test this conjecture by running these experiments with plain SGD as the optimizer and checking whether DeepMed becomes efficient. Limitations yes
NIPS
Title DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning
Abstract Causal mediation analysis can unpack the black box of causality and is therefore a powerful tool for disentangling causal pathways in the biomedical and social sciences, and also for evaluating machine learning fairness. To reduce bias in estimating Natural Direct and Indirect Effects in mediation analysis, we propose a new method called DeepMed that uses deep neural networks (DNNs) to cross-fit the infinite-dimensional nuisance functions in the efficient influence functions. We obtain novel theoretical results that our DeepMed method (1) can achieve the semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and (2) can adapt to certain low-dimensional structures of the nuisance functions, significantly advancing the existing literature on DNN-based semiparametric causal inference. Extensive synthetic experiments are conducted to support our findings and also to expose the gap between theory and practice. As a proof of concept, we apply DeepMed to analyze two real datasets on machine learning fairness and reach conclusions consistent with previous findings.
*Co-corresponding authors, alphabetical order
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction
Tremendous progress has been made in this decade on deploying deep neural networks (DNNs) in real-world problems (Krizhevsky et al., 2012; Wolf et al., 2019; Jumper et al., 2021; Brown et al., 2022). Causal inference is no exception. In semiparametric causal inference, a series of seminal works (Chen et al., 2020; Chernozhukov et al., 2020; Farrell et al., 2021) initiated the investigation of statistical properties of causal effect estimators when the nuisance functions (the outcome regressions and propensity scores) are estimated by DNNs. However, there are a few limitations in the current literature that need to be addressed before the theoretical results can be used to guide practice:
(1) Most recent works mainly focus on the total effect (Chen et al., 2020; Farrell et al., 2021). In many settings, however, more intricate causal parameters are often of greater interest. In the biomedical and social sciences, one is often interested in "mediation analysis" to decompose the total effect into direct and indirect effects, to unpack the underlying black-box causal mechanism (Baron and Kenny, 1986). More recently, mediation analysis has also percolated into machine learning fairness. For instance, in the context of predicting recidivism risk, Nabi and Shpitser (2018) argued that, for a "fair" algorithm, sensitive features such as race should have no direct effect on the predicted recidivism risk. If such direct effects can be accurately estimated, one can detect the potential unfairness of a machine learning algorithm. We will revisit such applications in Section 5 and Appendix G.
(2) Statistical properties of DNN-based causal estimators in recent works mostly follow from several (recent) results on the convergence rates of DNN-based nonparametric regression estimators (Suzuki, 2019; Schmidt-Hieber, 2020; Tsuji and Suzuki, 2021), with the limitation of relying on sparse DNN architectures. The theoretical properties are in turn evaluated by relatively simple synthetic experiments not designed to generate nearly infinite-dimensional nuisance functions, a setting considered by almost all the above related works.
The above limitations raise the tantalizing question of whether the available statistical guarantees for DNN-based causal inference have practical relevance. In this work, we plan to partially fill these gaps by developing a new method called DeepMed for semiparametric mediation analysis with DNNs. We focus on the Natural Direct/Indirect Effects (NDE/NIE) (Robins and Greenland, 1992; Pearl, 2001) (defined in Section 2.1), but our results can also be applied to more general settings; see Remark 2. The DeepMed estimators leverage the "multiply-robust" property of the efficient influence function (EIF) of the NDE/NIE (Tchetgen Tchetgen and Shpitser, 2012; Farbmacher et al., 2022) (see Proposition 1 in Section 2.2), together with the flexibility and superior predictive power of DNNs (see Section 3.1 and Algorithm 1). In particular, we also make the following novel contributions to deepen our understanding of DNN-based semiparametric causal inference:
• On the theoretical side, we obtain new results showing that our DeepMed method can achieve the semiparametric efficiency bound without imposing sparsity constraints on the DNN architecture and can adapt to certain low-dimensional structures of the nuisance functions (see Section 3.2), thus significantly advancing the existing literature on DNN-based semiparametric causal inference. Non-sparse DNN architectures are more commonly employed in practice (Farrell et al., 2021), and the low-dimensional structures of nuisance functions can help avoid the curse of dimensionality. These two points, taken together, significantly advance our understanding of the statistical guarantees of DNN-based causal inference.
• More importantly, on the empirical side, in Section 4 we design sophisticated synthetic experiments to simulate nearly infinite-dimensional functions, which are much more complex than those in previous related works (Chen et al., 2020; Farrell et al., 2021; Adcock and Dexter, 2021). We emphasize that these nontrivial experiments could be of independent interest to the theory of deep learning beyond causal inference, to further expose the gap between deep learning theory and practice (Adcock and Dexter, 2021; Gottschling et al., 2020); see Remark 9 for an extended discussion. As a proof of concept, in Section 5 and Appendix G, we also apply DeepMed to re-analyze two real-world datasets on algorithmic fairness and reach conclusions similar to related works.
• Finally, a user-friendly R package can be found at https://github.com/siqixu/DeepMed. Making such resources available helps enhance reproducibility, a highly recognized problem in all scientific disciplines, including (causal) machine learning (Pineau et al., 2021; Kaddour et al., 2022).
2 Definition, identification, and estimation of NDE and NIE
2.1 Definition of NDE and NIE
Throughout this paper, we denote $Y$ as the primary outcome of interest, $D$ as a binary treatment variable, $M$ as the mediator on the causal pathway from $D$ to $Y$, and $X \in [0,1]^p$ (or, more generally, compactly supported in $\mathbb{R}^p$) as baseline covariates including all potential confounders. We denote the observed data vector as $O \equiv (X, D, M, Y)$. Let $M(d)$ denote the potential outcome of the mediator when setting $D = d$, and $Y(d, m)$ the potential outcome of $Y$ under $D = d$ and $M = m$, where $d \in \{0, 1\}$ and $m$ is in the support $\mathcal{M}$ of $M$.
We define the average total (treatment) effect as $\tau_{tot} := E[Y(1, M(1)) - Y(0, M(0))]$, the average NDE of the treatment $D$ on the outcome $Y$, when the mediator takes the natural potential outcome under $D = d$, as $\tau_{NDE}(d) := E[Y(1, M(d)) - Y(0, M(d))]$, and the average NIE of the treatment $D$ on the outcome $Y$ via the mediator $M$ as $\tau_{NIE}(d) := E[Y(d, M(1)) - Y(d, M(0))]$. We have the trivial decomposition $\tau_{tot} \equiv \tau_{NDE}(d) + \tau_{NIE}(d')$ for $d \neq d'$. In causal mediation analysis, the parameters of interest are $\tau_{NDE}(d)$ and $\tau_{NIE}(d)$.
2.2 Semiparametric multiply-robust estimators of NDE/NIE
Estimating $\tau_{NDE}(d)$ and $\tau_{NIE}(d)$ can be reduced to estimating $\psi(d, d') := E[Y(d, M(d'))]$ for $d, d' \in \{0, 1\}$. We make the following standard identification assumptions:
i. Consistency: if $D = d$, then $M = M(d)$ for all $d \in \{0,1\}$; while if $D = d$ and $M = m$, then $Y = Y(d, m)$ for all $d \in \{0,1\}$ and all $m$ in the support of $M$.
ii. Ignorability: $Y(d,m) \perp D \mid X$, $Y(d,m) \perp M \mid X, D$, $M(d) \perp D \mid X$, and $Y(d,m) \perp M(d') \mid X$, almost surely for all $d, d' \in \{0,1\}$ and all $m \in \mathcal{M}$. The first three conditions are, respectively, no unmeasured treatment-outcome, mediator-outcome and treatment-mediator confounding, whereas the fourth condition is often referred to as the "cross-world" condition. We provide more detailed comments on these four conditions in Appendix A.
iii. Positivity: The propensity score $a(d|X) \equiv \Pr(D = d \mid X) \in (c, C)$ for some constants $0 < c \le C < 1$, almost surely for all $d \in \{0,1\}$; $f(m|X, d)$, the conditional density (mass) function of $M = m$ (when $M$ is discrete) given $X$ and $D = d$, is strictly bounded between $[\underline{\rho}, \bar{\rho}]$ for some constants $0 < \underline{\rho} \le \bar{\rho} < \infty$, almost surely for all $m$ in $\mathcal{M}$ and all $d \in \{0,1\}$.
Under the above assumptions, the causal parameter $\psi(d, d')$ for $d, d' \in \{0,1\}$ can be identified as any of the following three observed-data functionals:
$$\psi(d, d') \equiv E\left[\frac{\mathbb{1}\{D = d\}\, f(M|X, d')}{a(d|X)\, f(M|X, d)}\, Y\right] \equiv E\left[\frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X, d, M)\right] \equiv \int \mu(x, d, m) f(m|x, d') p(x)\, dm\, dx, \quad (1)$$
where $\mathbb{1}\{\cdot\}$ denotes the indicator function, $p(x)$ denotes the marginal density of $X$, and $\mu(x, d, m) := E[Y \mid X = x, D = d, M = m]$ is the outcome regression model, for which we also make the following standard boundedness assumption:
iv. $\mu(x, d, m)$ is strictly bounded between $[-R, R]$ for some constant $R > 0$.
Following the convention in the semiparametric causal inference literature, we call $a, f, \mu$ "nuisance functions". Tchetgen Tchetgen and Shpitser (2012) derived the EIF of $\psi(d, d')$: $\mathrm{EIF}_{d,d'} \equiv \psi_{d,d'}(O) - \psi(d, d')$, where
$$\psi_{d,d'}(O) = \frac{\mathbb{1}\{D = d\}\, f(M|X, d')}{a(d|X)\, f(M|X, d)} \big(Y - \mu(X, d, M)\big) + \left(1 - \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\right) \int_{m \in \mathcal{M}} \mu(X, d, m) f(m|X, d')\, dm + \frac{\mathbb{1}\{D = d'\}}{a(d'|X)}\, \mu(X, d, M). \quad (2)$$
The nuisance functions $\mu(x, d, m)$, $a(d|x)$ and $f(m|x, d)$ appearing in $\psi_{d,d'}(o)$ are unknown and generally high-dimensional. But with a sample $\mathcal{D} \equiv \{O_j\}_{j=1}^N$ of the observed data, based on $\psi_{d,d'}(o)$, one can construct the following generic sample-splitting multiply-robust estimator of $\psi(d, d')$:
$$\tilde{\psi}(d, d') = \frac{1}{n} \sum_{i \in \mathcal{D}_n} \tilde{\psi}_{d,d'}(O_i), \quad (3)$$
where $\mathcal{D}_n \equiv \{O_i\}_{i=1}^n$ is a subset of all $N$ data points, and $\tilde{\psi}_{d,d'}(o)$ replaces the unknown nuisance functions $a, f, \mu$ in $\psi_{d,d'}(o)$ by generic estimators $\tilde{a}, \tilde{f}, \tilde{\mu}$ computed using the remaining $N - n$ nuisance sample data, denoted $\mathcal{D}_\nu$. Cross-fitting is then needed to recover the information lost due to sample splitting; see Algorithm 1. It is clear from (2) that $\tilde{\psi}(d, d')$ is a consistent estimator of $\psi(d, d')$ as long as any two of $\tilde{a}, \tilde{f}, \tilde{\mu}$ are consistent estimators of the corresponding true nuisance functions, hence the name "multiply-robust". Throughout this paper, we take $n \asymp N - n$ and assume:
v. Any nuisance function estimators are strictly bounded within the respective lower and upper bounds of $a, f, \mu$.
To further ease notation, we define, for any $d \in \{0,1\}$: $r_{a,d} := \big(\int \Delta_{a,d}(x)^2\, dF(x)\big)^{1/2}$, $r_{f,d} := \big(\int \Delta_{f,d}(x,m)^2\, dF(x,m \mid d=0)\big)^{1/2}$, and $r_{\mu,d} := \big(\int \Delta_{\mu,d}(x,m)^2\, dF(x,m \mid d=0)\big)^{1/2}$, where $\Delta_{a,d}(x) := \tilde{a}(d|x) - a(d|x)$, $\Delta_{f,d}(x,m) := \tilde{f}(m|x,d) - f(m|x,d)$ and $\Delta_{\mu,d}(x,m) := \tilde{\mu}(x,d,m) - \mu(x,d,m)$ are the point-wise estimation errors of the estimated nuisance functions. In defining the above $L_2$-estimation errors, we choose to take the expectation with respect to (w.r.t.) the law $F(m, x \mid d = 0)$ only for convenience, with no loss of generality by Assumptions iii and v. To show that the cross-fit version of $\tilde{\psi}(d, d')$ is semiparametric efficient for $\psi(d, d')$, we shall demonstrate under what conditions $\sqrt{n}(\tilde{\psi}(d,d') - \psi(d,d')) \xrightarrow{\mathcal{L}} N(0, E[\mathrm{EIF}_{d,d'}^2])$ (Newey, 1990). The following proposition on the statistical properties of $\tilde{\psi}(d, d')$ is a key step towards this objective.
Proposition 1. Denote by $\mathrm{Bias}(\tilde{\psi}(d,d')) := E[\tilde{\psi}(d,d') - \psi(d,d') \mid \mathcal{D}_\nu]$ the bias of $\tilde{\psi}(d,d')$ conditional on the nuisance sample $\mathcal{D}_\nu$. Under Assumptions i - v, $\mathrm{Bias}(\tilde{\psi}(d,d'))$ is of second order:
$$|\mathrm{Bias}(\tilde{\psi}(d,d'))| \lesssim \max\Big\{ r_{a,d} \cdot r_{f,d},\ \max_{d'' \in \{0,1\}} r_{f,d''} \cdot r_{\mu,d},\ r_{a,d} \cdot r_{\mu,d} \Big\}. \quad (4)$$
Furthermore, if the RHS of (4) is $o(n^{-1/2})$, then
$$\sqrt{n}\big(\tilde{\psi}(d, d') - \psi(d, d')\big) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \big(\psi_{d,d'}(O_i) - \psi(d, d')\big) + o_P(1) \xrightarrow{d} N\big(0, E[\mathrm{EIF}_{d,d'}^2]\big). \quad (5)$$
Although the above result is a direct consequence of the EIF $\psi_{d,d'}(O)$, we prove Proposition 1 in Appendix B for completeness.
Remark 2. The total effect $\tau_{tot} = \psi(1,1) - \psi(0,0)$ can be viewed as a special case, for which $d = d'$ in $\psi(d, d')$. Then $\mathrm{EIF}_{d,d} \equiv \mathrm{EIF}_d$ corresponds to the nonparametric EIF of $\psi(d, d) \equiv \psi(d) \equiv E[Y(d, M(d))]$: $\mathrm{EIF}_d = \psi_d(O) - \psi(d)$ with
$$\psi_d(O) = \frac{\mathbb{1}\{D = d\}}{a(d|X)}\, Y + \left(1 - \frac{\mathbb{1}\{D = d\}}{a(d|X)}\right) \mu(X, d),$$
where $\mu(x, d) := E[Y \mid X = x, D = d]$. Hence all the theoretical results in this paper are applicable to total effect estimation. Our framework can also be applied to all statistical functionals that satisfy a so-called "mixed-bias" property, characterized recently in Rotnitzky et al. (2021). This class includes the quadratic functional, which is important for uncertainty quantification in machine learning.
3 Estimation and inference of NDE/NIE using DeepMed
We now introduce DeepMed, a method for mediation analysis with nuisance functions estimated by DNNs. By leveraging the second-order bias property of the multiply-robust estimators of NDE/NIE (Proposition 1), we derive the statistical properties of DeepMed in this section. The DNN-based nuisance function estimators are denoted $\hat{a}, \hat{f}, \hat{\mu}$.
3.1 Details on DeepMed
First, we introduce the fully-connected feed-forward neural network with rectified linear units (ReLU) as the activation function for the hidden-layer neurons (FNN-ReLU), which will be used to estimate the nuisance functions. Then, we introduce an estimation procedure using $V$-fold cross-fitting with sample splitting to avoid Donsker-type empirical-process assumptions on the nuisance functions, which, in general, are violated in high-dimensional setups. Finally, we provide the asymptotic statistical properties of the DNN-based estimators of $\tau_{tot}$, $\tau_{NDE}(d)$ and $\tau_{NIE}(d)$.
We denote the ReLU activation function as $\sigma(u) := \max(u, 0)$ for any $u \in \mathbb{R}$. Given vectors $x, b$, we denote $\sigma_b(x) := \sigma(x - b)$, with $\sigma$ acting on the vector $x - b$ component-wise. Let $\mathcal{F}_{nn}$ denote the class of FNN-ReLU functions
$$\mathcal{F}_{nn} := \big\{ f : \mathbb{R}^p \to \mathbb{R};\ f(x) = W^{(L)} \sigma_{b^{(L)}} \circ \cdots \circ W^{(1)} \sigma_{b^{(1)}}(x) \big\}.$$
Here $\circ$ is the composition operator, $L$ is the number of layers (i.e. the depth) of the network, and for $l = 1, \ldots, L$, $W^{(l)}$ is a $K_{l+1} \times K_l$-dimensional weight matrix, with $K_l$ the number of neurons in the $l$-th layer (i.e. the width), $K_1 = p$ and $K_{L+1} = 1$, and $b^{(l)}$ is a $K_l$-dimensional vector. To avoid notational clutter, we concatenate all the network parameters as $\Theta = (W^{(l)}, b^{(l)},\ l = 1, \ldots, L)$ and simply take $K_2 = \cdots = K_L = K$. We also assume $\Theta$ to be bounded: $\|\Theta\|_\infty \le B$ for some universal constant $B > 0$. We may make the dependence on $L, K, B$ explicit by writing $\mathcal{F}_{nn}$ as $\mathcal{F}_{nn}(L, K, B)$.
DeepMed estimates $\tau_{tot}$, $\tau_{NDE}(d)$, $\tau_{NIE}(d)$ by (3), with the nuisance functions $a, f, \mu$ estimated over $\mathcal{F}_{nn}$ using the $V$-fold cross-fitting strategy summarized in Algorithm 1 below; see also Farbmacher et al. (2022). DeepMed takes the observed data $\mathcal{D} \equiv \{O_i\}_{i=1}^N$ as input and outputs the estimated total effect $\hat{\tau}_{tot}$, NDE $\hat{\tau}_{NDE}(d)$ and NIE $\hat{\tau}_{NIE}(d)$, together with their variance estimators $\hat{\sigma}^2_{tot}$, $\hat{\sigma}^2_{NDE}(d)$ and $\hat{\sigma}^2_{NIE}(d)$.
Algorithm 1 DeepMed with $V$-fold cross-fitting
1: Choose some integer $V$ (usually $V \in \{2, 3, \ldots, 10\}$)
2: Split the $N$ observations into $V$ subsamples $I_v \subset \{1, \ldots, N\} \equiv [N]$ of equal size $n = N/V$
3: for $v = 1, \ldots, V$ do
4: Fit the nuisance functions by DNNs using the observations in $[N] \setminus I_v$
5: Compute the nuisance functions on the subsample $I_v$ using the DNNs estimated in step 4
6: Obtain $\{\hat{\psi}_d(O_i), \hat{\psi}_{d,d'}(O_i)\}_{i \in I_v}$ for the subsample $I_v$ based on (2), with the nuisance functions replaced by their estimates from step 5
7: end for
8: Estimate the average potential outcomes by $\hat{\psi}(d) := \frac{1}{N}\sum_{i=1}^N \hat{\psi}_d(O_i)$ and $\hat{\psi}(d, d') := \frac{1}{N}\sum_{i=1}^N \hat{\psi}_{d,d'}(O_i)$
9: Estimate the causal effects $\hat{\tau}_{tot}$, $\hat{\tau}_{NDE}(d)$ and $\hat{\tau}_{NIE}(d)$ from $\hat{\psi}(d)$ and $\hat{\psi}(d, d')$
10: Estimate the variances of $\hat{\tau}_{tot}$, $\hat{\tau}_{NDE}(d)$ and $\hat{\tau}_{NIE}(d)$ by: $\hat{\sigma}^2_{tot} := \frac{1}{N^2}\sum_{i=1}^N (\hat{\psi}_1(O_i) - \hat{\psi}_0(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{tot}$; $\hat{\sigma}^2_{NDE}(d) := \frac{1}{N^2}\sum_{i=1}^N (\hat{\psi}_{1,d}(O_i) - \hat{\psi}_{0,d}(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{NDE}(d)$; $\hat{\sigma}^2_{NIE}(d) := \frac{1}{N^2}\sum_{i=1}^N (\hat{\psi}_{d,1}(O_i) - \hat{\psi}_{d,0}(O_i))^2 - \frac{1}{N}\hat{\tau}^2_{NIE}(d)$
Output: $\hat{\tau}_{tot}$, $\hat{\tau}_{NDE}(d)$, $\hat{\tau}_{NIE}(d)$, $\hat{\sigma}^2_{tot}$, $\hat{\sigma}^2_{NDE}(d)$ and $\hat{\sigma}^2_{NIE}(d)$
Remark 3 (Continuous or multi-dimensional mediators). For binary treatment $D$ and continuous or multi-dimensional $M$, to avoid nonparametric/high-dimensional conditional density estimation, we can rewrite $\frac{f(m|x,d')}{a(d|x) f(m|x,d)}$ as $\frac{1 - a(d|x,m)}{a(d|x,m)(1 - a(d|x))}$ by Bayes' rule, and the integral w.r.t. $f(m|x, d')$ in (2) as $E[\mu(X, d, M) \mid X = x, D = d']$. Then we can first estimate $\mu(x, d, m)$ by $\hat{\mu}(x, d, m)$ and in turn estimate $E[\mu(X, d, M) \mid X = x, D = d']$ by regressing $\hat{\mu}(X, d, M)$ against $(X, D)$ using the FNN-ReLU class. We mainly consider binary $M$ to avoid unnecessary complications; but see Appendix G for an example in which this strategy is used. Finally, the potential incompatibility between the models posited for $a(d|x)$ and $a(d|x, m)$ and the joint distribution of $(X, D, M, Y)$ is not of great concern under the semiparametric framework, because all nuisance functions are estimated nonparametrically; again, see Appendix G for an extended discussion.
3.2 Statistical properties of DeepMed: non-sparse DNN architecture and low-dimensional structures of the nuisance functions
According to Proposition 1, to analyze the statistical properties of DeepMed, it suffices to control the $L_2$-estimation errors of the nuisance function estimates $\hat{a}, \hat{f}, \hat{\mu}$ fit by DNNs. To ease presentation, we first study the theoretical guarantees on the $L_2$-estimation error for a generic nuisance function $g : W \in [0,1]^p \mapsto Z \in \mathbb{R}$, for which we assume:
vi. $Z = g(W) + \xi$, with $\xi$ sub-Gaussian with mean zero and independent of $W$.
Note that when $g$ corresponds to $a, f, \mu$, $(W, Z)$ corresponds to $(X, \mathbb{1}(D = 1))$, $((X, D), \mathbb{1}(M = 1))$ and $((X, D, M), Y)$, respectively. We denote the DNN output from the nuisance sample $\mathcal{D}_\nu$ as $\hat{g}$. For the theoretical results, we consider $\hat{g}$ as the following empirical risk minimizer (ERM):
$$\hat{g} := \arg\min_{\bar{g} \in \mathcal{F}_{nn}(L,K,B)} \sum_{i \in \mathcal{D}_\nu} \big(Z_i - \bar{g}(W_i)\big)^2. \quad (6)$$
To avoid model misspecification, one often assumes $g \in \mathcal{G}$, where $\mathcal{G}$ is some infinite-dimensional function space. A common choice is $\mathcal{G} = \mathcal{H}_p(\alpha; C)$, the Hölder ball on the input domain $[0,1]^p$ with smoothness exponent $\alpha$ and radius $C$. The Hölder space is one of the most well-studied function spaces in statistics, and it is convenient to quantify its complexity by a single smoothness parameter $\alpha$; see Appendix C for a review. It is well known that estimating Hölder functions suffers from the curse of dimensionality (Stone, 1982). One remedy is to consider the following generalized Hölder space, which imposes certain low-dimensional structure on $g$:
$$\mathcal{H}_k^\dagger(\alpha; C) := \big\{ g(w) = h(\Lambda w) : h \in \mathcal{H}_k(\alpha; C),\ \Lambda \in \mathbb{R}^{k \times p} \text{ unknown},\ k \le p \big\}.$$
Remark 4. The above definition contains $g(w) = h(w_I)$, where $I \subset \{1, \ldots, p\}$, as a special case, in which $g$ is assumed to depend only on a subset of the feature vector $w$. One can easily generalize the above definition to additive models $g(w) = \sum_{j=1}^p h_j(w_j)$, where $h_j \in \mathcal{H}_{k_j}(\alpha_j; C_j)$, allowing even more modeling flexibility. To avoid complications, we only consider the simpler model above.
We can show that the ERM estimator $\hat{g}$ in (6) from the FNN-ReLU class $\mathcal{F}_{nn}(L, K, B)$ attains the optimal estimation rate over $\mathcal{H}_k^\dagger(\alpha; C)$ up to log factors, by choosing the depth and width appropriately and without assuming sparse neural nets.
Lemma 5. Under Assumptions iii - vi, if $g \in \mathcal{H}_k^\dagger(\alpha; C)$ for $k \le p$, with $LK \asymp n^{\frac{k}{2(k+2\alpha)}}$, we have
$$\sup_{g \in \mathcal{H}_k^\dagger(\alpha; C)} E\big[(g(W) - \hat{g}(W))^2\big]^{1/2} \lesssim n^{-\frac{\alpha}{2\alpha+k}} (\log n)^3.$$
Lemma 5, together with Proposition 1, implies the main theoretical result of the paper.
Theorem 6. Suppose Assumptions i - vi and the following condition on $a, f, \mu$ hold: $a \in \mathcal{H}_k^\dagger(\alpha_a; C)$, $f \in \mathcal{H}_k^\dagger(\alpha_f; C)$, $\mu \in \mathcal{H}_k^\dagger(\alpha_\mu; C)$, with
$$\min\left\{ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_f}{2\alpha_f + k},\ \frac{\alpha_f}{2\alpha_f + k} + \frac{\alpha_\mu}{2\alpha_\mu + k},\ \frac{\alpha_a}{2\alpha_a + k} + \frac{\alpha_\mu}{2\alpha_\mu + k} \right\} > \frac{1}{2} + \epsilon, \quad (7)$$
for $k \le p$ and some arbitrarily small $\epsilon > 0$. If $\hat{a}, \hat{f}, \hat{\mu}$ are, respectively, the ERMs (6) from the FNN-ReLU classes $\mathcal{F}_{nn}(L_a, K_a, B)$, $\mathcal{F}_{nn}(L_f, K_f, B)$, $\mathcal{F}_{nn}(L_\mu, K_\mu, B)$, whose products of depth and width satisfy $L_g K_g \asymp n^{\frac{k}{2(k+2\alpha_g)}}$ for $g \in \{a, f, \mu\}$, then the DeepMed estimators $\hat{\tau}_{tot}$, $\hat{\tau}_{NDE}(d)$ and $\hat{\tau}_{NIE}(d)$ computed by Algorithm 1 are semiparametric efficient:
$$\hat{\sigma}_{tot}^{-1}(\hat{\tau}_{tot} - \tau_{tot}),\ \hat{\sigma}_{NDE}^{-1}(d)(\hat{\tau}_{NDE}(d) - \tau_{NDE}(d)),\ \hat{\sigma}_{NIE}^{-1}(d)(\hat{\tau}_{NIE}(d) - \tau_{NIE}(d)) \xrightarrow{\mathcal{L}} N(0, 1),$$
with $N\hat{\sigma}_{tot}^2 \xrightarrow{p} E[(\mathrm{EIF}_1 - \mathrm{EIF}_0)^2]$, $N\hat{\sigma}_{NDE}^2(d) \xrightarrow{p} E[(\mathrm{EIF}_{1,d} - \mathrm{EIF}_{0,d})^2]$, and $N\hat{\sigma}_{NIE}^2(d) \xrightarrow{p} E[(\mathrm{EIF}_{d,1} - \mathrm{EIF}_{d,0})^2]$; i.e., $\hat{\sigma}_{tot}^2$, $\hat{\sigma}_{NDE}^2$ and $\hat{\sigma}_{NIE}^2$ are consistent variance estimators.
Remark 7. To unload the notation in the above theorem, consider the special case where the smoothness of all the nuisance functions coincides, i.e. $\alpha_a = \alpha_f = \alpha_\mu = \alpha$. Then Condition (7) reduces to $\alpha > k/2 + \epsilon$ for some arbitrarily small $\epsilon > 0$. For example, if the covariates $X$ have dimension $p = 2$ and no low-dimensional structures are imposed on the nuisance functions (i.e. $k \equiv p$), one needs $\alpha > 1$ to ensure semiparametric efficiency of the DeepMed estimators. We emphasize that Lemma 5 and Theorem 6 do not constrain the network sparsity $S$, better reflecting how DNNs are usually used in practice.
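To make the multiply-robust estimator (3) and the cross-fitting of Algorithm 1 concrete, here is a minimal Python sketch for binary $M$. The callback interface (a `fit_nuisances` function returning prediction functions `a_hat`, `f_hat`, `mu_hat`) is our own hypothetical illustration, not the interface of the DeepMed R package:

```python
import numpy as np

def psi_dd(X, D, M, Y, d, d_prime, a_hat, f_hat, mu_hat):
    """Uncentered EIF psi_{d,d'}(O) of eq. (2), specialized to binary M.

    Assumed (hypothetical) interfaces:
      a_hat(d, X)     -> estimate of P(D = d | X)
      f_hat(m, X, d)  -> estimate of P(M = m | X, D = d)
      mu_hat(X, d, m) -> estimate of E[Y | X, D = d, M = m]
    """
    ind_d = (D == d).astype(float)
    ind_dp = (D == d_prime).astype(float)
    # density-ratio-weighted outcome residual
    w = ind_d * f_hat(M, X, d_prime) / (a_hat(d, X) * f_hat(M, X, d))
    term1 = w * (Y - mu_hat(X, d, M))
    # the integral over m reduces to a sum over m in {0, 1} for binary M
    integral = sum(mu_hat(X, d, m) * f_hat(m, X, d_prime) for m in (0, 1))
    term2 = (1.0 - ind_dp / a_hat(d_prime, X)) * integral
    term3 = ind_dp / a_hat(d_prime, X) * mu_hat(X, d, M)
    return term1 + term2 + term3

def cross_fit_psi(X, D, M, Y, d, d_prime, fit_nuisances, n_folds=2):
    """V-fold cross-fitted estimate of psi(d, d'), in the spirit of eq. (3)."""
    n = len(Y)
    folds = np.array_split(np.random.permutation(n), n_folds)
    vals = np.empty(n)
    for holdout in folds:
        train = np.setdiff1d(np.arange(n), holdout)
        # nuisances are fit on the complement of the holdout fold
        a_hat, f_hat, mu_hat = fit_nuisances(X[train], D[train], M[train], Y[train])
        vals[holdout] = psi_dd(X[holdout], D[holdout], M[holdout], Y[holdout],
                               d, d_prime, a_hat, f_hat, mu_hat)
    return vals.mean(), vals  # point estimate and per-observation EIF values
```

The per-observation values can then be combined exactly as in steps 8-10 of Algorithm 1 to form $\hat{\tau}_{tot}$, $\hat{\tau}_{NDE}(d)$, $\hat{\tau}_{NIE}(d)$ and their variance estimators.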
1. What is the focus and contribution of the paper on semi-parametric estimation of direct and indirect effects? 2. What are the strengths of the proposed approach, particularly in terms of its use of deep neural networks? 3. What are the weaknesses of the paper, especially regarding its clarity and explanations of mathematical notation? 4. How does the reviewer assess the novelty and originality of the paper's content? 5. What are the limitations of the proposed method, and how do they affect its societal impact?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a semi-parametric method for estimating direct and indirect effects from observational data with deep neural networks. Strengths And Weaknesses
The paper is somewhat hard to follow, with a lot of mathematical notation that would benefit from better explanation.
The quality of the theoretical and experimental results appears sound, even though I did not go through any derivations.
The reviewer is unclear on the level of novelty/originality; it appears to be OK, but I can't say for certain.
Questions I think the paper doesn't raise too many questions. Limitations Somewhat unclear what the limitations are; societal impact has been addressed.
NIPS
Title UCSG-NET - Unsupervised Discovering of Constructive Solid Geometry Tree
Abstract The signed distance field (SDF) is a prominent implicit representation of 3D meshes. Methods based on such a representation have achieved state-of-the-art 3D shape reconstruction quality. However, these methods struggle to reconstruct non-convex shapes. One remedy is to incorporate a constructive solid geometry (CSG) framework that represents a shape as a decomposition into primitives. It allows embodying a 3D shape of high complexity and non-convexity with a simple tree representation of Boolean operations. Nevertheless, existing approaches are supervised and require the entire CSG parse tree to be given upfront during the training process. On the contrary, we propose a model that extracts a CSG parse tree without any supervision - UCSG-NET. Our model predicts parameters of primitives and binarizes their SDF representation through a differentiable indicator function. This is achieved jointly with discovering the structure of a Boolean operator tree. The model dynamically selects which operator combination over primitives leads to a reconstruction of high fidelity. We evaluate our method on 2D and 3D autoencoding tasks. We show that the predicted parse tree representation is interpretable and can be used in CAD software.¹
1 Introduction
Neural networks for 3D shape analysis have gained much popularity in recent years. Among their main advantages are fast inference for unknown shapes and high generalization power. Many approaches rely on different representations of the input: implicit, such as voxel grids, point clouds and signed distance fields [1-3], or explicit - meshes [4]. Meshes can be found in computer-aided design applications, where a graphic designer often composes complex shapes out of simple shape primitives, such as boxes and spheres. Existing methods for representing meshes, such as BSP-NET [5] and CVXNET [6], achieve remarkable accuracy on reconstruction tasks. However, the process of generating the mesh from predicted planes requires an additional post-processing step. These methods also assume that any object can be decomposed into a union of convex primitives. While this holds, it requires many such primitives to represent concave shapes. Consequently, the decoding process is difficult to explain and to modify with external expert knowledge. On the other hand, there are fully interpretable approaches, like CSG-NET [7, 8], that utilize a CSG parse tree to represent the 3D shape construction process. Such solutions require expensive supervision that assumes an assigned CSG parse tree for each example given during training.
∗Now at Warsaw University of Technology
¹We published our code at https://github.com/kacperkan/ucsgnet
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
In this work, we propose a novel model for representing 3D meshes capable of learning CSG parse trees in an unsupervised manner - UCSG-NET. We achieve the stated goal by introducing so-called CSG Layers capable of learning explainable Boolean operations for pairs of primitives. CSG Layers create an interpretable network of geometric operations that produces complex shapes from a limited number of simple primitives. We evaluate the mesh representation capabilities of our approach using challenging 2D and 3D datasets.
We summarize our main contributions as:
• Our method is the first that is able to predict a CSG tree without any supervision, and it achieves state-of-the-art results on the 2D reconstruction task compared to CSG-NET trained in a supervised manner. The predictions of our method are fully interpretable and can aid in CAD applications.
• We define and describe a novel formulation of constructive solid geometry operations for the occupancy value representation of 2D and 3D data.
2 Method
We propose an end-to-end neural network model that predicts parameters of simple geometric primitives and their constructive solid geometry composition to reconstruct a given object. Using our approach, one can predict the CSG parse tree, which can then be passed to external rendering software in order to reconstruct the shape. To achieve this, our model predicts primitive shapes in the SDF representation. It then converts them into occupancy values $O$, taking 1 if a point in the 2D or 3D space is inside the shape and 0 otherwise. CSG operations on such a representation are defined as clipped summations and differences of binary values. The model dynamically chooses which operation should be used. During validation, we retrieve the predicted CSG parse tree and shape primitives, and pass them to the rendering software. Thus, we need only a single point in 3D space to infer the structure of the CSG tree. This is possible since primitive parameters and CSG operations are predicted independently of the sampled points. In the following subsections, we present 2D examples for clarity. The method scales to 3D inputs trivially.
2.1 Constructive Solid Geometry Network
The UCSG-NET architecture is shown in Figure 1. The model is composed of the following main components: an encoder, a primitive parameter prediction network, a signed distance field to indicator function converter, and constructive solid geometry layers.
Encoder. We process the input object $I$ by mapping it into a low-dimensional latent vector $z$ of length $d_z$ using an encoder $f_\theta$, i.e. $f_\theta(I) = z$. Depending on the data type, we use either a 2D or 3D convolutional neural network as the encoder. The latent vector is then passed to the primitive parameter prediction network.
Primitive parameter prediction network. The role of this component is to extract the parameters of the primitives, given the latent representation of the input object. The primitive parameter prediction network $g_\phi$ consists of multiple fully connected layers interleaved with activation functions. The last layer predicts the parameters of the primitives in the SDF representation. We consider primitives such as boxes and spheres that allow us to calculate the signed distance analytically; illustrative formulas are sketched below. We note that planes can be used as well, thus extending approaches like BSP-NET [5] and CVXNET [6]. The mathematical formulation of the used shapes is provided in the supplementary material. The network produces $N$ tuples $\{p_i, t_i, q_i\}_{i=1}^{N}$, where $p_i \in \mathbb{R}^{d_p}$ describes the parameter vector of a particular shape (e.g. the radius of a sphere), $t_i \in \mathbb{R}^{d_t}$ is the translation of the shape, and $q_i \in \mathbb{R}^{d_q}$ is the rotation, represented as a quaternion for 3D shapes and a matrix for 2D shapes. We further combine $k$ different shape types by using a fully connected layer for each shape type separately, thus producing $kN = M$ shapes and $M \times (d_p + d_t + d_q)$ parameters in total.
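The exact analytic SDF formulas are deferred to the paper's supplementary material; as a hedged illustration, the standard signed distance functions for a sphere and an axis-aligned box can be sketched as follows (our own PyTorch-style code, not the released implementation; points are assumed to be already transformed into the primitive's local frame via $q_i^{-1}(x - t_i)$):

```python
import torch

def sdf_sphere(points: torch.Tensor, radius: torch.Tensor) -> torch.Tensor:
    # points: (..., 3) in the primitive's local frame; negative inside, positive outside
    return points.norm(dim=-1) - radius

def sdf_box(points: torch.Tensor, half_sizes: torch.Tensor) -> torch.Tensor:
    # half_sizes: (..., 3) box half-extents along each axis
    q = points.abs() - half_sizes
    outside = q.clamp(min=0.0).norm(dim=-1)       # distance when outside the box
    inside = q.max(dim=-1).values.clamp(max=0.0)  # negative contribution inside
    return outside + inside
```

These closed-form distances are what makes the signed distance to indicator conversion in the next paragraph cheap to evaluate at arbitrary sampled points.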
Once the parameters are predicted, we use them to calculate signed distance values for points $x$ sampled from a volume whose boundaries are normalized to the unit square (or unit cube for 3D data). For each shape with an analytical equation $\mathrm{dist}$, parametrized by $p$, that calculates the signed distance from a point $x$ to its surface, we obtain $D_i = \mathrm{dist}(q_i^{-1}(x - t_i); p_i)$.
Signed Distance Field to Indicator Function Converter. CSG operations in the SDF representation are often defined as combinations of min and max functions on distance values. One has to apply either a LogSumExp operation, as in CVXNET, or the standard softmax function to obtain a differentiable approximation. Instead, we cast our problem as predicting CSG operations on occupancy-valued sets. The motivation is that these are linear operations, hence they provide better training stability. We transform signed distances $D$ into occupancy values $O \in \{0, 1\}$ using a parametrized $\alpha$-clipping function that is learned with the rest of the pipeline:
$$O = \left[1 - \frac{D}{\alpha}\right]_{[0,1]}, \qquad O = 1 \text{ inside}, \quad O \in [0, 1) \text{ outside}, \quad (1)$$
where $\alpha$ is a learnable scalar with $\alpha > 0$, $[\cdot]_{[0,1]}$ clips values to the given range, and $O$ denotes an approximation of the occupancy values. $O = 1$ indicates the inside and the surface of a shape, $O \in [0, 1)$ means the outside of the shape, and $\lim_{\alpha \to 0} O \in \{0, 1\}$. Gradual learning of $\alpha$ allows gradients to be distributed to all shapes in the early stages of training. There are no specific restrictions on the initialization of $\alpha$, and we set $\alpha = 1$ in our experiments. The value is pushed towards 0 by adding the $|\alpha|$ term to the optimized loss, jointly with the rest of the parameters. The method follows the findings of Sakr et al. [9] that increasing the slope of a clipping function can be used to obtain binary activations.
Constructive Solid Geometry Layer. The predicted sets of occupancy values and the output of the encoder $z$ are passed to a sequence of $L \ge 1$ CSG layers that combine shapes using Boolean operators: union (denoted $\cup^*$), intersection ($\cap^*$) and difference ($-^*$). To grasp how CSG is performed on occupancy-valued sets, we show example operations in Figure 2. CSG operations for two sets $A$ and $B$ are described as:
$$A \cup^* B = [A + B]_{[0,1]}, \quad A \cap^* B = [A + B - 1]_{[0,1]}, \quad A -^* B = [A - B]_{[0,1]}, \quad B -^* A = [B - A]_{[0,1]}. \quad (2)$$
The question is how to choose the operands $A$ and $B$, denoted the left and right operands, from the input shapes $O^{(l)}$ that compose the output shape in $O^{(l+1)}$. We create two learnable matrices $K_{left}^{(l)}, K_{right}^{(l)} \in \mathbb{R}^{M \times d_z}$. The vectors stored in the rows of these matrices serve as keys for a query $z$ to select appropriate shapes for all 4 operations. The input latent code $z$ is used as a query to retrieve the most appropriate operand shapes for each layer. We compute the dot product between the matrices $K_{left}^{(l)}, K_{right}^{(l)}$ and $z$, and apply softmax along the $M$ input shapes:
$$V_{left}^{(l)} = \mathrm{softmax}(K_{left}^{(l)} z), \qquad V_{right}^{(l)} = \mathrm{softmax}(K_{right}^{(l)} z). \quad (3)$$
The index of a particular operand is retrieved using the Gumbel-Softmax [10] reparametrization of the categorical distribution:
$$\hat{V}_{side,i}^{(l)} = \frac{\exp\big((\log(V_{side,i}^{(l)}) + c_i)/\tau^{(l)}\big)}{\sum_{j=1}^{M} \exp\big((\log(V_{side,j}^{(l)}) + c_j)/\tau^{(l)}\big)} \quad \text{for } i = 1, \ldots, M \text{ and } side \in \{left, right\}, \quad (4)$$
where $c_i$ is a sample from Gumbel(0, 1).
[Figure 2: Example of constructive solid geometry on occupancy-valued sets.]
The benefit of the reparametrization is twofold. Firstly, the expectation over the distribution stays the same despite changing $\tau^{(l)}$.
Secondly, we can manipulate $\tau^{(l)}$ so that for $\tau^{(l)} \to 0$ the distribution degenerates to a categorical distribution. Hence, a single shape selection replaces the fuzzy sum of all input shapes in that case. That way, we allow the network to select the most appropriate shape for the composition during learning by decreasing $\tau^{(l)}$ gradually. By the end of the learning process, we can retrieve a single shape to be used for the CSG. The temperature $\tau^{(l)}$ is learned jointly with the rest of the parameters. The left and right operands $O_{left}^{(l)}, O_{right}^{(l)}$ are retrieved as:
$$O_{right}^{(l)} = \sum_{i=1}^{M} O_i^{(l)} \hat{V}_{right,i}^{(l)}, \qquad O_{left}^{(l)} = \sum_{i=1}^{M} O_i^{(l)} \hat{V}_{left,i}^{(l)}. \quad (5)$$
A set of output shapes from the $(l+1)$-th CSG layer is obtained by performing all the operations of Equation 2 on the selected operands:
$$O_{A \cup^* B}^{(l+1)} = \big[O_{left}^{(l)} + O_{right}^{(l)}\big]_{[0,1]}, \quad O_{A \cap^* B}^{(l+1)} = \big[O_{left}^{(l)} + O_{right}^{(l)} - 1\big]_{[0,1]}, \quad O_{A -^* B}^{(l+1)} = \big[O_{left}^{(l)} - O_{right}^{(l)}\big]_{[0,1]}, \quad O_{B -^* A}^{(l+1)} = \big[O_{right}^{(l)} - O_{left}^{(l)}\big]_{[0,1]}, \quad (6)$$
$$O^{(l+1)} = \big[O_{A \cup^* B}^{(l+1)};\ O_{A \cap^* B}^{(l+1)};\ O_{A -^* B}^{(l+1)};\ O_{B -^* A}^{(l+1)}\big], \quad (7)$$
where $left, right \in M$ denote the left and right operands of the operation. By performing these operations explicitly, we increase the diversity of possible shape combinations and leave to the model the choice of which operations should be used for the reconstruction. Operations can be repeated to output multiple shapes. Note that the computation overhead increases linearly with the number of output shapes per layer. The whole procedure can be stacked in $l \le L$ layers to create a CSG network. The $L$-th layer outputs a union, since a union is guaranteed to return a non-empty shape in most cases. At this point, the network has to learn to pass primitives untouched by operators if any primitive should be used in later layers of the CSG tree to create, for example, nested rings. To mitigate this problem, each $(l+1)$-th layer receives the outputs from the $l$-th layer concatenated with the original binarized values $O^{(0)}$. For the first layer $l = 1$, this means receiving the initial shapes only.
Additional information passing. The information about what is left to reconstruct changes layer by layer. Therefore, we incorporate it into the latent code to improve the reconstruction quality and stabilize training. Firstly, we encode $\hat{V}^{(l)} = [\hat{V}_{left}^{(l)}; \hat{V}_{right}^{(l)}]$ with a neural network $h^{(l)}$ containing a single hidden layer. Then, we employ a GRU unit [11] that takes the latent code $z^{(l)}$ and the encoded $\hat{V}^{(l)}$ as input, and outputs the updated latent code $z^{(l+1)}$ for the next layer. The hidden state of the GRU unit is learnable. The initial $z^{(0)}$ is the output of the encoder.
Interpretability. All the introduced components of UCSG-NET lead to interpretable predictions of mesh reconstructions. To see this, consider the following case. When $\alpha \approx 0$, we obtain occupancy values calculated with Equation 1. Thus, shapes represented by these values occupy the same volume as meshes reconstructed from the parameters $\{p_i, t_i, q_i\}_{i=1}^{M}$. These meshes can be visualized and edited explicitly. To further combine these primitives through CSG operations, we calculate $\arg\max_{i \in M} \hat{V}_{left,i}^{(l)}$ and $\arg\max_{j \in M} \hat{V}_{right,j}^{(l)}$ for the left and right operands, respectively. Then, we perform the operations $A \cup^* B$, $A \cap^* B$, $A -^* B$ and $B -^* A$. When $\forall_{l \le L}\, \tau^{(l)} \approx 0$, both $\hat{V}_{left}^{(l)}$ and $\hat{V}_{right}^{(l)}$ are one-hot vectors, and the operations performed on occupancy values, as in Figure 2, are equivalent to CSG operations executed on the aforementioned meshes, e.g. by merging binary space partitioning trees of meshes [12].
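To summarize the mechanics of Equations (3)-(7), here is a minimal PyTorch sketch of a single CSG layer; the tensor shapes ($M$ shapes evaluated at $P$ sampled points) and module names are our own illustrative assumptions rather than the released UCSG-NET implementation:

```python
import torch
import torch.nn.functional as F

class CSGLayer(torch.nn.Module):
    """One CSG layer: select left/right operands and apply the eq. (2) operations."""

    def __init__(self, num_shapes: int, latent_dim: int):
        super().__init__()
        # learnable key matrices K_left, K_right in R^{M x d_z} (eq. 3)
        self.k_left = torch.nn.Parameter(0.01 * torch.randn(num_shapes, latent_dim))
        self.k_right = torch.nn.Parameter(0.01 * torch.randn(num_shapes, latent_dim))
        self.tau = torch.nn.Parameter(torch.tensor(2.0))  # learnable temperature

    def _operand(self, occ, keys, z):
        # occ: (B, M, P) occupancies of M shapes at P points; z: (B, d_z)
        v = F.softmax(z @ keys.t(), dim=-1)                                  # eq. (3)
        g = -torch.log(-torch.log(torch.rand_like(v) + 1e-20) + 1e-20)       # Gumbel(0,1)
        v_hat = F.softmax((v.clamp_min(1e-20).log() + g) / self.tau.clamp_min(1e-5),
                          dim=-1)                                            # eq. (4)
        return torch.einsum("bm,bmp->bp", v_hat, occ), v_hat                 # eq. (5)

    def forward(self, occ, z):
        left, v_l = self._operand(occ, self.k_left, z)
        right, v_r = self._operand(occ, self.k_right, z)
        union = (left + right).clamp(0, 1)        # A union* B
        inter = (left + right - 1).clamp(0, 1)    # A intersect* B
        diff_ab = (left - right).clamp(0, 1)      # A -* B
        diff_ba = (right - left).clamp(0, 1)      # B -* A
        return torch.stack([union, inter, diff_ab, diff_ba], dim=1), (v_l, v_r)  # eq. (6)-(7)
```

In the full model, the stacked outputs would be concatenated with $O^{(0)}$ and the latent code updated by the GRU before the next layer, as described above.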
Additional information passing The information about what is left to reconstruct changes layer by layer. Therefore, we incorporate it into the latent code to improve the reconstruction quality and stabilize training. Firstly, we encode $\hat{V}^{(l)} = [\hat{V}^{(l)}_{\text{left}}; \hat{V}^{(l)}_{\text{right}}]$ with a neural network $h^{(l)}$ containing a single hidden layer. Then, we employ a GRU unit [11] that takes the latent code $z^{(l)}$ and the encoded $\hat{V}^{(l)}$ as input, and outputs the updated latent code $z^{(l+1)}$ for the next layer. The hidden state of the GRU unit is learnable. The initial $z^{(0)}$ is the output from the encoder.

Interpretability All introduced components of the UCSG-NET lead us to interpretable predictions of mesh reconstructions. To see this, consider the following case. When $\alpha \approx 0$, we obtain occupancy values calculated with Equation 1. Thus, shapes represented by these values will occupy the same volume as meshes reconstructed from the parameters $\{i \in M \,|\, \mathbf{p}_i, \mathbf{t}_i, \mathbf{q}_i\}$. These meshes can be visualized and edited explicitly. To further combine these primitives through CSG operations, we calculate $\arg\max_{i \in M} \hat{V}^{(l)}_{\text{left},i}$ and $\arg\max_{j \in M} \hat{V}^{(l)}_{\text{right},j}$ for the left and right operands respectively. Then, we perform the operations $A \cup^* B$, $A \cap^* B$, $A -^* B$ and $B -^* A$. When $\forall_{l \leq L}\, \tau^{(l)} \approx 0$, both $\hat{V}^{(l)}_{\text{left}}$ and $\hat{V}^{(l)}_{\text{right}}$ are one-hot vectors, and the operations performed on occupancy values, as in Figure 2, are equivalent to CSG operations executed on the aforementioned meshes, e.g. by merging binary space partitioning trees of meshes [12]. Additionally, the whole CSG tree can be pruned to form a binary tree by investigating which meshes were selected through $\hat{V}^{(l)}_{\text{left}}, \hat{V}^{(l)}_{\text{right}}$ for the reconstruction, thus leaving the tree with at most $2^{L-l}$ nodes at each layer $l \leq L$.²

²We consider the worst case; since some shapes can be reused in consecutive layers, the number of used shapes in layer $l$ can be less than $2^{L-l}$.

2.2 Training The pipeline is optimized end-to-end using the backpropagation algorithm in a two-stage process.

First stage The goal is to find compositions of primitives that minimize the reconstruction error. We employ the mean squared error between the predicted occupancy values $\hat{O}^{(L)}$ and the ground truth $O^*$. The values are calculated for $X$, which combines points sampled from the surface of the ground truth and points randomly sampled inside the unit cube (or square for the 2D case):

$$\mathcal{L}_{MSE} = \mathbb{E}_{x \in X}\big[(\hat{O}^{(L)} - O^*)^2\big] \tag{8}$$

We also ensure that the network predicts only positive values for the parameters of shapes, since only for such values do these shapes have analytical descriptions:

$$\mathcal{L}_P = \sum_{i=1}^{M} \sum_{p \in \mathbf{p}_i} \max(-p, 0) \tag{9}$$

To stop primitives from drifting away from the center of the considered space in the early stages of training, we minimize the clipped squared norm of the translation vector. At the same time, we allow primitives to be freely translated inside the space of interest:

$$\mathcal{L}_T = \sum_{i=1}^{M} \max(\|\mathbf{t}_i\|^2, 0.5) \tag{10}$$

The last component includes minimizing $|\alpha|$ to perform a continuous binarization of distances into {inside, outside} indicator values. Our goal is to find the optimal parameters of our model by minimizing the total loss:

$$\mathcal{L}_{total} = \mathcal{L}_{MSE} + \mathcal{L}_P + \lambda_T \mathcal{L}_T + \lambda_\alpha |\alpha| \tag{11}$$

where we set $\lambda_T = \lambda_\alpha = 0.1$.

Second stage We strive for interpretable CSG relations. To achieve this, we output occupancy values obtained with Equation 1, so these values create binary-valued sets, since $\alpha$ at this stage is near 0. The stage is triggered when $\alpha \leq 0.05$. Its main goal is to enforce $\hat{V}^{(l)}$ for $l \leq L$ to resemble a one-hot mask by decreasing the temperature $\tau^{(l)}$ in the CSG layers. The optimized loss is defined as:

$$\mathcal{L}^*_{total} = \mathcal{L}_{total} + \lambda_\tau \sum_{l=1}^{L} |\tau^{(l)}| \tag{12}$$

where we set $\lambda_\tau = 0.1$ for all experiments. Once $\alpha \approx 0$ and $\forall_{l \leq L}\, \tau^{(l)} \approx 0$, the predictions of the CSG layers become fully interpretable as described above, i.e. CSG parse trees of reconstructions can be retrieved and processed using an explicit representation of meshes. We also ensure that $\alpha$ and $\tau^{(l)}$ stay positive by manually clipping their values to a small positive number ($\approx 10^{-5}$) if they become negative. During the experiments, we initialize them to $\alpha = 1$ and $\tau^{(l)} = 2$. Additional implementation details are provided in the supplementary material.
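As a concrete reference, below is a minimal sketch of the first-stage objective (Equations 8-11). The tensor shapes and the function name first_stage_loss are our assumptions:

```python
import torch

def first_stage_loss(O_pred, O_true, shape_params, translations, alpha,
                     lambda_t=0.1, lambda_alpha=0.1):
    # Eq. (8): MSE over sampled points (surface points + uniform in the unit cube).
    l_mse = ((O_pred - O_true) ** 2).mean()
    # Eq. (9): penalize negative shape parameters; max(-p, 0) == relu(-p).
    l_p = torch.relu(-shape_params).sum()
    # Eq. (10): clipped squared norm of translations; zero gradient below 0.5.
    l_t = torch.clamp((translations ** 2).sum(dim=1), min=0.5).sum()
    # Eq. (11): total first-stage loss, with the |alpha| binarization term.
    return l_mse + l_p + lambda_t * l_t + lambda_alpha * alpha.abs()
```

The second stage simply adds the temperature term of Equation 12 on top of this quantity.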
3 Related Works The problem of 3D reconstruction gained momentum when the ShapeNet dataset was published [13]. The dataset contains sets of simple, textured meshes, split into multiple, unbalanced categories. Since then, many methods have been invented for discriminative [14-17] and generative [5, 6, 18, 19] applications. Currently, presenting results on this dataset allows the potential reader to quickly grasp how a particular method performs. There also exists the high-volume ABC dataset [20], which consists of many complex CAD shapes; however, it is not well established as a benchmark in the community.

3D surface representation Surface representations fall mainly into two categories: explicit (meshes) and implicit (e.g. point clouds, voxels, signed distance fields). Many approaches working on meshes assume a genus-0 initial shape that is refined to retrieve the final shape [21-23, 4, 24, 25]. Recent methods use step-by-step prediction of each vertex, whose position is conditioned on all previous vertices [26], or reinforcement learning to imitate a real 3D graphics designer [27]. In MeshRCNN [28], a voxelized shape is retrieved first and then converted into a mesh with the Pixel2Mesh [4] framework. Implicit representations need an external method to convert an object into a mesh. 3D-R2N2 [1] and Pix2Vox [29] predict voxelized objects and leverage multiple views of the same object. These methods struggle with the cubic complexity of predictions. To overcome the problem, octree-based convolutional networks [30, 31] use an encoded voxel volume to take advantage of the sparsity of the representation. Point clouds do not include vertex connectivity information. Therefore, ball-pivoting or Poisson surface reconstruction methods have to be employed to reconstruct the mesh [32, 33]. The representation is convenient to process using the PointNet [14] framework, and objects can be generated using flow-based generative networks [34, 19]. Signed distance fields allow modeling shapes with an arbitrary level of detail in theory. DeepSDF [3] and DualSDF [35] use a variational autodecoder approach to generate shapes. OccNet [36] and IM-NET [18] predict whether a point lies inside or outside of the shape. Such a representation is explored in BSP-NET [5] and CVXNET [6], which decompose shapes into a union of convexes; each convex is created by intersecting binary space partitions. The complexity of these methods provides high reconstruction accuracy, but they suffer from low interpretability in CAD applications. The convexes used in both methods are also problematic to modify from the perspective of a 3D graphic designer. Moreover, their CSG structure is fixed by definition: they use an intersection of hyperplanes first, and then perform a union of the predicted convexes. Other approaches, such as Visual Primitives (VP) [37] and Superquadrics (SQ) [38], are based on a learnable union of predefined primitives and provide high interpretability of results. However, superquadrics as primitives contain parameters that control the shape and need to lie in a closed domain; otherwise, the distance function is not well-defined for them and learning these parameters becomes unstable.

Constructive Solid Geometry CSG allows combining shape primitives with boolean operators to obtain complex shapes. Much research is focused on probabilistic methods that find the most probable explanation of the shape through the process of inverse CSG [39], which outputs a parse tree. Approaches such as CSG-NET [7, 8] and DeepPrimitive [40] integrate finding CSG parse trees with neural networks. However, they rely heavily on supervision: at each step of the parse tree, a neural network is given a primitive to output and a relation between primitives. CSG-NET outputs a program with a defined grammar that can be used for rendering.

4 Experiments We evaluate our approach on 2D and 3D autoencoding tasks, and compare the results with state-of-the-art reference approaches for object reconstruction: CSG-NET [8] for the 2D task, and VP [37], SQ [38], BAE [41] and BSP-NET [5] for the 3D tasks.

4.1 2D Reconstruction For this experiment, we used the CAD dataset [7] consisting of 8,000 CAD shapes in three categories: chair, desk, and lamp. Each shape was rendered to a 64 × 64 image. We compare our method with CSG-NETSTACK [8], an improved version of CSG-NET [7], on the same validation split. Table 1 contains the comparison with CSG-NET working in both modes.
Following the methodology introduced in existing reference works, the methods are evaluated on the Chamfer Distance (CD) of the reconstructions. We set 2 CSG layers for our method, where each outputs 16 shapes in total. The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better than the best variants of CSG-NET, and is significantly better with no output refinement. The results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. We present qualitative evaluation results in Figure 4 and visualize the shapes used for the reconstruction. The UCSG-NET uses proper operations at each level that lead to the correct shape reconstruction. In most cases, it uses rectangles only; the nature of the dataset causes that phenomenon. To avoid possible errors, the network often uses a union of overlapping shapes to pass a primitive untouched.

4.2 3D Autoencoding For the 3D autoencoding task, we train the model on 64³ volumes of voxelized shapes from the ShapeNet dataset. We sample 16,384 points as the ground truth, with a higher probability of sampling near the surface. To speed up the training, we applied an early stopping heuristic and stop after 40 epochs of no improvement on the $\mathcal{L}^*_{total}$ loss. The data was provided by Chen et al. [5] and is based on the 13 most common classes in the ShapeNet dataset [13]. We used 5 CSG layers to increase the diversity of predictions and predict parameters of 64 spheres and boxes to handle the complex nature of the dataset. Each layer predicts 48 CSG combinations of these primitives. Training takes about two days on an Nvidia Titan RTX GPU. The CSG inference for a single sample takes 0.068 s, and the reconstruction takes 1.68 s using the libigl library. We follow the procedure described in [5] and report the Chamfer Distance as a quality measure of the reconstruction. We evaluate it on 4096 points sampled from the surface of the reconstructed object. We reconstruct shapes from the CSG trees retrieved from the predictions of our model. The obtained results are shown in Table 2, and examples of reconstructed shapes are presented in Figure 5. We can see that the model accurately reconstructs the main components of a shape, which resembles the Visual Primitives (VP) [37] approach, where outputs can be treated as shape abstractions. The remaining reference approaches outperformed our model with respect to the CD measure. This was mainly caused by failed reconstructions of details, such as engines on the wings of airplanes, to which the metric is sensitive. However, our ultimate goal was to provide an effective and interpretable method to construct a CSG tree with a limited number of primitives. Finally, we show an example parse tree in Figure 6, used to reconstruct an example shape from the validation set. The model manages to create diverse combinations of primitives and reuse them at any level. Since many primitives were used in later layers, the tree complexity is not necessarily $2^L$. Notice that the main body and the wings were reconstructed separately. We found that the model learns to reconstruct particular semantic parts of an object separately, for example, the wings and the hull of an airplane, or the legs and the top of a desk. These parts are merged in the final CSG layer, where we force a union operation to be performed. See the supplementary material for additional CSG tree visualizations.
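For completeness, here is a minimal sketch of the symmetric Chamfer Distance used throughout the evaluation above. The exact variant (squared vs. unsquared distances, normalization) is not stated in the text, so this assumes the common mean-of-minima form:

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # p: (N, 3) and q: (M, 3) points sampled from the two surfaces being compared.
    d = torch.cdist(p, q)  # (N, M) pairwise Euclidean distances
    # Average nearest-neighbor distance in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```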
5 Conclusions We demonstrate UCSG-NET, an unsupervised method for discovering constructive solid geometry parse trees that composes primitives to reconstruct an input shape. Our method predicts CSG trees and is able to use different Boolean operations while maintaining reasonable accuracy of reconstructions. The inferred CSG trees are used to form meshes directly, without the need to use explicit reconstruction methods for implicit representations. We show that these trees can be easily visualized, thus providing step-by-step interpretability of the reconstructions. Therefore, the method can be applied in CAD applications for quick prototyping of 3D objects. We identified three interesting avenues for future work. In one of them, we would incorporate weak supervision to provide hints to the network about which CSG operations are expected to be used for a particular shape. Since there are many CSG trees that reconstruct the same object and the space of solutions is vast, such supervision can improve the final results. The other paths include: using efficient RANSAC [42] to provide initial primitives, formulating a single CSG layer as a Set Transformer [43], or applying regularization techniques known from transformers [44] to increase the diversity of predicted CSG trees.

6 Acknowledgments We thank the reviewers for their insightful comments that led us to improve the final manuscript. This work was supported in part by the National Science Centre, Poland, research project no. 2016/21/D/ST6/02948, by statutory funds of the Department of Computational Intelligence, and by Microsoft Research. We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used in a part of the research.

Broader Impact UCSG-NET can find applications in CAD software. When applied, it is possible to retrieve a CSG parse tree for a particular object of interest. Hence, in a situation where a 3D object was modeled with a sculpting tool, the model can approximate it with individual primitives and the operations between them. Then, such a reconstruction can be integrated into existing CAD models. We find this beneficial in speeding up the prototyping process in 3D modeling. However, inexperienced CAD software users may rely heavily on the presented assumptions. In the era of 3D printing ubiquity, elements printed from reconstructed CSG parse trees can be erroneous, thus breaking the whole item. Therefore, we note that integrating our method into existing software should serve mainly as a prototyping device. We encourage further research on unsupervised CSG parse tree recovery. We suspect that this area stagnated due to the constraining limitation that a CSG tree creates a single object, while a single object can be created out of infinitely many CSG trees. Therefore, new methods need to be invented that provide good approximations of CSG trees with short inference times.
1. What is the main contribution of the paper in terms of CSG parsing? 2. What are the strengths of the proposed approach compared to existing methods? 3. What are the weaknesses of the paper regarding the reconstruction and comparison with other works? 4. How does the reviewer assess the interpretability of the proposed method? 5. Are there any suggestions or recommendations for future improvements?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work proposes an interpretable CSG parsing network that does not require an explicit CSG parse tree as supervision. Existing approaches along this direction are supervised and require the entire CSG parse tree to be given upfront during the training process. This work is evaluated on 2D and 3D autoencoding tasks, and it shows reasonable results. Strengths I believe the idea is novel and very interesting. It can train the network to generate a CSG parse tree without an explicit parse tree as supervision. This work is applicable to a broader set of datasets (and thus applications) compared to CSG-NET. Weaknesses The reconstruction lacks details, which leads to inferior results in the 3D task compared to the SOTA. I think further learning an SDF for the local primitives might help to solve this problem. The major advantage of this work over BSP-Net (and other SOTA) is its interpretability, and thus I think the paper should add more discussion about the interpretability. Since BSP-Net generates a BSP tree, comparing the generated trees might help the authors understand the difference and show the value of this work.
1. What is the focus and contribution of the paper regarding semantic correspondence? 2. What are the strengths of the proposed approach, particularly in terms of neural representation? 3. What are the weaknesses of the paper, especially for the experiment section? 4. Do you have any concerns about the semantic correspondence representation? 5. What are the limitations regarding the NeMF approach?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Given an input 3D shape (or 2D mask), this paper aims to predict a CSG representation that explains the input. The proposed approach is to a) encode the input and generate a set of primitives at the initial layer, b) hierarchically build a CSG tree via the proposed CSG layers, and c) compose the entities output by the last layer to obtain the predicted shape. The key contribution here is the idea of the CSG layer which, given some entities and a shape embedding as input, produces a set of output entities by computing the union/intersection/difference of (multiple) pairs of input entities, where the decision of which pairs to select adaptively depends on the shape encoding. In a sense, I think this approach is a generalization of BSPNet where combinations beyond intersections are allowed. The overall approach allows learning CSG predictions in an unsupervised manner, and results are presented across different categories. Strengths - I really like the overall idea. The key insight of having a CSG layer that can yield different shapes via operating on selected pairs is very simple, elegant and novel. I essentially view this as a generalization of BSPNet, which only considered half-space intersections, whereas this paper generalizes to more operations (and a broader set of base primitives) to allow a predicted CSG tree. - In the context of predicting a CSG representation, I think this is the first framework I've seen that allows learning without supervision. The previous CSG prediction approaches, e.g. CSGNet, relied on having the full shape programs that generated the shapes, whereas this work obviates the need for such supervision. Weaknesses - While I theoretically like the approach, the empirical results are unfortunately not very convincing. In particular, while the approach seems to indicate (in L102-104) that different pairs can be selected based on the input embedding Z, this is not apparent from the examples. As an illustration, all the results shown in Figure 3 have the exact same CSG tree. I am therefore not convinced that this approach does indeed predict different CSG trees for different instances. - On a point related to the above, for 3D shape inference, only a single example is shown per category (including the supplementary and main text), and this is not convincing. Ideally, the paper should show several random examples per category of the obtained CSGs to convince the reader, e.g. how chairs with 4 legs vs. chairs with wheels are handled. - The proposed representation seems to be empirically worse at representing the shapes than the arguably simpler approach of composing superquadrics, and therefore the experiments are not convincing of why one would leverage this CSG representation. I feel that this may be partly due to the data considered: ShapeNet objects are generally unions of primitives, whereas this paper's strength is in also allowing operations like difference. Towards highlighting this better, I would strongly recommend considering alternate data to showcase this method, e.g. the 'ABC Dataset'.
Following the methodology introduced in existing reference works, methods are evaluated on Chamfer Distance (CD) of reconstructions. We set 2 CSG layers for our method, where each outputs 16 shapes in total. The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. We present qualitative evaluation results in Figure 4 and visualize used shapes for the reconstruction. The UCSG-NET uses proper operations at each level that lead to the correct shape reconstruction. In most cases, it puts rectangles only. The nature of the dataset causes that phenomenon. To avoid possible errors, the network often uses a union of overlapping shapes to pass the primitive untouched. 4.2 3D Autoencoding For the 3D autoencoding task, we train the model on 643 volumes of voxelized shapes in the ShapeNet dataset. We sample 16384 points as a ground truth with a higher probability of sampling near the surface. To speed up the training, we applied early stopping heuristic and stop after 40 epochs of no improvement on the L∗total loss. The data was provided by Chen et al. [5] and bases on the 13 most common classes in the ShapeNet dataset [13]. We used 5 CSG layers to increase the diversity of predictions and set 64 parameters of spheres and boxes to handle the complex nature of the dataset. Each layer predicts CSG 48 combinations of these primitives. Training takes about two days on Nvidia Titan RTX GPU. The CSG inference for a single sample takes 0.068s and the reconstruction - 1.68s using the libigl library. We follow the procedure described in [5] and report Chamfer Distance as a quality measure of the reconstruction. We evaluate it on 4096 points sampled from the surface of the reconstructed object. We reconstruct shapes from CSG trees retrieved from predictions of our model. Obtained results are shown in Table 2. Examples of reconstructed shapes are presented in Figure 5. We can see that it accurately reconstructs the main components of a shape which resembles Visual Primitives (VP) [37] approach where outputs can be treated as shape abstractions. The remaining reference approaches outperformed our model with respect to CD measure. It was mainly caused by failed reconstructions of details, such as engines on wings of airplanes, to which the metric is sensitive. However, our ultimate goal was to provide an effective and interpretable method to construct a CSG tree with limited number of primitives. Finally, we show an example parse tree in Figure 6, used to reconstruct an example shape from the validation set. The model manages to create diverse combinations of primitives and reuse them at any level. Since many primitives were used in later layers, the tree complexity is not necessarily 2L. Notice that the main body and wings were reconstructed separately. We found that the model learns to reconstruct particular semantic parts of the object separately, for example, wings and the hull of an airplane or legs and the counter of a desk. These parts are merged in the final CSG layer where we force a union operation to be performed. See the supplementary material for additional CSG tree visualizations. 
5 Conclusions We demonstrate UCSG-NET - an unsupervised method for discovering constructive solid geometry parse trees that composes primitives to reconstruct an input shape. Our method predicts CSG trees and is able to use different Boolean operations while maintaining reasonable accuracy of reconstructions. Inferred CSG trees are used to form meshes directly, without the need to use explicit reconstruction methods for implicit representations. We show that these trees can be easily visualized, thus providing interpretability about reconstructions step-by-step. Therefore, the method can be applied in CAD applications for quick prototyping of 3D objects. We identified three interesting venues to be taken in future works. In one of them, we would incorporate weak supervision to provide hints to the network what CSG operations are expected to be used for a particular shape. Since there are many CSG trees that reconstruct the same object and the space of solution is vast, such a supervision can improve the final results. Other paths include: using efficient RANSAC [42] to provide initial primitives, formulating a single CSG layer as a Set Transformer [43] or applying regularization techniques known in transformers [44] to increase diversity of predicted CSG trees. 6 Acknowledgments We thank the reviewers for their insightful comments that led us to improve the final manuscript. This work was supported in part by the National Science Centre, Poland research project no. 2016/21/D/ST6/02948, statutory funds of Department of Computational Intelligence and by Microsoft Research. We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used in a part of the research. Broader Impact UCSG-NET can find applications in CAD software. When applied, it is possible to retrieve a CSG parse tree for a particular object of interest. Hence, for a situation when a 3D object was modeled with a sculpting tool, the model can approximate it with single primitives and operations between them. Then, such a reconstruction can be integrated into existing CAD models. We find that beneficial in speeding up the prototyping process in 3D modeling. However, inexperienced CAD software users can rely heavily on presented assumptions. In the era of 3D printing ubiquity, printed elements out of reconstructed CSG parse trees can be erroneous, thus breaking the whole item. Therefore, we note that integrating our method into existing software should serve mainly as a prototyping device. We encourage further research on an unsupervised CSG parse tree recovery. We suspect that this area stagnated due to constraining limitations that a CSG tree creates a single object, but a single object can be created out of infinity many CSG trees. Therefore, new methods need to be invented that provide good approximations of CSG trees with short inference times.
1. What is the focus and contribution of the paper on CSG representation? 2. What are the strengths of the proposed approach, particularly in terms of training without supervision? 3. What are the weaknesses of the paper, especially regarding its comparison with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes the first neural method to create a CSG tree to represent an input shape without supervision. CSG representation is highly appealing, since it is more interpretable than most of the other neural representations and can be used as a starting point for interactive modeling. The downside, is that the reconstruction accuracy seems quite low. Strengths Training CGS prediction without explicit supervision is an important contribution. In 2D case, the method seems to outperform the strongly-supervised variant (CSG-Net). Weaknesses This paper should include a more thorough discussion on the technical differences from CSG-Net. The high-level contribution (i.e., ability to train in an unsupervised fashion) is clear, but what's not clear is what are the low-level representation decisions that enabled this.
NIPS
Title UCSG-NET- Unsupervised Discovering of Constructive Solid Geometry Tree Abstract Signed distance field (SDF) is a prominent implicit representation of 3D meshes. Methods that are based on such representation achieved state-of-the-art 3D shape reconstruction quality. However, these methods struggle to reconstruct non-convex shapes. One remedy is to incorporate a constructive solid geometry framework (CSG) that represents a shape as a decomposition into primitives. It allows to embody a 3D shape of high complexity and non-convexity with a simple tree representation of Boolean operations. Nevertheless, existing approaches are supervised and require the entire CSG parse tree that is given upfront during the training process. On the contrary, we propose a model that extracts a CSG parse tree without any supervision UCSG-NET. Our model predicts parameters of primitives and binarizes their SDF representation through differentiable indicator function. It is achieved jointly with discovering the structure of a Boolean operators tree. The model selects dynamically which operator combination over primitives leads to the reconstruction of high fidelity. We evaluate our method on 2D and 3D autoencoding tasks. We show that the predicted parse tree representation is interpretable and can be used in CAD software.1 1 Introduction Neural networks for 3D shape analysis gained much popularity in recent years. Among their main advantages are fast inference for unknown shapes and high generalization power. Many approaches rely on the different representations of the input: implicit such as voxel grids, point clouds and signed distance fields [1–3], or explicit - meshes [4]. Meshes can be found in computer-aided design applications, where a graphic designer often composes complex shapes out simple shapes primitives, such as boxes and spheres. Existing methods for representing meshes, such as BSP-NET [5] and CVXNET [6], achieve remarkable accuracy on a reconstruction tasks. However, the process of generating the mesh from predicted planes requires an additional post-processing step. These methods also assume that any object can be decomposed into a union of convex primitives. While holding, it requires many such primitives to represent concave shapes. Consequently, the decoding process is difficult to explain and modified with some external expert knowledge. On the other hand, there are fully interpretable approaches, like CSG-NET [7, 8], that utilize CSG parse tree to represent 3D shape construction process. Such solutions require expensive supervision that assumes assigned CSG parse tree for each example given during training. ∗Now at Warsaw University of Technology 1We published our code at https://github.com/kacperkan/ucsgnet 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this work, we propose a novel model for representing 3D meshes capable of learning CSG parse trees in an unsupervised manner - UCSG-NET. We achieve the stated goal by introducing so-called CSG Layers capable of learning explainable Boolean operations for pairs primitives. CSG Layers create the interpretable network of the geometric operations that produce complex shapes from a limited number of simple primitives. We evaluate the representation capabilities of meshes of our approach using challenging 2D and 3D datasets. 
We summarize our main contributions as: • Our method is the first one that is able to predict CSG tree without any supervision and achieve state-of-the-art results on the 2D reconstruction task comparing to CSG-NET trained in a supervised manner. Predictions of our method are fully interpretable and can aid in CAD applications. • We define and describe a novel formulation of constructive solid geometry operations for occupancy value representation for 2D and 3D data. 2 Method We propose an end-to-end neural network model that predicts parameters of simple geometric primitives and their constructive solid geometry composition to reconstruct a given object. Using our approach, one can predict the CSG parse tree that can be further passed to an external rendering software in order to reconstruct the shape. To achieve this, our model predicts primitive shapes in SDF representation. Then, it converts them into occupancy values O taking 1 if a point in the 2D or the 3D space is inside the shape and 0 otherwise. CSG operations on such a representation are defined as clipped summations and differences of binary values. The model dynamically chooses which operation should be used. During the validation, we retrieve the predicted CSG parse tree and shape primitives, and pass them to the rendering software. Thus, we need a single point in 3D space to infer the structure of the CSG tree. It is possible since primitive parameters and CSG operations are predicted independently from sampled points. In the following subsections, we present 2D examples for clarity. The method scales to 3D inputs trivially. 2.1 Constructive Solid Geometry Network The UCSG-NET architecture is provided in Figure 1. The model is composed of the following main components: encoder, primitive parameter prediction network, signed distance field to indicator function converter and constructive solid geometry layers. Encoder We process the input object I by mapping it into low dimensional latent vector z of length dz using an encoder fθ, e.g. fθ(I) = z. Depending on the data type, we use either a 2D or 3D convolutional neural network as an encoder. The latent vector is then passed to the primitive parameter prediction network. Primitive parameter prediction network The role of this component is to extract the parameters of the primitives, given the latent representation of the input object. The primitive parameter prediction network gφ consists of multiple fully connected layers interleaved with activation functions. The last layer predicts parameters of primitives in the SDF representation. We consider primitives such as boxes and spheres that allow us to calculate signed distance analytically. We note that planes can be used as well, thus extending approaches like BSP-NET [5] and CVXNET [6]. The mathematical formulation of used shapes is provided in the supplementary material. The network producesN tuples of {i ∈ N |pi, ti,qi}. pi ∈ Rdp describes vector of parameters of a particular shape (ex. radius of a sphere), while ti ∈ Rdt is the translation of the shape and qi ∈ Rdq - the rotation represented as a quaternion for 3D shapes and a matrix for 2D shapes. We further combine k different shapes to be predicted by using a fully connected layer for each shape type separately, thus producing kN =M shapes and M × (dp + dt + dq) parameters in total. 
Once parameters are predicted, we use them to calculate signed distance values for sampled points x from volume of space that boundaries are normalized to unit square (or unit cube for 3D data). For each shape, that has an analytical equation dist parametrized by p that calculates signed distance from a point x to its surface, we obtain Di = dist(q−1i (x− ti);pi). Signed Distance Field to Indicator Function Converter CSG operations in SDF representation are often defined as a combination of min and max functions on distance values. One has to apply either LogSumExp operation as in CVXNET or standard Softmax function to obtain differentiable approximation. However, we cast our problem to predict CSG operations for occupancy-valued sets. The motivation is that these are linear operations, hence they provide better training stability. We transform signed distances D to occupancy values O ∈ {0, 1}. We use parametrized α clipping function that is learned with the rest of the pipeline: O = [ 1− D α ] [0,1] { inside, O = 1 outside, O ∈ [0, 1) (1) where α is a learnable scalar and α > 0, [·][0,1] clips values to the given range and O means an approximation of occupancy values. O = 1 indicates the inside and the surface of a shape. O ∈ [0, 1) means outside of the shape and limα→0O ∈ {0, 1}. Gradual learning of α allows to distribute gradients to all shapes in early stages of training. There are no specific restrictions for α initialization and we set α = 1 in our experiments. The value is pushed towards 0 by optimizing jointly with the rest of parameters by adding the |α| term to the optimized loss. The method follows findings of Sakr et al. [9] that increasing slope of clipping function can be used to obtain binary activations. Constructive Solid Geometry Layer Predicted sets of occupancy values and output of the encoder z are passed to a sequence of L ≥ 1 CSG layers that combine shapes using boolean operators: union (denoted by ∪∗), intersection (∩∗) and difference (−∗). To grasp an idea of how CSG is performed on occupancy-valued sets, we show example operations in Figure 2. CSG operations for two sets A and B are described as: A ∪∗ B = [A+B][0,1] A ∩∗ B = [A+B − 1][0,1] A−∗ B = [A−B][0,1] B −∗ A = [B −A][0,1] (2) The question is how to choose operands A and B, denoted as left and right operands, from input shapes O(l) that would compose the output shape in O(l+1). We create two learnable matrices K (l) left,K (l) right ∈ RM×dz . Vectors stored in rows of these matrices serve as keys for a query z to select appropriate shapes for all 4 operations. The input latent code z is used as a query to retrieve the most appropriate operand shapes for each layer. We perform dot product between matrices K (l) left,K (l) right and z, and compute softmax along M input shapes. V (l) left = softmax(K (l) leftz) V (l) right = softmax(K (l) rightz) (3) The index of a particular operand is retrieved using Gumbel-Softmax [10] reparametrization of the categorical distribution: V̂ (l) side,i = exp (( log(V (l) side,i) + ci ) /τ (l) ) ∑M j=1 exp (( log(V (l) side,j) + cj ) /τ (l) ) for i = 1, ...,M and side ∈ {left,right} (4) A ∪∗ B : [ + ] [0,1] = A ∩∗ B : [ + − 1 ] [0,1] = A−∗ B : [ − ] [0,1] = B −∗ A : [ − ] [0,1] = Figure 2: Example of constructive solid geometry on occupancy-valued sets where ci is a sample from Gumbel(0, 1). The benefit of the reparametrization is twofold. Firstly, the expectation over the distribution stays the same despite changing τ (l). 
Secondly, we can manipulate τ (l) so for τ (l) → 0 the distribution degenerates to categorical distribution. Hence, a single shape selection replaces the fuzzy sum of all input shapes in that case. That way, we allow the network to select the most appropriate shape for the composition during learning by decreasing τ (l) gradually. By the end of the learning process, we can retrieve a single shape to be used for the CSG. The temperature τ (l) is learned jointly with the rest of the parameters. Left and right operands O(l)left,O (l) right are retrieved as: O(l)right = M∑ i=1 O(l)i V̂ (l) right,i O (l) left = M∑ i=1 O(l)i V̂ (l) left,i (5) A set of output shapes from the l+1 CSG layer is obtained by performing all operations in Equation 2 on selected operands: O(l+1)A∪∗B = [ O(l)left +O (l) right ] [0,1] O(l+1)A∩∗B = [ O(l)left +O (l) right − 1 ] [0,1] O(l+1)A−∗B = [ O(l)left −O (l) right ] [0,1] O(l+1)B−∗A = [ O(l)right −O (l) left ] [0,1] (6) O(l+1) = [ O(l+1)A∪∗B ;O (l+1) A∩∗B ;O (l+1) A−∗B ;O (l+1) B−∗A ] (7) where left,right ∈ M denotes left and right operands of the operation. By performing these operations manually, we increase the diversity of possible shape combinations and leave to the model which operations should be used for the reconstruction. Operations can be repeated to output multiple shapes. Note that the computation overhead increases linearly with the number of output shapes per layer. The whole procedure can be stacked in l ≤ L layers to create a CSG network. The L-th layer outputs a union since it is guaranteed to return a non-empty shape in most cases. At this point, the network has to learn passing primitives untouched by operators if any primitive should be used in later layers of the CSG tree to create, for example, nested rings. To mitigate the problem, each l+1 layer receives outputs from the l-th layer concatenated with the original binarized values O(0). For the first layer l = 1, it means receiving initial shapes only. Additional information passing The information about what is left to reconstruct changes layer by layer. Therefore, we incorporate it into the latent code to improve the reconstruction quality and stabilize training. Firstly, we encode V̂(l) = [V̂(l)left; V̂ (l) right] with a neural network h (l) containing a single hidden layer. Then, we employ GRU unit [11] that takes the latent code z(l) and encoded V̂(l) as an input, and outputs the updated latent code z(l+1) for the next layer. The hidden state of the GRU unit is learnable. The initial z(0) is the output from the encoder. Interpretability All introduced components of the UCSG-NET lead us to interpretable predictions of mesh reconstructions. To see this, consider the following case. When α ≈ 0, we obtain occupancy values calculated with Equation 1. Thus, shapes represented as these values will occupy the same volume as meshes reconstructed from parameters {i ∈M |pi, ti,qi}. These meshes can be visualized and edited explicitly. To further combine these primitives through CSG operations, we calculate argmaxi∈M V̂ (l) left,i, argmaxj∈M V̂ (l) right,j for left and right operands respectively. Then, we perform operations A∪∗B, A∩∗B, A−∗B and B−∗A. When ∀l≤Lτ (l) ≈ 0, both V̂(l)left, V̂ (l) right are one-hot vectors, and operations performed on occupancy values, as in Figure 2, are equivalent to CSG operations executed on aforementioned meshes, ex. by merging binary space partitioning trees of meshes [12]. 
Additionally, the whole CSG tree can be pruned to form binary tree, by investigating which meshes were selected through V̂(l)left, V̂ (l) right for the reconstruction, thus leaving the tree with 2L−l nodes at each layer l ≤ L.2 2.2 Training The pipeline is optimized end-to-end using a backpropagation algorithm in a two-stage process. First stage The goal is to find compositions of primitives that minimize the reconstruction error. We employ mean squared error of predicted occupancy values Ô(L) with the ground truthO∗. Values are calculated for X which combines points sampled from the surface of the ground truth, and randomly sampled inside a unit cube (or square for 2D case): LMSE = Ex∈X[(O(L) −O∗)2] (8) We also ensure that the network predicts only positive values of parameters of shapes since only for such these shapes have analytical descriptions: LP = M∑ i=1 ∑ pi∈pi max(−pi, 0) (9) To stop primitives from drifting away from the center of considered space in the early stages of the training, we minimize the clipped squared norm of the translation vector. At the same time, we allow primitives to be freely translated inside the space of interest: LT = M∑ i=1 max(||ti||2, 0.5) (10) The last component includes minimizing |α| to perform continuous binarization of distances into {inside, outside} indicator values. Our goal is to find optimal parameters of our model by minimizing the total loss: Ltotal = LMSE + LP + λTLT + λα|α| (11) where we set λT = λα = 0.1. Second stage We strive for interpretable CSG relations. To achieve this, we output occupancy values, obtained with Equation 1, so these values create binary-valued sets since the α at this stage is near 0. The stage is triggered, when α ≤ 0.05. Its main goal is to enforce V̂(l) for l ≤ L to resemble one-hot mask by decreasing the temperature τ (l) in CSG layers. The optimized loss is defined as: L∗total = Ltotal + λτ L∑ l=1 |τ (l)| (12) where we set λτ = 0.1 for all experiments. Once α ≈ 0 and ∀l≤Lτ (l) ≈ 0, predictions of the CSG layers become fully interpretable as described above, i.e. CSG parse trees of reconstructions can be retrieved and processed using explicit representation of meshes. We also ensure that α and τ (l) stay positive by manual clipping values to small positive number ≈ 10−5, if they become negative. During experiments, we initialize them to α = 1 and τ (l) = 2. Additional implementation details are provided in supplementary material. 2We consider the worst case, since some shapes can be reused in consecutive layers, hence number of used shapes in the layer l can be less than 2L−l. 3 Related Works Problem of the 3D reconstruction gained momentum when the ShapeNet dataset was published [13]. The dataset contains sets of simple, textures meshes, split into multiple, unbalanced categories. Since then, many methods were invented for a discriminative [14–17] and generative applications [5, 6, 18, 19]. Currently, presenting results on this dataset allows the potential reader to quickly grasp how a particular method performs. There exists also a high volume ABC dataset [20] which consists of many complex CAD shapes. However, it is not well established as a benchmark in the community. 3D surface representation Surface representations fall mainly into two categories: explicit (meshes) and implicit (ex. point clouds, voxels, signed distance fields). Many approaches working on meshes assume genus 0 as an initial shape that was refined to retrieve the final shape [21–23, 4, 24, 25]. 
Recent methods use step-by-step prediction of each vertex which position is conditioned on all previous vertices [26] and reinforcement learning to imitate real 3D graphics designer [27]. In MeshRCNN [28] a voxelized shape is retrieved first and then converted into mesh with the Pixel2Mesh [4] framework. Implicit representations need an external method to convert an object to a mesh. 3D-R2N2 [1] and Pix2Vox [29] predict voxelized objects and leverage multiple views of the same object. These methods struggle with the cubic complexity of predictions. To overcome the problem, octree-based convolutional networks [30, 31] use encoded voxel volume to take an advantage of the sparsity of the representation. Point clouds does not include vertex connectivity information. Therefore, ball-pivoting or Poisson surface reconstruction methods has to be employed to reconstruct the mesh [32, 33]. The representation is convenient to be processed using PointNet [14] framework. Objects can be generated using flow-based generative networks [34, 19]. Signed distance fields allow to model shapes with an arbitrary level details in theory. DeepSDF [3] and DualSDF [35] use a variational autodecoder approach to generate shapes. OccNet [36] and IM-NET [18] predict whether a point lies inside or outside of the shape. Such a representation is explored in BSP-NET [5] and CVXNET [6] which decompose shapes into union of convexes. Each convex is created by intersecting binary space partitions. Complexity of these methods provide high reconstruction accuracy but suffer from low interpretability in CAD applications. Convexes used in both methods are also problematic to modify from the perspective of a 3D graphic designer. Moreover, their CSG structure is fixed by definition. They use an intersection of hyperplanes first, and then perform union of predicted convexes. Other approaches such as Visual Primitives (VP) [37] and Superquadrics (SQ) [38] base on a learnable union of defined primitives and provide high interpretability of results. However, superquadrics as primitives contain parameters that control shape and need to be on closed domain. Otherwise, distance function is not well-defined for them and learning these parameters become unstable. Constructive Solid Geometry CSG allows to combine shape primitives with boolean operators to obtain complex shapes. Much research is focused on probabilistic methods that find the most probable explanation of the shape through the process of inverse CSG [39] that outputs a parse tree. Approaches such as CSG-NET [7, 8] and DeepPrimitive [40] integrate finding CSG parse trees with neural networks. However, they heavily rely on a supervision. At each step of the parse tree, a neural network is given a primitive to output and a relation between primitives. The CSG-NET outputs a program with a defined grammar that can be used for rendering. 4 Experiments We evaluate our approach on 2D autoencoding and 3D autoencoding tasks, and compare the results with state-of-the-art reference approaches for object reconstruction: CSG-NET [8] for the 2D task, and VP [37], SQ [37], BAE [41] and BSP-NET [5] for 3D tasks. 4.1 2D Reconstruction For this experiment, we used CAD dataset [7] consisting of 8,000 CAD shapes in three categories: chair, desk, and lamps. Each shape was rendered to 64× 64 image. We compare our method with the CSG-NETSTACK [8], improved version of the CSG-NET [7], on the same validation split. Table 1 contains comparison with CSG-NET working in both modes. 
Following the methodology introduced in existing reference works, methods are evaluated on Chamfer Distance (CD) of reconstructions. We set 2 CSG layers for our method, where each outputs 16 shapes in total. The decoder predicts parameters of 16 circles and 16 rectangles. Our method, while being fully unsupervised, is better then the best variants of CSG-NET and is significantly better with no output refinement. Results show that the method is able to discover good CSG parse trees without explicit ground truth for each level of the tree. Therefore, it can be used where such ground truth is not available. We present qualitative evaluation results in Figure 4 and visualize used shapes for the reconstruction. The UCSG-NET uses proper operations at each level that lead to the correct shape reconstruction. In most cases, it puts rectangles only. The nature of the dataset causes that phenomenon. To avoid possible errors, the network often uses a union of overlapping shapes to pass the primitive untouched. 4.2 3D Autoencoding For the 3D autoencoding task, we train the model on 643 volumes of voxelized shapes in the ShapeNet dataset. We sample 16384 points as a ground truth with a higher probability of sampling near the surface. To speed up the training, we applied early stopping heuristic and stop after 40 epochs of no improvement on the L∗total loss. The data was provided by Chen et al. [5] and bases on the 13 most common classes in the ShapeNet dataset [13]. We used 5 CSG layers to increase the diversity of predictions and set 64 parameters of spheres and boxes to handle the complex nature of the dataset. Each layer predicts CSG 48 combinations of these primitives. Training takes about two days on Nvidia Titan RTX GPU. The CSG inference for a single sample takes 0.068s and the reconstruction - 1.68s using the libigl library. We follow the procedure described in [5] and report Chamfer Distance as a quality measure of the reconstruction. We evaluate it on 4096 points sampled from the surface of the reconstructed object. We reconstruct shapes from CSG trees retrieved from predictions of our model. Obtained results are shown in Table 2. Examples of reconstructed shapes are presented in Figure 5. We can see that it accurately reconstructs the main components of a shape which resembles Visual Primitives (VP) [37] approach where outputs can be treated as shape abstractions. The remaining reference approaches outperformed our model with respect to CD measure. It was mainly caused by failed reconstructions of details, such as engines on wings of airplanes, to which the metric is sensitive. However, our ultimate goal was to provide an effective and interpretable method to construct a CSG tree with limited number of primitives. Finally, we show an example parse tree in Figure 6, used to reconstruct an example shape from the validation set. The model manages to create diverse combinations of primitives and reuse them at any level. Since many primitives were used in later layers, the tree complexity is not necessarily 2L. Notice that the main body and wings were reconstructed separately. We found that the model learns to reconstruct particular semantic parts of the object separately, for example, wings and the hull of an airplane or legs and the counter of a desk. These parts are merged in the final CSG layer where we force a union operation to be performed. See the supplementary material for additional CSG tree visualizations. 
5 Conclusions We demonstrate UCSG-NET - an unsupervised method for discovering constructive solid geometry parse trees that composes primitives to reconstruct an input shape. Our method predicts CSG trees and is able to use different Boolean operations while maintaining reasonable accuracy of reconstructions. Inferred CSG trees are used to form meshes directly, without the need to use explicit reconstruction methods for implicit representations. We show that these trees can be easily visualized, thus providing interpretability about reconstructions step-by-step. Therefore, the method can be applied in CAD applications for quick prototyping of 3D objects. We identified three interesting venues to be taken in future works. In one of them, we would incorporate weak supervision to provide hints to the network what CSG operations are expected to be used for a particular shape. Since there are many CSG trees that reconstruct the same object and the space of solution is vast, such a supervision can improve the final results. Other paths include: using efficient RANSAC [42] to provide initial primitives, formulating a single CSG layer as a Set Transformer [43] or applying regularization techniques known in transformers [44] to increase diversity of predicted CSG trees. 6 Acknowledgments We thank the reviewers for their insightful comments that led us to improve the final manuscript. This work was supported in part by the National Science Centre, Poland research project no. 2016/21/D/ST6/02948, statutory funds of Department of Computational Intelligence and by Microsoft Research. We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used in a part of the research. Broader Impact UCSG-NET can find applications in CAD software. When applied, it is possible to retrieve a CSG parse tree for a particular object of interest. Hence, for a situation when a 3D object was modeled with a sculpting tool, the model can approximate it with single primitives and operations between them. Then, such a reconstruction can be integrated into existing CAD models. We find that beneficial in speeding up the prototyping process in 3D modeling. However, inexperienced CAD software users can rely heavily on presented assumptions. In the era of 3D printing ubiquity, printed elements out of reconstructed CSG parse trees can be erroneous, thus breaking the whole item. Therefore, we note that integrating our method into existing software should serve mainly as a prototyping device. We encourage further research on an unsupervised CSG parse tree recovery. We suspect that this area stagnated due to constraining limitations that a CSG tree creates a single object, but a single object can be created out of infinity many CSG trees. Therefore, new methods need to be invented that provide good approximations of CSG trees with short inference times.
1. What is the main contribution of the paper regarding unsupervised learning of constructive solid geometry parse trees? 2. What are the strengths of the proposed approach, particularly in its combination of primitive-based shape decomposition and (supervised) CSG tree parsing? 3. What are the weaknesses of the paper, especially regarding its results and their comparison to prior works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a method to, in an unsupervised manner, learn a constructive solid geometry parse tree that represents 2D/3D shapes. The model first predicts a set of implicit primitives from an encoding of the input shape. It then learn to apply CSG operations in a bottom-up, hierarchical way to create the final output shape. The model is trained with interpretability in mind, and the results show that the model can achieve good performances on 2D/3D reconstruction tasks and the resulting CSG tree is of acceptable quality, though not as good as fully supervised ones. Strengths -A novel enough idea that combines the strengths of two lines of work: primitive-based shape decomposition and (supervised) CSG tree parsing. The resulting paradigm decomposes shapes into more interpretable primitives than works such as BSP-NET, whereas also require much less supervision than works such as CSG-NET. -Sound technical details that all seem important to the ultimate goal of the method (accurate and interpretable CSG trees). -Good evaluation protocols that compares the method accurately with relevant prior works. Weaknesses -The results are not clearly superior, both quantitatively (worse reconstruction than some prior works) and qualitatively (some of the results, especially 3D, look questionable). It is also worth noting that the resulting CSG-trees are often redundant and qualitatively dissimilar to human created ones. However, I feel that all these limitations are acceptable for a paper that aims to establish a momentum in a new direction. -The evaluation mostly focuses on reconstruction quality, but since the goal is interpretable CSG programs, more evaluations on interpretability / similarity to human made programs are desirable.
NIPS
Title Online Meta-Critic Learning for Off-Policy Actor-Critic Methods Abstract Off-Policy Actor-Critic (OffP-AC) methods have proven successful in a variety of continuous control tasks. Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a flexible meta-critic framework based on observing the learning process and metalearning an additional loss for the actor that accelerates and improves actor-critic learning. Compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning benefits to a variety of continuous control tasks when combined with contemporary OffP-AC methods DDPG, TD3 and SAC. 1 Introduction Off-policy Actor-Critic (OffP-AC) methods are currently central in deep reinforcement learning (RL) research due to their greater sample efficiency compared to on-policy alternatives. On-policy learning requires new trajectories to be collected for each update to the policy, and is expensive as the number of gradient steps and samples per step increases with task-complexity even for contemporary TRPO [33], PPO [34] and A3C [27] algorithms. Off-policy methods, such as DDPG [20], TD3 [9] and SAC [13] achieve greater sample efficiency as they can learn from randomly sampled historical transitions without a time sequence requirement, making better use of past experience. The critic estimates action-value (Q-value) function using a differentiable function approximator, and the actor updates its policy parameters in the direction of the approximate action-value gradient. Briefly, the critic provides a loss to guide the actor, and is trained in turn to estimate the environmental action-value under the current policy via temporal-difference learning [38]. In all these cases the learning objective function is hand-crafted and fixed. Recently, meta-learning [14] has become topical as a paradigm to accelerate RL by learning aspects of the learning strategy, for example, learning fast adaptation strategies [7, 30, 31], losses [3, 15, 17, 36], optimisation strategies [6], exploration strategies [11], hyperparameters [40, 42], and intrinsic rewards [44]. However, most of these works perform meta-learning on a family of tasks or environments and amortize this huge cost by deploying the trained strategy for fast learning on a new task. ∗Contributed equally. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper we introduce a meta-critic network to enhance OffP-AC learning methods. The metacritic augments the vanilla critic to provide an additional loss to guide the actor’s learning. However, compared to the vanilla critic, the meta-critic is explicitly (meta)-trained to accelerate the learning process rather than merely estimate the action-value function. Overall, the actor is trained by both critic and meta-critic provided losses, the critic is trained by temporal-difference as usual, and crucially the meta-critic is trained to generate maximum learning progress in the actor. 
Both the critic and meta-critic use randomly sampled transitions for effective OffP-AC learning, providing superior sample efficiency compared to existing on-policy meta-learners. We emphasize that meta-critic can be successfully learned online within a single task. This is in contrast to the currently widely used meta-learning paradigm – where entire task families are required to provide enough data for meta-learning, and to provide new tasks to amortize the huge cost of meta-learning. Our framework augments vanilla AC learning with an additional meta-learned critic, which can be seen as providing intrinsic motivation towards optimum actor learning progress [28]. As analogously observed in recent meta-learning studies [8], our loss-learning can be formalized as bi-level optimisation with the upper level being meta-critic learning, and lower level being conventional learning. We solve this joint optimisation by iteratively updating the meta-critic and base learner online in a single task. Our strategy is related to the meta-loss learning in EPG [15], but learned online rather than offline, and integrated with OffP-AC rather than their on-policy policy-gradient learning. The most related prior work is LIRPG [44], which meta-learns an intrinsic reward online. However, their intrinsic reward just provides a helpful scalar offset to the environmental reward for on-policy trajectory optimisation via policy-gradient [37]. In contrast our meta-critic provides a loss for direct actor optimisation using sampled transitions, and achieves dramatically better sample efficiency than LIRPG reward learning. We evaluate several continuous control benchmarks and show that online meta-critic learning can improve contemporary OffP-AC algorithms including DDPG, TD3 and SAC. 2 Background and Related Work Policy-Gradient (PG) RL Methods. Reinforcement learning involves an agent interacting with environment E. At each time t, the agent receives an observation st, takes a (possibly stochastic) action at based on its policy π : S → A, and receives a reward rt and new state st+1. The tuple (st, at, rt, st+1) describes a state transition. The objective of RL is to find the optimal policy πφ, which maximizes the expected cumulative return J . In on-policy RL, J is defined as the discounted episodic return based on a sequential trajectory over horizon H: (s0, a0, r0, s1 · · · , sH , aH , rH , sH+1). J = Ert,st∼E,at∼π [∑H t=0 γ trt ] . In on-policy AC, r is represented by a surrogate state-value V (st) from its critic. Since J is only a scalar value that is not differentiable, the gradient of J with respect to policy πφ has to be optimised under the policy gradient theorem [37]: ∇φJ(φ) = E [J ∇φ log πφ(at|st)]. However, with respect to sample efficiency, even exploiting tricks like importance sampling and improved application of A2C [44], the use of full trajectories is less effective than the use of individual transitions by off-policy methods. Off-policy actor-critic architectures provide better sample efficiency by reusing past experience (previously collected transitions). DDPG [20] borrows two main ideas from Deep Q Networks [25, 26]: a replay buffer and a target Q network to give consistent targets during temporal-difference backups. TD3 (Twin Delayed Deep Deterministic policy gradient) [9] develops a variant of Double Q-learning by taking the minimum value between a pair of critics to limit over-estimation, and the computational cost is reduced by using a single actor optimised with respect to Qθ1 . 
SAC (Soft Actor-Critic) [12, 13] proposes a maximum entropy RL framework where its stochastic actor aims to simultaneously maximize expected action-value and entropy. The latest version of SAC [13] also includes the “the minimum value between both critics” idea in its implementation. Specifically, in these off-policy AC methods, parameterized policies πφ can be directly updated by defining actor loss in terms of the expected return J(φ) and taking its gradient ∇φJ(φ), where J(φ) depends on the action-value Qθ(s, a). Based on a batch of transitions randomly sampled from the buffer, the loss for actor provided by the critic is basically calculated as: Lcritic = −J(φ) = −Es∼pπQθ(s, a)|a=πφ(s). (1) Specifically, the loss Lcritic for actor in TD3 and SAC is calculated as Eq. (2) and Eq. (3) respectively: LcriticTD3 = −Es∼pπQθ1(s, a)|a=πφ(s); (2) LcriticSAC = Es∼pπ [α log (πφ(a|s))−Qθ(s, a)|a=πφ(s)]. (3) The actor is then updated as ∆φ = α∇φLcritic, following the critic’s gradient to increase the likelihood of actions that achieve a higher Q-value. Meanwhile, the critic θ uses Q-learning updates to estimate the action-value function: θ ← arg min θ E(Qθ(st, at)− rt − γQθ(st+1, π(st+1))2. (4) Meta Learning for RL. Meta-learning (a.k.a. learning to learn) [7, 14, 32] has received a resurgence in interest recently due to its potential to improve learning performance and sample efficiency in RL [11]. Several studies learn optimisers that provide policy updates with respect to known loss or reward functions [1, 6, 23]. A few studies learn hyperparameters [40, 42], loss functions [3, 15, 36] or rewards [44] that steer the learning of standard optimisers. Our meta-critic framework is in the category of loss-function meta-learning, but unlike most of these we are able to meta-learn the loss function online in parallel to learning a single extrinsic task rather. No costly offline learning on a task family is required as in Houthooft et al. [15], Sung et al. [36]. Most current Meta-RL methods are based on on-policy policy-gradient, limiting sample efficiency. For example, while LIRPG [44] is one of the few prior works to attempt online meta-learning, it is ineffective in practice due to only providing a scalar reward increment rather than a loss for direct optimisation. A few meta-RL studies have begun to address off-policy RL, for conventional multi-task meta-learning [30] and for optimising transfer vs forgetting in continual learning of multiple tasks [31]. The contribution of our Meta-Critic is to enhance state-of-the-art single-task OffP-AC RL with online meta-learning. Loss Learning. Loss learning has been exploited in ‘learning to teach’ [41] and surrogate loss learning [10, 16] where a teacher network predicts the parameters of a manually designed loss in the supervised learning. In contrast our meta-critic is itself a differentiable loss, and is designed for use in RL. Other applications learn losses that improve model robustness to out of distribution samples [2, 19]. Some recent loss learning studies in RL focus mainly on the multi-task adaptation scenarios [3, 15, 36] or the generalization to entirely different environments [17]. Our loss learning architecture is related to Li et al. [19], but designed for accelerating single-task OffP-AC RL rather than improving robustness in multi-domain supervised learning. 3 Methodology We aim to learn a meta-critic which augments the vanilla critic by providing an additional loss Lmcriticω for the actor. 
The vanilla loss for the policy (actor) is Lcritic given by the conventional critic. The actor is trained by Lcritic and Lmcriticω via stochastic gradient descent. The meta-critic parameter ω is optimized by meta-learning to accelerate actor learning progress. Here we follow the notation in TD3 and SAC that φ and θ denote actors and critics respectively. Algorithm Overview. We train a meta-critic loss Lmcriticω that augments the vanilla critic Lcritic to enhance actor learning. Specifically, it should lead to the actor φ having improved performance on the normal task, as measured by Lcritic on the validation data, after learning on both meta-critic and vanilla critic losses. This can be seen as a bi-level optimisation problem1 [8, 14, 29] of the form: ω = arg min ω Lmeta(dval;φ ∗) s.t. φ∗ = arg min φ (Lcritic(dtrn;φ) + L mcritic ω (dtrn;φ)), (5) where we can assume Lmeta(·) = Lcritic(·) for now. dtrn and dval are different transition batches from replay buffer. Here the lower-level optimisation trains actor φ to minimize both the normal loss and meta-critic-provided loss on training samples. The upper-level optimisation further requires meta-critic ω to have produced a learned actor φ∗ that minimizes a meta-loss that measures actor’s normal performance on a set of validation samples, after being trained by meta-critic. Note that in principle the lower-level optimisation could purely rely on Lmcriticω analogously to the procedure in EPG [15], but we find optimising their sum greatly increases learning stability and speed. Eq. (5) is satisfied when meta-critic successfully trains the actor for good performance on the normal task 1See Franceschi et al. [8] for a discussion on convergence of bi-level algorithms. Algorithm 1 Online Meta-Critic Learning for OffP-AC RL φ, θ, ω,D ← ∅ // Initialise actor, critic, meta-critic and buffer for each iteration do for each environment step do at ∼ πφ(at|st) // Select action according to the current policy st+1 ∼ p(st+1|st, at), rt // Observe reward rt and new state st+1 D ← D ∪ {(st, at, rt, st+1)} // Store the transition in the replay buffer end for for each gradient step do Sample mini-batch dtrn from D Update θ ← Eq. (4) // Update the critic parameters meta-train: Lcritic ← Eqs. (1), (2) or (3) // Vanilla-critic-provided loss for actor Lmcriticω ← Eqs. (10) or (11) // Meta-critic-provided loss for actor φold = φ− η∇φLcritic // Update actor according to Lcritic only φnew = φold − η∇φLmcriticω // Update actor according to Lcritic and Lmcriticω meta-test: Sample mini-batch dval from D Lmeta(dval;φnew) or L meta clip(dval;φold, φnew)← Eqs. (8) or (9) // Meta-loss meta-optimisation φ← φ− η(∇φLcritic +∇φLmcriticω ) // Update the actor parameters ω ← ω − η∇ωLmeta or ω − η∇ωLmetaclip // Update the meta-critic parameters end for end for=0 as measured by validation meta loss. The update of vanilla-critic is also in the lower loop, but as it updates as usual, we focus on the actor and meta-critic optimisation for simplicity of exposition. In this setup the meta-critic is a neural network hω(dtrn;φ) that takes as input some featurisation of the actor φ and the states and actions in dtrn. The meta-critic network must produce a scalar output, which we can then treat as a loss Lmcriticω := hω , and must be differentiable with respect to φ. We next discuss the overall optimisation flow and the specific meta-critic architecture. Meta-Optimisation Flow. To optimise Eq. 
Meta-Optimisation Flow. To optimise Eq. (5), we iteratively update the meta-critic parameter ω (upper level) and the actor and vanilla-critic parameters φ and θ (lower level). At each iteration, we perform: (i) Meta-train: sample a mini-batch of transitions and putatively update the policy φ based on the vanilla-critic-provided loss L^{critic} and the meta-critic-provided loss L^{mcritic}_ω. (ii) Meta-test: sample another mini-batch of transitions to evaluate the performance of the updated policy according to L^{meta}. (iii) Meta-optimisation: update the meta-critic ω to maximize the performance on the validation batch, and perform the real actor update according to both losses. Thus the meta-critic co-evolves with the actor as they are trained online and in parallel. Figure 1 and Alg. 1 summarize the process, and the details of each step are explained next. The meta-critic can be flexibly integrated with any OffP-AC algorithm, and further implementation details for DDPG, TD3 and SAC are in the supplementary material.

Updating Actor Parameters (φ). During meta-train, we sample a mini-batch of transitions d_{trn} = {(s_i, a_i, r_i, s_{i+1})} with batch size N from the replay buffer D. We update the policy using both losses as:

φ_new = φ − η ∂L^{critic}(d_{trn})/∂φ − η ∂L^{mcritic}_ω(d_{trn})/∂φ.   (6)

We also compute a separate update:

φ_old = φ − η ∂L^{critic}(d_{trn})/∂φ   (7)

that only leverages the vanilla-critic-provided loss. If the meta-critic provided a beneficial source of loss, φ_new should be a better parameter than φ, and in particular a better parameter than φ_old. We will use this comparison in the next meta-test step.

Updating Meta-Critic Parameters (ω). To train the meta-critic, we sample another mini-batch of transitions d_{val} = {(s^{val}_i, a^{val}_i, r^{val}_i, s^{val}_{i+1})} with batch size M. The use of a validation batch for bi-level meta-optimisation [8, 29] ensures the meta-learned component does not overfit. As our framework is off-policy, this does not incur any sample-efficiency cost. The meta-critic is then updated by a meta-loss, ω ← ω − η∇_ω L^{meta}(·), that measures actor performance after learning.

Meta-Loss Definition. The most intuitive meta-loss definition is the validation performance of the updated actor φ_new as measured by the normal critic:

L^{meta} = L^{critic}(d_{val}; φ_new).   (8)

However, we find it helpful for optimisation efficiency and stability to optimise the clipped difference between updates with and without the meta-critic's input:

L^{meta}_{clip} = tanh( L^{critic}(d_{val}; φ_new) − L^{critic}(d_{val}; φ_old) ).   (9)

This is simply a monotonic re-centering and re-scaling of L^{critic}. (The parameter ω that minimizes L^{meta}_{clip} of Eq. (9) also minimizes L^{meta} of Eq. (8), and vice-versa.) Note that in Eq. (9) the updated actor φ_new depends on the feedback given by the meta-critic ω while φ_old does not; thus only the first term is optimised with respect to ω. In this setup the L^{critic}(d_{val}; φ_new) term should obtain high reward/low loss on the validation batch, and the latter L^{critic}(d_{val}; φ_old) provides a baseline, analogous to the baseline widely used to accelerate and stabilize policy-gradient RL. The tanh ensures the meta-loss range is always nicely distributed in (−1, 1) and caps the magnitude of the meta-gradient. In essence, the meta-loss lets the agent ask itself: “Did meta-critic learning improve validation performance compared to vanilla learning?”, and adjust the meta-critic ω accordingly. We will compare the options L^{meta} and L^{meta}_{clip} later.
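As a concrete sketch of one meta-train / meta-test / meta-optimisation cycle (Eqs. (6)–(9)), the step below is written for a deterministic DDPG/TD3-style actor in PyTorch (version 2.0 or later, for `torch.func.functional_call`). It reuses the hypothetical `ddpg_actor_loss` helper from the earlier listing and a `meta_critic_loss` helper (sketched after Eq. (10) below); batch objects are assumed to expose a `.states` tensor. This is our reconstruction, not the authors' released code.

```python
import torch
from torch.func import functional_call

def meta_step(actor, meta_critic, critic, d_trn, d_val, eta, actor_opt, meta_opt):
    names = [n for n, _ in actor.named_parameters()]
    params = [p for _, p in actor.named_parameters()]

    # Meta-train (Eqs. (6)/(7)): putative updates. create_graph=True keeps
    # phi_new differentiable w.r.t. the meta-critic parameters omega.
    l_critic = ddpg_actor_loss(critic, actor, d_trn.states)            # Eq. (1)
    g = torch.autograd.grad(l_critic, params, create_graph=True)
    phi_old = {n: p - eta * gi for n, p, gi in zip(names, params, g)}  # Eq. (7)

    l_mcritic = meta_critic_loss(meta_critic, actor, d_trn.states)     # Eq. (10)
    g_m = torch.autograd.grad(l_mcritic, params, create_graph=True)
    phi_new = {n: phi_old[n] - eta * gi for n, gi in zip(names, g_m)}  # Eq. (6)

    # Meta-test (Eq. (9)): validation actor loss of phi_new vs the phi_old
    # baseline; detaching the baseline means only the first term trains omega.
    a_new = functional_call(actor, phi_new, (d_val.states,))
    a_old = functional_call(actor, phi_old, (d_val.states,))
    l_new = -critic(d_val.states, a_new).mean()
    l_old = -critic(d_val.states, a_old).mean()
    l_meta_clip = torch.tanh(l_new - l_old.detach())

    # Meta-optimisation: update omega by the meta-loss, then perform the real
    # actor update with both losses (fresh forward passes give clean graphs;
    # stray gradients left on the actor/critic by the meta step must be
    # cleared by their own zero_grad() calls before their next updates).
    meta_opt.zero_grad()
    l_meta_clip.backward()
    meta_opt.step()

    actor_opt.zero_grad()
    (ddpg_actor_loss(critic, actor, d_trn.states)
     + meta_critic_loss(meta_critic, actor, d_trn.states)).backward()
    actor_opt.step()
```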
Designing Meta-Critic (h_ω). The meta-critic h_ω implements the additional loss for the actor. The design space for h_ω has several requirements: (i) Its input must depend on the policy parameters φ, because this meta-critic-provided loss is also used to update the policy. (ii) It should be permutation invariant to the transitions in d_{trn}, i.e., it should not make a difference whether we feed the randomly sampled transitions indexed [1,2,3] or [3,2,1]. A naive way to achieve (i) is given in MetaReg [2], which meta-learns a parameter regularizer: h_ω(φ) = Σ_i ω_i |φ_i|. Although this form of h_ω acts directly on φ, it does not exploit state information, and it introduces a large number of parameters in h_ω, as φ may be a high-dimensional neural network. Therefore, we design a more efficient and effective form of h_ω that meets both of these requirements. Similar to the feature extractor in supervised learning, the actor needs to analyse and extract information from states for decision-making. We assume the policy network can be represented as π_φ(s) = π̂(π̄(s)) and decomposed into feature-extraction π̄_φ and decision-making π̂_φ (i.e., the last layer of the full policy network) modules. Thus the output of the penultimate layer of the full policy network is exactly the output of the feature extractor π̄_φ(s), and this feature output jointly encodes φ and s. Given this encoding, we implement h_ω(d_{trn}; φ) as a three-layer multi-layer perceptron f_ω whose input is the extracted feature from π̄_φ(s). Here we consider two designs for the meta-critic h_ω: using our joint feature alone (Eq. (10)), or augmenting the joint feature with states and actions (Eq. (11)):

h_ω(d_{trn}; φ) = (1/N) Σ_{i=1}^{N} f_ω( π̄_φ(s_i) ),   (10)

h_ω(d_{trn}; φ) = (1/N) Σ_{i=1}^{N} f_ω( π̄_φ(s_i), s_i, a_i ).   (11)

h_ω serves as an auxiliary critic whose input is based on the batch-wise set-embedding [43] of our joint actor-state feature. That is to say, d_{trn} is a randomly sampled mini-batch of transitions from the replay buffer, the s (and a) of these transitions are input to h_ω, and we finally obtain the meta-critic-provided loss for d_{trn}. Here, our design of Eq. (11) also includes the cues in LIRPG and EPG, where s_i and a_i are used as the input of their learned reward and loss respectively. We apply a softplus activation to the final layer of h_ω, following the idea in TD3 that the vanilla critic may over-estimate, so a non-negative additional actor loss can mitigate such over-estimation. Moreover, note that only s_i (and a_i) from d_{trn} are used to calculate L^{critic} and L^{mcritic}_ω, while s_i, a_i, r_i and s_{i+1} are all used for optimising the vanilla critic.

4 Experiments and Evaluation

We take the algorithms DDPG, TD3 and SAC as our vanilla baselines, and denote their enhancements by meta-critic as DDPG-MC, TD3-MC and SAC-MC. All -MCs augment their built-in vanilla critic with the proposed meta-critic. We take Eq. (10) and L^{meta}_{clip} as the default meta-critic setup, and compare alternatives in the ablation study. For our implementation of the meta-critic, we use a three-layer neural network with an input dimension matching π̄ (300 in DDPG and TD3, 256 in SAC), two hidden feed-forward layers of 100 hidden nodes each, and ReLU non-linearity between layers.

Implementation Details. We evaluate the methods on a suite of seven MuJoCo tasks [39] in OpenAI Gym [4], two MuJoCo tasks in rllab [5], and a simulated racing car, TORCS [22]. For MuJoCo-Gym, we use the latest V2 tasks instead of the V1 tasks used in TD3 and the old SAC [12], without modification to their original environments or rewards. We use the open-source implementations “OurDDPG”², TD3³ and SAC⁴.

² https://github.com/sfujim/TD3/blob/master/OurDDPG.py
³ https://github.com/sfujim/TD3/blob/master/TD3.py
⁴ https://github.com/pranz24/pytorch-soft-actor-critic
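Reading the h_ω design together with the stated sizes (input dimension of π̄, two hidden layers of 100 units, ReLU, softplus output), the Eq. (10) meta-critic could be sketched as the module below. The class name, the `actor.features` accessor for π̄_φ(s), and the per-algorithm default dimensions are our assumptions for illustration.

```python
import torch.nn as nn

class MetaCritic(nn.Module):
    """h_omega of Eq. (10): a scalar set-embedding loss over actor features.
    Eq. (11) would simply concatenate (s_i, a_i) onto the input features."""
    def __init__(self, feat_dim=256, hidden=100):  # 256 for SAC; 300 for DDPG/TD3
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # non-negative scalar output
        )

    def forward(self, features):
        # Mean over the mini-batch gives permutation invariance to transitions
        return self.f(features).mean()

def meta_critic_loss(meta_critic, actor, states):
    # pi_bar(s): the actor's penultimate-layer output, jointly encoding phi and s;
    # `actor.features` is an assumed accessor for that layer
    return meta_critic(actor.features(states))
```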
Here, “OurDDPG” is the re-tuned version of DDPG implemented in Fujimoto et al. [9] with the same hyper-parameters. In the MuJoCo cases we integrate our meta-critic with learning rate 0.001. The details of the TORCS hyper-parameters are in the supplementary material. Our demo code can be viewed at https://github.com/zwfightzw/Meta-Critic.

4.1 Evaluation of Meta-Critic OffP-AC Learning

TD3 and SAC. Figure 3 reports the learning curves for TD3. For some tasks the vanilla TD3's performance declines in the long run, while TD3-MC shows improved stability with much higher asymptotic performance. Thus TD3-MC provides comparable or better learning performance in each case, while Table 1 shows the clear improvement in max average return. For SAC in Figure 4, note that we use the most recent update of SAC [13], which is effectively the combination SAC+TD3. Although SAC+TD3 is arguably the strongest existing method, SAC-MC still gives a clear boost to the asymptotic performance on many tasks, especially the most challenging, TORCS.

Comparison vs PPO-LIRPG. Intrinsic Reward Learning for PPO [44] is the method most related to our work in performing online single-task meta-learning of an additional reward/loss. The original PPO-LIRPG was evaluated on a modified environment with hidden rewards; here we apply it to the standard unmodified learning tasks that we aim to improve. Table 1 shows that: (i) in this conventional setting, PPO-LIRPG worsens rather than improves basic PPO performance; (ii) overall, OffP-AC methods generally perform better than on-policy PPO in most environments. This shows the importance of our meta-learning contribution to the off-policy setting. In general, Meta-Critic is preferable to PPO-LIRPG because the latter only provides a scalar reward bonus that helps the policy indirectly via high-variance policy-gradient updates, while ours provides a direct loss.

Summary. Table 1 and Figure 5 summarize all the results by max average return. SAC-MC generally performs best, and the -MCs are generally comparable or better than their corresponding vanilla alternatives. The -MCs usually also provide reduced variance in return compared to their baselines.

4.2 Further Analysis

Loss and Optimisation Analysis. We take a tabular MDP [6] (|S| = 2, |A| = 2) as an example, using DDPG. Figure 6 first reports the actor's normal L^{critic}, along with the introduced h_ω (i.e., L^{mcritic}_ω) and L^{meta}_{clip}, over 5 trials. We also plot model optimisation trajectories (pink dots) via a 2D weight-space slice in the right part of Figure 6; they are plotted over the average reward surface. Following the network visualization in Li et al. [18], we calculate the subspace to plot as follows: let φ_i denote the model parameters at episode i and φ_n the final estimate (here n = 100). We apply PCA to the matrix M = [φ_0 − φ_n, . . . , φ_{n−1} − φ_n] and take the two most explanatory directions of this optimisation path. Parameters are then projected onto the plane defined by these directions for plotting, and models at each point on the plane are densely evaluated to get the average reward.
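The stated projection procedure can be reproduced in a few lines of NumPy/scikit-learn; the sketch below is our reconstruction, with checkpoint collection and the reward-surface evaluation left abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

def trajectory_plane(checkpoints):
    """checkpoints: flattened actor parameter vectors [phi_0, ..., phi_n]."""
    phi_n = checkpoints[-1]
    # Rows of M are the displacements of each episode's parameters from phi_n
    M = np.stack([phi - phi_n for phi in checkpoints[:-1]])
    pca = PCA(n_components=2).fit(M)
    basis = pca.components_        # the two most explanatory directions
    coords = M @ basis.T           # 2D coordinates of the optimisation path
    return basis, coords           # evaluate reward on a grid over `basis` to plot
```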
Figure 6 shows: (i) DDPG-MC converges faster to a lower value of L^{critic}, demonstrating the meta-critic's ability to accelerate learning. (ii) The meta-loss is randomly initialised at the start, but as ω begins to be trained via meta-test on validation data, the meta-loss drops swiftly below zero, meaning φ_new becomes better than φ_old. In the late stage, the meta-loss goes towards zero, indicating that all of h_ω's knowledge has been distilled to help the actor. Thus the meta-critic is helpful in defining better update directions in the early stages of learning (but note that it can still impact later-stage learning via changing choices made early). (iii) L^{mcritic}_ω converges smoothly under the supervision of the meta-loss. (iv) DDPG-MC has a very direct and fast optimisation movement to the high-reward zone of parameter space, while vanilla DDPG moves slowly through the low-reward space before finally finding the direction to the high-reward zone.

Ablation on h_ω design. We run Walker2d under SAC-MC with the alternative h_ω from Eq. (11), or in the MetaReg [2] format (inputting actor parameters directly). In Table 2, we record the max average return and the sum average return (area under the average reward curve) of evaluations over all time steps. Eq. (11) achieves the highest max average return and our default h_ω (Eq. (10)) attains the highest sum average return. We also see some improvement for h_ω(φ) in the MetaReg format, but its huge number (73484) of parameters is expensive. Overall, all meta-critic designs provide at least a small improvement over vanilla SAC.

Ablation on meta-loss design. We considered two meta-loss designs in Eqs. (8) and (9). For L^{meta}_{clip} in Eq. (9), we use L^{critic}(d_{val}; φ_old) as a baseline to improve the numerical stability of the gradient update. To evaluate this design, we also compare with using the vanilla L^{meta} of Eq. (8). The last column in Table 2 shows that vanilla L^{meta} barely improves on vanilla SAC, validating our meta-loss design.

Controlling for compute cost and parameter count. We find that the meta-critic increases compute cost by 15–30% and parameter count by 10% above the baselines during training (the latter is negligible, as it is small compared to the replay buffer's memory footprint); this is primarily attributable to the cost of evaluating the meta-loss L^{meta}_{clip} and hence L^{mcritic}_ω. To investigate whether the benefit of the meta-critic can be replicated simply by increasing compute expenditure or model size, we perform control experiments by increasing the vanilla baselines' compute budget or parameter count to match the -MCs. Specifically, if the meta-critic takes K% more compute than the baseline, then we re-run the baseline with K% more update steps per iteration. This ‘+updates’ condition provides the baseline with more mini-batch samples while controlling the number of environment interactions. Note that due to implementation constraints of SAC, increasing updates in ‘SAC+updates’ requires taking at least 2x gradient updates per environment step compared to SAC and SAC-MC; thus it takes 100% more updates than SAC and significantly more compute time than SAC-MC. To control for parameter count, if the meta-critic takes N% more parameters than the baseline, then we increase the baselines' network size by N% more parameters by linearly scaling up the size of all hidden layers (‘+params’). The max average return results for the seven tasks in these control experiments are shown in Table 3, and the detailed learning curves of the control experiments are in the supplementary material. Overall, there is no consistent benefit in providing the baseline with more compute iterations or parameters, and in many environments they perform worse than the baseline or even fail entirely, especially in the ‘+updates’ condition.
Thus the -MCs' good performance cannot be replicated simply by a corresponding increase in gradient steps or parameter count for the baseline.

Discussion. We introduce an auxiliary meta-critic that goes beyond the information available to the vanilla critic by leveraging measured actor learning progress (Eq. (9)). This is a generic module that can potentially improve any derivative-based off-policy actor-critic RL method for a minor overhead at train time and no overhead at test time, and it can be applied directly to single tasks without requiring task families, as most other meta-RL methods do [3, 7, 15, 30]. Our method is myopic, in that it uses a single inner (base) step per outer (meta) step. A longer-horizon look-ahead may ultimately lead to superior performance; however, this incurs the cost of additional higher-order gradients and associated memory use, and the risk of unstable high-variance gradients [21, 29]. New meta-optimizers [24] may ultimately enable these issues to be solved, but we leave this to future work.

5 Conclusion

We present Meta-Critic, a derivative-based auxiliary critic module for off-policy actor-critic reinforcement learning methods that can be meta-learned online during single-task learning. The meta-critic is trained to provide an additional loss for the actor to assist actor learning progress, and it leads to long-run performance gains in continuous control. This meta-critic module can be flexibly incorporated into various contemporary OffP-AC methods to boost performance. In future work, we plan to apply the meta-critic to conventional meta-learning with multi-task and multi-domain RL.

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (No. 61751208), the Advanced Research Program (No. 41412050202), and the Engineering and Physical Sciences Research Council of the UK (EPSRC), Grant number EP/S000631/1.

Broader Impact

We introduced a framework for meta-RL where learning is improved through the addition of an auxiliary meta-critic trained online to maximise learning progress. This technology could benefit all current and potential future downstream applications of reinforcement learning where learning speed and/or asymptotic performance can still be improved, such as game-playing agents and robot control. Faster reinforcement learning algorithms such as meta-critic could help reduce the energy requirements of training agents, which can add up to a significant environmental cost [35], and bring us one step closer to enabling learning-based control of physical robots, which is currently rare due to the sample inefficiency of RL algorithms in comparison to the limited robustness of real robots to the physical wear and tear of prolonged operation.

Returning to our specific algorithmic contribution, introducing learnable reward functions rather than relying solely on manually specified rewards introduces a certain additional level of complexity and associated risk above that of conventional reinforcement learning. If the agent participates in defining its own reward, one would like to be able to interpret the learned reward function and validate that it is reasonable and will not lead to the robot learning to perform undesirable behaviours. This suggests that the development of explainable AI techniques suited to reward-function analysis could be a good topic for future research.
1. What is the focus and contribution of the paper regarding actor-critic reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its novelty and analysis?
3. What are the weaknesses of the paper, especially regarding its experimental presentation and comparison with other works?
4. Do you have any concerns or suggestions for improving the paper's content?
Summary and Contributions

This paper introduces a meta-learning method that improves the performance of actor-critic reinforcement learning algorithms. The authors achieve that by adding a meta-learned loss to the critic, which in combination with the standard TD-error improves the performance of the actor. The method is general enough to be applied to any actor-critic RL algorithm.

------- after rebuttal --------

Given the authors' response I'm willing to keep my current, positive score but I really hope that the authors introduce all the promised changes.

Strengths

- The paper introduces a novel idea that I haven't seen before. Meta-RL has been traditionally applied to multi-task settings, whereas the authors show how it can be done in a single-task actor-critic scenario.
- The authors provide very interesting analysis in Fig. 5, indicating on a simple example why and how their method works.
- The authors provide interesting ablation experiments explaining why they made certain decisions.

Weaknesses

- The supplementary material is very valuable and provides much more context than what we see in the paper in terms of the final results. Unfortunately, the results presented in the paper don't normalize the aspects of the environments that vary across different methods, making it very difficult to compare the algorithms.
- In terms of the supplementary material, the authors only show the parameter- and updates-adjusted results on 1-2 environments and they don't comment on other environments. These are the most important results in the paper! This makes it impossible to judge the actual contribution of their algorithm.
1. What is the focus and contribution of the paper regarding off-policy actor-critic methods?
2. What are the strengths of the proposed approach, particularly in its generality and empirical results?
3. What are the weaknesses of the paper, especially regarding its limitations and lack of discussion on feasibility and usefulness?
4. Do you have any concerns or questions regarding the method's comparison to other auxiliary losses or its impact on computation time and memory footprint?
Summary and Contributions

This paper proposes a method to improve final performance for off-policy actor-critic methods, by adding a learned loss to the critic that is meta-learned online alongside the off-policy method on a single task. In the paper, the authors give an overview of their algorithm and then dive into the different aspects of it in more detail. Empirically, the proposed meta-critic is applied on top of three off-policy methods (DDPG, TD3, SAC) on seven different environments (MuJoCo tasks from gym and rllab, and a simulated racing car environment called TORCS) for five seeds each. For DDPG, the performance increase is significant on most environments; for TD3 and SAC the performance is either similar or slightly higher (except on TORCS, where the performance increase is pretty significant).

-----------------------------------------------------------------------

Update: I read the other reviews and the authors' feedback.

R2Q1 - Thanks for the response, it would be great if you could include such a discussion in a revised version of the paper. Additional experiments / comparisons to other auxiliary tasks that boost performance could strengthen the paper even further IMO but are not strictly necessary.

R2Q2 - Thanks. Again, bringing this up in the paper even in 1-2 sentences would be nice for the reader just to be aware of this. I still wonder if there's situations where a non-myopic loss is necessary and the current approach would break down, and am curious to potentially read about this in future work.

Given the authors' feedback and promises made for updating the paper I'm keeping my current score.

Strengths

The proposed method is well explained, and is general in that it can be applied to any off-policy actor-critic algorithm. I like that the authors outline how to exactly do this for three prominent off-policy actor-critic methods, and explain how to add the meta-critic loss for each. The empirical results on DDPG are strong compared to the baselines.

Weaknesses

The value of the proposed method for me is twofold: First, it is interesting to see that meta-learning a critic loss online alongside an RL algorithm works at all, and is an interesting addition to the literature that looks at improving performance for *single* tasks using meta-learning. Second, this method could be used to increase performance when training agents on an environment where we care about getting a high end-performance. A weakness that I see in the paper is that I find it hard to assess how feasible / useful this actually is. Why and when should I use the meta-critic on top of my off-policy algorithm, compared to other options? How does this compare to other auxiliary losses that can just be added on top of off-policy actor-critic algorithms? (Which ones are there?) How much does it increase my computation time / memory footprint? (Some information on this is in the appendix; I think 1-2 sentences about this in the main text would be nice as well.) So for me, any information that the authors can include into the paper that might help somebody make a decision about whether/when/where to use meta-critic would strengthen the paper.